Dataset schema (column name, type, min–max of value or length):

id             int64    39 – 79M
url            string   length 31 – 227
text           string   length 6 – 334k
source         string   length 1 – 150
categories     list     length 1 – 6
token_count    int64    3 – 71.8k
subcategories  list     length 0 – 30
7,348,730
https://en.wikipedia.org/wiki/Loculus%20%28architecture%29
Loculus (Latin, "little place"), plural loculi, is an architectural compartment or niche that houses a body, as in a catacomb, hypogeum, mausoleum or other place of entombment. In classical antiquity, the mouth of the loculus might be closed with a slab, plain, as in the Catacombs of Rome, or sculptural, as in the family tombs of ancient Palmyra. See also Kokh (tomb): sometimes translated as "loculus" Arcosolium: another niche-like tomb Glossary of architecture References Sources Architectural elements Death customs Burial monuments and structures
Loculus (architecture)
[ "Technology", "Engineering" ]
133
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
7,349,102
https://en.wikipedia.org/wiki/Allegany%20Ballistics%20Laboratory
Allegany Ballistics Laboratory (ABL), located in Rocket Center, West Virginia, is a diverse industrial complex employing some 1,000 people across . The facility is a member of the Federal Laboratory Consortium and is operated by Northrop Grumman (formerly Alliant Techsystems) under contract with the Naval Sea Systems Command (NAVSEA). Current operation The ABL facility is a manufacturer of advanced composite structures for the F-22 Raptor and other aerospace projects. ATK also operates 6 of 11 known advanced fiber placement machines. In addition, the site produces about 80 military products, including 30mm shells for Apache helicopters, training grenades, fuze-proximity sensors, mortars and warheads, and tank ammunition. Also on the site are the Robert C. Byrd Hilltop Office Complex and the Robert C. Byrd Institute for Advanced Flexible Manufacturing. At the Robert C. Byrd Complex on the hill, companies have rented space to do secure research, among them IBM (which recently acquired National Interest Security Company), which is digitizing data on hurricane cleanup, avian influenza, and weather records. The site also plays a significant role in continuity of government operations. History ABL was established in 1944 on the site of a former ammunition plant on land owned by the Army. After World War II, the plant was transferred to the Office of Scientific Research and Development and was involved in building propulsion devices and engines for the solid-rocket industry. Later in the decade, ownership of ABL was transferred to the Navy office of Naval Sea Systems Command. From 1946 it was operated by the Hercules Powder Company. In 1956, when it was producing Altair rocket stages for Vanguard rockets, ABL was described as "a subsidiary of the Navy operated by the Hercules Powder Company." The Navy now contracts out operation of the facility to ATK (Alliant Techsystems), a $3.4 billion corporation based in Edina, Minnesota. 
In 1998, ATK's Conventional Munitions Group was selected by Lockheed Martin Aeronautical Systems to produce the fiber-placed composite pivot shaft assembly for the F-22 Raptor air-dominance fighter. Work on the production program was performed at Alliant's automated fiber placement production facility at the Allegany Ballistics Laboratory before the production of F-22 aircraft ended in 2012. The fiber placement facility was constructed as part of a $177 million renovation and restoration program funded by the U.S. Naval Sea Systems Command (NAVSEA), which owns the Allegany Ballistics Laboratory. Local perception As for the ecological impact, it is believed the facility contributes greatly to the pollution of the adjacent North Branch Potomac River. While this claim is unsupported, the company does have numerous runoff sites. Also, the groundwater in the surrounding community has been verified to contain many contaminants, although actions have since been taken to reduce these contaminants as part of an ongoing monitoring process. Companies The following privately owned ventures are located on the ABL site: See also Barton Business Park North Branch Industrial Complex Upper Potomac Industrial Park References External links ATK: Alliant Techsystems Robert C. Byrd Institute FLC: Federal Laboratory Consortium FLC: Federal Laboratory Consortium: Mid-Atlantic National Interest Security Company Ballistics Science parks in the United States Business parks of the United States Buildings and structures in Mineral County, West Virginia Military installations in West Virginia Military Superfund sites Superfund sites in West Virginia Continuity of government in the United States
Allegany Ballistics Laboratory
[ "Physics" ]
705
[ "Applied and interdisciplinary physics", "Ballistics" ]
7,349,167
https://en.wikipedia.org/wiki/International%20Designator
The International Designator, also known as the COSPAR ID, is an international identifier assigned to artificial objects in space. It consists of the launch year, a three-digit launch number incremented sequentially within that year, and a code of up to three letters identifying each piece of a launch. In TLE format the first two digits of the year and the dash are dropped. For example, 1990-037A is the Space Shuttle Discovery on mission STS-31, which carried the Hubble Space Telescope (1990-037B) into space. This launch was the 37th known successful launch worldwide in 1990. The designation system has been generally known as the COSPAR system, named for the Committee on Space Research (COSPAR) of the International Council for Science. COSPAR subsumed the first designation system, devised at Harvard University. That system used letters of the Greek alphabet to designate artificial satellites, based on the scientific naming convention for natural satellites. For example, Sputnik 1 was designated 1957 Alpha 2. The launch vehicle, which was brighter in orbit, was designated 1957 Alpha 1: brighter objects in the same launch were given the lower integer number, and Alpha was assigned because it was the first launch of the year. The Harvard designation system continued to be used for satellites launched up to the end of 1962, when it was replaced with the modern system. The first satellite to receive a new-format designator was Luna E-6 No.2, 1963-001B, although some sources, including the NSSDC website, retroactively apply new-format designators to older satellites, even those no longer in orbit at the time of the system's introduction. Designators are assigned to objects by the United States Space Command, along with satellite catalog numbers, as the objects are discovered in space. 
The United Nations Office for Outer Space Affairs (UNOOSA) and the National Space Science Data Center (NSSDC), part of NASA, maintain two catalogs that provide additional information on the launchers and payloads associated with the designators. While UNOOSA uses the COSPAR ID, many NSSDC Master Catalog (NMC) entries are created before launches, so they are not always bound to a COSPAR ID. Spacecraft which do not complete an orbit of the Earth, for example launches which fail to achieve orbit, are not assigned IDs. Satellites launched from the International Space Station are assigned a COSPAR ID beginning with "1998-067", because the (first module of the) space station was launched in 1998. For example, the satellite GOMX-3, launched on an H-II Transfer Vehicle on August 19, 2015, from Tanegashima Space Center in Japan, is designated COSPAR ID 1998-067HA, because it first arrived at the International Space Station, from where it was later deployed. Notes See also Satellite Catalog Number References External links Online Index of Objects Launched into Outer Space USSTRATCOM Space-Track CelesTrak (a partial copy of Space-Track.org catalog) Small Satellite Debris Catalog Maintenance Issues Satellites Identifiers
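The designator format described above (four-digit year, three-digit launch number, piece letters, with the TLE form dropping the century digits and the dash) can be sketched as a small parser. The function names and the strict regular expression below are illustrative choices, not an official implementation.

```python
import re

def parse_cospar_id(designator: str):
    """Parse an International Designator such as '1990-037A' into
    (launch_year, launch_number, piece_code).  Illustrative sketch,
    not an official parser."""
    m = re.fullmatch(r"(\d{4})-(\d{3})([A-Z]{1,3})", designator)
    if not m:
        raise ValueError(f"not a valid COSPAR ID: {designator!r}")
    return int(m.group(1)), int(m.group(2)), m.group(3)

def to_tle_form(designator: str) -> str:
    """TLE form of the designator: drop the century digits and the dash."""
    year, number, piece = parse_cospar_id(designator)
    return f"{year % 100:02d}{number:03d}{piece}"
```

Under these assumptions, parse_cospar_id("1998-067HA") yields (1998, 67, "HA"), the ISS-derived designator mentioned above, and to_tle_form("1990-037A") gives "90037A".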
International Designator
[ "Astronomy" ]
635
[ "Satellites", "Outer space" ]
7,349,264
https://en.wikipedia.org/wiki/Weak%20n-category
In category theory, a weak n-category is a generalization of the notion of strict n-category where composition and identities are not strictly associative and unital, but only associative and unital up to coherent equivalence. This generalisation only becomes noticeable at dimensions two and above, where weak 2-, 3- and 4-categories are typically referred to as bicategories, tricategories, and tetracategories. The subject of weak n-categories is an area of ongoing research. History There is much work to determine what the coherence laws for weak n-categories should be. Weak n-categories have become the main object of study in higher category theory. There are basically two classes of theories: those in which the higher cells and higher compositions are realized algebraically (most notably Michael Batanin's theory of weak higher categories) and those in which more topological models are used (e.g. a higher category as a simplicial set satisfying some universality properties). In a terminology due to John Baez and James Dolan, an (n, k)-category is a weak n-category such that all h-cells for h > k are invertible. Some of the formalisms for (n, k)-categories are much simpler than those for general n-categories. In particular, several technically accessible formalisms of (infinity, 1)-categories are now known. Now the most popular such formalism centers on a notion of quasi-category; other approaches include a properly understood theory of simplicially enriched categories and the approach via Segal categories; a class of examples of stable (infinity, 1)-categories can be modeled (in the case of characteristic zero) also via pretriangulated A-infinity categories of Maxim Kontsevich. Quillen model categories are viewed as a presentation of an (infinity, 1)-category; however not all (infinity, 1)-categories can be presented via model categories. 
See also Bicategory Tricategory Tetracategory Infinity category Opetope Stabilization hypothesis External links n-Categories – Sketch of a Definition by John Baez Lectures on n-Categories and Cohomology by John Baez Tom Leinster, Higher operads, higher categories, math.CT/0305049 Jacob Lurie, Higher topos theory, math.CT/0608040, published version: pdf Higher category theory
Weak n-category
[ "Mathematics" ]
472
[ "Higher category theory", "Mathematical structures", "Category theory", "Category theory stubs" ]
5,615,223
https://en.wikipedia.org/wiki/Freezing%20drizzle
Freezing drizzle is drizzle that freezes on contact with the ground or an object at or near the surface. Its METAR code is FZDZ. Formation Although freezing drizzle and freezing rain are similar in that they both involve liquid precipitation above the surface in subfreezing temperatures and freeze on the surface, the mechanisms leading to their development are entirely different. Where freezing rain forms when frozen precipitation falls through a melting layer and turns liquid, freezing drizzle forms via the supercooled warm-rain process, in which cloud droplets coalesce until they become heavy enough to fall out of the cloud, but in subfreezing conditions. Despite this process taking place in a subfreezing environment, the liquid water will not freeze if the environmental temperature is above about −40 °C (−40 °F), via supercooling. If ice crystals are already present in this environment, the liquid droplets will freeze onto these crystals and be effectively removed before they can grow large enough to fall out of the cloud. As a result, freezing drizzle develops in shallow low-level stratus-type clouds where air saturation occurs entirely below the layer in which ice crystals can develop and grow. Effects When freezing drizzle accumulates on land, it creates an icy layer of glaze. Freezing drizzle alone does not generally result in significant ice accumulations due to its light, low-intensity nature, unlike its rain counterpart. However, even thin layers of slick ice deposited on roads as black ice can be very slippery and cause extremely hazardous conditions resulting in vehicle crashes. Freezing drizzle is extremely dangerous to aircraft in icing conditions, as the supercooled water droplets will freeze onto the airframe, degrading aircraft performance considerably. The loss of American Eagle Flight 4184 on October 31, 1994, has been attributed to ice buildup due to freezing drizzle aloft. See also Black ice Freezing rain References Precipitation Weather hazards
Freezing drizzle
[ "Physics" ]
395
[ "Weather", "Physical phenomena", "Weather hazards" ]
5,615,284
https://en.wikipedia.org/wiki/Bretschneider%27s%20formula
In geometry, Bretschneider's formula is a mathematical expression for the area of a general quadrilateral. It works on both convex and concave quadrilaterals, whether cyclic or not. The formula also works on crossed quadrilaterals provided that directed angles are used. History The German mathematician Carl Anton Bretschneider discovered the formula in 1842. The formula was also derived in the same year by the German mathematician Karl Georg Christian von Staudt. Formulation Bretschneider's formula is expressed as:

K = \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd \cos^2\left(\frac{\alpha+\gamma}{2}\right)}

Here, a, b, c, d are the sides of the quadrilateral, s = \frac{a+b+c+d}{2} is the semiperimeter, and \alpha and \gamma are any two opposite angles, since \cos^2\frac{\alpha+\gamma}{2} = \cos^2\frac{\beta+\delta}{2} as long as directed angles are used, so that \alpha+\beta+\gamma+\delta = 360^\circ or 720^\circ (when the quadrilateral is crossed). Proof Denote the area of the quadrilateral by K. Then we have

K = \tfrac{1}{2} ad \sin\alpha + \tfrac{1}{2} bc \sin\gamma.

Therefore

4K^2 = (ad)^2 \sin^2\alpha + (bc)^2 \sin^2\gamma + 2abcd \sin\alpha \sin\gamma.

The law of cosines implies that

a^2 + d^2 - 2ad\cos\alpha = b^2 + c^2 - 2bc\cos\gamma,

because both sides equal the square of the length of the diagonal BD. This can be rewritten as

\frac{(a^2+d^2-b^2-c^2)^2}{4} = (ad\cos\alpha - bc\cos\gamma)^2.

Adding this to the above formula for 4K^2 yields

4K^2 + \frac{(a^2+d^2-b^2-c^2)^2}{4} = (ad)^2 + (bc)^2 - 2abcd\cos(\alpha+\gamma).

Note that \cos(\alpha+\gamma) = 2\cos^2\frac{\alpha+\gamma}{2} - 1 (a trigonometric identity true for all \frac{\alpha+\gamma}{2}). Following the same steps as in Brahmagupta's formula, this can be written as

16K^2 = (a+b+c-d)(a+b-c+d)(a-b+c+d)(-a+b+c+d) - 16abcd\cos^2\frac{\alpha+\gamma}{2}.

Introducing the semiperimeter s = \frac{a+b+c+d}{2}, the above becomes

16K^2 = 16(s-a)(s-b)(s-c)(s-d) - 16abcd\cos^2\frac{\alpha+\gamma}{2},

and Bretschneider's formula follows after taking the square root of both sides:

K = \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd\cos^2\left(\frac{\alpha+\gamma}{2}\right)}.

The second form is given by using the cosine half-angle identity \cos^2\frac{\alpha+\gamma}{2} = \frac{1+\cos(\alpha+\gamma)}{2}, yielding

K = \sqrt{(s-a)(s-b)(s-c)(s-d) - \tfrac{1}{2}abcd\,\left(1+\cos(\alpha+\gamma)\right)}.

Emmanuel García has used the generalized half angle formulas to give an alternative proof. Related formulae Bretschneider's formula generalizes Brahmagupta's formula for the area of a cyclic quadrilateral, which in turn generalizes Heron's formula for the area of a triangle. The trigonometric adjustment in Bretschneider's formula for non-cyclicality of the quadrilateral can be rewritten non-trigonometrically in terms of the sides and the diagonals p and q to give

K = \frac{1}{4}\sqrt{4p^2q^2 - (b^2+d^2-a^2-c^2)^2}.

Notes References & further reading C. A. Bretschneider. Untersuchung der trigonometrischen Relationen des geradlinigen Viereckes. Archiv der Mathematik und Physik, Band 2, 1842, S. 225–261 (online copy, German) F. 
Strehlke: Zwei neue Sätze vom ebenen und sphärischen Viereck und Umkehrung des Ptolemaischen Lehrsatzes. Archiv der Mathematik und Physik, Band 2, 1842, S. 323-326 (online copy, German) External links Bretschneider's formula at proofwiki.org Bretschneider's Quadrilateral Area Formula & Brahmagupta's Formula at Dynamic Geometry Sketches, interactive geometry sketches. Theorems about quadrilaterals Area Articles containing proofs
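As a quick numerical check, Bretschneider's formula can be implemented directly. The function below is a minimal sketch (angles in radians, no input validation); the function name is an illustrative choice.

```python
import math

def bretschneider_area(a, b, c, d, alpha, gamma):
    """Area of a general quadrilateral with consecutive sides a, b, c, d
    and opposite angles alpha and gamma (in radians), via Bretschneider's
    formula.  Minimal sketch: inputs are not validated."""
    s = (a + b + c + d) / 2  # semiperimeter
    term = (s - a) * (s - b) * (s - c) * (s - d)
    term -= a * b * c * d * math.cos((alpha + gamma) / 2) ** 2
    return math.sqrt(term)
```

For a 3 × 4 rectangle (opposite angles both 90°, so the cosine correction term vanishes) the expression reduces to Brahmagupta's formula and gives an area of 12.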
Bretschneider's formula
[ "Physics", "Mathematics" ]
583
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Articles containing proofs", "Wikipedia categories named after physical quantities", "Area" ]
5,615,371
https://en.wikipedia.org/wiki/Acetabulum%20%28unit%29
In Ancient Roman measurement, the acetabulum was a measure of volume (fluid and dry) equivalent to the Greek (oxybaphon). It was one-fourth of the hemina and therefore one-eighth of the sextarius. It contained the weight in water of fifteen Attic drachmae. Used with some frequency by Pliny the Elder, in a 1952 translation the unit was judged to be equivalent to . However, other sources estimate a higher value of perhaps (see Ancient Roman units of measurement). References Units of volume Ancient Roman units of measurement
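The unit ratios stated above (one-fourth of the hemina, one-eighth of the sextarius) can be turned into a small arithmetic sketch. The absolute size of the sextarius is not given in this article; the ~546 ml figure below is a commonly cited modern estimate and should be treated as an assumption.

```python
# Roman volume ratios from the article:
#   1 acetabulum = 1/4 hemina = 1/8 sextarius.
# Only the ratios come from the text; the absolute value is assumed.
SEXTARIUS_ML = 546.0  # assumed modern estimate of the Roman sextarius

hemina_ml = SEXTARIUS_ML / 2       # 2 heminae to the sextarius
acetabulum_ml = SEXTARIUS_ML / 8   # 8 acetabula to the sextarius

assert acetabulum_ml == hemina_ml / 4  # one-fourth of the hemina, as stated
```

Under that assumed sextarius, the acetabulum works out to roughly 68 ml, in the same range as the modern estimates the article alludes to.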
Acetabulum (unit)
[ "Mathematics" ]
118
[ "Units of volume", "Quantity", "Units of measurement" ]
5,615,546
https://en.wikipedia.org/wiki/Interpersonal%20circumplex
The interpersonal circle or interpersonal circumplex is a model for conceptualizing, organizing, and assessing interpersonal behavior, traits, and motives. The interpersonal circumplex is defined by two orthogonal axes: a vertical axis (of status, dominance, power, ambitiousness, assertiveness, or control) and a horizontal axis (of agreeableness, compassion, nurturance, solidarity, friendliness, warmth, affiliation or love). In recent years, it has become conventional to identify the vertical and horizontal axes with the broad constructs of agency and communion. Thus, each point in the interpersonal circumplex space can be specified as a weighted combination of agency and communion. Character traits Placing a person near one of the poles of the axes implies that the person tends to convey clear or strong messages (of warmth, hostility, dominance or submissiveness). Conversely, placing a person at the midpoint of the agentic dimension implies the person conveys neither dominance nor submissiveness (and pulls neither dominance nor submissiveness from others). Likewise, placing a person at the midpoint of the communal dimension implies the person conveys neither warmth nor hostility (and pulls neither warmth nor hostility from others). The interpersonal circumplex can be divided into broad segments (such as fourths) or narrow segments (such as sixteenths), but currently most interpersonal circumplex inventories partition the circle into eight octants. As one moves around the circle, each octant reflects a progressive blend of the two axial dimensions. There exist a variety of psychological tests designed to measure these eight interpersonal circumplex octants. For example, the Interpersonal Adjective Scales (IAS; Wiggins, 1995) is a measure of interpersonal traits associated with each octant of the interpersonal circumplex. 
The Inventory of Interpersonal Problems (IIP; Horowitz, Alden, Wiggins, & Pincus, 2000) is a measure of problems associated with each octant of the interpersonal circumplex, whereas the Inventory of Interpersonal Strengths (IIS; Hatcher & Rogers, 2009) is a measure of strengths associated with each octant. The Circumplex Scales of Interpersonal Values (CSIV; Locke, 2000) is a 64-item measure of the value individuals place on interpersonal experiences associated with each octant of the interpersonal circumplex. The Person's Relating to Others Questionnaire (PROQ), the latest version being the PROQ3 is a 48-item measure developed by the British doctor John Birtchnell. Finally, the Impact Message Inventory-Circumplex (IMI; Kiesler, Schmidt, & Wagner, 1997) assesses the interpersonal dispositions of a target person, not by asking the target person directly, but by assessing the feelings, thoughts, and behaviors that the target evokes in another person. Since interpersonal dispositions are key features of most personality disorders, interpersonal circumplex measures can be useful tools for identifying or differentiating personality disorders (Kiesler, 1996; Leary, 1957; Locke, 2006). Applications to rapport Birtchnell argued that each end of the two axes of the interpersonal circumplex can be manifested as either positive and adaptive interpersonal behaviour or as negative and maladaptive interpersonal behaviour. Working in psychotherapy, he explored whether negative/maladaptive behaviour could be reduced over time through therapy. Laurence Alison, Emily Alison and colleagues have applied the same principle to interrogative interviewing and linked it to the notion of rapport. They propose that when interviewing terrorist suspects, interviewers who use positive/adaptive behaviours in a versatile manner will foster greater rapport with their suspects and will in turn be able to elicit enhanced information intelligence and evidence from them.  
They developed the ORBIT (Observing Rapport-Based Interpersonal Techniques) coding system to measure this. Alison and Alison have also applied the interpersonal circumplex, with its adaptive and maladaptive traits, to building rapport in everyday interaction, such as between parents and children and between work colleagues. They call the circumplex the Animal Circle and use animals to represent the ends of the two axes: Good/bad Lion = High control; Good/bad Mouse = Low control; Good/bad Monkey = High agreeableness; Good/bad T-Rex = Low agreeableness. Again, they argue for the importance of adaptive behaviour and of versatility in moving along and across the two axes in order to build and maintain rapport. Helen Spencer-Oatey and her colleagues have applied the same principles to leadership. They call the interpersonal circumplex the Interaction Compass, arguing that it is helpful for guiding leadership behaviour in contexts of global diversity, where versatility and flexing are crucial for maintaining positive relationships with subordinates. Spencer-Oatey and Lazidou also apply it to a range of workplace relationships and issues in their TRIPS rapport management model. TRIPS is an acronym, with T standing for Triggers – sensitivities that can enhance or undermine rapport. The two axes of the interpersonal circumplex are identified as two of the rapport Triggers. History Originally coined the Leary Circumplex or Leary Circle after Timothy Leary, it is defined as "a two-dimensional representation of personality organized around two major axes". In the 20th century, there were a number of efforts by personality psychologists to create comprehensive taxonomies to describe the most important and fundamental traits of human nature. Leary would later become famous for his controversial LSD experiments at Harvard. His circumplex, developed in 1957, is a circular continuum of personality formed from the intersection of two base axes: Power and Love. 
The opposing sides of the power axis are dominance and submission, while the opposing sides of the love axis are love and hate (Wiggins, 1996). Leary argued that all other dimensions of personality can be viewed as a blending of these two axes. For example, a person who is stubborn and inflexible in their personal relationships might graph their personality somewhere on the arc between dominance and love. However, a person who exhibits passive–aggressive tendencies might find themselves best described on the arc between submission and hate. The main idea of the Leary Circumplex is that each and every human trait can be mapped as a vector coordinate within this circle. Furthermore, the Leary Circumplex also represents a kind of bull's eye of healthy psychological adjustment. Theoretically speaking, the most well-adjusted person on the planet would have their personality mapped at the exact center of the circumplex, right at the intersection of the two axes, while individuals exhibiting extremes in personality would be located on the circumference of the circle. The Leary Circumplex offers three major benefits as a taxonomy. It offers a map of interpersonal traits within a geometric circle. It allows for comparison of different traits within the system. It provides a scale of healthy and unhealthy expressions of each trait. See also Circumplex model of group tasks Interpersonal reflex Lorna Smith Benjamin – creator of the similar Structural Analysis of Social Behavior (SASB) circumplex model Personality psychology Unmitigated communion References Cited General Hatcher, R.L., & Rogers, D.T. (2009). Development and validation of a measure of interpersonal strengths: The Inventory of Interpersonal Strengths. Psychological Assessment, 21, 544-569. Horowitz, L.M. (2004). Interpersonal Foundations of Psychopathology. Washington, DC: American Psychological Association. Horowitz, L.M., Alden, L.E., Wiggins, J.S., & Pincus, A.L. (2000). Inventory of Interpersonal Problems Manual. 
Odessa, FL: The Psychological Corporation. Kiesler, D.J. (1996). Contemporary Interpersonal Theory and Research: Personality, psychopathology and psychotherapy. New York: Wiley. Kiesler, D.J., Schmidt, J.A. & Wagner, C.C. (1997). A circumplex inventory of impact messages: An operational bridge between emotional and interpersonal behavior. In R. Plutchik & H.R. Conte (Eds.), Circumplex models of personality and emotions (pp. 221–244). Washington, DC: American Psychological Association. Leary, T. (1957). Interpersonal Diagnosis of Personality. New York: Ronald Press. Locke, K.D. (2000). Circumplex Scales of Interpersonal Values: Reliability, validity, and applicability to interpersonal problems and personality disorders. Journal of Personality Assessment, 75, 249–267. Locke, K.D. (2006). Interpersonal circumplex measures. In S. Strack (Ed.), Differentiating normal and abnormal personality (2nd Ed., pp. 383–400). New York: Springer. Interpersonal relationships Psychological models Timothy Leary
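The circumplex geometry described above — each point a weighted blend of agency and communion, with the circle cut into eight 45° octants — can be sketched numerically. The angular convention used below (0° = warm/communal, 90° = dominant/agentic) and the octant numbering are illustrative assumptions, not a fixed standard from the literature.

```python
import math

def circumplex_coordinates(angle_deg: float, intensity: float = 1.0):
    """Decompose a circumplex position into (agency, communion)
    components, i.e. the weighted combination of the two axes."""
    rad = math.radians(angle_deg)
    agency = intensity * math.sin(rad)     # vertical axis: dominance ... submission
    communion = intensity * math.cos(rad)  # horizontal axis: warmth ... hostility
    return agency, communion

def octant(angle_deg: float) -> int:
    """Assign one of eight 45-degree octants, centred on 0, 45, ..., 315 degrees."""
    return int(((angle_deg + 22.5) % 360) // 45)
```

A point at 90° maps to pure agency (dominance) with zero communion, and a point at intensity 0 — the midpoint of the space — conveys neither pole of either axis, matching the description of the midpoints above.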
Interpersonal circumplex
[ "Biology" ]
1,904
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
5,616,058
https://en.wikipedia.org/wiki/Ceteareth
The INCI names ceteareth-n (where n is a number) refer to polyoxyethylene ethers of a mixture of high molecular mass saturated fatty alcohols, mainly cetyl alcohol (m = 15) and stearyl alcohol (m = 17). The number n indicates the average number of ethylene oxide residues in the polyoxyethylene chain. These compounds are non-ionic surfactants that attract both water and oil at the same time, and are frequently used as emulsifiers in soaps and cosmetics. List of ceteareth compounds Ceteareth-2 Ceteareth-3 Ceteareth-4 Ceteareth-5 Ceteareth-6 Ceteareth-7 Ceteareth-8 Ceteareth-9 Ceteareth-10 Ceteareth-11 Ceteareth-12 Ceteareth-13 Ceteareth-15 Ceteareth-16 Ceteareth-17 Ceteareth-18 Ceteareth-20 (CAS # 68439-49-6) Ceteareth-22 Ceteareth-23 Ceteareth-25 Ceteareth-27 Ceteareth-28 Ceteareth-29 Ceteareth-30 Ceteareth-33 Ceteareth-34 Ceteareth-40 Ceteareth-50 Ceteareth-55 Ceteareth-60 Ceteareth-80 Ceteareth-100 References Ethers Cosmetics chemicals
Ceteareth
[ "Chemistry" ]
332
[ "Organic compounds", "Functional groups", "Ethers" ]
5,616,154
https://en.wikipedia.org/wiki/Toilet%20brush
A toilet brush is a tool for cleaning a toilet bowl. Generally the toilet brush is used with toilet cleaner or bleach. The toilet brush can be used to clean the upper area of the toilet, around the bowl. However, it cannot be used to clean very far into the toilet's U-bend and should absolutely not be used to clean the toilet seat. In many cultures it is considered impolite to clean away biological debris without the use of chemical toilet cleaning products, as this can leave residue on the bristles. By contrast, others consider it impolite not to clean away biological debris immediately using the toilet brush. A typical toilet brush consists of a hard bristled end, usually with a rounded shape and a long handle. Today toilet brushes are commonly made of plastic, but were originally made of wood with pig bristles or from the hair of horses, oxen, squirrels and badgers. The brush is typically stored in a holder, but in some cases completely hidden in a tube. An electric toilet brush is a little different from a normal toilet brush. The bristles are fastened on the rotor of a motor which works similarly to an electric toothbrush. The power supply is attached without any metal contact via electromagnetic induction. In recent years, there has been a general shift in design with a new emphasis on ergonomically designed brushes. Further design enhancements have included innovative holders that snap shut around the bristled end, thereby preventing the release of smells, germs and other unpleasantries. Further development of the traditional toilet brush focuses on the risk of germ incubation within the brush holder. A toilet brush has been patented which introduces a reservoir of anti-bacterial fluid, allowing the brush to be dipped and sanitized after each use. The first successful artificial Christmas tree was made from brush bristles by Addis using the same machinery used to manufacture its toilet brushes. 
The trees were made from the same animal-hair bristles used in the brushes, except they were dyed green. Recent developments In recent years, many new products aiming to reinvent the traditional toilet brush have emerged on the market. The LooBlade is a toilet brush with an 8-blade silicone head and hydrophobic properties that sheds water and dries quickly. It is claimed to be able to kill 99.9% of germs during and after cleaning. It was invented by Garry Stewart. The Loogun is an alternative to the toilet brush. It is a pressure washer that sprays a powerful jet of clean water that washes away marks both above and below the water line. The device never touches the toilet, so it stays hygienic. The Handi Sani is a self-cleaning toilet brush. It works by attaching the Handi Sani brush holder to the side of the tank with one small hose running into the tank to take advantage of clean water, and another hose running into the toilet bowl for proper draining. The brush is placed inside the Handi Sani so that when the toilet is flushed, the attachment fills up with clean water while simultaneously draining the dirty water into the toilet bowl. In popular culture The toilet brush became one of the symbols of the widespread Russian protests in support of Alexei Navalny that took place in January 2021. An investigation led by the Anti-Corruption Foundation suggested that each of the toilet brushes at the alleged personal residence of President Vladimir Putin cost about €700. See also Automatic self-clean toilet seat Bidet Shit stick Toilet (room) Washlet Xylospongium References Toilets Products introduced in 1932
Toilet brush
[ "Biology" ]
732
[ "Excretion", "Toilets" ]
5,617,513
https://en.wikipedia.org/wiki/Regeneron%20Pharmaceuticals
Regeneron Pharmaceuticals, Inc. is an American biotechnology company headquartered in Westchester County, New York. The company was founded in 1988. Originally focused on neurotrophic factors and their regenerative capabilities, which gave rise to its name, the company branched out into the study of both cytokine and tyrosine kinase receptors, which led to its first product, a VEGF trap. Company history The company was founded by CEO Leonard Schleifer and scientist George Yancopoulos in 1988. Regeneron has developed aflibercept, a VEGF inhibitor, and rilonacept, an interleukin-1 blocker. VEGF is a protein that normally stimulates the growth of blood vessels, and interleukin-1 is a protein that is normally involved in inflammation. On March 26, 2012, Bloomberg announced that Sanofi and Regeneron were developing a new drug that would help reduce cholesterol up to 72% more than its competitors. The new drug would target the PCSK9 gene. In July 2015, the company announced a new global collaboration with Sanofi to discover, develop, and commercialize new immuno-oncology drugs, which could generate more than $2 billion for Regeneron, with $640 million upfront, $750 million for proof-of-concept data, and $650 million from the development of REGN2810. REGN2810 was later named cemiplimab. In 2019, Regeneron Pharmaceuticals was named the 7th-best publicly listed company of the 2010s, with a total return of 1,457%. Regeneron Pharmaceuticals was home to the two highest-paid pharmaceutical executives as of 2020. In October 2017, Regeneron made a deal with the Biomedical Advanced Research and Development Authority (BARDA) under which the U.S. government would fund 80% of the costs for Regeneron to develop and manufacture antibody-based medications, which subsequently, in 2020, included its COVID-19 treatments, while Regeneron would retain the right to set prices and control production. This deal was criticized in The New York Times. 
Such deals are not unusual for routine drug development in the American pharmaceutical market. In 2019, the company was added to the Dow Jones Sustainability World Index. In May 2020, Regeneron announced it would repurchase approximately 19.2 million of its shares, held directly by Sanofi, for around $5 billion. Prior to the transaction, Sanofi held 23.2 million Regeneron shares. In April 2022, the business announced it would acquire Checkmate Pharmaceuticals for around $250 million, enhancing its number of immuno-oncology drugs. In August 2023, Regeneron announced it would acquire Decibel Therapeutics. In December 2023, Regeneron acquired an Avon Products property in Suffern, New York to be used for cold storage and research and development laboratories. In April 2024, the company acquired 2seventy Bio. Experimental treatment for COVID-19 On February 4, 2020, the U.S. Department of Health and Human Services, which already worked with Regeneron, announced that Regeneron would pursue monoclonal antibodies to fight COVID-19. In July 2020, under Operation Warp Speed, Regeneron was awarded a $450 million government contract to manufacture and supply its experimental treatment REGN-COV2, an artificial "antibody cocktail" which was then undergoing clinical trials for its potential both to treat people with COVID-19 and to prevent SARS-CoV-2 coronavirus infection. The $450 million came from the Biomedical Advanced Research and Development Authority (BARDA), the DoD Joint Program Executive Office for Chemical, Biological, Radiological and Nuclear Defense, and Army Contracting Command. Regeneron expected to produce 70,000–300,000 treatment doses or 420,000–1,300,000 prevention doses. "By funding this manufacturing effort, the federal government will own the doses expected to result from the demonstration project," the government said in its July 7 news release. 
Regeneron similarly said in its own news release that same day that "the government has committed to making doses from these lots available to the American people at no cost and would be responsible for their distribution," noting that this depended on the government granting emergency use authorization or product approval. The California-based laboratory FOMAT is part of the clinical investigation through its doctors Augusto and Nicholas Focil. In October 2020, when U.S. President Donald Trump was infected with COVID-19 and taken to Walter Reed National Military Medical Center in Bethesda, Maryland, he was administered REGN-COV2. His doctors obtained it from Regeneron via a compassionate use request (as clinical trials had not yet been completed and the drug had not yet been approved by the US Food and Drug Administration (FDA)). On October 7, Trump posted a five-minute video to Twitter reasserting that this drug should be "free." That same day, Regeneron filed with the FDA for emergency use authorization. In the filing, it specified that it currently had 50,000 doses and that it expected to reach a total of 300,000 doses "within the next few months." The FDA granted emergency use authorization in November 2020. Marketed products Arcalyst (rilonacept) is used for specific, rare autoinflammatory conditions. It was approved by the FDA in February 2008. Eylea (aflibercept injection) was approved by the U.S. Food and Drug Administration (FDA) in November 2011 to treat a common cause of blindness in the elderly. Eylea is reported to cost $11,000 per year for each eye treated. Zaltrap (aflibercept injection) is used for metastatic colorectal cancer and was approved by the FDA in August 2012. 
Praluent (alirocumab) is indicated as an adjunct to diet and maximally tolerated statin therapy for the treatment of adults with heterozygous familial hypercholesterolemia or clinical atherosclerotic cardiovascular disease (ASCVD) who require additional lowering of low-density lipoprotein (LDL) cholesterol. It was approved by the FDA in July 2015 and is reported to cost $4,500 to $8,000 per year. Dupixent (dupilumab injection) is for the treatment of atopic dermatitis in adolescent and adult patients. It was approved by the FDA in March 2017 and is reported to cost $37,000 per year. Kevzara (sarilumab injection) is an interleukin-6 (IL-6) receptor antagonist for the treatment of adults with rheumatoid arthritis, approved by the FDA in May 2017. Trials commenced in March 2020 to evaluate the effectiveness of Kevzara in the treatment of COVID-19. Libtayo (cemiplimab injection) is a monoclonal antibody targeting the PD-1 pathway as a checkpoint inhibitor, for the treatment of people with metastatic cutaneous squamous cell carcinoma (cSCC) or locally advanced cSCC who are not candidates for curative surgery or curative radiation. Libtayo was approved by the FDA in September 2018. Inmazeb (atoltivimab/maftivimab/odesivimab) is a drug made of three antibodies, developed to treat the deadly Ebola virus. In October 2020, the U.S. Food and Drug Administration (FDA) approved it with an indication for the treatment of infection caused by Zaire ebolavirus. Veopoz (pozelimab-bbfg) is a fully human monoclonal antibody targeting complement factor C5, a protein involved in complement system activation. In August 2023, it was approved by the FDA for children and adults with CHAPLE disease, or CD55-deficient protein-losing enteropathy. Technology platforms Trap Fusion Proteins: Regeneron's novel and patented Trap technology creates high-affinity product candidates for many types of signaling molecules, including growth factors and cytokines. 
The Trap technology involves fusing two distinct fully human receptor components and a fully human immunoglobulin-G constant region. Fully Human Monoclonal Antibodies: Regeneron has developed a suite (VelociSuite) of patented technologies, including VelocImmune and VelociMab, that allow Regeneron scientists to determine the best targets for therapeutic intervention and rapidly generate high-quality, fully human antibody drug candidates addressing these targets. Financial performance Key people The founders Leonard Schleifer and George Yancopoulos are reported to hold $1.3 billion and $900 million in company stock, respectively. Both are from Queens, New York. Schleifer was formerly a professor of medicine at Weill Cornell Medical School. Yancopoulos was an MD/PhD student and post-doctoral fellow at Columbia University. Yancopoulos was involved in each drug's development. See also Biotech and pharmaceutical companies in the New York metropolitan area Regeneron Science Talent Search References External links 1988 establishments in New York (state) American companies established in 1988 Biotechnology companies established in 1988 Biotechnology companies of the United States Companies based in Westchester County, New York Health care companies based in New York (state) Life sciences industry Companies associated with the COVID-19 pandemic Pharmaceutical companies established in 1988 Pharmaceutical companies of the United States
Regeneron Pharmaceuticals
[ "Biology" ]
1,970
[ "Life sciences industry" ]
5,617,574
https://en.wikipedia.org/wiki/Neurotrophic%20factors
Neurotrophic factors (NTFs) are a family of biomolecules – nearly all of which are peptides or small proteins – that support the growth, survival, and differentiation of both developing and mature neurons. Most NTFs exert their trophic effects on neurons by signaling through tyrosine kinases, usually a receptor tyrosine kinase. In the mature nervous system, they promote neuronal survival, induce synaptic plasticity, and modulate the formation of long-term memories. Neurotrophic factors also promote the initial growth and development of neurons in the central nervous system and peripheral nervous system, and they are capable of regrowing damaged neurons in test tubes and animal models. Some neurotrophic factors are also released by the target tissue in order to guide the growth of developing axons. Most neurotrophic factors belong to one of three families: (1) neurotrophins, (2) glial cell-line derived neurotrophic factor family ligands (GFLs), and (3) neuropoietic cytokines. Each family has its own distinct cell signaling mechanisms, although the cellular responses elicited often do overlap. Currently, neurotrophic factors are being intensely studied for use in bioartificial nerve conduits because they are necessary in vivo for directing axon growth and regeneration. In studies, neurotrophic factors are normally used in conjunction with other techniques such as biological and physical cues created by the addition of cells and specific topographies. The neurotrophic factors may or may not be immobilized to the scaffold structure, though immobilization is preferred because it allows for the creation of permanent, controllable gradients. In some cases, such as neural drug delivery systems, they are loosely immobilized such that they can be selectively released at specified times and in specified amounts. 
List of neurotrophic factors Although more information is being discovered about neurotrophic factors, their classification is based on different cellular mechanisms and they are grouped into three main families: the neurotrophins, the CNTF family, and the GDNF family. Neurotrophins Brain-derived neurotrophic factor Brain-derived neurotrophic factor (BDNF) is structurally similar to NGF, NT-3, and NT-4/5, and shares the TrkB receptor with NT-4. The brain-derived neurotrophic factor/TrkB system promotes thymocyte survival, as studied in the thymus of mice. Other experiments suggest BDNF is more important and necessary for neuronal survival than other factors; however, the compensatory mechanism involved is still not known. Specifically, BDNF promotes survival of dorsal root ganglion neurons. Even when bound to a truncated TrkB, BDNF still shows growth and developmental roles. Without BDNF (homozygous (-/-)), mice do not survive past three weeks. During development, BDNF has important regulatory roles in the development of the visual cortex, enhancing neurogenesis, and improving learning and memory. Specifically, BDNF acts within the hippocampus. Studies have shown that corticosterone treatment reduces, and adrenalectomy upregulates, hippocampal BDNF expression. Consistent between human and animal studies, BDNF levels are decreased in those with untreated major depression. However, the correlation between BDNF levels and depression is controversial. Nerve growth factor Nerve growth factor (NGF) uses the high-affinity receptor TrkA to promote myelination and the differentiation of neurons. Studies have shown that dysregulation of NGF causes hyperalgesia and pain. NGF production is highly correlated to the extent of inflammation. Even though it is clear that exogenous administration of NGF helps decrease tissue inflammation, the molecular mechanisms are still unknown. 
Moreover, blood NGF levels are increased in times of stress, during immune disease, and with asthma or arthritis, amongst other conditions. Neurotrophin-3 Whereas neurotrophic factors within the neurotrophin family commonly signal through a protein tyrosine kinase receptor (Trk), Neurotrophin-3 (NT-3) has a unique receptor, TrkC. In fact, the discovery of the different receptors helped refine scientists' understanding and classification of NT-3. NT-3 does share similar properties with other members of this class, and is known to be important in neuronal survival. The NT-3 protein is found within the thymus, spleen, and intestinal epithelium, but its role in the function of each organ is still unknown. Neurotrophin-4 CNTF family The CNTF family of neurotrophic factors includes ciliary neurotrophic factor (CNTF), leukemia inhibitory factor (LIF), interleukin-6 (IL-6), prolactin, growth hormone, leptin, interferons (e.g., interferon-α), and oncostatin M. Ciliary neurotrophic factor Ciliary neurotrophic factor affects embryonic motor neurons, dorsal root ganglion sensory neurons, ciliary neurons, and hippocampal neurons. It is structurally related to leukemia inhibitory factor (LIF), interleukin 6 (IL-6), and oncostatin M (OSM). CNTF prevents degeneration of motor neurons in rats and mice, which increases the survival time and motor function of the mice. These results suggest exogenous CNTF could be used as a therapeutic treatment for human degenerative motor neuron diseases. It also has unexpected leptin-like characteristics, as it causes weight loss. GDNF family The GDNF family of ligands includes glial cell line-derived neurotrophic factor (GDNF), artemin, neurturin, and persephin. Glial cell line-derived neurotrophic factor Glial cell line-derived neurotrophic factor (GDNF) was originally detected as a survival promoter derived from a glioma cell line. Later studies determined that GDNF uses a receptor tyrosine kinase and a high-affinity ligand-binding co-receptor, GFRα. 
GDNF has an especially strong affinity for dopaminergic (DA) neurons. Specifically, studies have shown GDNF plays a protective role against MPTP toxicity in DA neurons. It has also been detected in motor neurons of embryonic rats and is suggested to aid development and to reduce the effects of axotomy. Artemin Neurturin Persephin Ephrins The ephrins are a family of neurotrophic factors that signal through Eph receptors, a class of receptor tyrosine kinases; the family of ephrins includes ephrin A1, A2, A3, A4, A5, B1, B2, and B3. EGF and TGF families The EGF and TGF families of neurotrophic factors are composed of epidermal growth factor, the neuregulins, transforming growth factor alpha (TGFα), and transforming growth factor beta (TGFβ). They signal through receptor tyrosine kinases and serine/threonine protein kinases. Other neurotrophic factors Several other biomolecules that have been identified as neurotrophic factors include: glia maturation factor, insulin, insulin-like growth factor 1 (IGF-1), vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF), platelet-derived growth factor (PDGF), pituitary adenylate cyclase-activating peptide (PACAP), interleukin-1 (IL-1), interleukin-2 (IL-2), interleukin-3 (IL-3), interleukin-5 (IL-5), interleukin-8 (IL-8), macrophage colony-stimulating factor (M-CSF), granulocyte-macrophage colony-stimulating factor (GM-CSF), and neurotactin. References Neurochemistry
Neurotrophic factors
[ "Chemistry", "Biology" ]
1,776
[ "Biochemistry", "Neurotrophic factors", "Neurochemistry", "Signal transduction" ]
5,617,659
https://en.wikipedia.org/wiki/Bromine%20monochloride
Bromine monochloride, also called bromine(I) chloride, bromochloride, and bromine chloride, is an interhalogen inorganic compound with chemical formula BrCl. It is a very reactive golden yellow gas with boiling point 5 °C and melting point −66 °C. Its CAS number is 13863-41-7, and its EINECS number is 237-601-4. It is a strong oxidizing agent. Its molecular structure in the gas phase was determined by microwave spectroscopy; the Br-Cl bond has a length of re = 2.1360376(18) Å. Its crystal structure was determined by single crystal X-ray diffraction; the bond length in the solid state is 2.179(2) Å and the shortest intermolecular interaction is r(Cl···Br) = 3.145(2) Å. Uses Bromine monochloride is used in analytical chemistry in determining low levels of mercury, to quantitatively oxidize mercury in the sample to Hg(II) state. A common use of bromine monochloride is as an algaecide, fungicide, and disinfectant of industrial recirculating cooling water systems. Addition of bromine monochloride is used in some types of Li-SO2 batteries to increase voltage and energy density. See also List of highly toxic gases Interhalogen compounds References Chlorides Bromine(I) compounds Interhalogen compounds Diatomic molecules Fungicides Pesticides Disinfectants Oxidizing agents Gases with color
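As a quick consistency check on the microwave-spectroscopy bond length quoted above, the rigid-rotor rotational constant of the ⁷⁹Br³⁵Cl isotopologue can be estimated from r_e. This is a minimal sketch: the isotope masses and physical constants are standard values assumed here, not taken from the source.

```python
import math

H = 6.62607015e-34        # Planck constant, J*s
C_CM = 2.99792458e10      # speed of light, cm/s
AMU = 1.66053906660e-27   # kg per atomic mass unit

m_br, m_cl = 78.9183376, 34.96885268        # 79Br and 35Cl atomic masses (u)
mu = (m_br * m_cl) / (m_br + m_cl) * AMU    # reduced mass, kg
r = 2.1360376e-10                           # bond length r_e from the text, m

# Rigid-rotor rotational constant B = h / (8 pi^2 c mu r^2), in cm^-1
B = H / (8 * math.pi**2 * C_CM * mu * r**2)
print(f"B = {B:.4f} cm^-1")   # on the order of 0.15 cm^-1
```

The rotational constant is what a microwave spectrum measures directly; the bond length reported in the article is obtained by inverting this relation.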
Bromine monochloride
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
328
[ "Fungicides", "Chlorides", "Pesticides", "Inorganic compounds", "Redox", "Toxicology", "Molecules", "Interhalogen compounds", "Oxidizing agents", "Salts", "Diatomic molecules", "Biocides", "Matter" ]
5,617,879
https://en.wikipedia.org/wiki/Aziridines
In organic chemistry, aziridines are organic compounds containing the aziridine functional group, a three-membered heterocycle with one amine (NH) and two methylene bridges (CH2). The parent compound is aziridine (or ethylene imine), with molecular formula C2H5N. Several drugs feature aziridine rings, including mitomycin C, porfiromycin, and azinomycin B (carzinophilin). Structure The bond angles in aziridine are approximately 60°, considerably less than the normal hydrocarbon bond angle of 109.5°, which results in angle strain, as in the comparable cyclopropane and ethylene oxide molecules. A banana bond model explains bonding in such compounds. Aziridine is less basic than acyclic aliphatic amines, with a pKa of 7.9 for the conjugate acid, due to the increased s character of the nitrogen free electron pair. Angle strain in aziridine also increases the barrier to nitrogen inversion. This barrier height permits the isolation of separate invertomers, for example the cis and trans invertomers of N-chloro-2-methylaziridine. Synthesis Several routes have been developed for the synthesis of aziridines (aziridination). Cyclization of haloamines and amino alcohols An amine functional group displaces the adjacent halide in an intramolecular nucleophilic substitution reaction to generate an aziridine. The parent aziridine is produced industrially from aminoethanol via two related routes. The Nippon Shokubai process requires an oxide catalyst and high temperatures to effect the dehydration. In the Wenker synthesis, the aminoethanol is converted to the sulfate ester, which undergoes base-induced sulfate elimination. Nitrene addition Nitrene addition to alkenes is a well-established method for the synthesis of aziridines. Photolysis or thermolysis of organic azides are good ways to generate nitrenes. Nitrenes can also be prepared in situ from iodosobenzene diacetate and sulfonamides, or the ethoxycarbonylnitrene from the N-sulfonyloxy precursor. 
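The reduced basicity noted in the Structure section (conjugate-acid pKa 7.9) can be made concrete with the Henderson–Hasselbalch relation. A minimal sketch follows; the "typical aliphatic amine" pKa of about 10.7 is an assumed comparison value, not a figure from the text.

```python
def fraction_protonated(pka: float, ph: float) -> float:
    """[BH+] / ([B] + [BH+]) for a base whose conjugate acid has the given pKa."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for name, pka in [("aziridine", 7.9), ("typical aliphatic amine", 10.7)]:
    # At physiological pH 7.4, aziridine is only partially protonated (~76%),
    # whereas an ordinary aliphatic amine is essentially fully protonated.
    print(f"{name}: {fraction_protonated(pka, ph=7.4):.0%} protonated at pH 7.4")
```

This difference in protonation state is one practical consequence of the increased s character of the nitrogen lone pair described above.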
From triazolines, epoxides, and oximes Thermolysis or photolysis of triazolines expels nitrogen, producing an aziridine. In the Blum-Ittah aziridine synthesis, sodium azide opens an epoxide, followed by reduction of the azide with triphenylphosphine accompanied by expulsion of nitrogen gas. Another method involves the ring-opening reaction of an epoxide with amines, followed by ring closing with the Mitsunobu reaction. The Hoch-Campbell ethylenimine synthesis involves the reaction of certain oximes with Grignard reagents, which affords aziridines. From alkenes using DPH Aziridines are obtained by treating a mono-, di-, tri- or tetra-substituted alkene (olefin) with DPH in the presence of rhodium catalysts: alkene + DPH → aziridine (catalyzed by Rh2(CO2R)4). For instance, 2-phenyl-3-methylaziridine can be synthesized by this method and then converted by a ring-opening reaction to (D)- and (L)-amphetamine (the two active ingredients in Adderall). From α-chloroimines The De Kimpe aziridine synthesis allows for the generation of aziridines by reacting an α-chloroimine with a nucleophile, such as hydride, cyanide, or a Grignard reagent. From 2-azido alcohols 2-azido alcohols can be converted into aziridines with the use of trialkyl phosphines such as trimethylphosphine or tributylphosphine. Reactions Nucleophilic ring opening Aziridines are reactive substrates in ring-opening reactions with many nucleophiles due to their ring strain. Alcoholysis and aminolysis are essentially the reverse of the cyclization reactions. Carbon nucleophiles such as organolithium reagents and organocuprates are also effective. One application of a ring-opening reaction in asymmetric synthesis is that of trimethylsilylazide with an asymmetric ligand in scheme 2 in an organic synthesis of oseltamivir. 1,3-dipole formation Certain N-substituted aziridines with electron-withdrawing groups on both carbons form azomethine ylides in an electrocyclic thermal or photochemical ring-opening reaction. 
These ylides can be trapped with a suitable dipolarophile in a 1,3-dipolar cycloaddition. When the N-substituent is an electron-withdrawing group such as a tosyl group, the carbon-nitrogen bond breaks, forming another zwitterion. This reaction type requires a Lewis acid catalyst such as boron trifluoride. In this way 2-phenyl-N-tosylaziridine reacts with alkynes, nitriles, ketones and alkenes. Certain 1,4-dipoles form from azetidines. Other Lewis acids, such as B(C6F5)3, can induce decomposition of the ring to a carbocation and a linear azanide, which then attack unsaturated moieties in tandem. Oxidation to the N-oxide instead induces nitroso compound extrusion, leaving an olefin. Safety As electrophiles, aziridines are subject to attack and ring-opening by endogenous nucleophiles such as nitrogenous bases in DNA base pairs, resulting in potential mutagenicity. The International Agency for Research on Cancer (IARC) classifies aziridine compounds as possibly carcinogenic to humans (IARC Group 2B). In making the overall evaluation, the IARC Working Group took into consideration that aziridine is a direct-acting alkylating agent, which is mutagenic in a wide range of test systems and forms DNA adducts that are promutagenic. The features that are responsible for their mutagenicity are also relevant to their beneficial medicinal properties. See also Binary ethylenimine, a dimeric form of aziridine References Functional groups IARC Group 2B carcinogens
Aziridines
[ "Chemistry" ]
1,388
[ "Functional groups" ]
5,618,154
https://en.wikipedia.org/wiki/Earthquake%20light
An earthquake light, also known as earthquake lightning or an earthquake flash, is a luminous optical phenomenon that appears in the sky at or near areas of tectonic stress, seismic activity, or volcanic eruptions. There is no broad consensus as to the causes of the phenomenon (or phenomena) involved. The phenomenon differs from disruptions to electrical grids – such as arcing power lines – which can produce bright flashes as a result of ground shaking or hazardous weather conditions. Appearance One of the first records of earthquake lights is from the 869 Jōgan earthquake, described as "strange lights in the sky" in Nihon Sandai Jitsuroku. The lights are reported to appear while an earthquake is occurring, although there are reports of lights before or after earthquakes, such as reports concerning the 1975 Kalapana earthquake. They are reported to have shapes similar to those of the auroras, with a white to bluish hue, but occasionally they have been reported to have a wider color spectrum. The luminosity is reported to be visible for several seconds, but has also been reported to last for tens of minutes. Accounts of the viewable distance from the epicenter vary: in the 1930 Idu earthquake, lights were reported up to from the epicenter. Earthquake lights were reportedly spotted in Tianshui, Gansu, approximately north-northeast of the 2008 Sichuan earthquake's epicenter. During the 2003 Colima earthquake in Mexico, colorful lights were seen in the skies for the duration of the earthquake. During the 2007 Peru earthquake, lights were seen in the skies above the sea and filmed by many people. The phenomenon was also observed and caught on film during the 2009 L'Aquila and the 2010 Chile earthquakes. The phenomenon was also reported around the North Canterbury earthquake in New Zealand, which occurred on 1 September 1888. The lights were visible on the morning of 1 September in Reefton, and again on 8 September. 
More recent appearances of the phenomenon, along with video footage of the incidents, happened in Napa and Sonoma Counties in California on August 24, 2014, and in Wellington, New Zealand, on November 14, 2016, where blue flashes like lightning were seen in the night sky and recorded on several videos. On September 8, 2017, many people reported such sightings in Mexico City after an 8.2 magnitude earthquake with epicenter away, near Pijijiapan in the state of Chiapas. Appearances of the earthquake light seem to occur when the quakes have a high magnitude, generally 5 or higher on the Richter scale. There have also been incidents of yellow, ball-shaped lights appearing before earthquakes. Instances of this phenomenon appear in videos taken seconds after a 7.1 magnitude earthquake in the city of Acapulco, Mexico, around 20:47 on 7 September 2021. The New York Times reported that "Videos from both Acapulco and Mexico City also showed the night sky lit up with electrical flashes as power lines swayed and buckled." A more recent one was seen in Qinghai Province, China, at 01:45 on 8 January 2022; surveillance video from a local resident captured the moment. During the 2022 Fukushima earthquake the phenomenon was captured on video from multiple angles. A 2023 study found the earthquake light coincided with a magnetic disturbance detected by a geomagnetic observatory, and ruled out "the possibility of the flashes being caused by explosions in transformers or power supply facilities" by checking the maintenance reports of regional power stations, none of which had malfunctioned near the location of the observed light. This phenomenon was observed around 01:18 on 22 September 2022, when a magnitude 6.8 aftershock of the 2022 Michoacán earthquake struck. Social media users including Webcams de México posted videos of blue lights which seemed to be radiating upward. Mexico News Daily reported on the event and included one of the videos. 
During the 2023 Turkey–Syria earthquakes, multiple lights appeared continuously in Kahramanmaraş and Hatay provinces. Later that year, blue light flashes were also seen in Agadir during the Marrakesh-Safi earthquake. Types Earthquake lights may be classified into two different groups based on their time of appearance: (1) preseismic earthquake lights, which generally occur a few seconds to up to a few weeks prior to an earthquake and are generally observed closer to the epicenter, and (2) coseismic earthquake lights, which can occur either near the epicenter ("earthquake‐induced stress") or at significant distances away from the epicenter during the passage of the seismic wavetrain, in particular during the passage of S waves ("wave‐induced stress"). Earthquake lights during lower magnitude aftershock series seem to be rare. Possible explanations Research into earthquake lights is ongoing; as such, several mechanisms have been proposed. Some models suggest the generation of earthquake lights involves the ionization of oxygen to oxygen anions by the breaking of peroxy bonds in some types of rocks (dolomite, rhyolite, etc.) under the high stress before and during an earthquake. After the ionization, the ions travel up through cracks in the rocks. Once they reach the atmosphere, these ions can ionize pockets of air, forming plasma that emits light. Lab experiments have validated that some rocks do ionize the oxygen in them when subjected to high stress levels. Research suggests that the angle of the fault is related to the likelihood of earthquake light generation, with subvertical (nearly vertical) faults in rifting environments having the most incidences of earthquake lights. One hypothesis involves intense electric fields created piezoelectrically by tectonic movements of quartz-containing rocks such as granite. 
Another possible explanation is local disruption of the Earth's magnetic field and/or ionosphere in the region of tectonic stress, resulting in the observed glow effects either from ionospheric radiative recombination at lower altitudes and greater atmospheric pressure or as aurora. However, the effect is clearly not pronounced or notably observed at all earthquake events and is yet to be directly experimentally verified. During the American Physical Society's 2014 March meeting, research was presented that offered a possible explanation for why bright lights sometimes appear during an earthquake: when two layers of the same material rub against each other, voltage is generated. The researcher, Troy Shinbrot of Rutgers University, conducted experiments with different types of grains to mimic the crust of the Earth and emulate the occurrence of earthquakes. He reported that "when the grains split open, they measured a positive voltage spike, and when the split closed, a negative spike." The crack allows the voltage to discharge into the air, which is then electrified and produces a bright electrical light. According to Shinbrot, they produced these voltage spikes every single time with every material tested. While the reason for such an occurrence was not provided, Shinbrot referenced the phenomenon of triboluminescence. Researchers hope that getting to the bottom of this phenomenon will provide more information that will allow seismologists to better predict earthquakes. Skepticism In 2016, podcaster Brian Dunning said he was skeptical that the phenomenon even existed, citing a lack of direct evidence. There is also a "staggering volume of literature... hardly any of these papers agree on anything... I'm forced to wonder how many of these eager researchers are familiar with Hyman's Categorical Imperative 'Do not try to explain something until you are sure there is something to be explained'." 
In 2016, freelance writer Robert Sheaffer wrote that skeptics and science bloggers should be more skeptical of the phenomenon. Sheaffer on his Bad UFO blog shows examples of what people claim are earthquake lights, then he shows photos of iridescent clouds which appear to be the same. He states that "It's truly remarkable how mutable "earthquake lights" are. Sometimes they look like small globes, climbing up a mountain. Sometimes they look like flashes of lightning. Other times they look exactly like iridescent clouds. Earthquake lights can look like anything at all, when you are avidly seeking evidence for them." See also Ball lightning Earthquake cloud Earthquake prediction Earthquake weather References External links Atmospheric ghost lights Earthquake and seismic risk mitigation Light sources
Earthquake light
[ "Engineering" ]
1,695
[ "Structural engineering", "Earthquake and seismic risk mitigation" ]
5,618,682
https://en.wikipedia.org/wiki/MAPK/ERK%20pathway
The MAPK/ERK pathway (also known as the Ras-Raf-MEK-ERK pathway) is a chain of proteins in the cell that communicates a signal from a receptor on the surface of the cell to the DNA in the nucleus of the cell. The signal starts when a signaling molecule binds to the receptor on the cell surface and ends when the DNA in the nucleus expresses a protein and produces some change in the cell, such as cell division. The pathway includes many proteins, such as mitogen-activated protein kinases (MAPKs), originally called extracellular signal-regulated kinases (ERKs), which communicate by adding phosphate groups to a neighboring protein (phosphorylating it), thereby acting as an "on" or "off" switch. When one of the proteins in the pathway is mutated, it can become stuck in the "on" or "off" position, a necessary step in the development of many cancers. In fact, components of the MAPK/ERK pathway were first discovered in cancer cells, and drugs that reverse the "on" or "off" switch are being investigated as cancer treatments. Background The signal that starts the MAPK/ERK pathway is the binding of extracellular mitogen to a cell surface receptor. This allows a Ras protein (a Small GTPase) to swap a GDP molecule for a GTP molecule, flipping the "on/off switch" of the pathway. The Ras protein can then activate MAP3K (e.g., Raf), which activates MAP2K, which activates MAPK. Finally, MAPK can activate a transcription factor, such as Myc. This process is described in more detail below. Ras activation Receptor-linked tyrosine kinases, such as the epidermal growth factor receptor (EGFR), are activated by extracellular ligands, such as the epidermal growth factor (EGF). Binding of EGF to the EGFR activates the tyrosine kinase activity of the cytoplasmic domain of the receptor. The EGFR becomes phosphorylated on tyrosine residues. Docking proteins such as GRB2 contain an SH2 domain that binds to the phosphotyrosine residues of the activated receptor. 
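The relay described above (ligand binding, receptor autophosphorylation, GRB2/SOS docking, then GDP-for-GTP exchange on Ras) behaves like a logical AND feeding a nucleotide switch. The sketch below is a toy model: the names mirror the text, but the boolean logic is an illustrative simplification, not a kinetic description.

```python
# Toy model of Ras activation as an on/off switch (illustrative only).
def ras_state(egf_bound: bool, grb2_sos_present: bool) -> str:
    # Ligand binding activates the receptor's cytoplasmic tyrosine kinase,
    # which phosphorylates tyrosine residues on the receptor itself.
    receptor_phosphorylated = egf_bound
    # GRB2's SH2 domain docks on those phosphotyrosines, and the docked
    # GRB2-SOS complex activates SOS.
    sos_active = receptor_phosphorylated and grb2_sos_present
    if sos_active:
        # Active SOS promotes GDP release; Ras then binds GTP ("on").
        return "Ras-GTP (on)"
    return "Ras-GDP (off)"

print(ras_state(egf_bound=True,  grb2_sos_present=True))   # Ras-GTP (on)
print(ras_state(egf_bound=False, grb2_sos_present=True))   # Ras-GDP (off)
```

The point of the sketch is that Ras activation requires both an activated receptor and a recruited exchange factor; removing either input leaves the switch off.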
GRB2 binds to the guanine nucleotide exchange factor SOS by way of the two SH3 domains of GRB2. When the GRB2-SOS complex docks to phosphorylated EGFR, SOS becomes activated. Activated SOS then promotes the removal of GDP from a member of the Ras subfamily (most notably H-Ras or K-Ras). The Ras protein can then bind GTP and become active. Apart from EGFR, other cell surface receptors that can activate this pathway via GRB2 include Trk A/B, Fibroblast growth factor receptor (FGFR) and PDGFR. Kinase cascade Activated Ras then activates the protein kinase activity of a RAF kinase. The RAF kinase phosphorylates and activates a MAPK/ERK Kinase (MEK1 or MEK2). The MEK phosphorylates and activates a mitogen-activated protein kinase (MAPK). RAF and MAPK/ERK are both serine/threonine-specific protein kinases. MEK is a serine/tyrosine/threonine kinase. In a technical sense, RAF, MEK, and MAPK are all mitogen-activated kinases, as is MNK (see below). MAPKs were originally called "extracellular signal-regulated kinases" (ERKs) and "microtubule associated protein kinases" (MAPKs). One of the first proteins known to be phosphorylated by ERK was a microtubule-associated protein (MAP). As discussed below, many additional targets for phosphorylation by MAPK were later found, and the protein was renamed "mitogen-activated protein kinase" (MAPK). The series of kinases from RAF to MEK to MAPK is an example of a protein kinase cascade. Such series of kinases provide opportunities for feedback regulation and signal amplification. Regulation of translation and transcription Three of the many proteins that are phosphorylated by MAPK are shown in the figure to the right. One effect of MAPK activation is to alter the translation of mRNA to proteins. MAPK phosphorylates the 40S ribosomal protein S6 kinase (RSK). This activates RSK, which, in turn, phosphorylates ribosomal protein S6. Mitogen-activated protein kinases that phosphorylate ribosomal protein S6 were the first to be isolated. 
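The signal amplification that a RAF → MEK → MAPK cascade affords can be sketched with a deliberately naive calculation. The per-tier turnover number of 100 substrates per active kinase is invented for illustration; real amplification depends on kinetics, phosphatase activity, and feedback.

```python
# Toy illustration of amplification across two cascade tiers:
# each active kinase phosphorylates many substrate molecules.
def cascade_output(active_raf: int, substrates_per_kinase: int = 100) -> int:
    active_mek = active_raf * substrates_per_kinase    # RAF activates MEK
    active_mapk = active_mek * substrates_per_kinase   # MEK activates MAPK
    return active_mapk

print(cascade_output(active_raf=10))  # 10 active RAF -> 100,000 active MAPK
```

Each added tier multiplies the output, which is one reason a small number of activated receptors can produce a large downstream response, and why each tier is also a point where feedback can regulate the signal.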
MAPK regulates the activities of several transcription factors. MAPK can phosphorylate C-myc. MAPK phosphorylates and activates MNK, which, in turn, phosphorylates CREB. MAPK also regulates the transcription of the C-Fos gene. By altering the levels and activities of transcription factors, MAPK leads to altered transcription of genes that are important for the cell cycle. Genes at the 22q11, 1q42, and 19p13 loci, by affecting the ERK pathway, are associated with schizophrenia, schizoaffective disorder, bipolar disorder, and migraines. Regulation of cell cycle entry and proliferation Role of mitogen signaling in cell cycle progression The ERK pathway plays an important role in integrating external signals from the presence of mitogens such as epidermal growth factor (EGF) into signaling events promoting cell growth and proliferation in many mammalian cell types. In a simplified model, the presence of mitogens and growth factors triggers the activation of canonical receptor tyrosine kinases such as EGFR, leading to their dimerization and subsequent activation of the small GTPase Ras. This then leads to a series of phosphorylation events downstream in the MAPK cascade (Raf-MEK-ERK) ultimately resulting in the phosphorylation and activation of ERK. The phosphorylation of ERK results in an activation of its kinase activity and leads to phosphorylation of its many downstream targets involved in regulation of cell proliferation. In most cells, some form of sustained ERK activity is required for cells to activate genes that induce cell cycle entry and suppress negative regulators of the cell cycle. Two such important targets include Cyclin D complexes with Cdk4 and Cdk6 (Cdk4/6), which are both phosphorylated by ERK. The transition from G1 to S phase is coordinated by the activity of Cyclin D-Cdk4/6, which increases during late G1 phase as cells prepare to enter S-phase in response to mitogens. 
Cdk4/6 activation contributes to hyper-phosphorylation and the subsequent destabilization of retinoblastoma protein (Rb). Hypo-phosphorylated Rb is normally bound to the transcription factor E2F in early G1 and inhibits its transcriptional activity, preventing expression of S-phase entry genes including Cyclin E, Cyclin A2 and Emi1. ERK1/2 activation downstream of mitogen-induced Ras signaling is necessary and sufficient to remove this cell cycle block and allow cells to progress to S-phase in most mammalian cells. Downstream feedback control and generation of a bistable G1/S switch The restriction point (R-point) marks the critical event when a mammalian cell commits to proliferation and becomes independent of growth stimulation. It is fundamental for normal differentiation and tissue homeostasis, and seems to be dysregulated in virtually all cancers. Although the R-point has been linked to various activities involved in the regulation of the G1–S transition of the mammalian cell cycle, the underlying mechanism remains unclear. Using single-cell measurements, Yao et al. showed that the Rb–E2F pathway functions as a bistable switch to convert graded serum inputs into all-or-none E2F responses. Growth and mitogen signals transmitted through the ERK pathway are incorporated into multiple positive feedback loops to generate a bistable switch at the level of E2F activation. This occurs through three main interactions during late G1 phase. The first is a result of mitogen stimulation through ERK leading to the expression of the transcription factor Myc, which is a direct activator of E2F. The second pathway is a result of ERK activation leading to the accumulation of active complexes of Cyclin D and Cdk4/6, which destabilize Rb via phosphorylation and further serve to activate E2F and promote expression of its targets. 
Finally, these interactions are all reinforced by an additional positive feedback loop of E2F on itself, as its own expression leads to production of the active complex of Cyclin E and CDK2, which further serves to lock in a cell's decision to enter S-phase. As a result, when serum concentration is increased in a gradual manner, most mammalian cells respond in a switch-like manner in entering S-phase. This mitogen-stimulated, bistable E2F switch exhibits hysteresis, as cells are inhibited from returning to G1 even after mitogen withdrawal following E2F activation. Dynamic signal processing by the ERK pathway The EGFR-ERK/MAPK (epidermal growth factor receptor extracellular-regulated kinase/mitogen-activated protein kinase) pathway stimulated by EGF is critical for cellular proliferation, but the temporal separation between signal and response obscured the signal-response relationship in previous research. In 2013, Albeck et al. provided key experimental evidence to fill this gap in knowledge. They measured signal strength and dynamics under steady-state EGF stimulation, in which the signaling and output can be easily related. They further mapped the signal-response relationship across the pathway's full dynamic range. Using high-content immunofluorescence (HCIF) detection of phosphorylated ERK (pERK) and live cell FRET biosensors, they monitored downstream output of the ERK pathway in both live cells and fixed cells. To further link the quantitative characteristics of ERK signaling to proliferation rates, they established a series of steady-state conditions by applying EGF at a range of concentrations. Single cell imaging experiments have shown ERK to be activated in stochastic bursts in the presence of EGF. Furthermore, the pathway has been shown to encode the strength of signaling inputs through frequency-modulated pulses of its activity. 
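The bistable, hysteretic behavior of the Rb–E2F switch described above can be sketched with a minimal toy model. The equation and every parameter value here are illustrative assumptions, not taken from the cited single-cell studies: E2F activity E receives a serum/mitogen input s, a lumped sigmoidal positive-feedback term standing in for the Myc, Cyclin D-Cdk4/6, and E2F autoactivation loops, and a first-order decay term.

```python
# Minimal sketch of a bistable G1/S switch with hysteresis:
#   dE/dt = s + E^2 / (K^2 + E^2) - d * E
# E         : lumped "E2F activity"
# s         : serum/mitogen input arriving via the ERK pathway
# Hill term : combined positive feedback (Myc, Cyclin D-Cdk4/6, E2F itself)
# d * E     : first-order inactivation/decay
# All parameter values are illustrative assumptions.

def steady_state(s, e0, K=0.5, d=0.8, dt=0.05, steps=4000):
    """Relax E toward a steady state from initial value e0 (Euler integration)."""
    e = e0
    for _ in range(steps):
        e += dt * (s + e**2 / (K**2 + e**2) - d * e)
    return e

e_off = steady_state(0.0, e0=0.0)    # no serum, starting "off": stays off
e_on  = steady_state(0.10, e0=0.0)   # strong serum input: switches on
e_mem = steady_state(0.0, e0=e_on)   # serum withdrawn: stays on (hysteresis)
print(f"off={e_off:.2f}  on={e_on:.2f}  after withdrawal={e_mem:.2f}")
```

Because the high branch persists at s = 0, the model reproduces the all-or-none, memory-like commitment seen in the single-cell measurements: once E2F is activated, removing the mitogen does not return the system to the low state.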
Using live cell FRET biosensors, cells induced with different concentrations of EGF elicited activity bursts of different frequency, where higher levels of EGF resulted in more frequent bursts of ERK activity. To determine how S phase entry can be affected by sporadic pulses of ERK activity at low EGF concentrations, they used MCF-10A cells co-expressing EKAR-EV and RFP-geminin, scored the pulses of ERK activity, and then aligned these ERK activity profiles with the time of geminin reporter induction. They found that longer periods of ERK activity, reflected in increased pulse length, stimulate S phase entry. To understand the dynamics of the EGFR-ERK pathway, specifically how frequency and amplitude are modulated, they applied the EGFR inhibitor gefitinib or the highly selective MAPK/ERK kinase (MEK) inhibitor PD0325901 (PD). The two inhibitors yielded somewhat different results: at intermediate concentrations, gefitinib induced pulsatile behavior and a bimodal shift that was not observed with PD. Combining EGF and PD, they concluded that the frequency of ERK activity pulses is modulated by the strength of the EGFR input, while their amplitude is modulated by changes in MEK activity. Lastly, they turned to Fra-1, one of the downstream effectors of the ERK pathway, as it is technically challenging to estimate ERK activity directly. To understand how the integrated ERK pathway output (which should be independent of either frequency or amplitude) affects the proliferation rate, they used combinations spanning a wide range of EGF and PD concentrations and found a single, inverted-"L"-shaped curvilinear relationship, which suggests that at low levels of ERK pathway output, small changes in signal intensity correspond to large changes in proliferative rate, while large changes in signal intensity near the high end of the dynamic range have little impact on proliferation. 
The fluctuation of ERK signaling highlights potential issues with current therapeutic approaches, providing a new perspective on drug targeting in the ERK pathway in cancer. Integration of mitogen and stress signals in proliferation Recent live cell imaging experiments in MCF10A and MCF7 cells have shown that a combination of mitogen signaling through ERK and stress signals through activation of p53 in mother cells contributes to the likelihood of whether newly formed daughter cells will immediately re-enter the cell cycle or enter quiescence (G0) following mitosis. Rather than daughter cells starting with no key signaling proteins after division, mitogen/ERK-induced Cyclin D1 mRNA and DNA damage-induced p53 protein, both long-lived factors in cells, can be stably inherited from mother cells after cell division. The levels of these regulators vary from cell to cell after mitosis, and the stoichiometry between them strongly influences cell cycle commitment through activation of Cdk2. Chemical perturbations using inhibitors of ERK signaling or inducers of p53 signaling in mother cells showed that daughter cells with high levels of p53 protein and low levels of Cyclin D1 transcripts primarily enter G0, whereas cells with high Cyclin D1 and low levels of p53 are most likely to re-enter the cell cycle. These results illustrate a form of molecular memory encoding the history of mitogen signaling through ERK and of stress responses through p53. Clinical significance Uncontrolled growth is a necessary step for the development of all cancers. In many cancers (e.g. melanoma), a defect in the MAP/ERK pathway leads to that uncontrolled growth. Many compounds can inhibit steps in the MAP/ERK pathway, and therefore are potential drugs for treating cancer, such as Hodgkin disease. The first drug licensed to act on this pathway is sorafenib, a Raf kinase inhibitor. 
Other Raf inhibitors include SB590885, PLX4720, XL281, RAF265, encorafenib, dabrafenib, and vemurafenib. Some MEK inhibitors include cobimetinib, CI-1040, PD0325901, binimetinib (MEK162), selumetinib, and trametinib (GSK1120212). It has been found that acupoint-moxibustion has a role in relieving alcohol-induced gastric mucosal injury in a mouse model, which may be closely associated with its effects in up-regulating activities of the epidermal growth factor/ERK signal transduction pathway. The RAF-ERK pathway is also involved in the pathophysiology of Noonan syndrome, a polymalformative disease. Protein microarray analysis can be used to detect subtle changes in protein activity in signaling pathways. The developmental syndromes caused by germline mutations in genes that alter the RAS components of the MAP/ERK signal transduction pathway are called RASopathies. See also Janus kinase Phosphatase Signal transducing adaptor protein G protein-coupled receptor References External links MAP Kinase Resource. Kyoto Encyclopedia of Genes and Genomes — MAPK pathway Signal transduction Cell signaling
MAPK/ERK pathway
[ "Chemistry", "Biology" ]
3,311
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
5,619,097
https://en.wikipedia.org/wiki/Jay%20Keasling
Jay D. Keasling is a professor of chemical engineering and bioengineering at the University of California, Berkeley. He is also associate laboratory director for biosciences at the Lawrence Berkeley National Laboratory and chief executive officer of the Joint BioEnergy Institute. He is considered one of the foremost authorities in synthetic biology, especially in the field of metabolic engineering. Keasling was elected a member of the National Academy of Engineering in 2010 for developing synthetic biology tools to engineer the antimalarial drug artemisinin. Education Keasling received his bachelor's degree at the University of Nebraska-Lincoln where he was a member of Delta Tau Delta International Fraternity. He went on to complete his Doctor of Philosophy degree at the University of Michigan in 1991 under the supervision of Bernhard Palsson. Keasling performed post-doctoral research with Arthur Kornberg at Stanford University in 1991–1992. Research Keasling's current research is focused on engineering chemistry inside microorganisms, an area known as metabolic engineering, for production of useful chemicals or for environmental cleanup. In much the same way that synthetic organic and industrial chemistry has allowed chemists and chemical engineers to produce from fossil fuel resources chemicals that we use every day, metabolic engineering can revolutionize the production of some of the same useful chemicals and more from renewable resources, like sugar and cellulosic biomass. For many years, work in metabolic engineering was limited by the lack of enzymes to perform the necessary chemistry and tools to manipulate and monitor the chemistry inside cells. Seeing a need for better genetic tools, Keasling began working on genetic tool development, an area now known as synthetic biology. Keasling's laboratory has developed or adopted many of the latest analytical tools to troubleshoot its genetic manipulations. 
Keasling's laboratory has applied metabolic chemistry to a number of real-world problems including the production of the antimalarial drug artemisinin and drop-in biofuels. Keasling has published over 300 papers in peer-reviewed journals and has over 30 issued patents. Artemisinin Malaria is a global health problem that threatens 300–500 million people and kills more than one million people annually. The chloroquine-based drugs that were used widely in the past have lost effectiveness because the Plasmodium parasite that causes malaria has become resistant to them. Artemisinin, a sesquiterpene lactone endoperoxide extracted from Artemisia annua L., is highly effective against Plasmodium spp. resistant to other anti-malarial drugs. However, there are several problems with current production methods for artemisinin. First, artemisinin combination therapies (ACTs) are too expensive for people in the developing world to afford. Second, artemisinin is extracted from A. annua, and its yield and consistency depend on climate and the extraction process. While there is a method for chemical synthesis of artemisinin, it is too low yielding and therefore too expensive for use in producing low-cost drugs. Third, although the World Health Organization has recommended that artemisinin be formulated with other active pharmaceutical ingredients in ACTs, many manufacturers are still producing mono-therapies of artemisinin, which increase the chance that Plasmodium spp. will develop resistance to artemisinin. Keasling's laboratory at the University of California, Berkeley, has engineered both Escherichia coli and Saccharomyces cerevisiae to produce artemisinic acid, a precursor to artemisinin that can be derivatized using established, simple, inexpensive chemistry to form artemisinin or any artemisinin derivative currently used to treat malaria. 
The microorganisms were engineered with a ten-enzyme biosynthetic pathway using genes from Artemisia annua, Saccharomyces cerevisiae, and Escherichia coli (twelve genes in all) to transform a simple and renewable sugar, like glucose, into the complicated chemical structure of the anti-malarial drug artemisinin. The engineered microorganism is capable of secreting the final product from the cell, thereby purifying it from all other intracellular chemicals and reducing the purification costs and therefore the cost of the final drug. Given the existence of known, relatively high-yielding chemistry for the conversion of artemisinic acid to artemisinin or any other artemisinin derivative, microbially-produced artemisinic acid is a viable, renewable, and scalable source of this potent family of anti-malarial drugs. A critical element of Keasling's work was the development of genetic tools to aid in the manipulation of microbial metabolism, particularly for low-value products that require high yields from sugar. His laboratory developed single-copy plasmids for the expression of complex metabolic pathways, promoter systems that allow regulated control of transcription consistently in all cells of a culture, mRNA stabilization technologies to regulate the stability of mRNA segments, and a protein engineering approach to attach several enzymes of a metabolic pathway onto a synthetic protein scaffold to increase pathway flux. These and other gene expression tools now enable precise control of the expression of the genes that encode novel metabolic pathways to maximize chemical production, to minimize losses to side products, and minimize the accumulation of toxic intermediates that may poison the microbial host, all of which are important for economical production of this important drug. Another critical aspect of Keasling's work was discovering the chemistry and enzymes in Artemisia annua responsible for synthesis of artemisinin. 
These enzymes included the cytochrome P450 that oxidizes amorphadiene to artemisinic acid and the redox partners that transfer reducing equivalents from the enzyme to cofactors. The discovery of these enzymes and their functional expression in both yeast and E. coli, along with the other nine enzymes in the metabolic pathway, allowed production of artemisinic acid by these two microorganisms. S. cerevisiae was chosen for the large-scale production process and was further engineered to improve artemisinic acid production. Keasling's microbial production process has a number of advantages over extraction from plants. First, microbial synthesis will reduce the cost of artemisinin, the most expensive component of artemisinin-based combination therapies—by as much as tenfold—and therefore make artemisinin-derived anti-malarial drugs more affordable to people in the developing world. Second, weather conditions or political climates that might otherwise affect the yield or cost of the plant-derived version of the drug will not affect the microbial source for the drug. Third, microbial production of artemisinin in large tanks will allow for more careful distribution of artemisinin to legitimate drug manufacturers that formulate artemisinin combination therapies, rather than monotherapies. This will, in turn, slow the development of resistance to this drug. Fourth, severe shortages of plant-derived artemisinin are projected for 2011 and beyond, which will increase the cost of artemisinin combination therapies. Finally, microbially-derived artemisinic acid will enable production of new derivatives of artemisinin that Plasmodium may not be resistant to, thereby extending the time over which artemisinin may be used. 
To ensure that the process he developed would benefit people in the developing world, Keasling assembled a unique team consisting of his laboratory at the University of California, Berkeley, Amyris Biotechnologies (a company founded on this technology) and the Institute for OneWorld Health (a non-profit pharmaceutical company located in San Francisco). In addition to assembling the team, Keasling developed an intellectual property model to ensure that microbially-sourced artemisinin could be offered as inexpensively as possible to people in the developing world: patents granted from his work at UCB are licensed royalty free to Amyris Biotechnologies and the Institute for OneWorld Health for use in producing artemisinin so long as they do not make a profit on artemisinin sold in the developing world. The team was funded in December 2004 by the Bill & Melinda Gates Foundation to develop the microbial production process. The science was completed in December 2007. In 2008, Sanofi-Aventis licensed the technology and worked with Amyris to develop the production process. Sanofi-Aventis has produced 35 tons of artemisinin using Keasling's microbial production process, which is enough for 70 million treatments. Distribution of artemisinin combination therapies containing the microbially-sourced artemisinin began in August 2014 with 1.7 million treatments shipped to Africa. It is anticipated that 100-150 million treatments will be produced using this technology and shipped annually to Africa, Asia and South America. Biofuels Renewable fuels are needed for all modes of transportation but most microbially-sourced fuels can be used only as a small fraction of gasoline in conventional spark-ignition engines. Keasling's laboratory has engineered microorganisms to produce hydrocarbons with similar properties to the fuels now derived from petroleum. These fuels are synthesized from plant-derived sugars, such as cellulose feedstock, which is of little economic value. 
Consequently, microbes can minimize the carbon footprint by minimizing the energy expenditure in sourcing fuel, such as off-shore drilling and hydraulic fracturing. Keasling and his colleagues demonstrated that Escherichia coli and Saccharomyces cerevisiae can be engineered to produce the fatty acid-based biofuels fatty acid ethyl esters, alkenes, and methyl ketones. As linear hydrocarbons are the key components of diesel, these biologically produced fuels are excellent diesel replacements. However, fuels containing only long, linear, hydrocarbon chains will freeze under cold conditions. To develop fuels suitable for cold applications, Keasling's laboratory engineered E. coli and S. cerevisiae to produce branched and cyclic hydrocarbons using the isoprenoid biosynthetic pathway: isopentanol, a drop-in replacement for gasoline; pinene, a replacement for jet fuel; and bisabolene, a replacement for diesel fuel. Because isoprenoids add a methyl side chain every four carbons in the backbone, fuels made from isoprenoids have very low freeze and cloud points, making them suitable as cold-weather diesels and jet fuels. One of the biggest challenges in scaling up microbial fermentations is the stability of the microbial strain: the engineered microorganism will attempt to mutate or shed the metabolic pathway, in part because intermediates in the metabolic pathway accumulate and are toxic to the cells. To balance pathway flux and reduce the cost of producing a desired biofuel, Keasling's laboratory developed dynamic regulators to sense the levels of intermediates in the pathway and regulate pathway activity. These regulators stabilized the pathway and the cell and improved biofuel yields, making it possible to grow the engineered cells in large-scale fermentation tanks for fuel production. Many of the best fuels and chemicals are toxic to the producer organism. One way to limit fuel toxicity is to actively pump the fuel from the cell. 
To identify pumps ideally suited for a particular fuel, Keasling and his colleagues bioprospected environmental microorganisms for many different three-component transporters and selected for the pumps most effective for a particular fuel. These transporters allowed E. coli to grow in the presence of the fuels and, as a result, produce more of the target fuel than it would have been able to in the absence of the transporter. The starting materials (generally sugars) are the most significant factor in the biofuel production cost. Cellulose, a potentially low-cost starting material, must be depolymerized into sugars by adding an expensive cocktail of enzymes. One way to reduce this cost is to engineer the fuel-producing microbe to also produce the enzymes to depolymerize cellulose and hemicellulose. Recently, Keasling's laboratory demonstrated that a microorganism could be engineered to synthesize and secrete enzymes to depolymerize cellulose and hemicellulose into sugars and to produce a gasoline replacement (butanol), a diesel-fuel replacement (fatty acid ethyl ester), or a jet fuel replacement (pinene). As a technological platform, biofuel manufacturing faces huge economic hurdles, many of which depend on the market pricing of crude oil and other conventionally sourced fuels. Nonetheless, metabolic engineering is a technology that is becoming increasingly competitive and is expected to have wide-reaching effects by 2020. 
Awards Graduation with High Distinction, The University of Nebraska, 1986 Regents Scholarship, The University of Nebraska, 1982-1986 NIH Postdoctoral Fellowship, Stanford University, 1991-1992 Zeneca Young Faculty Fellowship, Zeneca Ltd., 1992-1997 CAREER Award, National Science Foundation, 1995 Chevron Young Faculty Fellowship, Chevron, 1995 AIChE Award for Chemical Engineering Excellence in Academic Teaching, Northern California Section of the American Institute for Chemical Engineers, 1999 Elected Fellow of the American Institute of Medical and Biological Engineering, 2000 Allan P. Colburn Memorial Lecturer, Department of Chemical Engineering, University of Delaware, 2002 Inaugural Schwartz Lecturer, Department of Chemical Engineering, Johns Hopkins University, 2003 Blue-Green Lecturer, Department of Chemical Engineering, University of Michigan & Department of Chemical Engineering and Materials Sciences, Michigan State University, 2005 Seventh Annual Frontiers of Biotechnology Lecture, Department of Chemical Engineering, Massachusetts Institute of Technology, 2005 Technology Pioneer, World Economic Forum, 2005 Scientist of the Year, Discover magazine, 2006. 
Eastman Lectureship, Department of Chemical Engineering, Georgia Tech, 2007 Research Project of the Year, Northern California Section of the American Institute for Chemical Engineers, 2007 Elected Fellow of the American Academy for Microbiology, 2007 Professional Progress Award, American Institute for Chemical Engineers, 2007 Truman Lecturer, Sandia National Laboratories, 2007 Visionary Award, Bay Bio, 2007 Sierra Section Recognition for Leadership in the Chemical Engineering Profession, American Institute of Chemical Engineers – Northern California Section, 2008 Patten Distinguished Seminar, Department of Chemical Engineering, University of Colorado, 2008 2008 Britton Chance Distinguished Lecturer, Department of Chemical and Biomolecular Engineering and Institute Medicine and Engineering, University of Pennsylvania, 2008 Chancellor's Award for Public Service for Research in the Public Interest, University of California, Berkeley, 2009 The Sixteenth F. A. Bourke Distinguished Lecture in Biotechnology, Center for Advanced Biotechnology and Department of Biomedical Engineering, Boston University, 2009 2009 University Lectures in Chemistry, Department of Chemistry, Boston College, 2009 Inaugural Biotech Humanitarian Award, Biotechnology Industry Organization (BIO), 2009 Danckwerts Lectureship, World Congress on Chemical Engineering, 2009 Cox Distinguished Lectureship, Washington University, 2009. 
Ashland Lectureship, University of Kentucky, 2009 LGBTQ Engineer of the Year, National Organization of Gay and Lesbian Scientists and Technical Professionals, 2010 National Academy of Engineering, 2010 Eyring Lectures in Chemistry and Biochemistry, Arizona State University, 2010 Treat B Johnson Lecture, Department of Chemistry, Yale University, 2010 Division O (Fermentation and Biotechnology) Lectureship, American Society for Microbiology, 2010 Presidential Green Chemistry Challenge Award, United States Environmental Protection Agency, 2010 Kewaunee Lectureship, Pratt School of Engineering, Duke University, 2011 Tetelman Fellowship Lectureship, Jonathan Edwards College, Yale University, 2012 Henry McGee Lecturer, Virginia Commonwealth University, School of Engineering, 2012 Katz Lectureship, Department of Chemical Engineering, University of Michigan, 2012 Heuermann Lecture, Institute of Agriculture and Natural Resources, University of Nebraska-Lincoln, 2012 International Metabolic Engineering Award, Metabolic Engineering Society, 2012 18th Annual Heinz Award for Technology, the Economy and Employment, Heinz Family Foundation, 2012 Marvin Johnson Award in Microbial and Biochemical Technology, Division of Biochemical Technology, American Chemical Society, 2013 Promega Biotechnology Research Award, American Society for Microbiology, 2013 George Washington Carver Award for Innovation in Industrial Biotechnology, Biotechnology Industry Organization, 2013 Food, Pharmaceutical and Bioengineering Division Award, Food, Pharmaceutical and Bioengineering Division, American Institute of Chemical Engineers, 2013 Herman S. 
Block Award Lectureship, Department of Chemistry, University of Chicago, 2014 Arun Guthikonda Memorial Award Lectureship, Department of Chemistry, Columbia University, 2014 Devon Walter Meek Award Lectures, Department of Chemistry, Ohio State University, 2014 Eni Renewable Energy Prize, Eni S.p.A., 2014 Innovator Award – Biosciences, Economist magazine, 2014 National Academy of Inventors, 2014 Companies Keasling is a founder of Amyris (with Vincent Martin, Jack Newman, Neil Renninger and Kinkead Reiling), LS9 (now part of REG with George Church and Chris Sommerville), and Lygos (with Leonard Katz, Clem Fortman, Jeffrey Dietrich and Eric Steen). Personal life Keasling is originally from Harvard, Nebraska, and is openly gay. See also LGBT people in science References External links Jay Keasling talk given at PopTech conference UC Berkeley College of Engineering faculty University of Nebraska–Lincoln alumni University of Michigan alumni Systems biologists Living people Members of the United States National Academy of Engineering American LGBTQ scientists Fellows of the American Institute for Medical and Biological Engineering Synthetic biologists Gay academics Gay scientists 20th-century American scientists 20th-century American engineers 21st-century American scientists 21st-century American engineers Year of birth missing (living people) UC Berkeley College of Chemistry faculty
Jay Keasling
[ "Biology" ]
3,591
[ "Synthetic biology", "Synthetic biologists" ]
5,619,296
https://en.wikipedia.org/wiki/Expensive%20Desk%20Calculator
Expensive Desk Calculator by Robert A. Wagner is thought to be computing's first interactive calculation program. The software first ran on the TX-0 computer loaned to the Massachusetts Institute of Technology (MIT) by Lincoln Laboratory. It was ported to the PDP-1 donated to MIT in 1961 by Digital Equipment Corporation. Friends from the MIT Tech Model Railroad Club, Wagner and a group of fellow students had access to these room-sized machines outside classes, signing up for time during off hours. Overseen by Jack Dennis, John McKenzie and faculty advisors, they were personal computer users as early as the late 1950s. The calculators Wagner needed to complete his numerical analysis homework were across campus and in short supply so he wrote one himself. Although the program has about three thousand lines of code and took months to write, Wagner received a grade of zero on his homework. His professor's reaction was, "You used a computer! This can't be right." Steven Levy wrote, "The professor would learn in time, as would everyone, that the world opened up by the computer was a limitless one." References See also PDP-1 Expensive Typewriter Expensive Planetarium Expensive Tape Recorder Calculators History of software
Expensive Desk Calculator
[ "Mathematics", "Technology" ]
250
[ "Calculators", "History of software", "History of computing" ]
5,619,413
https://en.wikipedia.org/wiki/Bo%20Diddley%20beat
The Bo Diddley beat is a syncopated musical rhythm that is widely used in rock and roll and pop music. The beat is named after rhythm and blues musician Bo Diddley, who introduced and popularized it with his self-titled debut single, "Bo Diddley", in 1955. The beat is essentially the Afro-Cuban clave rhythm, or a variation of it. Music educator and author Michael Campbell explains that it "shows the relationship between Afro-Cuban music, Americanized Latin rhythms, and rock rhythm... [The beats] are more active and complicated than a simple rock rhythm, but less complex than a real Afro-Cuban rhythm." History and composition The Bo Diddley beat is a variation of the 3-2 clave, one of the most common bell patterns found in Afro-Cuban music, that has been traced to sub-Saharan African music traditions. It is also akin to the rhythmic pattern known as "shave and a haircut, two bits", that has been linked to Yoruba drumming from West Africa. A folk tradition called "hambone", a style used by street performers who play out the beat by slapping and patting their arms, legs, chest, and cheeks while chanting rhymes, has also been suggested as a source. According to musician and author Ned Sublette, "In the context of the time, and especially those maracas [heard on the record], 'Bo Diddley' has to be understood as a Latin-tinged record. A rejected cut recorded at the same session was titled only 'Rhumba' on the track sheets." Bo Diddley employed maracas, a percussion instrument used in Caribbean and Latin music, as a basic component of the sound. Jerome Green was the maraca player on Diddley's early records, initially using the instrument as a more portable alternative to a drum set. When asked how he began to use this rhythm, Bo Diddley gave many different accounts. In a 2005 interview with Rolling Stone magazine, he said that he came up with the beat after listening to gospel music in church when he was twelve years old. 
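The underlying 3-2 son clave can be written out on a sixteen-step grid (one 4/4 bar at sixteenth-note resolution). The sketch below uses the standard textbook placement of the five clave strokes; as noted above, the Bo Diddley beat is a variation on this pattern rather than an exact copy of it.

```python
# The 3-2 son clave on a 16-step grid (one 4/4 bar, sixteenth notes).
# Step positions follow the standard 3-2 son clave placement; the
# Bo Diddley beat varies this pattern rather than reproducing it exactly.

CLAVE_3_2 = [0, 3, 6, 10, 12]   # sixteenth-note positions of the five strokes

def render(hits, steps=16):
    """Render a list of hit positions as an X/. step grid."""
    return "".join("X" if i in hits else "." for i in range(steps))

print(render(CLAVE_3_2))  # X..X..X...X.X...
```

The first eight steps carry the three-stroke half of the pattern and the second eight the two-stroke half, which is what the "3-2" in the name refers to.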
Use by other artists Prior to Bo Diddley's self-titled song, the rhythm occurred in at least 13 rhythm and blues songs recorded between 1944 and 1955, including two by Johnny Otis from 1948. In 1944, "Rum and Coca Cola", containing the beat, was recorded by the Andrews Sisters, and in 1952 a song with similar syncopation, "Hambone", was recorded by Red Saunders' Orchestra with the Hambone Kids. Later, the beat was included in many songs composed by artists other than Bo Diddley:
"I Wish You Would" by Billy Boy Arnold (1955)
"Not Fade Away" by Buddy Holly (1957)
"Cannonball" by Duane Eddy (1958)
"Willie and the Hand Jive" by Johnny Otis (1958)
"Hey Little Girl" by Dee Clark (1959)
"(Marie's the Name) His Latest Flame" by Elvis Presley (1961)
"Mickey's Monkey" by the Miracles (1963)
"When the Lovelight Starts Shining Through His Eyes" by the Supremes (1963)
"Rosalyn" by Pretty Things (1964)
"Don't Doubt Yourself, Babe" by the Byrds (1965)
"Mystic Eyes" by Them (1965)
"I Want Candy" by the Strangeloves (1965)
"Please Go Home" by the Rolling Stones (1966)
"Bummer in the Summer" by Love (1967)
"Get Me to the World on Time" by the Electric Prunes (1967)
"She Has Funny Cars" by Jefferson Airplane (1967)
"Magic Bus" by the Who (1968)
"1969" by the Stooges
"Panic in Detroit" by David Bowie (1973)
"Shame, Shame, Shame" by Shirley & Company (1974)
"New York Groove" by Hello (1975)
"Billy Bones and the White Bird" by Elton John (1975)
"She's the One" by Bruce Springsteen (1975)
"Bad Blood" by Neil Sedaka (1975)
"American Girl" by Tom Petty and the Heartbreakers (1977)
"Big Alice" by Don Pullen (1977)
"Hateful" by the Clash (1979)
"Rudie Can't Fail" by the Clash (1979)
"(She's So) Selfish" by the Knack (1979)
"Lovers Walk" by Elvis Costello and the Attractions (1980)
"Cuban Slide" by the Pretenders (1980)
"What A Blow" by Ian Gomm (1980)
"Europa and the Pirate Twins" by Thomas Dolby (1981)
"Don't Let Him Go" by REO Speedwagon (1981)
"Hare Krsna" by Hüsker Dü (1984)
"How Soon Is Now?" by the Smiths (1985) (Diddley-style tremolo)
"Mr. Brownstone" by Guns N' Roses (1987)
"Faith" by George Michael (1987)
"Ruby Dear" by Talking Heads (1988)
"Desire" by U2 (1988)
"Movin' On Up" by Primal Scream (1991)
"Tribal Thunder" by Dick Dale and the Del-Tones (1993)
"No One to Run With" by the Allman Brothers Band (1994)
"Party at the Leper Colony" by "Weird Al" Yankovic (2003)
"That Big 5-0" by Stan Ridgway (2004)
"Black Horse and the Cherry Tree" by KT Tunstall (2005)
"If It's Lovin' that You Want" by Rihanna (2005)
"At the Bottom of the Ocean" by Ezra Furman (2013)
"Water Fountain" by Tune-Yards (2014)
"Fool For Love" by Lord Huron (2015)
"Bluey Theme Tune" by Joff Bush (2018)
References Rhythm and meter Bo Diddley
Bo Diddley beat
[ "Physics" ]
1,217
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
5,619,423
https://en.wikipedia.org/wiki/Noggin%20%28protein%29
Noggin, also known as NOG, is a protein that is involved in the development of many body tissues, including nerve tissue, muscles, and bones. In humans, noggin is encoded by the NOG gene. The amino acid sequence of human noggin is highly homologous to that of rat, mouse, and Xenopus (an aquatic frog genus). Noggin is an inhibitor of several bone morphogenetic proteins (BMPs): it inhibits at least BMP2, 4, 5, 6, 7, 13, and 14. The protein's name, a slang English-language word for "head", was coined in reference to its ability to produce embryos with large heads when applied at high concentrations. Function Noggin is a signaling molecule that plays an important role in promoting somite patterning in the developing embryo. It is released from the notochord and regulates bone morphogenetic protein 4 (BMP4) during development. Where BMP4 signaling is absent, the neural plate gives rise to the neural tube and somites in the developing embryo; this also drives formation of the head and other dorsal structures. Noggin function is required for correct nervous system, somite, and skeletal development. Experiments in mice have shown that noggin also plays a role in learning, cognition, bone development, and neural tube fusion. Heterozygous missense mutations in the noggin gene can cause deformities such as joint fusions and syndromes such as multiple synostosis syndrome (SYNS1) and proximal symphalangism (SYM1). SYNS1 differs from SYM1 in additionally causing hip and vertebral fusions. The embryo may also develop shorter bones, fail to form certain skeletal elements, or lack multiple articulating joints. Increased plasma levels of noggin have been observed in obese mice and in patients with a body mass index over 27. Additionally, it has been shown that noggin depletion in adipose tissue leads to obesity. 
Mechanism of action The secreted polypeptide noggin, encoded by the NOG gene, binds and inactivates members of the transforming growth factor-beta (TGF-beta) superfamily of signaling proteins, such as bone morphogenetic protein 4 (BMP4). By diffusing through extracellular matrices more efficiently than members of the TGF-beta superfamily, noggin may have a principal role in creating morphogenic gradients. Noggin appears to have pleiotropic effects, both early in development and in later stages. Knockout model A study of a mouse knockout model tracked the extent to which the absence of noggin affected embryological development. The focus of the study was the formation of the ear and its role in conductive hearing loss. The inner ear underwent multiple deformations affecting the cochlear duct, semicircular canals, and otic capsule portions. Noggin's involvement in the malformations was also shown to be indirect, through its interaction with the notochord and neural axis. The kinking of the notochord and disorientation of the body axis result in a caudal shift in the embryonic body plan of the hindbrain. Major signaling molecules from the rhombomere structures in the hindbrain could not properly induce inner ear formation. This indicated that noggin's regulation of BMP, rather than a direct effect of noggin on inner ear development, was the major source of deformation. Specific knockout models have been created using the Cre-lox system. A model knocking out noggin specifically in adipocytes has helped elucidate that noggin also plays a role in adipose tissue: its depletion in adipocytes alters the structure of both brown and white adipose tissue and causes brown fat dysfunction (impaired thermogenesis and β-oxidation), resulting in dramatic increases in body weight and percent body fat, with alterations in the lipid profile and in the liver; the effects vary with sex. 
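The morphogenic gradients mentioned above arise from diffusion plus degradation of a secreted molecule. The following is a minimal sketch of that general idea, not a model of noggin specifically: an explicit finite-difference solution of 1-D diffusion with first-order decay and a constant source at one end. All parameter values are illustrative assumptions.

```python
def morphogen_gradient(n=50, D=1.0, k=0.05, dt=0.1, steps=5000):
    """1-D diffusion with decay, c_t = D*c_xx - k*c, source held at c[0]=1.

    Returns the concentration profile, which relaxes toward an
    exponentially decaying steady-state gradient (length ~ sqrt(D/k)).
    """
    c = [0.0] * n
    for _ in range(steps):
        c[0] = 1.0                       # constant secretion at the source
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + dt * (D * (c[i-1] - 2*c[i] + c[i+1]) - k * c[i])
        new[-1] = new[-2]                # zero-flux far boundary
        c = new
    return c

profile = morphogen_gradient()
```

With dt*D = 0.1 (grid spacing 1) the explicit scheme is stable; the profile is monotonically decreasing away from the source, the qualitative shape a diffusing inhibitor gradient takes.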
Clinical significance Noggin proteins play a role in germ layer-specific derivation of specialized cells. The formation of neural tissues, the notochord, hair follicles, and eye structures arises from the ectoderm germ layer. Noggin activity in the mesoderm gives rise to the formation of cartilage and bone and to muscle growth, and in the endoderm noggin is involved in the development of the lungs. Early craniofacial development is heavily influenced by the presence of noggin, in accordance with its multiple tissue-specific requirements. Noggin influences the formation and growth of the palate, mandible and skull through its interaction with neural crest cells. Mice lacking the NOG gene show an outgrowth of the mandible and a cleft palate. Another craniofacial deformity due to the absence of noggin is conductive hearing loss caused by uncontrolled outgrowth and coiling of the cochlear duct. Recently, several heterozygous missense human NOG mutations have been identified in unrelated families with proximal symphalangism (SYM1) and multiple synostoses syndrome (SYNS1); both SYM1 and SYNS1 have multiple joint fusion as their principal feature, and map to the same region on chromosome 17 (17q22) as NOG. These mutations indicate functional haploinsufficiency, where the homozygous forms are embryonically lethal. All these NOG mutations have altered evolutionarily conserved amino acid residues. Mutations in this gene have been associated with middle ear abnormalities. Discovery Noggin was originally isolated from the aquatic frog genus Xenopus. The discovery was based on the protein's ability to restore a normal dorsal-ventral body axis in embryos that had been artificially ventralized by ultraviolet treatment. Noggin was discovered in the laboratory of Richard M. Harland and William C. Smith at the University of California, Berkeley because of this ability to induce secondary axis formation in frog embryos. 
References Further reading External links BMPedia - the Bone Morphogenetic Protein Wiki Noggin publications, gene expression data, sequences and interactants from Xenbase Developmental genes and proteins
Noggin (protein)
[ "Biology" ]
1,334
[ "Induced stem cells", "Developmental genes and proteins" ]
5,620,084
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%207
Bone morphogenetic protein 7 or BMP7 (also known as osteogenic protein-1 or OP-1) is a protein that in humans is encoded by the BMP7 gene. Function The protein encoded by this gene is a member of the TGF-β superfamily. Like other members of the bone morphogenetic protein family of proteins, it plays a key role in the transformation of mesenchymal cells into bone and cartilage. It is inhibited by noggin and a similar protein, chordin, which are expressed in the Spemann-Mangold Organizer. BMP7 may be involved in bone homeostasis. It is expressed in the brain, kidneys and bladder. BMP7 induces the phosphorylation of SMAD1 and SMAD5, which in turn induce transcription of numerous osteogenic genes. It has been demonstrated that BMP7 treatment is sufficient to induce all of the genetic markers of osteoblast differentiation in many cell types. Role in vertebrate development The role of BMP7 in mammalian kidney development is through induction of the mesenchymal-to-epithelial transition (MET) of the metanephrogenic blastema. The epithelial tissue emerging from this MET process eventually forms the tubules and glomeruli of the nephron. BMP-7 is also important in homeostasis of the adult kidney by inhibiting the reverse process, epithelial-mesenchymal transition (EMT). BMP-7 expression is attenuated when the nephron is placed under inflammatory or ischemic stress, leading to EMT, which can result in fibrosis of the kidney. This type of fibrosis often leads to renal failure, and is predictive of end stage renal disease. BMP7 has been discovered to be crucial in the determination of ventral-dorsal organization in zebrafish. BMP7 causes the expression of ventral phenotypes, while its complete inhibition creates a dorsal phenotype. Moreover, BMP7 is eventually partially "turned off" in embryonic development in order to create the dorsal parts of the organism. In many early developmental experiments using zebrafish, scientists used caBMPR (a constitutively active receptor) and tBMP (a truncated receptor) to determine the effect of BMP7 in embryogenesis. 
They found that the constitutively active receptor, which causes BMP signaling to be active everywhere, creates a ventralized phenotype, whereas the truncated receptor creates a dorsalized one. Therapeutic application Human recombinant BMP7 has surgical uses and was originally marketed under the brand name OP1 (discontinued by Olympus Biotech, who bought it from Stryker). It can be used to aid in the fusion of vertebral bodies to prevent neurologic trauma. It is also used in the treatment of tibial non-union, frequently in cases where a bone graft has failed. rhBMP-2 is much more widely used clinically because it helps grow bone better than rhBMP-7 and other BMPs. BMP7 also has potential for the treatment of chronic kidney disease. Kidney disease is characterized by derangement of the tubular architecture by both myofibroblast buildup and monocyte infiltration. Because endogenous BMP-7 is an inhibitor of the TGF-β signaling cascade that induces fibrosis, the use of exogenous recombinant BMP-7 (rhBMP-7) could be a viable treatment of chronic kidney disease. It is also thought that BMP-7 reverses fibrosis and EMT through reduction in monocyte infiltration into inflamed tissue. On a molecular level, BMP-7 represses inflammation by knocking down the expression of several pro-inflammatory cytokines produced by monocytes. Reducing this inflammatory stress, in turn, reduces the chance of fibrosis. Regardless of the mechanism of fibrosis or the origin of myofibroblasts, exogenous BMP-7 has been shown to reverse the EMT process and trigger MET. Eventually this restores the healthy epithelial cell population and normal kidney function in mice. This is pertinent in humans as well, because many diseases stemming from organ fibrosis occur via the EMT process. The epithelial-mesenchymal transition is also problematic in cancer metastasis, so the diminution of EMT with recombinant BMP-7 could have great implications in future cancer treatment options. 
BMP7 administration has been proposed as a possible treatment for human infertility due to poor response to FSH treatment. Promotion of brown fat It was discovered that mice injected with BMP7 increased their production of "good" brown fat cells, while keeping their levels of the normal white fat cells constant. A BMP7 therapy for obesity in humans may be developed as a result. BMP7 not only stimulates brown adipogenesis, it also stimulates the "browning" of brite or beige adipocytes, turning them from a white-like phenotype into a brown-like phenotype (with induction of UCP1 and the ability to perform non-shivering thermogenesis, which dissipates energy as heat). Other possible effects Several studies suggest that BMP7 may regulate or affect food intake. References Further reading External links BMP7 as Molecule of the Year 2011 Bone morphogenetic protein Developmental genes and proteins TGFβ domain
Bone morphogenetic protein 7
[ "Biology" ]
1,124
[ "Induced stem cells", "Developmental genes and proteins" ]
5,620,279
https://en.wikipedia.org/wiki/Hydrological%20transport%20model
A hydrological transport model is a mathematical model used to simulate the flow of rivers, streams, groundwater movement or drainage front displacement, and to calculate water quality parameters. These models generally came into use in the 1960s and 1970s, when demand for numerical forecasting of water quality and drainage was driven by environmental legislation and, at about the same time, widespread access to significant computer power became available. Much of the original model development took place in the United States and United Kingdom, but today these models are refined and used worldwide. There are dozens of different transport models that can be generally grouped by pollutants addressed, complexity of pollutant sources, whether the model is steady state or dynamic, and time period modeled. Another important designation is whether the model is distributed (i.e. capable of predicting multiple points within a river) or lumped. In a basic model, for example, only one pollutant might be addressed from a simple point discharge into the receiving waters. In the most complex of models, various line source inputs from surface runoff might be added to multiple point sources, treating a variety of chemicals plus sediment in a dynamic environment including vertical river stratification and interactions of pollutants with in-stream biota. In addition, watershed groundwater may also be included. The model is termed "physically based" if its parameters can be measured in the field. Often models have separate modules to address individual steps in the simulation process. The most common module is a subroutine for calculation of surface runoff, allowing variation in land use type, topography, soil type, vegetative cover, precipitation and land management practice (such as the application rate of a fertilizer). 
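A surface runoff subroutine of the kind described can be as simple as the classic SCS curve-number formula, in which a single curve number (CN) encodes land use, soil type, and cover. The sketch below is illustrative only; the formula choice and values are assumptions, not taken from any particular model in this article.

```python
def scs_runoff(P, CN):
    """SCS curve-number runoff depth (inches of runoff from P inches of
    rain); CN (30-100) summarizes land use, soil type, and cover."""
    S = 1000.0 / CN - 10.0           # potential maximum retention (inches)
    Ia = 0.2 * S                     # initial abstraction before runoff begins
    if P <= Ia:
        return 0.0                   # all rainfall absorbed, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)

# 3 inches of rain on a catchment with CN = 75:
q = scs_runoff(3.0, 75)              # roughly 0.96 inches of runoff
```

Raising CN (e.g. paving over a catchment) increases the runoff fraction, which is exactly the land-use sensitivity a runoff module is meant to capture.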
The concept of hydrological modeling can be extended to other environments such as the oceans, but most commonly (and in this article) the subject of a river watershed is implied. History In 1850, T. J. Mulvany was probably the first investigator to use mathematical modeling in a stream hydrology context, although there was no chemistry involved. By 1892 M.E. Imbeau had conceived an event model to relate runoff to peak rainfall, again still with no chemistry. Robert E. Horton's seminal work on surface runoff, along with his coupling of quantitative treatment of erosion, laid the groundwork for modern chemical transport hydrology. Types Physically based models Physically based models (sometimes known as deterministic, comprehensive or process-based models) try to represent the physical processes observed in the real world. Typically, such models contain representations of surface runoff, subsurface flow, evapotranspiration, and channel flow, but they can be far more complicated. "Large scale simulation experiments were begun by the U.S. Army Corps of Engineers in 1953 for reservoir management on the main stem of the Missouri River". This, and other early work that dealt with the River Nile and the Columbia River (F.S. Brown, Water Resource Development – Columbia River Basin, in Report of Meeting of Columbia Basin Inter-Agency Committee, Portland, OR, Dec. 1958), is discussed, in a wider context, in a book published by the Harvard Water Resources Seminar that contains the sentence just quoted. Another early model that integrated many submodels for basin chemical hydrology was the Stanford Watershed Model (SWM). The SWMM (Storm Water Management Model), the HSPF (Hydrological Simulation Program – FORTRAN) and other modern American derivatives are successors to this early work. In Europe a favoured comprehensive model is the Système Hydrologique Européen (SHE) (Vijay P. Singh, Computer Models of Watershed Hydrology, Water Resource Publications, pp. 563-594, 1995), which has been succeeded by MIKE SHE and SHETRAN. MIKE SHE is a watershed-scale physically based, spatially distributed model for water flow and sediment transport. Flow and transport processes are represented by either finite difference representations of partial differential equations or by derived empirical equations. The following principal submodels are involved:
Evapotranspiration: Penman-Monteith formalism
Erosion: detachment equations for raindrop and overland flow
Overland and channel flow: Saint-Venant equations of continuity and momentum
Overland flow sediment transport: 2D total sediment load conservation equation
Unsaturated flow: Richards equation
Saturated flow: Darcy's law and the mass conservation of 2D laminar flow
Channel sediment transport: 1D mass conservation equation
This model can analyze effects of land use and climate changes upon in-stream water quality, with consideration of groundwater interactions. Worldwide a number of basin models have been developed, among them RORB (Australia), Xinanjiang (China), Tank model (Japan), ARNO (Italy), TOPMODEL (Europe), UBC (Canada), HBV (Scandinavia) and MOHID Land (Portugal). However, not all of these models have a chemistry component. Generally speaking, SWM, SHE and TOPMODEL have the most comprehensive stream chemistry treatment and have evolved to accommodate the latest data sources, including remote sensing and geographic information system data. In the United States, the Corps of Engineers' Engineer Research and Development Center, in conjunction with researchers at a number of universities, has developed the Gridded Surface/Subsurface Hydrologic Analysis (GSSHA) model. GSSHA is widely used in the U.S. for research and analysis by U.S. Army Corps of Engineers districts and larger consulting companies to compute flow, water levels, distributed erosion, and sediment delivery in complex engineering designs. 
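As a concrete illustration of the simplest of the governing equations used in such models, here is a minimal sketch of Darcy's law for saturated flow. The parameter values are illustrative assumptions, not drawn from any particular model.

```python
def darcy_flux(K, h1, h2, length):
    """Darcy's law: specific discharge q = -K * (h2 - h1) / L  [m/s],
    where K is hydraulic conductivity and h1, h2 are hydraulic heads."""
    return -K * (h2 - h1) / length

# Hypothetical case: head drops from 10 m to 8 m over 100 m of sand
# with K = 1e-4 m/s.
q = darcy_flux(1e-4, 10.0, 8.0, 100.0)   # 2e-06 m/s, flowing toward lower head
```

In a distributed model this relation is applied cell by cell, combined with mass conservation, to march the groundwater head field forward in time.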
A distributed nutrient and contaminant fate and transport component is undergoing testing. GSSHA input/output processing and interfacing with GIS is facilitated by the Watershed Modeling System (WMS). Another model used in the United States and worldwide is Vflo, a physics-based distributed hydrologic model developed by Vieux & Associates, Inc. Vflo employs radar rainfall and GIS data to compute spatially distributed overland flow and channel flow. Evapotranspiration, inundation, infiltration, and snowmelt modeling capabilities are included. Applications include civil infrastructure operations and maintenance, stormwater prediction and emergency management, soil moisture monitoring, land use planning, water quality monitoring, and others. Stochastic models These data-based models are black box systems, using mathematical and statistical concepts to link a certain input (for instance rainfall) to the model output (for instance runoff). Commonly used techniques are regression, transfer functions, neural networks and system identification. These models are known as stochastic hydrology models. Data-based models have been used within hydrology to simulate the rainfall-runoff relationship, represent the impacts of antecedent moisture, and perform real-time control on systems. Model components Surface runoff modelling A key component of a hydrological transport model is the surface runoff element, which allows assessment of sediment, fertilizer, pesticide and other chemical contaminants. Building on the work of Horton, the unit hydrograph theory was developed by Dooge in 1959. It required the National Environmental Policy Act and kindred national legislation to provide the impetus to integrate water chemistry into hydrology model protocols. In the early 1970s the U.S. Environmental Protection Agency (EPA) began sponsoring a series of water quality models in response to the Clean Water Act. 
An example of these efforts was developed at the Southeast Water Laboratory, one of the first attempts to calibrate a surface runoff model with field data for a variety of chemical contaminants. The attention given to surface runoff contaminant models has not matched the emphasis on pure hydrology models, in spite of their role in the generation of stream loading contaminant data. In the United States the EPA has had difficulty interpreting diverse proprietary contaminant models and has had to develop its own models more often than conventional water resource agencies, which, being focused on flood forecasting, have coalesced around a common core of basin models. Example applications Liden applied the HBV model to estimate the riverine transport of three different substances, nitrogen, phosphorus and suspended sediment, in four different countries: Sweden, Estonia, Bolivia and Zimbabwe. The relation between internal hydrological model variables and nutrient transport was assessed. A model for nitrogen sources was developed and analysed in comparison with a statistical method. A model for suspended sediment transport in tropical and semi-arid regions was developed and tested. It was shown that riverine total nitrogen could be well simulated in the Nordic climate, and riverine suspended sediment load could be estimated fairly well in tropical and semi-arid climates. The HBV model for material transport generally estimated material transport loads well. The main conclusion of the study was that the HBV model can be used to predict material transport on the scale of the drainage basin during stationary conditions, but cannot be easily generalised to areas not specifically calibrated. In a different work, Castanedo et al. applied an evolutionary algorithm to automated watershed model calibration. 
The United States EPA developed the DSSAM Model to analyze water quality impacts from land use and wastewater management decisions in the Truckee River basin, an area which includes the cities of Reno and Sparks, Nevada, as well as the Lake Tahoe basin. The model satisfactorily predicted nutrient, sediment and dissolved oxygen parameters in the river. It is based on a pollutant loading metric called "Total Maximum Daily Load" (TMDL). The success of this model contributed to the EPA's commitment to the use of the underlying TMDL protocol in EPA's national policy for management of many river systems in the United States. The DSSAM Model is constructed to allow dynamic decay of most pollutants; for example, total nitrogen and phosphorus are allowed to be consumed by benthic algae in each time step, and the algal communities are given a separate population dynamic in each river reach (e.g. based upon river temperature). Regarding stormwater runoff in Washoe County, the specific elements within a new xeriscape ordinance were analyzed for efficacy using the model. For the varied agricultural uses in the watershed, the model was run to understand the principal sources of impact, and management practices were developed to reduce in-river pollution. Use of the model has specifically been conducted to analyze survival of two endangered species found in the Truckee River and Pyramid Lake: the Cui-ui sucker fish (endangered 1967) and the Lahontan cutthroat trout (threatened 1970). 
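The TMDL metric mentioned above is expressed as a daily pollutant load. A minimal sketch of the underlying unit arithmetic (the flow and concentration figures are hypothetical, not from the Truckee River study):

```python
def daily_load_kg(flow_m3_s, conc_mg_L):
    """Pollutant load in kg/day from river flow (m^3/s) and
    concentration (mg/L) -- the kind of quantity a TMDL allocates."""
    litres_per_day = flow_m3_s * 1000.0 * 86400.0   # m^3/s -> L/day
    return litres_per_day * conc_mg_L / 1e6         # mg -> kg

# Hypothetical reach: 10 m^3/s carrying 2 mg/L of total nitrogen.
load = daily_load_kg(10.0, 2.0)   # 1728.0 kg/day
```

A TMDL study then apportions an allowable total load of this kind among point sources, nonpoint sources, and a margin of safety.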
See also Aquifer Differential equation HBV model Hydrometry Infiltration Runoff model (reservoir) Storm Water Management Model United States Army Corps of Engineers WAFLEX model SWAT model References External links HBV model applied to climate change in the Rhine River basin TOPMODEL characteristics and parameters Xinanjiang model and its application in northern China Evolutionary Computation Technique Applied to HSPF Model Calibration of a Spanish Watershed Computer-aided engineering software Environmental chemistry Environmental soil science Soil science Water pollution Hydrology models
Hydrological transport model
[ "Chemistry", "Biology", "Environmental_science" ]
2,231
[ "Hydrology", "Biological models", "Environmental chemistry", "Water pollution", "Hydrology models", "nan", "Environmental soil science", "Environmental modelling" ]
5,620,282
https://en.wikipedia.org/wiki/Gatra%20%28music%29
A gatra ("embryo" or "semantic unit") is a unit of melody in Indonesian Javanese gamelan music, analogous to a measure in Western music. It is often considered the smallest unit of a gamelan composition. A gatra consists of a sequence of four beats (keteg), which are filled with notes (or rests, pin) from the balungan. In general, the second and fourth beats of a gatra are stronger than the first and third, and the final note of a gatra, called the seleh, dominates the gatra. In other words, the gatras are like Western measures in reverse, with the strongest beat at the end. Important colotomic instruments, most notably the gong ageng, are played on that final beat. If the final beat in a gatra is a rest, the seleh is the last note played. It is not uncommon in gamelan repertoire to find entire gatras of rests. Note that the actual length of time it takes to play a gatra varies from less than a second to nearly a minute, depending on the tempo (laya) and the irama. In kepatihan notation, gatras are generally grouped together in the notation of the balungan, with space added between them. There is, however, no pause between one gatra and the next. The different patterns of notes and rests in a gatra are explained at balungan. Rahayu Supanggah considers the hierarchical nature of the four beats of a gatra to be reflected on a larger scale in gamelan compositions; in particular by the four nongan in a gongan of the merong and inggah of a gendhing. This is similar to the padang-ulihan ("question-answer") structure key to gamelan composition. 
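The grouping and seleh rules described above can be sketched in code. This is a hedged illustration: the sample balungan line is invented for the example, not a real composition.

```python
# Split a balungan line into four-beat gatra and find each gatra's seleh
# (the last sounded note; '.' marks a rest, pin).
def gatras(balungan):
    """Group a flat list of beats into gatra of four."""
    return [balungan[i:i + 4] for i in range(0, len(balungan), 4)]

def seleh(gatra):
    """Last non-rest note of the gatra; None if the gatra is all rests."""
    for note in reversed(gatra):
        if note != ".":
            return note
    return None

line = ["2", "1", "2", "6", "3", "5", "3", "."]
print([seleh(g) for g in gatras(line)])   # ['6', '3']
```

Note how the second gatra ends in a rest, so its seleh falls back to the last note actually played, as the text describes.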
Names of beats in a gatra At least two sets of terms are used to describe the four notes of a gatra. Ki Sindusawarno's system: (1) ding kecil (2) dong kecil (3) ding besar (4) dong besar. Martopangrawit's system: (1) maju ('forward') (2) mundur ('back') (3) maju (4) seleh. The first words in the names in Ki Sindusawarno's system are similar to the names of the hierarchy of pitches used in Balinese music: ding represents the secondary note of a pathet, and dong is similar to the Western idea of tonic. Kecil and besar mean "small" and "large," respectively, so this clearly articulates the hierarchical system explained in the introduction, with the largest and most significant beat at the end of the gatra and a somewhat smaller one halfway through. The names in Martopangrawit's system refer to kosokan, the direction of bowing of the rebab. The system also reflects the idea that the second beat is stronger than the first and third, since drawing a bow tends to produce a stronger sound than pushing it. Experimental gatras Although traditionally gatras have always contained four notes, a few recent experimental pieces have used gatras of other lengths, often under the influence of Western music. An example by a Javanese composer is Pak Marto's Parisuka. See also Gamelan Pathet Gendhing structures Irama Gamelan notation Music of Indonesia Music of Java References Gamelan theory Musical form Rhythm and meter
Gatra (music)
[ "Physics" ]
733
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
5,620,519
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%202
Bone morphogenetic protein 2 or BMP-2 belongs to the TGF-β superfamily of proteins. Function BMP-2, like other bone morphogenetic proteins, plays an important role in the development of bone and cartilage. It is involved in the hedgehog pathway, the TGF beta signaling pathway, and in cytokine-cytokine receptor interaction. It is also involved in cardiac cell differentiation and epithelial-to-mesenchymal transition. Like many other proteins from the BMP family, BMP-2 has been demonstrated to potently induce osteoblast differentiation in a variety of cell types. BMP-2 may be involved in white adipogenesis and may have metabolic effects. Interactions Bone morphogenetic protein 2 has been shown to interact with BMPR1A. Clinical use and complications Bone morphogenetic protein 2 has been shown to stimulate the production of bone. The recombinant human protein (rhBMP-2) is currently available for orthopaedic use in the United States. Implantation of BMP-2 is performed using a variety of biomaterial carriers ("metals, ceramics, polymers, and composites") and delivery systems ("hydrogel, microsphere, nanoparticles, and fibers"). While used primarily in orthopedic procedures such as spinal fusion, BMP-2 has also found its way into the field of dentistry. The use of dual tapered threaded fusion cages and recombinant human bone morphogenetic protein-2 on an absorbable collagen sponge obtained and maintained intervertebral spinal fusion, improved clinical outcomes, and reduced pain after anterior lumbar interbody arthrodesis in patients with degenerative lumbar disc disease. As an adjuvant to allograft bone or as a replacement for harvested autograft, bone morphogenetic proteins (BMPs) appear to improve fusion rates after spinal arthrodesis in both animal models and humans, while reducing the donor-site morbidity previously associated with such procedures. 
A study published in 2011 noted "reports of frequent and occasionally catastrophic complications associated with use of [BMP-2] in spinal fusion surgeries", with a level of risk far in excess of estimates reported in earlier studies. An additional review by Agrawal and Sinha of BMP-2 and its common delivery systems in early 2016 showed how "problems like ectopic growth, lesser protein delivery, [and] inactivation of the protein" reveal a further need "to modify the available carrier systems as well as explore other biomaterials with desired properties." References Further reading External links Bone morphogenetic protein Developmental genes and proteins Implants (medicine) TGFβ domain
Bone morphogenetic protein 2
[ "Biology" ]
576
[ "Induced stem cells", "Developmental genes and proteins" ]
5,620,659
https://en.wikipedia.org/wiki/Caustic%20pencil
A caustic pencil (or silver nitrate stick) is a device for applying topical medication containing silver nitrate and potassium nitrate, used to chemically cauterize skin, providing hemostasis or permanently destroying unwanted tissue such as a wart, skin tag, aphthous ulcer, or over-production of granulation tissue. Caustic pencils are not used as a treatment for minor skin cuts, and are not to be confused with styptic pencils. The silver and potassium nitrates in a caustic pencil are in a dried, solid form at the tip of a wooden or plastic stick. When the material is applied to a wound or lesion, the tissue moisture or blood dissolves the dried nitrate salts, which then chemically burn the tissue; the pencil requires moisture for activation. Silver nitrate sticks are often used for minor hemostasis where patients are not under general anesthesia, and where electrocautery would be painful and inconvenient. One common use of silver nitrate sticks is in emergency medicine, to control epistaxis (nosebleed). The stick is rolled on the affected mucous membrane or visible blood vessel in the nares (nostril), where the chemical cauterization stops the minor bleeding. If the bleeding is too copious, the chemical cautery may not be effective, as the flowing blood can wash away the chemical before it can react with the tissue. The chemical can also be accidentally spread to undesirable locations, where it can cause skin staining and tissue burns. This is especially important because the stick is often used in the nose, where accidental aerosolization can occur, splattering the clinician or other parts of the patient and causing unintentional burns. Accidental application to unintended tissue is treated with copious water irrigation; saline solution will inactivate the chemical reaction. References Medical equipment
Caustic pencil
[ "Biology" ]
384
[ "Medical equipment", "Medical technology" ]
5,620,705
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%204
Bone morphogenetic protein 4 is a protein that in humans is encoded by the BMP4 gene. BMP4 is found on chromosome 14q22-q23. BMP4 is a member of the bone morphogenetic protein family, which is part of the transforming growth factor-beta superfamily. The superfamily includes large families of growth and differentiation factors. BMP4 is highly conserved evolutionarily. BMP4 is found in early embryonic development in the ventral marginal zone and in the eye, heart, blood and otic vesicle. Discovery Bone morphogenetic proteins were originally identified by the ability of demineralized bone extract to induce endochondral osteogenesis in vivo at an extraskeletal site. Function BMP4 is a polypeptide belonging to the TGF-β superfamily of proteins. It, like other bone morphogenetic proteins, is involved in bone and cartilage development, specifically tooth and limb development and fracture repair. This particular family member plays an important role in the onset of endochondral bone formation in humans. It has been shown to be involved in muscle development, bone mineralization, and ureteric bud development. BMP4 stimulates differentiation of overlying ectodermal tissue. Bone morphogenetic proteins are known to stimulate bone formation in adult animals; this is thought to occur through the induction of osteoblastic commitment and differentiation of stem cells such as mesenchymal stem cells. BMPs are known to play a large role in embryonic development. In the embryo, BMP4 helps establish dorsal-ventral axis formation in the Xenopus frog by inducing ventral mesoderm. In mice, targeted inactivation of BMP4 prevents mesoderm from forming. BMP4 also establishes dorsal-ventral patterning of the developing neural tube, with the help of BMP7, by inducing dorsal characters. BMP4 also limits the extent of neural differentiation in Xenopus embryos by inducing epidermis formation rather than neural tissue. BMPs can aid in inducing lateral characteristics in somites. 
Somites are required for the development of cartilage, bone, dermis on the dorsal side of the body, thoracic muscles and muscles within limbs. BMP4 helps in the patterning of the developing head through inducing apoptosis of the neural crest cells; this occurs in the hindbrain. In the adult, BMP4 is important for the neurogenesis (i.e., the generation of new neurons) that occurs throughout life in two neurogenic niches of the brain, the dentate gyrus of the hippocampus and the subventricular zone (SVZ) adjacent to the lateral ventricles. In these niches new neurons are continuously generated from stem cells. It has been shown that in the dentate gyrus BMP4 maintains neural stem cells in quiescence, thus preventing the depletion of the pool of stem cells. In the SVZ, BMP-mediated signaling via Smad4 is required to initiate neurogenesis from adult neural stem cells and suppress the alternative fate of oligodendrogliogenesis. Moreover, it has been shown that in the SVZ BMP4 has a prodifferentiative effect, since it rescues a defect of terminal differentiation in SVZ neurospheres where the gene Tis21/BTG2 - required for terminal differentiation - has been deleted. Tis21 is a positive regulator of BMP4 expression in the SVZ. BMP4 is important for bone and cartilage metabolism. BMP4 signaling has been found in the formation of early mesoderm and germ cells. Limb bud regulation and development of the lungs, liver, teeth and facial mesenchyme cells are other important functions attributed to BMP4 signaling. Digit formation is influenced by BMP4, along with other BMP signals. The interdigital mesenchyme exhibits BMP4, which prevents apoptosis of the region. Tooth formation relies on BMP4 expression, which induces Msx1 and Msx2; these transcription factors direct the forming tooth to become an incisor. BMP4 also plays important roles in adipose tissue: it is essential for white adipogenesis, and promotes adipocyte differentiation. 
Additionally, it is important for brown fat, where it induces UCP1, related to non-shivering thermogenesis. BMP4 secretion helps cause differentiation of the ureteric bud into the ureter. BMP4 antagonizes organizer tissue and is expressed in early development in ectoderm and mesoderm tissue. Upon gastrulation, the transcription of BMP4 is limited to the ventrolateral marginal zone due to inhibition from the dorsalizing side of the developing embryo. BMP4 aids in ventralizing mesoderm, which guides dorsal-ventral axis formation. In Xenopus, BMP4 has been found to aid in the formation of blood and blood islands. BMP4, initially expressed in the epidermis, is found in the roof plate during formation of the neural tube. A gradient of BMP signaling is found in opposition to a Sonic hedgehog (Shh) gradient; this expression of BMP4 patterns the dorsal neurons. BMP4, in conjunction with FGF2, promotes differentiation of stem cells to mesodermal lineages. After differentiation, BMP4- and FGF2-treated cells generally produce higher amounts of osteogenic and chondrogenic differentiation than untreated stem cells. Also in conjunction with FGF2, BMP4 can produce progenitor thyroid cells from pluripotent stem cells in mice and humans. BMP4 has been shown to induce the expression of the Msx gene family, which is believed to be part of cartilage formation from somitic mesoderm. BMP4, a paracrine growth factor, has been found in rat ovaries. BMP4, in conjunction with BMP7, regulates early ovarian follicle development and the primordial-to-primary follicle transition. In addition, inhibition of BMP4 with antibodies has been shown to decrease overall ovary size. These results indicate that BMP4 may aid in survival and prevention of apoptosis in oocytes. In birds, BMP4 has been shown to influence the beak size of Darwin's finches. Low amounts of BMP4 are correlated with low beak depths and widths; conversely, high BMP4 expression produces large beak depths and widths. 
The genetic regulation of BMP4 provides the foundation for natural selection in bird beaks. Protein structure Human BMP4 is initially synthesized as a 408-residue preproprotein, which is cleaved post-translationally to yield an active carboxy-terminal peptide of 116 residues. BMP4 has seven residues which are conserved and glycosylated. The monomers are held together by disulphide bridges formed from three pairs of cysteine amino acids; this conformation is called a "cystine knot". BMP4 can form homodimers, or heterodimers with similar BMPs such as BMP7. This ability to form heterodimers can give greater osteoinductive activity than BMP4 alone. Not much is yet known about how BMPs interact with the extracellular matrix, and little is known about the pathways which degrade BMP4. Inhibition Inhibition of the BMP4 signal (by chordin, noggin, or follistatin) causes the ectoderm to differentiate into the neural plate. If these cells also receive signals from FGF, they will differentiate into the spinal cord; in the absence of FGF the cells become brain tissue. While overexpression of BMP4 can lead to ventralization, inhibition with a dominant negative may result in complete dorsalization of the embryo or the formation of two axes. Mice in which BMP4 was completely inactivated usually died during gastrulation; it is thought that inactivation of human BMP4 would likely have the same effect. However, mutations which do not entirely inactivate BMP4 in humans can also have subtle phenotypic effects, and have been implicated in tooth agenesis as well as osteoporosis. Isoforms Alternative splicing in the 5' untranslated region of this gene has been described, and three variants are known, all encoding an identical protein. 
Molecular mechanisms BMP4, as a member of the transforming growth factor-β (TGF-β) family, binds to 2 different types of serine-threonine kinase receptors known as BMPR1 and BMPR2. Signal transduction via these receptors occurs through the Smad and MAP kinase pathways to effect transcription of its target genes. In order for signal transduction to occur, both receptors must be functional. BMP is able to bind to BMPR2 without BMPR1; however, the affinity significantly increases in the presence of both receptors. BMPR1 is transphosphorylated by BMPR2, which induces downstream signalling within the cell, affecting transcription. Smad signaling pathway TGF-β family receptors most commonly use the Smad signaling pathway to transduce signals. Type 2 receptors are responsible for activating type 1 receptors, whose function involves the phosphorylation of R-Smads (Smad-1, Smad-5, Smad-8). Upon phosphorylation, the R-Smads form a complex in conjunction with a common-partner Smad (co-Smad), which migrates to the nucleus. This signaling pathway is regulated by the small-molecule inhibitor known as dorsomorphin, which prevents the downstream effects of R-Smads. Map kinase (MAPK) signaling pathways Mitogen-activated protein kinases (MAPKs) undergo phosphorylation via a signaling cascade in which MAPKKK phosphorylates and activates MAPKK, and MAPKK phosphorylates and activates MAPK, which then induces an intracellular response. Activation of MAPKKK is mainly through the interaction of GTPases or another group of protein kinases. TGF-β receptors induce the MAPK signaling pathways of ERK, JNK and p38. BMP4 is also known to activate the ERK, JNK and p38 MAPK signalling pathways which, while found to act independently of the Smad signaling pathways, are mostly active in conjunction with Smad. The activation of the ERK and JNK pathways acts to phosphorylate Smad and therefore regulate its activation. 
In addition to this, MAPK pathways may be able to directly affect Smad-interacting transcription factors via a JNK or p38 substrate that induces convergence of the two signaling pathways. This convergence is noted to consist mainly of cooperative behavior; however, there is evidence to suggest that they may at times counteract each other. Furthermore, the balance that exists between the direct activation of these signaling pathways has a significant effect on TGF-β induced cellular responses. Clinical significance Increase in expression of BMP4 has been associated with a variety of bone diseases, including the heritable disorder Fibrodysplasia Ossificans Progressiva. There is strong evidence from sequencing studies of candidate genes involved in clefting that mutations in the bone morphogenetic protein 4 (BMP4) gene may be involved in the pathogenesis of cleft lip and palate. Eye development Eyes are essential for organisms, especially terrestrial vertebrates, to observe prey and obstacles; this is critical for their survival. The formation of the eyes starts with the optic vesicles and lens derived from the neuroectoderm. Bone morphogenetic proteins are known to stimulate eye lens formation. During early eye development in mice, the formation of the optic vesicle is essential, and BMP4 is expressed strongly in the optic vesicle and weakly in the surrounding mesenchyme and surface ectoderm. This concentration gradient of BMP4 in the optic vesicle is critical for lens induction. Furuta and Hogan found that mouse embryos carrying a targeted homozygous null mutation of BMP4 do not develop a lens. They also performed in situ hybridization of the BMP4 gene (shown in green) and of the Sox2 gene (in red), which they thought was involved in lens formation as well. After performing these two in situ hybridizations in mouse embryos, they found both green and red signals in the optic vesicle. 
This indicated that BMP4 and Sox2 are expressed in the right place at the right time in the optic vesicle, suggesting that they have essential functions in lens induction. In a follow-up experiment, injecting BMP4 into the BMP4 homozygous mutant embryos rescued lens formation (12), indicating that BMP4 is required for lens formation. However, researchers also found that some of the mutated mice could not be rescued; these mutants lacked Msx2, which is activated by BMP4. The predicted mechanism is that BMP4 activates Msx2 in the optic vesicle, and the combined action of BMP4 and Msx2 activates Sox2, which is essential for lens differentiation. Injection of Noggin into lens fiber cells in mice significantly reduces BMP4 protein in the cells, indicating that Noggin is sufficient to inhibit the production of BMP4. Moreover, another inhibitor protein, Alk6, was found to block BMP4 from activating Msx2, which stopped lens differentiation. However, much remains unknown about the mechanism of BMP4 inhibition and the downstream regulation of Sox2. Researchers aim to work out a more complete pathway of whole-eye development, in the hope that some genetically caused eye diseases may one day be cured. Hair loss Hair loss, also known as alopecia, is caused by abnormal changes in hair follicle morphology and hair follicle cycling. The hair follicle cycle consists of growth (anagen), regression (catagen), and rest (telogen). In mammals, reciprocal epithelial and mesenchymal interactions control the development of hair. Genes such as BMP4 and BMP2 are both active within the precursors of the hair shaft; BMP4 specifically is found in the dermal papilla. BMP4 is part of the signaling network which controls the development of hair. 
It is needed for the induction of biochemical pathways and signaling that regulate the differentiation of the hair shaft in the anagen hair follicle. This is done through controlling the expression of the transcription factors which regulate hair differentiation. It is still unclear, however, where BMPs act within the genetic network. BMP4 signaling may control the expression of terminal differentiation molecules such as keratins. Other regulators have been shown to control hair follicle development as well: HOXC13 and FOXN1 are considered important regulators because loss-of-function experiments show impaired hair shaft differentiation without interfering with hair follicle formation. When BMP4 is expressed ectopically in the hair follicle outer root sheath (ORS) of transgenic mice, proliferation of the cell matrix is inhibited. BMP4 also activates hair keratin gene expression, indicating that BMP4 is important in the differentiation of the hair shaft. Noggin, a known inhibitor of BMP4, is found within the matrix cells of the hair bulb. Other important factors to consider in the development of hair are the expression of Shh (sonic hedgehog), BMP7, BMP2, WNT, and β-catenin, as these are required in early-stage morphogenesis. Other genes which can inhibit or interact with BMP4 are noggin, follistatin, and gremlin, which are all expressed in the developing hair follicles. In mice lacking noggin, there are fewer hair follicles than in a normal mouse, and the development of the follicle is inhibited. In chick embryos, ectopically expressed noggin produces enlarged follicles, while BMP4 signaling represses placode fate in nearby cells. Noggin has also been shown in in vivo experiments to induce hair growth in postnatal skin. BMP4 is an important component of the biological pathways that regulate hair shaft differentiation within the anagen hair follicle. 
The strongest levels of expressed BMP4 are found within the medulla, hair shaft cells, distal hair matrix, and potential precursors of the cuticle. The two main methods by which BMP4 inhibits hair growth are restriction of growth factor expression in the hair matrix and antagonism between growth and differentiation signaling. Pathways that regulate hair follicle formation and hair growth are key in developing therapeutic methods for hair loss conditions. Such methods include the development of new follicles, changing the shape or characteristics of existing follicles, and altering hair growth in existing hair follicles. Furthermore, BMP4 and the pathway through which it works may provide therapeutic targets for the prevention of hair loss. References Further reading External links BMPedia - the Bone Morphogenetic Protein Wiki Bone morphogenetic protein Developmental genes and proteins TGFβ domain Articles containing video clips
Bone morphogenetic protein 4
[ "Biology" ]
3,688
[ "Induced stem cells", "Developmental genes and proteins" ]
5,620,714
https://en.wikipedia.org/wiki/Mockup
In manufacturing and design, a mockup, or mock-up, is a scale or full-size model of a design or device, used for teaching, demonstration, design evaluation, promotion, and other purposes. A mockup may be a prototype if it provides at least part of the functionality of a system and enables testing of a design. Mock-ups are used by designers mainly to acquire feedback from users. Mock-ups address the idea captured in a popular engineering one-liner: "You can fix it now on the drafting board with an eraser or you can fix it later on the construction site with a sledge hammer". Mockups are used as design tools virtually everywhere a new product is designed. Mockups are used in the automotive industry as part of the product development process, where dimensions, overall impression, and shapes are tested in wind tunnel experiments. They can also be used to test consumer reaction. Military acquisition Mockups are part of the military acquisition process. Mockups are often used to test human factors and aerodynamics, for example. In this context, mockups include wire-frame models. They can also be used for public display and demonstration purposes prior to the development of a prototype, as with the case of the Lockheed Martin F-35 Lightning II mock-up aircraft. Consumer goods Mockups are used in the consumer goods industry as part of the product development process, where dimensions, human factors, overall impression, and commercial art are tested in marketing research. Mockups help to visualise how all design decisions play together; they are convincing and closely resemble the final product; they can be revised easily at this stage rather than much later in production; and they help in visualising package design projects in 3D, speeding up approvals. Furniture and cabinetry Mockups are commonly required by designers, architects, and end users for custom furniture and cabinetry. 
The intention is often to produce a full-sized replica, using inexpensive materials in order to verify a design. Mockups are often used to determine the proportions of the piece, relating to various dimensions of the piece itself, or to fit the piece into a specific space or room. The ability to see how the design of the piece relates to the rest of the space is also an important factor in determining size and design. When designing a functional piece of furniture, such as a desk or table, mockups can be used to test whether they suit typical human shapes and sizes. Designs that fail to consider these issues may not be practical to use. Mockups can also be used to test color, finish, and design details which cannot be visualized from the initial drawings and sketches. Mockups used for this purpose can be on a reduced scale. The cost of making mockups is often more than repaid by the savings made by avoiding going into production with a design which needs improvement. Software engineering The most common use of mockups in software development is to create user interfaces that show the end user what the software will look like without having to build the software or the underlying functionality. Software UI mockups can range from very simple hand-drawn screen layouts, through realistic bitmaps, to semi-functional user interfaces developed in a software development tool. Mockups are often used to create unit tests - there they are usually called mock objects. The main reason to create such mockups is to be able to test one part of a software system (a unit) without having to use dependent modules. The function of these dependencies is then "faked" using mock objects. This is especially important if the functions being simulated are difficult to obtain (for example because they involve complex computation) or if the result is non-deterministic, such as the readout of a sensor. 
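The mock-object technique described above can be sketched in Python with the standard library's unittest.mock module. The `average_reading` function and `Sensor`-style dependency here are hypothetical examples invented for illustration, not part of any particular framework:

```python
from unittest.mock import Mock

def average_reading(sensor, samples=3):
    """Unit under test: averages several readings from a sensor dependency."""
    return sum(sensor.read() for _ in range(samples)) / samples

# In a unit test the real sensor (hardware-bound and non-deterministic)
# is replaced by a mock object that returns a fixed sequence of readings.
fake_sensor = Mock()
fake_sensor.read.side_effect = [10.0, 20.0, 30.0]

assert average_reading(fake_sensor) == 20.0   # deterministic, repeatable result
assert fake_sensor.read.call_count == 3       # verify how the unit used its dependency
```

The unit is thus tested in isolation: the test controls exactly what the dependency returns and can additionally verify how the dependency was called.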
A common style of software design is service-oriented architecture (SOA), where many components communicate via protocols such as HTTP. Service virtualization and API mocks and simulators are examples of implementations of mockups, or so-called over-the-wire test doubles, in software systems that model dependent components or microservices in SOA environments. Mockup software can also be used for micro-level evaluation, for example to check a single function, and to derive results from the tests to enhance the product's power and usability as a whole. Systems engineering Mockups, wireframes and prototypes are not so cleanly distinguished in software and systems engineering, where mockups are a way of designing user interfaces on paper or in computer images. A software mockup will thus look like the real thing, but will not do useful work beyond what the user sees. A software prototype, on the other hand, will look and work just like the real thing. In many cases it is best to design or prototype the user interface before source code is written or hardware is built, to avoid having to go back and make expensive changes. Early layouts of a World Wide Web site or pages are often called mockups. A large selection of proprietary or open-source software tools is available for this purpose. Architecture At the beginning of a project's construction, architects will often direct contractors to provide material mockups for review. These allow the design team to review material and color selections, and make modifications before product orders are placed. Architectural mockups can also be used for performance testing (such as water penetration at window installations, for example) and help inform the subcontractors how details are to be installed. See also Digital mockup Human-in-the-Loop Military dummy Operations research Pilot experiment References Product development Simulation Design
Mockup
[ "Engineering" ]
1,125
[ "Design" ]
5,620,745
https://en.wikipedia.org/wiki/Motion%20camouflage
Motion camouflage is camouflage which provides a degree of concealment for a moving object, given that motion makes objects easy to detect however well their coloration matches their background or breaks up their outlines. The principal form of motion camouflage, and the type generally meant by the term, involves an attacker's mimicking the optic flow of the background as seen by its target. This enables the attacker to approach the target while appearing to remain stationary from the target's perspective, unlike in classical pursuit (where the attacker moves straight towards the target at all times, and often appears to the target to move sideways). The attacker chooses its flight path so as to remain on the line between the target and some landmark point. The target therefore does not see the attacker move from the landmark point. The only visible evidence that the attacker is moving is its looming, the change in size as the attacker approaches. Camouflage is sometimes facilitated by motion, as in the leafy sea dragon and some stick insects. These animals complement their passive camouflage by swaying like plants in the wind or ocean currents, delaying their recognition by predators. First discovered in hoverflies in 1995, motion camouflage by minimising optic flow has been demonstrated in another insect order, dragonflies, as well as in two groups of vertebrates, falcons and echolocating bats. Since bats hunting at night cannot be using the strategy for camouflage, it has been named, describing its mechanism, as constant absolute target direction. This is an efficient homing strategy, and it has been suggested that anti-aircraft missiles could benefit from similar techniques. Camouflage of approach motion Many animals are highly sensitive to motion; for example, frogs readily detect small moving dark spots but ignore stationary ones. Therefore, motion signals can be used to defeat camouflage. 
Moving objects with disruptive camouflage patterns remain harder to identify than uncamouflaged objects, especially if other similar objects are nearby, even though they are detected, so motion does not completely 'break' camouflage. All the same, the conspicuousness of motion raises the question of whether and how motion itself could be camouflaged. Several mechanisms are possible. Stealthy movements One strategy is to minimise actual motion, as when predators such as tigers stalk prey by moving very slowly and stealthily. This strategy effectively avoids the need to camouflage motion. Minimising motion signal When movement is required, one strategy is to minimise the motion signal, for example by avoiding waving limbs about and by choosing patterns that do not cause flicker when seen by the prey from straight ahead. Cuttlefish may be doing this with their active camouflage by choosing to form stripes at right angles to their front-back axis, minimising motion signals that would be given by occluding and displaying the pattern as they swim. Disrupting perception of motion Disrupting the attacker's perception of the target's motion was one of the intended purposes of dazzle camouflage as used on ships in the First World War, though its effectiveness is disputed. This type of dazzle does not appear to be used by animals. Mimicking optic flow of background Some animals mimic the optic flow of the background, so that the attacker does not appear to move when seen by the target. This is the main focus of work on motion camouflage, and is often treated as synonymous with it. Pursuit strategies An attacker can mimic the background's optic flow by choosing its flight path so as to remain on the line between the target and either some real landmark point, or a point at infinite distance (giving different pursuit algorithms). It therefore does not move from the landmark point as seen by the target, though it inevitably looms larger as it approaches. 
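The landmark-point pursuit rule just described can be sketched as a minimal 2-D simulation. The start positions, the target's straight-line path, and the `closing` fraction are illustrative assumptions; the point of the sketch is only that the attacker stays exactly on the landmark-target sight-line at every step while still closing on the target:

```python
import math

def motion_camouflage_step(attacker, target, landmark, closing=0.1):
    """Reposition the attacker so it remains on the landmark-target line
    while closing a fixed fraction of the remaining distance."""
    lx, ly = landmark
    tx, ty = target
    ax, ay = attacker
    d_full = math.hypot(tx - lx, ty - ly)       # landmark-to-target distance
    d_att = math.hypot(ax - lx, ay - ly)        # attacker's progress along the line
    f = min(1.0, d_att / d_full + closing)      # creep along the line toward the target
    return (lx + f * (tx - lx), ly + f * (ty - ly))

landmark = (0.0, 0.0)
attacker = landmark                              # attacker starts at the landmark point
target = (10.0, 5.0)

for step in range(30):
    target = (target[0] + 0.3, target[1] + 0.2)  # target flies a straight path
    attacker = motion_camouflage_step(attacker, target, landmark)
    # From the target's viewpoint the attacker never leaves the landmark's
    # bearing: the cross product of the two sight-lines stays ~zero.
    cross = ((attacker[0] - target[0]) * (landmark[1] - target[1])
             - (attacker[1] - target[1]) * (landmark[0] - target[0]))
    assert abs(cross) < 1e-9

# The attacker eventually reaches the target without ever having moved
# off the landmark-target sight-line.
assert math.hypot(attacker[0] - target[0], attacker[1] - target[1]) < 1e-9
```

In classical pursuit, by contrast, the attacker would head straight at the target and sweep visibly across the background; here the only visual cue left to the target is the attacker's looming.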
This is not the same as moving straight towards the target (classical pursuit): that results in visible sideways motion with a readily detectable difference in optic flow from the background. The strategy works whether the background is plain or textured. This motion camouflage strategy was discovered and modelled as algorithms in 1995 by M. V. Srinivasan and M. Davey while they were studying mating behaviour in hoverflies. The male hoverfly appeared to be using the tracking technique to approach prospective mates. Motion camouflage has been observed in high-speed territorial battles between dragonflies, where males of the Australian emperor dragonfly, Hemianax papuensis were seen to choose their flight paths to appear stationary to their rivals in 6 of 15 encounters. They made use of both real-point and infinity-point strategies. The strategy appears to work equally well in insects and in vertebrates. Simulations show that motion camouflage results in a more efficient pursuit path than classical pursuit (i.e. the motion camouflage path is shorter), whether the target flies in a straight line or chooses a chaotic path. Further, where classical pursuit requires the attacker to fly faster than the target, the motion camouflaged attacker can sometimes capture the target despite flying more slowly than it. In sailing, it has long been known that if the bearing from the target to the pursuer remains constant, known as constant bearing, decreasing range (CBDR), equivalent to taking a fixed reference point at infinite distance, the two vessels are on a collision course, both travelling in straight lines. In a simulation, this is readily observed as the lines between the two remain parallel at all times. Echolocating bats follow an infinity-point path when hunting insects in the dark. 
This is not for camouflage but for the efficiency of the resulting path, so the strategy is generally called constant absolute target direction (CATD); it is equivalent to CBDR but allowing for the target to manoeuvre erratically. A 2014 study of falcons of different species (gyrfalcon, saker falcon, and peregrine falcon) used video cameras mounted on their heads or backs to track their approaches to prey. Comparison of the observed paths with simulations of different pursuit strategies showed that these predatory birds used a motion camouflage path consistent with CATD. The missile guidance strategy of pure proportional navigation guidance (PPNG) closely resembles the CATD strategy used by bats. The biologists Andrew Anderson and Peter McOwan have suggested that anti-aircraft missiles could exploit motion camouflage to reduce their chances of being detected. They tested their ideas on people playing a computerised war game. The steering laws to achieve motion camouflage have been analysed mathematically. The resulting paths turn out to be extremely efficient, often better than classical pursuit. Motion camouflage pursuit may therefore be adopted both by predators and missile engineers (as "parallel navigation", for an infinity-point algorithm) for its performance advantages. Camouflage by motion Swaying: motion crypsis or masquerade Swaying behaviour is practised by highly cryptic animals such as the leafy sea dragon, the stick insect Extatosoma tiaratum, and mantises. These animals resemble vegetation with their coloration, strikingly disruptive body outlines with leaflike appendages, and the ability to sway effectively like the plants that they mimic. E. tiaratum actively sways back and forth or side to side when disturbed or when there is a gust of wind, with a frequency distribution like foliage rustling in the wind. 
This behaviour may represent motion crypsis, preventing detection by predators, or motion masquerade, promoting misclassification (as something other than prey), or a combination of the two, and has accordingly also been described as a form of motion camouflage. References External links Biological defense mechanisms Camouflage mechanisms Antipredator adaptations
Motion camouflage
[ "Biology" ]
1,511
[ "Biological interactions", "Behavior", "Antipredator adaptations", "Biological defense mechanisms" ]
5,621,096
https://en.wikipedia.org/wiki/Fuel%20line
A fuel line is a hose or pipe used to transfer fuel from one point in a vehicle to another. The United States Environmental Protection Agency defines a fuel line as "all hoses or tubing designed to contain liquid fuel or fuel vapor. This includes all hoses or tubing for the filler neck, for connections between dual fuel tanks, and for connecting a carbon canister to the fuel tank. This does not include hoses or tubing for routing crankcase vapors to the engine's intake or any other hoses or tubing that are open to the atmosphere." Materials Rubber Most vehicles have rubber fuel hoses connecting the fuel pipes on the chassis to the fuel pump or carburetor on the engine. Rubber hoses are flexible and can be cut to length as required, but they have a tendency to perish over time and can rub through if not properly secured. Plastic More modern vehicles may be fitted with fuel lines made of plastic, typically nylon. Plastic fuel lines do not perish and are lighter than metal tubing, but they melt at lower temperatures and cannot be repaired as easily. Steel Many FF or FR vehicles with fuel tanks at the rear are fitted with rigid steel fuel pipes that run the length of the chassis from the tank to the engine bay. Steel pipes are cheap and strong, but can corrode, causing fuel leaks. Copper Older vehicles may be fitted with copper fuel pipes. These are easy to fit and repair, but copper is heavy and expensive when compared with the other options. Fittings Traditionally, fuel lines had flared or compression fittings on the rigid pipe sections, and hose clamps where rubber hoses attached to metal components. In modern cars with plastic fuel lines, quick-release fittings are becoming more common – this allows the fuel system components to simply clip together. Priming The primer bulb can be found on the fuel line between the gasoline tank and the carburetor. When one primes the carburetor, fuel is pushed from the carb bowl to the barrel using the rubber primer bulb. 
See also List of auto parts References External links Fuel Transfer System Hoses Vehicle parts
Fuel line
[ "Technology" ]
435
[ "Vehicle parts", "Components" ]
5,621,124
https://en.wikipedia.org/wiki/Syringe%20filter
A syringe filter (sometimes called a wheel filter if it has a wheel-like shape) is a single-use filter cartridge. It is attached to the end of a syringe for use. Syringe filters may have Luer lock fittings, though not universally so. The use of a needle is optional; where desired it may be fitted to the end of the syringe filter. A syringe filter generally consists of a plastic housing with a membrane that serves as a filter. The fluid to be purified may be cleaned only by being pushed through the filter from the syringe; it cannot be drawn into the syringe through the filter, as filtration works in one direction only. Forms In scientific applications, the most common sizes available are 0.2 or 0.22 μm and 0.45 μm pores. These sizes are sufficient for HPLC use. The smallest known sterile syringe microfilters have pore sizes of 0.02 μm. Membrane diameters of 10 mm, 13 mm, and 25 mm are common as well. Some syringe filters for small volumes may not resemble a wheel at all. The syringe filter body may be made of such materials as polypropylene and nylon. The filter membrane may be of PTFE, nylon, or other treated products for specific purposes. Most manufacturers publish compatibility wallcharts advising users of compatibility between their products and organic solvents or corrosive liquids (e.g. trifluoroacetic acid). Application Syringe filters may be used to remove particles from a sample, prior to analysis by HPLC or other techniques involving expensive instruments. Particles easily damage an HPLC due to the narrow bore and high pressures within. Syringe filters are quite suitable for Schlenk line work, which makes extensive use of needles and syringes (see cannula transfer). Being relatively affordable, they may be used for general purpose filtration, especially of smaller volumes where losses from liquid soaked up by filter paper would be significant. Syringe filters are also available for the filtration of gases, and for the removal of bacteria from a sample. 
Disk filters are frequently used for the onsite manufacture of parenteral drugs and sterile eye drops, in order to remove microbiological contamination (sterile filtration). Harm reduction in recreational drug use According to one study, filters with 0.1 μm pore size remove bacteria more effectively than those with 0.2 μm pores. See also Microfiltration Adulteration Drug injection Notes References Laboratory equipment Medical equipment
Syringe filter
[ "Biology" ]
526
[ "Medical equipment", "Medical technology" ]
5,621,528
https://en.wikipedia.org/wiki/Heijunka%20box
A heijunka box is a visual scheduling tool used in heijunka, a method originally created by Toyota for achieving a smoother production flow. While heijunka is the smoothing of production, the heijunka box is the name of a specific tool used in achieving the aims of heijunka. The heijunka box is generally a wall schedule which is divided into a grid of boxes or a set of 'pigeon-holes'/rectangular receptacles. Each column of boxes represents a specific period of time; lines are drawn down the schedule/grid to visually break it into columns of individual shifts, days, or weeks. Coloured cards representing individual jobs (referred to as kanban cards) are placed on the heijunka box to provide a visual representation of the upcoming production runs. The heijunka box makes it easy to see what type of jobs are queued for production and when they are scheduled. Workers on the process remove the kanban cards for the current period from the box in order to know what to do. These cards are passed to another section once the related job has been processed. Implementation The heijunka box allows easy and visual control of a smoothed production schedule. A typical heijunka box has horizontal rows for each product. It has vertical columns for identical time intervals of production. In the illustration on the right, the time interval is thirty minutes. Production control kanban are placed in the pigeon-holes provided by the box in proportion to the number of items to be built of a given product type during a time interval. In this illustration, each time period builds an A and two Bs along with a mix of Cs, Ds and Es. What is clear from the box, from the simple repeating patterns of kanbans in each row, is that production of each of these products is smooth. This ensures that production capacity is kept under a constant pressure, thereby eliminating many issues. 
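The grid-of-slots structure described above can be sketched in code. This is purely an illustrative model (the slot layout and the `is_levelled` check are assumptions made for the sketch, not part of any heijunka standard):

```python
from collections import Counter

# Six 30-minute time slots; each slot holds the kanban cards (product
# codes) queued for that interval. Every slot builds one A, two Bs,
# and one of C/D/E in rotation, mirroring the illustration in the text.
slots = [
    ["A", "B", "B", "C"],
    ["A", "B", "B", "D"],
    ["A", "B", "B", "E"],
    ["A", "B", "B", "C"],
    ["A", "B", "B", "D"],
    ["A", "B", "B", "E"],
]

def is_levelled(slots, product):
    """A product's production is levelled (smooth) if every time slot
    builds the same number of its kanbans."""
    counts = [Counter(slot)[product] for slot in slots]
    return len(set(counts)) == 1

print(is_levelled(slots, "A"))  # True: one A kanban every interval
print(is_levelled(slots, "B"))  # True: two B kanbans every interval
print(is_levelled(slots, "C"))  # False: C appears only in some intervals
```

Reading the kanban cards for the current column tells workers exactly what to build in that interval, which is the visual-control property the box provides.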
See also Lean production Just In Time References Bibliography Japanese business terms Lean manufacturing Toyota Production System
Heijunka box
[ "Engineering" ]
418
[ "Lean manufacturing" ]
5,621,704
https://en.wikipedia.org/wiki/Ludowici%20Roof%20Tile
Ludowici Roof Tile, LLC., based in New Lexington, Ohio, is an American manufacturer of clay roof tiles, floor tiles, and wall cladding. The company was established in 1888 with the formation of the Celadon Terra Cotta Company in Alfred, New York. It has created tile for many prominent buildings throughout the United States. History Ludowici Roofing Tile Company Carl Ludowici was a machinist in Ensheim, Germany and in 1857 he purchased a local roof tile factory and upgraded it with machines of his own design, founding the Carl Ludowici Ziegelwerke. The firm moved to a factory in Ludwigshafen in 1861 and slowly grew, largely due to the innovative nature of Ludowici's steam-powered tile press. After Carl's death in 1881, his sons Wilhelm and Franz took over the company, with Franz taking over business management and Wilhelm leading design and development. The company largely relocated to Jockgrim, where it grew into one of the major German tile manufacturers of its era. In 1893 the Ludowicis licensed their patents and designs to the newly formed Ludowici Roofing Tile Company of Chicago. This company exhibited tiles at the World's Columbian Exposition that year and with its factory in Chicago Heights grew to become a leading producer of roof tiles by the turn of the century. Ludowici built a factory in the unincorporated community of Liberty City, Georgia in 1902. As a tribute to the company, the city was incorporated as Ludowici, Georgia in 1905. Celadon Terra Cotta Company In 1888 a sculpting professor at Alfred University in Alfred, New York, found that the local supply of clay was well-suited for ornamental sculpting work, and found other local investors to form the Celadon Terra Cotta Company, named for the green hue the clay took on when salt-fired. After visiting a friend in the area, George Herman Babcock became interested in the possibilities of terra cotta and bought stock, eventually becoming president of the company. 
As president he filed patents for multiple profiles of tile, such as the Conosera tile and unique combination tiles with different designs but a standard base, allowing for multiple styles of interlocking tile to be used on the same roof. Babcock died in 1893, but the company continued to grow as it shifted focus towards roofing tile, and was renamed the Celadon Roofing Tile Company in 1900. Shortly after this the New York State School of Clay-Working and Ceramics was established at Alfred University after lobbying by Celadon executives and others. The presence of this school allowed the company to collaborate with leading ceramicists of the time such as Charles Fergus Binns, who did extensive consulting work with Celadon. The Celadon Company purchased the Imperial Clay Company in 1905 and gained its factory in New Lexington, Ohio. Ludowici-Celadon Company In 1906 the companies merged to form the Ludowici-Celadon Company. A plant in Coffeyville, Kansas was purchased in 1908, and in 1909 the factory in Alfred, New York burned to the ground. The company never rebuilt in the village, but the original Celadon Company office survived and remains there to this day. The factory in Ludowici, Georgia largely produced tiles for regional sales and had seen a decline in demand since the completion of tiles for the Panama Canal Zone. In October, 1913 the factory closed, and the next month the Ludowici-Celadon factory in Chicago Heights burned down, leaving the company with only its factories in New Lexington and Coffeyville. The company grew through the first quarter of the century and was helped by the popularity of traditional terra cotta in architecture of the 1920s. To tap into this interest Ludowici-Celadon released The Tuileries Brochures in 1929, which contained articles written by prominent authors and architects such as Aymar Embury II, Frederick Ackerman, Jacques Carlu, and Hilaire Belloc. 
During World War II the company suffered from a decline in domestic construction and supplemented its limited production of roof tile by temporarily opening pottery divisions in New Lexington and Coffeyville. Among other things these produced licensed cookie jars for Walt Disney. In 1956 the factory in Coffeyville, Kansas was closed due to declining demand for terra cotta tile, and in 1976 Ludowici-Celadon was purchased by CSC Inc. of Chicago. The company saw growth in the 1980s with a growing interest in historic restoration, and in 1986 sponsored a competition and exhibit with the National Building Museum on architectural terra cotta ornamentation. CSC sold Ludowici-Celadon to CertainTeed, a division of Saint-Gobain, in 1989. Ludowici Roof Tile CertainTeed shortened Ludowici-Celadon's name to Ludowici Roof Tile in 1994. Around 2002 Ludowici's management was transferred from CertainTeed to Terreal, another Saint-Gobain subsidiary. When Terreal spun off from Saint-Gobain in 2003, Ludowici went with it. Ludowici introduced wall cladding tile and in 2007 it opened its first showroom in a renovated former shipping building at its New Lexington factory. A larger showroom was opened in Dallas, Texas in 2019 to act as a showcase for architects and designers in that area. The Ludowici Roof Tile Company Historic District was added to the National Register of Historic Places in 2021. In 2024 Terreal and its subsidiaries, including Ludowici, were sold to wienerberger of Austria. Significant projects Ludowici has created tiles for prominent buildings through the 19th, 20th, and 21st centuries, including the White House, the Pennsylvania State Capitol, the Plaza Hotel, the New York Life Building, the New York State Capitol, Wrigley Field and many buildings at Walt Disney World. 
See also Ludowici Ziegelwerke (German) Roof tiles References Roof tiles Companies based in Ohio Companies established in 1888 Manufacturing companies based in Ohio Building materials companies of the United States Terracotta Ceramics manufacturers of the United States Manufacturer of architectural terracotta
Ludowici Roof Tile
[ "Engineering" ]
1,283
[ "Manufacturer of architectural terracotta", "Architecture" ]
5,621,896
https://en.wikipedia.org/wiki/Greyout
A greyout is a transient loss of vision characterized by a perceived dimming of light and color, sometimes accompanied by a loss of peripheral vision. It is a precursor to fainting or a blackout and is caused by hypoxia (low brain oxygen level), often due to a loss of blood pressure. Greyouts have a variety of possible causes: Shock, such as hypovolemia, even in mild form such as when drawing blood. Standing up suddenly (see orthostatic hypotension), especially if sick, hungover, or experiencing low blood pressure. Fatigue Hyperventilation, paradoxically: self-induced hypocapnia, such as in the fainting game or in shallow water blackout. Overexertion Panic attack Recovery is usually rapid. A greyout can be readily reversed by lying down as the cardiovascular system does not need to work against gravity for blood to reach the brain. A greyout may be experienced by aircraft pilots pulling high positive g-forces as when pulling up into a loop or a tight turn, which forces blood to the lower extremities of the body and lowers blood pressure in the brain. This is the reverse of a redout, or a reddening of the vision, which is the result of negative g-forces caused by performing an outside loop, that is by pushing the nose of the aircraft down. Redouts are potentially dangerous and can cause retinal damage and hemorrhagic stroke. Pilots of high performance aircraft can increase their resistance to greyouts by using a g-suit, which controls the pooling of blood in the lower limbs, but there is no suit yet capable of controlling a redout. In both cases, symptoms may be remedied immediately by easing pressure on the flight controls. Continued or heavy g-force will rapidly progress to g-LOC (g-force induced Loss of Consciousness). Untrained individuals can withstand approximately 4g, while fighter pilots with g-suits are trained to perform 9g maneuvers. 
Surprisingly, even during a heavy greyout, where the visual system is severely impaired, pilots can still hear, feel, and speak. Complete greyout and loss of consciousness are separate events. Another common occurrence of greyouts is in roller coaster riders. Many roller coasters put riders through positive g-forces, particularly in vertical loops and helices. Roller coasters are unlikely to have high enough negative g-forces to induce redouts, as most low-g elements are designed to simulate weightlessness. See also Blackout (disambiguation) Brownout (disambiguation) Whiteout (disambiguation) Notes Visual system Human eye Acceleration
Greyout
[ "Physics", "Mathematics" ]
552
[ "Wikipedia categories named after physical quantities", "Quantity", "Physical quantities", "Acceleration" ]
5,622,243
https://en.wikipedia.org/wiki/Boilerplate%20code
In computer programming, boilerplate code, or simply boilerplate, refers to sections of code that are repeated in multiple places with little to no variation. When using languages that are considered verbose, the programmer must write a lot of boilerplate code to accomplish only minor functionality. The need for boilerplate can be reduced through high-level mechanisms such as metaprogramming (which has the computer automatically write the needed boilerplate code or insert it at compile time), convention over configuration (which provides good default values, reducing the need to specify program details in every project) and model-driven engineering (which uses models and model-to-code generators, eliminating the need for manual boilerplate code). It is also possible to move boilerplate code to an abstract class so that it can be inherited by any number of concrete classes. Another option would be to move it into a subroutine so that it can be called instead of being duplicated. Origin The term arose from the newspaper business. Columns and other pieces that were distributed by print syndicates were sent to subscribing newspapers in the form of prepared printing plates. Because of their resemblance to the metal plates used in the making of boilers, they became known as "boiler plates", and their resulting text—"boilerplate text". As the stories that were distributed by boiler plates were usually "fillers" rather than "serious" news, the term became synonymous with unoriginal, repeated text. A related term is bookkeeping code, referring to code that is not part of the business logic but is interleaved with it in order to keep data structures updated or handle secondary aspects of the program. Preamble One form of boilerplate consists of declarations which, while not part of the program logic or the language's essential syntax, are added to the start of a source file as a matter of custom. 
The following Perl example demonstrates boilerplate: #!/usr/bin/perl use warnings; use strict; The first line is a shebang, which identifies the file as a Perl script that can be executed directly on the command line on Unix/Linux systems. The other two are pragmas turning on warnings and strict mode, which are mandated by fashionable Perl programming style. This next example is a C/C++ programming language boilerplate, an #include guard. #ifndef MYINTERFACE_H #define MYINTERFACE_H ... #endif This checks, and sets up, a global flag to tell the compiler whether the file myinterface.h has already been included. As many interdepending files may be involved in the compilation of a module, this avoids processing the same header multiple times (which would lead to errors due to multiple definitions with the same name). In Java and similar platforms In Java programs, DTO classes are often provided with methods for getting and setting instance variables. The definitions of these methods can frequently be regarded as boilerplate. Although the code will vary from one class to another, it is sufficiently stereotypical in structure that it would be better generated automatically than written by hand. For example, in the following Java class representing a pet, almost all the code is boilerplate except for the declarations of Pet, name, and owner: public class Pet { private String name; private Person owner; public Pet(String name, Person owner) { this.name = name; this.owner = owner; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Person getOwner() { return owner; } public void setOwner(Person owner) { this.owner = owner; } } Most of the boilerplate in this example exists to fulfill requirements of JavaBeans. If the variables name and owner were declared as public, the accessor and mutator methods would not be needed. In Java 14, record classes were added to address this issue. 
To reduce the amount of boilerplate, many frameworks have been developed, e.g. Lombok for Java. The same code as above is auto-generated by Lombok using Java annotations, which is a form of metaprogramming: @AllArgsConstructor @Getter @Setter public class Pet { private String name; private Person owner; } Scala In some other programming languages it may be possible to achieve the same thing with less boilerplate, when the language has built-in support for such common constructs. For example, the equivalent of the above Java code can be expressed in Scala using just one line of code: case class Pet(var name: String, var owner: Person) C# Or in C# using automatic properties with compiler generated backing fields: public class Pet { public string Name { get; set; } public Person Owner { get; set; } } Starting with C# 9.0, records can be used to generate such classes, with their properties, automatically: public record Pet(string Name, Person Owner); Method boilerplate In addition to declarations, methods in OOP languages also contribute to the amount of boilerplate. A 2015 study on popular Java projects shows that 60% of methods can be uniquely identified by the occurrence of 4.6% of their tokens, making the remaining 95.4% boilerplate irrelevant to logic. The researchers believe this result would translate to subroutines in procedural languages in general. HTML In HTML, the following boilerplate is used as a basic empty template and is present in most web pages: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"/> <title>Test</title> </head> <body> </body> </html> The WHATWG HTML Living Standard defines that the <html>, <head>, and <body> tags may be safely omitted under most circumstances. The <meta charset> tag is technically redundant when the page comes directly from a web server configured to send the character encoding in an HTTP header, though it becomes useful when the HTML response is saved in an .html file, cache, or web archive. 
Google's HTML/CSS style guide recommends that all optional tags be omitted, resulting in much less boilerplate. The World Wide Web Consortium states that the <title> element must not be empty: <!DOCTYPE html> <title>Test</title> Python In Python, the following boilerplate can be used to control whether code is executed when the file is run directly or imported as a module. if __name__ == '__main__': # Anything placed here will never be executed in a module context. pass if __name__ != '__main__': # Anything placed here will only be executed in a module context. pass See also References Source code Computer programming folklore Software engineering folklore Articles with example Java code
Boilerplate code
[ "Engineering" ]
1,428
[ "Software engineering", "Software engineering folklore" ]
5,622,569
https://en.wikipedia.org/wiki/Hansen%20solubility%20parameter
Hansen solubility parameters were developed by Charles M. Hansen in his Ph.D thesis in 1967 as a way of predicting if one material will dissolve in another and form a solution. They are based on the idea that like dissolves like where one molecule is defined as being 'like' another if it bonds to itself in a similar way. Specifically, each molecule is given three Hansen parameters, each generally measured in MPa0.5: δd, the energy from dispersion forces between molecules; δp, the energy from dipolar intermolecular forces between molecules; and δh, the energy from hydrogen bonds between molecules. These three parameters can be treated as co-ordinates for a point in three dimensions also known as the Hansen space. The nearer two molecules are in this three-dimensional space, the more likely they are to dissolve into each other. To determine if the parameters of two molecules (usually a solvent and a polymer) are within range, a value called the interaction radius (R0) is given to the substance being dissolved. This value determines the radius of the sphere in Hansen space and its center is the three Hansen parameters. To calculate the distance (Ra) between Hansen parameters in Hansen space the following formula is used: (Ra)² = 4(δd1 − δd2)² + (δp1 − δp2)² + (δh1 − δh2)². Combining this with the interaction radius gives the relative energy difference (RED) of the system: RED = Ra/R0. If RED < 1 the molecules are alike and will dissolve; if RED = 1 the system will partially dissolve; if RED > 1 the system will not dissolve. Uses Historically Hansen solubility parameters (HSP) have been used in industries such as paints and coatings where understanding and controlling solvent–polymer interactions was vital. 
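The Ra and RED calculations described above can be sketched as follows; the solvent and polymer values used here are hypothetical, chosen only to illustrate the arithmetic, not measured HSP data:

```python
import math

def hansen_distance(hsp1, hsp2):
    """Distance Ra between two points (dD, dP, dH) in Hansen space.

    Uses the conventional factor of 4 on the dispersion term:
    Ra^2 = 4*(dD1 - dD2)^2 + (dP1 - dP2)^2 + (dH1 - dH2)^2
    """
    dd = hsp1[0] - hsp2[0]
    dp = hsp1[1] - hsp2[1]
    dh = hsp1[2] - hsp2[2]
    return math.sqrt(4 * dd**2 + dp**2 + dh**2)

def red(solvent, solute, r0):
    """Relative energy difference RED = Ra / R0.

    RED < 1 predicts dissolution, RED = 1 partial, RED > 1 none.
    """
    return hansen_distance(solvent, solute) / r0

# Hypothetical polymer with HSP (17.0, 8.0, 6.0) MPa^0.5 and
# interaction radius R0 = 8.0, tested against a hypothetical
# solvent with HSP (18.0, 9.0, 5.0).
polymer = (17.0, 8.0, 6.0)
solvent = (18.0, 9.0, 5.0)
print(red(solvent, polymer, 8.0))  # well below 1 -> predicted to dissolve
```

The same `red` function, applied over a table of known solvents, is essentially how solvent-screening tools rank candidates against a solute's Hansen sphere.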
Over the years their use has been extended widely to applications such as: Environmental stress cracking of polymers Controlled dispersion of pigments, such as carbon black Understanding of solubility/dispersion properties of carbon nanotubes, Buckyballs, and quantum dots Adhesion to polymers Permeation of solvents and chemicals through plastics to understand issues such as glove safety, food packaging barrier properties and skin permeation Diffusion of solvents into polymers via understanding of surface concentration based on RED number Cytotoxicity via interaction with DNA Artificial noses (where response depends on polymer solubility of the test odor) Safer, cheaper, and faster solvent blends where an undesirable solvent can be rationally replaced by a mix of more desirable solvents whose combined HSP equals the HSP of the original solvent. Theoretical context HSP have been criticized for lacking the formal theoretical derivation of Hildebrand solubility parameters. All practical correlations of phase equilibrium involve certain assumptions that may or may not apply to a given system. In particular, all solubility parameter-based theories have a fundamental limitation that they apply only to associated solutions (i.e., they can only predict positive deviations from Raoult's law): they cannot account for negative deviations from Raoult's law that result from effects such as solvation (often important in water-soluble polymers) or the formation of electron donor-acceptor complexes. Like any simple predictive theory, HSP are best used for screening with data used to validate the predictions. Hansen parameters have been used to estimate Flory-Huggins Chi parameters, often with reasonable accuracy. The factor of 4 in front of the dispersion term in the calculation of Ra has been the subject of debate. There is some theoretical basis for the factor of four (see Ch 2 of Ref 1). However, there are clearly systems (e.g. 
Bottino et al., "Solubility parameters of poly(vinylidene fluoride)" J. Polym. Sci. Part B: Polymer Physics 26(4), 785-79, 1988) where the regions of solubility are far more eccentric than predicted by the standard Hansen theory. HSP effects can be over-ridden by size effects (small molecules such as methanol can give "anomalous results"). It has been shown that it is possible to calculate HSP via molecular dynamics techniques, though currently the polar and hydrogen bonding parameters cannot reliably be partitioned in a manner that is compatible with Hansen's values. Limitations The following are limitations according to Hansen: The parameters will vary with temperature The parameters are an approximation. Bonding between molecules is more subtle than the three parameters suggest. Molecular shape is relevant, as are other types of bonding such as induced dipole, metallic and electrostatic interactions. The size of the molecules also plays a significant role in whether two molecules actually dissolve in a given period. The parameters are hard to measure. 2008 work by Abbott and Hansen has helped address some of the above issues. Temperature variations can be calculated, the role of molar volume ("kinetics versus thermodynamics") is clarified, new chromatographic ways to measure HSP are available, large datasets for chemicals and polymers are available, 'Sphere' software for determining HSP values of polymers, inks, quantum dots etc. is available (or easy to implement in one's own software) and the new Stefanis-Panayiotou method for estimating HSP from Unifac groups is available in the literature and also automated in software. All these new capabilities are described in the e-book, software, datasets described in the external links but can be implemented independently of any commercial package. Sometimes Hildebrand solubility parameters are used for similar purposes. 
Hildebrand parameters are not suitable for use outside their original area, which was non-polar, non-hydrogen-bonding solvents. The Hildebrand parameter for such non-polar solvents is usually close to the Hansen value. A typical example showing why Hildebrand parameters can be unhelpful is that two solvents, butanol and nitroethane, which have the same Hildebrand parameter, are each incapable of dissolving typical epoxy polymers. Yet a 50:50 mix gives good solvency for epoxies. This is easily explained by the Hansen parameters of the two solvents and the fact that the Hansen parameters of the 50:50 mix are close to those of epoxies. See also Solvent (has a chart of Hansen solubility parameters for various solvents) Hildebrand solubility parameter MOSCED References External links Interactive web app for finding solvents with matching solubility parameters Link Physical chemistry Polymer chemistry 1967 in science
Hansen solubility parameter
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,305
[ "Applied and interdisciplinary physics", "Materials science", "Polymer chemistry", "nan", "Physical chemistry" ]
5,622,907
https://en.wikipedia.org/wiki/Ghulam%20Ahmed%20Perwez
Ghulam Ahmad Parwez (1903–1985) was a well-known teacher of the Quran in India and Pakistan. He posed a challenge to the established Sunni doctrine by interpreting Quranic themes with a logical approach. The educated populace held Parwez in high esteem, despite numerous criticisms from conservative Islamic scholars throughout his career. The work 'Islam: A Challenge to Religion' is widely acknowledged as one of the most significant works in the history of Pakistan, according to Nadeem F. Paracha. Early and personal life Parwez was born on 9 July 1903 in Batala, Punjab, in British India. He migrated to Pakistan in 1947. He delved into the holy book of Islam and other religious texts. In 1934, he obtained a master's degree from the Punjab University. His ideas, based on modern science, helped people better understand Islam. He was introduced to Muhammad Ali Jinnah by Muhammad Iqbal. He was appointed to edit the magazine Tolu-e-Islam, which was established to counteract the propaganda emanating from certain religious circles that favoured the Congress. He died aged 81. Career Parwez was appointed to the Central Secretariat of the Government of India in 1927, and became an important figure in the Home Department. When Pakistan became independent, he stayed in the same job in the government and retired early as an Assistant Secretary (Class I gazetted officer) in 1955. He spent all his time doing his job. Parwez argued that his insights from the Quran were in stark contrast to both capitalist and Marxist political ideologies. Before the creation of Pakistan, Parwez was recruited by Muhammad Ali Jinnah to help popularize the need for a separate homeland for Muslims in South Asia. He emphasized the importance of the government's structure in adhering to Islamic ideals. The principles of Islam, as enumerated in the Quran, require that individuals reside in a nation that upholds God's commands, rather than their own. 
Ideas and contributions Parwez believed in individual freedom, a right he held to take precedence over almost any form of authority. Parwez, in line with this, strongly opposed slavery, arguing that it lacked any legal justification according to the Quran. Further, he said that Islam challenged the truth, validity, and very idea of religion. Parwez assessed the supporting evidence for the suppositions contained in the Quran passages that are often associated with awe-inspiring happenings, celestial beings, and jinns, weighing it all objectively, without attempting to invoke the supernatural. Parwez also pushed for the adoption of Islamic socialism, a political philosophy that seeks to reorganize society in line with Islamic ideals. He argued that socialism is the most efficient means to uphold the principles of property, justice, and the distribution of wealth, as outlined in the Qur'an. In addition, he said that the Prophet wanted to stop capitalists and the corrupt bureaucracies of Byzantium and Persia from exploiting Quraish merchants, although the Quraish merchants had little contact with traders from the two then-dominant powers. He advocated the implementation of scientific and agricultural reforms to improve economic development. Parwez has been called a "quranist" by Nadeem F. Paracha, as Parwez rejected most hadiths. In essence, the rejection of one well-known hadith means the rejection of the Sunnah. Further, Paracha claimed that Parwez approved praying Namaz in Urdu. Even while Parwez was alive, his opponents spread these claims. In 1960 more than 600 Islamic scholars issued a fatwa declaring Ghulam Ahmad Perwez a kafir due to his views on the Quran and hadith. Ghulam Ahmed Perwez was also criticized for what opponents regarded as misinterpretation of the Quran. 
Translated works Exposition of the Holy Quran Human Fundamental Rights Dictionary Of the Holy Quran Vol 1-4 What Is Islam The Quranic System of Sustenance Islam: A Challenge To Religion The Life In The Hereafter Islamic Way Of Living Letter To Tahira Quranic Laws Jihad Is Not Terrorism Glossary of Quranic Words Human and Satan Constitution Of Islamic State The books written by Syed Abdul Wadud, a close friend of Parwez, are based on his ideas. Conspiracies Against the Quran Phenomena Of Nature Quranocracy The Heavens the Earth and the Quran Gateway to the Quran Publications Matalibul Furqaan (7 vols.) Lughat-ul-Quran (4 vols.) Mafhoom-ul-Quran (3 vols.) Tabweeb-ul-Quran (3 vols.) Nizam-e-Rabubiyyat Islam A Challenge to Religion (English version) Insaan Ne Kiya Socha (What Man Thought, A History of Human Thought) Islam kia he (second part of Insan ne kia socha) Tasawwaf Ki Haqiqat (The reality of Islamic Mysticism Saleem Ke Naam (3 vols.) Tahira Ke Naam Qurani Faislay (5 vols.) Meraj-e-Insaaniat (about Muhammad) Barke toor (about Mosa) Joe noor (about Ibrahim) Shola e mastoor (about Esa) man(o) yazdan (Me and God, about Allah in light of the Quran) Shahkar-e-Risalat (a biography of Caliph Omar) Iblis o Adam (Satan and Man) Jahane farda Mazahebe Alam ke Asmani Kitaben Asbab e zwal e ummat See also Tolu-e-Islam Liberal movements within Islam Ideas of Ghulam Ahmed Perwez References External links Books of G.A. Parwez in English (PDF format) 1903 births 1985 deaths Leaders of the Pakistan Movement People from Gurdaspur Translators of the Quran into English People from Lahore Translators of the Quran into Urdu 20th-century translators Muslim activists 20th-century Muslim scholars of Islam Pakistani Muslim activists Muslim reformers Theistic evolutionists
Ghulam Ahmed Perwez
[ "Biology" ]
1,239
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
10,994,244
https://en.wikipedia.org/wiki/Harry%20R.%20Lewis
Essentially all of Lewis's career has been at Harvard, where he has been honored for his "particularly distinguished contributions to undergraduate teaching"; his students have included future entrepreneurs Bill Gates and Mark Zuckerberg, and numerous future faculty members at Harvard and other schools. The website "Six Degrees to Harry Lewis", created by Zuckerberg while at Harvard, was a precursor to Facebook. Education and career Lewis was born in Boston and grew up in Wellesley, Massachusetts. His parents were physicians: his father a hospital chief of anesthesiology and his mother the head of the Dever State School for disabled children. His father was a World War II veteran and the son of a German Lutheran father and a Russian Jewish mother. After graduating summa cum laude at the end of the eleventh grade at Boston's Roxbury Latin School, he entered Harvard College, where he was for a time a third-string lacrosse goalie. Lewis has said that he discovered he "wasn't a real mathematician" once he "got out of the amateur leagues of high school mathematics", but was "tremendously excited" by the computer-science research at Harvard. As a senior he lectured a graduate class using a computer-graphics program, SHAPESHIFTER, which he had developed for displaying images of the complex plane on a cathode ray tube. SHAPESHIFTER automatically recognized formulas and commands hand-entered via a stylus on a RAND tablet, and could be "trained" to recognize the handwriting of individual users. There being no degree program in computer science per se at Harvard at the time, in 1968 Lewis received his BA (summa, Quincy House) in applied mathematics and was elected to Phi Beta Kappa. After serving for two years in the United States Public Health Service Commissioned Corps as a commissioned officer in the role of mathematician and computer scientist for the National Institutes of Health in Bethesda, Maryland, he spent a year in Europe as a Frederick Sheldon Traveling Fellow. 
He then returned to Harvard, where he earned his M.A. in 1973 and PhD in 1974, after which he was immediately appointed Assistant Professor of Computer Science. He became an Associate Professor in 1978, and Gordon McKay Professor of Computer Science in 1981. Lewis formally retired in 2020, but continues to teach as Gordon McKay Research Professor in Computer Science. His wife Marlyn McGrath retired in 2021 after 42 years as Harvard College's director of admissions. The Harry Lewis and Marlyn McGrath Professorship of Engineering and Applied Sciences was endowed by one of Lewis's former students in 2012. Teaching Lewis has pointed out that, largely because his career began when the field of computer science "barely existed" and Harvard offered almost no computer science courses at the undergraduate level, he originated almost all the courses he has taught. It was his proposal, in the late 1970s, that Harvard create a major specifically for computer science (which until then had been a branch of Harvard's applied mathematics program). From 2003 to 2008 he was designated a Harvard College Professor in recognition of "particularly distinguished contributions to undergraduate teaching". In 2021 the IEEE Computer Society awarded him its annual Mary Kenneth Keller Computer Science & Engineering Undergraduate Teaching Award, citing "his over forty-year dedication towards undergraduate computer science education at Harvard, his authoring of Computer Science introductory textbooks, and his mentoring of many future educators." Six of his teaching assistants are now members of the Harvard faculty and many others are professors of computer science (or related disciplines) elsewhere; many have gone on to win teaching awards themselves, including Eric Roberts (Association for Computing Machinery Karlstrom Award), Nicholas Horton (Robert V. Hogg Award), and Joseph A. 
Konstan (University of Minnesota Distinguished University Teaching Professor, Graduate/Professional Teaching Award), and Margo Seltzer (Herchel Smith Professor of Computer Science at Harvard, Phi Beta Kappa teaching award, Abramson Teaching Award). His undergraduate students have included Mark Zuckerberg (whose website "Six Degrees to Harry Lewis" was a precursor to Facebook, six degrees being a reference to the small world hypothesis), Microsoft founder Bill Gates (who solved an open theoretical problem Lewis had described in class), and nine future Harvard professors. Lewis is the author or coauthor of five textbooks: An Introduction to Computer Programming and Data Structures using MACRO-11 (1981). MACRO-11 was an assembly language for PDP-11 computers. Elements of the Theory of Computation (1981, with Christos H. Papadimitriou) covers automata theory, computational complexity theory, and the theory of formal languages; its inclusion of complexity theory and mathematical logic was innovative for its time. It has been called an "excellent traditional text" but one whose terse and heavily mathematical style can be intimidating. Although intended for undergraduates, it has also been used for introductory graduate courses. Data Structures and Their Algorithms (1991, with Larry Denenberg). Essential Discrete Mathematics for Computer Science (2019, with Rachel Zax). Ideas that Created the Future (2021), a collection of "forty-six classic papers in computer science that map the evolution of the field." Lewis has also taught a course on amateur athletics and the social history of sports in America. Dean of Harvard College In 1994 Lewis coauthored the "comprehensive" Report on the Structure of Harvard College, and in 1995 he was appointed dean of Harvard College, responsible for the nonacademic aspects of undergraduate life. 
In that capacity he oversaw a number of sometimes-controversial policy changes, including changes to the handling of allegations of sexual assault, reorganization of the college's public-service programs, a crackdown on underage alcohol consumption, and random assignment of students to upperclass houses (countering the social segregation found under the prior system of assignment according to student preference). He also pressed improvements to advising and health care. A colleague has said that Lewis "reshaped undergraduate life more powerfully than anyone else in recent memory." Lewis continued to teach throughout his time as dean. After the 2001 inauguration of Harvard University's twenty-seventh president, Lawrence Summers, Lewis and Summers came into conflict over the direction of Harvard College and its educational philosophy. Lewis, for example, emphasized the importance of extracurricular pursuits, advising incoming freshmen that "flexibility in your schedule, unstructured time in your day, and evenings spent with your friends rather than your books are all, in a larger sense, essential for your education", while Summers complained of an insufficiently intellectual "Camp Harvard" and admonished students that "You are here to work, and your business here is to learn." After Lewis issued what The Harvard Crimson called "a scathing indictment of the view that increasing intellectual rigor ought to be the [College's] priority", pointing out that prospective employers show less interest in grades than in personal qualities built outside the classroom, he was peremptorily removed as dean in March 2003. In 2015 Lewis served as interim Dean of the Harvard School of Engineering and Applied Sciences. Writings on education and technology Lewis is a Faculty Associate of Harvard's Berkman Center for Internet & Society. In addition to his research publications and textbooks, he has written a number of works on higher education and the impact of computers on society. 
Drawing heavily on his experience as dean of Harvard College, his Excellence Without A Soul: How a Great University Forgot Education (2006) critiques what he sees as the abandonment by American universities, including Harvard, of their educational mission. In "Renewing the Civic Mission of American Higher Education" (with Ellen Condliffe Lagemann, 2012) Lewis warns that "a flourishing multiplicity of worthy but uncoordinated agendas has crowded out higher education's commitment to the common good". Developed from a course taught by its authors, Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion (2008, with Hal Abelson and Ken Ledeen) explores the origins and consequences of the 21st-century explosion in digital information, including its impact on culture and privacy. Baseball as a Second Language: Explaining the Game Americans Use to Explain Everything Else (self-published as an experiment in open access in 2011) discusses the many ways baseball concepts and imagery have made their way into American English. It was inspired by Lewis' experiences explaining baseball to international students. Research Lewis' undergraduate thesis describing SHAPESHIFTER, "Two applications of hand-printed two-dimensional computer input", was written under computer graphics pioneer Ivan Sutherland and presented at the 23rd National Conference of the Association for Computing Machinery in 1968. It was followed by several papers on related topics. Much of Lewis' subsequent research concerned the computational complexity of problems in mathematical logic. His doctoral thesis, "Herbrand Expansions and Reductions of the Decision Problem", was supervised by Burton Dreben and dealt with Herbrand's theorem. His 1979 book, Unsolvable classes of quantificational formulas, complemented The Decision Problem: Solvable classes of quantificational formulas by Dreben and Warren Goldfarb. 
His 1978 paper "Renaming a set of clauses as a Horn set" addressed the Boolean satisfiability problem, of determining whether a logic formula in conjunctive normal form can be made true by a suitable assignment of its variables. In general, these problems are hard, but there are two major subclasses of satisfiability for which polynomial time solutions are known: 2-satisfiability (where each clause of the formula has two literals) and Horn-satisfiability (where each clause has at most one positive literal). Lewis expanded the second of these subclasses, by showing that the problem can still be solved in polynomial time when the input is not already in Horn form, but can be put into Horn form by replacing some variables by their negations. The problem of choosing which variables to negate so that no clause gets two positive literals, making the re-signed instance into a Horn set, turns out to be expressible as an instance of 2-satisfiability, the other solvable case of the satisfiability problem. By solving a 2-satisfiability instance to turn the given input into a Horn set, Lewis shows that the instances that can be turned into Horn sets can also be solved in polynomial time. The time for the sign reassignment in the original version of what Lindhorst and Shahrokhi called "this elegant result" was polynomial in the numbers of clauses and variables, but it can be reduced to linear time by breaking long input clauses into smaller clauses and applying a faster 2-satisfiability algorithm. Lewis' paper "Complexity results for classes of quantificational formulas" (1980) deals with the computational complexity of problems in first-order logic. Such problems are undecidable in general, but there are several special classes of these problems, defined by restricting the order in which their quantifiers appear, that were known to be decidable. One of these special classes, for instance, is the Bernays–Schönfinkel class. 
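A toy sketch may make the renaming construction concrete. The function below is illustrative only, not Lewis's published algorithm: it builds the pairwise "not both positive after renaming" constraints that constitute the 2-satisfiability instance and, for brevity, searches assignments by brute force; substituting a linear-time 2-SAT solver for that search is what keeps the overall procedure polynomial.

```python
from itertools import combinations, product

def horn_renaming(clauses, variables):
    """Find variables to negate so every clause becomes Horn
    (at most one positive literal), in the spirit of Lewis's
    reduction to 2-satisfiability.  Clauses use DIMACS-style
    signed integers (v or -v).  Returns a set of variables to
    flip, or None if no renaming works.
    """
    # The 2-SAT instance: for each pair of literals in a clause,
    # "not both positive after renaming".
    pairs = [pair for clause in clauses
             for pair in combinations(clause, 2)]
    # Brute-force search stands in for a real 2-SAT solver here.
    for bits in product([False, True], repeat=len(variables)):
        flip = dict(zip(variables, bits))

        def positive(lit):
            # A literal is positive after renaming iff its sign
            # is positive XOR its variable is flipped.
            return (lit > 0) != flip[abs(lit)]

        if all(not (positive(a) and positive(b)) for a, b in pairs):
            return {v for v in variables if flip[v]}
    return None

# [2, 3] has two positive literals; flipping variable 2 makes
# every clause Horn.
assert horn_renaming([[1, 2], [-1, 3], [2, 3]], [1, 2, 3]) == {2}
# This instance cannot be renamed into a Horn set.
assert horn_renaming([[1, 2], [-1, -2], [1, -2], [-1, 2]], [1, 2]) is None
```

The pairwise constraints are exactly a 2-SAT instance over the "flip" variables, which is why the decision version is polynomial.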
For each of these special classes, Lewis establishes tight exponential time bounds either for deterministic or nondeterministic time complexity. For instance, he shows that the Bernays–Schönfinkel class is NEXPTIME-complete, and more specifically that its nondeterministic time complexity is both upper- and lower-bounded by a singly exponential function of the input length. Börger, Grädel, and Gurevich write that "this paper initiated the study of the complexity of decidable classes of the decision problem". "A logic of concrete time intervals" (1990) concerned temporal logic. This paper accompanied an earlier Aiken Computation Laboratory technical report, "Finite-state analysis of asynchronous circuits with bounded temporal uncertainty", where he first proposed the representation of an asynchronous circuit, with bounded temporal uncertainty on gate transition events, as a finite-state machine. This paper was the earliest work on the verification of timing properties that modeled time both asynchronously and continuously, neither discretizing time nor imposing a global clock. Some of Lewis' other heavily cited research papers extend beyond logic. His paper "Symbolic evaluation and the global value graph" (1977, with his student John Reif) concerned data-flow analysis and symbolic execution in compilers. And his paper "Symmetric space-bounded computation" (1982, with Christos Papadimitriou) was the first to define symmetric Turing machines and symmetric space complexity classes such as SL (an undirected or reversible analogue of nondeterministic space complexity, later shown to coincide with deterministic logarithmic space). In 1982, he chaired the program committee for the Symposium on Theory of Computing, one of the two top research conferences in theoretical computer science, considered broadly. Personal Lewis is a Visitor of Ralston College and a Life Trustee of the Roxbury Latin School. From 1995 to 2003 he was Trustee of the Charity of Edward Hopkins. 
The New York Times journalist David Fahrenthold is his son-in-law; while still a Harvard undergraduate, Fahrenthold wrote of his future father-in-law. Notes Selected publications Computer science research Börger, Egon (1981). Review of Unsolvable classes of quantificational formulas. Computers and society Interview with authors, Stanford Center for Internet and Society Textbooks See in particular p. 205. Higher education Other References External links "Bits and Pieces" Lewis' blog "Blown to Bits" Lewis' old blog 1947 births Living people Writers from Boston People from Wellesley, Massachusetts American computer scientists 20th-century American mathematicians 21st-century American mathematicians Mathematical logicians American theoretical computer scientists Harvard University faculty Harvard University alumni American education writers Roxbury Latin School alumni American people of German descent American people of Russian-Jewish descent American textbook writers
Harry R. Lewis
[ "Mathematics" ]
2,833
[ "Mathematical logic", "Mathematical logicians" ]
10,995,067
https://en.wikipedia.org/wiki/Rhenium%20pentachloride
Rhenium pentachloride is an inorganic compound with the formula ReCl5. This red-brown solid is paramagnetic. Structure and preparation Rhenium pentachloride has a bioctahedral structure and can be described as Cl4Re(μ-Cl)2ReCl4. The (μ-Cl)2 part of this formula indicates that two chloride ligands are bridging ligands, i.e. they connect to two Re atoms. The Re-Re distance is 3.74 Å. The motif is similar to that seen for tantalum pentachloride. This compound was first prepared in 1933, a few years after the discovery of rhenium. The preparation involves chlorination of rhenium at temperatures up to 900 °C. The material can be purified by sublimation. ReCl5 is one of the most oxidized binary chlorides of Re. It does not undergo further chlorination. ReCl6 has been prepared from rhenium hexafluoride. Rhenium heptafluoride is known but not the heptachloride. Uses and reactions It degrades in air to a brown liquid. Although rhenium pentachloride has no commercial applications, it is of historic significance as one of the early catalysts for olefin metathesis. Reduction gives trirhenium nonachloride. Oxygenation affords the Re(VII) oxychloride: ReCl5 + 3 Cl2O → ReO3Cl + 5 Cl2 Comproportionation of the penta- and trichloride gives rhenium tetrachloride. References Rhenium compounds Chlorides Metal halides Substances discovered in the 1930s
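The oxychloride equation above can be checked for atom balance mechanically. The minimal script below is purely illustrative (its parser handles only simple formulas like these, with no parentheses or hydrates); it confirms that the rhenium, chlorine, and oxygen counts match on both sides.

```python
import re
from collections import Counter

def parse_formula(formula):
    """Count atoms in a simple formula such as 'ReCl5' or 'Cl2O'."""
    counts = Counter()
    # Each element symbol is an uppercase letter plus an optional
    # lowercase letter, followed by an optional count.
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side_counts(side):
    """Total atom counts for one side: a list of (coefficient, formula)."""
    total = Counter()
    for coefficient, formula in side:
        for element, n in parse_formula(formula).items():
            total[element] += coefficient * n
    return total

# ReCl5 + 3 Cl2O -> ReO3Cl + 5 Cl2
left = side_counts([(1, "ReCl5"), (3, "Cl2O")])
right = side_counts([(1, "ReO3Cl"), (5, "Cl2")])
assert left == right == {"Re": 1, "Cl": 11, "O": 3}
```

Note that ReO3Cl contributes only one chlorine atom, so both sides carry 11 Cl and 3 O.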
Rhenium pentachloride
[ "Chemistry" ]
363
[ "Chlorides", "Inorganic compounds", "Metal halides", "Salts" ]
10,995,185
https://en.wikipedia.org/wiki/Arachidonate%205-lipoxygenase
Arachidonate 5-lipoxygenase, also known as ALOX5, 5-lipoxygenase, 5-LOX, or 5-LO, is a non-heme iron-containing enzyme (EC 1.13.11.34) that in humans is encoded by the ALOX5 gene. Arachidonate 5-lipoxygenase is a member of the lipoxygenase family of enzymes. It transforms essential fatty acids (EFA) substrates into leukotrienes as well as a wide range of other biologically active products. ALOX5 is a current target for pharmaceutical intervention in a number of diseases. Gene The ALOX5 gene, which occupies 71.9 kilobase pairs (kb) on chromosome 10 (all other human lipoxygenases are clustered together on chromosome 17), is composed of 14 exons divided by 13 introns encoding the mature 78 kilodalton (kDa) ALOX5 protein consisting of 673 amino acids. The gene promoter region of ALOX5 contains 8 GC boxes but lacks TATA boxes or CAT boxes and thus resembles the gene promoters of typical housekeeping genes. Five of the 8 GC boxes are arranged in tandem and are recognized by the transcription factors Sp1 and Egr-1. A novel Sp1-binding site occurs close to the major transcription start site (position – 65); a GC-rich core region including the Sp1/Egr-1 sites may be critical for basal 5-LO promoter activity. Expression Cells primarily involved in regulating inflammation, allergy, and other immune responses, e.g. neutrophils, eosinophils, basophils, monocytes, macrophages, mast cells, dendritic cells, and B-lymphocytes express ALOX5. Platelets, T cells, and erythrocytes are ALOX5-negative. In skin, Langerhans cells strongly express ALOX5. Fibroblasts, smooth muscle cells and endothelial cells express low levels of ALOX5. Up-regulation of ALOX5 may occur during the maturation of leukocytes and in human neutrophils treated with granulocyte macrophage colony-stimulating factor and then stimulated with physiological agents. 
Aberrant expression of ALOX5 is seen in various types of human cancer tumors in vivo as well as in various types of human cancer cell lines in vitro; these tumors and cell lines include those of the pancreas, prostate, and colon. ALOX5 products, particularly 5-hydroxyeicosatetraenoic acid and 5-oxo-eicosatetraenoic acid, promote the proliferation of these ALOX5 aberrantly expressing tumor cell lines, suggesting that ALOX5 acts as a pro-malignancy factor for them and by extension their parent tumors. Studies with cultured human cells have found that there are a large number of ALOX5 mRNA splice variants due to alternative splicing. The physiological and/or pathological consequences of this splicing have yet to be defined. In one study, however, human brain tumors were shown to express three mRNA splice variants (2.7, 3.1, and 6.4 kb) in addition to the full 8.6 kb species; the abundance of the variants correlated with the malignancy of these tumors, suggesting that they may play a role in the development of these tumors. Biochemistry Human ALOX5 is a soluble, monomeric protein consisting of 673 amino acids with a molecular weight of ~78 kDa. Structurally, ALOX5 possesses: a C-terminal catalytic domain (residues 126–673); an N-terminal C2-like domain, which promotes its binding to ligand substrates, Ca2+, cellular phospholipid membranes, coactosin-like protein (COTL1), and Dicer protein; a PLAT domain within its C2-like domain, which, by analogy to other PLAT domain-bearing proteins, may serve as a mobile lid over ALOX5's substrate-binding site; an adenosine triphosphate (ATP) binding site (ATP is crucial for ALOX5's metabolic activity); and a proline-rich region (residues 566–577), sometimes termed an SH3-binding domain, which promotes its binding to proteins with SH3 domains such as Grb2 and may thereby link the enzyme's regulation to tyrosine kinase receptors. The enzyme possesses two catalytic activities as illustrated by its metabolism of arachidonic acid. 
ALOX5's dioxygenase activity adds a hydroperoxyl (i.e. HO2) residue to arachidonic acid (i.e. 5Z,8Z,11Z,14Z-eicosatetraenoic acid) at carbon 5 of its 1,4 diene group (i.e. its 5Z,8Z double bonds) to form 5(S)-hydroperoxy-6E,8Z,11Z,14Z-eicosatetraenoic acid (i.e. 5S-HpETE). The 5S-HpETE intermediate may then be released by the enzyme and rapidly reduced by cellular glutathione peroxidases to its corresponding alcohol, 5(S)-hydroxy-6E,8Z,11Z,14Z-eicosatetraenoic acid (i.e. 5-HETE), or, alternatively, further metabolized by ALOX5's epoxidase (also termed LTA4 synthase) activity which converts 5S-HpETE to its epoxide, 5S,6S-oxido-7E,9E,11Z,14Z-eicosatetraenoic acid (i.e. LTA4). LTA4 is then acted on by a separate, soluble enzyme, leukotriene-A4 hydrolase, to form the dihydroxyl product, leukotriene B4 (LTB4, i.e. 5S,12R-dihydroxy-6Z,8E,10E,14Z-eicosatetraenoic acid) or by either LTC4 synthase or microsomal glutathione S-transferase 2 (MGST2), which bind the sulfur of cysteine's thiol (i.e. SH) residue in the tripeptide glutamate-cysteine-glycine to carbon 6 of LTA4, thereby forming LTC4 (i.e. 5S-hydroxy,6R-(S-glutathionyl)-7E,9E,11Z,14Z-eicosatetraenoic acid). The Glu and Gly residues of LTC4 may be removed step-wise by gamma-glutamyltransferase and a dipeptidase to form sequentially LTD4 and LTE4. To varying extents, the other PUFA substrates of ALOX5 follow similar metabolic pathways to form analogous products. Non-human mammalian Alox5 enzymes like those in rodents appear to have, at least in general, similar structures, distributions, activities, and functions as human ALOX5. Hence, model Alox5 studies in rodents appear to be valuable for defining the function of ALOX5 in humans. Regulation ALOX5 exists primarily in the cytoplasm and nucleoplasm of cells. 
Upon cell stimulation, ALOX5: a) may be phosphorylated on serine 663, 523, and/or 271 by mitogen-activated protein kinases, S6 kinase, protein kinase A (PKA), protein kinase C, Cdc2, and/or a Ca2+/calmodulin-dependent protein kinase; b) moves to bind with phospholipids in the nuclear membrane and, probably, endoplasmic reticulum membrane; c) is able to accept substrate fatty acids presented to it by the 5-lipoxygenase-activating protein (FLAP) which is embedded in these membranes; and d) thereby becomes suited for high metabolic activity. These events, along with rises in cytosolic Ca2+ levels, which promote the translocation of ALOX5 from the cytoplasm and nucleoplasm to the cited membranes, are induced by cell stimulation such as that caused by chemotactic factors on leukocytes. Rises in cytosolic Ca2+, ALOX5's movement to membranes, and ALOX5's interaction with FLAP are critical to the physiological activation of the enzyme. Serine 271 and 663 phosphorylations do not appear to alter ALOX5's activity. Serine 523 phosphorylation (which is conducted by PKA) totally inactivates the enzyme and prevents its nuclear localization; stimuli which cause cells to activate PKA can thereby block production of ALOX5 metabolites. In addition to its activation, ALOX5 must gain access to its polyunsaturated fatty acid (PUFA) substrates, which commonly are bound in an ester linkage to the sn2 position of membrane phospholipids, in order to form biologically active products. This is accomplished by a large family of phospholipase A2 (PLA2) enzymes. The cytosolic PLA2 set (i.e. cPLA2s) of PLA2 enzymes in particular mediates many instances of stimulus-induced release of PUFA in inflammatory cells. For example, chemotactic factors stimulate human neutrophils to raise cytosolic Ca2+ which triggers cPLA2s, particularly the α isoform (cPLA2α), to move from its normal residence in the cytosol to cellular membranes. 
This chemotactic factor stimulation concurrently causes the activation of mitogen-activated protein kinases (MAPK) which in turn stimulates the activity of cPLA2α by phosphorylating it on ser-505 (other cell types may activate this or other cPLA2 isoforms using other kinases which phosphorylate them on different serine residues). These two events allow cPLA2s to release PUFA esterified to membrane phospholipids to FLAP which then presents them to ALOX5 for their metabolism. Other factors are known to regulate ALOX5 activity in vitro but have not been fully integrated into its physiological activation during cell stimulation. ALOX5 binds with the F actin-binding protein, coactosin-like protein. Based on in vitro studies, this protein binding serves to stabilize ALOX5 by acting as a chaperone or scaffold, thereby averting the enzyme's inactivation to promote its metabolic activity; depending on circumstances such as the presence of phospholipids and levels of ambient Ca2+, this binding also alters the relative levels of hydroperoxy versus epoxide (see arachidonic acid section below) products made by ALOX5. The binding of ALOX5 to membranes as well as its interaction with FLAP likewise cause the enzyme to alter its relative levels of hydroperoxy versus epoxide production, in these cases favoring the production of the epoxide products. The presence of certain diacylglycerols such as 1-oleoyl-2-acetyl-sn-glycerol, 1-hexadecyl-2-acetyl-sn-glycerol, 1-O-hexadecyl-2-acetyl-sn-glycerol, and 1,2-dioctanoyl-sn-glycerol, but not 1-stearoyl-2-arachidonyl-sn-glycerol, increases the catalytic activity of ALOX5 in vitro. Substrates, metabolites, and metabolite activities ALOX5 metabolizes various omega-3 and omega-6 PUFA to a wide range of products with varying and sometimes opposing biological activities. A list of these substrates along with their principal metabolites and metabolite activities follows. 
Arachidonic acid ALOX5 metabolizes the omega-6 fatty acid, arachidonic acid (AA, i.e. 5Z,8Z,11Z,14Z-eicosatetraenoic acid), to 5-hydroperoxyeicosatetraenoic acid (5-HpETE) which is then rapidly converted to physiologically and pathologically important products. Ubiquitous cellular glutathione peroxidases (GPXs) reduce 5-HpETE to 5-hydroxyeicosatetraenoic acid (5-HETE); 5-HETE may be further metabolized by 5-hydroxyeicosanoid dehydrogenase (5-HEDH) to 5-oxo-eicosatetraenoic acid (5-oxo-ETE). Alternatively, the intrinsic activity of ALOX5 may convert 5-HpETE to its 5,6 epoxide, leukotriene A4 (LTA4), which is then either rapidly converted to leukotriene B4 (LTB4) by leukotriene-A4 hydrolase (LTA4H) or to leukotriene C4 (LTC4) by LTC4 synthase (LTC4S); LTC4 exits its cells of origin through the MRP1 transporter (ABCC1) and is rapidly converted to LTD4 and then to LTE4 by cell surface-attached gamma-glutamyltransferase and dipeptidase enzymes. In another pathway, ALOX5 may act in series with a second lipoxygenase enzyme, ALOX15, to metabolize AA to lipoxin A4 (LxA4) and LxB4. GPXs, 5-HEDH, LTA4H, LTC4S, ABCC1, and cell surface peptidases may act similarly on the ALOX5-derived metabolites of other PUFA. LTB4, 5-HETE, and 5-oxo-ETE may contribute to the innate immune response as leukocyte chemotactic factors, i.e. they recruit and further activate circulating blood neutrophils and monocytes to sites of microbial invasion, tissue injury, and foreign bodies. When produced in excess, however, they may contribute to a wide range of pathological inflammatory responses (5-HETE and LTB4). 5-Oxo-ETE is a particularly potent chemotactic factor for and activator of eosinophils and may thereby contribute to eosinophil-based allergic reactions and diseases. These metabolites may also contribute to the progression of certain cancers such as those of the prostate, breast, lung, ovary, and pancreas. 
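The branching cascade described in this section can be summarized as a small directed graph. The sketch below is purely illustrative, a data-structure view of the route names used in the text rather than any quantitative biochemical model:

```python
# Adjacency-list sketch of the ALOX5-initiated arachidonic acid
# cascade; each edge carries the enzyme named in the text.
PATHWAY = {
    "arachidonic acid": [("5-HpETE", "ALOX5 dioxygenase")],
    "5-HpETE": [("5-HETE", "glutathione peroxidases"),
                ("LTA4", "ALOX5 epoxidase")],
    "5-HETE": [("5-oxo-ETE", "5-HEDH")],
    "LTA4": [("LTB4", "LTA4 hydrolase"),
             ("LTC4", "LTC4 synthase")],
    "LTC4": [("LTD4", "gamma-glutamyltransferase")],
    "LTD4": [("LTE4", "dipeptidase")],
}

def routes(start, graph=PATHWAY, trail=()):
    """Enumerate every route from `start` to a terminal metabolite."""
    steps = graph.get(start, [])
    if not steps:  # terminal product: emit the accumulated route
        yield trail + (start,)
        return
    for product, enzyme in steps:
        yield from routes(product, graph, trail + (start,))

for r in routes("arachidonic acid"):
    print(" -> ".join(r))
```

Running the traversal lists the three terminal routes named in the text: one ending at 5-oxo-ETE, one at LTB4, and one passing through LTC4 and LTD4 to LTE4.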
ALOX5 may be overexpressed in some of these cancers; 5-Oxo-ETE and to a lesser extent 5-HETE stimulate human cell lines derived from these cancers to proliferate; and the pharmacological inhibition of ALOX5 in these human cell lines causes them to die by entering apoptosis. ALOX5 and its LTB4 metabolite as well as this metabolite's BLT1 and BLT2 receptors have also been shown to promote the growth of various types of human cancer cell lines in culture. LTC4, LTD4, and LTE4 contribute to allergic airways reactions such as asthma, certain non-allergic hypersensitivity airways reactions, and other lung diseases involving bronchoconstriction by contracting these airways and promoting inflammation, micro-vascular permeability, and mucus secretion in these airways; they likewise contribute to various allergic and non-allergic reactions involving rhinitis, conjunctivitis, and urticaria. Certain of these peptide-leukotrienes have been shown to promote the growth of cultured human breast cancer and chronic lymphocytic leukemia cell lines, thereby suggesting that ALOX5 may contribute to the progression of these diseases. LxA4 and LxB4 are members of the specialized pro-resolving mediators class of polyunsaturated fatty acid metabolites. They form later than the ALOX5-derived chemotactic factors in the inflammatory response and are thought to limit or resolve these responses by, for example, inhibiting the entry of circulating leukocytes into inflamed tissues, inhibiting the pro-inflammatory action of the leukocytes, promoting leukocytes to exit from inflammatory sites, and stimulating leukocyte apoptosis (see Specialized pro-resolving mediators and Lipoxin). Mead acid Mead acid (i.e. 5Z,8Z,11Z-eicosatrienoic acid) is identical to AA except that it has a single rather than a double bond between its 14 and 15 carbons. ALOX5 metabolizes mead acid to 3-series (i.e. 
containing 3 double bonds) analogs of its 4-series AA metabolites viz., 5(S)-hydroxy-6E,8Z,11Z-eicosatrienoic acid (5-HETrE), 5-oxo-6,8,11-eicosatrienoic acid (5-oxo-ETrE), LTA3, and LTC3; since LTA3 inhibits LTA hydrolase, mead acid metabolizing cells produce relatively little LTB3 and are blocked from metabolizing arachidonic acid to LTB4. On the other hand, 5-oxo-ETrE is almost as potent as 5-oxo-ETE as an eosinophil chemotactic factor and may thereby contribute to the development of physiological and pathological allergic responses. Presumably, the same metabolic pathways that follow ALOX5 in metabolizing arachidonic acid to the 4-series metabolites likewise act on mead acid to form these products. Eicosapentaenoic acid ALOX5 metabolizes the omega-3 fatty acid, eicosapentaenoic acid (EPA, i.e. 5Z,8Z,11Z,14Z,17Z-eicosapentaenoic acid), to 5-hydroperoxy-eicosapentaenoic acid which is then converted to 5-series products that are structurally analogous to their arachidonic acid counterparts viz., 5-hydroxy-eicosapentaenoic acid (5-HEPE), 5-oxo-eicosapentaenoic acid (5-oxo-HEPE), LTB5, LTC5, LTD5, and LTE5. Presumably, the same metabolic pathways that follow ALOX5 in metabolizing arachidonic acid to the 4-series metabolites likewise act on EPA to form these 5-series products. ALOX5 also cooperates with other lipoxygenase, cyclooxygenase, or cytochrome P450 enzymes in serial metabolic pathways to metabolize EPA to resolvins of the E series viz., resolvin E1 (RvE1) and RvE2. 5-HEPE, 5-oxo-HEPE, LTB5, LTC5, LTD5, and LTE5 are generally less potent in stimulating cells and tissues than their arachidonic acid-derived counterparts; since their production is associated with reduced production of their arachidonic acid-derived counterparts, they may indirectly serve to reduce the pro-inflammatory and pro-allergic activities of their arachidonic acid-derived counterparts. 
RvE1 and RvE2 are specialized pro-resolving mediators that contribute to the resolution of inflammation and other reactions. Docosahexaenoic acid ALOX5 acts in series with ALOX15 to metabolize the omega 3 fatty acid, docosahexaenoic acid (DHA, i.e. 4Z,7Z,10Z,13Z,16Z,19Z-docosahexaenoic acid), to D series resolvins. The D series resolvins (i.e. RvD1, RvD2, RvD3, RvD4, RvD5, RvD6, AT-RvD1, AT-RvD2, AT-RvD3, AT-RvD4, AT-RvD5, and AT-RvD6) are specialized pro-resolving mediators that contribute to the resolution of inflammation, promote tissue healing, and reduce the perception of inflammation-based pain. Transgenic studies Studies in model animal systems that delete or overexpress the Alox5 gene have given seemingly paradoxical results. In mice, for example, Alox5 overexpression may decrease the damage caused by some types yet increase the damage caused by other types of invasive pathogens. This may be a reflection of the array of metabolites made by the Alox5 enzyme, some of which possess opposing activities, like the pro-inflammatory chemotactic factors and the anti-inflammatory specialized pro-resolving mediators. Alox5 and presumably human ALOX5 functions may vary widely depending on: the agents stimulating their activity; the types of metabolites that they form; the specific tissues responding to these metabolites; the times (e.g. early versus delayed) at which observations are made; and very likely various other factors. Alox5 gene knockout mice are more susceptible to the development and pathological complications of experimental infection with Klebsiella pneumoniae, Borrelia burgdorferi, and Paracoccidioides brasiliensis. In a model of cecum perforation-induced sepsis, Alox5 gene knockout mice exhibited a decrease in the number of neutrophils and an increase in the number of bacteria that accumulated in their peritoneum. 
On the other hand, Alox5 gene knockout mice demonstrate an enhanced resistance and lessened pathology in response to Brucella abortus infection and, at least in its acute phase, Trypanosoma cruzi infection. Furthermore, Alox5-null mice exhibit a worsened inflammatory component, failure to resolve inflammation-related responses, and decreased survival in experimental models of respiratory syncytial virus disease, Lyme disease, Toxoplasma gondii disease, and corneal injury. These studies indicate that Alox5 can serve a protective function, presumably by generating metabolites such as chemotactic factors that mobilize the innate immune system. However, the suppression of inflammation appears also to be a function of Alox5, presumably by contributing to the production of anti-inflammatory specialized pro-resolving mediators (SPMs), at least in certain rodent inflammation-based model systems. These genetic studies allow that ALOX5, along with the chemotactic factors and SPMs that it contributes to making, may play similar opposing pro-inflammatory and anti-inflammatory roles in humans. Alox5 gene knockout mice exhibit an increase in the lung tumor volume and liver metastasis of Lewis lung carcinoma cells that were directly implanted into their lungs; this result differs from many in vitro studies, which implicated human ALOX5, along with certain of its metabolites, in promoting cancer cell growth, in that it finds that mouse Alox5 and, perhaps, certain of its metabolites inhibit cancer cell growth. Studies in this model suggest that Alox5, acting through one or more of its metabolites, reduces growth and progression of the Lewis carcinoma by recruiting cancer-inhibiting CD4+ T helper cells and CD8+ cytotoxic T cells to the sites of implantation. This striking difference between human in vitro and mouse in vivo studies may reflect species differences, in vitro versus in vivo differences, or cancer cell type differences in the function of ALOX5/Alox5. 
Clinical significance Inflammation Studies implicate ALOX5 in contributing to innate immunity by helping to mount inflammatory responses to a wide range of insults, including acute pathogen invasion, trauma, and burns. However, ALOX5 also contributes to the development and progression of excessive and chronic inflammatory responses such as rheumatoid arthritis, atherosclerosis, inflammatory bowel disease, and autoimmune diseases. These dual functions probably reflect ALOX5's ability to form: a) the potent chemotactic factor, LTB4, and possibly also the weaker chemotactic factor, 5S-HETE, which serve to attract and otherwise activate inflammation-inducing cells such as circulating leukocytes and tissue macrophages and dendritic cells and b) the lipoxin and resolvin subfamily of SPMs, which tend to inhibit these cells as well as the overall inflammatory responses. Allergy ALOX5 contributes to the development and progression of allergy and allergic inflammation reactions and diseases such as allergic rhinitis, conjunctivitis, asthma, rashes, and eczema. This activity reflects its formation of a) LTC4, LTD4, and LTE4, which promote vascular permeability, contract airway smooth muscle, and otherwise perturb these tissues and b) LTB4 and possibly 5-oxo-ETE, which are chemotactic factors for, and activators of, the cell type promoting such reactions, the eosinophil. 5-Oxo-ETE and, to a lesser extent, 5S-HETE, also act synergistically with another pro-allergic mediator, platelet-activating factor, to stimulate and otherwise activate eosinophils. Hypersensitivity reactions ALOX5 contributes to non-allergic NSAID hypersensitivity reactions of the respiratory system and skin such as aspirin-exacerbated respiratory disease, nonallergic rhinitis, non-allergic conjunctivitis, angioedema, and urticaria. It may also contribute to hypersensitivity responses of the respiratory system to cold air and possibly even alcoholic beverages. 
These pathological responses likely involve the same ALOX5-formed metabolites as those promoting allergic reactions. ALOX5-inhibiting drugs The tissue, animal model, and animal and human genetic studies cited above implicate ALOX5 in a wide range of diseases: excessive inflammatory responses to pathogens, trauma, burns, and other forms of tissue injury; chronic inflammatory conditions such as rheumatoid arthritis, atherosclerosis, inflammatory bowel disease, autoimmune diseases, and Alzheimer's disease; allergy and allergic inflammation reactions such as allergic rhinitis, conjunctivitis, asthma, rashes, and eczema; NSAID-induced acute non-allergic reactions such as asthma, rhinitis, conjunctivitis, angioedema, and urticaria; and the progression of certain cancers such as those of the prostate and pancreas. However, among drugs that inhibit ALOX5, only zileuton, along with its controlled-release preparation, zileuton CR, has proven clinically successful for treating any of these diseases. Zileuton is approved in the US for the prophylaxis and chronic treatment of allergic asthma; it is also used to treat chronic non-allergic reactions such as NSAID-induced non-allergic lung, nose, and conjunctiva reactions as well as exercise-induced asthma. Zileuton has shown some beneficial effects in clinical trials for the treatment of rheumatoid arthritis, inflammatory bowel disease, and psoriasis. Zileuton is currently undergoing a phase II study for the treatment of acne vulgaris (mild-to-moderate inflammatory facial acne) and a phase I study combining it with imatinib for treating chronic myeloid leukemia. Zileuton and zileuton CR cause elevations in liver enzymes in 2% of patients; the two drugs are therefore contraindicated in patients with active liver disease or persistent hepatic enzyme elevations greater than three times the upper limit of normal. 
Hepatic function should be assessed prior to initiating either of these drugs, monthly for the first 3 months, every 2–3 months for the remainder of the first year, and periodically thereafter; zileuton also has a rather unfavorable pharmacological profile. Given these deficiencies, other drugs targeting ALOX5 are under study. Flavocoxid is a proprietary blend of purified plant-derived bioflavonoids including baicalin and catechins. It inhibits COX-1, COX-2, and ALOX5 in vitro and in animal models. Flavocoxid has been approved for use as a medical food in the United States since 2004 and is available by prescription for use in chronic osteoarthritis in tablets of 500 mg under the commercial name Limbrel. However, in clinical trials serum liver enzyme elevations occurred in up to 10% of patients on flavocoxid therapy, although elevations above 3 times the upper limit of normal occurred in only 1–2% of recipients. Since its release, however, there have been several reports of clinically apparent acute liver injury attributed to flavocoxid. Setileuton (MK-0633) has completed Phase II clinical trials for the treatment of asthma, chronic obstructive lung disease, and atherosclerosis (NCT00404313, NCT00418613, and NCT00421278, respectively). PF-4191834 has completed phase II studies for the treatment of asthma (NCT00723021). Hyperforin, an active constituent of the herb St John's wort, inhibits ALOX5 at micromolar concentrations. Indirubin-3'-monoxime, a derivative of the naturally occurring alkaloid, indirubin, is also described as a selective ALOX5 inhibitor effective in a range of cell-free and cell-based model systems. In addition, curcumin, a constituent of turmeric, is a 5-LO inhibitor as defined by in vitro studies of the enzyme. Acetyl-keto-beta-boswellic acid (AKBA), one of the bioactive boswellic acids found in Boswellia serrata (Indian frankincense), has been found to inhibit 5-lipoxygenase. 
Boswellia reduces brain edema in patients irradiated for brain tumors, an effect believed to be due to 5-lipoxygenase inhibition. While only one ALOX5-inhibiting drug has proven useful for treating human diseases, other drugs that act downstream in the ALOX5-initiated pathway are in clinical use. Montelukast, zafirlukast, and pranlukast are receptor antagonists for the cysteinyl leukotriene receptor 1, which contributes to mediating the actions of LTC4, LTD4, and LTE4. These drugs are in common use as prophylaxis and chronic treatment of allergic and non-allergic asthma and rhinitis diseases and also may be useful for treating acquired childhood sleep apnea due to adenotonsillar hypertrophy. To date, however, neither LTB4 synthesis inhibitors (i.e. blockers of ALOX5 or LTA4 hydrolase) nor inhibitors of LTB4 receptors (BLT1 and BLT2) have turned out to be effective anti-inflammatory drugs. Furthermore, blockers of LTC4, LTD4, and LTE4 synthesis (i.e. ALOX5 inhibitors) as well as LTC4 and LTD4 receptor antagonists have proven inferior to corticosteroids as single-drug therapy for persistent asthma, particularly in patients with airway obstruction. As a second drug added to corticosteroids, leukotriene inhibitors appear inferior to beta2-adrenergic agonist drugs in the treatment of asthma. Human genetics ALOX5 contributes to the formation of PUFA metabolites that may promote (e.g. the leukotrienes, 5-oxo-ETE) but also to metabolites that inhibit (i.e. lipoxins, resolvins) diseases. Consequently, a given abnormality in the expression or activity of ALOX5 due to variations in its gene may promote or suppress inflammation depending on the relative roles these opposing metabolites have in regulating the particular type of reaction examined. 
Furthermore, the ALOX5-related tissue reactions studied to date are influenced by multiple genetic, environmental, and developmental variables that may influence the consequences of abnormalities in the expression or function of ALOX5. Consequently, the effects of abnormalities in the ALOX5 gene may vary with the population and individuals studied. Allergic asthma The upstream promoter in the human ALOX5 gene commonly possesses five GGGCCGG repeats, which bind the Sp1 transcription factor and thereby increase the gene's transcription of ALOX5. In a study of 624 asthmatic children in Ankara, Turkey, those homozygous for variants of this five-repeat promoter region were much more likely to have severe asthma. These variants are associated with reduced levels of ALOX5 as well as reduced production of LTC4 in their eosinophils. These data suggest that ALOX5 may contribute to dampening the severity of asthma, possibly by metabolizing PUFA to specialized pro-resolving mediators. Single nucleotide polymorphism differences in the genes that promote ALOX5 activity (i.e. 5-lipoxygenase-activating protein), metabolize the initial product of ALOX5, 5S-HpETE, to LTB4 (i.e. leukotriene-A4 hydrolase), or are the cellular receptors responsible for mediating the cellular responses to the downstream ALOX5 products LTC4 and LTD4 (i.e. CYSLTR1 and CYSLTR2) have been associated with the presence of asthma in single population studies. These studies suggest genetic variants may play a role, albeit a relatively minor one, in the overall susceptibility to allergic asthma. NSAID-induced non-allergic reactions Aspirin and other non-steroidal anti-inflammatory drugs (NSAIDs) can cause NSAID-exacerbated diseases (N-ERD). These have been recently classified into 5 groups, 3 of which are not caused by a classical immune mechanism and are relevant to the function of ALOX5: 1) NSAIDs-exacerbated respiratory disease (NERD), i.e. 
symptoms of bronchial airways obstruction, shortness of breath, and/or nasal congestion/rhinorrhea occurring shortly after NSAID ingestion in patients with a history of asthma and/or rhinosinusitis; 2) NSAIDs-exacerbated cutaneous disease (NECD), i.e. wheal and/or angioedema responses occurring shortly after NSAID ingestion in patients with a history of chronic urticaria; and 3) NSAIDs-induced urticaria/angioedema (NIUA), i.e. wheals and/or angioedema symptoms occurring shortly after NSAID ingestion in patients with no history of chronic urticaria. The genetic single-nucleotide polymorphism (SNP) variant in the ALOX5 gene, ALOX5-1708 G>A, is associated with NSAID-induced asthma in Korean patients, and three SNP ALOX5 variants, rs4948672, rs1565096, and rs7894352, are associated with NSAID-induced cutaneous reactions in Spanish patients. Atherosclerosis Bearers of two variations in the predominant five-tandem-repeat Sp1-binding motif (GGGCCGG) of the ALOX5 gene promoter, in a study of 470 subjects (non-Hispanic whites, 55.1%; Hispanics, 29.6%; Asian or Pacific Islander, 7.7%; African Americans, 5.3%; and others, 2.3%), were positively associated with the severity of atherosclerosis, as judged by carotid intima–media thickness measurements. Variant alleles involved deletions (one or two) or additions (one, two, or three) of Sp1 motifs relative to the five-tandem-motif allele. See also Arachidonate 5-lipoxygenase inhibitor References Further reading External links EC 1.13.11 Enzymes Eicosanoids Lipid metabolism Peripheral membrane proteins
Arachidonate 5-lipoxygenase
[ "Chemistry" ]
7,747
[ "Lipid biochemistry", "Lipid metabolism", "Metabolism" ]
10,995,472
https://en.wikipedia.org/wiki/Prostaglandin%20H2
{{DISPLAYTITLE:Prostaglandin H2}} Prostaglandin H2 (PGH2) is a type of prostaglandin and a precursor for many other biologically significant molecules. It is synthesized from arachidonic acid in a reaction catalyzed by a cyclooxygenase enzyme. The conversion from arachidonic acid to prostaglandin H2 is a two-step process. First, the cyclooxygenase activity of the enzyme catalyzes the addition of two molecules of oxygen, forming the 1,2-dioxane bridge and a peroxide functional group to yield prostaglandin G2 (PGG2). Second, the enzyme's peroxidase activity reduces the peroxide functional group to a secondary alcohol, forming prostaglandin H2. Other reducing agents such as hydroquinone have also been observed to reduce PGG2 to PGH2. PGH2 is unstable at room temperature, with a half-life of 90–100 seconds, so it is often converted into a different prostaglandin. It is acted upon by: prostacyclin synthase to create prostacyclin; thromboxane-A synthase to create thromboxane A2 and 12-(S)-hydroxy-5Z,8E,10E-heptadecatrienoic acid (HHT) (see 12-hydroxyheptadecatrienoic acid); prostaglandin D2 synthase to create prostaglandin D2; and prostaglandin E synthase to create prostaglandin E2. It rearranges non-enzymatically to a mixture of 12-(S)-hydroxy-5Z,8E,10E-heptadecatrienoic acid (HHT) and 12-(S)-hydroxy-5Z,8Z,10E-heptadecatrienoic acid (see 12-hydroxyheptadecatrienoic acid). Functions of prostaglandin H2 include regulating the constriction and dilation of blood vessels and stimulating platelet aggregation; it binds to the thromboxane receptor on platelets' cell membranes to trigger platelet migration and adhesion to other platelets. Aspirin blocks the conversion of arachidonic acid to prostaglandin H2 by inhibiting cyclooxygenase. References Organic peroxides Prostaglandins
Prostaglandin H2
[ "Chemistry", "Biology" ]
538
[ "Biotechnology stubs", "Organic compounds", "Biochemistry stubs", "Biochemistry", "Organic peroxides" ]
10,995,788
https://en.wikipedia.org/wiki/Prostaglandin%20E
Prostaglandin E is a family of naturally occurring prostaglandins that are used as medications. Types include: Prostaglandin E1, also known as alprostadil; and Prostaglandin E2, also known as dinoprostone. Both types are on the World Health Organization's List of Essential Medicines. Prostaglandin E plays an important role in thermoregulation of the human brain. Decreased formation of prostaglandin E through inhibition of cyclooxygenase is the basis for the antipyretic action of nonsteroidal anti-inflammatory drugs (NSAIDs). References External links Prostaglandins World Health Organization essential medicines
Prostaglandin E
[ "Chemistry", "Biology" ]
144
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
10,995,827
https://en.wikipedia.org/wiki/Lewin%27s%20equation
Lewin's equation, B = f(P, E), is a heuristic formula proposed by psychologist Kurt Lewin as an explanation of what determines behavior. Description The formula states that behavior is a function of the person and their environment: B = f(P, E), where B is behavior, P is the person, and E is the environment. This equation was first presented in Lewin's book, Principles of Topological Psychology, published in 1936. The equation was proposed as an attempt to unify the different branches of psychology (e.g. child psychology, animal psychology, psychopathology) with a flexible theory applicable to all distinct branches of psychology. This equation is directly related to Lewin's field theory. Field theory is centered around the idea that a person's life space determines their behavior. Thus, the equation was also expressed as B = f(L), where L is the life space. In Lewin's book, he first presents the equation as B = f(S), where behavior is a function of the whole situation (S). He then extended this original equation by suggesting that the whole situation could be roughly split into two parts: the person (P) and the environment (E). According to Lewin, social behavior, in particular, was the most psychologically interesting and relevant behavior. Lewin held that the variables in the equation (e.g. P and E) could be replaced with the specific, unique situational and personal characteristics of the individual. As a result, he also believed that his formula, while seemingly abstract and theoretical, had distinct concrete applications for psychology. Gestalt influence Many scholars (and even Lewin himself) have acknowledged the influence of Gestalt psychology on Lewin's work. Lewin's field theory holds that a number of different and competing forces combine to result in the totality of the situation. A single person's behavior may be different in unique situations, as he or she is acting partly in response to these differential forces and factors (e.g. 
the environment, or E):"A physically identical environment can be psychologically different even for the same man in different conditions."Similarly, two different individuals placed in exactly the same situation will not necessarily engage in the same behavior. "Even when from the standpoint of the physicist the environment is identical or nearly identical for a child and or an adult, the psychological situation can be fundamentally different."For this reason, Lewin holds that the person (e.g. P) must be considered in conjunction with the environment. P consists of the entirety of a person (e.g. his or her past, present, future, personality, motivations, desires). All elements within P are contained within the life space, and all elements within P interact with each other. Lewin emphasizes that the desires and motivations within the person and the situation in its entirety, the sum of all these competing forces, combine to form something larger: the life space. This notion speaks directly to the gestalt idea that the "whole is greater than the sum of its parts." The idea that the parts (e.g. P and E) of the whole (e.g. S) combine to form an interactive system has been called Lewin's 'dynamic approach,' a term that specifically refers to regarding "the elements of any situation...as parts of a system." Interaction of person and environment Relative importance of P and E Lewin explicitly stated that either the person or the environment may be more important in particular situations:"Every psychological event depends upon the state of the person and at the same time on the environment, although their relative importance is different in different cases."Thus, Lewin believed he succeeded in creating an applicable theory that was also "flexible enough to do justice to the enormous differences between the various events and organisms." 
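Lewin's point that the same person may behave differently in different environments, and different persons differently in the same environment, can be illustrated with a deliberately simple sketch. The `motivation` and `food_visible` attributes below are hypothetical placeholders chosen for illustration, not constructs Lewin himself specified:

```python
# A toy illustration of Lewin's B = f(P, E): behavior is a joint
# function of the person and the environment, not of either alone.
# The attributes used here are hypothetical examples.

def behavior(person: dict, environment: dict) -> str:
    # The comma in f(P, E) leaves the form of the interaction open;
    # here we pick one arbitrary joint rule for illustration.
    if person["motivation"] == "hungry" and environment["food_visible"]:
        return "approach"
    return "explore"

# The same person behaves differently in different environments...
print(behavior({"motivation": "hungry"}, {"food_visible": True}))   # approach
print(behavior({"motivation": "hungry"}, {"food_visible": False}))  # explore
# ...and different persons behave differently in the same environment.
print(behavior({"motivation": "sated"}, {"food_visible": True}))    # explore
```

Neither the person dictionary nor the environment dictionary alone determines the output; only their combination does, which is the substance of the equation.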
In a sense, he held that it was inappropriate to pick a side on the classic psychological debate of nature versus nurture, as he held that "every scientific psychology must take into account whole situations, i.e., the state of both person and environment." Further, Lewin stated that:"The question whether heredity or environment plays the greater part also belongs to this kind of thinking. The transition of the Galilean thinking involved a recognition of the general validity of the thesis: An event is always the result of the interaction of several facts." Specific function linking P and E Lewin defined an empirical law as "the functional relationship between various facts," where facts are the "different characteristics of an event or situation." In Lewin's original proposal of his equation, he did not specify how exactly the person and the environment interact to produce behavior. Some scholars have noted that Lewin's use of the comma in his equation between the P and E represents Lewin's flexibility and receptiveness to multiple ways that these two may interact. Lewin indeed held that the importance of the person or of the environment may vary on a case-by-case basis. The use of the comma may provide the flexibility to support this assertion. Psychological reality Lewin differentiates between multiple realities. For example, the psychological reality encompasses everything that an individual perceives and believes to be true. Only what is contained within the psychological reality can affect behavior. In contrast, things that may be outside the psychological reality, such as bits of the physical reality or social reality, have no direct relation to behavior. 
Lewin states:"The psychological reality...does not depend upon whether or not the content...exists in a physical or social sense....The existence or nonexistence...of a psychological fact are independent of the existence or nonexistence to which its content refers."As a result, the only reality that is contained within the life space is the psychological reality, as this is the reality that has direct consequences for behavior. For example, in Principles of Topological Psychology, Lewin continually reiterates the sentiment that "the physical reality of the object concerned is not decisive for the degree of psychological reality." Lewin refers to the example of a "child living in a 'magic world.'" Lewin asserts that, for this child, the realities of the 'magic world' are a psychological reality, and thus must be considered as an influence on their subsequent behavior, even though this 'magic world' does not exist within the physical reality. Likewise, scholars familiar with Lewin's work have emphasized that the psychological situation, as defined by Lewin, is strictly composed of those facts which the individual perceives or believes. Principle of contemporaneity In Lewin's theoretical framework, the whole situation—or the life space, which contains both the person and the environment—is dynamic. In order to accurately determine behavior, Lewin's equation holds that one must consider and examine the life space at the exact moment when the behavior occurred. The life space, even moments after such behavior has occurred, is no longer exactly the same as it was when behavior occurred and thus may not accurately represent the whole situation that led to the behavior in the first place. This focus on the present situation represented a departure from many other theories at the time. Most theories tended to focus on looking at an individual's past in order to explain their present behavior, such as Sigmund Freud's psychoanalysis. 
Lewin's emphasis on the present state of the life space did not preclude the idea that an individual's past may impact the present state of the life space:"[The] influence of the previous history is to be thought of as indirect in dynamic psychology: From the point of view of systematic causation, past events cannot influence present events. Past events can only have a position in the historical causal chains whose interweavings create the present situation."Lewin referred to this concept as the principle of contemporaneity. References Further reading Helbing, D. (2010). Quantitative Sociodynamics: Stochastic Methods and Models of Social Interaction Processes (2nd ed.). Springer. Lewin, K. (1943). Defining the "Field at a Given Time." Psychological Review, 50, 292–310. Lewin, K (1936). Principles of Topological Psychology. New York: McGraw-Hill. External links Lewin, Sticky Minds Psychological theories Behavioral concepts
Lewin's equation
[ "Biology" ]
1,729
[ "Behavior", "Behavioral concepts", "Behaviorism" ]
10,995,966
https://en.wikipedia.org/wiki/Prostaglandin%20DP1%20receptor
{{DISPLAYTITLE:Prostaglandin DP1 receptor}} The prostaglandin D2 receptor 1 (DP1), a G protein-coupled receptor encoded by the PTGDR gene (also termed PTGDR1), is primarily a receptor for prostaglandin D2 (PGD2). The receptor is a member of the prostaglandin receptors belonging to the subfamily A14 of rhodopsin-like receptors. Activation of DP1 by PGD2 or other cognate receptor ligands is associated with a variety of physiological and pathological responses in animal models. Gene The PTGDR gene is located on chromosome 14 at position q22.1 (i.e. 14q22.1), a chromosomal locus associated with asthma and other allergic disorders. PTGDR, which consists of 4 introns and 5 exons, encodes a ~44 kilodalton protein as well as multiple alternatively spliced transcript variants. Expression DP1 is expressed primarily by cells involved in mediating allergic and inflammatory reactions, i.e. human and rodent mast cells, basophils, eosinophils, Th2 cells, and dendritic cells, and by cells contributing to these reactions, i.e. human and/or rodent airway epithelial cells, vascular endothelium, mucus-secreting goblet cells in the nasal and colonic mucosa, and serous gland cells of the nose. DP1 protein is expressed in mouse placenta and testes, and mRNA transcripts have also been detected in the meninges of the mouse brain by multiple reports and, by single reports, in the rat meninges as well as the mouse thalamus, hippocampus, cerebellum, brainstem, and retina. Ligands Activating ligands PGD2 binds to and activates DP1 at concentrations in the 0.5 to 1 nanomolar range. Relative potencies in binding to and activating DP1 for the following prostanoids are: PGD2 >> PGE2 > prostaglandin F2α > PGI2 = thromboxane A2, with PGD2 being more than 100-fold more potent than PGE2 in binding to and stimulating DP1. 
PGJ2, Δ12-PGJ2, and 15-deoxy-Δ12,14-PGJ2, which form rapidly in vitro and in vivo as non-enzymatic rearrangements of PGD2 (see cyclopentenone prostaglandins), also bind to and activate DP1, with PGJ2 doing so almost as effectively as PGD2 and the latter two PGJs doing so 100-fold and 300-fold less potently than PGD2. Other compounds, e.g. L-644,698, BW 245C, BW A868C, and ZK 110841, have been synthesized, found to be about as potent as PGD2 in binding to and stimulating DP1, and used to study the function of this receptor. The drug treprostinil is a high affinity ligand for and potent activator of not only DP1 but also two other prostanoid receptors, EP2 and IP. Inhibiting ligands Asapiprant (S-555739) and laropiprant are selective receptor antagonists of DP1, whereas vidupiprant is a receptor antagonist for both DP1 and DP2. Mechanisms of cell activation Among the 8 human prostanoid receptors, DP1, along with IP, EP2, and EP4, is classified as a relaxant prostanoid receptor; each of these, including DP1, is a G protein-coupled receptor that works by activating Gs proteins, which in turn raise cellular cAMP levels, thereby mobilizing cyclic adenosine monophosphate-activated cell signaling pathways which regulate cell function. DP1 activation also causes the mobilization of calcium in HEK293 cells transfected with this receptor; it does so by a mechanism that is independent of inositol trisphosphate signaling. Ligand-activated DP1 also mobilizes G protein-coupled receptor kinase 2 (GRK2, also known as β-adrenergic receptor kinase 2 [BARK1]) and arrestin 2 (also known as arrestin beta 1 [ARRB1]). These agents act to uncouple DP1 from its G proteins and to internalize it, thereby limiting DP1's cell-activating lifetime in a process termed homologous desensitization. 
Activation of protein kinase Cs likewise triggers DP1 to uncouple from G proteins and internalize, although in model studies DP1 has not been shown to cause the activation of PKC (see Protein kinase C#Function). Activities Allergy Tissue studies Studies in mouse as well as human tissues and cells find that DP1 stimulation has numerous pro-allergic effects. DP1 activation blocks the production of interleukin 12 by dendritic cells; this biases the development of naïve T lymphocytes to Th-2 rather than Th-1 helper cells and thereby promotes allergic rather than non-allergic inflammatory responses (see T helper cell#Th1/Th2 Model for helper T cells and T helper cell#Limitations to the Th1/Th2 model). DP1 activation also promotes allergic reactions by suppressing the function of natural killer cells, prolonging the survival of eosinophils, and stimulating the maturation of dermal mast cells. Animal studies Studies of experimentally-induced allergic responses in animals further implicate DP1 in allergy. DP1 gene knockout and/or DP1 inhibition by receptor antagonists markedly reduces airway inflammation, obstruction, hypersensitivity, and pro-allergic cytokine and chemokine production in a mouse model of ovalbumin-induced asthma as well as allergic symptoms in a guinea pig model of allergic conjunctivitis, rhinitis, and asthma. The administration of PGD2 into the skin of rats or into the eyes of rabbits causes local symptoms of allergy. These responses are thought, but not yet proved, to be mediated by DP1 activation. In contrast to these results, however, intratracheal administration of a selective DP1 activator stimulated DP1 on dendritic cells to suppress airway allergic inflammation by increasing the number of Foxp3+ CD4+ regulatory T cells. Furthermore, DP1 activation reduces eosinophilia in allergic inflammation and blocks antigen-presenting Langerhans cell function in mice. 
These results suggest that DP1 can promote or suppress allergic responses depending on the animal model tested and, perhaps, the type of allergic reaction investigated. Human studies Allergen inhalation challenge of humans produces rises in the PGD2 levels in their bronchoalveolar lavage fluids. Furthermore, the administration of PGD2 into the nose or skin of human volunteers produces local symptoms of allergy, and the inhalation of PGD2 by asthmatics causes constriction of the airways as well as the potentiation of airway constriction responses. These reactions, similar to those produced in animal studies, may be mediated by DP1. Central nervous system PGD2 is the most abundant prostanoid in the brains of humans and other mammals, and DP1 receptors are located on arachnoid mater trabecular cells in the mouse basal forebrain. The PGD2-DP1 pathway is involved in the regulation of non-rapid eye movement sleep in rodents: infusion of PGD2 into the lateral ventricle of mice or the brain of rats induces an increase in the amount of non-rapid eye movement sleep in wild-type (WT) but not DP1-deficient animals. This sleep-induction appears to involve the DP1-dependent stimulation of adenosine formation and subsequent stimulation of the adenosine A2A receptor by adenosine. In humans, a genetic variant of ADA associated with the reduced metabolism of adenosine to inosine has been reported to enhance deep sleep and slow-wave activity (SWA) during sleep. These studies suggest that DP1 has a similar role in the sleep of humans. Pulmonary hypertension Pulmonary arterial hypertension in humans is commonly treated with specific pulmonary artery vasodilators that increase survival, such as the prostacyclin (PGI2) mimetics including treprostinil, epoprostenol, iloprost, and beraprost. 
Recent studies find that DP1 as well as the PGI2 receptor protein are expressed in human pulmonary arteries and veins; that treprostinil but not iloprost caused pulmonary vein relaxation in part by acting through DP1 in isolated human pulmonary vascular preparations; and that the effect of treprostinil on DP1 in human pulmonary veins may contribute to its therapeutic efficacy in primary pulmonary hypertension. Reproduction Studies in male mice indicate that DP1 activation induces the translocation of SOX9 into the nucleus, thereby signaling for the maturation of Sertoli cells and embryonic gonads. Disruption of this DP1-activated circuit leads to disordered maturation of the male reproductive organs, such as cryptorchidism (i.e. failure of the testes to descend into the scrotum), in mice and, it is suggested, may also do so in humans. Genomics studies Human genomics studies have associated single-nucleotide polymorphism variants of the PTGDR gene with an increased incidence of allergic diseases. Studies in two different populations have replicated associations of the -549T>C, -441C>T, and -197T>C variants with allergic disease, and a study in a single population has associated the -613C>T variation with increased incidences of nasal polyposis, asthma, and/or aspirin sensitivity; the -197T>C and -613C>T variants were also associated with increased incidences of allergic reactions to pollen and mites. A single population study associated the -731A>C variant, and studies in two different populations associated the 6651C>T variant, with increased incidences of asthma and/or bronchial hyper-reactivity. The intronic variants rs17831675, rs17831682, and rs58004654 (now termed rs7709505) have been associated with an increased incidence of asthma in single population studies. A meta-analysis of the −549 C/T, −441 C/T, and −197 C/T variants found that, of these three variants, only −549 C/T conferred susceptibility to asthma in Europeans and that this susceptibility was limited to adults. 
See also Prostaglandin receptors Prostanoid receptors Prostaglandin DP2 receptor Eicosanoid receptor References Further reading External links G protein-coupled receptors
Prostaglandin DP1 receptor
[ "Chemistry" ]
2,343
[ "G protein-coupled receptors", "Signal transduction" ]
10,996,073
https://en.wikipedia.org/wiki/Arachidonic%20acid%205-hydroperoxide
Arachidonic acid 5-hydroperoxide (5-hydroperoxyeicosatetraenoic acid, 5-HPETE) is an intermediate in the metabolism of arachidonic acid by the ALOX5 enzyme in humans or Alox5 enzyme in other mammals. The intermediate is then further metabolized to: a) leukotriene A4 which is then metabolized to the chemotactic factor for leukocytes, leukotriene B4, or to contractors of lung airways, leukotriene C4, leukotriene D4, and leukotriene E4; b) the leukocyte chemotactic factors, 5-hydroxyicosatetraenoic acid and 5-oxo-eicosatetraenoic acid; or c) the specialized pro-resolving mediators of inflammation, lipoxin A4 and lipoxin B4. References Organic peroxides Biochemistry Eicosanoids Fatty acids
Arachidonic acid 5-hydroperoxide
[ "Chemistry", "Biology" ]
212
[ "Biotechnology stubs", "Organic compounds", "Biochemistry stubs", "nan", "Biochemistry", "Organic peroxides" ]
10,997,586
https://en.wikipedia.org/wiki/Binary%20tetrahedral%20group
In mathematics, the binary tetrahedral group, denoted 2T or , is a certain nonabelian group of order 24. It is an extension of the tetrahedral group T or (2,3,3) of order 12 by a cyclic group of order 2, and is the preimage of the tetrahedral group under the 2:1 covering homomorphism Spin(3) → SO(3) of the special orthogonal group by the spin group. It follows that the binary tetrahedral group is a discrete subgroup of Spin(3) of order 24. The complex reflection group named 3(24)3 by G.C. Shephard or 3[3]3 by Coxeter, is isomorphic to the binary tetrahedral group. The binary tetrahedral group is most easily described concretely as a discrete subgroup of the unit quaternions, under the isomorphism , where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.) Elements Explicitly, the binary tetrahedral group is given as the group of units in the ring of Hurwitz integers. There are 24 such units given by with all possible sign combinations. All 24 units have absolute value 1 and therefore lie in the unit quaternion group Sp(1). The convex hull of these 24 elements in 4-dimensional space forms a convex regular 4-polytope called the 24-cell. Properties The binary tetrahedral group, denoted by 2T, fits into the short exact sequence This sequence does not split, meaning that 2T is not a semidirect product of {±1} by T. In fact, there is no subgroup of 2T isomorphic to T. The binary tetrahedral group is the covering group of the tetrahedral group. Thinking of the tetrahedral group as the alternating group on four letters, , we thus have the binary tetrahedral group as the covering group, The center of 2T is the subgroup {±1}. The inner automorphism group is isomorphic to A4, and the full automorphism group is isomorphic to S4. 
The binary tetrahedral group can be written as a semidirect product where Q is the quaternion group consisting of the 8 Lipschitz units and C3 is the cyclic group of order 3 generated by . The group C3 acts on the normal subgroup Q by conjugation. Conjugation by is the automorphism of Q that cyclically rotates , , and . One can show that the binary tetrahedral group is isomorphic to the special linear group SL(2,3) – the group of all matrices over the finite field F3 with unit determinant, with this isomorphism covering the isomorphism of the projective special linear group PSL(2,3) with the alternating group A4. Presentation The group 2T has a presentation given by or equivalently, Generators with these relations are given by with . A Cayley table, with elements ordered by GAP, is:
1 2 r 4 -1 6 7 8 9 10 11 12 13 s 15 16 17 t 19 20 21 22 23 24
2 6 7 8 9 1 13 s 15 16 17 t r 4 -1 20 21 22 23 10 11 12 24 19
r 8 -1 10 11 20 23 9 t 12 1 19 s 21 24 7 16 2 4 15 22 13 17 6
4 16 19 -1 12 13 8 17 23 r 10 1 15 20 21 9 t 7 11 22 6 24 2 s
-1 9 11 12 1 15 17 t 2 19 r 4 21 22 6 23 7 8 10 24 13 s 16 20
6 1 13 s 15 2 r 4 -1 20 21 22 7 8 9 10 11 12 24 16 17 t 19 23
7 s 9 16 17 10 24 15 22 t 2 23 4 11 19 13 20 6 8 -1 12 r 21 1
8 20 23 9 t r s 21 24 7 16 2 -1 10 11 15 22 13 17 12 1 19 6 4
9 15 17 t 2 -1 21 22 6 23 7 8 11 12 1 24 13 s 16 19 r 4 20 10
10 7 4 11 19 s 9 16 17 -1 12 r 24 15 22 t 2 23 1 13 20 6 8 21
11 t 1 19 r 24 16 2 8 4 -1 10 22 13 20 17 23 9 12 6 s 21 7 15
12 23 10 1 4 21 t 7 16 11 19 -1 6 24 13 2 8 17 r s 15 20 9 22
13 4 15 20 21 16 19 -1 12 22 6 24 8 17 23 r 10 1 s 9 t 7 11 2
s 10 24 15 22 7 4 11 19 13 20 6 9 16 17 -1 12 r 21 t 2 23 1 8
15 -1 21 22 6 9 11 12 1 24 13 s 17 t 2 19 r 4 20 23 7 8 10 16
16 13 8 17 23 4 15 20 21 9 t 7 19 -1 12 22 6 24 2 r 10 1 s 11
17 22 2 23 7 19 20 6 s 8 9 16 12 r 10 21 24 15 t 1 4 11 13 -1
t 24 16 2 8 11 22 13 20 17 23 9 1 19 r 6 s 21 7 4 -1 10 15 12
19 17 12 r 10 22 2 23 7 1 4 11 20 6 s 8 9 16 -1 21 24 15 t 13
20 r s 21 24 8 -1 10 11 15 22 13 23 9 t 12 1 19 6 7 16 2 4 17
21 12 6 24 13 23 10 1 4 s 15 20 t 7 16 11 19 -1 22 2 8 17 r 9
22 19 20 6 s 17 12 r 10 21 24 15 2 23 7 1 4 11 13 8 9 16 -1 t
23 21 t 7 16 12 6 24 13 2 8 17 10 1 4 s 15 20 9 11 19 -1 22 r
24 11 22 13 20 t 1 19 r 6 s 21 16 2 8 4 -1 10 15 17 23 9 12 7
There is one element of order 1 (element 1), one element of order 2 (), 8 elements of order 3, 6 elements of order 4 (including ), and 8 elements of order 6 (which include and ). Subgroups The quaternion group consisting of the 8 Lipschitz units forms a normal subgroup of 2T of index 3. This group and the center {±1} are the only nontrivial normal subgroups. All other subgroups of 2T are cyclic groups generated by the various elements, with orders 3, 4, and 6. Higher dimensions Just as the tetrahedral group generalizes to the rotational symmetry group of the n-simplex (as a subgroup of SO(n)), there is a corresponding higher binary group which is a 2-fold cover, coming from the cover Spin(n) → SO(n). The rotational symmetry group of the n-simplex can be considered as the alternating group on n + 1 points, An+1, and the corresponding binary group is a 2-fold covering group. For all higher dimensions except A6 and A7 (corresponding to the 5-dimensional and 6-dimensional simplexes), this binary group is the covering group (maximal cover) and is superperfect, but for dimensions 5 and 6 there is an additional exceptional 3-fold cover, and the binary groups are not superperfect. Usage in theoretical physics The binary tetrahedral group was used in the context of Yang–Mills theory in 1956 by Chen Ning Yang and others. It was first used in flavor physics model building by Paul Frampton and Thomas Kephart in 1994. In 2012 it was shown that a relation between two neutrino mixing angles, derived by using this binary tetrahedral flavor symmetry, agrees with experiment. 
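The description of 2T as the 24 Hurwitz units can be checked directly. The sketch below (an illustration, not part of the original article; function names are ad hoc) builds the 24 units with exact rational arithmetic, verifies closure under the Hamilton product, and reproduces the element-order counts stated above (one element each of orders 1 and 2, eight of order 3, six of order 4, eight of order 6):

```python
from fractions import Fraction
from itertools import product
from collections import Counter

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

units = set()
# The 8 Lipschitz units: +-1, +-i, +-j, +-k
for axis in range(4):
    for s in (1, -1):
        q = [Fraction(0)] * 4
        q[axis] = Fraction(s)
        units.add(tuple(q))
# ...plus the 16 quaternions (+-1 +- i +- j +- k)/2
for signs in product((1, -1), repeat=4):
    units.add(tuple(Fraction(s, 2) for s in signs))

assert len(units) == 24                        # order of 2T
assert all(qmul(a, b) in units                 # closure: a subgroup of Sp(1)
           for a in units for b in units)

def order(q):
    """Multiplicative order of a unit quaternion of finite order."""
    one = (Fraction(1), Fraction(0), Fraction(0), Fraction(0))
    p, n = q, 1
    while p != one:
        p, n = qmul(p, q), n + 1
    return n

print(sorted(Counter(order(q) for q in units).items()))
# -> [(1, 1), (2, 1), (3, 8), (4, 6), (6, 8)]
```

Exact `Fraction` coordinates avoid any floating-point comparison issues when testing set membership.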
See also Binary polyhedral group Binary cyclic group, ⟨n⟩, order 2n Binary dihedral group, ⟨2,2,n⟩, order 4n Binary octahedral group, 2O = ⟨2,3,4⟩, order 48 Binary icosahedral group, 2I = ⟨2,3,5⟩, order 120 Notes References 6.5 The binary polyhedral groups, p. 68 Tetrahedral
Binary tetrahedral group
[ "Physics" ]
1,707
[ "Binary polyhedral groups", "Symmetry", "Rotational symmetry" ]
10,997,598
https://en.wikipedia.org/wiki/Binary%20octahedral%20group
In mathematics, the binary octahedral group, denoted 2O or , is a certain nonabelian group of order 48. It is an extension of the chiral octahedral group O or (2,3,4) of order 24 by a cyclic group of order 2, and is the preimage of the octahedral group under the 2:1 covering homomorphism of the special orthogonal group by the spin group. It follows that the binary octahedral group is a discrete subgroup of Spin(3) of order 48. The binary octahedral group is most easily described concretely as a discrete subgroup of the unit quaternions, under the isomorphism where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.) Elements Explicitly, the binary octahedral group is given as the union of the 24 Hurwitz units with all 24 quaternions obtained from by a permutation of coordinates and all possible sign combinations. All 48 elements have absolute value 1 and therefore lie in the unit quaternion group Sp(1). Properties The binary octahedral group, denoted by 2O, fits into the short exact sequence This sequence does not split, meaning that 2O is not a semidirect product of {±1} by O. In fact, there is no subgroup of 2O isomorphic to O. The center of 2O is the subgroup {±1}, so that the inner automorphism group is isomorphic to O. The full automorphism group is isomorphic to O × Z2. Presentation The group 2O has a presentation given by or equivalently, Quaternion generators with these relations are given by with Subgroups The binary tetrahedral group, 2T, consisting of the 24 Hurwitz units, forms a normal subgroup of index 2. The quaternion group, Q8, consisting of the 8 Lipschitz units forms a normal subgroup of 2O of index 6. The quotient group is isomorphic to S3 (the symmetric group on 3 letters). These two groups, together with the center {±1}, are the only nontrivial normal subgroups of 2O. The generalized quaternion group, Q16, also forms a subgroup of 2O, of index 3. 
This subgroup is self-normalizing, so its conjugacy class has 3 members. There are also isomorphic copies of the binary dihedral groups Q8 and Q12 in 2O. All other subgroups are cyclic groups generated by the various elements (with orders 3, 4, 6, and 8). Higher dimensions The binary octahedral group generalizes to higher dimensions: just as the octahedron generalizes to the orthoplex, the octahedral group in SO(3) generalizes to the hyperoctahedral group in SO(n), which has a binary cover under the map See also Binary polyhedral group binary cyclic group, ⟨n⟩, order 2n binary dihedral group, ⟨2,2,n⟩, order 4n binary tetrahedral group, 2T=⟨2,3,3⟩, order 24 binary icosahedral group, 2I=⟨2,3,5⟩, order 120 References Notes Octahedral
Binary octahedral group
[ "Physics" ]
711
[ "Binary polyhedral groups", "Symmetry", "Rotational symmetry" ]
10,997,772
https://en.wikipedia.org/wiki/CSI-DOS
CSI-DOS is an operating system, created in Samara, for the Soviet Elektronika BK-0011M and Elektronika BK-0011 microcomputers. CSI-DOS did not support the earlier BK-0010. CSI-DOS used its own unique file system and only supported a color graphics video mode. The system supported both hard and floppy drives as well as RAM disks in the computer's memory. It also included software to work with the AY-3-8910 and AY-3-8912 music co-processors, and the Covox Speech Thing. There are a number of games and demos designed specially for the system. The system also included a Turbo Vision-like application programming interface (API) allowing simpler design of user applications, and a graphical file manager called X-Shell. External links Article, contains description of some advantages of CSI-DOS for gaming over other OSs (Russian) Elektronika BK operating systems
CSI-DOS
[ "Technology" ]
203
[ "Operating system stubs", "Computing stubs" ]
10,998,227
https://en.wikipedia.org/wiki/I/O%20request%20packet
I/O request packets (IRPs) are kernel mode structures that are used by Windows Driver Model (WDM) and Windows NT device drivers to communicate with each other and with the operating system. They are data structures that describe I/O requests, and can be equally well thought of as "I/O request descriptors" or similar. Rather than passing a large number of small arguments (such as buffer address, buffer size, I/O function type, etc.) to a driver, all of these parameters are passed via a single pointer to this persistent data structure. The IRP with all of its parameters can be put on a queue if the I/O request cannot be performed immediately. I/O completion is reported back to the I/O manager by passing its address to a routine for that purpose, IoCompleteRequest. The IRP may be repurposed as a special kernel APC object if such is required to report completion of the I/O to the requesting thread. IRPs are typically created by the I/O Manager in response to I/O requests from user mode. However, IRPs are sometimes created by the plug-and-play manager, power manager, and other system components, and can also be created by drivers and then passed to other drivers. The I/O request packet mechanism is also used by Digital Equipment Corporation's VMS operating system, and was used by Digital's RSX-11 family of operating systems before that. An I/O request packet in RSX-11 is called a directive parameter block, as it is also used for system calls other than I/O calls. See also Architecture of Windows NT References External links Whitepaper on Windows I/O model IRP (Windows Drivers) Windows NT architecture Device drivers Data structures by computing platform Windows NT kernel OpenVMS
I/O request packet
[ "Technology" ]
382
[ "OpenVMS", "Computing platforms" ]
10,998,659
https://en.wikipedia.org/wiki/Organ%20console
The pipe organ is played from an area called the console or keydesk, which holds the manuals (keyboards), pedals, and stop controls. In electric-action organs, the console is often movable. This allows for greater flexibility in placement of the console for various activities. Some very large organs, such as the van den Heuvel organ at the Church of St. Eustache in Paris, have more than one console, enabling the organ to be played from several locations depending on the nature of the performance. Controls at the console called stops select which ranks of pipes are used. These controls are generally either draw knobs (or stop knobs), which engage the stops when pulled out from the console; stop tablets (or tilting tablets) which are hinged at their far end; or rocker-tablets, which rock up and down on a central axle. Different combinations of stops change the timbre of the instrument considerably. The selection of stops is called the registration. On modern organs, the registration can be changed instantaneously with the aid of a combination action, usually featuring pistons. Pistons are buttons that can be pressed by the organist to change registrations; they are generally found between the manuals or above the pedalboard. In the latter case they are called toe studs or toe pistons (as opposed to thumb pistons). Most large organs have both preset and programmable pistons, with some of the couplers repeated for convenience as pistons and toe studs. Programmable pistons allow comprehensive and rapid control over changes in registration. Newer organs in the 2000s may have multiple levels of solid-state memory, allowing each piston to be programmed more than once. This allows more than one organist to store their own registrations. Many newer consoles also feature MIDI, which allows the organist to record performances. It also allows an external keyboard to be plugged in, which assists in tuning and maintenance. 
Organization of console controls The layout of an organ console is not standardized, but most organs follow historic conventions for the country and style of organ, so that the layout of stops and pistons is broadly predictable. The stops controlling each division (see Keyboards) are grouped together. Within these, the standard arrangement is for the lowest sounding stops (32 ft or 16 ft) to be placed at the bottom of the columns, with the higher pitched stops placed above this (8 ft, 4 ft, 2 ft, etc.); the mixtures are placed above this (II, III, V, etc.). The stops controlling the reed ranks are placed collectively above these in the same order as above, often with the stop engraving in red. In a horizontal row of stop tabs, a similar arrangement would be applied left to right rather than bottom to top. Among stops of the same pitch, louder stops are generally placed below softer ones (so an Open Diapason would be placed towards the bottom and a Dulciana towards the top), but this is less predictable since it depends on the exact stops available and the space available to arrange stop knobs. Thus, an example stop configuration for a Great division may look like this: The standard position for these columns of stops (assuming drawknobs are used) is for the Choir or Positive division to be on the outside of the player's right, with the Great nearer the center of the console and the music rest. On the left hand side, the Pedal division is on the outside, with the Swell to the inside. Other divisions can be placed on either side, depending on the amount of space available. Manual couplers and octave extensions are placed either within the stop knobs of the divisions that they control, or grouped together above the uppermost manual. The pistons, if present, are placed directly under the manual they control. To be more historically accurate, organs built along historical models will often use older schemes for organizing the keydesk controls. 
Keyboards The organ is played with at least one keyboard, with configurations featuring from two to five keyboards being the most common. A keyboard to be played by the hands is called a manual (from the Latin , "hand"); an organ with four keyboards is said to have four manuals. Most organs also have a pedalboard, a large keyboard to be played by the feet. [Note that the keyboards are never actually referred to as "keyboards", but as "manuals" and "pedalboard", as the case may be.] The collection of ranks controlled by a particular manual is called a division. The names of the divisions of the organ vary geographically and stylistically. Common names for divisions are: Great, Swell, Choir, Solo, Orchestral, Echo, Antiphonal (English-speaking countries) (Germany) (France) (the Netherlands) Like the arrangement of stops, the keyboard divisions are also arranged in a common order. Taking the English names as an example, the main manual (the bottom manual on two-manual instruments or the middle manual on three-manual instruments) is traditionally called the Great, and the upper manual is called the Swell. If there is a third manual, it is usually the Choir and is placed below the Great. (The name "Choir" is a corruption of "Chair", as this division initially came from the practice of placing a smaller, self-contained organ at the rear of the organist's bench. This is also why it is called a Positif, which means portable organ.) If it is included, the Solo manual is usually placed above the Swell. Some larger organs contain an Echo or Antiphonal division, usually controlled by a manual placed above the Solo. German and American organs generally use the same configuration of manuals as English organs. On French instruments, the main manual (the Grand Orgue) is at the bottom, with the Positif and the Récit above it. If there are more manuals, the Bombarde is usually above the Récit and the Grand Choeur is below the Grand Orgue or above the Bombarde. 
In addition to names, the manuals may be numbered with Roman numerals, starting from the bottom. Organists will frequently mark a part in their music with the number of the manual they intend to play it on, and this is sometimes seen in the original composition, typically in pieces written when organs were smaller and only had two or three manuals. It is also common to see couplers labeled as "II to I" (see Couplers below). In some cases, an organ contains more divisions than it does manuals. In these cases, the extra divisions are called floating divisions and are played by coupling them to another manual. Usually this is the case with Echo/Antiphonal and Orchestral divisions, and sometimes it is seen with Solo and Bombarde divisions. Although manuals are almost always horizontal, organs with three or more manuals may incline the uppermost manuals towards the organist to make them easier to reach. Many new chamber organs and harpsichords today feature transposing keyboards, which can slide up or down one or more semitones. This allows these instruments to be played with Baroque instruments at a′ = 415 Hz, modern instruments at a′ = 440 Hz, or Renaissance instruments at a′ = 466 Hz. Modern organs are typically tuned in equal temperament, in which every semitone is 100 cents wide. Many organs that are built today following historical models are still tuned to historically-appropriate temperaments. The range (compass) of the keyboards on an organ has varied widely between different time periods and different nationalities. Portative organs may have a range of only an octave or two, while a few large organs, such as the Boardwalk Hall Auditorium Organ, may have some manual keyboards approaching the size of a modern piano. German organs of the seventeenth and eighteenth centuries featured manual ranges from C to f and pedal ranges from C to d, though some organs only had manual ranges that extended down to F. 
Many French organs of this period had pedal ranges that went down to AA (though this ravalement applied only to the reeds, and may have only included the low AA, not AA-sharp or BB). French organs of the nineteenth century typically had manual ranges from C to g and pedal ranges from C to f; in the twentieth century the manual range was extended to a. The modern console specification recommended by the American Guild of Organists calls for manual keyboards with sixty-one notes (five octaves, from C to c) and pedal keyboards with thirty-two notes (two and a half octaves, from C to g). These ranges apply to the notes written on the page; depending on the registration, the actual range of the instrument may be much greater. Enclosure and expression pedals On most organs, at least one division will be enclosed. On a two-manual (Great and Swell) organ, this will be the Swell division (from where the name comes); on larger organs often part, or all of, the Choir and Solo divisions will be enclosed as well. Enclosure is the term for the device that allows volume control (crescendo and diminuendo) for a manual without the addition or subtraction of stops. All the pipes for the division are surrounded by a box-like structure (often simply called the swell box). One side of the box, usually that facing the console or the listener, will be constructed from vertical or horizontal palettes (wooden flaps) which can be opened or closed from the console. This works in a similar fashion to a Venetian blind. When the box is 'open' it allows more sound to be heard than if it were 'closed'. The most common form of controlling the level of sound released from the enclosed box is by the use of a balanced expression pedal. This is usually placed above the centre of the pedalboard, rotating away from the organist from a near vertical position ("shut") to a near horizontal position ("open"). 
Unlike a car accelerator pedal, a balanced expression pedal remains in whatever position it was last moved to. Historically, the enclosure was operated by the use of the ratchet swell lever, a spring-loaded lever that locks into two or three positions controlling the opening of the shutters. Many ratchet swell devices were replaced by the more advanced balanced pedal because it allows the enclosure to be left at any point, without having to keep a foot on the lever. In addition, an organ may have a crescendo pedal, which would be found to the right of any expression pedals, and similarly balanced. Applying the crescendo pedal will incrementally activate the majority of the stops in the organ, starting with the softest stops and ending with the loudest, excluding only a handful of specialized stops that serve no purpose in a full ensemble. The order in which the stops are activated is usually preset by the organ builder and the crescendo pedal serves as a quick way for the organist to get to a registration that will sound attractive at a given volume without choosing a particular registration, or simply to get to full organ. Most organs also have a piston and/or toe-stud labeled "Tutti" or "Sforzando" that activates full organ. Couplers A device called a coupler allows the pipes of one division to be played simultaneously from an alternative manual. For example, a coupler labelled "Swell to Great" allows the stops of the Swell division to be played by the Great manual. It is unnecessary to couple the pipes of a division to the manual of the same name (for example, coupling the Great division to the Great manual), because those stops play by default on that manual (though this is done with super- and sub-couplers, see below). By using the couplers, the entire resources of an organ can be played simultaneously from one manual. 
On a mechanical-action organ, a coupler may connect one division's manual directly to the other, actually moving the keys of the first manual when the second is played. Some organs feature a device to add the octave above or below what is being played by the fingers. The "super-octave" adds the octave above, the "sub-octave" the octave below. These may be attached to one division only, for example "Swell octave" (the super is often assumed), or they may act as a coupler, for example "Swell octave to Great" which gives the effect while playing on the Great division of adding the Swell division an octave above what is being played. These can be used in conjunction with the standard eight foot coupler. The super-octave may be labelled, for example, Swell to Great 4 ft; in the same manner, the sub-octave may be labelled Choir to Great 16 ft. The inclusion of these couplers allows for greater registrational flexibility and color. Some literature (particularly romantic literature from France) calls explicitly for octaves aigües (super-couplers) to add brightness, or octaves graves (sub-couplers) to add gravity. Some organs feature extended ranks to accommodate the top and bottom octaves when the super- and sub-couplers are engaged (see the discussion under "Unification and extension"). In a similar vein are unison off couplers, which act to "turn off" the stops of a division on its own keyboard. For example, a coupler labelled "Great unison off" would keep the stops of the Great division from sounding, even if they were pulled. Unison off couplers can be used in combination with super- and sub-couplers to create complex registrations that would otherwise not be possible. 
In addition, the unison off couplers can be used with other couplers to change the order of the manuals at the console: engaging the Great to Choir and Choir to Great couplers along with the Great unison off and Choir unison off couplers would have the effect of moving the Great to the bottom manual and the Choir to the middle manual. Divided pedal Another form of coupler found on some large organs is the divided pedal. This is a device that allows the sounds played on the pedals to be split, so the lower octave (principally that of the left foot) plays stops from the pedal division while the upper half (played by the right foot), plays stops from one of the manual divisions. The choice of manual is at the discretion of the performer, as is the 'split point' of the system. The system can be found on the organs of Gloucester Cathedral, having been added by Nicholson & Co (Worcester) Ltd/David Briggs and Truro Cathedral, having been added by Mander Organs/David Briggs, as well as on the new nave console of Ripon Cathedral. The system as found in Truro Cathedral operates like this: Divided Pedal (adjustable dividing point): A# B c c# d d# under the 'divide': Pedal stops and couplers above the 'divide': four illuminated controls: Choir/Swell/Great/Solo to Pedal This allows four different sounds to be played at once (without thumbing down across manuals), for example: Right hand: Great principals 8 ft and 4 ft Left hand: Swell strings Left foot: Pedal 16 ft and 8 ft flutes and Swell to Pedal coupler Right foot: Solo Clarinet via divided pedal coupler Notes and references Pipe organ components Musical instrument parts and accessories
Organ console
[ "Technology" ]
3,145
[ "Pipe organ components", "Components", "Musical instrument parts and accessories" ]
10,999,436
https://en.wikipedia.org/wiki/Process%20performance%20index
In process improvement efforts, the process performance index is an estimate of the process capability of a process during its initial set-up, before it has been brought into a state of statistical control. Formally, if the upper and lower specifications of the process are USL and LSL, the estimated mean of the process is , and the estimated variability of the process (expressed as a standard deviation) is , then the process performance index is defined as: is estimated using the sample standard deviation. Ppk may be negative if the process mean falls outside the specification limits (because the process is producing a large proportion of defective output). Some specifications may only be one sided (for example, strength). For specifications that only have a lower limit, ; for those that only have an upper limit, . Practitioners may also encounter , a metric that does not account for process performance not exactly centered between the specification limits, and therefore is interpreted as what the process would be capable of achieving if it could be centered and stabilized. Interpretation Larger values of Ppk may be interpreted to indicate that a process is more capable of producing output within the specification limits, though this interpretation is controversial. Strictly speaking, from a statistical standpoint, Ppk is meaningless if the process under study is not in control because one cannot reliably estimate the process underlying probability distribution, let alone parameters like and . Furthermore, using this metric of past process performance to predict future performance is highly suspect. From a management standpoint, when an organization is under pressure to set up a new process quickly and economically, Ppk is a convenient metric to gauge how set-up is progressing (increasing Ppk being interpreted as "the process capability is improving"). 
The risk is that Ppk is taken to mean a process is ready for production before all the kinks have been worked out of it. Once a process is put into a state of statistical control, process capability is described using process capability indices, which are formulaically identical to Ppk (and Pp). The indices are named differently in order to call attention to whether the process under study is believed to be in control or not. Example Consider a quality characteristic with a target of 100.00 μm and upper and lower specification limits of 106.00 μm and 94.00 μm, respectively. If, after carefully monitoring the process for a while, it appears that the process is out of control and producing output unpredictably (as depicted in the run chart below), one can't meaningfully estimate its mean and standard deviation. In the example below, the process mean appears to drift upward, settle for a while, and then drift downward. If and are estimated to be 99.61 μm and 1.84 μm, respectively, then That the process mean appears to be unstable is reflected in the relatively low values for Pp and Ppk. The process is producing a significant number of defectives, and, until the cause of the unstable process mean is identified and eliminated, one really can't meaningfully quantify how this process will perform. See also Process (engineering) Process capability Process capability index References Index numbers Statistical process control
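The arithmetic in the example can be reproduced with a short script. This is an illustrative sketch, not from any particular statistics library; the helper names `pp` and `ppk` are ad hoc, and the hard-coded estimates are the values from the example above:

```python
def pp(usl, lsl, sigma):
    """Pp = (USL - LSL) / (6*sigma): capability ignoring centering."""
    return (usl - lsl) / (6 * sigma)

def ppk(usl, lsl, mean, sigma):
    """Ppk = min((USL - mean), (mean - LSL)) / (3*sigma).
    Negative when the estimated mean falls outside the specification limits."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Estimates from the example: target 100.00 um, specs 94.00-106.00 um
usl, lsl = 106.00, 94.00
mean, sigma = 99.61, 1.84   # estimated process mean and standard deviation

print(f"Pp  = {pp(usl, lsl, sigma):.2f}")          # -> Pp  = 1.09
print(f"Ppk = {ppk(usl, lsl, mean, sigma):.2f}")   # -> Ppk = 1.02
```

Because the estimated mean (99.61 μm) sits slightly below the 100.00 μm target, Ppk comes out lower than Pp, quantifying the off-center performance.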
Process performance index
[ "Mathematics", "Engineering" ]
639
[ "Statistical process control", "Mathematical objects", "Engineering statistics", "Index numbers", "Numbers" ]
10,999,508
https://en.wikipedia.org/wiki/Logging%20%28computing%29
In computing, logging is the act of keeping a log of events that occur in a computer system, such as problems, errors, or just information on current operations. These events may occur in the operating system or in other software. A message or log entry is recorded for each such event. These log messages can then be used to monitor and understand the operation of the system, to debug problems, or during an audit. Logging is particularly important in multi-user software, to have a central overview of the operation of the system. In the simplest case, messages are written to a file, called a log file. Alternatively, the messages may be written to a dedicated logging system or to log management software, where they are stored in a database or on a different computer system. Specifically, a transaction log is a log of the communications between a system and the users of that system, or a data collection method that automatically captures the type, content, or time of transactions made by a person from a terminal with that system. For Web searching, a transaction log is an electronic record of interactions that have occurred during a searching episode between a Web search engine and users searching for information on that Web search engine. Many operating systems, software frameworks and programs include a logging system. A widely used logging standard is Syslog, defined in IETF RFC 5424. The Syslog standard enables a dedicated, standardized subsystem to generate, filter, record, and analyze log messages. This relieves software developers of having to design and code their ad hoc logging systems. Types Event logs Event logs record events taking place in the execution of a system that can be used to understand the activity of the system and to diagnose problems. They are essential to understanding system activity, particularly in the case of applications with little user interaction. It can also be useful to combine log file entries from multiple sources. 
Combined with statistical analysis, this approach may yield correlations between related events on different servers. Other solutions employ network-wide querying and reporting. Transaction logs Most database systems maintain some kind of transaction log, which is not mainly intended as an audit trail for later analysis, and is not intended to be human-readable. These logs record changes to the stored data to allow the database to recover from crashes or other data errors and maintain the stored data in a consistent state. Thus, database systems usually have both general event logs and transaction logs. The use of data stored in transaction logs of Web search engines, Intranets, and Web sites can provide valuable insight into understanding the information-searching process of online searchers. This understanding can enlighten information system design, interface development, and devising the information architecture for content collections. Message logs Internet Relay Chat (IRC), instant messaging (IM) programs, peer-to-peer file sharing clients with chat functions, and multiplayer games (especially MMORPGs) commonly have the ability to automatically save textual communication, both public (IRC channel/IM conference/MMO public/party chat messages) and private chat between users, as message logs. Message logs are almost universally plain text files, but IM and VoIP clients (which support textual chat, e.g. Skype) might save them in HTML files or in a custom format to ease reading or enable encryption. In the case of IRC software, message logs often include system/server messages and entries related to channel and user changes (e.g. topic change, user joins/exits/kicks/bans, nickname changes, user status changes), making them more like a combined message/event log of the channel in question, but such a log is not comparable to a true IRC server event log, because it only records user-visible events for the time frame the user spent being connected to a certain channel. 
Instant messaging and VoIP clients often offer the chance to store encrypted logs to enhance the user's privacy. These logs require a password to be decrypted and viewed, and they are often handled by their respective writing application. Some privacy focused messaging services, such as Signal, record minimal logs about users, limiting their information to connection times. Server logs A server log is a log file (or several files) automatically created and maintained by a server consisting of a list of activities it performed. A typical example is a web server log which maintains a history of page requests. The W3C maintains a standard format (the Common Log Format) for web server log files, but other proprietary formats exist. Some servers can log information to computer readable formats (such as JSON) versus the human readable standard. More recent entries are typically appended to the end of the file. Information about the request, including client IP address, request date/time, page requested, HTTP code, bytes served, user agent, and referrer are typically added. This data can be combined into a single file, or separated into distinct logs, such as an access log, error log, or referrer log. However, server logs typically do not collect user-specific information. These files are usually not accessible to general Internet users, only to the webmaster or other administrative person of an Internet service. A statistical analysis of the server log may be used to examine traffic patterns by time of day, day of week, referrer, or user agent. Efficient web site administration, adequate hosting resources and the fine tuning of sales efforts can be aided by analysis of the web server logs. See also - comparing software tracing with event logging - with a focus on security management References Data logging Computer logging
Logging (computing)
[ "Technology" ]
1,136
[ "Computer errors", "Computer logging" ]
10,999,922
https://en.wikipedia.org/wiki/Mean%20shift
Mean shift is a non-parametric feature-space mathematical analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm. Application domains include cluster analysis in computer vision and image processing. History The mean shift procedure is usually credited to work by Fukunaga and Hostetler in 1975. It is, however, reminiscent of earlier work by Schnell in 1964. Overview Mean shift is a procedure for locating the maxima—the modes—of a density function given discrete data sampled from that function. This is an iterative method, and we start with an initial estimate $x$. Let a kernel function $K(x_i - x)$ be given. This function determines the weight of nearby points for re-estimation of the mean. Typically a Gaussian kernel on the distance to the current estimate is used, $K(x_i - x) = e^{-c\|x_i - x\|^2}$. The weighted mean of the density in the window determined by $K$ is $m(x) = \frac{\sum_{x_i \in N(x)} K(x_i - x)\, x_i}{\sum_{x_i \in N(x)} K(x_i - x)}$, where $N(x)$ is the neighborhood of $x$, a set of points for which $K(x_i - x) \neq 0$. The difference $m(x) - x$ is called mean shift in Fukunaga and Hostetler. The mean-shift algorithm now sets $x \leftarrow m(x)$, and repeats the estimation until $m(x)$ converges. Although the mean shift algorithm has been widely used in many applications, a rigorous proof for the convergence of the algorithm using a general kernel in a high dimensional space is still not known. Aliyari Ghassabeh showed the convergence of the mean shift algorithm in one dimension with a differentiable, convex, and strictly decreasing profile function. However, the one-dimensional case has limited real world applications. Also, the convergence of the algorithm in higher dimensions with a finite number of the stationary (or isolated) points has been proved. However, sufficient conditions for a general kernel function to have finite stationary (or isolated) points have not been provided. Gaussian Mean-Shift is an Expectation–maximization algorithm. Details Let data be a finite set $S$ embedded in the $n$-dimensional Euclidean space, $X$. 
Let $K(x)$ be the flat kernel that is the characteristic function of the $\lambda$-ball in $X$, $K(x) = 1$ if $\|x\| \le \lambda$ and $K(x) = 0$ if $\|x\| > \lambda$. In each iteration of the algorithm, $s \leftarrow m(s)$ is performed for all $s \in S$ simultaneously. The first question, then, is how to estimate the density function given a sparse set of samples. One of the simplest approaches is to just smooth the data, e.g., by convolving it with a fixed kernel of width $h$, $f(x) = \sum_i K(x - x_i) = \sum_i k\!\left(\frac{\|x - x_i\|^2}{h^2}\right)$, where $x_i$ are the input samples and $k(r)$ is the kernel function (or Parzen window). $h$ is the only parameter in the algorithm and is called the bandwidth. This approach is known as kernel density estimation or the Parzen window technique. Once we have computed $f(x)$ from the equation above, we can find its local maxima using gradient ascent or some other optimization technique. The problem with this "brute force" approach is that, for higher dimensions, it becomes computationally prohibitive to evaluate $f(x)$ over the complete search space. Instead, mean shift uses a variant of what is known in the optimization literature as multiple restart gradient descent. Starting at some guess for a local maximum, $y_k$, which can be a random input data point $x_1$, mean shift computes the gradient of the density estimate $f(x)$ at $y_k$ and takes an uphill step in that direction. Types of kernels Kernel definition: Let $X$ be the $n$-dimensional Euclidean space, $\mathbb{R}^n$. The norm of $x$ is a non-negative number, $\|x\| \ge 0$. A function $K: X \to \mathbb{R}$ is said to be a kernel if there exists a profile, $k: [0, \infty] \to \mathbb{R}$, such that $K(x) = k(\|x\|^2)$ and k is non-negative. k is non-increasing: $k(a) \ge k(b)$ if $a < b$. k is piecewise continuous and $\int_0^\infty k(r)\, dr < \infty$. The two most frequently used kernel profiles for mean shift are: Flat kernel $k(x) = 1$ if $x \le \lambda$, $0$ if $x > \lambda$. Gaussian kernel $k(x) = e^{-x/(2\sigma^2)}$, where the standard deviation parameter $\sigma$ works as the bandwidth parameter, $h$. Applications Clustering Consider a set of points in two-dimensional space. Assume a circular window centered at $C$ and having radius $r$ as the kernel. Mean-shift is a hill climbing algorithm which involves shifting this kernel iteratively to a higher density region until convergence. Every shift is defined by a mean shift vector. 
The mean shift vector always points toward the direction of the maximum increase in the density. At every iteration the kernel is shifted to the centroid or the mean of the points within it. The method of calculating this mean depends on the choice of the kernel. In this case if a Gaussian kernel is chosen instead of a flat kernel, then every point will first be assigned a weight which will decay exponentially as the distance from the kernel's center increases. At convergence, there will be no direction at which a shift can accommodate more points inside the kernel. Tracking The mean shift algorithm can be used for visual tracking. The simplest such algorithm would create a confidence map in the new image based on the color histogram of the object in the previous image, and use mean shift to find the peak of a confidence map near the object's old position. The confidence map is a probability density function on the new image, assigning each pixel of the new image a probability, which is the probability of the pixel color occurring in the object in the previous image. A few algorithms, such as kernel-based object tracking, ensemble tracking, CAMshift expand on this idea. Smoothing Let $x_i$ and $z_i$, $i = 1, \ldots, n$, be the $d$-dimensional input and filtered image pixels in the joint spatial-range domain. For each pixel: Initialize $j = 1$ and $y_{i,1} = x_i$. Compute $y_{i,j+1}$ according to the mean-shift iteration until convergence, $y = y_{i,c}$. Assign $z_i = (x_i^s, y_{i,c}^r)$. The superscripts s and r denote the spatial and range components of a vector, respectively. The assignment specifies that the filtered data at the spatial location $x_i^s$ will have the range component of the point of convergence $y_{i,c}^r$. Strengths Mean shift is an application-independent tool suitable for real data analysis. Does not assume any predefined shape on data clusters. It is capable of handling arbitrary feature spaces. The procedure relies on choice of a single parameter: bandwidth. The bandwidth/window size 'h' has a physical meaning, unlike k-means. Weaknesses The selection of a window size is not trivial. 
Inappropriate window size can cause modes to be merged, or generate additional "shallow" modes. It often requires using an adaptive window size. Availability Variants of the algorithm can be found in machine learning and image processing packages: ELKI. Java data mining tool with many clustering algorithms. ImageJ. Image filtering using the mean shift filter. mlpack. Efficient dual-tree algorithm-based implementation. OpenCV. Contains a mean-shift implementation via the cvMeanShift method. Orfeo toolbox. A C++ implementation. scikit-learn. A NumPy/Python implementation that uses a ball tree for efficient neighboring-point lookup. See also DBSCAN OPTICS algorithm Kernel density estimation (KDE) Kernel (statistics) References Computer vision Cluster analysis algorithms
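The iteration described in the overview, repeatedly replacing the current estimate with the kernel-weighted mean of nearby points, can be sketched in NumPy. The Gaussian bandwidth, tolerance, and test data below are illustrative choices, not part of any particular package's implementation:

```python
import numpy as np

def mean_shift(x, points, bandwidth=1.0, tol=1e-6, max_iter=500):
    """Move an initial estimate x uphill to a mode of the kernel
    density estimate of `points`, using a Gaussian kernel."""
    x = np.asarray(x, dtype=float)
    points = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        # Gaussian weights on the squared distance to the current estimate
        d2 = np.sum((points - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        # Weighted mean of the points in the window determined by the kernel
        m = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(m - x) < tol:  # converged to a mode
            return m
        x = m  # the "mean shift" step: move the estimate to the weighted mean

# Two well-separated blobs: starting near either one converges to its center.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.1, (50, 2)),
                  rng.normal(5, 0.1, (50, 2))])
mode = mean_shift([0.5, 0.5], data, bandwidth=0.5)
```

Running the procedure from several starting points and grouping the resulting modes is the basis of mean-shift clustering.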
Mean shift
[ "Engineering" ]
1,318
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
10,999,983
https://en.wikipedia.org/wiki/Millimeter%20Anisotropy%20eXperiment%20IMaging%20Array
The Millimeter Anisotropy eXperiment IMaging Array (MAXIMA) experiment was a balloon-borne experiment funded by the United States NSF, NASA, and Department of Energy, and operated by an international collaboration headed by the University of California, to measure the fluctuations of the cosmic microwave background. It consisted of two flights, one in August 1998 and one in June 1999. For each flight the balloon was started at the Columbia Scientific Balloon Facility in Palestine, Texas and flew to an altitude of 40,000 metres for over 8 hours. For the first flight it took data from about 0.3 percent of the sky of the northern region near the Draco constellation. For the second flight, known as MAXIMA-II, twice the area was observed, this time in the direction of Ursa Major. Initially planned together with the BOOMERanG experiment, it split off during the planning phase to take a less risky approach by reducing flying time as well as launching and landing on U.S. territory. Instrumentation A 1.3-metre primary mirror, along with a smaller secondary and tertiary mirror, was used to focus the microwaves onto the feed horns. The feed horns had spectral bands centred at 150, 240 and 420 GHz with a resolution of 10 arcminutes. A bolometer array consisting of sixteen NTD-Ge thermistors measured the incident radiation. The detector array was cooled to 100 mK via a four-stage refrigeration process. Liquid nitrogen cooled the outer layer of radiation shielding and He-4 was used to cool the two other layers to a temperature of 2–3 K. Finally liquid He-3 cooled the array down to operation temperature. The shielding, together with the properties of the feed horns, gave a sensitivity of . Two CCD cameras were used to provide accurate measurements of the telescope's orientation. The first wide-field camera pointed towards Polaris and gave a coarse orientation up to 15 arcminutes. 
The other camera was mounted in the primary focus and gave an accuracy of half an arcminute for stars brighter than 6th magnitude. In total, this produced an overall position tracking accuracy of 10' for the telescope. For pointing the telescope, four motors were used. Results Compared to MAXIMA's competitor, the BOOMERanG experiment, MAXIMA's data covers a smaller part of the sky but with much more detail. By the end of the year 2000 the experiment had provided the most accurate measurements of the Cosmic microwave background (CMB) fluctuations on small angular scales. Using this data it was possible to calculate the first three acoustic peaks from the CMB power spectrum. The results confirmed the standard cosmological model, giving a baryon density of about 4%, which agrees with the density calculated from Big Bang nucleosynthesis. The measurement of the flatness of the Universe also confirmed a major prediction of inflationary cosmology, although BOOMERanG was the first to discover this. See also Cosmic microwave background experiments Observational cosmology References Physics experiments Cosmic microwave background experiments Balloon-borne experiments
Millimeter Anisotropy eXperiment IMaging Array
[ "Physics" ]
622
[ "Experimental physics", "Physics experiments" ]
11,000,160
https://en.wikipedia.org/wiki/CA19-9
Carbohydrate antigen 19-9 (CA19-9), also known as sialyl-Lewis A, is a tetrasaccharide which is usually attached to O-glycans on the surface of cells. It is known to play a role in cell-to-cell recognition processes. It is also a tumor marker used primarily in the management of pancreatic cancer. Structure CA19-9 is the sialylated form of Lewis antigen A. It is a tetrasaccharide with the sequence Neu5Acα2-3Galβ1-3[Fucα1-4]GlcNAcβ. Clinical significance Tumor marker Guidelines from the American Society of Clinical Oncology discourage the use of CA19-9 as a screening test for cancer, particularly pancreatic cancer. The reason is that the test may be falsely normal (false negative) in many cases or abnormally elevated in people who have no cancer (false positive) in others. The main use of CA19-9 is therefore to see whether a pancreatic tumor is secreting it; if that is the case, then the levels should fall when the tumor is treated, and they may rise again if the disease recurs. Therefore it is useful as a surrogate marker for relapse. In people with pancreatic masses, CA19-9 can be useful in distinguishing between cancer and other diseases of the gland. Limitations CA19-9 can be elevated in many types of gastrointestinal cancer, such as colorectal cancer, esophageal cancer and hepatocellular carcinoma. Apart from cancer, elevated levels may occur in pancreatitis, cirrhosis, and diseases of the bile ducts. It can also be elevated in people with obstruction of the bile ducts. In people who lack Lewis antigen A (a blood type antigen on red blood cells), which is about 10% of the white population, CA19-9 is not produced by any cells, even in those with large tumors. This is because of a deficiency of a fucosyltransferase enzyme that is needed to produce Lewis antigen A. History CA19-9 was discovered in the serum of patients with colon cancer and pancreatic cancer in 1981. It was characterized shortly after, and it was found to be carried primarily by mucins. 
See also Sialyl-Lewis X Lewis antigen system References External links CA19-9 at Lab Tests Online CA19-9: analyte monograph - The Association for Clinical Biochemistry and Laboratory Medicine Essentials of Glycobiology 3rd Edition, Chapter 14: "Structures Common to Different Glycans" https://www.ncbi.nlm.nih.gov/books/NBK453042/#_Ch14_s2_ Amino sugars Tetrasaccharides Acetamides Tumor markers Pancreatic cancer
CA19-9
[ "Chemistry", "Biology" ]
605
[ "Amino sugars", "Carbohydrates", "Biomarkers", "Tumor markers", "Chemical pathology" ]
11,000,264
https://en.wikipedia.org/wiki/Homomorphic%20secret%20sharing
In cryptography, homomorphic secret sharing is a type of secret sharing algorithm in which the secret is encrypted via homomorphic encryption. A homomorphism is a transformation from one algebraic structure into another of the same type so that the structure is preserved. Importantly, this means that for every kind of manipulation of the original data, there is a corresponding manipulation of the transformed data. Technique Homomorphic secret sharing is used to transmit a secret to several recipients as follows: Transform the "secret" using a homomorphism. This often puts the secret into a form which is easy to manipulate or store. In particular, there may be a natural way to 'split' the new form as required by step (2). Split the transformed secret into several parts, one for each recipient. The secret must be split in such a way that it can only be recovered when all or most of the parts are combined. (See Secret sharing.) Distribute the parts of the secret to each of the recipients. Combine each of the recipients' parts to recover the transformed secret, perhaps at a specified time. Reverse the homomorphism to recover the original secret. Examples Suppose a community wants to perform an election, using a decentralized voting protocol, but they want to ensure that the vote-counters won't lie about the results. Using a type of homomorphic secret sharing known as Shamir's secret sharing, each member of the community can add their vote to a form that is split into pieces, each piece is then submitted to a different vote-counter. The pieces are designed so that the vote-counters can't predict how any alterations to each piece will affect the whole, thus, discouraging vote-counters from tampering with their pieces. When all votes have been received, the vote-counters combine them, allowing them to recover the aggregate election results. In detail, suppose we have an election with: Two possible outcomes, either yes or no. 
We'll represent those outcomes numerically by +1 and −1, respectively. A number of authorities, k, who will count the votes. A number of voters, n, who will submit votes. In advance, each authority generates a publicly available numerical key, xk. Each voter encodes his vote in a polynomial pn according to the following rules: The polynomial should have degree k − 1, its constant term should be either +1 or −1 (corresponding to voting "yes" or voting "no"), and its other coefficients should be randomly generated. Each voter computes the value of his polynomial pn at each authority's public key xk. This produces k points, one for each authority. These k points are the "pieces" of the vote: If you know all of the points, you can figure out the polynomial pn (and hence you can figure out how the voter voted). However, if you know only some of the points, you can't figure out the polynomial. (This is because you need n points to determine a degree-(n − 1) polynomial. Two points determine a line, three points determine a parabola, etc.) The voter sends each authority the value that was produced using the authority's key. Each authority collects the values that he receives. Since each authority only gets one value from each voter, he can't discover any given voter's polynomial. Moreover, he can't predict how altering the submissions will affect the vote. Once the voters have submitted their votes, each authority k computes and announces the sum Ak of all the values he's received. There are k sums, Ak; when they are combined together, they determine a unique polynomial P(x) – specifically, the sum of all the voter polynomials: P(x) = p1(x) + p2(x) + ... + pn(x). The constant term of P(x) is in fact the sum of all the votes, because the constant term of P(x) is the sum of the constant terms of the individual pn. 
Thus the constant term of P(x) provides the aggregate election result: if it is positive, more people voted for +1 than for −1; if it is negative, more people voted for −1 than for +1. Features This protocol works as long as not all of the k authorities are corrupt — if they were, then they could collaborate to reconstruct P(x) for each voter and also subsequently alter the votes. The protocol requires only t of the authorities to complete the tally; therefore, if there are n > t authorities, up to n − t authorities can be corrupted, which gives the protocol a certain degree of robustness. The protocol manages the IDs of the voters (the IDs were submitted with the ballots) and therefore can verify that only legitimate voters have voted. Under the assumptions on t: A ballot cannot be backtracked to the ID so the privacy of the voters is preserved. A voter cannot prove how they voted. It is impossible to verify a vote. The protocol implicitly prevents corruption of ballots. This is because the authorities have no incentive to change the ballot since each authority has only a share of the ballot and has no knowledge how changing this share will affect the outcome. Vulnerabilities The voter cannot be certain that their vote has been recorded correctly. The authorities cannot be sure the votes were legal and equal, for example the voter can choose a value that is not a valid option (i.e. not in {−1, +1}) such as −20 or 50, which will tilt the results in their favor. See also End-to-end auditable voting systems Electronic voting Certification of voting machines Techniques of potential election fraud through physical tampering with voting machines Preventing Election fraud: Testing and certification of electronic voting Vote counting system E-democracy Secure multi-party computation Mental poker References Functions and mappings Abstract algebra
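The voting scheme described above can be sketched numerically. This is an illustrative toy, not a secure implementation: it uses small integer coefficients and exact rational arithmetic instead of the finite-field arithmetic a real deployment would use, and the three authority keys are arbitrary:

```python
import random
from fractions import Fraction

AUTH_KEYS = [1, 2, 3]        # public keys of the k = 3 authorities (illustrative)
DEG = len(AUTH_KEYS) - 1     # degree k - 1: all k shares are needed to reconstruct

def make_shares(vote):
    """Encode a vote (+1 or -1) as the constant term of a random
    degree-(k-1) polynomial, evaluated at each authority's key."""
    coeffs = [vote] + [random.randint(-100, 100) for _ in range(DEG)]
    return [sum(c * x ** i for i, c in enumerate(coeffs)) for x in AUTH_KEYS]

def tally(all_shares):
    """Each authority sums the shares it received; Lagrange interpolation
    at x = 0 then recovers the constant term of the summed polynomial P(x),
    which is the sum of all the votes."""
    sums = [sum(col) for col in zip(*all_shares)]  # one announced sum per authority
    total = Fraction(0)
    for j, xj in enumerate(AUTH_KEYS):
        lagrange_at_zero = Fraction(1)
        for m, xm in enumerate(AUTH_KEYS):
            if m != j:
                lagrange_at_zero *= Fraction(0 - xm, xj - xm)
        total += sums[j] * lagrange_at_zero
    return int(total)

votes = [+1, +1, -1, +1, -1]                     # aggregate should be +1
result = tally([make_shares(v) for v in votes])
```

Each authority sees only one point per voter polynomial, so no single authority learns any individual vote, yet combining the k announced sums yields the aggregate.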
Homomorphic secret sharing
[ "Mathematics" ]
1,188
[ "Mathematical analysis", "Functions and mappings", "Algebra", "Mathematical objects", "Mathematical relations", "Abstract algebra" ]
11,000,831
https://en.wikipedia.org/wiki/Phylogenetic%20bracketing
Phylogenetic bracketing is a method of inference used in biological sciences. It is used to infer the likelihood of unknown traits in organisms based on their position in a phylogenetic tree. One of the main applications of phylogenetic bracketing is on extinct organisms, known only from fossils, going back to the last universal common ancestor (LUCA). The method is often used for understanding traits that do not fossilize well, such as soft tissue anatomy, physiology and behaviour. By considering the closest and second-closest well-known (usually extant) organisms, traits can be asserted with a fair degree of certainty, though the method is extremely sensitive to problems from convergent evolution. Method Extant Phylogenetic Bracketing requires that the species forming the brackets be extant. More general forms of phylogenetic bracketing do not require this and may use a mix of extant and extinct taxa to form the bracket. These more generalized forms of phylogenetic bracketing have the advantage in that they can be applied to a wider array of phylogenetic cases. However, since these forms of bracketing are also more generalized and may rely on inferring traits in extinct animals, they also offer lower explanatory power compared to the EPB. Extant phylogenetic bracketing (EPB) This is a popular form of phylogenetic bracketing first introduced by Witmer in 1995. It works by comparing an extinct taxon to its nearest living relatives. For example, Tyrannosaurus, a theropod dinosaur, is bracketed by birds and crocodiles. A feature found in both birds and crocodiles would likely be present in Tyrannosaurus, such as the capability to lay an amniotic egg, whereas a feature both birds and crocodiles lack, such as hair, would probably not be present in Tyrannosaurus. Sometimes this approach is used for the reconstruction of ecological traits as well. 
Levels of inference The extant phylogenetic bracket approach allows researchers to infer traits in extinct animals with varying levels of confidence. This is referred to as the levels of inference. There are three levels of inference, with each higher level indicating less confidence for the inference. Inferences based on osteological correlates Level 1 — The inference of a character that leaves a bony signature on the skeleton in both members of the extant sister groups. Example: Saying that Tyrannosaurus rex had an eyeball is a level 1 inference because both extant members of the groups encompassing Tyrannosaurus rex have eyeballs, and eyeball sockets (orbital excavations) in the skull, the homology of which is well established, and the fossils of Tyrannosaurus rex skulls have similar morphology. Level 2 — The inference of a character that leaves a signature on the skeleton of only one of the extant sister groups. For example, saying that Tyrannosaurus rex had air sacs running through its skeleton is a level 2 inference as birds are the only extant sister group to Tyrannosaurus rex to show such air sacs. However the underlying pneumatic fossae, air sacs, in the bones of extant birds are remarkably similar to the cavities seen in the fossil vertebrae of Tyrannosaurus rex. The high degree of similarity between the pneumatic fossae in Tyrannosaurus rex and extant birds makes this a fairly strong inference, yet not as strong as a level 1 inference. Level 3 — The inference of a character that leaves a bony signature on the skeleton but is not present in either extant sister group to the taxon in question. For example, saying that ceratopsian dinosaurs such as Triceratops horridus had horns in life would be a level 3 inference. Neither extant crocodylians, nor extant birds have horns today, but the osteological evidence for horns in ceratopsians is without question. 
Thus a level 3 inference receives no support from the extant phylogenetic bracket, but can still be used with confidence based on the merits of the fossil data itself. Inferences that lack osteological correlates The Extant Phylogenetic Bracket can be used to infer the presence of soft tissues even when those tissues do not interact with the skeleton. As before, there are three different levels of inference. These levels are designated as prime levels. They descend in confidence as they move up a level. Level 1′ — The inference of a character that is shared by both extant sister groups, but does not leave behind a bony signature. For example, saying that Tyrannosaurus rex had a four-chambered heart would be a level 1′ inference as both extant sister groups (Crocodylia and Aves) have four-chambered hearts, but this trait does not leave behind any bony evidence. Level 2′ — The inference of a character that is found in only one sister group to the taxon in question and that does not leave behind any bony evidence. For instance saying that Tyrannosaurus rex was warm-blooded would be a level 2′ inference as extant birds are warm-blooded but extant crocodylians are not. Further, since warm-bloodedness is a physiological trait rather than an anatomical one, it does not leave behind any bony signatures to indicate its presence. Level 3′ — The inference of a character that is found in neither sister group to the taxon in question and that does not leave behind any bony signatures. For example, saying that the large sauropod dinosaur Apatosaurus ajax gave birth to live young similar to mammals and many lizards would be a level 3′ inference as neither crocodylians nor birds give birth to live young and these traits do not leave impressions on the skeleton. In general the primes are always less confident than their underlying levels; however, the confidence between levels is less clear cut. For instance it is unclear if a level 1′ would be less confident than a level 2. 
The same would go for a level 2′ versus a level 3. Example of bracketing with one extinct and one extant group The Late Cretaceous Kryptobaatar and the extant monotremes (family Tachyglossidae and Ornithorhynchidae) all sport extratarsal spurs on their hind feet. Greatly simplified, the phylogeny is as follows, with taxa known to have extratarsal spurs in bold: Assuming that the Kryptobaatar and monotreme spurs are homologous, they were a feature of their mammalian last common ancestor, so we can tentatively conclude that they were present among the Early Cretaceous Eobaataridae—its descendants—as well. Example of bracketing with only extinct groups A fragmentary fossil with a known phylogeny can be compared to more complete fossil specimen to give an idea about general build and habit. The body of labyrinthodonts can usually be inferred to be broad and squat with a sideways compressed tail, although only the skull has been known for many taxa, based on the shape of more well-known labyrinthodont finds. Example of failure using phylogenetic bracketing Phylogenetic bracketing is based on the notion of anatomical conservationism. The general body shape of an animal can be fairly constant through large groups, but not always. The large theropod dinosaur Spinosaurus was until 2014 only known from fragmentary remains, mainly of the skull and vertebrae. It was assumed that the remaining skeleton would look more or less like that of related animals like Baryonyx and Suchomimus, who sport a traditional theropod anatomy of long, strong hind legs and relatively small front legs. A 2014 find, however, included a set of hind legs. The new reconstruction indicate earlier Spinosaurus reconstructions were wrong, and the animal was mainly aquatic and had relatively weak hind legs. It is possible it walked on all fours when on land, the only theropod to do so. See also Cladistics Paleontology References Phylogenetics
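The levels of inference described above amount to a small decision rule: count how many of the two extant bracket groups show the trait, and check whether the trait leaves an osteological correlate. The function below is an illustrative paraphrase of that rule; the function name and boolean encoding are not part of Witmer's formulation:

```python
def inference_level(in_group_a, in_group_b, has_osteological_correlate):
    """Return the extant-phylogenetic-bracket level of inference for a
    trait in an extinct taxon, as a string: '1', '2', '3', or the
    primed variants for traits without a bony signature."""
    shared = int(in_group_a) + int(in_group_b)  # 2, 1, or 0 extant bracket groups
    level = {2: "1", 1: "2", 0: "3"}[shared]
    # Traits that leave no osteological correlate get the less-confident
    # "prime" levels.
    return level if has_osteological_correlate else level + "'"

# Examples from the text:
# eyeballs in Tyrannosaurus: both bracket groups, bony orbit  -> level 1
# four-chambered heart: both bracket groups, no bony evidence -> level 1'
# warm-bloodedness: birds only, no bony evidence              -> level 2'
```

The rule also makes the confidence ordering explicit: each prime level is weaker than its unprimed counterpart, while comparisons such as 1' versus 2 remain ambiguous, as the article notes.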
Phylogenetic bracketing
[ "Biology" ]
1,587
[ "Bioinformatics", "Phylogenetics", "Taxonomy (biology)" ]
11,002,752
https://en.wikipedia.org/wiki/Krogmann%27s%20salt
Krogmann's salt is a linear chain compound consisting of stacks of tetracyanoplatinate. Sometimes described as molecular wires, Krogmann's salt exhibits highly anisotropic electrical conductivity. For this reason, Krogmann's salt and related materials are of some interest in nanotechnology. History and nomenclature Krogmann's salt was first synthesized by Klaus Krogmann in the late 1960s. Krogmann's salt most commonly refers to a platinum metal complex of the formula K2[Pt(CN)4X0.3] where X is usually bromine (or sometimes chlorine). Many other non-stoichiometric metal salts containing the anionic complex [Pt(CN)4]n− can also be characterized. Structure and physical properties Krogmann's salt is a series of partially oxidized tetracyanoplatinate complexes linked by the platinum-platinum bonds on the top and bottom faces of the planar [Pt(CN)4]n− anions. This salt forms infinite stacks in the solid state based on the overlap of the dz2 orbitals. Krogmann's salt has a tetragonal crystal structure with a Pt-Pt distance of 2.880 angstroms, which is much shorter than the metal-metal bond distances in other planar platinum complexes such as Ca[Pt(CN)4]·5H2O (3.36 angstroms), Sr[Pt(CN)4]·5H2O (3.58 angstroms), and Mg[Pt(CN)4]·7H2O (3.16 angstroms). The Pt-Pt distance in Krogmann's salt is only 0.1 angstroms longer than in platinum metal. Each unit cell contains a site for Cl−, corresponding to 0.5 Cl− per Pt. However, this site is only filled 64% of the time, giving 0.32 Cl− per Pt in the actual compound. Because of this, the oxidation number of Pt does not rise above +2.32. Krogmann's salt has no recognizable phase range and is characterized by broad and intense intervalence bands in its electronic spectra. Chemical properties One of the most widely researched properties of Krogmann's salt is its unusual electric conductance. Because of its linear chain structure and overlap of the platinum orbitals, Krogmann's salt is an excellent conductor of electricity. 
This property makes it an attractive material for nanotechnology. Preparation The usual preparation of Krogmann's salt involves the evaporation of a 5:1 molar ratio mixture of the salts K2[Pt(CN)4] and K2[Pt(CN)4Br2] in water to give copper-colored needles of K2[Pt(CN)4]Br0.32·2.6 H2O. 5K2[Pt(CN)4] + K2[Pt(CN)4Br2] + 15.6 H2O → 6K2[Pt(CN)4]Br0.32·2.6 H2O Because excess PtII or PtIV complex crystallizes out with the product when the reactant ratio is changed, the product is well defined, although non-stoichiometric. Uses Neither Krogmann's salt nor any related material has found commercial application. References Cyanides Potassium compounds Platinum compounds Metal halides Electrical conductors Mixed valence compounds Non-stoichiometric compounds
Krogmann's salt
[ "Physics", "Chemistry" ]
755
[ "Mixed valence compounds", "Inorganic compounds", "Non-stoichiometric compounds", "Salts", "Materials", "Electrical conductors", "Metal halides", "Matter" ]
11,002,930
https://en.wikipedia.org/wiki/Applied%20Spectroscopy%20Reviews
Applied Spectroscopy Reviews is a peer-reviewed scientific journal that publishes review articles on all aspects of spectroscopy. Abstracting and indexing The journal is abstracted and indexed by the Science Citation Index and Current Contents/Physical, Chemical & Earth Sciences. References External links Spectroscopy journals Optics journals Taylor & Francis academic journals Bimonthly journals Academic journals established in 1967 English-language journals
Applied Spectroscopy Reviews
[ "Physics", "Chemistry", "Astronomy" ]
75
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Astronomy stubs", "Spectroscopy journals", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
11,003,420
https://en.wikipedia.org/wiki/Brown%20energy
Brown energy or brown power are terms that have been coined to describe energy produced from polluting sources as a contrast to green energy from renewable, non-polluting sources. The term "grey energy" or "gray energy" has been used instead, including by the United Nations. See also Brownout (electricity) References Energy
Brown energy
[ "Physics" ]
67
[ "Energy (physics)", "Energy", "Physical quantities" ]
11,003,581
https://en.wikipedia.org/wiki/Sattvic%20diet
A sattvic diet is a type of plant-based diet within Ayurveda in which foods are classified according to the three yogic qualities (guna), with sattvic foods embodying the quality of sattva. In this system of dietary classification, foods that decrease the energy of the body are considered tamasic, while those that increase the energy of the body are considered rajasic. A sattvic diet is sometimes referred to as a yogic diet in modern literature. A sattvic diet shares the qualities of sattva, some of which include "pure, essential, natural, vital, energy-containing, clean, conscious, true, honest, wise". A sattvic diet can also exemplify ahimsa, the principle of not causing harm to other living beings. This is one reason yogis often follow a vegetarian diet. A sattvic diet is a regimen that places emphasis on seasonal foods, fruits (if one has no sugar problems), nuts, seeds, oils, ripe vegetables, legumes, whole grains, and non-meat-based proteins. Dairy products are recommended when the cow is fed and milked appropriately. In ancient and medieval era Yoga literature, the concept discussed is Mitahara, which literally means "moderation in eating". A sattvic diet is one type of treatment recommended in ayurvedic literature. Etymology Sattvic is derived from sattva, a Sanskrit word. Sattva is a complex concept in Indian philosophy, used in many contexts, and it means one that is "pure, essence, nature, vital, energy, clean, conscious, strong, courage, true, honest, wise, rudiment of life". Sattva is one of three gunas (quality, peculiarity, tendency, attribute, property). The other two qualities are considered to be rajas (agitated, passionate, moving, emotional, trendy) and tamas (dark, destructive, spoiled, ignorant, stale, inertia, unripe, unnatural, weak, unclean). The concept that contrasts with and is opposed to sattva is Tamas. A sattvic diet is thus meant to include food and eating habits that are "pure, essential, natural, vital, energy-giving, clean, conscious, true, honest, wise".
Ancient literature Eating agreeable (sattvic) food and eating in moderation have been emphasized throughout ancient Indian literature. For example, the c. 5th-century Tamil poet-philosopher Valluvar emphasizes this in the 95th chapter of his work, the Tirukkural. He hints, "Assured of digestion and truly hungry, eat with care agreeable food" (verse 944) and "Agreeable food in moderation ensures absence of pain" (verse 945). Yoga includes recommendations on eating habits. Both the Śāṇḍilya Upanishad and Svātmārāma, an Indian yogi who lived during the 15th century CE, state that Mitahara (eating in moderation) is an important part of yoga practice. It is one of the Yamas (virtuous self restraints). These texts, however, while discussing the yoga diet, make no mention of a 'sattvic' diet. In the context of the yoga diet, the virtue of Mitahara is one where the yogi is aware of the quantity and quality of food and drinks he or she consumes, takes neither too much nor too little, and suits it to one's health condition and needs. The application of sattva and tamas concepts to food is a later and relatively new extension to the Mitahara virtue in Yoga literature. Verses 1.57 through 1.63 of Hatha Yoga Pradipika suggest that taste cravings should not drive one's eating habits; rather, the best diet is one that is tasty, nutritious and likable, as well as sufficient to meet the needs of one's body. It recommends that one must "eat only when one feels hungry" and "neither overeat nor eat to completely fill the capacity of one’s stomach; rather leave a quarter portion empty and fill three quarters with quality food and fresh water". The Hathayoga Pradipika suggests that the mitahara regimen of a yogi should avoid foods with excessive amounts of sourness, salt, bitterness, oil, spice burn, unripe vegetables, fermented foods or alcohol. The practice of Mitahara, in Hathayoga Pradipika, includes avoiding stale, impure and tamasic foods, and consuming moderate amounts of fresh, vital and sattvic foods.
Sattvic foods According to ayurveda, sattvic, rajasic, and tamasic foods consist of some combination of any of the five basic elements: prithvi (earth), jala (water), teja (fire), vayu (air), and akash (ether). Nuts and seeds Nuts that may be considered a part of a sattvic diet include raw organic almonds, cashews, and pistachios. Seeds that may be considered a part of a sattvic diet include sunflower and pumpkin seeds. Fruit Fruits that are fresh and organic are considered sattvic. Fresh fruits are preferred to frozen or preserved ones in a sattvic diet. Dairy Dairy products like yogurt and cheese (paneer) must be made that day, from milk obtained that day. Butter must be fresh daily as well, and raw; but ghee (clarified butter) can be aged indefinitely and is well suited for cooking. Freshness is key with dairy. Milk should be freshly milked from a cow. Milk that is not consumed fresh can be refrigerated for one to two days in its raw state, but must be brought to a boil before drinking, and drunk while still hot/warm. Vegetables Most mild vegetables are considered sattvic. Pungent vegetables such as leek, garlic and onion are excluded as tamasic, as are mushrooms, since all fungi are also considered tamasic. Some consider tomatoes, peppers, and aubergines as sattvic, but most consider the Allium family (garlic, onion, leeks, shallots), as well as fungus (yeasts, molds, and mushrooms), as not sattvic. Whole grains Whole grains provide nourishment. Some include organic rice, whole wheat, spelt, oatmeal and barley. Sometimes the grains are lightly roasted before cooking to remove some of their heavy quality. Yeasted breads are not recommended, unless toasted. Wheat and other grains can be sprouted before cooking as well. Legumes Mung beans, lentils, yellow split peas, chickpeas, aduki beans, common beans and bean sprouts are considered sattvic if well prepared. In general, the smaller the bean, the easier it is to digest.
Sweeteners Most yogis use raw honey (often in combination with dairy), jaggery, or raw sugar (not refined). Palm jaggery and coconut palm sugar are other choices. Others use alternative sweeteners, such as stevia or stevia leaf. In some traditions, sugar and/or honey are excluded from the diet, along with all other sweeteners. Spices Sattvic spices are herbs/leaves, including basil and coriander. All other spices are considered either rajasic or tamasic. However, over time, certain Hindu sects have tried to classify a few spices as sattvic. Spices in the new sattvic list may include cardamom (yealakaai in Tamil, Elaichi in Hindi), cinnamon (Ilavangapattai in Tamil, Dalchini in Hindi), cumin (seeragam in Tamil, Jeera in Hindi), fennel (soambu in Tamil, Saunf in Hindi), fenugreek (venthaiyam in Tamil, Methi in Hindi), black pepper (Piper nigrum), also known as 'Kali mirch' in Hindi, fresh ginger (ingi in Tamil, Adrak in Hindi) and turmeric (Manjai in Tamil, Haldi in Hindi). Rajasic spices like red pepper (kudaimilagai in Tamil, 'Shimla mirch' in Hindi) are normally excluded, but are sometimes used in small amounts, both to clear channels blocked by mucus and to counter tamas. Sattvic herbs Other herbs are used to directly support sattva in the mind and in meditation. These include ashwagandha, bacopa, calamus, gotu kola, ginkgo, jatamansi, purnarnava, shatavari, saffron, shankhapushpi, tulsi and rose. Rajasic (stimulant) foods Rajasic food is defined as food that is spicy, hot, fried, or acidic. Rajasic food is believed to lead to sadness, misery, or ailment. Junk food or preserved foods are often categorized as rajasic. Tamasic (sedative) foods Sedative foods, also called static foods, or tamasic foods, are foods whose consumption, according to Yoga, is harmful to both mind and body. Harm to mind includes anything that will lead to a duller, less refined state of consciousness.
Bodily harm includes any foods that will cause detrimental stress to any physical organ, directly or indirectly (via any physical imbalance). Such foods sometimes include: meat, fish, eggs, onion, garlic, scallion, leek, chive, mushroom, alcoholic beverage, durian (fruit), blue cheese, opium, and stale food. Food that has remained for more than three hours (i.e., one yām) is, according to some commentators on chapter 17 of the Bhagavad Gita, in the tamasic mode. Incompatible foods Incompatible foods (viruddha) are considered to be a cause of many diseases. In the Charaka Samhita, a list of food combinations considered incompatible in the sattvic system is given. P.V. Sharma states that such incompatibilities may not affect a person who is strong, exercises sufficiently, and has a good digestive system. Examples of combinations that are considered incompatible include: Salt or anything containing salt with milk (traditionally believed to produce skin diseases). Fruit with milk products. Fish with milk products (traditionally believed to produce toxins) Meat with milk products Sour food or sour fruit with milk products Leafy vegetables with milk products Milk pudding or sweet pudding with rice Mustard oil and curcuma (turmeric) See also Ayurveda Buddhist vegetarianism Diet in Hinduism Jain (Satvika) Lacto vegetarianism Ital Kosher Halal References External links Ayurveda Diets Dietetics Eating behaviors of humans Health promotion Intentional living Vegetarianism Yoga
Sattvic diet
[ "Biology" ]
2,263
[ "Eating behaviors", "Behavior", "Eating behaviors of humans", "Human behavior" ]
11,004,282
https://en.wikipedia.org/wiki/Shailer%20Mathews
Shailer Mathews (1863–1941) was an American liberal Christian theologian, involved with the Social Gospel movement. Career Mathews was born on May 26, 1863, in Portland, Maine, and graduated from Colby College. He was a progressive, advocating social concerns as part of the Social Gospel message, and subjecting biblical texts to scientific study, in opposition to contemporary conservative Christians. He incorporated evolutionary theory into his religious views, noting that the two were not mutually exclusive. He remained a devout Baptist for his entire life, and helped establish the Northern Baptist Convention, serving as its president in 1915. Mathews was a prolific author, served as president of the Chicago Society of Biblical Research twice (in 1898–1899 and 1928–1929), and also served as dean of the Divinity School of the University of Chicago (from 1908 to 1933). An endowed chair in his honor, the Shailer Mathews Professorship at the University of Chicago Divinity School, has recently been held by Franklin I. Gamwell and Hans Dieter Betz. He died on October 23, 1941. His ashes are interred in the crypt of First Unitarian Church of Chicago.
Select publications The Social Teachings of Jesus, 1897 A History of New Testament Times in Palestine, 1899 The French Revolution, 1900 The Messianic Hope in the New Testament, 1905 The Church and the Changing Order, 1907 The Social Gospel, 1909 The Gospel and the modern Man, 1910 The Social Teaching of Jesus, 1910 Scientific Management in Churches, 1911 The Individual and the Social Gospel, 1914 The Spiritual Interpretation of History, 1916 Patriotism and Religion, 1918 The Validity of American Ideals, 1922 The Faith of Modernism, 1924 Jesus on Social Institutions, 1928 The Atonement and the Social Process, 1930 The Growth of the Idea of God, 1931 Immortality and the Cosmic Process, 1933 Christianity and Social Process, 1934 Creative Christianity, 1935 New Faith for Old: An Autobiography, 1936 The Church and the Christian, 1938 Is God Emeritus? 1940 See also Ernest DeWitt Burton References Footnotes Bibliography External links Photograph of Shailer Mathews Mathews House History of New Testament Times in Palestine Macmillan Company, 1899 Guide to the Shailer Mathews Papers 1892-1942 at the University of Chicago Special Collections Research Center 1863 births 1941 deaths Writers from Portland, Maine American theologians American biblical scholars American historians of religion Colby College alumni University of Chicago Divinity School faculty Baptist ministers from the United States Theistic evolutionists
Shailer Mathews
[ "Biology" ]
482
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
11,004,631
https://en.wikipedia.org/wiki/Variable%20nebula
Variable nebulae are reflection nebulae that change in brightness because of changes in the star that illuminates them. See also McNeil's Nebula NGC 1555 (Hind's Variable Nebula) NGC 2261 (Hubble's Variable Nebula) NGC 6729 (R Coronae Australis Nebula) References External links Astrobiscuit: Seeing The Speed Of Light, a video about variable nebulae and the amateur community observing them Nebulae
Variable nebula
[ "Astronomy" ]
87
[ "Nebulae", "Astronomical objects" ]
11,004,775
https://en.wikipedia.org/wiki/High-temperature%20corrosion
High-temperature corrosion is a mechanism of corrosion that takes place when gas turbines, diesel engines, furnaces or other machinery come in contact with hot gas containing certain contaminants. Fuel sometimes contains vanadium compounds or sulfates, which can form low-melting-point compounds during combustion. These molten salts are strongly corrosive to stainless steel and other alloys normally resistant to corrosion at high temperatures. Other types of high-temperature corrosion include high-temperature oxidation, sulfidation, and carbonization. High-temperature oxidation and other corrosion types are commonly modeled using the Deal-Grove model to account for diffusion and reaction dynamics. Sulfates Two types of sulfate-induced hot corrosion are generally distinguished: Type I takes place above the melting point of sodium sulfate, whereas Type II occurs below the melting point of sodium sulfate but in the presence of small amounts of SO3. In Type I, the protective oxide scale is dissolved by the molten salt. Sulfur is released from the salt and diffuses into the metal substrate, forming grey- or blue-colored aluminum or chromium sulfides. With the aluminum or chromium sequestered, the steel cannot rebuild a protective oxide layer after the salt layer has been removed. Alkali sulfates are formed from sulfur trioxide and sodium-containing compounds. As the formation of vanadates is preferred, sulfates are formed only if the amount of alkali metals is higher than the corresponding amount of vanadium. The same kind of attack has been observed for potassium sulfate and magnesium sulfate. Vanadium Vanadium is present in petroleum, especially from Canada, the western United States, Venezuela and the Caribbean region, often bound to porphyrins in organometallic complexes. These complexes become concentrated in the higher-boiling fractions, which then form the base of heavy residual fuel oils.
Residues of sodium, primarily from sodium chloride and spent oil treatment chemicals, are also present in this petroleum fraction. Combusting fuel containing more than 100 ppm of sodium and vanadium will yield ash capable of causing fuel ash corrosion. Most fuels contain small traces of vanadium. The vanadium is oxidized to different vanadates. Molten vanadates present as deposits on metal can flux oxide scales and passivation layers. Furthermore, the presence of vanadium accelerates the diffusion of oxygen through the fused salt layer to the metal substrate. Vanadates can be present in semiconducting or ionic form, where the semiconducting form has significantly higher corrosivity as the oxygen is transported via oxygen vacancies. The ionic form, in contrast, transports oxygen by diffusion of the entire vanadate, which is significantly slower. The semiconducting form is rich in vanadium pentoxide. At high temperatures or when there is a lower availability of oxygen, refractory oxides (vanadium dioxide and vanadium trioxide) form. These more reduced forms of vanadium do not promote corrosion. However, under the conditions most common during combustion, vanadium pentoxide is formed. Together with sodium oxide, vanadates of various composition ratios are formed. Vanadates of composition approximating Na2O.6 V2O5 have the highest corrosion rates at temperatures between 593 °C and 816 °C; at lower temperatures, the vanadate is in the solid state, and at higher temperatures, vanadates with a higher proportion of vanadium contribute the most to higher corrosion rates. The solubility of the passivation layer oxides in the molten vanadates depends on the composition of the oxide layer. Iron(III) oxide is readily soluble in vanadates between Na2O.6 V2O5 and 6 Na2O.V2O5, at temperatures below 705 °C, in amounts up to the mass of the vanadate. This composition range is common for ashes, which aggravates the problem.
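The contamination thresholds quoted in the text lend themselves to a simple screening check. A minimal sketch, assuming the ppm figures can be compared directly; the function name and the ratio tolerance are illustrative choices, not from the source:

```python
# Illustrative fuel-ash corrosion screen based on figures quoted in the text:
# more than 100 ppm combined sodium and vanadium can yield corrosive ash, and
# a Na:V ratio near 1:3 gives the lowest-melting deposit (~535 °C).
def ash_corrosion_flags(na_ppm, v_ppm):
    flags = []
    if na_ppm + v_ppm > 100:
        flags.append("combined Na + V above 100 ppm: fuel ash corrosion possible")
    if v_ppm > 0 and abs(na_ppm / v_ppm - 1 / 3) < 0.05:
        flags.append("Na:V near 1:3: lowest-melting deposit (~535 °C)")
    return flags

print(ash_corrosion_flags(40, 120))  # both conditions triggered
print(ash_corrosion_flags(10, 20))   # clean fuel: no flags
```

Whether the 1:3 ratio in the source is molar or by mass is not stated, so the ratio test here should be taken only as a placeholder for a proper ash-composition calculation.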
Chromium(III) oxide, nickel(II) oxide, and cobalt(II) oxide are less soluble in vanadates; they convert the vanadates to the less corrosive ionic form and their vanadates are tightly adherent, refractory, and act as oxygen barriers. The rate of corrosion caused by vanadates can be lowered by reducing the amount of excess air available for combustion to preferentially form the refractory oxides, using refractory coatings on the exposed surfaces, or using high-chromium alloys, such as 50% Ni/50% Cr or 40% Ni/60% Cr. The presence of sodium in a ratio of 1:3 gives the lowest melting point and must be avoided. This melting point of 535 °C can cause problems on the hot spots of the engine like piston crowns, valve seats, and turbochargers. Lead Lead can form a low-melting slag capable of fluxing protective oxide scales. Lead is more often known for causing stress corrosion cracking in common materials that are exposed to molten lead. The cracking tendency of lead has been known for some time, since most iron based alloys, including those used in steel containers and vessels for molten lead baths, usually fail due to cracking. See also Internal oxidation Deal-Grove model Thermal oxidation Corrosion engineering References External links Hot corrosion information Corrosion
High-temperature corrosion
[ "Chemistry", "Materials_science" ]
1,089
[ "Metallurgy", "Electrochemistry", "Materials degradation", "Corrosion" ]
11,004,852
https://en.wikipedia.org/wiki/NGC%201555
NGC 1555, sometimes known as Hind's Variable Nebula, Sh2-238 or HH 155, is a variable nebula 4 light years across, illuminated by the star T Tauri, located in the constellation Taurus. It is 400 light years away from Earth, and has a magnitude (B) of 9.98. It is also in the second Sharpless catalog as 238. It is a Herbig–Haro object. The nebula was discovered on October 11, 1852, by John Russell Hind. See also NGC 2261 References External links Galaxy Map Sharpless 238 Diffuse nebulae Taurus (constellation) 1555 Sharpless objects Herbig–Haro objects Astronomical objects discovered in 1852
NGC 1555
[ "Astronomy" ]
143
[ "Taurus (constellation)", "Constellations" ]
11,004,859
https://en.wikipedia.org/wiki/Harvard%20Five
The Harvard Five was a group of architects that settled in New Canaan, Connecticut in the 1940s: John M. Johansen, Marcel Breuer, Landis Gores, Philip Johnson and Eliot Noyes. Marcel Breuer was an instructor at the Harvard Graduate School of Design, while Gores, Johansen, Johnson and Noyes were students there. They were all influenced by Walter Gropius, who founded the Bauhaus in 1919, and thereafter became head of the architecture program at Harvard. The small town of New Canaan is nationally recognized for its many examples of modern architecture. Approximately 100 modern homes were built in town, including Johnson's Glass House and the Landis Gores House, and about 20 have been torn down. Four are now listed on the U.S. National Register of Historic Places: the Landis Gores House, the Richard and Geraldine Hodgson House, the Philip Johnson Glass House, and the Noyes House. Other notable architects lived in New Canaan and designed residences for themselves and clients there, including John Black Lee, Hugh Smallen, Victor Christ-Janer, Alan Goldberg, and Carl Koch. References Modernist architects New Canaan, Connecticut Architectural history
Harvard Five
[ "Engineering" ]
245
[ "Architectural history", "Architecture" ]
11,005,376
https://en.wikipedia.org/wiki/Rotational%20modulation%20collimator
Rotational modulation collimators (or RMCs) are a specialization of the modulation collimator, an imaging device invented by Minoru Oda. Devices of this type create images of high-energy X-rays (or other radiation that casts shadows). Since high-energy X-rays are not easily focused, shadow-casting optics of this kind have found applications in various instruments. RMCs selectively block and unblock X-rays in a way which depends on their incoming direction, converting image information into time variations. Various mathematical transformations can then reconstitute the image of the source. The Small Astronomy Satellite 3, launched in 1975, was one orbiting experiment that used RMCs. A more recent satellite that used RMCs was RHESSI. See also Coded aperture Collimator Modulation References RHESSI Imaging Explained Astronomical instruments
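The direction-dependent blocking described above can be illustrated with a toy model. This is a deliberate simplification (a single rotating 1-D slit grid rather than the paired-grid geometry of real instruments such as RHESSI), and every name in it is an assumption for illustration:

```python
import math

# Toy model: a 1-D slit grid rotates in front of a detector; an off-axis
# point source is alternately blocked and unblocked as its shadow sweeps
# across the grid, so the on/off time series encodes the source's offset.
def transmission(offset, pitch, angle):
    # Projected position of the source's shadow along the grid axis
    # at the current rotation angle.
    projected = offset * math.cos(angle)
    phase = (projected / pitch) % 1.0
    return 1.0 if phase < 0.5 else 0.0  # open slit vs. opaque slat

# An on-axis source (offset 0) is never modulated; an off-axis one is.
on_axis = [transmission(0.0, 1.0, 0.05 * k) for k in range(100)]
off_axis = [transmission(2.3, 1.0, 0.05 * k) for k in range(100)]
```

In this sketch the on-axis signal stays constant while the off-axis signal switches between 0 and 1, which is the basic "image information converted into time variations" that the reconstruction transforms then invert.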
Rotational modulation collimator
[ "Astronomy" ]
165
[ "Astronomy stubs", "Astronomical instruments" ]
11,005,995
https://en.wikipedia.org/wiki/Agricultural%20robot
An agricultural robot is a robot deployed for agricultural purposes. The main area of application of robots in agriculture today is at the harvesting stage. Emerging applications of robots or drones in agriculture include weed control, cloud seeding, planting seeds, harvesting, environmental monitoring and soil analysis. According to Verified Market Research, the agricultural robots market is expected to reach $11.58 billion by 2025. General Fruit-picking robots, driverless tractors/sprayers, and sheep-shearing robots are designed to replace human labor. In most cases, many factors have to be considered (e.g., the size and color of the fruit to be picked) before the commencement of a task. Robots can be used for other horticultural tasks such as pruning, weeding, spraying and monitoring. Robots can also be used in livestock applications (livestock robotics) such as automatic milking, washing and castrating. Robots like these have many benefits for the agricultural industry, including a higher quality of fresh produce, lower production costs, and a decreased need for manual labor. They can also be used to automate manual tasks, such as weed or bracken spraying, where the use of tractors and other human-operated vehicles is too dangerous for the operators. Designs The mechanical design consists of an end effector, manipulator, and gripper. Several factors must be considered in the design of the manipulator, including the task, economic efficiency, and required motions. The end effector influences the market value of the fruit, and the gripper's design is based on the crop that is being harvested. End effector An end effector in an agricultural robot is the device found at the end of the robotic arm, used for various agricultural operations. Several different kinds of end effectors have been developed. In an agricultural operation involving grapes in Japan, end effectors are used for harvesting, berry-thinning, spraying, and bagging.
Each was designed according to the nature of the task and the shape and size of the target fruit. For instance, the end effectors used for harvesting were designed to grasp, cut, and push the bunches of grapes. Berry thinning is another operation performed on the grapes, and is used to enhance the market value of the grapes, increase the grapes' size, and facilitate the bunching process. For berry thinning, an end effector consists of an upper, middle, and lower part. The upper part has two plates and a rubber that can open and close. The two plates compress the grapes to cut off the rachis branches and extract the bunch of grapes. The middle part contains a plate of needles, a compression spring, and another plate which has holes spread across its surface. When the two plates compress, the needles punch holes through the grapes. Next, the lower part has a cutting device which can cut the bunch to standardize its length. For spraying, the end effector consists of a spray nozzle that is attached to a manipulator. In practice, producers want to ensure that the chemical liquid is evenly distributed across the bunch. Thus, the design allows for an even distribution of the chemical by making the nozzle move at a constant speed while keeping distance from the target. The final step in grape production is the bagging process. The bagging end effector is designed with a bag feeder and two mechanical fingers. In the bagging process, the bag feeder is composed of slits which continuously supply bags to the fingers in an up and down motion. While the bag is being fed to the fingers, two leaf springs that are located on the upper end of the bag hold the bag open. The bags are produced to contain the grapes in bunches. Once the bagging process is complete, the fingers open and release the bag. This shuts the leaf springs, which seal the bag and prevent it from opening again. Gripper The gripper is a grasping device that is used for harvesting the target crop. 
Design of the gripper is based on simplicity, low cost, and effectiveness. Thus, the design usually consists of two mechanical fingers that are able to move in synchrony when performing their task. Specifics of the design depend on the task that is being performed. For example, in a procedure that required plants to be cut for harvesting, the gripper was equipped with a sharp blade. Manipulator The manipulator allows the gripper and end effector to navigate through their environment. The manipulator consists of four-bar parallel links that maintain the gripper's position and height. The manipulator also can utilize one, two, or three pneumatic actuators. Pneumatic actuators are motors which produce linear and rotary motion by converting compressed air into energy. The pneumatic actuator is the most effective actuator for agricultural robots because of its high power-weight ratio. The most cost efficient design for the manipulator is the single actuator configuration, yet this is the least flexible option. Development The first development of robotics in agriculture can be dated as early as the 1920s, with research to incorporate automatic vehicle guidance into agriculture beginning to take shape. This research led to the advancements between the 1950s and 60s of autonomous agricultural vehicles. The concept was not perfect however, with the vehicles still needing a cable system to guide their path. Robots in agriculture continued to develop as technologies in other sectors began to develop as well. It was not until the 1980s, following the development of the computer, that machine vision guidance became possible. Other developments over the years included the harvesting of oranges using a robot both in France and the US. While robots have been incorporated in indoor industrial settings for decades, outdoor robots for the use of agriculture are considered more complex and difficult to develop. 
This is due to concerns over safety, but also over the complexity of picking crops subject to different environmental factors and unpredictability. Demand in the market There are concerns over whether the agricultural sector can secure the labor it needs. With an aging population, Japan is unable to meet the demands of the agricultural labor market. Similarly, the United States currently depends on a large number of immigrant workers, but between the decrease in seasonal farmworkers and increased government efforts to restrict immigration, it too is unable to meet the demand. Businesses are often forced to let crops rot due to an inability to pick them all by the end of the season. Additionally, there are concerns over the growing population that will need to be fed in the coming years. Because of this, there is strong demand to improve agricultural machinery to make it more cost efficient and viable for continued use. Current applications and trends Much of the current research continues to work towards autonomous agricultural vehicles. This research is based on the advancements made in driver-assist systems and self-driving cars. While robots have already been incorporated in many areas of agricultural farm work, they are still largely missing in the harvest of various crops. This has started to change as companies begin to develop robots that complete more specific tasks on the farm. The biggest concern over robots harvesting crops comes from harvesting soft crops such as strawberries, which can easily be damaged or missed entirely. Despite these concerns, progress in this area is being made. According to Gary Wishnatzki, the co-founder of Harvest Croo Robotics, one of their strawberry pickers currently being tested in Florida can "pick a 25-acre field in just three days and replace a crew of about 30 farm workers". Similar progress is being made in harvesting apples, grapes, and other crops.
In the case of apple harvesting robots, current developments have been too slow to be commercially viable. Modern robots are able to harvest apples at a rate of one every five to ten seconds, while the average human harvests at a rate of one per second. Another goal being set by agricultural companies involves the collection of data. There are rising concerns over the growing population and the decreasing labor available to feed them. Data collection is being developed as a way to increase productivity on farms. AgriData is currently developing new technology to do just this and help farmers better determine the best time to harvest their crops by scanning fruit trees. Applications Robots have many fields of application in agriculture. Some examples and prototypes of robots include the Merlin Robot Milker, Rosphere, Harvest Automation, Orange Harvester, lettuce bot, and weeder. According to David Gardner, chief executive of the Royal Agricultural Society of England, a robot can complete a complicated task if it is repetitive and the robot is allowed to sit in a single place. Furthermore, robots that work on repetitive tasks (e.g. milking) fulfill their role to a consistent and particular standard. One case of large-scale use of robots in farming is the milking robot. It is widespread among British dairy farms because it is efficient and does not need to move. Another field of application is horticulture. One horticultural application is the development of RV100 by Harvest Automation Inc. RV100 is designed to transport potted plants in a greenhouse or outdoor setting. The functions of RV100 in handling and organizing potted plants include spacing capabilities, collection, and consolidation. The benefits of using RV100 for this task include high placement accuracy, autonomous outdoor and indoor function, and reduced production costs.
Benefits of many applications may include ecosystem/environmental benefits, and reduced costs for labor (which may translate to reduced food costs), which may be of special importance for food production in regions where there are labor shortages or where labor is relatively expensive. Benefits also include the general advantages of automation, such as improved productivity and availability, freeing human resources for other tasks, and making work more engaging. Examples and further applications Weed control using lasers (e.g. LaserWeeder by Carbon Robotics) Precision agriculture robots applying low amounts of herbicides and fertilizers with precision while mapping plant locations Picking robots are under development Vinobot and Vinoculer LSU's AgBot Burro, a carrying and path following robot with the potential to expand into picking and phytopathology Harvest Automation is a company founded by former iRobot employees to develop robots for greenhouses Root AI has made a tomato-picking robot for use in greenhouses Strawberry picking robot from Robotic Harvesting and Agrobot Small Robot Company developed a range of small agricultural robots, each one being focused on a particular task (weeding, spraying, drilling holes, ...) 
and controlled by an AI system Agreenculture ecoRobotix has made a solar-powered weeding and spraying robot Blue River Technology has developed a farm implement for a tractor which only sprays plants that require spraying, reducing herbicide use by 90% Casmobot next generation slope mower Fieldrobot Event is a competition in mobile agricultural robotics HortiBot - A Plant Nursing Robot Lettuce Bot - Organic Weed Elimination and Thinning of Lettuce Rice planting robot developed by the Japanese National Agricultural Research Centre ROS Agriculture - Open source software for agricultural robots using the Robot Operating System The IBEX autonomous weed spraying robot for extreme terrain, under development FarmBot, Open Source CNC Farming VAE, under development by an Argentinean ag-tech startup, aims to become a universal platform for multiple agricultural applications, from precision spraying to livestock handling. ACFR RIPPA: for spot spraying ACFR SwagBot; for livestock monitoring ACFR Digital Farmhand: for spraying, weeding and seeding Thorvald - an autonomous modular multi-purpose agricultural robot developed by Saga Robotics. See also E-agriculture Machine vision Agricultural drones References External links Robot Agricultural revolutions Harvest -
Agricultural robot
[ "Physics", "Technology" ]
2,369
[ "Physical systems", "Machines", "Robots" ]
11,006,582
https://en.wikipedia.org/wiki/Strong%20partition%20cardinal
In Zermelo–Fraenkel set theory without the axiom of choice, a strong partition cardinal is an uncountable well-ordered cardinal κ such that every partition of the set [κ]^κ of size-κ subsets of κ into fewer than κ pieces has a homogeneous set of size κ. The existence of strong partition cardinals contradicts the axiom of choice. The axiom of determinacy implies that ℵ1 is a strong partition cardinal. References Cardinal numbers
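In the arrow notation of the partition calculus, the defining property can be written as follows (a standard rendering; the symbols were stripped from the text above during extraction):

```latex
\kappa \longrightarrow (\kappa)^{\kappa}_{\lambda}
\qquad \text{for every cardinal } \lambda < \kappa ,
```

that is, every coloring $c \colon [\kappa]^{\kappa} \to \lambda$ with fewer than $\kappa$ colors admits a homogeneous set $H \subseteq \kappa$ with $|H| = \kappa$ on which $c$ is constant.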
Strong partition cardinal
[ "Mathematics" ]
91
[ "Numbers", "Mathematical objects", "Cardinal numbers", "Infinity" ]
11,006,766
https://en.wikipedia.org/wiki/May%20Berenbaum
May Roberta Berenbaum (born July 22, 1953) is an American entomologist, who is a professor of entomology at University of Illinois Urbana-Champaign. Her research focuses on the chemical interactions between herbivorous insects and their host plants, and the implications of these interactions on the organization of natural communities and the evolution of species. She is particularly interested in nectar, plant phytochemicals, honey and bees, and her research has important implications for beekeeping. She is a member of the National Academy of Sciences and was named editor-in-chief of its journal, Proceedings of the National Academy of Sciences in 2019; she is also a member of the American Philosophical Society (1996), and a fellow of the American Academy of Arts and Sciences (1996). She has held a Maybelle Leland Swanlund Endowed Chair in entomology since 2012, which is the highest title a professor can hold at the University of Illinois. In 2014, she was awarded the National Medal of Science. Early life and education Berenbaum graduated summa cum laude, with a B.S. degree and honors in biology, from Yale University in 1975. Berenbaum discovered an interest in entomology after taking a course on terrestrial arthropods only because it fit her schedule, and found a second passion by taking an elective course in plant biochemistry. After attending a research seminar on chemical ecology by Paul Feeny, she decided to integrate her interests in entomology and botany, and began a PhD supervised by Feeny at Cornell University. Berenbaum received her Ph.D. in ecology and evolutionary biology in 1980. Research Berenbaum is known for her research into the chemistry of honey and its importance as a functional food for bees and wasps in the superfamily Apoidea. As of 2021, approximately 20,000 bee species are known, but there are also signs of declines in bee populations in many countries. 
Berenbaum's research has shown that honey contains phytochemicals that help bees to tolerate cold, resist pesticides, fight off infections, heal wounds, and live longer. Important phytochemicals include p-coumaric acid, quercetin, abscisic acid, anabasine, caffeine, gallic acid, kaempferol, and thymol. Furthermore, sick honeybees will choose among different types of honey and eat the one that contains the phytochemicals that can improve their health. Berenbaum's work has important implications, suggesting changes to practices in the beekeeping industry which may help bees to survive. One conclusion is that floral diversity matters: bees that have the opportunity to make honey from a diverse range of flowers will be healthier bees. As well, beekeepers should leave their bees a variety of different honeys, gathered at different times from different plants, so that they have a "honey pharmacy" to choose from when ill. Career Since 1980, Berenbaum has been a member of the faculty of the department of entomology at the University of Illinois Urbana-Champaign and has served as head of the department since 1992. In 1996, she was elected a Fellow of the American Academy of Arts and Sciences and she was elected a member of the American Philosophical Society in the same year. She served as the editor of Annual Review of Entomology from 1997 until 2018, and was named editor-in-chief of Proceedings of the National Academy of Sciences USA in 2019. She has also chaired two National Research Council committees, the Committee on the Future of Pesticides in U.S. Agriculture (2000) and the Committee on the Status of Pollinators in North America (2007). 
She has written numerous magazine articles, as well as books about insects for the general public: Ninety-nine gnats, nits, and nibblers (1989) Ninety-nine more maggots, mites, and munchers (1993) Bugs in the system: insects and their impact on human affairs (1995) Buzzwords: a scientist muses on sex, bugs, and rock'n roll (2000) Earwig's tail: a modern bestiary of multi-legged legends (2009) Honey, I'm homemade: sweet treats from the beehive across the centuries and around the world (2010) Berenbaum has also gained some measure of fame as the organizer of the Insect Fear Film Festival at the University of Illinois Urbana-Champaign. Personal life Berenbaum is a strict vegetarian in her personal life. She has researched and taught entomophagy to her students, but never eats insects herself. Awards and honors A character in The X-Files was named after her: Dr. Bambi Berenbaum, a famous entomologist and love-interest of Agent Mulder. She is the recipient of the 1996 Entomological Society of America North Central Branch Distinguished Teaching Award Awarded the prestigious Ecological Society of America Robert MacArthur Award in 2004 for outstanding contributions to ecology Berenbaum received the 2009 Public Understanding of Science and Technology Award from the American Association for the Advancement of Science. She is an Honorary Member of the British Ecological Society. In March 2011, she was awarded the University of Southern California's Tyler Prize for Environmental Achievement. In 2012, she was named a Swanlund Chair at the University of Illinois In 2012, she received the Edward O. Wilson Biodiversity Technology Award In November 2014, she had her first new species named after her, a cockroach, Xestoblatta berenbaumae (Evangelista, Kaplan, & Ware 2015). On October 3, 2014, President Barack Obama awarded the National Medal of Science to Berenbaum. She received the medal in a White House ceremony on November 20, 2014. Selected works Berenbaum, M., Miller, J. 
R., & Miller, T. A. (1988). Insect-Plant Interactions. New York: Springer. Berenbaum, M. (1989). Ninety-nine Gnats, Nits, and Nibblers. Urbana: University of Illinois Press. Rosenthal, G. A., & Berenbaum, M. R. (1992). Herbivores: Their Interactions with Secondary Plant Metabolites. (Herbivores.) San Diego: Academic Press. Berenbaum, M. (1993). Ninety-nine More Maggots, Mites, and Munchers. Urbana: University of Illinois Press. Berenbaum, M. (1996). Bugs in the System: Insects and their Impact on Human Affairs. Reading, Mass: Addison-Wesley. Berenbaum, M. R. (2001). Buzzwords: A Scientist Muses on Sex, Bugs, and Rock'n Roll. Washington, DC: Joseph Henry Press. Jeffords, M. R., Post, S. L., Warwick, C., & Berenbaum, M. (2008). Biologists in the Field: Stories, Tales, and Anecdotes from 150 Years of Field Biology. Champaign, Ill: Illinois Natural History Survey. Berenbaum, M. R. (2009). Earwig's Tail - a Modern Bestiary of Multi-legged Legends. Harvard University Press Berenbaum, M. R. (2010). Honey, I'm Homemade: Sweet Treats from the Beehive Across the Centuries and Around the World. Urbana: University of Illinois Press. Sadava, D. E., Hillis, D. M., Heller, H. C., & Berenbaum, M. (2014). Life: The Science of Biology. 10th ed. Berenbaum, M. R. (2023). “Debugging” insect-related conspiracy theories. Annals of the Entomological Society of America, Article saad018. Advance online publication. 
https://doi.org/10.1093/aesa/saad018 References External links May Berenbaum at National Academy of Sciences 1953 births Living people Cornell University College of Agriculture and Life Sciences alumni Fellows of the American Academy of Arts and Sciences Members of the United States National Academy of Sciences University of Illinois Urbana-Champaign faculty Yale College alumni National Medal of Science laureates Fellows of the Ecological Society of America Members of the American Philosophical Society American women entomologists Entomological writers American women biologists Jewish American scientists Jewish biologists Chemical ecologists Annual Reviews (publisher) editors Proceedings of the National Academy of Sciences of the United States of America editors Graduate Women in Science members
May Berenbaum
[ "Chemistry" ]
1,753
[ "Chemical ecologists", "Chemical ecology" ]
11,006,780
https://en.wikipedia.org/wiki/Reverse%20connection
A reverse connection is usually used to bypass a firewall's restrictions on inbound connections. A firewall usually blocks incoming connections on closed ports, but does not block outgoing traffic. In a normal forward connection, a client connects to a server through the server's open port, but in the case of a reverse connection, the client opens the port that the server connects to. The most common way a reverse connection is used is to bypass firewall and router security restrictions. For example, a backdoor running on a computer behind a firewall that blocks incoming connections can easily open an outbound connection to a remote host on the Internet. Once the connection is established, the remote host can send commands to the backdoor. Remote administration tools (RATs) that use a reverse connection usually send SYN packets to the client's IP address. The client listens for these SYN packets and accepts the desired connections. If a computer is sending SYN packets or is connected to the client's computer, the connections can be discovered by using the netstat command or a common port listener like “Active Ports”. If the Internet connection is closed down and an application still tries to connect to remote hosts, the machine may be infected with malware. Keyloggers and other malicious programs are harder to detect once installed, because they connect only once per session. Note that SYN packets by themselves are not necessarily a cause for alarm, as they are a standard part of all TCP connections. There are legitimate uses for reverse connections, for example to allow hosts behind a NAT firewall to be administered remotely. These hosts do not normally have public IP addresses, and so must either have ports forwarded at the firewall, or open reverse connections to a central administration server. References External links Reverse SSH Tunneling Network architecture
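The role reversal can be illustrated with plain TCP sockets. In this minimal loopback sketch (the "uptime" command and its reply are invented for illustration), the administered machine opens the outbound connection, and the controller then sends commands back over that same socket:

```python
import socket
import threading

def controller(server_sock, transcript):
    """Administration side: passively waits for the managed host to dial out,
    then issues a command over the connection the *host* initiated."""
    conn, _addr = server_sock.accept()
    conn.sendall(b"uptime\n")             # command travels over the inbound socket
    transcript.append(conn.recv(1024))    # read the host's reply
    conn.close()

def managed_host(port):
    """Host behind a firewall/NAT: only an *outbound* connection is needed,
    which typical firewalls permit even when all inbound ports are blocked."""
    sock = socket.create_connection(("127.0.0.1", port))
    command = sock.recv(1024)
    if command.strip() == b"uptime":
        sock.sendall(b"up 3 days\n")      # illustrative response
    sock.close()

# Demo runs entirely on loopback; in a real deployment the controller would
# have a public address and the managed host would connect out through NAT.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))             # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

transcript = []
worker = threading.Thread(target=controller, args=(server, transcript))
worker.start()
managed_host(port)
worker.join()
server.close()
print(transcript[0])
```

Note the asymmetry: the listening socket lives on the administration side, so no port forwarding is needed at the managed host's firewall, which is exactly the property both backdoors and legitimate NAT-traversal tools exploit.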
Reverse connection
[ "Technology", "Engineering" ]
361
[ "Network architecture", "Computing stubs", "Computer networks engineering", "Computer network stubs" ]
11,007,302
https://en.wikipedia.org/wiki/Halo%20orbit
A halo orbit is a periodic, three-dimensional orbit associated with one of the L1, L2 or L3 Lagrange points in the three-body problem of orbital mechanics. Although a Lagrange point is just a point in empty space, its peculiar characteristic is that it can be orbited by a Lissajous orbit or by a halo orbit. These can be thought of as resulting from an interaction between the gravitational pull of the two planetary bodies and the Coriolis and centrifugal force on a spacecraft. Halo orbits exist in any three-body system, e.g., a Sun–Earth–orbiting satellite system or an Earth–Moon–orbiting satellite system. Continuous "families" of both northern and southern halo orbits exist at each Lagrange point. Because halo orbits tend to be unstable, station-keeping using thrusters may be required to keep a satellite on the orbit. Most satellites in halo orbit serve scientific purposes, for example space telescopes. Definition and history Robert W. Farquhar first used the name "halo" in 1966 for orbits around L which were made periodic using thrusters. Farquhar advocated using spacecraft in such an orbit beyond the Moon (Earth–Moon ) as a communications relay station for an Apollo mission to the far side of the Moon. A spacecraft in such an orbit would be in continuous view of both the Earth and the far side of the Moon, whereas a Lissajous orbit would sometimes make the spacecraft go behind the Moon. In the end, no relay satellite was launched for Apollo, since all landings were on the near side of the Moon. In 1973 Farquhar and Ahmed Kamel found that when the in-plane amplitude of a Lissajous orbit was large enough there would be a corresponding out-of-plane amplitude that would have the same period, so the orbit ceased to be a Lissajous orbit and became approximately an ellipse. They used analytical expressions to represent these halo orbits; in 1984, Kathleen Howell showed that more precise trajectories could be computed numerically. 
Additionally, she found that for most values of the ratio between the masses of the two bodies (such as the Earth and the Moon) there was a range of stable orbits. The first mission to use a halo orbit was ISEE-3, a joint ESA and NASA spacecraft launched in 1978. It traveled to the Sun–Earth point and remained there for several years. The next mission to use a halo orbit was the Solar and Heliospheric Observatory (SOHO), also a joint ESA/NASA mission to study the Sun, which arrived at Sun–Earth in 1996. It used an orbit similar to ISEE-3's. Although several other missions since then have traveled to Lagrange points, they (e.g. the Gaia astrometric space observatory) typically have used the related non-periodic variations called Lissajous orbits rather than an actual halo orbit. Although halo orbits were well known in the RTBP (restricted three-body problem), it was difficult to obtain halo orbits for the real Earth–Moon system. Translunar halo orbits were first computed in 1998 by M.A. Andreu, who introduced a new model for the motion of a spacecraft in the Earth–Moon–Sun system, called the Quasi-Bicircular Problem (QBCP). In May 2018, Farquhar's original idea was finally realized when China placed the first communications relay satellite, Queqiao, into a halo orbit around the Earth–Moon point. On 3 January 2019, the Chang'e 4 spacecraft landed in the Von Kármán crater on the far side of the Moon, using the Queqiao relay satellite to communicate with the Earth. The James Webb Space Telescope entered a halo orbit around the Sun–Earth point on 24 January 2022. Euclid entered a similar orbit around this point in August 2023. India's space agency ISRO launched Aditya-L1 to study the Sun from a halo orbit around the L point. On 6 January 2024, Aditya-L1, India's first solar mission, successfully entered its final orbit, with a period of approximately 180 days, around the first Sun–Earth Lagrangian point (L1), approximately 1.5 million kilometers from Earth. 
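The ~1.5 million km distance quoted for the Sun–Earth L1 point can be recovered from the Hill-radius approximation r ≈ a(m/3M)^(1/3), which locates L1 and L2 when the smaller mass m is much less than the larger mass M. A back-of-the-envelope sketch (rounded constants, not suitable for mission design):

```python
# Approximate distance from the smaller body to L1 (or L2) via the
# Hill-radius formula r ≈ a * (m / (3 M))**(1/3), valid for m << M.
A_EARTH_SUN_KM = 1.496e8    # semi-major axis of Earth's orbit, km
M_SUN_KG = 1.989e30
M_EARTH_KG = 5.972e24

def lagrange_distance_km(a_km, m_small, m_large):
    """Distance from the smaller body to the collinear L1/L2 points."""
    return a_km * (m_small / (3.0 * m_large)) ** (1.0 / 3.0)

r = lagrange_distance_km(A_EARTH_SUN_KM, M_EARTH_KG, M_SUN_KG)
print(f"Sun-Earth L1 distance ~ {r:.3e} km")  # about 1.5 million km
```

The same formula applied to the Earth–Moon pair gives roughly 60,000 km beyond the Moon, the region where Queqiao's halo orbit lies.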
See also Interplanetary Transport Network Interplanetary spaceflight Lissajous orbit, another Lagrangian-point orbit which generalizes halo orbits. Near-rectilinear halo orbit :Category:Spacecraft using halo orbits Libration point orbit References External links SOHO – The Trip to the L1 Halo Orbit Low Energy Interplanetary Transfers Using Halo Orbit Hopping Method with STK/Astrogator Gaia's Lissajous Type Orbit – a Lissajous-type orbit, i.e., a near-circular ellipse or "halo" Three-body orbits Trojans (astronomy) Lagrangian mechanics
Halo orbit
[ "Physics", "Mathematics" ]
1,007
[ "Lagrangian mechanics", "Classical mechanics", "Dynamical systems" ]
11,007,310
https://en.wikipedia.org/wiki/Pennington%20Biomedical%20Research%20Center
The Pennington Biomedical Research Center is a health science-focused research center in Baton Rouge, Louisiana. It is part of the Louisiana State University System and conducts clinical, basic, and population science research. It is the largest academically-based nutrition research center in the world, with the greatest number of obesity researchers on faculty. The center's over 500 employees occupy several buildings on the campus. The center was designed by the Baton Rouge architect John Desmond. History In 1980, Baton Rouge oilman and philanthropist C. B. "Doc" Pennington and his wife, Irene, provided $125 million to fund construction of the nutritional research center. With a U.S. Department of Defense contract and funding from the Louisiana Public Facilities Authority, Governor Buddy Roemer proclaimed the official opening of the Center in 1988. Dr George A. Bray, a renowned obesity researcher, was recruited to be the first executive director of the center and under his leadership the center reached its present status in the scientific world. Today, the Pennington Biomedical Research Center houses almost 600 employees, 14 research laboratories, 17 core service laboratories, an inpatient and outpatient clinic, two metabolic chambers, a research kitchen, an administrative area, more than $20 million in technologically advanced equipment, and a team of over 80 scientists and physicians with specialties such as molecular biology, genomics and proteomics, neuroanatomy, exercise physiology, biochemistry, psychology, endocrinology, biostatistics and electrophysiology. One of the former employees was the late state legislator Leonard J. Chabert from Terrebonne Parish, the namesake of the Leonard J. Chabert Medical Center in Houma. Research programs and labs The comprehensive research program at the Pennington Biomedical Research Center focuses on ten specific research program areas as outlined below. 
Researchers in these divisions rely on the latest molecular, physiological, clinical, behavioral, and bioinformatics technologies with the ultimate goal of preventing common diseases such as heart disease, diabetes, hypertension, and cancer. Cancer: Clinical Oncology & Metabolism, Cancer Energetics Diabetes: Antioxidant and Gene Regulation, John S McIlhenny Skeletal Muscle Physiology, John S. McIlhenny Botanical Research, Joint Program on Diabetes, Endocrinology and Metabolism, Oxidative Stress and Disease Epidemiology and Prevention: Chronic Disease Epidemiology, Contextual Risk Factors, Nutritional Epidemiology, Physical Activity and Obesity Epidemiology Genomics & Molecular Genetics: Gene-Nutrient Interactions, Genetics of Eating Behavior, Human Genomics, Regulation of Gene Expression Neurobiology: Autonomic Neuroscience, Leptin Signaling in the Brain, Neurobiology & Nutrition, Neurobiology of Metabolic Dysfunction Lab, Neurosignaling, Nutrition & Neural Signaling, Neurodegeneration: Aging and Neurodegeneration, Blood Brain Barrier I, Blood Brain Barrier II, Inflammation and Neurodegeneration, Nutritional Neuroscience and Aging Nutrient Sensing & Signaling: Nutrient Sensing and Adipocyte Signaling Obesity: Behavior Modification Clinical Trials, Behavior Technology Laboratory: Eating Disorders and Obesity, Behavioral Medicine, Infection and Obesity, Ingestive Behavior Laboratory, Pediatric Obesity and Health Behavior, Pharmacology-based Clinical Trials, Reproductive Endocrinology & Women's Health, Women's Health, Eating Behavior, & Smoking Cessation Program Physical Activity & Health: Exercise Biology, Human Physiology, Inactivity Physiology, Physical Activity & Ethnic Minority Health, Preventive Medicine, Walking Behavior Stem Cell & Developmental Biology: Developmental Biology, Epigenetics & Nuclear Reprogramming, Ubiquitin Biology Core services Pennington Biomedical Research Center provides core services in three specific areas (i.e., Basic Science, Clinical 
Science, and Population Science) to support researchers and increase the efficiency and accuracy of investigative procedures. The Basic Science Core allows researchers to use cutting-edge technology in the following areas: comparative biology, animal behavior, animal metabolism, cell and tissue imaging and microscopy, cell culture facilities, genomics, transgenics, proteomics and metabolomics. The Clinical Science Core provides researchers access to clinical research study protocol development tools, Institutional Review Board (IRB) submission, budgeting assistance, and contract support. The Center assists with study participant recruitment, specimen collection, processing and analysis, dietary assessment, exercise testing, psychological review, and phlebotomy. The Core also provides meal preparation using the Metabolic Kitchen and provides support for data collection and storage. The Population Science Core provides researchers with statistical support for studies, data management assistance, and access to the Library and Information Center which provides bibliographic instruction, interlibrary loan processing, and other services. Centers of excellence The National Institutes of Health (NIH) awards center grants to institutions with groups of established researchers working in a variety of scientific research fields. There are three NIH Centers of Excellence at Pennington Biomedical Research Center: the Center for Research on Botanicals and Metabolic Syndrome (BRC), the Center of Biomedical Research Excellence (COBRE), and the Nutrition and Obesity Research Center (NORC) References External links Official website Louisiana State University System Biochemistry research institutes Medical research institutes in the United States Buildings and structures in Baton Rouge, Louisiana Educational institutions established in 1981 1981 establishments in Louisiana Neuroscience research centers in the United States Research institutes in Louisiana
Pennington Biomedical Research Center
[ "Chemistry" ]
1,089
[ "Biochemistry research institutes", "Biochemistry organizations" ]
11,007,779
https://en.wikipedia.org/wiki/PRMT4%20pathway
Protein arginine N-methyltransferase-4 (PRMT4/CARM1) methylation of arginine residues within proteins plays a critical role in transcriptional regulation (see the PRMT4 pathway diagram). PRMT4 binds to the classes of transcriptional coactivators known as p160 and CBP/p300. The modified forms of these proteins are involved in stimulation of gene expression via steroid hormone receptors. Significantly, PRMT4 methylates core histones H3 and H4, which are also targets of the histone acetylase activity of CBP/p300 coactivators. Recruitment of PRMT4 to chromatin by binding to coactivators increases histone methylation and enhances the accessibility of promoter regions for transcription. Methylation of the transcriptional coactivator CBP by PRMT4 inhibits binding to CREB and thereby partitions the limited cellular pool of CBP for steroid hormone receptor interaction. See also DNA methyltransferase Nucleosome Histone Histone-Modifying Enzymes Chromatin Diet and cancer References Gene expression
PRMT4 pathway
[ "Chemistry", "Biology" ]
238
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
11,007,822
https://en.wikipedia.org/wiki/Harish-Chandra%20class
In mathematics, Harish-Chandra's class is a class of Lie groups used in representation theory. Harish-Chandra's class contains all semisimple connected linear Lie groups and is closed under natural operations, most importantly, the passage to Levi subgroups. This closure property is crucial for many inductive arguments in representation theory of Lie groups, whereas the classes of semisimple or connected semisimple Lie groups are not closed in this sense. Definition A Lie group G with the Lie algebra g is said to be in Harish-Chandra's class if it satisfies the following conditions: g is a reductive Lie algebra (the product of a semisimple and abelian Lie algebra). The Lie group G has only a finite number of connected components. The adjoint action of any element of G on g is given by an action of an element of the connected component of the Lie group of Lie algebra automorphisms of the complexification g⊗C. The subgroup Gss of G generated by the image of the semisimple part gss=[g,g] of the Lie algebra g under the exponential map has finite center. References A. W. Knapp, Structure theory of semisimple Lie groups, in Representation theory of Lie groups
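The four conditions can be restated compactly in standard notation (a paraphrase of the definition above, with Ad the adjoint representation and Int the inner automorphism group, as in Knapp):

```latex
\begin{enumerate}
  \item $\mathfrak{g}$ is reductive: $\mathfrak{g} = \mathfrak{z}(\mathfrak{g}) \oplus [\mathfrak{g},\mathfrak{g}]$;
  \item $G/G^{0}$ is finite, where $G^{0}$ is the identity component of $G$;
  \item $\operatorname{Ad}(g) \in \operatorname{Int}(\mathfrak{g} \otimes_{\mathbb{R}} \mathbb{C})$ for all $g \in G$;
  \item the analytic subgroup $G_{\mathrm{ss}}$ with Lie algebra $[\mathfrak{g},\mathfrak{g}]$ has finite center.
\end{enumerate}
```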
Harish-Chandra class
[ "Mathematics" ]
264
[ "Algebra stubs", "Algebra" ]
11,008,314
https://en.wikipedia.org/wiki/Evolutionary%20history%20of%20plants
The evolution of plants has resulted in a wide range of complexity, from the earliest algal mats of unicellular archaeplastids evolved through endosymbiosis, through multicellular marine and freshwater green algae, to spore-bearing terrestrial bryophytes, lycopods and ferns, and eventually to the complex seed-bearing gymnosperms and angiosperms (flowering plants) of today. While many of the earliest groups continue to thrive, as exemplified by red and green algae in marine environments, more recently derived groups have displaced previously ecologically dominant ones; for example, the ascendance of flowering plants over gymnosperms in terrestrial environments. There is evidence that cyanobacteria and multicellular thalloid eukaryotes lived in freshwater communities on land as early as 1 billion years ago, and that communities of complex, multicellular photosynthesizing organisms existed on land in the late Precambrian, around . Evidence of the emergence of embryophyte land plants first occurs in the middle Ordovician (~), and by the middle of the Devonian (~), many of the features recognised in land plants today were present, including roots and leaves. By the late Devonian (~) some free-sporing plants such as Archaeopteris had secondary vascular tissue that produced wood and had formed forests of tall trees. Also by the late Devonian, Elkinsia, an early seed fern, had evolved seeds. Evolutionary innovation continued throughout the rest of the Phanerozoic eon and still continues today. Most plant groups were relatively unscathed by the Permo-Triassic extinction event, although the structures of communities changed. This may have set the scene for the appearance of the flowering plants in the Triassic (~), and their later diversification in the Cretaceous and Paleogene. The latest major group of plants to evolve were the grasses, which became important in the mid-Paleogene, from around . 
The grasses, as well as many other groups, evolved new mechanisms of metabolism to survive the low and warm, dry conditions of the tropics over the last . Colonization of land Divergence Land plants evolved from a group of freshwater green algae, perhaps as early as 850 mya, but algae-like plants might have evolved as early as 1 billion years ago. The closest living relatives of land plants are the charophytes, specifically Charales; if modern Charales are similar to the distant ancestors they share with land plants, this means that the land plants evolved from a branched, filamentous alga dwelling in shallow fresh water, perhaps at the edge of seasonally desiccating pools. However, some recent evidence suggests that land plants might have originated from unicellular terrestrial charophytes similar to extant Klebsormidiophyceae. The alga would have had a haplontic life cycle. It would only very briefly have had paired chromosomes (the diploid condition) when the egg and sperm first fused to form a zygote that would have immediately divided by meiosis to produce cells with half the number of unpaired chromosomes (the haploid condition). Co-operative interactions with fungi may have helped early plants adapt to the stresses of the terrestrial realm. Challenges to land colonization Plants were not the first photosynthesisers on land. Weathering rates suggest that organisms capable of photosynthesis were already living on the land , and microbial fossils have been found in freshwater lake deposits from , but the carbon isotope record suggests that they were too scarce to impact the atmospheric composition until around . These organisms, although phylogenetically diverse, were probably small and simple, forming little more than an algal scum. 
Since lichens initiate the first step in primary ecological succession in contemporary contexts, one hypothesis has been that lichens came on land first and facilitated colonization by plants; however, both molecular phylogenies and the fossil record seem to contradict this. There are multiple potential reasons for why it took so long for land plants to emerge. It could be that atmospheric 'poisoning' prevented eukaryotes from colonising the land prior to the emergence of land plants, or it could simply have taken a great time for the necessary complexity to evolve. A major challenge to land adaptation would have been the absence of appropriate soil. Throughout the fossil record, soil is preserved, giving information on what early soils were like. Before land plants, the soil on land was poor in resources essential for life like nitrogen and phosphorus and had little capacity for holding water. Adaptations to land colonization Evidence of the earliest land plants occurs at about , in lower middle Ordovician rocks from Saudi Arabia and Gondwana in the form of spores known as cryptospores. These spores have walls made of sporopollenin, an extremely decay-resistant material that means they are well-preserved by the fossil record. These spores were produced either singly (monads), in pairs (dyads) or groups of four (tetrads), and their microstructure resembles that of modern liverwort spores, suggesting they share an equivalent grade of organisation. Their walls contain sporopollenin – further evidence of an embryophytic affinity. Trilete spores similar to those of vascular plants appear soon afterwards, in Upper Ordovician rocks about 455 million years ago. Depending exactly when the tetrad splits, each of the four spores may bear a "trilete mark", a Y-shape, reflecting the points at which each cell squashed up against its neighbours. However, this requires that the spore walls be sturdy and resistant at an early stage. 
This resistance is closely associated with having a desiccation-resistant outer wall – a trait only of use when spores must survive out of water. Indeed, even those embryophytes that have returned to the water lack a resistant wall, and thus do not bear trilete marks. A close examination of algal spores shows that none have trilete spores, either because their walls are not resistant enough, or, in those rare cases where they are, because the spores disperse before they are compressed enough to develop the mark or do not fit into a tetrahedral tetrad. The earliest megafossils of land plants were thalloid organisms, which dwelt in fluvial wetlands and are found to have covered most of an early Silurian flood plain. They could only survive when the land was waterlogged. There were also microbial mats. Once plants had reached the land, there were two approaches to dealing with desiccation. Modern bryophytes either avoid it or give in to it, restricting their ranges to moist settings or drying out and putting their metabolism "on hold" until more water arrives, as in the liverwort genus Targionia. Tracheophytes resist desiccation by controlling the rate of water loss. They all bear a waterproof outer cuticle layer wherever they are exposed to air (as do some bryophytes), to reduce water loss, but since a total covering would cut them off from CO2 in the atmosphere, tracheophytes use variable openings, the stomata, to regulate the rate of gas exchange. Tracheophytes also developed vascular tissue to aid in the movement of water within the organisms (see below), and moved away from a gametophyte dominated life cycle (see below). Vascular tissue ultimately also facilitated upright growth without the support of water and paved the way for the evolution of larger plants on land.
Consequences
A global glaciation event called Snowball Earth, from around 720–635 mya in the Cryogenian period, is believed to have been at least partially caused by early photosynthetic organisms, which reduced the concentration of carbon dioxide and decreased the greenhouse effect in the atmosphere, leading to an icehouse climate. Based on molecular clock studies of the previous decade or so, a 2022 study observed that the estimated time for the origin of the multicellular streptophytes (all except the unicellular basal clade Mesostigmatophyceae) fell in the cool Cryogenian, while that of the subsequent separation of streptophytes fell in the warm Ediacaran, which they interpreted as an indication of selective pressure by the glacial period on the photosynthesizing organisms, a group of which succeeded in surviving in relatively warmer environments that remained habitable, subsequently flourishing in the later Ediacaran and Phanerozoic on land as embryophytes. The study also theorized that the unicellular morphology and other unique features of the Zygnematophyceae may reflect further adaptations to a cold-loving lifestyle. The establishment of a land-based flora increased the rate of accumulation of oxygen in the atmosphere, as the land plants produced oxygen as a waste product. When this concentration rose above 13%, around 0.45 billion years ago, wildfires became possible, evident from charcoal in the fossil record. Apart from a controversial gap in the Late Devonian, charcoal has been present ever since. Charcoalification is an important taphonomic mode. Wildfire or burial in hot volcanic ash drives off the volatile compounds, leaving only a residue of pure carbon. This is not a viable food source for fungi, herbivores or detritivores, so it is prone to preservation. It is also robust and can withstand pressure, displaying exquisite, sometimes sub-cellular, detail in remains.
In addition to the advent of charcoal in the rock record, the terrestrialization of plants has made significant contributions to changes in geology and landscapes. The Ordovician and Silurian show a 1.4 times greater proportion of mudrock in the geologic record than the previous 90% of Earth's history, and this increase in mudrock is considered to be a result of land plants retaining muds in a terrestrial setting.

Evolution of life cycles
All multicellular plants have a life cycle comprising two generations or phases. The gametophyte phase has a single set of chromosomes (denoted 1n) and produces gametes (sperm and eggs). The sporophyte phase has paired chromosomes (denoted 2n) and produces spores. The gametophyte and sporophyte phases may be homomorphic, appearing identical in some algae, such as Ulva lactuca, but are very different in all modern land plants, a condition known as heteromorphy. The pattern in plant evolution has been a shift from homomorphy to heteromorphy. The algal ancestors of land plants were almost certainly haplobiontic, being haploid for all their life cycles, with a unicellular zygote providing the 2N stage. All land plants (i.e. embryophytes) are diplobiontic – that is, both the haploid and diploid stages are multicellular. Two trends are apparent: bryophytes (liverworts, mosses and hornworts) have developed the gametophyte as the dominant phase of the life cycle, with the sporophyte becoming almost entirely dependent on it; vascular plants have developed the sporophyte as the dominant phase, with the gametophytes being particularly reduced in the seed plants. It has been proposed that the diploid phase became the dominant phase of the life cycle because diploidy allows masking of the expression of deleterious mutations through genetic complementation.
Thus if one of the parental genomes in the diploid cells contains mutations leading to defects in one or more gene products, these deficiencies could be compensated for by the other parental genome (which nevertheless may have its own defects in other genes). As the diploid phase was becoming predominant, the masking effect likely allowed genome size, and hence information content, to increase without the constraint of having to improve accuracy of replication. The opportunity to increase information content at low cost is advantageous because it permits new adaptations to be encoded. This view has been challenged, with evidence showing that selection is no more effective in the haploid than in the diploid phases of the lifecycle of mosses and angiosperms. There are two competing theories to explain the appearance of a diplobiontic lifecycle. The interpolation theory (also known as the antithetic or intercalary theory) holds that the interpolation of a multicellular sporophyte phase between two successive gametophyte generations was an innovation caused by preceding meiosis in a freshly germinated zygote with one or more rounds of mitotic division, thereby producing some diploid multicellular tissue before finally meiosis produced spores. This theory implies that the first sporophytes bore a very different and simpler morphology to the gametophyte they depended on. This seems to fit well with what is known of the bryophytes, in which a vegetative thalloid gametophyte nurtures a simple sporophyte, which consists of little more than an unbranched sporangium on a stalk. Increasing complexity of the ancestrally simple sporophyte, including the eventual acquisition of photosynthetic cells, would free it from its dependence on a gametophyte, as seen in some hornworts (Anthoceros), and eventually result in the sporophyte developing organs and vascular tissue, and becoming the dominant phase, as in the tracheophytes (vascular plants). 
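The masking argument above lends itself to a toy population-genetics calculation. Assuming (purely for illustration, not a model from the source) a fully recessive deleterious allele at frequency q in a random-mating population, a haploid individual expresses the defect whenever it carries the allele, while a diploid expresses it only when both parental copies are defective:

```python
# Toy illustration of genetic complementation in diploids (assumed model,
# not from the source): a fully recessive deleterious allele at frequency q.

def haploid_expression(q: float) -> float:
    """A haploid expresses the defect whenever it carries the allele."""
    return q

def diploid_expression(q: float) -> float:
    """A diploid expresses the defect only when both copies are defective;
    a functional copy from the other parental genome masks the mutation."""
    return q * q

q = 0.01  # hypothetical allele frequency of 1%
print(haploid_expression(q))  # 0.01
print(diploid_expression(q))  # ~0.0001, a 100-fold reduction in expressed defects
```

The quadratic drop in expressed defects is one way to quantify why diploidy could tolerate a larger, more mutation-prone genome, as the surrounding text argues.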
This theory may be supported by observations that smaller Cooksonia individuals must have been supported by a gametophyte generation. The observed appearance of larger axial sizes, with room for photosynthetic tissue and thus self-sustainability, provides a possible route for the development of a self-sufficient sporophyte phase. The alternative hypothesis, called the transformation theory (or homologous theory), posits that the sporophyte might have appeared suddenly by delaying the occurrence of meiosis until a fully developed multicellular sporophyte had formed. Since the same genetic material would be employed by both the haploid and diploid phases, they would look the same. This explains the behaviour of some algae, such as Ulva lactuca, which produce alternating phases of identical sporophytes and gametophytes. Subsequent adaptation to the desiccating land environment, which makes sexual reproduction difficult, might have resulted in the simplification of the sexually active gametophyte, and elaboration of the sporophyte phase to better disperse the waterproof spores. The tissue of sporophytes and gametophytes of vascular plants such as Rhynia preserved in the Rhynie chert is of similar complexity, which is taken to support this hypothesis. By contrast, modern vascular plants, with the exception of Psilotum, have heteromorphic sporophytes and gametophytes in which the gametophytes rarely have any vascular tissue.

Evolution of plant anatomy

Arbuscular mycorrhizal symbiosis
There is no evidence that early land plants of the Silurian and early Devonian had roots, although fossil evidence of rhizoids occurs for several species, such as Horneophyton. The earliest land plants did not have vascular systems for transport of water and nutrients either.
Aglaophyton, a rootless vascular plant known from Devonian fossils in the Rhynie chert, was the first land plant discovered to have had a symbiotic relationship with fungi which formed arbuscular mycorrhizas, literally "tree-like fungal roots", in a well-defined cylinder of cells (ring in cross section) in the cortex of its stems. The fungi fed on the plant's sugars, in exchange for nutrients generated or extracted from the soil (especially phosphate), to which the plant would otherwise have had no access. Like other rootless land plants of the Silurian and early Devonian, Aglaophyton may have relied on arbuscular mycorrhizal fungi for acquisition of water and nutrients from the soil. The fungi were of the phylum Glomeromycota, a group that probably first appeared 1 billion years ago and still forms arbuscular mycorrhizal associations today with all major land plant groups from bryophytes to pteridophytes, gymnosperms and angiosperms, and with more than 80% of vascular plants. Evidence from DNA sequence analysis indicates that the arbuscular mycorrhizal mutualism arose in the common ancestor of these land plant groups during their transition to land, and it may even have been the critical step that enabled them to colonise the land. Appearing as they did before these plants had evolved roots, mycorrhizal fungi would have assisted plants in the acquisition of water and mineral nutrients such as phosphorus, in exchange for organic compounds which they could not synthesize themselves. Such fungi increase the productivity even of simple plants such as liverworts.

Cuticle, stomata and intercellular spaces
To photosynthesise, plants must absorb CO2 from the atmosphere. However, making the tissues available for CO2 to enter allows water to evaporate, so this comes at a price. Water is lost much faster than CO2 is absorbed, so plants need to replace it. Early land plants transported water apoplastically, within the porous walls of their cells.
Later, they evolved three anatomical features that provided the ability to control the inevitable water loss that accompanied CO2 acquisition. First, a waterproof outer covering or cuticle evolved that reduced water loss. Secondly, variable apertures, the stomata, that could open and close to regulate the amount of water lost by evaporation during CO2 uptake, and thirdly intercellular space between photosynthetic parenchyma cells that allowed improved internal distribution of the CO2 to the chloroplasts. This three-part system provided improved homoiohydry, the regulation of water content of the tissues, providing a particular advantage when water supply is not constant. The high CO2 concentrations of the Silurian and early Devonian, when plants were first colonising land, meant that they used water relatively efficiently. As CO2 was withdrawn from the atmosphere by plants, more water was lost in its capture, and more elegant water acquisition and transport mechanisms evolved. Plants growing upwards into the air needed a system for transporting water from the soil to all the different parts of the above-soil plant, especially to photosynthesising parts. By the end of the Carboniferous, when CO2 concentrations had been reduced to something approaching that of today, around 17 times more water was lost per unit of CO2 uptake. However, even in the "easy" early days, water was always at a premium, and had to be transported to parts of the plant from the wet soil to avoid desiccation. Water can be wicked by capillary action along a fabric with small spaces. In narrow columns of water, such as those within the plant cell walls or in tracheids, when molecules evaporate from one end, they pull the molecules behind them along the channels. Therefore, evaporation alone provides the driving force for water transport in plants.
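The rising water cost of carbon capture can be sketched with a first-order diffusion model (an illustrative assumption, not the source's analysis): if CO2 uptake scales with the ambient concentration and water loss with a fixed humidity gradient, then water lost per unit CO2 fixed scales inversely with ambient CO2. All parameter values below are hypothetical and chosen only to reproduce the ~17-fold ratio quoted above:

```python
# First-order diffusion sketch: CO2 uptake ~ g * ca * drawdown, water loss
# ~ 1.6 * g * vpd, so water lost per unit CO2 scales as 1/ca when the
# draw-down fraction and vapour pressure deficit (vpd) are held fixed.
# 1.6 is the ratio of diffusivities of water vapour and CO2 in air.

def water_per_co2(ca: float, drawdown: float = 0.3, vpd: float = 0.01) -> float:
    """Relative water cost of fixing one unit of CO2 at ambient level ca."""
    return 1.6 * vpd / (ca * drawdown)

early = water_per_co2(ca=17.0)  # hypothetical high-CO2 early atmosphere
late = water_per_co2(ca=1.0)    # end-Carboniferous level, normalised to 1
print(late / early)  # ~17, matching the roughly 17-fold increase quoted above
```

The point of the sketch is only the inverse proportionality: as atmospheric CO2 fell, every unit of carbon became progressively more expensive in water, driving the evolution of better acquisition and transport machinery.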
However, without specialized transport vessels, this cohesion-tension mechanism can cause negative pressures sufficient to collapse water-conducting cells, limiting water transport to no more than a few centimetres, and therefore limiting the size of the earliest plants.

Xylem
To be free from the constraints of small size and constant moisture that the parenchymatic transport system inflicted, plants needed a more efficient water transport system. As plants grew upwards, specialised water transport vascular tissues evolved, first in the form of simple hydroids of the type found in the setae of moss sporophytes. These simple elongated cells were dead and water-filled at maturity, providing a channel for water transport, but their thin, unreinforced walls would collapse under modest water tension, limiting the plant height. Xylem tracheids, wider cells with lignin-reinforced cell walls that were more resistant to collapse under the tension caused by water stress, occur in more than one plant group by the mid-Silurian, and may have a single evolutionary origin, possibly within the hornworts, uniting all tracheophytes. Alternatively, they may have evolved more than once. Much later, in the Cretaceous, tracheids were followed by vessels in flowering plants. As water transport mechanisms and waterproof cuticles evolved, plants could survive without being continually covered by a film of water. This transition from poikilohydry to homoiohydry opened up new potential for colonisation. The early Devonian pretracheophytes Aglaophyton and Horneophyton have unreinforced water transport tubes with wall structures very similar to moss hydroids, but they grew alongside several species of tracheophytes, such as Rhynia gwynne-vaughanii, that had xylem tracheids that were well reinforced by bands of lignin. The earliest macrofossils known to have xylem tracheids are small, mid-Silurian plants of the genus Cooksonia.
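The capillary physics behind these narrow conduits can be put in rough numbers. A minimal sketch, assuming a fully wetting cylindrical channel and the standard capillary-rise formula h = 2γ/(ρgr) (textbook physics, not figures from the source):

```python
# Rough capillary-rise estimate for a fully wetting channel of radius r:
# h = 2 * gamma / (rho * g * r). Illustrative values for pure water at ~20 C.

GAMMA = 0.0728  # surface tension of water, N/m
RHO = 1000.0    # density of water, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2

def capillary_rise_m(radius_m: float) -> float:
    """Equilibrium height (metres) a water column can be held at radius r."""
    return 2.0 * GAMMA / (RHO * G * radius_m)

# A nanometre-scale cell-wall pore can in principle sustain a kilometres-tall
# column, while a 20-micrometre tracheid-sized channel alone holds under 1 m.
print(capillary_rise_m(5e-9))   # thousands of metres
print(capillary_rise_m(20e-6))  # well under a metre
```

The contrast illustrates the design problem the text describes: the menisci in tiny cell-wall pores generate the tension, but the resulting negative pressures are large enough to crush unreinforced conducting cells, which is why lignified tracheid walls mattered.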
However, thickened bands on the walls of isolated tube fragments are apparent from the early Silurian onwards. Plants continued to innovate ways of reducing the resistance to flow within their cells, progressively increasing the efficiency of their water transport, and to increase the resistance of the tracheids to collapse under tension. During the early Devonian, maximum tracheid diameter increased with time, but may have plateaued in the zosterophylls by the mid-Devonian. Overall transport rate also depends on the overall cross-sectional area of the xylem bundle itself, and some mid-Devonian plants, such as the trimerophytes, had much larger steles than their early ancestors. While wider tracheids provided higher rates of water transport, they increased the risk of cavitation, the formation of air bubbles resulting from the breakage of the water column under tension. Small pits in tracheid walls allow water to by-pass a defective tracheid while preventing air bubbles from passing through, but at the cost of restricted flow rates. By the Carboniferous, gymnosperms had developed bordered pits, valve-like structures that allow high-conductivity pits to seal when one side of a tracheid is depressurized. Tracheids have non-perforated end walls with pits, which impose a great deal of resistance on water flow, but may have had the advantage of isolating air embolisms caused by cavitation or freezing. Vessels first evolved during the dry, low CO2 periods of the Late Permian, in the horsetails, ferns and Selaginellales independently, and later appeared in the mid-Cretaceous in gnetophytes and angiosperms. Vessel members are open tubes with no end walls, and are arranged end to end to operate as if they were one continuous vessel. Vessels allowed the same cross-sectional area of wood to transport much more water than tracheids.
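The outsized advantage of wider conduits follows from laminar pipe flow. A sketch using the Hagen–Poiseuille relation (a standard idealisation; real tracheids with their pitted end walls conduct less than this ideal):

```python
import math

# Hagen-Poiseuille volumetric flow through an ideal cylindrical conduit:
# Q = pi * r**4 * dP / (8 * mu * L). Flow scales with the FOURTH power of
# radius, which is why small diameter increases pay off so strongly.

def conduit_flow(radius: float, dP: float = 1.0,
                 mu: float = 1.0e-3, L: float = 1.0) -> float:
    """Flow through one conduit (arbitrary units; mu ~ viscosity of water)."""
    return math.pi * radius**4 * dP / (8.0 * mu * L)

one_wide = conduit_flow(20e-6)        # a single vessel-like conduit
four_narrow = 4 * conduit_flow(10e-6) # same total cross-sectional area
print(one_wide / four_narrow)         # ~4: same wood area, quadruple the flow
```

Doubling the radius multiplies flow per conduit by sixteen, so for the same cross-sectional area of wood, one wide vessel moves about four times the water of four narrow tracheids, which is exactly the trade-off the text describes: more transport per unit wood, at greater cavitation risk.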
This allowed plants to fill more of their stems with structural fibres, and also opened a new niche to vines, which could transport water without being as thick as the tree they grew on. Despite these advantages, tracheid-based wood is a lot lighter, thus cheaper to make, as vessels need to be much more reinforced to avoid cavitation. Once plants had evolved this level of control over water evaporation and water transport, they were truly homoiohydric, able to extract water from their environment through root-like organs rather than relying on a film of surface moisture, enabling them to grow to much greater size, but as a result of their increased independence from their surroundings, most vascular plants lost their ability to survive desiccation – a costly trait to lose. In early land plants, support was mainly provided by turgor pressure, particularly of the outer layer of cells known as the sterome tracheids, and not by the xylem, which was too small, too weak and in too central a position to provide much structural support. Plants with secondary xylem that had appeared by the mid-Devonian, such as the trimerophytes and progymnosperms, had much larger vascular cross sections producing strong woody tissue.

Endodermis
An endodermis may have evolved in the earliest plant roots during the Devonian, but the first fossil evidence for such a structure is Carboniferous. The endodermis in the roots surrounds the water transport tissue and regulates ion exchange between the groundwater and the tissues and prevents unwanted pathogens etc. from entering the water transport system. The endodermis can also provide an upwards pressure, forcing water out of the roots when transpiration is not enough of a driver.

Evolution of plant morphology

Leaves
Leaves are the primary photosynthetic organs of a modern plant.
The origin of leaves was almost certainly triggered by falling concentrations of atmospheric CO2 during the Devonian period, increasing the efficiency with which carbon dioxide could be captured for photosynthesis. Leaves evolved more than once. Based on their structure, they are classified into two types: microphylls, which lack complex venation and may have originated as spiny outgrowths known as enations, and megaphylls, which are large and have complex venation that may have arisen from the modification of groups of branches. It has been proposed that these structures arose independently. Megaphylls, according to Walter Zimmerman's telome theory, have evolved from plants that showed a three-dimensional branching architecture, through three transformations: overtopping, which led to the lateral position typical of leaves; planation, which involved formation of a planar architecture; and webbing or fusion, which united the planar branches, thus leading to the formation of a proper leaf lamina. All three steps happened multiple times in the evolution of today's leaves. It is widely believed that the telome theory is well supported by fossil evidence. However, Wolfgang Hagemann questioned it for morphological and ecological reasons and proposed an alternative theory. Whereas according to the telome theory the most primitive land plants have a three-dimensional branching system of radially symmetrical axes (telomes), according to Hagemann's alternative the opposite is proposed: the most primitive land plants that gave rise to vascular plants were flat, thalloid, leaf-like, without axes, somewhat like a liverwort or fern prothallus. Axes such as stems and roots evolved later as new organs.
Rolf Sattler proposed an overarching process-oriented view that leaves some limited room for both the telome theory and Hagemann's alternative, and in addition takes into consideration the whole continuum between dorsiventral (flat) and radial (cylindrical) structures that can be found in fossil and living land plants. This view is supported by research in molecular genetics. Thus, James (2009) concluded that "it is now widely accepted that... radiality [characteristic of axes such as stems] and dorsiventrality [characteristic of leaves] are but extremes of a continuous spectrum. In fact, it is simply the timing of the KNOX gene expression". Before the evolution of leaves, plants had the photosynthetic apparatus on the stems, which they retain even though leaves have largely assumed that job. Today's megaphyll leaves probably became commonplace some 360 mya, about 40 million years after the simple leafless plants had colonized the land in the Early Devonian. This spread has been linked to the fall in atmospheric carbon dioxide concentrations in the Late Paleozoic era, associated with a rise in density of stomata on the leaf surface. This would have resulted in greater transpiration rates and gas exchange, but especially at high CO2 concentrations, large leaves with fewer stomata would have heated to lethal temperatures in full sunlight. Increasing the stomatal density allowed for a better-cooled leaf, thus making its spread feasible, but increased CO2 uptake at the expense of decreased water use efficiency. The rhyniophytes of the Rhynie chert consisted only of slender, unornamented axes. The early to middle Devonian trimerophytes may be considered leafy. This group of vascular plants are recognisable by their masses of terminal sporangia, which adorn the ends of axes which may bifurcate or trifurcate. Some organisms, such as Psilophyton, bore enations. These are small, spiny outgrowths of the stem, lacking their own vascular supply.
The zosterophylls were already important in the late Silurian, much earlier than any rhyniophytes of comparable complexity. This group, recognisable by their kidney-shaped sporangia which grew on short lateral branches close to the main axes, sometimes branched in a distinctive H-shape. Many zosterophylls bore enations (small tissue outgrowths on the surface with variable morphologies) on their axes, but none of these had a vascular trace. The first evidence of vascularised enations occurs in a fossil clubmoss known as Baragwanathia, which had already appeared in the fossil record in the Late Silurian. In this organism, these leaf traces continue into the leaf to form their mid-vein. One theory, the "enation theory", holds that the microphyllous leaves of clubmosses developed by outgrowths of the protostele connecting with existing enations. The leaves of the Rhynie genus Asteroxylon, which was preserved in the Rhynie chert almost 20 million years later than Baragwanathia, had a primitive vascular supply – in the form of leaf traces departing from the central protostele towards each individual "leaf". Asteroxylon and Baragwanathia are widely regarded as primitive lycopods, a group still extant today, represented by the quillworts, the spikemosses and the club mosses. Lycopods bear distinctive microphylls, defined as leaves with a single vascular trace. Microphylls could grow to some size, those of Lepidodendrales reaching over a meter in length, but almost all just bear the one vascular bundle. An exception is the rare branching in some Selaginella species. The more familiar leaves, megaphylls, are thought to have originated four times independently: in the ferns, horsetails, progymnosperms and seed plants. They appear to have originated by modifying dichotomising branches, which first overlapped (or "overtopped") one another, became flattened or planated, and eventually developed "webbing" and evolved gradually into more leaf-like structures.
Megaphylls, by Zimmerman's telome theory, are composed of a group of webbed branches, and hence the "leaf gap" left where the leaf's vascular bundle leaves that of the main branch resembles two axes splitting. In each of the four groups to evolve megaphylls, their leaves first evolved during the Late Devonian to Early Carboniferous, diversifying rapidly until the designs settled down in the mid-Carboniferous. The cessation of further diversification can be attributed to developmental constraints, raising the question of why it took so long for leaves to evolve in the first place. Plants had been on land for at least 50 million years before megaphylls became significant. However, small, rare megaphylls are known from the early Devonian genus Eophyllophyton – so development could not have been a barrier to their appearance. The best explanation so far is that atmospheric CO2 was declining rapidly during this time – falling by around 90% during the Devonian. This required an increase in stomatal density by 100 times to maintain the rate of photosynthesis. When stomata open to allow water to evaporate from leaves it has a cooling effect, resulting from the loss of latent heat of evaporation. It appears that the low stomatal density in the early Devonian meant that evaporation and evaporative cooling were limited, and that leaves would have overheated if they grew to any size. The stomatal density could not increase, as the primitive steles and limited root systems would not be able to supply water quickly enough to match the rate of transpiration. Clearly, leaves are not always beneficial, as illustrated by the frequent occurrence of secondary loss of leaves, exemplified by cacti and the "whisk fern" Psilotum. Secondary evolution can disguise the true evolutionary origin of some leaves. Some genera of ferns display complex leaves which are attached to the pseudostele by an outgrowth of the vascular bundle, leaving no leaf gap.
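The direction of the stomatal response to falling CO2 can be seen in a toy diffusion model. This is purely an illustrative assumption: it treats assimilation as conductance times the CO2 draw-down and holds everything else fixed, so it yields only the concentration-driven part of the effect; the 100-fold density increase reported above also reflects factors such as changing stomatal size that this sketch ignores:

```python
# Toy model (an assumption for illustration, not the source's analysis):
# assimilation A = g * (ca - ci). If the internal concentration ci stays a
# fixed fraction of ambient ca, then A ~ g * ca, so holding A constant as
# ca falls requires the diffusive conductance g to scale as 1/ca.

def required_conductance(ca: float, A: float = 1.0,
                         ci_fraction: float = 0.7) -> float:
    """Conductance needed to sustain assimilation A at ambient CO2 level ca."""
    return A / (ca * (1.0 - ci_fraction))

g_high = required_conductance(ca=10.0)  # hypothetical early-Devonian CO2 level
g_low = required_conductance(ca=1.0)    # after a ~90% decline
print(g_low / g_high)  # ~10: even this first-order model demands an
                       # order-of-magnitude rise in conductance
```

Since stomata are the main route for that conductance, a steep rise in stomatal density follows directly, and with it the enhanced transpiration and evaporative cooling that made large laminae viable.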
Deciduous trees deal with another disadvantage to having leaves. The popular belief that plants shed their leaves when the days get too short is misguided; evergreens prospered in the Arctic Circle during the most recent greenhouse Earth. The generally accepted reason for shedding leaves during winter is to cope with the weather – the force of wind and weight of snow are much more comfortably weathered without leaves to increase surface area. Seasonal leaf loss has evolved independently several times and is exhibited in the ginkgoales, some pinophyta and certain angiosperms. Leaf loss may also have arisen as a response to pressure from insects; it may have been less costly to lose leaves entirely during the winter or dry season than to continue investing resources in their repair.

Roots
The evolution of roots had consequences on a global scale. By disturbing the soil and promoting its acidification (by taking up nutrients such as nitrate and phosphate), they enabled it to weather more deeply, injecting carbon compounds deeper into soils, with huge implications for climate. These effects may have been so profound they led to a mass extinction. While there are traces of root-like impressions in fossil soils in the Late Silurian, body fossils show the earliest plants to be devoid of roots. Many had prostrate branches that sprawled along the ground, with upright axes or thalli dotted here and there, and some even had non-photosynthetic subterranean branches which lacked stomata. Roots have a root cap, unlike specialised branches. So while Siluro-Devonian plants such as Rhynia and Horneophyton possessed the physiological equivalent of roots, roots – defined as organs differentiated from stems – did not arrive until later. Unfortunately, roots are rarely preserved in the fossil record.
Rhizoids – small structures performing the same role as roots, usually a cell in diameter – probably evolved very early, perhaps even before plants colonised the land; they are recognised in the Characeae, an algal sister group to land plants. That said, rhizoids probably evolved more than once; the rhizines of lichens, for example, perform a similar role. Even some animals (Lamellibrachia) have root-like structures. Rhizoids are clearly visible in the Rhynie chert fossils, and were present in most of the earliest vascular plants, and on this basis seem to have presaged true plant roots. More advanced structures are common in the Rhynie chert, and many other fossils of comparable early Devonian age bear structures that look like, and acted like, roots. The rhyniophytes bore fine rhizoids, and the trimerophytes and herbaceous lycopods of the chert bore root-like structures penetrating a few centimetres into the soil. However, none of these fossils display all the features borne by modern roots, with the exception of Asteroxylon, which has recently been recognized as bearing roots that evolved independently from those of extant vascular plants. Roots and root-like structures became increasingly common and deeper penetrating during the Devonian, with lycopod trees forming roots around 20 cm long during the Eifelian and Givetian. These were joined by progymnosperms, which rooted up to about a metre deep, during the ensuing Frasnian stage. True gymnosperms and zygopterid ferns also formed shallow rooting systems during the Famennian. The rhizophores of the lycopods provide a slightly different approach to rooting. They were equivalent to stems, with organs equivalent to leaves performing the role of rootlets.
A similar construction is observed in the extant lycopod Isoetes, and this appears to be evidence that roots evolved independently at least twice, in the lycophytes and other plants, a proposition supported by studies showing that roots are initiated and their growth promoted by different mechanisms in lycophytes and euphyllophytes. Early rooted plants are little more advanced than their Silurian forebears, without a dedicated root system; however, the flat-lying axes can be clearly seen to have growths similar to the rhizoids of bryophytes today. By the Middle to Late Devonian, most groups of plants had independently developed a rooting system of some nature. As roots became larger, they could support larger trees, and the soil was weathered to a greater depth. This deeper weathering had effects not only on the aforementioned drawdown of CO2, but also opened up new habitats for colonisation by fungi and animals. The narrowest roots of modern plants are a mere 40 μm in diameter, and could not physically transport water if they were any narrower. The earliest fossil roots recovered, by contrast, narrowed from 3 mm to under 700 μm in diameter; of course, taphonomy is the ultimate control of what thickness can be seen.

Tree form
The early Devonian landscape was devoid of vegetation taller than waist height. Greater height provided a competitive advantage in the harvesting of sunlight for photosynthesis, overshadowing of competitors and in spore distribution, as spores (and later, seeds) could be blown for greater distances if they started higher. An effective vascular system was required in order to achieve greater heights. To attain arborescence, plants had to develop woody tissue that provided both support and water transport, and thus needed to evolve the capacity for secondary growth. The stele of plants undergoing secondary growth is surrounded by a vascular cambium, a ring of meristematic cells which produces more xylem on the inside and phloem on the outside.
Since xylem cells comprise dead, lignified tissue, subsequent rings of xylem are added to those already present, forming wood. Fossils of plants from the early Devonian show that a simple form of wood first appeared at least 400 million years ago, at a time when all land plants were small and herbaceous. Because wood evolved long before shrubs and trees, it is likely that its original purpose was for water transport, and that it was only used for mechanical support later. The first plants to develop secondary growth and a woody habit were apparently the ferns, and as early as the Middle Devonian one species, Wattieza, had already reached heights of 8 m and a tree-like habit. Other clades did not take long to develop a tree-like stature. The Late Devonian Archaeopteris, a precursor to gymnosperms which evolved from the trimerophytes, reached 30 m in height. The progymnosperms were the first plants to develop true wood, grown from a bifacial cambium. The first appearance of one of them, Rellimia, was in the Middle Devonian. True wood is only thought to have evolved once, giving rise to the concept of a "lignophyte" clade. Archaeopteris forests were soon supplemented by arborescent lycopods, in the form of Lepidodendrales, which exceeded 50 m in height and 2 m across at the base. These arborescent lycopods rose to dominate Late Devonian and Carboniferous forests that gave rise to coal deposits. Lepidodendrales differ from modern trees in exhibiting determinate growth: after building up a reserve of nutrients at a lower height, the plants would "bolt" as a single trunk to a genetically determined height, branch at that level, spread their spores and die. They consisted of "cheap" wood to allow their rapid growth, with at least half of their stems comprising a pith-filled cavity. Their wood was also generated by a unifacial vascular cambium – it did not produce new phloem, meaning that the trunks could not grow wider over time.
The horsetail Calamites appeared in the Carboniferous. Unlike the modern horsetail Equisetum, Calamites had a unifacial vascular cambium, allowing them to develop wood and grow to heights in excess of 10 m and to branch repeatedly. While the form of early trees was similar to that of today's, the spermatophytes or seed plants, the group that contains all modern trees, had yet to evolve. The dominant tree groups today are all seed plants: the gymnosperms, which include the coniferous trees, and the angiosperms, which contain all fruiting and flowering trees. No free-sporing trees like Archaeopteris exist in the extant flora. It was long thought that the angiosperms arose from within the gymnosperms, but recent molecular evidence suggests that their living representatives form two distinct groups. The molecular data has yet to be fully reconciled with morphological data, but it is becoming accepted that the morphological support for paraphyly is not especially strong. This would lead to the conclusion that both groups arose from within the pteridosperms, probably as early as the Permian. The angiosperms and their ancestors played a very small role until they diversified during the Cretaceous. They started out as small, damp-loving organisms in the understorey, and have been diversifying ever since the Cretaceous, to become the dominant member of non-boreal forests today.

Seeds

Early land plants reproduced in the fashion of ferns: spores germinated into small gametophytes, which produced eggs and/or sperm. These sperm would swim across moist soils to find the female organs (archegonia) on the same or another gametophyte, where they would fuse with an egg to produce an embryo, which would germinate into a sporophyte. Heterosporic plants, as their name suggests, bear spores of two sizes – microspores and megaspores. These would germinate to form microgametophytes and megagametophytes, respectively.
This system paved the way for ovules and seeds: taken to the extreme, the megasporangia could bear only a single megaspore tetrad, and to complete the transition to true ovules, three of the megaspores in the original tetrad could be aborted, leaving one megaspore per megasporangium. The transition to ovules continued with this megaspore being "boxed in" to its sporangium while it germinated. Then, the megagametophyte was contained within a waterproof integument, which enclosed the seed. The pollen grain, which contained a microgametophyte germinated from a microspore, was employed for dispersal of the male gamete, only releasing its desiccation-prone flagellate sperm when it reached a receptive megagametophyte. Lycopods and sphenopsids got a fair way down the path to the seed habit without ever crossing the threshold. Fossil lycopod megaspores reaching 1 cm in diameter, and surrounded by vegetative tissue, are known (Lepidocarpon, Achlamydocarpon) – these even germinated into a megagametophyte in situ. However, they fell short of being ovules, since the nucellus, an inner spore-covering layer, does not completely enclose the spore. A very small slit (micropyle) remains, meaning that the megasporangium is still exposed to the atmosphere. This has two consequences – firstly, it means it is not fully resistant to desiccation, and secondly, sperm do not have to "burrow" to access the archegonia of the megaspore. A Middle Devonian precursor to seed plants from Belgium has been identified, predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed.
Runcaria has all of the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the ovule. The first spermatophytes (literally: "seed plants") – that is, the first plants to bear true seeds – are called pteridosperms: literally, "seed ferns", so called because their foliage consisted of fern-like fronds, although they were not closely related to ferns. The oldest fossil evidence of seed plants is of Late Devonian age, and they appear to have evolved out of an earlier group known as the progymnosperms. These early seed plants ranged from trees to small, rambling shrubs; like most early progymnosperms, they were woody plants with fern-like foliage. They all bore ovules, but no cones, fruit or similar structures. While it is difficult to track the early evolution of seeds, the lineage of the seed ferns may be traced from the simple trimerophytes through the homosporous aneurophytes. The seed plants underwent their first major evolutionary radiation during the Famennian. This seed model is shared by essentially all gymnosperms (literally: "naked seeds"), most of which encase their seeds in a woody cone or fleshy aril (the yew, for example), but none of which fully enclose their seeds. The angiosperms ("vessel seeds") are the only group to fully enclose the seed, in a carpel. Fully enclosed seeds opened up a new pathway for plants to follow: that of seed dormancy. The embryo, completely isolated from the external atmosphere and hence protected from desiccation, could survive some years of drought before germinating. Gymnosperm seeds from the Late Carboniferous have been found to contain embryos, suggesting a lengthy gap between fertilisation and germination. This period is associated with the entry into a greenhouse earth period, with an associated increase in aridity. This suggests that dormancy arose as a response to drier climatic conditions, where it became advantageous to wait for a moist period before germinating.
This evolutionary breakthrough appears to have opened a floodgate: previously inhospitable areas, such as dry mountain slopes, could now be tolerated, and were soon covered by trees. Seeds offered further advantages to their bearers: they increased the success rate of fertilised gametophytes, and because a nutrient store could be "packaged" in with the embryo, the seeds could germinate rapidly in inhospitable environments, reaching a size where they could fend for themselves more quickly. For example, without an endosperm, seedlings growing in arid environments would not have the reserves to grow roots deep enough to reach the water table before they expired from dehydration. Likewise, seeds germinating in a gloomy understory require an additional reserve of energy to quickly grow high enough to capture sufficient light for self-sustenance. A combination of these advantages gave seed plants the ecological edge over the previously dominant genus Archaeopteris, thus increasing the biodiversity of early forests. Despite these advantages, it is common for fertilized ovules to fail to mature as seeds. Also, during seed dormancy (often associated with unpredictable and stressful conditions), DNA damage accumulates. Thus DNA damage appears to be a basic problem for the survival of seed plants, just as DNA damage is a major problem for life in general.

Flowers

Flowers are modified leaves possessed only by the angiosperms, which are relatively late to appear in the fossil record. The group originated and diversified during the Early Cretaceous and became ecologically significant thereafter. Flower-like structures first appear in the fossil record about 130 million years ago, in the Cretaceous. However, in 2018, scientists reported the finding of a fossil flower from about 180 million years ago, 50 million years earlier than previously thought. This interpretation, however, remains highly disputed.
Colorful and/or pungent structures surround the cones of plants such as cycads and Gnetales, making a strict definition of the term "flower" elusive. The main function of a flower is reproduction, which, before the evolution of the flower and angiosperms, was the job of microsporophylls and megasporophylls. A flower can be considered a powerful evolutionary innovation, because its presence allowed the plant world to access new means and mechanisms for reproduction. The flowering plants have long been assumed to have evolved from within the gymnosperms; according to the traditional morphological view, they are closely allied to the Gnetales. However, as noted above, recent molecular evidence is at odds with this hypothesis, and further suggests that Gnetales are more closely related to some gymnosperm groups than to angiosperms, and that extant gymnosperms form a clade distinct from the angiosperms. The relationship of stem groups to the angiosperms is important in determining the evolution of flowers. Stem groups provide an insight into the state of earlier "forks" on the path to the current state. Convergence increases the risk of misidentifying stem groups. Since the protection of the megagametophyte is evolutionarily desirable, probably many separate groups evolved protective encasements independently. In flowers, this protection takes the form of a carpel, evolved from a leaf and recruited into a protective role, shielding the ovules. These ovules are further protected by a double-walled integument. Penetration of these protective layers needs something more than a free-floating microgametophyte. Angiosperms have pollen grains comprising just three cells. One cell is responsible for drilling down through the integuments, and creating a conduit for the two sperm cells to flow down.
The megagametophyte has just seven cells; of these, one fuses with a sperm cell, forming the nucleus of the egg itself, and another joins with the other sperm, and dedicates itself to forming a nutrient-rich endosperm. The other cells take auxiliary roles. This process of "double fertilisation" is unique and common to all angiosperms. In the fossil record, there are three intriguing groups which bore flower-like structures. The first is the Permian pteridosperm Glossopteris, which already bore recurved leaves resembling carpels. The Mesozoic Caytonia is more flower-like still, with enclosed ovules – but only a single integument. Further, details of their pollen and stamens set them apart from true flowering plants. The Bennettitales bore remarkably flower-like organs, protected by whorls of bracts which may have played a similar role to the petals and sepals of true flowers; however, these flower-like structures evolved independently, as the Bennettitales are more closely related to cycads and ginkgos than to the angiosperms. However, no true flowers are found in any groups save those extant today. Most morphological and molecular analyses place Amborella, the nymphaeales and Austrobaileyaceae in a basal clade called "ANA". This clade appears to have diverged in the early Cretaceous – around the same time as the earliest fossil angiosperm, and just after the first angiosperm-like pollen, 136 million years ago. The magnoliids diverged soon after, and a rapid radiation had produced eudicots and monocots. By the end of the Cretaceous, over 50% of today's angiosperm orders had evolved, and the clade accounted for 70% of global species. It was around this time that flowering trees became dominant over conifers. The features of the basal "ANA" groups suggest that angiosperms originated in dark, damp, frequently disturbed areas.
It appears that the angiosperms remained constrained to such habitats throughout the Cretaceous – occupying the niche of small herbs early in the successional series. This may have restricted their initial significance, but gave them the flexibility that accounted for the rapidity of their later diversifications in other habitats. Some propose that the angiosperms arose from an unknown seed fern group, and view cycads as living seed ferns with both seed-bearing and sterile leaves (as in Cycas revoluta). In August 2017, scientists presented a detailed description and 3D reconstruction of possibly the first flower that lived about 140 million years ago.

Origins of the flower

The family Amborellaceae is regarded as being the sister clade to all other living flowering plants. A draft genome of Amborella trichopoda was published in December 2013. By comparing its genome with those of all other living flowering plants, it will be possible to work out the most likely characteristics of the ancestor of A. trichopoda and all other flowering plants, i.e. the ancestral flowering plant. It seems that on the level of the organ, the leaf may be the ancestor of the flower, or at least some floral organs. When some crucial genes involved in flower development are mutated, clusters of leaf-like structures arise in place of flowers. Thus, sometime in history, the developmental program leading to formation of a leaf must have been altered to generate a flower. There probably also exists an overall robust framework within which the floral diversity has been generated. An example of that is a gene called LEAFY (LFY), which is involved in flower development in Arabidopsis thaliana. The homologs of this gene are found in angiosperms as diverse as tomato, snapdragon, pea, maize and even gymnosperms. Expression of Arabidopsis thaliana LFY in distant plants like poplar and citrus also results in flower production in these plants.
The LFY gene regulates the expression of some genes belonging to the MADS-box family. These genes, in turn, act as direct controllers of flower development.

Adaptive function of flowers

Flowers likely emerged during plant evolution as an adaptation to facilitate cross-fertilization (outcrossing), a process that leads to the masking of recessive deleterious mutations in progeny genomes. This masking effect of expression of deleterious mutations is referred to as genetic complementation. This beneficial masking effect of cross-fertilization is also considered to be the basis of hybrid vigor or heterosis in progeny. Once flowers have become established in a lineage based on their adaptive function of promoting cross-fertilization, subsequent switching to inbreeding ordinarily becomes disadvantageous, mainly because it permits expression of the previously masked deleterious recessive mutations, i.e. inbreeding depression. In addition, meiosis, the process by which seed progeny are produced in flowering plants, provides a direct mechanism for repairing DNA through genetic recombination. Thus, in flowering plants, the two fundamental processes of sexual reproduction are cross-fertilization (outcrossing) and meiosis, and these two processes appear to be maintained respectively by the advantages of genetic complementation and recombinational repair of DNA.

Evolution of the MADS-box family

The members of the MADS-box family of transcription factors play a very important and evolutionarily conserved role in flower development. According to the ABC model of flower development, three zones – A, B and C – are generated within the developing flower primordium, by the action of some transcription factors that are members of the MADS-box family. Among these, the functions of the B and C domain genes have been evolutionarily more conserved than those of the A domain genes. Many of these genes have arisen through gene duplications of ancestral members of this family.
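The combinatorial logic of the ABC model described above can be sketched in a few lines. This is a deliberately simplified illustration: real flowers involve further gene classes (such as D and E functions) that are not modeled here, and the function name is hypothetical.

```python
# A minimal sketch of the classic ABC model of floral organ identity.
# Each whorl of the flower expresses a combination of the A, B and C
# domain genes, and the combination determines the organ produced.
def organ_identity(active_domains):
    """Return the floral organ specified by a combination of ABC domains."""
    table = {
        frozenset("A"): "sepal",    # A alone
        frozenset("AB"): "petal",   # A + B
        frozenset("BC"): "stamen",  # B + C
        frozenset("C"): "carpel",   # C alone
    }
    return table.get(frozenset(active_domains), "undetermined")

# The four whorls of a typical eudicot flower, outermost to innermost:
whorls = ["A", "AB", "BC", "C"]
print([organ_identity(w) for w in whorls])
# → ['sepal', 'petal', 'stamen', 'carpel']
```

A mutation knocking out, say, the C function would change the combinations and hence the organs produced, which is how the model accounts for homeotic floral mutants.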
Quite a few of them show redundant functions. The evolution of the MADS-box family has been extensively studied. These genes are present even in pteridophytes, but their spread and diversity are many times greater in angiosperms. There appears to be a clear pattern in how this family has evolved. Consider the evolution of the C-region gene AGAMOUS (AG). It is expressed in today's flowers in the stamens and the carpel, which are reproductive organs. Its ancestor in gymnosperms has the same expression pattern: there, it is expressed in the strobili, organs that produce pollen or ovules. Similarly, the ancestors of the B-genes (AP3 and PI) are expressed only in the male organs in gymnosperms. Their descendants in the modern angiosperms are likewise expressed only in the stamens, the male reproductive organs. Thus, the same, then-existing components were used by plants in a novel manner to generate the first flower. This is a recurring pattern in evolution.

Factors influencing floral diversity

There is enormous variation in floral structure in plants, typically due to changes in the MADS-box genes and their expression patterns. For example, grasses possess unique floral structures. The carpels and stamens are surrounded by scale-like lodicules and two bracts, the lemma and the palea, but genetic evidence and morphology suggest that lodicules are homologous to eudicot petals. The palea and lemma may be homologous to sepals in other groups, or may be unique grass structures. Another example is that of Linaria vulgaris, which has two kinds of flower symmetry – radial and bilateral. These symmetries are due to epigenetic changes in just one gene called CYCLOIDEA. Arabidopsis thaliana has a gene called AGAMOUS that plays an important role in defining how many petals, sepals and other organs are generated.
Mutations in this gene cause the floral meristem to acquire an indeterminate fate, leading to the proliferation of floral organs seen in double-flowered forms of roses, carnations and morning glory. These phenotypes have been selected by horticulturists for their increased number of petals. Several studies on diverse plants like petunia, tomato, Impatiens and maize have suggested that the enormous diversity of flowers is a result of small changes in genes controlling their development. The Floral Genome Project confirmed that the ABC model of flower development is not conserved across all angiosperms. Sometimes expression domains change, as in the case of many monocots, and also in some basal angiosperms like Amborella. Different models of flower development, like the fading-boundaries model or the overlapping-boundaries model, which propose non-rigid domains of expression, may explain these architectures. There is a possibility that from the basal to the modern angiosperms, the domains of floral architecture have become more and more fixed through evolution.

Flowering time

Another floral feature that has been a subject of natural selection is flowering time. Some plants flower early in their life cycle, others require a period of vernalization before flowering. This timing depends on factors like temperature, light intensity, presence of pollinators and other environmental signals: genes like CONSTANS (CO), FLOWERING LOCUS C (FLC) and FRIGIDA regulate the integration of environmental signals into the pathway for flower development. Variations in these loci have been associated with flowering time variations between plants. For example, Arabidopsis thaliana ecotypes that grow in cold, temperate regions require prolonged vernalization before they flower, while tropical varieties and the most common lab strains do not. This variation is due to mutations in the FLC and FRIGIDA genes, rendering them non-functional.
Many of the genes involved in this process are conserved across all the plants studied. Sometimes, though, despite genetic conservation, the mechanism of action turns out to be different. For example, rice is a short-day plant, while Arabidopsis thaliana is a long-day plant. Both plants have the proteins CO and FLOWERING LOCUS T (FT), but, in Arabidopsis thaliana, CO enhances FT production, while in rice, the CO homolog represses FT production, resulting in completely opposite downstream effects.

Theories of flower evolution

The Anthophyte theory was based on the observation that the gymnosperm group Gnetales has a flower-like ovule. It has partially developed vessels as found in the angiosperms, and the megasporangium is covered by three envelopes, like the ovary structure of angiosperm flowers. However, many other lines of evidence show that Gnetales is not related to angiosperms. The Mostly Male theory has a more genetic basis. Proponents of this theory point out that the gymnosperms have two very similar copies of the gene LFY, while angiosperms have just one. Molecular clock analysis has shown that the other LFY paralog was lost in angiosperms around the same time as flower fossils became abundant, suggesting that this event might have led to floral evolution. According to this theory, loss of one of the LFY paralogs led to flowers that were more male, with the ovules being expressed ectopically. These ovules initially performed the function of attracting pollinators, but sometime later may have been integrated into the core flower.

Mechanisms and players in evolution of plant morphology

While environmental factors are significantly responsible for evolutionary change, they act merely as agents for natural selection. Change is inherently brought about via phenomena at the genetic level: mutations, chromosomal rearrangements, and epigenetic changes.
While the general types of mutations hold true across the living world, in plants, some other mechanisms have been implicated as highly significant. Genome doubling is a relatively common occurrence in plant evolution and results in polyploidy, which is consequently a common feature in plants. It is estimated that at least half (and probably all) plants have seen genome doubling in their history. Genome doubling entails gene duplication, thus generating functional redundancy in most genes. The duplicated genes may attain new function, either by changes in expression pattern or changes in activity. Polyploidy and gene duplication are believed to be among the most powerful forces in the evolution of plant form, though it is not known why genome doubling is such a frequent process in plants. One probable reason is the production of large amounts of secondary metabolites in plant cells: some of them might interfere with the normal process of chromosomal segregation, causing genome duplication. In recent times, plants have been shown to possess significant microRNA families, which are conserved across many plant lineages. In comparison with animals, plants have fewer miRNA families, but the size of each family is much larger. The miRNA genes are also much more spread out in the genome than those in animals, where they are more clustered. It has been proposed that these miRNA families have expanded by duplications of chromosomal regions. Many miRNA genes involved in the regulation of plant development have been found to be quite conserved between the plants studied. Domestication of plants like maize, rice, barley and wheat has also been a significant driving force in their evolution. Research concerning the origin of maize has found that it is a domesticated derivative of a wild plant from Mexico called teosinte. Teosinte belongs to the genus Zea, just as maize does, but bears very small inflorescences of 5–10 hard cobs and a highly branched, spreading stem.
Crosses between a particular teosinte variety and maize yield fertile offspring that are intermediate in phenotype between maize and teosinte. QTL analysis has also revealed some loci that, when mutated in maize, yield a teosinte-like stem or teosinte-like cobs. Molecular clock analysis of these genes estimates their origins to some 9,000 years ago, well in accordance with other records of maize domestication. It is believed that a small group of farmers must have selected some maize-like natural mutant of teosinte some 9,000 years ago in Mexico, and subjected it to continuous selection to yield the familiar maize plant of today. The edible cauliflower is a domesticated version of the wild plant Brassica oleracea, which does not possess the dense undifferentiated inflorescence, called the curd, that cauliflower possesses. Cauliflower possesses a single mutation in a gene called CAL, controlling meristem differentiation into inflorescence. This causes the cells at the floral meristem to gain an undifferentiated identity and, instead of growing into a flower, they grow into a dense mass of inflorescence meristem cells in arrested development. This mutation has been selected through domestication since at least the time of the Greek empire.

Evolution of photosynthetic pathways

The C4 metabolic pathway is a valuable recent evolutionary innovation in plants, involving a complex set of adaptive changes to physiology and gene expression patterns. Photosynthesis is a complex chemical pathway facilitated by a range of enzymes and co-enzymes. The enzyme RuBisCO is responsible for "fixing" CO2 – that is, it attaches it to a carbon-based molecule to form a sugar that can be used by the plant. However, the enzyme is notoriously inefficient and, as ambient temperature rises, will increasingly fix oxygen instead of CO2, in a process called photorespiration.
This is energetically costly, as the plant has to use energy to turn the products of photorespiration back into a form that can react with CO2.

Concentrating carbon

Broadly, the two main ways to concentrate carbon dioxide in plants are 1) biochemical concentrating mechanisms (CCMs) and 2) biophysical concentrating mechanisms. Biochemical CCMs such as C4 and CAM photosynthesis concentrate CO2 by using an enzyme, phosphoenolpyruvate carboxylase, to bind inorganic carbon into a four-carbon intermediate, which is later broken down to release CO2 for subsequent fixation by RuBisCO. Biophysical CCMs, like carboxysomes and pyrenoids, concentrate CO2 in a particular locus through the coordination of carbonic anhydrases and anion channels. C4 plants evolved carbon concentrating mechanisms that work by increasing the concentration of CO2 around RuBisCO and excluding oxygen, thereby increasing the efficiency of photosynthesis by decreasing photorespiration. The process of concentrating CO2 around RuBisCO requires more energy than allowing gases to diffuse, but under certain conditions – i.e. warm temperatures (>25 °C), low CO2 concentrations, or high oxygen concentrations – it pays off in terms of the decreased loss of sugars through photorespiration. One type of C4 metabolism employs a so-called Kranz anatomy. This transports CO2 through an outer mesophyll layer, via a range of organic molecules, to the central bundle sheath cells, where the CO2 is released. In this way, CO2 is concentrated near the site of RuBisCO operation. Because RuBisCO is operating in an environment with much more CO2 than it otherwise would be, it performs more efficiently. A second mechanism, CAM photosynthesis, temporally separates photosynthesis from the action of RuBisCO. RuBisCO only operates during the day, when stomata are sealed and CO2 is provided by the breakdown of the chemical malate. More CO2 is then harvested from the atmosphere when the stomata open during the cool, moist nights, reducing water loss.
The third mechanism present in plants, pyrenoid-based CCMs, is found only in the hornwort lineage. In this mechanism, RuBisCO is concentrated in the pyrenoid, a membraneless compartment, by importing inorganic carbon in the form of bicarbonate. This import is thought to be dependent on the coordination of carbonic anhydrases and anion channels, and takes advantage of the native pH differences between the cytosol, chloroplast stroma, and thylakoid lumen.

Evolutionary record

These two pathways, with the same effect on RuBisCO, evolved a number of times independently – indeed, C4 alone arose 62 times in 18 different plant families. A number of 'pre-adaptations' seem to have paved the way for C4, leading to its clustering in certain clades: it has most frequently been innovated in plants that already had features such as extensive vascular bundle sheath tissue. Many potential evolutionary pathways resulting in the C4 phenotype are possible and have been characterised using Bayesian inference, confirming that non-photosynthetic adaptations often provide evolutionary stepping stones for the further evolution of C4. The C4 construction is used by a subset of grasses, while CAM is employed by many succulents and cacti. The C4 trait appears to have emerged during the Oligocene; however, C4 plants did not become ecologically significant until the Miocene. Remarkably, some charcoalified fossils preserve tissue organised into the Kranz anatomy, with intact bundle sheath cells, allowing the presence of C4 metabolism to be identified. Isotopic markers are used to deduce their distribution and significance. C3 plants preferentially use the lighter of the two isotopes of carbon in the atmosphere, 12C, which is more readily involved in the chemical pathways involved in its fixation. Because C4 metabolism involves a further chemical step, this effect is accentuated. Plant material can be analysed to deduce the ratio of the heavier 13C to 12C. This ratio is denoted δ13C.
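By convention, δ13C expresses the 13C/12C ratio of a sample relative to that of a reference standard, in parts per thousand (‰). A small sketch of the calculation; the standard ratio shown is an approximate VPDB value included purely for illustration:

```python
# Delta notation: delta-13C in per mil, relative to a reference standard.
R_STANDARD = 0.011237  # approximate 13C/12C of the VPDB standard (illustrative)

def delta13c(r_sample, r_standard=R_STANDARD):
    """Convert an absolute 13C/12C ratio into per-mil delta notation."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample with exactly the standard ratio has a delta of 0 per mil;
# samples depleted in 13C ("lighter") give negative values.
print(delta13c(R_STANDARD))  # → 0.0
```

Plant tissue, being depleted in 13C relative to the standard, yields negative δ13C values, which is why C3 and C4 material can be distinguished from preserved organic matter.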
C3 plants are on average around 14‰ (parts per thousand) lighter than the atmospheric ratio, while C4 plants are about 28‰ lighter. The δ13C of CAM plants depends on the percentage of carbon fixed at night relative to what is fixed in the day, being closer to C3 plants if they fix most carbon in the day and closer to C4 plants if they fix all their carbon at night. Original fossil material in sufficient quantity to analyse the grass itself is scarce, but horses provide a good proxy. They were globally widespread in the period of interest, and browsed almost exclusively on grasses. There is an old phrase in isotope paleontology, "you are what you eat (plus a little bit)" – this refers to the fact that organisms reflect the isotopic composition of whatever they eat, plus a small adjustment factor. There is a good record of horse teeth throughout the globe, and their record shows a sharp negative inflection during the Messinian, which is interpreted as resulting from the rise of C4 plants on a global scale.

Advantage of C4

While C4 enhances the efficiency of RuBisCO, the concentration of carbon is highly energy intensive. This means that C4 plants only have an advantage over C3 organisms in certain conditions: namely, high temperatures and low rainfall. C4 plants also need high levels of sunlight to thrive. Models suggest that, without wildfires removing shade-casting trees and shrubs, there would be no space for C4 plants. But wildfires have occurred for 400 million years. The Carboniferous had notoriously high oxygen levels – almost enough to allow spontaneous combustion – and very low CO2, but no C4 isotopic signature has been found. There also does not seem to be a sudden trigger for the Miocene rise. During the Miocene, the atmosphere and climate were relatively stable. If anything, CO2 increased gradually before settling down at concentrations similar to those of the Holocene. This suggests that it did not have a key role in invoking C4 evolution.
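The statement above, that the δ13C of a CAM plant tracks the fraction of carbon fixed at night, amounts to a simple two-end-member mixing calculation. A minimal sketch, using the per-mil offsets quoted in the text as illustrative end members (the function name is hypothetical):

```python
# Two-end-member isotopic mixing for CAM plants.
# End-member offsets (per mil lighter than the atmospheric ratio) follow
# the figures quoted in the text; treat them as illustrative values.
DELTA_C3 = -14.0  # per mil, typical C3 offset from the atmospheric ratio
DELTA_C4 = -28.0  # per mil, typical C4 offset

def cam_delta(fraction_fixed_at_night):
    """Linear mix: night-time fixation follows the C4-like pathway,
    daytime fixation the C3-like pathway."""
    f = fraction_fixed_at_night
    if not 0.0 <= f <= 1.0:
        raise ValueError("fraction must lie in [0, 1]")
    return f * DELTA_C4 + (1.0 - f) * DELTA_C3

print(cam_delta(0.0))  # all daytime fixation, C3-like: -14.0
print(cam_delta(1.0))  # all night fixation, C4-like: -28.0
print(cam_delta(0.5))  # intermediate: -21.0
```

Inverting the same linear relation is how, in principle, an intermediate measured value can be read back as an estimate of the fraction of carbon a CAM plant fixed at night.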
Grasses themselves (the group which would give rise to the most occurrences of C4) had probably been around for 60 million years or more, so had had plenty of time to evolve C4, which, in any case, is present in a diverse range of groups and thus evolved independently. There is a strong signal of climate change in South Asia; increasing aridity – hence increasing fire frequency and intensity – may have led to an increase in the importance of grasslands. However, this is difficult to reconcile with the North American record. It is possible that the signal is entirely biological, forced by the fire-driven acceleration of grass evolution – which, both by increasing weathering and incorporating more carbon into sediments, reduced atmospheric CO2 levels. Finally, there is evidence that the apparent onset of C4 is a biased signal, which only holds true for North America, from where most samples originate; emerging evidence suggests that grasslands evolved to a dominant state at least 15 Ma earlier in South America.

Evolution of transcriptional regulation

Transcription factors and transcriptional regulatory networks play key roles in plant development and stress responses, as well as their evolution. During the colonisation of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants.

Evolution of secondary metabolism

Secondary metabolites are essentially low molecular weight compounds, sometimes having complex structures, that are not essential for the normal processes of growth, development, or reproduction. They function in processes as diverse as immunity, anti-herbivory, pollinator attraction, communication between plants, maintaining symbiotic associations with soil flora, and enhancing the rate of fertilization, and hence are significant from the evo-devo perspective.
Secondary metabolites are structurally and functionally diverse, and it is estimated that hundreds of thousands of enzymes might be involved in the process of producing them, with about 15–25% of the genome coding for these enzymes, and every species having its unique arsenal of secondary metabolites. Many of these metabolites, such as salicylic acid, are of medical significance to humans. The purpose of producing so many secondary metabolites, with a significant proportion of the metabolome devoted to this activity, is unclear. It is postulated that most of these chemicals help in generating immunity and, in consequence, the diversity of these metabolites is a result of a constant arms race between plants and their parasites. Some evidence supports this case. A central question involves the reproductive cost of maintaining such a large inventory of genes devoted to producing secondary metabolites. Various models have been suggested that probe into this aspect of the question, but a consensus on the extent of the cost has yet to be established, as it is still difficult to predict whether a plant with more secondary metabolites increases its survival or reproductive success compared to other plants in its vicinity. Secondary metabolite production seems to have arisen quite early during evolution. In plants, these metabolites seem to have spread using mechanisms including gene duplication and the evolution of novel genes. Furthermore, research has shown that diversity in some of these compounds may be positively selected for. Although the role of novel gene evolution in the evolution of secondary metabolism is clear, there are several examples where new metabolites have been formed by small changes in existing reactions. For example, cyanogenic glycosides have been proposed to have evolved multiple times in different plant lineages. There are several such instances of convergent evolution.
For example, enzymes for synthesis of limonene – a terpene – are more similar between angiosperms and gymnosperms than to their own terpene synthesis enzymes. This suggests independent evolution of the limonene biosynthetic pathway in these two lineages. Evolution of plant-microbe interactions The origin of microbes on Earth, tracing back to the beginning of life more than 3.5 billion years ago, indicates that microbe-microbe interactions have continuously evolved and diversified over time, long before plants started to colonize land 450 million years ago. Therefore, it is likely that both intra- and inter-kingdom intermicrobial interactions represent strong drivers of the establishment of plant-associated microbial consortia at the soil-root interface. Nonetheless, it remains unclear to what extent these interactions in the rhizosphere/phyllosphere and in endophytic plant compartments (i.e., within the host) shape microbial assemblages in nature and whether microbial adaptation to plant habitats drives habitat-specific microbe-microbe interaction strategies that impact plant fitness. Furthermore, the contribution of competitive and cooperative microbe-microbe interactions to the overall community structure remains difficult to evaluate in nature due to the strong environmental noise. See also Evolution of herbivory Evolutionary history of life Paleobotany Plant evolutionary developmental biology Timeline of plant evolution References External links Evolution And Paleobotany at Britannica Evolution of plants Plant sexuality
Evolutionary history of plants
[ "Biology" ]
15,944
[ "Behavior", "Plants", "Plant sexuality", "Evolution of plants", "Sexuality" ]
2,238,152
https://en.wikipedia.org/wiki/Association%20scheme
The theory of association schemes arose in statistics, in the theory of experimental design for the analysis of variance. In mathematics, association schemes belong to both algebra and combinatorics. In algebraic combinatorics, association schemes provide a unified approach to many topics, for example combinatorial designs and the theory of error-correcting codes. In algebra, association schemes generalize groups, and the theory of association schemes generalizes the character theory of linear representations of groups. Definition An n-class association scheme consists of a set X together with a partition S of X × X into n + 1 binary relations, R0, R1, ..., Rn which satisfy: R0 = {(x, x) : x ∈ X}; it is called the identity relation. Defining R* = {(y, x) : (x, y) ∈ R}, if R in S, then R* in S. If (x, y) ∈ Rk, the number of z ∈ X such that (x, z) ∈ Ri and (z, y) ∈ Rj is a constant p^k_ij depending on i, j, k but not on the particular choice of x and y. An association scheme is commutative if p^k_ij = p^k_ji for all i, j and k. Most authors assume this property. Note, however, that while the notion of an association scheme generalizes the notion of a group, the notion of a commutative association scheme only generalizes the notion of a commutative group. A symmetric association scheme is one in which each Ri is a symmetric relation. That is: if (x, y) ∈ Ri, then (y, x) ∈ Ri. (Or equivalently, Ri* = Ri.) Every symmetric association scheme is commutative. Two points x and y are called i th associates if (x, y) ∈ Ri. The definition states that if x and y are i th associates then so are y and x. Every pair of points are i th associates for exactly one i. Each point is its own zeroth associate while distinct points are never zeroth associates. If x and y are k th associates then the number of points z which are both i th associates of x and j th associates of y is a constant p^k_ij. Graph interpretation and adjacency matrices A symmetric association scheme can be visualized as a complete graph with labeled edges.
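As a concrete check of the definition, the sketch below (my construction, not from the article) builds the Hamming scheme H(3, 2) on binary 3-tuples, with relations indexed by Hamming distance, and verifies the defining property that each intersection number is independent of which pair in the relation is examined:

```python
from itertools import product

# Hamming scheme H(3, 2): points are binary 3-tuples, and (x, y) lies in
# relation R_i exactly when x and y differ in i coordinates.
n = 3
points = list(product([0, 1], repeat=n))

def rel(x, y):
    """Index of the relation containing (x, y): the Hamming distance."""
    return sum(a != b for a, b in zip(x, y))

p = {}  # (i, j, k) -> intersection number p^k_ij
for x, y in product(points, repeat=2):
    k = rel(x, y)
    for i in range(n + 1):
        for j in range(n + 1):
            count = sum(1 for z in points
                        if rel(x, z) == i and rel(z, y) == j)
            if (i, j, k) in p:
                assert p[(i, j, k)] == count  # constant over all pairs in R_k
            else:
                p[(i, j, k)] = count

print(p[(1, 1, 2)])  # → 2: two midpoints between the ends of a distance-2 pair
```

The same brute-force check works for any candidate partition of X × X, which makes it a handy way to test whether a construction really is an association scheme.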
The graph has v vertices, one for each point of X, and the edge joining vertices x and y is labeled i if x and y are i th associates. Each edge has a unique label, and the number of triangles with a fixed base labeled k having the other edges labeled i and j is a constant p^k_ij, depending on i, j, k but not on the choice of the base. In particular, each vertex is incident with exactly vi edges labeled i; vi is the valency of the relation Ri. There are also loops labeled 0 at each vertex x, corresponding to R0. The relations are described by their adjacency matrices. Ai is the adjacency matrix of Ri for i = 0, ..., n and is a v × v matrix with rows and columns labeled by the points of X. The definition of a symmetric association scheme is equivalent to saying that the Ai are v × v (0,1)-matrices which satisfy I. Ai is symmetric, II. A0 + A1 + ... + An = J (the all-ones matrix), III. A0 = I, IV. Ai Aj = Σk p^k_ij Ak. The (x, y)-th entry of the left side of (IV) is the number of paths of length two between x and y with labels i and j in the graph. Note that the rows and columns of Ai each contain vi 1's: Ai J = J Ai = vi J. Terminology The numbers p^k_ij are called the parameters of the scheme. They are also referred to as the structural constants. History The term association scheme is due to Bose and Shimamoto (1952), but the concept is already inherent in Bose and Nair (1939). These authors were studying what statisticians have called partially balanced incomplete block designs (PBIBDs). The subject became an object of algebraic interest with the publication of Bose and Mesner (1959) and the introduction of the Bose–Mesner algebra. The most important contribution to the theory was the thesis of P. Delsarte, who recognized and fully used the connections with coding theory and design theory. Generalizations have been studied by D. G. Higman (coherent configurations) and B. Weisfeiler (distance regular graphs). Basic facts p^k_i0 = 1 if i = k, and p^k_i0 = 0 otherwise; this is because R0 is the identity relation. v0 + v1 + ... + vn = v; this is because the relations R0, ..., Rn partition X × X. The Bose–Mesner algebra The adjacency matrices of the graphs generate a commutative and associative algebra (over the real or complex numbers) both for the matrix product and the pointwise product.
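The matrix form of the axioms is easy to verify numerically. The sketch below (mine, not from the article) builds the adjacency matrices of the Hamming scheme H(3, 2) and checks conditions (I)-(IV); for (IV) it reads each coefficient p^k_ij off a representative entry of the product:

```python
import numpy as np
from itertools import product

# Adjacency matrices A_0..A_3 of the Hamming scheme H(3, 2).
pts = list(product([0, 1], repeat=3))
v = len(pts)  # v = 8 points

def dist(x, y):
    return sum(a != b for a, b in zip(x, y))

A = [np.array([[int(dist(x, y) == i) for y in pts] for x in pts])
     for i in range(4)]

assert all((Ai == Ai.T).all() for Ai in A)           # (I)  each A_i symmetric
assert (sum(A) == np.ones((v, v), dtype=int)).all()  # (II) they sum to J
assert (A[0] == np.eye(v, dtype=int)).all()          # (III) A_0 = I

# (IV) A_i A_j is a linear combination of the A_k; the coefficient of A_k
# is the intersection number p^k_ij, read off any entry (x, y) in R_k.
for i in range(4):
    for j in range(4):
        P = A[i] @ A[j]
        coeffs = [P[tuple(np.argwhere(Ak)[0])] for Ak in A]
        assert (P == sum(c * Ak for c, Ak in zip(coeffs, A))).all()
print("conditions (I)-(IV) hold for H(3, 2)")
```

Reading the coefficients off a single representative entry is legitimate precisely because of the defining property that the intersection numbers do not depend on the chosen pair.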
This associative, commutative algebra is called the Bose–Mesner algebra of the association scheme. Since the matrices in this algebra are symmetric and commute with each other, they can be diagonalized simultaneously. Therefore, the algebra is semi-simple and has a unique basis of primitive idempotents. There is another algebra of matrices which is isomorphic to the Bose–Mesner algebra, and is often easier to work with. Examples The Johnson scheme, denoted by J(v, k), is defined as follows. Let S be a set with v elements. The points of the scheme J(v, k) are the subsets of S with k elements. Two k-element subsets A, B of S are i th associates when their intersection has size k − i. The Hamming scheme, denoted by H(n, q), is defined as follows. The points of H(n, q) are the q^n ordered n-tuples over a set of size q. Two n-tuples x, y are said to be i th associates if they disagree in exactly i coordinates. E.g., if x = (1,0,1,1), y = (1,1,1,1), z = (0,0,1,1), then x and y are 1st associates, x and z are 1st associates and y and z are 2nd associates in H(4,2). A distance-regular graph, G, forms an association scheme by defining two vertices to be i th associates if their distance is i. A finite group G yields an association scheme on the set G, with a class Rg for each group element, as follows: for each g in G let Rg = {(x, y) : xg = y}, where xg denotes the group operation applied to x and g. The class of the group identity is R0. This association scheme is commutative if and only if G is abelian. A specific 3-class association scheme: Let A(3) be an association scheme with three associate classes on the set X = {1,2,3,4,5,6}, given by a table whose (i, j) entry is s if elements i and j are in relation Rs. Coding theory The Hamming scheme and the Johnson scheme are of major significance in classical coding theory. In coding theory, association scheme theory is mainly concerned with the distance of a code. The linear programming method produces upper bounds for the size of a code with given minimum distance, and lower bounds for the size of a design with a given strength.
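For the binary Hamming scheme, the eigenvalues of the distance-k adjacency matrices are values of the Krawtchouk polynomials. This can be checked numerically; the sketch below (mine, with n = 4 fixed for brevity) uses the standard binary Krawtchouk polynomial K_k(x) = Σ_j (−1)^j C(x, j) C(n − x, k − j), whose value K_k(i) occurs as an eigenvalue with multiplicity C(n, i):

```python
import numpy as np
from itertools import product
from math import comb

# Check: the spectra of the distance matrices of H(n, 2) are given by
# Krawtchouk polynomial values K_k(i), i = 0..n, each with multiplicity
# C(n, i). Verified here for n = 4.
n = 4
pts = list(product([0, 1], repeat=n))

def dist(x, y):
    return sum(a != b for a, b in zip(x, y))

def krawtchouk(k, x):
    # math.comb(a, b) returns 0 when b > a, so out-of-range terms vanish.
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

for k in range(n + 1):
    Ak = np.array([[int(dist(x, y) == k) for y in pts] for x in pts])
    eig = sorted(np.linalg.eigvalsh(Ak).round(6))
    expected = sorted(krawtchouk(k, i)
                      for i in range(n + 1) for _ in range(comb(n, i)))
    assert np.allclose(eig, expected)
print("Krawtchouk values match the spectra of H(4, 2)")
```

For k = 1 this recovers the familiar hypercube spectrum n − 2i, which is K_1(i).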
The most specific results are obtained in the case where the underlying association scheme satisfies certain polynomial properties; this leads one into the realm of orthogonal polynomials. In particular, some universal bounds are derived for codes and designs in polynomial-type association schemes. In classical coding theory, dealing with codes in a Hamming scheme, the MacWilliams transform involves a family of orthogonal polynomials known as the Krawtchouk polynomials. These polynomials give the eigenvalues of the distance relation matrices of the Hamming scheme. See also Block design Bose–Mesner algebra Combinatorial design Notes References Design of experiments Analysis of variance Algebraic combinatorics Representation theory
Association scheme
[ "Mathematics" ]
1,503
[ "Representation theory", "Fields of abstract algebra", "Combinatorics", "Algebraic combinatorics" ]
2,238,288
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Italy
In the NUTS (Nomenclature of Territorial Units for Statistics) codes of Italy (IT), the three levels are: NUTS codes The following codes have been discontinued: ITC45 (Milano) was split into ITC4C and ITC4D. ITD (Northeast Italy) became ITH. ITE (Central Italy) became ITI. ITF41 (Foggia) and ITF42 (Bari) were split into ITF46, ITF47, and ITF48. ITG21 (Sassari), ITG22 (Nuoro), ITG23 (Oristano), and ITG24 (Cagliari) were split into the current divisions of ITG2. Local administrative units Below the NUTS levels, the two LAU (Local Administrative Units) levels are: The LAU codes of Italy can be downloaded here: See also Subdivisions of Italy ISO 3166-2 codes of Italy FIPS region codes of Italy References Sources Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe Overview map of EU Countries - NUTS level 1 ITALIA - NUTS level 2 ITALIA - NUTS level 3 Correspondence between the NUTS levels and the national administrative units List of current NUTS codes Download current NUTS codes (ODS format) Provinces of Italy, Statoids.com Italy Nuts
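The discontinued codes listed above can be captured in a small lookup table. This is a sketch of my own: the splits of ITG21-ITG24 are omitted because their successor codes are not enumerated in the text, and mapping each of ITF41 and ITF42 to all three successor codes is a simplification of a joint split:

```python
# Discontinued Italian NUTS codes and their successors, transcribed from
# the list above. ITG21-ITG24 are omitted (successors not enumerated).
SUCCESSORS = {
    "ITC45": ["ITC4C", "ITC4D"],           # Milano, split
    "ITD":   ["ITH"],                      # Northeast Italy, recoded
    "ITE":   ["ITI"],                      # Central Italy, recoded
    "ITF41": ["ITF46", "ITF47", "ITF48"],  # Foggia (split jointly with Bari)
    "ITF42": ["ITF46", "ITF47", "ITF48"],  # Bari (split jointly with Foggia)
}

def current_codes(code):
    """Return the current code(s) for a possibly discontinued NUTS code."""
    return SUCCESSORS.get(code, [code])

print(current_codes("ITD"))  # → ['ITH']
print(current_codes("ITH1"))  # codes not in the table pass through unchanged
```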
NUTS statistical regions of Italy
[ "Mathematics" ]
264
[ "Nomenclature of Territorial Units for Statistics", "Statistical concepts", "Statistical regions" ]
2,238,505
https://en.wikipedia.org/wiki/Reconfigurability
Reconfigurability, in reconfigurable computing, denotes the capability of a system to change its behavior by reconfiguration, i.e., by loading different configware code. This static reconfigurability distinguishes between reconfiguration time and run time. Dynamic reconfigurability denotes the capability of a dynamically reconfigurable system to change its behavior during run time, usually in response to dynamic changes in its environment. In the context of wireless communication, dynamic reconfigurability tackles the changeable behavior of wireless networks and associated equipment, specifically in the fields of radio spectrum, radio access technologies, protocol stacks, and application services. Research regarding the (dynamic) reconfigurability of wireless communication systems is ongoing, for example, in Working Group 6 of the Wireless World Research Forum (WWRF), in the Wireless Innovation Forum (WINNF) (formerly the Software Defined Radio Forum), and in the European FP6 project End-to-End Reconfigurability (E²R). Recently, E²R initiated a related standardization effort on the cohabitation of heterogeneous wireless radio systems in the framework of the IEEE P1900.4 Working Group. See cognitive radio. In the context of control reconfiguration, a field of fault-tolerant control within control engineering, reconfigurability is a property of faulty systems meaning that the original control goals specified for the fault-free system can be reached after suitable control reconfiguration. External links Wireless World Research Forum Wireless World Research Forum, Working Group 6 Wireless Innovation Forum (formerly Software Defined Radio Forum) Wireless networking Radio resource management Reconfigurable computing
Reconfigurability
[ "Technology", "Engineering" ]
360
[ "Wireless networking", "Computer networks engineering" ]
2,238,581
https://en.wikipedia.org/wiki/Goodnites
Goodnites (formerly Pull-Ups Goodnites; known as DryNites in the United Kingdom and most markets outside of North America) are diapers designed for managing bedwetting. Goodnites are produced by Kimberly-Clark. The product has also been seen titled as Huggies Goodnites on official Huggies branded webpages. Goodnites constitute the middle level of Kimberly-Clark's line of disposable products, being targeted at children, teens and young adults. The company also produces Huggies diapers for babies, Pull-Ups training pants for toddlers undergoing toilet training, Poise pads for adult women, and Depend incontinence products for adults in general. History 1990s 1994 - Goodnites released The original Goodnites were released in 1994. They came in two sizes: medium (45-65 lbs) and large (65-85 lbs). 1999 - Goodnites released a new size In 1999, Kimberly-Clark introduced a new extra-large size (85 lbs-125 lbs and up). 2000s 2001 A "Cloth-Like Cover" replaced the previous cover. 2003 - Goodnites introduce a new fit Kimberly-Clark introduced the "Trim-Fit" style (a drastic reduction in padding thickness and the overall size of the pull-up). 2004 - Goodnites introduces gender-specific Prior to 2004, Goodnites were unisex, plain-white pull-ups with only a faux tag printed on the back. Kimberly-Clark introduced gender-specific Goodnites with absorbency zoned for boys and girls. Medium Goodnites became small/medium and were designed to fit kids 38-65 pounds. The small/medium size is the equivalent of size 4-8 underwear (Size 8 US is 23.5in Waist). Large and extra large Goodnites were combined into large/extra-Large for kids from 60-125+ pounds (Height for healthy weight of 125 pounds is 4' 11" up to 5' 8" (The CDC states that the Average Height for Men is 69in or 5' 9" with an average waist size of 40.3in while the Average for Women is 63.6in or 5' 3.6" with the average waist size of 38.7in)). 
The large/extra-large size is equivalent to size 8-14 underwear (Size 14 US is 27in Waist). 2007 - Boxer and Sleep Shorts Goodnites brand releases the boxers and sleep shorts 2010s 2011 Goodnites boxers and sleep shorts disappeared from the official website 2012 - Bed Mats introduced Goodnites released the bed mats product 2014 Goodnites Tru-fit were released 2017 - Goodnites released a new size A new extra-small size was introduced for both boys and girls. It fits clothing sizes 3-5 and is designed for children weighing 28-45 lbs. 2019 - Logo style updates and new products Goodnites updated its logo online, stating "New Look, Coming Soon The same protection you trust, with brand new packaging!" Goodnites Tru-fit was discontinued Goodnites introduced the absorbent inserts 2020s 2021 - Goodnites released new sizes In early 2021, Kimberly-Clark adjusted the sizing of Goodnites and introduced a new extra-large size, intended for those with kids' underwear size 14 to 20 as well as adult sizes up to a size 6 waist, and weight from 95–140 pounds or more (43-63+ kilograms), which are partially aimed toward teenagers and young adults. Extra-small now ranges from 28-43 pounds (13-20 kilograms) and small/medium at 43-68 pounds (20-31 kilograms), or underwear size 4 to 8. The previous large/extra-large size was downgraded to large and revised to be recommended for underwear sized 8 to 12 or weighing 68-95 pounds (31-43 kilograms).
Instead of the word "goodnites" in a form of cursive being formed by stars, the word is now a solid line with the same lowercase cursive as before. Effectiveness Absorbency Goodnites are advertised as absorbing urine using a combination of wood pulp and superabsorbent polymer. On social behavior In a study published in the Bulletin of Pediatric Health, Goodnites and similar bedwetting underpants were analyzed for effectiveness in relieving social anxiety related to bedwetting for boys ages 7 to 13 and for girls ages 5 to 15. Nearly five hundred boys who wore diapers on a nightly basis were compared to a control group experiencing the same problem but not wearing diapers to bed; 625 girls who wore diapers on a nightly basis were likewise compared to a control group experiencing the same problem but not wearing diapers to bed. The study found, predictably, that nearly all of the children were fearful of being discovered by their peers, while 48% of the 7-to-10-year-olds and 81% of the 11-to-13-year-olds described Goodnites, in particular, as being "a little" or "very babyish." Despite these statistics, 60% said they would not go to bed without them. Asked about what they feared upon "discovery," the top worries were verbal teasing (89%) and loss of friends (61%), followed closely by physical bullying (gaining bullies, being beaten up by a peer, given wedgies, swirlies, or other kinds of playground bullying) at 57%, and being compared to a baby (51%). Actual incidences of bullying due to bedwetting were found to be higher among the wearers than in the control, leading the study's author to conclude that the Goodnites and similar products did successfully add to the wearers' confidence, so that they engaged more in what was dubbed for the purposes of the study "risky behavior" (e.g. going to sleepovers, participation in camping trips); 17% of the experimental group reported bullying, while only 11% of the control reported bullying.
Current Products Goodnites Nighttime Underwear Goodnites are designed to be worn to bed in order to prevent wetting of the sheets and pajamas in case of an accident. Goodnites are pull-up style rather than tab-style to make it easier for the wearer to change their own pants and to reduce the chance of stigma associated with having to wear disposable underwear, by making the experience more similar to wearing actual underwear. Goodnites Bed Mats Goodnites released Goodnites Bed Mats in April 2012. They can be used to protect the mattress from bedwetting accidents. They are intended more for the occasional bedwetter: for a person who wets maybe once a week, a Goodnites garment would be wasted after two nights of wearing, whereas a bed mat protects the mattress only in the event of an accident. Goodnites Bed Mats feature adhesive to allow them to stick to the bed. Discontinued Products Boxers & Sleep Shorts Goodnites Boxers (for boys) and Sleep Shorts (for girls) were a product manufactured by Kimberly-Clark from 2007–2009, and distributed from 2007–2010. They were designed to look and feel like boxers. They were blue for boys and pink for girls. The outer covering was cloth-like to look like a pair of boxers. The inside was pull-up underwear. As of 2011, Kimberly-Clark makes no reference to this product line on the official Goodnites website. Tru-Fit The Tru-Fit line was a pad-and-pants system that combined an absorbent, disposable liner inside a rubberized, waterproof pair of boxer shorts. It came in 4 styles. The line was likely released in 2014 and discontinued in 2019; the company's Facebook page stated, "Our GoodNites Tru-Fit underwear have been discontinued, however, you may be able to find the Tru-Fit underwear through online retailers until inventories are depleted." Goodnites Inserts In 2019, Goodnites introduced inserts for boys who experience minor leakage while sleeping. They fit inside underwear briefs and are one size fits most.
They are not recommended for heavy to complete loss of bladder control or for full bedwetting accidents. In 2021, inserts for girls were introduced with similar functionality. They fit inside a standard girls' underwear brief. As of 2023, the official Goodnites website (owned by Kimberly-Clark) makes no mention of the product. Competition When they were first released, Goodnites were an alternative to waterproof mattress pads and more expensive disposable youth diapers intended for children with disabilities; as a result, they lacked any direct competition. By 2000, Goodnites' primary competition consisted of store brand disposable bedwetting diapers. In 2002, Procter & Gamble, Kimberly-Clark's primary competitor, introduced Luvs Sleepdrys as a direct competitor to Goodnites. Luvs Sleepdrys were discontinued in 2004, and, from 2004 to 2008, store brands were the primary form of direct competition to Goodnites. In 2008, Procter & Gamble released Pampers Underjams as another direct competitor to Goodnites. In 2020, Procter & Gamble discontinued Pampers Underjams and replaced them with Ninjamas. As of 2023, Goodnites' competition comes from both Ninjamas and store brand pull-ups or diapers. References External links Goodnites USA Goodnites English-Canada Goodnites French-Canada Products introduced in 1994 Kimberly-Clark brands Toilet training
Goodnites
[ "Biology" ]
2,095
[ "Excretion", "Toilet training" ]
2,238,710
https://en.wikipedia.org/wiki/Railway%20Technical%20Centre
The Railway Technical Centre (RTC) in London Road, Derby, England, was the technical headquarters of the British Railways Board and was built in the early 1960s. British Rail described it as the largest railway research complex in the world. The RTC centralised most of the technical services provided by the regional Chief Mechanical & Electrical Engineers (CM&EE) to form the Department of Mechanical & Electrical Engineering (DM&EE). In addition, it housed the newly formed British Rail Research Division which reported directly to the Board. The latter is well known for its work on the experimental Advanced Passenger Train (APT-E). At that early stage this was a concept vehicle, and in time the DM&EE applied the new knowledge to existing practice in the design of the High Speed Train (HST), the later prototype APT-P and other high-speed vehicles. History Opening The Research Division was the first to move into the purpose-built accommodation on London Road. This was formed initially with personnel from other departments around the country, including the Electrical Research Division from Rugby, the Mechanical Engineers Research Section, the Civil Engineering Research Unit (Track Lab), and the Chemical Research Unit, while the Scientific Services Division occupied the former LMS Scientific Research Laboratory building across the road known as Hartley House. The embryo RTC site (mainly Kelvin House and the Research Test Hall) was officially opened by Prince Philip, Duke of Edinburgh in May 1964. Later additional buildings were added: Trent House and Derwent House, the Advanced Projects lab, then Stephenson House, Lathkill House and finally Brunel House. Department of Mechanical & Electrical Engineering In addition to the research employees, the RTC became the headquarters of the DM&EE. 
This brought together engineers from the regional departments, along with the Drawing Offices, the Testing & Performance Section and the Engineering Development Unit workshop (EDU) from Darlington, and the Workshops Division (which later became British Rail Engineering Limited); it was also home to the Board's Central Purchasing Department. Strangely, the layout of equipment within the new workshop was kept as near as possible the same as in the original. Following this came the Plastics Development Unit from Eastleigh, which, among other innovations, was responsible for the design of the High Speed Train's streamlined cabs as well as the prototype Mark 3 coach doors. Test tracks When research and testing required stretches of real railway line, the Research Division used the Old Dalby Test Track, a line at Mickleover in Derby, and the High Marnham Test Track – the former LD&ECR line from Shirebrook to Tuxford – where one of Network Rail's Rail Innovation and Development Centres (RIDC) is located. RTC today At privatisation, most of the facilities were taken over by commercial railway engineering companies, and the site was marketed as the "rtc Business Park", renting space to a range of small consultancy firms. The only facility which is still used for railway research is the moving-model aerodynamic test facility. The former RTC site is used by Loram to carry out repairs and maintenance on railway vehicles. It was also used by Rampart Carriage & Wagon Services (RC&WS), which went into liquidation in 2013. A large part of the site is used as storage and an operating base by Loram and Network Rail, whose rolling stock on site forms part of Network Rail testing trains. Usual traction on these trains is either Colas Rail class 37s or class 67s. The New Measurement Train and Loram C21 Grinding/Rail Profile Trains are also maintained at this facility.
LCR (London and Continental Railways) took over the site in 2013 in response to a demand from the local community to retain RTC's position as a key employment site and preserve its status as a core asset for Derby's internationally acclaimed engineering business cluster. References General references British Railway Research 1864-1965 by S. Wise, C.Eng., F.I.Mech.E., M.I.M. (edited by A. O. Gilchrist and with a biographical note by E. S. Burdon) External links TrainTesting.com - personal website of a former RTC employee Old Dalby and Mickleover railway test tracks departmentals.com - The RTC Business Park Website - LCR London and Continental Railway British Rail research and development Engineering research institutes History of Derby Rail transport in Derby Science and technology in Derbyshire
Railway Technical Centre
[ "Engineering" ]
895
[ "Engineering research institutes" ]
2,238,741
https://en.wikipedia.org/wiki/Bioactive%20glass
Bioactive glasses are a group of surface reactive glass-ceramic biomaterials and include the original bioactive glass, Bioglass. The biocompatibility and bioactivity of these glasses has led them to be used as implant devices in the human body to repair and replace diseased or damaged bones. Most bioactive glasses are silicate-based glasses that are degradable in body fluids and can act as a vehicle for delivering ions beneficial for healing. Bioactive glass is differentiated from other synthetic bone grafting biomaterials (e.g., hydroxyapatite, biphasic calcium phosphate, calcium sulfate), in that it is the only one with anti-infective and angiogenic properties. History Discovery and development Larry Hench and colleagues at the University of Florida first developed these materials in 1969, and they have been further developed by his research team at Imperial College London and other researchers worldwide. Hench began development by submitting a proposal to the United States Army Medical Research and Development Command in 1968, based on his hypothesis that the body rejects metallic or polymeric implant materials unless they are able to form a hydroxyapatite coating of the kind found in bone. Hench and his team received funding for one year and began development of what would become the 45S5 composition. The name "Bioglass" was trademarked by the University of Florida as a name for the original 45S5 composition. It should therefore only be used in reference to the 45S5 composition and not as a general term for bioactive glasses. Using the Na2O-CaO-SiO2 phase diagram, Hench chose a composition of 45% SiO2, 24.5% Na2O, 24.5% CaO, and 6% P2O5 to allow for a large amount of CaO and some P2O5 in a SiO2-Na2O matrix. The glass was batched, melted, and cast into small rectangular implants to be inserted into the femoral bones of rats for six weeks, as developed by Dr. Ted Greenlee of the University of Florida. After six weeks, Dr.
Greenlee reported "These ceramic implants will not come out of the bone. They are bonded in place. I can push on them, I can shove them, I can hit them and they do not move. The controls easily slide out." These findings were the basis of the first paper on 45S5 bioactive glass in 1971 which summarized that in vitro experiments in a calcium and phosphate ion deficient solution showed a developed layer of hydroxyapatite similar to the observed hydroxyapatite later in vivo by Dr. Greenlee. Animal Testing Scientists in Amsterdam, the Netherlands, took cubes of bioactive glass and implanted them into the tibias of guinea pigs in 1986. After 8, 12, and 16 weeks of implantation, the guinea pigs were euthanized and their tibias were harvested. The implants and tibias were then subjected to a shear strength test to determine the mechanical properties of the implant to bone boundary, where it was found to have a shear strength of 5 N/mm2. Electron microscopy showed the ceramic implants had bone remnants firmly adhered to them. Further optical microscopy revealed bone cell and blood vessel growth within the area of the implant which was proof of biocompatibility between the bone and implant. Bioactive glass was the first material found to create a strong bond with living bone tissue. Structure Solid state NMR spectroscopy has been very useful in determining the structure of amorphous solids. Bioactive glasses have been studied by 29Si and 31P solid state MAS NMR spectroscopy. The chemical shift from MAS NMR is indicative of the type of chemical species present in the glass. The 29Si MAS NMR spectroscopy showed that Bioglass 45S5 was a Q2 type-structure with a small amount of Q3; i.e., silicate chains with a few crosslinks. 
The 31P MAS NMR revealed predominantly Q0 species, i.e., PO43−; subsequent MAS NMR spectroscopy measurements have shown that Si-O-P bonds are below detectable levels. Compositions There have been many variations on the original composition, which was Food and Drug Administration (FDA) approved and termed Bioglass. This composition is known as Bioglass 45S5. The compositions include: 45S5: 45 wt% SiO2, 24.5 wt% CaO, 24.5 wt% Na2O and 6.0 wt% P2O5. Bioglass S53P4: 53 wt% SiO2, 23 wt% Na2O, 20 wt% CaO and 4 wt% P2O5. 58S: 58 wt% SiO2, 33 wt% CaO and 9 wt% P2O5. 70S30C: 70 wt% SiO2, 30 wt% CaO. 13-93: 53 wt% SiO2, 6 wt% Na2O, 12 wt% K2O, 5 wt% MgO, 20 wt% CaO, 4 wt% P2O5. Bioglass 45S5 The composition was originally selected because it is roughly eutectic. The 45S5 name signifies a glass with 45 wt% SiO2 and a 5:1 molar ratio of calcium to phosphorus; glasses with lower Ca/P ratios do not bond to bone. The key compositional features of Bioglass are that it contains less than 60 mol% SiO2, high Na2O and CaO contents, and a high CaO/P2O5 ratio, which make Bioglass highly reactive in aqueous media, and bioactive. High bioactivity is the main advantage of Bioglass, while its disadvantages include mechanical weakness and low fracture resistance due to its amorphous two-dimensional glass network. The bending strength of most Bioglass is in the range of 40–60 MPa, which is not enough for load-bearing applications. Its Young's modulus is 30–35 GPa, very close to that of cortical bone, which can be an advantage. Bioglass implants can be used in non-load-bearing applications, and for buried implants loaded slightly or compressively. Bioglass can also be used as a bioactive component in composite materials or as a powder, and can be used to create an artificial septum to treat perforations caused by cocaine abuse. It has no known side-effects. The first successful surgical use of Bioglass 45S5 was in the replacement of ossicles in the middle ear, as a treatment for conductive hearing loss.
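The quoted 5:1 Ca/P molar ratio of 45S5 can be checked from the weight-percent composition listed above. The arithmetic below is my own sketch (standard oxide molar masses in g/mol), not taken from the cited sources:

```python
# Check of the ~5:1 Ca/P molar ratio for Bioglass 45S5, computed from
# the weight-percent composition given in the text.
M = {"SiO2": 60.08, "Na2O": 61.98, "CaO": 56.08, "P2O5": 141.94}
wt_45s5 = {"SiO2": 45.0, "Na2O": 24.5, "CaO": 24.5, "P2O5": 6.0}

moles = {oxide: wt / M[oxide] for oxide, wt in wt_45s5.items()}
ca = moles["CaO"]        # one Ca atom per CaO formula unit
p = 2 * moles["P2O5"]    # two P atoms per P2O5 formula unit
print(f"Ca/P molar ratio = {ca / p:.2f}")  # → Ca/P molar ratio = 5.17
```

The result, roughly 5:1, matches the ratio encoded in the "45S5" name.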
The advantage of 45S5 is that it shows no tendency to form fibrous tissue. Other uses are in cones for implantation into the jaw following a tooth extraction. Composite materials made of Bioglass 45S5 and a patient's own bone can be used for bone reconstruction. Bioglass is comparatively soft relative to other glasses. It can be machined, preferably with diamond tools, or ground to powder. Bioglass has to be stored in a dry environment, as it readily absorbs moisture and reacts with it. Bioglass 45S5 is manufactured by conventional glass-making technology, using platinum or platinum-alloy crucibles to avoid contamination, since contaminants would interfere with the material's chemical reactivity in the organism. Annealing is a crucial step in forming bulk parts, due to the high thermal expansion of the material. Heat treatment of Bioglass reduces the volatile alkali metal oxide content and precipitates apatite crystals in the glass matrix. The resulting glass–ceramic material, named Ceravital, has higher mechanical strength and lower bioactivity.

Bioglass S53P4

The formula of S53P4 was first developed in the early 1990s in Turku, Finland, at Åbo Akademi University and the University of Turku. It received its product claim for use in bone cavity filling in the treatment of chronic osteomyelitis in 2011. S53P4 is among the most studied bioactive glasses on the market, with over 150 publications. When S53P4 bioactive glass is placed into a bone cavity, it reacts with body fluids to activate the glass. During this activation period, the bioactive glass goes through a series of chemical reactions, creating the ideal conditions for bone to rebuild through osteoconduction:

Na, Si, Ca, and P ions are released.
A silica gel layer forms on the bioactive glass surface.
CaP crystallizes, forming a layer of hydroxyapatite on the surface of the bioactive glass.

Once the hydroxyapatite layer is formed, the bioactive glass interacts with biological entities, i.e., blood proteins, growth factors and collagen.
Following this interaction, the osteoconductive and osteostimulative processes help the new bone grow onto and between the bioactive glass structures. Bioactive glass bonds to bone, facilitating new bone formation. Osteostimulation begins by stimulating osteogenic cells to increase the remodeling rate of bone. The radio-dense quality of bioactive glass allows for post-operative evaluation. In the final transformative phase, the process of bone regeneration and remodeling continues. Over time the bone fully regenerates, restoring the patient's natural anatomy, and bone consolidation occurs. S53P4 bioactive glass continues to remodel into bone over a period of years. Bioactive glass S53P4 is currently the only bioactive glass on the market which has been proven to inhibit bacterial growth effectively. The bacterial growth-inhibiting properties of S53P4 derive from two simultaneous chemical and physical processes, which occur once the bioactive glass reacts with body fluids. Sodium (Na) is released from the surface of the bioactive glass and induces an increase in pH (an alkaline environment), which is not favorable for bacteria, thus inhibiting their growth. The released Na, Ca, Si and P ions also give rise to an increase in osmotic pressure due to an elevation in salt concentration, i.e., an environment in which bacteria cannot grow.

Bioglass 8625

Bioglass 8625, also called Schott 8625, is a soda-lime glass used for encapsulation of implanted devices. The most common use of Bioglass 8625 is in the housings of RFID transponders for use in human and animal microchip implants. It is patented and manufactured by Schott AG. Bioglass 8625 is also used for some piercings. Bioglass 8625 does not bond to tissue or bone; it is held in place by fibrous tissue encapsulation. After implantation, a calcium-rich layer forms on the interface between the glass and the tissue. Without an additional antimigration coating it is subject to migration in the tissue.
The antimigration coating is a material that bonds to both the glass and the tissue; Parylene, usually Parylene type C, is often used for this purpose. Bioglass 8625 has a significant content of iron, which provides infrared light absorption and allows sealing by a light source, e.g., a Nd:YAG laser or a mercury-vapor lamp. The content of Fe2O3 yields high absorption with a maximum at 1100 nm, and gives the glass a green tint. The use of infrared radiation instead of flame or contact heating helps prevent contamination of the device. After implantation, the glass reacts with the environment in two phases, over the span of about two weeks. In the first phase, alkali metal ions are leached from the glass and replaced with hydrogen ions; a small amount of calcium ions also diffuses from the material. During the second phase, the Si-O-Si bonds in the silica matrix undergo hydrolysis, yielding a gel-like surface layer rich in Si-O-H groups. A calcium phosphate-rich passivation layer gradually forms over the surface of the glass, preventing further leaching. It is used in microchips for tracking many kinds of animals, and recently in some human implants. The U.S. Food and Drug Administration (FDA) approved the use of Bioglass 8625 in humans in 1994.

Bioglass 13-93

Compared to Bioglass 45S5, silicate 13-93 bioactive glass is composed of a higher proportion of SiO2 and includes K2O and MgO. It is commercially available from Mo-Sci Corp. or can be prepared directly by melting a mixture of Na2CO3, K2CO3, MgCO3, CaCO3, SiO2 and NaH2PO4 · 2H2O in a platinum crucible at 1300 °C and quenching between stainless steel plates. The 13-93 glass has received approval for in vivo use in the US and Europe. It has more facile viscous flow behavior and a lower tendency to crystallize upon being pulled into fibers. 13-93 bioactive glass powder can be dispersed into a binder to create ink for robocasting or direct-ink 3D printing.
The mechanical properties of the resulting porous scaffolds have been studied in various works of literature. The printed 13-93 bioactive glass scaffold in the study by Liu et al. was dried in ambient air, fired to 600 °C under an O2 atmosphere to remove the processing additives, and sintered in air for 1 hour at 700 °C. In the pristine sample, the flexural strength (11 ± 3 MPa) and flexural modulus (13 ± 2 MPa) are comparable to the minimum values for trabecular bone, while the compressive strength (86 ± 9 MPa) and compressive modulus (13 ± 2 GPa) are close to cortical bone values. However, the fracture toughness of the as-fabricated scaffold was 0.48 ± 0.04 MPa·m1/2, indicating that it is more brittle than human cortical bone, whose fracture toughness is 2–12 MPa·m1/2. After immersing the sample in a simulated body fluid (SBF) or subcutaneous implantation in the dorsum of rats, the compressive strength and compressive modulus decrease sharply during the initial two weeks but more gradually after two weeks. The decrease in mechanical properties was attributed to the partial conversion of the glass filaments in the scaffolds into a layer mainly composed of a porous hydroxyapatite-like material. Another work by Kolan and co-workers used selective laser sintering instead of conventional heat treatment. After optimization of the laser power, scan speed, and heating rate, the compressive strength of the sintered scaffolds varied from 41 MPa for a scaffold with ~50% porosity to 157 MPa for dense scaffolds. The in vitro study using SBF resulted in a decrease in the compressive strength, but the final value was similar to that of human trabecular bone. 13-93 porous glass scaffolds were also synthesized using a polyurethane foam replication method in the report by Fu et al. The stress–strain relationship was examined in the compressive test using eight samples with 85 ± 2% porosity.
The resultant curve demonstrated a progressive breakdown of the scaffold structure and an average compressive strength of 11 ± 1 MPa, which was in the range of human trabecular bone and higher than that of competing bioactive materials for bone repair, such as hydroxyapatite scaffolds with the same extent of porosity and polymer–ceramic composites prepared by the thermally induced phase separation (TIPS) method.

Synthesis

Bioactive glasses have been synthesized through methods such as conventional melting and quenching, the sol–gel process, flame synthesis, and microwave irradiation. The synthesis of bioglass has been reviewed by various groups, with sol–gel synthesis being one of the most frequently used methods for producing bioglass composites, particularly for tissue engineering applications. Other methods of bioglass synthesis have been developed, such as flame and microwave synthesis, though they are less prevalent in research.

Bioactive metallic glass

Bioactive metallic glass is a subset of bioactive glass in which the bulk material is composed of a metal-glass substrate and is coated with bioactive glass in order to make the material bioactive. The reasoning behind the introduction of the metallic base is to create a less brittle, stronger material that can be permanently implanted within the body. Metallic glasses exhibit lower Young's moduli and higher elastic limits than bioactive glass, and as such allow for more deformation of the material before fracture occurs. This is highly desirable, as a permanent implant needs to avoid shattering within the patient's body. Common materials for the metallic bulk include Zr and Ti, whereas some examples of metals that should not be used as bulk materials are Al, Be, and Ni.
Laser-cladding

While metals are not necessarily inherently bioactive, bioactive glass coatings applied to metal substrates via laser cladding introduce the bioactivity that the glass would express, with the added benefit of a metal base. Laser cladding is a method by which bioactive glass microparticles are directed in a stream at the bulk material and heated sufficiently that they melt into a coating on it.

Sol-gel processing

Metals can also be coated with bioactive glass using a sol-gel process, in which the bioactive glass is sintered onto the metal at a controlled temperature that is high enough to perform the sintering, but low enough to avoid phase shifts and other unwanted side effects. Experimentation has been done with sintering double-layered, silica-based bioactive glass onto stainless steel substrates at 600 °C for 5 hours. This method has proven to maintain a largely amorphous structure while containing key crystalline elements, and also achieves a remarkably similar level of bioactivity to bioactive glass.

Mechanism of activity

The underlying mechanisms that enable bioactive glasses to act as materials for bone repair have been investigated since the first work of Hench et al. at the University of Florida. Early attention was paid to changes in the bioactive glass surface. Five inorganic reaction stages are commonly thought to occur when a bioactive glass is immersed in a physiological environment:

1. Ion exchange, in which modifier cations (mostly Na+) in the glass exchange with hydronium ions in the external solution.
2. Hydrolysis, in which Si-O-Si bridges are broken, forming Si-OH silanol groups, and the glass network is disrupted.
3. Condensation of silanols, in which the disrupted glass network changes its morphology to form a gel-like surface layer, depleted in sodium and calcium ions.
4. Precipitation, in which an amorphous calcium phosphate layer is deposited on the gel.
5. Mineralization, in which the calcium phosphate layer gradually transforms into crystalline hydroxyapatite that mimics the mineral phase naturally contained within vertebrate bones.

Later, it was discovered that the morphology of the gel surface layer was a key component in determining the bioactive response. This was supported by studies on bioactive glasses derived from sol-gel processing. Such glasses could contain significantly higher concentrations of SiO2 than traditional melt-derived bioactive glasses and still maintain bioactivity (i.e., the ability to form a mineralized hydroxyapatite layer on the surface). The inherent porosity of the sol-gel-derived material was cited as a possible explanation for why bioactivity was retained, and often enhanced, with respect to the melt-derived glass. Subsequent advances in DNA microarray technology enabled an entirely new perspective on the mechanisms of bioactivity in bioactive glasses. Previously, it was known that a complex interplay existed between bioactive glasses and the molecular biology of the implant host, but the available tools did not provide a sufficient quantity of information to develop a holistic picture. Using DNA microarrays, researchers are now able to identify entire classes of genes that are regulated by the dissolution products of bioactive glasses, resulting in the so-called "genetic theory" of bioactive glasses. The first microarray studies on bioactive glasses demonstrated that genes associated with osteoblast growth and differentiation, maintenance of the extracellular matrix, and promotion of cell-cell and cell-matrix adhesion were up-regulated by conditioned cell culture media containing the dissolution products of bioactive glass.

Medical uses

S53P4 bioactive glass was first used in a clinical setting as an alternative to bone or cartilage grafts in facial reconstruction surgery.
The use of artificial materials as bone prostheses had the advantage of being much more versatile than traditional autotransplants, as well as having fewer postoperative side effects. There is tentative evidence that bioactive glass of the composition S53P4 may also be useful in long bone infections. Support from randomized controlled trials, however, is still not available as of 2015.

See also

Ceramic foam
Nanofoam
Metal foam
Osseointegration
Porous medium
Synthesis of bioglass

References

Periodontology
Biomaterials
Glass compositions
Glass-ceramics
Glass chemistry
American inventions
Glass
Bioactive glass
https://en.wikipedia.org/wiki/British%20Rail%20Research%20Division
The British Rail Research Division was a division of the state-owned railway company British Rail (BR). It was charged with conducting research into improving various aspects of Britain's railways, particularly in the areas of reliability and efficiency, including achieving cost reductions and increasing service levels. Its creation was endorsed by the newly created British Rail Board (BRB) in 1963, and it incorporated personnel and existing resources from all over the country, including the LMS Scientific Research Laboratory. It was primarily based at the purpose-built Railway Technical Centre in Derby. In addition to its domestic activities, the Research Division would provide technology and personnel to other countries for varying purposes and periods under the trade name "Transmark". It became recognised as a centre of excellence in its field; the theoretical rigour of its approach to railway engineering superseded the ad hoc methods that had prevailed previously. Its research led to advances in various sectors, such as in the field of signalling, where progress was made with block systems, remote operation systems, and the Automatic Warning System (AWS). Trackside improvements, such as the standardisation of overhead electrification equipment and refinements to the plasma torch, were also results of the Research Division's activities. Perhaps its most high-profile work was on new forms of rolling stock, such as the High Speed Freight Vehicle and railbuses, which led to the introduction of the Class 140. One of its projects that gained particularly high-profile coverage was the Advanced Passenger Train (APT), a high-speed tilting train intended for BR's Intercity services. However, due to schedule overruns, negative press coverage, and a lack of political support, work on the APT ceased in the mid-1980s in favour of the more conventional InterCity 125 and InterCity 225 trainsets.
The Research Division was reorganised in the run-up to the privatisation of British Rail during the 1990s, the bulk having become "BR Research Limited". This unit was acquired by the private company AEA Technology in 1996, which has since become Resonate Group. Several elements of its work have continued under various organisations; for example, the patents filed during the APT's development were harnessed in the development of the Pendolino, a modern high-speed tilting train.

Background

During the mid-1950s, it became increasingly apparent to senior figures within the British Transport Commission (BTC) that, in light of mixed results from using external contractors, there was value in British Rail performing some research projects in-house instead. In August 1958, Dr F. T. Barnwell was appointed by the BTC to prepare and present specific electrical research proposals; the creation of an initially small Electrical Research Section employing 31 staff was also authorised by the BTC in July 1960. Many of these early proposals were related to traction and power equipment, such as motor control, signalling, digital computers, and 25 kV AC railway electrification. Several existing research efforts, such as into rail adhesion, were also folded into the new section's remit; in June 1960, the Rugby Locomotive Testing Station was also transferred to the Chief Electrical Engineer's responsibility and became a key site for the section. During 1963, the newly created British Rail Board (BRB) agreed to transfer the Electrical Research Section to the British Rail Research Department, with the purpose of forming a completely new division. The Research Division brought together personnel and expertise from all over the country, including the LMS Scientific Research Laboratory. Its remit was not simply the improvement of existing equipment, or the solution of existing problems, but fundamental research from first principles into railway operation.
The results of its work would go on to inform development by engineers, manufacturers and railways all over the world. For instance, once the initial APT-E experimental project was complete, it passed to the mechanical engineering department to build the APT-P prototype. In time, engineers would be seconded to other countries for varying periods under the trade name "Transmark". One early matter for the new division was the choice of a long-term location, Rugby being passed over in favour of Derby, where the purpose-built Railway Technical Centre was constructed during the 1960s at a cost of £4 million. Nearby, the Research Division developed its first test track on the old Great Northern Railway line between Egginton Junction and Derby Friargate (later used only as far as Mickleover), which was used by the Train Control Group. Later on, when the revolutionary Advanced Passenger Train (APT) was being developed, a second test track was created on the line between Melton Junction and Edwalton (known as the Old Dalby Test Track), which was acquired specifically to test this train. The Mickleover test track was closed and lifted in the early 1990s; however, Old Dalby remained in use into the twenty-first century.

Projects

Early benefits of the Research Division's work were already being felt by the late 1960s in the field of signalling, specifically in block systems. While practical demonstrations were being performed as early as 1964, some of these efforts, such as an early use of radar-based obstacle detection, proved not mature enough for deployment. One project of this nature that was highly impactful on future railway operations was the creation of automated simulations of traffic flow through a network. In response to concerns raised by managers of British Rail's Southern Region, the research team developed improvements to the Automatic Warning System (AWS), sometimes referred to as Signal Repeating AWS, which would be deployed extensively in that region.
Another early advance was the remote control of freight locomotives at low speed, such as when coal trains were delivering their materials to power stations. By the mid-1960s, the Research Division had multiple traction-related projects underway; however, they were negatively impacted by the sudden death of senior engineer James Brown. Work into the use of induction drives, for both rotary and linear motors, was one such project; a rail-mounted trolley was developed and tested as part of this research. It was concluded that, largely due to the cost of the aluminium reaction rail necessary, linear motors were not economically practical at that time. The division also collaborated with English Electric to produce a heavily modified demonstrator, converted from a redundant early diesel-electric locomotive, to evaluate the rotary induction motor. Other advances made by researchers in the field of overhead electrification, such as hydraulic dampers and flexible contact wire supports, greatly aided the Modernisation of the West Coast Main Line. During the late 1960s, attention was paid to expanding the Research Division's mathematical capabilities. This heavily contributed to the development of the Junction Optimisation Technique (JOT), an approach for optimising traffic flows through complex junctions (such as that outside Glasgow Central railway station). The arrival of more powerful computers around this time allowed time-based, rather than event-based, traffic simulations to be programmed as well, leading to the General Area Time-based Train Simulator (GATTS). By the end of the 1960s, the division had made progress in the area of rail adhesion; influenced by French experiments with spark discharges, development of what became the plasma torch proceeded on the basis of promising test results gathered in 1967. Subsequent testing provided even better results; however, progress was badly impacted by the departure of Dr Alston in 1971.
The division also provided support in troubleshooting issues encountered with the recently deployed overhead electrification apparatus; the development of simpler and standardised equipment and further research into digitally simulating the dynamic behaviour of overhead equipment proceeded. The success of these efforts was such that, having been initially authorised for a five-year period, the BRB approved a further 11-year extension in 1973, thus continuing the Research Division's work in these areas through to March 1985. One key research project examined the tendency of new wheels to hunt, which was counteracted by deliberately profiling, or pre-wearing, the wheels. During the 1960s, an extensive study was performed by the aeronautical engineer Alan Wickens, which identified dynamic instability as the cause. The conclusion that a properly damped suspension system, both horizontal and vertical, was required led to further projects, such as the High Speed Freight Vehicle, which started work during the late 1960s and reached a high point during the mid-1970s. Various tests of the High Speed Freight Vehicle were carried out between 1975 and 1979. An even more radical freight vehicle, the Autowagon, was also worked on during the early 1970s: the concept was for individual self-powered container-carrying wagons automatically loading, traversing the rail network, and unloading as required. This project never proceeded beyond demonstrations and studies into the control systems required. During the mid-1970s, British Rail became interested in introducing a new generation of railbuses; thus, the Research Division collaborated with British Leyland to jointly develop and evaluate several prototype four-wheel vehicles, commonly referred to as LEVs (Leyland Experimental Vehicles). These prototypes were essentially Leyland National bus bodies mounted on a modified High Speed Freight Vehicle chassis. Testing commenced in 1978.
A more capable two-car prototype railbus, the Class 140, was built between 1979 and 1981. Following its early use as a testbed, during which the Class 140 toured several different regions across the UK, it later served as a demonstrator for the subsequent production units based on the type: the Class 141, introduced in 1984, and the Class 142, introduced in 1985. These production classes diverge from the Class 140's design in numerous places; examples include the separation of the underframe from the body above by a flexible mounting, a reduction in the depth of the underframe for maintenance accessibility, and the use of road bus-standard electrical equipment, passenger fittings, and general cab layout. Likely the most prominent project undertaken by the Research Division was the Advanced Passenger Train (APT), a high-speed tilting train intended to accelerate Britain's Intercity services. This work, begun during the mid-1960s, was in part motivated, and influenced, by the recent success of the Japanese Shinkansen line between Tokyo and Osaka. The use of tilting aligned the lateral forces with the floor, in turn allowing higher top speeds to be attained before passenger comfort was adversely impacted. An active tilting system, using hydraulic actuation, was to enable the APT to round corners 40% faster than conventional counterparts. The prototype APT-E, powered by gas turbines, conducted its first run on 25 July 1972. Due to trade union opposition, it did not run again on the main line until August 1973. During testing on the Great Western Main Line on 10 August 1975, the prototype set a new British railway speed record. However, by the early 1980s, the project had been running for over a decade and the trains were still not in service. The APT was quietly abandoned during the mid-1980s in favour of the more conventional InterCity 125 and InterCity 225 trainsets.
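The 40% curve-speed figure can be illustrated with a simple point-mass model: in the carbody frame a passenger feels a lateral acceleration of (v²/r)·cos θ − g·sin θ, where θ is the total bank angle (track cant plus body tilt). The curve radius, cant, tilt angle, and comfort limit below are assumed example values for illustration, not figures from the APT programme.

```python
# Point-mass sketch of why tilting allows higher curve speeds: hold the
# perceived lateral acceleration at a fixed comfort limit and solve for v.
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_curve_speed(radius_m: float, bank_deg: float, a_comfort: float) -> float:
    """Speed (m/s) at which perceived lateral acceleration equals a_comfort."""
    theta = math.radians(bank_deg)
    v_sq = radius_m * (a_comfort + G * math.sin(theta)) / math.cos(theta)
    return math.sqrt(v_sq)

radius = 1000.0  # m, assumed curve radius
cant = 6.0       # degrees of track cant (assumed)
tilt = 9.0       # degrees of active body tilt (assumed)
limit = 0.6      # m/s^2 perceived lateral acceleration (assumed comfort limit)

v_plain = max_curve_speed(radius, cant, limit)
v_tilt = max_curve_speed(radius, cant + tilt, limit)
print(f"no tilt: {v_plain * 3.6:.0f} km/h, with tilt: {v_tilt * 3.6:.0f} km/h")
print(f"speed gain: {100 * (v_tilt / v_plain - 1):.0f}%")  # ~40%, as quoted above
```

With these example numbers the tilting case rounds the same curve roughly 40% faster at the same perceived lateral acceleration, matching the figure quoted for the APT.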
Other work involved looking at the tamping of ballast, properties of subsoils, and rail prestressing. A large part of the network had been converted to continuous welded rail which, during a hot summer, caused many problems with rail buckling; although there were no injuries, there were a number of derailments. Evaluations were conducted into the methods, costs, and benefits of tamping the ballast over the sleeper ends. There were extended studies into metal fatigue, and pioneering work in ultrasound crack detection at a time when it was being investigated elsewhere for medical diagnostics. Major signalling breakthroughs made by the Research Division included Solid State Interlocking and the Integrated Electronic Control Centre.

Reforms and privatisation

In 1986, finance for the division was moved from the board to the operating divisions. Thus emphasis shifted from pure research to problem solving. During 1989, BR Research became a self-contained unit working under contract to British Rail and other customers, and the route was open for privatisation. When British Rail was sold into private ownership during the 1990s, the Research Division (which had become "BR Research Limited") was bought by AEA Technology in 1996. The resulting business, "AEA Technology Rail", was subsequently sold in 2006 to a venture capital company and became DeltaRail Group. Transmark, the consultancy arm, was sold to Halcrow to become Halcrow Transmark. A somewhat dated display of material relating to the work of the Division was maintained in the Derby Industrial Museum.

Legacy

The Research Division had an uneasy relationship with other parts of BR, and like most of the products of Harold Wilson's "white heat of technology" speech, was killed off in the early 1980s. The basis of the unease was the traditional approach of most of BR compared with the theoretical and aerospace approaches adopted by the Research Division.
The hiring of graduates rather than training people up internally also caused tensions. It could be somewhat tactless, or perhaps naive, at times. The APT-E was provided with a single driver position central in the cab, at a time when the unions were resisting the loss of the "second man" (the fireman in steam days). After its first run out to Duffield the APT-E was blacked (boycotted) by the unions for a year. Nevertheless, its empirical research into vehicle dynamics has produced today's high speed trains, both freight and passenger, including the InterCity 125 and InterCity 225. The concept of a tilting system for the APT became part of the Pendolino, while the products of its signalling and operations control research are used over a significant amount of the British railway system. References Citations Bibliography Further reading External links Dave Coxon's site about train testing at the Research Division in the 1970s and 80s A selection of the research papers published by the division Access to searchable abstracts of most research papers published by the division (free access, but registration required) Department for Transport "A strategy for regeneration of rail research in Great Britain" A belated acknowledgement of the part played by British Rail's research effort British Rail research and development Collection of Derby Museum and Art Gallery Engineering research institutes Organisations based in Derby Rail transport in Derby Science and technology in Derbyshire 1964 establishments in England
British Rail Research Division
https://en.wikipedia.org/wiki/Static%20induction%20thyristor
The static induction thyristor (SIT, SITh) is a thyristor with a buried gate structure in which the gate electrodes are placed in the n-base region. Since the device is normally in the on-state, the gate electrodes must be negatively or anode biased to hold it in the off-state. It has low noise, low distortion, and high audio-frequency power capability. The turn-on and turn-off times are very short, typically 0.25 microseconds.

History

The first static induction thyristor was invented by Japanese engineer Jun-ichi Nishizawa in 1975. It was capable of conducting large currents with a low forward bias and had a small turn-off time. A self-controlled gate turn-off thyristor was commercially available through Tokyo Electric Co. (now Toyo Engineering Corporation) in 1988. The initial device consisted of a p+nn+ diode and a buried p+ grid. In 1999, an analytical model of the SITh was developed for the PSPICE circuit simulator. In 2010, a newer version of the SITh was developed by Zhang Caizhen, Wang Yongshun, Liu Chunjuan and Wang Zaixing, the new feature of which was its high forward blocking voltage.

See also

Static induction transistor
MOS composite static induction thyristor

References

External links

Static induction thyristor

Semiconductor devices
Solid state switches
Power electronics
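The quoted 0.25 µs switching times imply a rough upper bound on switching frequency. The sketch below assumes the 0.25 µs figure applies to both transitions and ignores conduction time, duty cycle, gate-drive and thermal limits, so it is an illustrative ceiling only.

```python
# Back-of-envelope bound on switching speed from the turn-on/turn-off
# times quoted above (~0.25 microseconds each).

t_on = 0.25e-6   # turn-on time, seconds
t_off = 0.25e-6  # turn-off time, seconds

# A full switching cycle must at least accommodate one turn-on and one
# turn-off transition, so the transition times alone cap the frequency at:
f_max = 1.0 / (t_on + t_off)

print(f"{f_max / 1e6:.1f} MHz")  # 2.0 MHz
```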
Static induction thyristor
https://en.wikipedia.org/wiki/Projection%20screen
A projection screen is an installation consisting of a surface and a support structure used for displaying a projected image for the view of an audience. Projection screens may be permanently installed on a wall, as in a movie theater; mounted to or placed in a ceiling, using a rollable projection surface that retracts into a casing (these can be motorized or manually operated); painted on a wall; or portable, with tripod or floor-rising models, as in a conference room or other non-dedicated viewing space. Another popular type of portable screen is the inflatable screen for outdoor movie screening (open-air cinema). Uniformly white or grey screens are used almost exclusively, so as to avoid any discoloration of the image, while the most desired brightness of the screen depends on a number of variables, such as the ambient light level and the luminous power of the image source. Flat or curved screens may be used depending on the optics used to project the image and the desired geometrical accuracy of the image production, flat screens being the more common of the two. Screens can be further designed for front or back projection, the more common being front-projection systems, which have the image source situated on the same side of the screen as the audience. Different markets exist for screens targeted for use with digital projectors, movie projectors, overhead projectors and slide projectors, although the basic idea for each of them is very much the same: front-projection screens work by diffusely reflecting the light projected onto them, whereas back-projection screens work by diffusely transmitting the light through them.

Screens by installation type in different settings

In commercial movie theaters, the screen is a reflective surface that may be either aluminized (for high contrast in moderate ambient light) or a white surface with small glass beads (for high brilliance under dark conditions).
The screen also has hundreds of small, evenly spaced holes to allow air to and from the speakers and subwoofer, which often are directly behind it. Rigid wall-mounted screens maintain their geometry perfectly, which makes them suitable for applications that demand exact reproduction of image geometry. Such screens are often used in home theaters, along with pull-down screens. Pull-down screens (also known as manual wall screens) are often used in spaces where a permanently installed screen would require too much space. These commonly use painted fabric that is rolled into the screen case when not in use, making them less obtrusive when the screen is not needed. Fixed-frame screens provide the greatest level of uniform tension on the screen's surface, resulting in optimal image quality. They are often used in home theater and professional environments where the screen does not need to be recessed into a case. Electric screens can be wall-mounted, ceiling-mounted or ceiling-recessed. These are often larger screens, though electric screens are available for home theater use as well. Electric screens are similar to pull-down screens, but instead of the screen being pulled down manually, an electric motor raises and lowers it. Electric screens are usually raised or lowered using either a remote control or a wall-mounted switch, although some projectors are equipped with an interface that connects to the screen and automatically lowers the screen when the projector is switched on and raises it when the projector is switched off. Switchable projection screens can be switched between opaque and clear states. In the opaque state, the projected image can be viewed from both sides of the screen, which makes such screens well suited to advertising on store windows. Mobile screens usually use either a pull-down screen on a free stand, or pull up from a weighted base. These can be used when it is impossible or impractical to mount the screen to a wall or a ceiling. 
Both mobile and permanently installed pull-down screens may be of the tensioned or non-tensioned variety. Tensioned models attempt to keep the fabric flat and immobile, whereas non-tensioned models have the fabric of the screen hanging freely from its support structure. In the latter screens, the fabric can rarely stay immobile if there are currents of air in the room, giving imperfections to the projected image. Specialty screens may not fall into any of these categories. These include non-solid screens, inflatable screens and others, and can be inexpensively made at home. See the respective articles for more information. Screen gain One of the most often quoted properties of a home theater screen is its gain. This is a measure of the reflectivity of light compared to a screen coated with magnesium carbonate, titanium dioxide, or barium sulfate, when the measurement is taken for light targeted and reflected perpendicular to the screen. Titanium dioxide is a bright white colour, but greater gains can be accomplished with materials that reflect more of the light parallel to the projection axis and less off-axis. Frequently quoted gain levels range from 0.8 for light-grey matte screens to 2.5 for the more highly reflective glass-bead screens. Very high gain levels could be attained simply by using a mirror surface, although the audience would then just see a reflection of the projector, defeating the purpose of using a screen. Many screens with higher gain are simply semi-glossy, and so exhibit more mirror-like properties, namely a bright "hot spot" in the screen: an enlarged (and greatly blurred) reflection of the projector's lens. Opinions differ as to when this "hot spotting" begins to be distracting, but most viewers do not notice differences as large as 30% in the image luminosity, unless presented with a test image and asked to look for variations in brightness. 
This is possible because humans have greater sensitivity to contrast in smaller details, but less so in luminosity variations as great as half of the screen. Other screens with higher gain are semi-retroreflective. Unlike mirrors, retroreflective surfaces reflect light back toward the source. Hot spotting is less of a problem with retroreflective high-gain screens. At the perpendicular direction used for gain measurement, mirror reflection and retroreflection are indistinguishable, and this has sown confusion about the behavior of high gain screens. A second common confusion about screen gain arises for grey-colored screens. If a screen material looks grey on casual examination then its total reflectance is much less than 1. However, the grey screen can have measured gain of 1 or even much greater than 1. The geometric behavior of a grey screen is different from that of a white screen of identical gain. Therefore, since geometry is important in screen applications, screen materials should be at least specified by their gain and their total reflectance. Instead of total reflectance, "geometric gain" (equal to the gain divided by the total reflectance) can be the second specification. Curved screens can be made highly reflective without introducing any visible hot spots, if the curvature of the screen, placement of the projector and the seating arrangement are designed correctly. The object of this design is to have the screen reflect the projected light back to the audience, effectively making the entire screen a giant "hot spot". If the angle of reflection is about the same across the screen, no distracting artifacts will be formed. Semi-specular high gain screen materials are suited to ceiling-mounted projector setups since the greatest intensity of light will be reflected downward toward the audience at an angle equal and opposite to the angle of incidence. 
However, for a viewer seated to one side of the audience, the opposite side of the screen appears much darker for the same reason. Some structured screen materials are semi-specularly reflective in the vertical plane while more perfectly diffusely reflective in the horizontal plane to avoid this. Glass-bead screens exhibit a phenomenon of retroreflection; the light is reflected more intensely back towards its source than in any other direction. They work best for setups where the image source is placed in the same direction from the screen as the audience. With retroreflective screens, the screen center might be brighter than the screen periphery, a kind of hot spotting. This differs from semi-specular screens, where the hot spot's location varies depending on the viewer's position in the audience. Retroreflective screens are seen as desirable due to the high image intensity they can produce with a given luminous flux from a projector. Screen geometry Projector screens are almost always rectangular in shape. They typically follow a standard display aspect ratio. For most home cinema setups there are two aspect ratios: 16:9 and CinemaScope. For classroom, business and house-of-worship settings, 16:10 is the more commonly used projector screen aspect ratio, because this matches the aspect ratio used by many modern computers. Square-shaped screens used for overhead projectors sometimes double as projection screens for digital projectors in meeting rooms, where space is scarce and multiple screens can seem redundant. These screens have an aspect ratio of 1:1 by definition. Most image sources are designed to project a perfectly rectangular image on a flat screen. If the audience stays relatively close to the projector, a curved screen may be used instead without visible distortion in the image geometry. Viewers closer or farther away will see a pincushion or barrel distortion, and the curved nature of the screen will become apparent when viewed off-axis. 
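The quantities quoted above (on-axis gain relative to a diffuse white reference, total reflectance, and geometric gain, i.e. gain divided by total reflectance) can be illustrated with a short numerical sketch. The sample materials below are hypothetical, chosen only to fall within the typical ranges quoted for matte-white, grey, and glass-bead surfaces:

```python
def geometric_gain(gain, total_reflectance):
    """Geometric gain: on-axis gain divided by total reflectance."""
    return gain / total_reflectance

# Hypothetical sample materials: (on-axis gain, total reflectance).
# A grey-looking screen (total reflectance well below 1) can still have
# gain >= 1 if it reflects preferentially along the projection axis.
materials = {
    "matte white":      (1.0, 0.95),
    "light grey matte": (0.8, 0.75),
    "high-gain grey":   (1.2, 0.60),
    "glass bead":       (2.5, 0.85),
}

for name, (gain, reflectance) in materials.items():
    print(f"{name:16s} gain={gain:.2f}  reflectance={reflectance:.2f}  "
          f"geometric gain={geometric_gain(gain, reflectance):.2f}")
```

Specifying both numbers distinguishes, for instance, the hypothetical "high-gain grey" material (geometric gain 2.0) from a matte-white screen with a similar on-axis gain but a much lower geometric gain.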
Image brightness and contrast Apparent contrast in a projected image — the range of brightness — is dependent on the ambient light conditions, the luminous power of the projector and the size of the image being projected. A larger screen size means less luminance (luminous power per unit solid angle per unit area) and thus less contrast in the presence of ambient light. Some light will always be created in the room when an image is projected, increasing the ambient light level and thus contributing to the degradation of picture quality. This effect can be lessened by decorating the room with dark colours. The real-room situation is different from the contrast ratios advertised by projector manufacturers, who record the light levels with the projector on full black / full white, giving as high contrast ratios as possible. Manufacturers of home theater screens have attempted to resolve the issue of ambient light by introducing screen surfaces that direct more of the light back to the light source. The rationale behind this approach relies on having the image source placed near the audience, so that the audience will actually see the increased reflected light level on the screen. Highly reflective flat screens tend to suffer from hot spots, when part of the screen seems much brighter than the rest. This is a result of the high directionality (mirror-likeness) of such screens. Screens with high gain also have a narrower usable viewing angle, as the amount of reflected light rapidly decreases as the viewer moves away from the front of such a screen. For the same reason, these screens are also less vulnerable to ambient light arriving from the sides. Grey screens A relatively recent attempt at improving the perceived image quality is the introduction of grey screens, which are more capable of darker tones than their white counterparts. 
A matte grey screen would have no advantage over a matte white screen in terms of contrast; contemporary grey screens are rather designed to have a gain factor similar to those of matte white screens, but a darker appearance. A darker (grey) screen reflects less light, of course—both light from the projector and ambient light. This decreases the luminance (brightness) of both the projected image and ambient light, so while the light areas of the projected image are dimmer, the dark areas are darker; white is less bright, but intended black is closer to actual black. Many screen manufacturers thus appropriately call their grey screens "high-contrast" models. Although a projection screen cannot improve a projector's contrast level, the perceived contrast can be boosted. In an optimal viewing room, the projection screen is reflective, whereas the surroundings are not. The ambient light level is related to the overall reflectivity of the screen, as well as that of the surroundings. In cases where the area of the screen is large compared to that of the surroundings, the screen's contribution to the ambient light may dominate and the effect of the non-screen surfaces of the room may even be negligible. Some examples of this are planetariums and virtual-reality cubes featuring front-projection technology. Some planetariums with dome-shaped projection screens have thus opted to paint the dome interior in gray, in order to reduce the degrading effect of inter-reflections when images of the sun are displayed simultaneously with images of dimmer objects. Grey screens are designed to rely on powerful image sources that are able to produce adequate levels of luminosity so that the white areas of the image still appear as white, taking advantage of the non-linear perception of brightness in the human eye. People may perceive a wide range of luminosities as "white", as long as the visual clues present in the environment suggest such an interpretation. 
A grey screen may thus succeed almost as well in delivering a bright-looking image, or fail to do so in other circumstances. Compared to a white screen, a grey screen reflects less light to the room and less light from the room, making it increasingly effective in dealing with the light originating from the projector. Ambient light originating from other sources may reach the eye immediately after having reflected from the screen surface, giving no advantage over a white high-gain screen in terms of contrast ratio. The potential improvement from a grey screen may thus be best realized in a darkened room, where the only light is that of the projector. Partly fueled by popularity, grey screen technology has improved greatly in recent years. Grey screens are now available in various gain and grey-scale levels. Selectively reflective screens Certain screens are claimed to selectively reflect the narrow wavelengths of projector light while absorbing other wavelengths in the optical spectrum. Sony makes a screen that appears grey in normal room light, and is intended to reduce the effect of ambient light. This is purported to work by preferentially absorbing ambient light of colors not used by the projector, while preferentially reflecting the colors of red, green and blue light the projector uses. A true color-selective screen has not been substantiated. A contrast-enhancing screen has been introduced by Dai Nippon Printing (DNP) and Screen Innovations that is based on thin layers of black louvers rather than wavelength-selective reflection properties. Screens as an optical element In an optimally configured system, projection screen surface and the real image plane are made to coincide. From an optical point of view, a screen is not needed for the image to form; screens are rather used to make an image visible. 
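The effect of ambient light on apparent contrast, discussed in the sections above, can be sketched with a toy model in which the luminance seen on a matte screen is the screen's reflectance times the sum of projector and ambient illuminance. The numbers are illustrative, not measurements of any real projector or screen:

```python
def contrast_ratio(reflectance, proj_white, proj_black, ambient):
    """On/off contrast for a matte screen: luminance = reflectance * illuminance."""
    white = reflectance * (proj_white + ambient)
    black = reflectance * (proj_black + ambient)
    return white / black

PROJ_WHITE, PROJ_BLACK = 1000.0, 1.0   # projector illuminance on screen (arbitrary units)

# In a dark room the native 1000:1 ratio survives; a little ambient light
# washes it out dramatically.
for ambient in (0.0, 10.0, 100.0):
    ratio = contrast_ratio(0.95, PROJ_WHITE, PROJ_BLACK, ambient)
    print(f"ambient={ambient:6.1f}  contrast={ratio:7.1f}:1")

# A neutral matte grey screen scales both terms equally, so against
# externally produced ambient light the ratio is unchanged; its benefit
# appears only when the screen itself dominates the room's ambient light.
grey  = contrast_ratio(0.50, PROJ_WHITE, PROJ_BLACK, 10.0)
white = contrast_ratio(0.95, PROJ_WHITE, PROJ_BLACK, 10.0)
assert abs(grey - white) < 1e-9
```

This matches the matte-screen observation above: lowering reflectance alone does not raise the contrast ratio against external ambient light, while any ambient light at all sharply compresses the achievable contrast.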
See also Cathode-ray tube Contrast ratio Holographic screen Home cinema Inflatable movie screen Rear-projection television Video projector References External links Display technology Film and video technology Home video
Projection screen
[ "Engineering" ]
3,005
[ "Electronic engineering", "Display technology" ]
2,239,104
https://en.wikipedia.org/wiki/Islands%20of%20automation
Islands of automation was a popular term used largely during the 1980s to describe how rapidly developing automation systems were at first unable to communicate easily with each other. Industrial communication protocols, network technologies, and system integration helped to improve this situation; examples of such enabling technologies include Modbus, Fieldbus, and Ethernet. The term is more recently used by automation specialists to describe a discrete and fully enclosed automated system applied in a largely manual environment. In today's interconnected world it is uncommon for automated systems to be fully stand-alone. Therefore, the old usage is defunct and the new usage is more appropriate for companies that wish to automate in a limited fashion. References Impact of automation System integration
Islands of automation
[ "Technology", "Engineering" ]
144
[ "Systems engineering", "Computer network stubs", "Impact of automation", "Automation", "Computing stubs", "System integration" ]
2,239,109
https://en.wikipedia.org/wiki/An%20Inquiry%20Concerning%20the%20Source%20of%20the%20Heat%20Which%20Is%20Excited%20by%20Friction
"An Inquiry Concerning the Source of the Heat Which Is Excited by Friction" is a scientific paper by Benjamin Thompson, Count Rumford, which was published in the Philosophical Transactions of the Royal Society in 1798. The paper provided a substantial challenge to established theories of heat, and began the 19th century revolution in thermodynamics. Background Rumford was an opponent of the caloric theory of heat which held that heat is a fluid that could be neither created nor destroyed. He had further developed the view that all gases and liquids are absolute non-conductors of heat. His views were out of step with the accepted science of the time and the latter theory had particularly been attacked by John Dalton and John Leslie. Rumford was heavily influenced by the argument from design and it is likely that he wished to grant water a privileged and providential status in the regulation of human life. Though Rumford was to come to associate heat with motion, there is no evidence that he was committed to the kinetic theory or the principle of vis viva. In his 1798 paper, Rumford acknowledged that he had predecessors in the notion that heat was a form of motion. Those predecessors included Francis Bacon, Robert Boyle, Robert Hooke, John Locke, and Henry Cavendish. Experiments Rumford had observed the frictional heat generated by boring out cannon barrels at the arsenal in Munich. At that time, cannons were cast at the foundry with an extra section of metal forward of what would become the muzzle, and this section was removed and discarded later in the manufacturing process. Rumford took an unfinished cannon and modified this section to allow it to be enclosed by a watertight box while a blunted boring tool was used on it. He showed that water in this box could be boiled within roughly two and a half hours, and that the supply of frictional heat was seemingly inexhaustible. 
Rumford confirmed that no physical change had taken place in the material of the cannon by showing that the specific heats of the material machined away and of that remaining were the same. Rumford also argued that the seemingly indefinite generation of heat was incompatible with the caloric theory. He contended that the only thing communicated to the barrel was motion. Rumford made no attempt to further quantify the heat generated or to measure the mechanical equivalent of heat. Reception Most established scientists, such as William Henry, as well as Thomas Thomson, believed that there was enough uncertainty in the caloric theory to allow its adaptation to account for the new results. It had certainly proved robust and adaptable up to that time. Furthermore, Thomson, Jöns Jakob Berzelius, and Antoine César Becquerel observed that electricity could be indefinitely generated by friction. No educated scientist of the time was willing to hold that electricity was not a fluid. Ultimately, Rumford's claim of an "inexhaustible" supply of heat was a reckless extrapolation from the study. Charles Haldat made some penetrating criticisms of the reproducibility of Rumford's results, and it is possible to see the whole experiment as somewhat tendentious. However, the experiment inspired the work of James Prescott Joule in the 1840s. Joule's more exact measurements were pivotal in establishing the kinetic theory at the expense of caloric. Notes References Citations Sources Bibliography Thermodynamics literature 1798 documents 1798 in science Physics papers Works originally published in Philosophical Transactions of the Royal Society
An Inquiry Concerning the Source of the Heat Which Is Excited by Friction
[ "Physics", "Chemistry" ]
688
[ "Thermodynamics literature", "Thermodynamics" ]
2,239,113
https://en.wikipedia.org/wiki/Deep%20inelastic%20scattering
In particle physics, deep inelastic scattering is the name given to a process used to probe the insides of hadrons (particularly the baryons, such as protons and neutrons), using electrons, muons and neutrinos. It was first attempted in the 1960s and 1970s and provided the first convincing evidence of the reality of quarks, which up until that point had been considered by many to be a purely mathematical phenomenon. It is an extension of Rutherford scattering to much higher energies of the scattering particle and thus to much finer resolution of the components of the nuclei. Henry Way Kendall, Jerome Isaac Friedman and Richard E. Taylor were joint recipients of the 1990 Nobel Prize in Physics "for their pioneering investigations concerning deep inelastic scattering of electrons on protons and bound neutrons, which have been of essential importance for the development of the quark model in particle physics." Description To explain each part of the terminology, "scattering" refers to the deflection of leptons (electrons, muons, etc.) off hadrons. Measuring the angles of deflection gives information about the nature of the process. "Inelastic" means that the target absorbs some kinetic energy. In fact, at the very high energies of the leptons used, the target is "shattered" and emits many new particles. These particles are hadrons and, to oversimplify greatly, the process is interpreted as a constituent quark of the target being "knocked out" of the target hadron, and due to quark confinement, the quarks are not actually observed but instead produce the observable particles by hadronization. "Deep" refers to the high energy of the lepton, which gives it a very short wavelength and hence the ability to probe distances that are small compared with the size of the target hadron, so it can probe "deep inside" the hadron. 
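The "deep" criterion can be made concrete with the de Broglie relation: for an ultra-relativistic lepton the wavelength is roughly hc/E. A back-of-the-envelope sketch, with beam energies chosen for illustration (20 GeV is roughly the scale of the SLAC electron beams):

```python
HC_MEV_FM = 1239.84        # Planck constant times c, in MeV*fm
PROTON_RADIUS_FM = 0.84    # approximate proton charge radius, in femtometres

def wavelength_fm(energy_gev):
    """De Broglie wavelength (fm) of an ultra-relativistic lepton, lambda ~ hc/E."""
    return HC_MEV_FM / (energy_gev * 1000.0)

for energy in (0.02, 1.0, 20.0):     # 20 MeV, 1 GeV, 20 GeV
    lam = wavelength_fm(energy)
    verdict = "resolves sub-proton structure" if lam < PROTON_RADIUS_FM else "too coarse"
    print(f"E = {energy:5.2f} GeV  ->  lambda = {lam:8.4f} fm  ({verdict})")
```

Only at energies of several GeV does the wavelength drop well below the proton radius, which is what makes the scattering "deep".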
Also, note that in the perturbative approximation it is a high-energy virtual photon emitted from the lepton and absorbed by the target hadron which transfers energy to one of its constituent quarks, as in the adjacent diagram. Povh and Rosina pointed out that the term “deep inelastic scattering against nucleons” was coined when the quark substructure of nucleons was unknown. They prefer the term “quasielastic lepton-quark scattering”. History The Standard Model of physics, in particular the work of Murray Gell-Mann in the 1960s, had been successful in uniting much of the previously disparate concepts in particle physics into one, relatively straightforward, scheme. In essence, there were three types of particles: The leptons, which were low-mass particles such as electrons, neutrinos and their antiparticles. They have integer electric charge. The gauge bosons, which were particles that exchange forces. These ranged from the massless, easy-to-detect photon (the carrier of the electro-magnetic force) to the exotic (though still massless) gluons that carry the strong nuclear force. The quarks, which were massive particles that carried fractional electric charges. They are the "building blocks" of the hadrons. They are also the only particles to be affected by the strong interaction. The leptons had been detected since 1897, when J. J. Thomson had shown that electric current is a flow of electrons. Some bosons were being routinely detected, although the W+, W− and Z0 particles of the electroweak force were only categorically seen in the early 1980s, and gluons were only firmly pinned down at DESY in Hamburg at about the same time. Quarks, however, were still elusive. Drawing on Rutherford's groundbreaking experiments in the early years of the 20th century, ideas for detecting quarks were formulated. Rutherford had proven that atoms had a small, massive, charged nucleus at their centre by firing alpha particles at atoms of gold. 
Most had gone through with little or no deviation, but a few were deflected through large angles or came right back. This suggested that atoms had internal structure and a lot of empty space. In order to probe the interiors of baryons, a small, penetrating and easily produced particle needed to be used. Electrons were ideal for the role, as they are abundant and easily accelerated to high energies due to their electric charge. In 1968, at the Stanford Linear Accelerator Center (SLAC), electrons were fired at protons and neutrons in atomic nuclei. Later experiments were conducted with muons and neutrinos, but the same principles apply. The collision absorbs some kinetic energy, and as such it is inelastic. This is a contrast to Rutherford scattering, which is elastic: no loss of kinetic energy. The electron emerges from the nucleus, and its trajectory and velocity can be detected. Analysis of the results led to the conclusion that hadrons do indeed have internal structure. The experiments were important because not only did they confirm the physical reality of quarks, but also proved again that the Standard Model was the correct avenue of research for particle physicists to pursue. See also Semi-inclusive deep inelastic scattering References Further reading Scattering Experimental particle physics 1960s in science
Deep inelastic scattering
[ "Physics", "Chemistry", "Materials_science" ]
1,082
[ "Nuclear physics", "Scattering", "Experimental physics", "Particle physics", "Condensed matter physics", "Experimental particle physics" ]
2,239,157
https://en.wikipedia.org/wiki/Wheel%20%28computing%29
In Unix operating systems, the term wheel refers to a user account with a wheel bit, a system setting that provides additional special system privileges that empower a user to execute restricted commands that ordinary user accounts cannot access. Origins The term wheel was first applied to computer user privilege levels after the introduction of the TENEX operating system, later distributed under the name TOPS-20 in the 1960s and early 1970s. The term was derived from the slang phrase big wheel, referring to a person with great power or influence. In the 1980s, the term was imported into Unix culture due to the migration of operating system developers and users from TENEX/TOPS-20 to Unix. Wheel group Modern Unix systems generally use user groups as a security protocol to control access privileges. The wheel group is a special user group used on some Unix systems, mostly BSD systems, to control access to the su or sudo command, which allows a user to masquerade as another user (usually the super user). Debian and its derivatives create a group called sudo with purpose similar to that of a wheel group. Wheel war The phrase wheel war, which originated at Stanford University, is a term used in computer culture, first documented in the 1983 version of The Jargon File. A 'wheel war' was a user conflict in a multi-user (see also: multiseat) computer system, in which students with administrative privileges would attempt to lock each other out of a university's computer system, sometimes causing unintentional harm to other users. See also Superuser References Unix Computer jargon
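On systems that use PAM, the wheel restriction on su is typically enforced by the pam_wheel module. A representative fragment of /etc/pam.d/su might look like the following (illustrative only; the exact stack and file layout vary by operating system and distribution):

```
# /etc/pam.d/su (sketch; real files contain additional lines)
# Permit su only for members of the wheel group:
auth    required    pam_wheel.so use_uid
auth    required    pam_unix.so
```

On BSD systems, by contrast, the equivalent restriction is traditionally built into su itself rather than configured through PAM.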
Wheel (computing)
[ "Technology" ]
320
[ "Computing terminology", "Computer jargon", "Natural language and computing" ]
2,239,159
https://en.wikipedia.org/wiki/Mechanical%20equivalent%20of%20heat
In the history of science, the mechanical equivalent of heat states that motion and heat are mutually interchangeable and that in every case, a given amount of work would generate the same amount of heat, provided the work done is totally converted to heat energy. The mechanical equivalent of heat was a concept that had an important part in the development and acceptance of the conservation of energy and the establishment of the science of thermodynamics in the 19th century. Its independent and simultaneous discovery by James Prescott Joule and by Julius Robert von Mayer led to a priority dispute. History and priority dispute Benjamin Thompson, Count Rumford, had observed the frictional heat generated by boring cannon at the arsenal in Munich, Bavaria, circa 1797. Rumford immersed a cannon barrel in water and arranged for a specially blunted boring tool. He showed that the water could be boiled within roughly two and a half hours and that the supply of frictional heat was seemingly inexhaustible. Based on his experiments, he published "An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction" (1798), Philosophical Transactions of the Royal Society, p. 102. This scientific paper provided a substantial challenge to established theories of heat and began the 19th century revolution in thermodynamics. The experiment inspired the work of James Prescott Joule in the 1840s. Joule's more exact measurements on equivalence were pivotal in establishing the kinetic theory at the expense of the caloric theory. The idea that heat and work are equivalent was also proposed by Julius Robert von Mayer in 1842 in the leading German physics journal and independently by James Prescott Joule in 1843, in the leading British physics journal. Similar work was carried out by Ludwig A. Colding in 1840–1843, though Colding's work was little known outside his native Denmark. 
A collaboration between Nicolas Clément and Sadi Carnot in the 1820s had some related thinking along the same lines. In 1845, Joule published a paper entitled "The Mechanical Equivalent of Heat", in which he specified a numerical value for the amount of mechanical work required to produce a unit of heat. In particular, Joule had experimented on the amount of mechanical work generated by friction needed to raise the temperature of a pound of water by one degree Fahrenheit, and found a consistent value of 772.24 foot-pounds force (4.1550 J·cal−1). Joule contended that motion and heat were mutually interchangeable and that, in every case, a given amount of work would generate the same amount of heat. Von Mayer also published a numerical value for the mechanical equivalent of heat in 1845, but his experimental method was not as convincing. Though a standardised value of 4.1860 J·cal−1 was established in the early 20th century, in the 1920s it was ultimately realised that the constant is simply the specific heat of water, a quantity that varies with temperature between the values of 4.17 and 4.22 J·g−1·°C−1. The change in unit was the result of the demise of the calorie as a unit in physics and chemistry. Both von Mayer and Joule met with initial neglect and resistance despite having published in leading European physics journals, but by 1847 many leading scientists of the day were paying attention. Hermann Helmholtz in 1847 published what is considered a definitive declaration of the conservation of energy. Helmholtz had learned from reading Joule's publications, though Helmholtz eventually came around to crediting both Joule and von Mayer for priority. Also in 1847, Joule made a well-attended presentation at the annual meeting of the British Association for the Advancement of Science. Among those in attendance was William Thomson. Thomson was intrigued but initially skeptical. 
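The equivalence quoted for Joule's experiments can be checked by converting foot-pounds force per pound of water per degree Fahrenheit into joules per gram per degree Celsius. This is a sketch using present-day conversion factors, not Joule's own arithmetic; note that 4.1550 J·cal−1 corresponds to a figure of 772.24 foot-pounds force:

```python
FT_LBF_TO_J  = 1.3558179    # joules per foot-pound force
LB_TO_G      = 453.59237    # grams per avoirdupois pound
DEGF_IN_DEGC = 5.0 / 9.0    # one Fahrenheit degree expressed in Celsius degrees

def ft_lbf_to_j_per_cal(ft_lbf):
    """Work to heat 1 lb of water by 1 degF, re-expressed as J per (1 g x 1 degC)."""
    joules = ft_lbf * FT_LBF_TO_J
    gram_degrees = LB_TO_G * DEGF_IN_DEGC
    return joules / gram_degrees

print(f"{ft_lbf_to_j_per_cal(772.24):.4f} J/cal")   # close to 4.1550
```

The same conversion applied to the modern figure of about 778.2 foot-pounds force per British thermal unit gives roughly 4.187 J·cal−1, close to the standardised early-20th-century value mentioned above.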
Over the next two years, Thomson became increasingly convinced of Joule's theory, finally admitting his conviction in print in 1851, simultaneously crediting von Mayer. Thomson collaborated with Joule, mainly by correspondence, Joule conducting experiments and Thomson analysing the results and suggesting further experiments. The collaboration lasted from 1852 to 1856. Its published results did much to bring about general acceptance of Joule's work and the kinetic theory. However, in 1848, von Mayer had first had sight of Joule's papers and wrote to the French Académie des Sciences to assert priority. His letter was published in the Comptes Rendus, and Joule was quick to react. Thomson's close relationship with Joule meant that he too was dragged into the controversy. The pair planned that Joule would admit von Mayer's priority for the idea of the mechanical equivalent, but would claim that experimental verification rested with Joule. Thomson's associates, co-workers and relatives, such as William John Macquorn Rankine, James Thomson, James Clerk Maxwell, and Peter Guthrie Tait, joined to champion Joule's cause. However, in 1862, John Tyndall, in one of his many excursions into popular science and many public disputes with Thomson and his circle, gave a lecture at the Royal Institution entitled On Force in which he credited von Mayer with conceiving and measuring the mechanical equivalent of heat. Thomson and Tait were angered, and an undignified public exchange of correspondence took place in the pages of the Philosophical Magazine and the rather more popular Good Words. Tait even resorted to championing Colding's cause in an attempt to undermine von Mayer. Though Tyndall again pressed von Mayer's cause in Heat: A Mode of Motion (1863), with the publication of Sir Henry Enfield Roscoe's Edinburgh Review article Thermo-Dynamics in January 1864, Joule's reputation was sealed while that of von Mayer entered a period of obscurity. Notes References Further reading Foucault, L. 
(1854) “Equivalent mécanique de la chaleur. M. Mayer, M. Joule. Chaleur spécifique des gaz sous volume constant. M. Victor Regnault”, Journal des débats politiques et littéraires, Thursday 8 June, pp. 154–5 Smith, C. (2004) "Joule, James Prescott (1818-1889)", Oxford Dictionary of National Biography, Oxford University Press, <http://www.oxforddnb.com/view/article/15139, accessed 27 July 2005> (subscription required) Zemansky, M.W. (1968) Heat and Thermodynamics: An Intermediate Textbook, McGraw-Hill, pp. 86–87 External links History of thermodynamics History of physics Discovery and invention controversies Scientific rivalry
Mechanical equivalent of heat
[ "Physics", "Chemistry" ]
1,343
[ "History of thermodynamics", "Thermodynamics" ]
2,239,191
https://en.wikipedia.org/wiki/3Server
The 3Com 3Server was a headless dedicated network-attached storage machine designed to run 3Com local area network (LAN) server software, 3+Share. Background The companion product was the diskless 3Station network workstation, a dedicated client machine. However, 3Servers could also network with standard PC-compatibles and were commonly used in this role. Having no display other than a small one-line LCD and no keyboard or mouse interface, 3Servers were controlled via another PC on the network, which allowed console access to the internal server software. The original 3Server was an x86 computer based on an Intel 80186 CPU that was not compatible with ordinary PCs, running a special version of MS-DOS and 3Com's proprietary 3+Share network server software. This was a multitasking network server stack that ran on top of single-tasking DOS. Internally, it had a network stack, file and print server modules, disk caching, user handling and more, all running simultaneously inside the DOS memory space. Because they were not limited by the PC memory map, 3Servers could support 1 megabyte of flat memory, breaking the PC's 640 kB barrier. This was a large amount of RAM for the time. The original 3Server shipped in 1985 with of RAM and a single hard disk. It had slots for adding six additional drives, making it one of the first network attached storage (NAS) arrays. It supported both Ethernet (then branded EtherSeries) and AppleTalk and was quick to add IBM Token Ring as well. The 3Server/70, introduced in July 1985, doubled the storage space to . The 3Server/500 was an 80386-based version introduced in the late 1980s, with the 80486-based 3Server/600 introduced in 1991. The last models, the 3Server386 family, ran OS/2 1.3 as the basic operating system, using 3+Open, a variant of OS/2 LAN Manager. 
3Com's version was an enhancement of the basic LAN Manager package, which was also sold by Microsoft and IBM and on other operating systems; for example, running on VAX/VMS it was the basis of DEC Pathworks. Decline In February 1991, 3Com announced that it would hand over all rights to LAN Manager, 3+Open, its Macintosh and NetWare integration, and related software to Microsoft. The company soon exited the network server business as well. References Server hardware
3Server
[ "Technology" ]
515
[ "Computing stubs", "Computer hardware stubs" ]
2,239,197
https://en.wikipedia.org/wiki/Two-factor%20theory%20of%20emotion
The two-factor theory of emotion posits that when an emotion is felt, a physiological arousal occurs and the person uses the immediate environment to search for emotional cues to label the physiological arousal. The theory was put forth by researchers Stanley Schachter and Jerome E. Singer in a 1962 article. According to the theory, emotions may be misinterpreted based on the body's physiological state. Empirical support In 1962, Stanley Schachter and Jerome E. Singer performed a study that tested how people use cues in their environment to explain physiological changes. They had three hypotheses going into the experiment. First, that if a person experiences a state of arousal for which they have no immediate explanation, they will label this state and describe their feelings in terms of the cognitions available to them at the time. Second, that if a person experiences a state of arousal for which they have an appropriate explanation, then they will be unlikely to label their feelings in terms of the alternative cognitions available. Third, that if a person is put in a situation that in the past could have made them feel an emotion, they will react emotionally or experience emotions only if they are in a state of physiological arousal. Participants were told they were being injected with a new drug called "Suproxin" to test their eyesight. The participants were actually injected either with epinephrine (which causes an increase in blood pressure, heart rate, and breathing) or a placebo. There were four conditions that participants were randomly placed in: epinephrine informed (where participants were told they would feel effects similar to epinephrine), epinephrine ignorant (where participants were not told about side effects), epinephrine misinformed (where participants were told the wrong side effects), and a control group (where participants were injected with a placebo and not told about any side effects). 
After the injection, a confederate, acting either angry or euphoric, interacted with the students. The experimenters watched through a one-way mirror and rated the participants' state on a three-category scale. The participants were then given a questionnaire and their heart rate was checked. Participants in the epinephrine misinformed group experienced the highest euphoria, followed by the ignorant, placebo, and informed groups. In contrast, participants in the ignorant group experienced the most anger, followed by the placebo and informed groups. The results show that those participants who had no explanation for why their body felt as it did were more susceptible to the confederate, supporting the three hypotheses. Misattribution of arousal The misattribution of arousal study tested Schachter and Singer's two-factor theory of emotion. Psychologists Donald G. Dutton and Arthur P. Aron wanted to use a natural setting that would induce physiological arousal. In this experiment, they had male participants walk across two different styles of bridges. One bridge was a very scary (arousing) suspension bridge, which was very narrow and suspended above a deep ravine. The second bridge was much safer and more stable than the first. At the end of each bridge an attractive female experimenter met the participants. She gave the participants a questionnaire which included an ambiguous picture to describe, along with her number to call if they had any further questions. The idea of this study was to find which group of males was more likely to call the female experimenter and to measure the sexual content of the stories the men wrote after crossing one of the bridges. They found that the men who walked across the scary bridge were more likely to call the woman to follow up on the study, and that their stories had more sexual content. 
The two-factor theory would say that this is because they had transferred (misattributed) their arousal from fear or anxiety on the suspension bridge to higher levels of sexual feeling towards the female experimenter. Schachter & Wheeler In the Schachter & Wheeler (1962) study, subjects were injected with epinephrine, chlorpromazine, or a placebo (chlorpromazine is a neuroleptic, i.e., an antipsychotic). None of the subjects had any information about the injection. After receiving the injection, the subjects watched a short comical movie. While watching the movie, the subjects were monitored for signs of humor. After the movie, the subjects rated how funny it was and whether they had enjoyed it. The results showed that the epinephrine subjects demonstrated the most signs of humor. The placebo subjects demonstrated fewer reactions of humor, but more than the chlorpromazine subjects. Criticisms Criticism of the theory has come from attempted replications of the Schachter and Singer (1962) study. Marshall and Zimbardo (1979; see also Marshall, 1976) tried to replicate Schachter and Singer's euphoria conditions. Just as Schachter and Singer did, they injected subjects with epinephrine or a placebo, except that the administrator told the subjects that they would be experiencing non-arousal symptoms. The subjects were then put into four different conditions: one in which subjects injected with epinephrine were exposed to a neutral confederate, another in which they received the placebo and were told to expect arousal symptoms, and two conditions in which the dosage of epinephrine was determined by body weight rather than being fixed. The results showed that the euphoric confederate had little impact on the subjects and did not produce any more euphoria than the neutral confederate did. They concluded that subjects injected with epinephrine were not more susceptible to emotional manipulation than the non-aroused placebo subjects. 
Maslach (1979) designed a study to replicate and extend the Schachter and Singer study. Instead of injecting epinephrine, the administrators used hypnotic suggestion as the source of arousal. Subjects were either hypnotized or served as controls (analogous to the placebo condition in the Schachter and Singer study). Hypnotized subjects were given a suggestion to become aroused at the presentation of a cue and were instructed not to remember the source of this arousal. Right after the subjects had been hypnotized, a confederate began acting in either a euphoric or an angry manner. Later in the study the subjects were exposed to two more euphoric confederates. One confederate was to keep the subjects aware of the source of the arousal, while the other told the subjects to expect different arousal symptoms. The results, from both self-reports and observation, indicated that unexplained arousal produces negative states. Subjects still showed angry emotions regardless of the euphoric confederate. Maslach concluded that a lack of explanation for an arousal produces a negative emotion, which will evoke either anger or fear. However, Maslach noted a limitation: more negative emotion may have been self-reported simply because there are more terms referring to negative emotions than to positive ones. There are also criticisms of the two-factor theory that come from a theoretical standpoint. One of these is that the Schachter–Singer theory centers primarily on the autonomic nervous system and provides no account of the emotional process within the central nervous system beyond signaling the role of cognitive factors. This is important considering the heavy implication of certain brain centers in mediating emotional experience (e.g., fear and the amygdala). 
Gregorio Marañón also carried out early studies in the development of cognitive theories of emotion and should be recognized for his contributions to this concept. See also Cannon–Bard theory James–Lange theory Misattribution of arousal Notes References Izard, C. E. The face of emotion. New York: Appleton-Century-Crofts, 1971. Marshall, G. D. (1976). The affective consequences of "inadequately explained" physiological arousal. Unpublished doctoral dissertation, Stanford University. External links A Powerpoint presentation describing Schachter & Singer's experiment A Powerpoint presentation describing Dutton & Aron's experiment Emotion Psychological theories
Two-factor theory of emotion
[ "Biology" ]
1,693
[ "Emotion", "Behavior", "Human behavior" ]
2,239,212
https://en.wikipedia.org/wiki/Microsoft%20BizTalk%20Server
Microsoft BizTalk Server is an inter-organizational middleware system (IOMS) that automates business processes through the use of adapters tailored to communicate with the different software systems used in an enterprise. Created by Microsoft, it provides enterprise application integration, business process automation, business-to-business communication, message brokering and business activity monitoring. BizTalk Server was previously positioned as both an application server and an . Microsoft changed this strategy when they released the AppFabric server, which became their official application server. Research firm Gartner considers Microsoft's offering one of their 'Leaders' for Application Integration Suites. The latest release, BizTalk Server 2020, was released on 15 January 2020. In a common scenario, BizTalk integrates applications and manages automated business processes by exchanging business documents such as purchase orders and invoices between disparate applications, within or across organizational boundaries. Development for BizTalk Server is done through Microsoft Visual Studio. A developer can create transformation maps transforming one message type to another. For example, an XML file can be transformed to SAP IDocs. Messages inside BizTalk are implemented as XML documents and defined with XML schemas in the XSD standard. Maps are implemented with the XSLT standard. Orchestrations are implemented with the WS-BPEL-compatible process language xLANG. Schemas, maps, pipelines and orchestrations are created visually using graphical tools within Microsoft Visual Studio. Additional functionality can be delivered by .NET assemblies that can be called from existing modules—including, for instance, orchestrations, maps, pipelines and business rules. 
Version history Starting in 2000, the following versions were released:
2000-12-01 BizTalk Server 2000
2002-02-04 BizTalk Server 2002
2004-03-02 BizTalk Server 2004 (first version to run on Microsoft .NET 1.0)
2006-03-27 BizTalk Server 2006 (first version to run on Microsoft .NET 2.0)
2007-10-02 BizTalk Server 2006 R2 (first version to utilize the Windows Communication Foundation (WCF) via a native adapter)
2009-04-27 BizTalk Server 2009 (first version to work with Visual Studio 2008)
2010-10-01 BizTalk Server 2010 (first version to work with Visual Studio 2010 and Microsoft .NET 4.0)
2013-03-21 BizTalk Server 2013 (first version to work with Visual Studio 2012 and Microsoft .NET 4.5)
2014-06-23 BizTalk Server 2013 R2 (first version to work with Visual Studio 2013 and Microsoft .NET 4.5.1)
2016-09-30 BizTalk Server 2016
2017-04-26 BizTalk Server 2016 Feature Pack 1 (Application Insights and Power BI integration; Swagger-compatible REST management APIs)
2017-11-21 BizTalk Server 2016 Feature Pack 2 (Azure integration)
2018-06-26 BizTalk Server 2016 Feature Pack 3 (Office 365 integration)
2020-01-15 BizTalk Server 2020 (first version to work with Visual Studio 2019 and Microsoft .NET 4.7)
Features The following is an incomplete list of the technical features in BizTalk Server: The use of adapters to simplify integration with line-of-business (LOB) applications (Siebel, SAP, IFS Applications, JD Edwards, Oracle, Microsoft Dynamics CRM), databases (Microsoft SQL Server, Oracle Database and IBM Db2) and other technologies (TIBCO and Java EE). Accelerators offer support for enterprise standards like RosettaNet, HL7, HIPAA and SWIFT. Business rules engine (BRE); this is a Rete algorithm rule engine. Business activity monitoring (BAM), which provides a dashboard with an aggregated (PivotTable) view of how business processes are doing and how messages are processed. 
A unified administration console for deployment, monitoring and operations of solutions in a BizTalk Server environment. Built-in electronic data interchange (EDI) functionality supporting X12 and EDIFACT, as of BizTalk 2006 R2. Ability to do graphical modelling of business processes in Visual Studio, model documents with XML schemas, graphically map (with the assistance of functoids) between different schemas, and build pipelines to decrypt, verify and parse messages as they enter or exit the system via adapters. Users can automate business management processes via orchestrations. BizTalk integrates with other Microsoft products like Microsoft Dynamics CRM, Microsoft SQL Server, and SharePoint to allow interaction with a user participating in a workflow process. Extensive support for web services (consuming and exposing). RFID support, as of BizTalk 2006 R2 (deprecated in the 2016 release). Support for Application Insights, as of BizTalk Server 2016 Feature Pack 1. Automatic deployment through Visual Studio Team Services, as of BizTalk Server 2016 Feature Pack 1. Exposed management REST APIs with full Swagger support, as of BizTalk Server 2016 Feature Pack 1. Exposed operational data with Power BI support, as of BizTalk Server 2016 Feature Pack 1. Human-centric processes cannot be implemented directly with BizTalk Server and need additional applications like Microsoft SharePoint Server. Architecture The BizTalk Server runtime is built on a publish/subscribe architecture, sometimes called "content-based publish/subscribe". Messages are published into BizTalk, transformed to the desired format, and then routed to one or more subscribers. BizTalk makes processing safe by serialization (called "dehydration" in BizTalk's terminology) – placing messages into a database while waiting for external events, thus preventing data loss. This architecture binds BizTalk with Microsoft SQL Server. Processing flow can be tracked by administrators using an Administration Console. 
BizTalk supports transaction flow end to end, from one party to another. BizTalk orchestrations also implement long-running transactions. Adapters BizTalk uses adapters for communications with different protocols, message formats, and specific software products. Some of the adapters are: electronic data interchange, file, HTTP, SFTP, FTP, SMTP, POP3, SOAP, SQL, MSMQ, MLLP, Azure Logic App, Azure API Management, Microsoft SharePoint Server, IBM mainframe zSeries (CICS and IMS) and midrange IBM i (previously AS/400) systems, IBM Db2, and IBM WebSphere MQ adapters. The WCF adapter set was added with 2006 R2. It includes the WCF-WSHttp, WCF-BasicHttp, WCF-NetTcp, WCF-NetMsmq, WCF-NetNamedPipe, WCF-Custom and WCF-CustomIsolated adapters. Microsoft also ships a BizTalk Adapter Pack that includes WCF-based adapters for LOB systems. Currently, this includes adapters for SAP and Oracle Database, Oracle E-Business Suite, Microsoft SQL Server, MySQL, PeopleSoft Enterprise and Siebel Systems. Additional adapters (for Active Directory, for example) are available from third-party Microsoft BizTalk core partners. References External links 2000 software Enterprise application integration Message-oriented middleware Microsoft server software Middleware Proprietary software Service-oriented (business computing)
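The "content-based publish/subscribe" pattern described above can be illustrated with a minimal sketch. This is not BizTalk's actual API; the `MessageBox` class, the message dictionaries, and the transform shown are hypothetical stand-ins for BizTalk's MessageBox database, message context properties, and maps.

```python
# Illustrative content-based publish/subscribe with a transform step.
# Subscribers register a predicate over message properties; each
# published message is transformed and delivered to every matching
# subscriber, mirroring the publish -> transform -> route flow above.

class MessageBox:
    def __init__(self):
        self.subscriptions = []  # list of (predicate, transform, handler)

    def subscribe(self, predicate, transform, handler):
        self.subscriptions.append((predicate, transform, handler))

    def publish(self, message):
        """Route a message to all matching subscribers; return the count."""
        delivered = 0
        for predicate, transform, handler in self.subscriptions:
            if predicate(message):
                handler(transform(message))
                delivered += 1
        return delivered

box = MessageBox()
received = []
box.subscribe(
    predicate=lambda m: m.get("type") == "PurchaseOrder",
    transform=lambda m: {**m, "format": "IDoc"},  # stand-in for an XML -> SAP IDoc map
    handler=received.append,
)
box.publish({"type": "PurchaseOrder", "id": 42})  # matches, delivered
box.publish({"type": "Invoice", "id": 43})        # no matching subscription
```

In the real product the subscription predicates are evaluated against promoted message-context properties stored in SQL Server, which is why the architecture binds BizTalk to Microsoft SQL Server.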
Microsoft BizTalk Server
[ "Technology", "Engineering" ]
1,537
[ "Software engineering", "Middleware", "IT infrastructure" ]
2,239,396
https://en.wikipedia.org/wiki/Smartwrap
Smartwrap is an ultra-thin polymer-based material made by James Timberlake and Stephen Kieran of the Philadelphia architecture firm KieranTimberlake. The compound consists of a substrate and printed and laminated layers that have been roll-coated into a single film. The resulting film has the ability to alter its look and color, supply light and electricity, provide shelter, and regulate internal temperature. References Technical fabrics
Smartwrap
[ "Physics" ]
86
[ "Materials stubs", "Materials", "Matter" ]
2,239,442
https://en.wikipedia.org/wiki/IEC%2060446
The international standard IEC 60446 Basic and safety principles for man-machine interface, marking and identification – Identification of equipment terminals, conductor terminations and conductors was published by the International Electrotechnical Commission (IEC) and defined basic safety principles for identifying electrical conductors by colours or numerals, for example in electricity distribution wiring. The standard has been withdrawn; the fourth edition (IEC 60446:2007) was merged in 2010 into the fifth edition of IEC 60445, along with the fourth edition of that standard, IEC 60445:2006. Permitted colours The standard permits the following colours for identifying conductors: black, brown, red, orange, yellow, green, blue, violet, grey, white, pink and turquoise. The colours green and yellow on their own are only permitted where confusion with the colouring of the green/yellow protective conductor is unlikely. Combinations of the above colours are permitted, but green and yellow should not be used in any of these combinations other than as green/yellow for the protective conductor. Use of colours Neutral or mid-point conductor If a circuit includes a neutral or midpoint conductor, then it should be identified by a blue colour (preferably light blue). Light blue is the colour used to identify intrinsically safe conductors, and must not be used for any other type of conductor. AC phase conductors The preferred colours for AC phase conductors are: L1: brown; L2: black; L3: grey. For a single AC phase: brown. Protective conductor The colour combination green/yellow is always and exclusively used to identify the protective conductor. On any 15 mm length of the conductor, one of these two colours should cover between 30% and 70% of the area and the other the remaining area. 
Protective earth and neutral (PEN) conductor Insulated PEN conductors (combined protective earth + neutral in TN–C systems) should be marked either green/yellow along their entire length with light blue markings at their ends, or light blue along their entire length with green/yellow markings at the ends. The cable must have a cross-sectional area of 16 mm² (5 AWG) or greater. United States, Canada and Japan The United States, Canada and Japan are mentioned in a note in the standard as using different colours: white or natural grey for the mid-wire or neutral conductor (instead of light blue), and green for the protective conductor (instead of green/yellow). United Kingdom British Standard BS 7671:2001 Amendment No 2:2004 adopted the IEC 60446 colours for fixed wiring in the United Kingdom, with the extension that grey can also be used for line conductors, such that three colours are available for three-phase installations. This extension is expected to be adopted across Europe and may even find its way into a future revision of IEC 60446. Marking Where conductors are in addition identified by letters and numbers: letters must come from the Latin character set, numbers must be written in Arabic numerals, the digits 6 and 9 must be underlined, and a few symbols such as + and − can be used. Green-and-yellow conductors must not be marked. Examples: L1, L2, L3, N, L+, L−, M, 35, 1 References External links IEC 60446:2007: Basic and safety principles for man-machine interface, marking and identification – Identification of conductors by colours or numerals, International Electrotechnical Commission, Geneva. IEC 60445:2010: Basic and safety principles for man-machine interface, marking and identification – Identification of equipment terminals, conductor terminations and conductors, International Electrotechnical Commission, Geneva. Paul Cook: Harmonised colours and alphanumeric marking. IEE Wiring Matters, Spring 2006. 60446 Electrical wiring
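The colour assignments above can be captured as a simple lookup. This is purely illustrative (the table and function names are hypothetical, not part of any standard's API), and real installations must of course follow the applicable wiring regulations, not a script.

```python
# Preferred conductor identification colours per the (withdrawn)
# IEC 60446 scheme summarised above: L1 brown, L2 black, L3 grey,
# neutral light blue, protective conductor green/yellow.

CONDUCTOR_COLOURS = {
    "L1": "brown",
    "L2": "black",
    "L3": "grey",
    "N": "light blue",
    "PE": "green/yellow",  # reserved exclusively for the protective conductor
}

def colour_for(terminal):
    """Return the preferred identification colour for a terminal label."""
    return CONDUCTOR_COLOURS[terminal]

# Sanity check of the rule above: green/yellow identifies nothing
# but the protective conductor.
assert all(c != "green/yellow"
           for t, c in CONDUCTOR_COLOURS.items() if t != "PE")
```

A dictionary keyed by terminal label mirrors how the standard pairs the alphanumeric marking scheme (L1, L2, L3, N) with the colour scheme.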
IEC 60446
[ "Physics", "Technology", "Engineering" ]
755
[ "Electrical systems", "Building engineering", "Computer standards", "IEC standards", "Physical systems", "Electrical engineering", "Electrical wiring" ]
2,239,579
https://en.wikipedia.org/wiki/Epsilonretrovirus
Epsilonretrovirus is a genus of the family Retroviridae whose members are waterborne viruses that infect fish. Species include Walleye dermal sarcoma virus and Walleye epidermal hyperplasia virus 1 and 2. References External links Epsilonretroviruses Virus genera
Epsilonretrovirus
[ "Biology" ]
60
[ "Virus stubs", "Viruses" ]
2,239,611
https://en.wikipedia.org/wiki/Alpharetrovirus
Alpharetrovirus is a genus of the family Retroviridae. It has type C morphology. Members can cause sarcomas, other tumors, and anaemia in wild and domestic birds and also affect rats. Species include the Rous sarcoma virus, avian leukosis virus, and avian myeloblastosis virus (AMV). Not all animals the virus can infect develop cancer. The tumor caused by the virus usually takes the form of lymphoma or leukemia and develops after a long latent period. The tumor cells derive from a single progenitor cell and are clonal. However, retroviral infection does not directly produce tumors; rather, insertion and recombination events lead to tumor cell formation. References External links ICTVdb Viralzone: Alpharetrovirus Alpharetroviruses Virus genera
Alpharetrovirus
[ "Biology" ]
180
[ "Virus stubs", "Viruses" ]
2,239,614
https://en.wikipedia.org/wiki/Foil%20%28fluid%20mechanics%29
A foil is a solid object with a shape such that when placed in a moving fluid at a suitable angle of attack the lift (force generated perpendicular to the fluid flow) is substantially larger than the drag (force generated parallel to the fluid flow). If the fluid is a gas, the foil is called an airfoil or aerofoil, and if the fluid is water the foil is called a hydrofoil. Physics of foils A foil generates lift primarily because of its shape and angle of attack. When oriented at a suitable angle, the foil deflects the oncoming fluid, resulting in a force on the foil in the direction opposite to the deflection. This force can be resolved into two components: lift and drag. This "turning" of the fluid in the vicinity of the foil creates curved streamlines, which results in lower pressure on one side and higher pressure on the other. This pressure difference is accompanied by a velocity difference, via Bernoulli's principle, so for foils generating lift the resulting flowfield about the foil has a higher average velocity on one surface than on the other. A more detailed description of the flowfield is given by the simplified Navier–Stokes equations, applicable when the fluid is incompressible. Since the effects of the compressibility of air at low speeds are negligible, these simplified equations can be used for airfoils as long as the airflow is substantially less than the speed of sound (up to about Mach 0.3). For hydrofoils at high speeds, of the order of according to Faltinsen, cavitation and ventilation – with air penetrating along the strut from the water surface to the foil – may occur. Both effects may have a substantial influence on the foil's lift. Basic design considerations The simplest type of foil is a flat plate. When set at an angle (the angle of attack) to the flow the plate will deflect the fluid passing over and under it, and this deflection will result in a lift force on the plate. 
However, while it does generate lift, it also generates a large amount of drag. Since even a flat plate can generate lift, a significant factor in foil design is the minimization of drag. An example of this is the rudder of a boat or aircraft. When designing a rudder, a key design factor is the minimization of drag in its neutral position, which is balanced with the need to produce sufficient lift with which to turn the craft at a reasonable rate. Other types of foils, both natural and man-made, seen both in air and water, have features that delay or control the onset of lift-induced drag, flow separation, and stall (see Bird flight, Fin, Airfoil, Placoid scale, Tubercle, Vortex generator, Canard (close-coupled), Blown flap, Leading edge slot, Leading edge slats), as well as wingtip vortices (see Winglet). Lifting ability in air and water The weight a foil can lift is proportional to its lift coefficient, the density of the fluid, the foil area and the square of its speed. The following shows the lifting ability of a flat plate with span 10 metres and area 10 square metres moving at a speed of 10 m/s at different altitudes and water depths. It uses the lift at an altitude of 11 km as a datum to show how the lift increases with decreasing altitude (increasing air density). It also shows the influence of ground effect and then the effect of the increase in density going from air to water.
height 11 km: lift 1.0 (datum for comparison)
height 5 m: 3.4
in ground effect: 4.1
water, surface-planing: 1,280
just submerged: 1,420
depth 5 m: 2,840
depth 10 km: 2,860
See also Aircraft Bilgeboard Boomerang Centerboard Chord (aircraft) Coanda effect Diving plane Drag coefficient Flipper (anatomy) Fluid dynamics Formula One car Keel (hydrodynamic) Lift coefficient NACA airfoil Propeller Sail (aerodynamics) Skeg Spoiler (automotive) Surfboard fin Wing References External links Lift from Flow Turning What is Lift? 
Bernoulli and Newton Effect of Shape on Lift Incorrect Lift Theory Penguin can fly thresher shark swim towards scuba divers Swimming with Wild Dolphins Bird Flight II Fluid dynamics Aerodynamics
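The proportionality stated in the article — lift proportional to lift coefficient, fluid density, foil area and speed squared — is the standard lift equation L = ½ρv²AC_L, and can be sketched numerically. The density values and the lift coefficient below are rough assumptions for illustration (sea-level air about 1.225 kg/m³, fresh water about 1000 kg/m³, C_L = 1); the article's tabulated ratios also include ground-effect and free-surface effects, so they differ from the density-only ratio computed here.

```python
# Sketch of the lift relation: L = 0.5 * rho * v^2 * A * C_L.
# Uses the article's flat-plate example: area 10 m^2, speed 10 m/s.

def lift(density, speed, area, lift_coefficient):
    """Lift force in newtons."""
    return 0.5 * density * speed**2 * area * lift_coefficient

AREA = 10.0   # m^2
SPEED = 10.0  # m/s
CL = 1.0      # illustrative lift coefficient (assumed)

air = lift(1.225, SPEED, AREA, CL)     # sea-level air: 612.5 N
water = lift(1000.0, SPEED, AREA, CL)  # submerged in water: 500,000 N
print(round(water / air))  # prints 816: density alone gives ~816x the lift
```

The roughly three-orders-of-magnitude jump from air to water in the article's table is thus driven almost entirely by the ~800-fold density increase.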
Foil (fluid mechanics)
[ "Chemistry", "Engineering" ]
886
[ "Chemical engineering", "Aerodynamics", "Aerospace engineering", "Piping", "Fluid dynamics" ]
2,239,651
https://en.wikipedia.org/wiki/Betaretrovirus
Betaretrovirus is a genus of the family Retroviridae. It has type B or type D morphology. Type B is common among a few exogenous, vertically transmitted, and endogenous viruses of mice; some primate and sheep viruses are type D. Examples are Mouse mammary tumor virus, enzootic nasal tumor virus (ENTV-1, ENTV-2), and simian retrovirus types 1, 2 and 3 (SRV-1, SRV-2, SRV-3). References External links Viralzone: Betaretrovirus Betaretroviruses Virus genera
Betaretrovirus
[ "Biology" ]
128
[ "Virus stubs", "Viruses" ]
2,239,740
https://en.wikipedia.org/wiki/Deltaretrovirus
Deltaretrovirus is a genus of the family Retroviridae. It consists of exogenous, horizontally transmitted viruses found in several groups of mammals. The ICTV lists under this genus the Bovine leukemia virus and three species of primate T-lymphotropic virus. The genus is known for its propensity to target immune cells and for its oncogenicity, evident in the names of the four named species. Infection is usually asymptomatic, but inflammation and cancer can develop over time. Classification Four species are recognized by the ICTV as of 2023: Bovine leukemia virus, Primate T-lymphotropic virus 1, Primate T-lymphotropic virus 2 and Primate T-lymphotropic virus 3. Two additional PTLVs are known but not recognized: HTLV-4 (South Cameroon, 2005) and STLV-5 (Mac B43 strain, a highly divergent PTLV-1). In addition, eight endogenous retroviruses identified as deltaretroviruses are known as of 2019. Two of these were complete enough to show ORFs; the rest show only long terminal repeats. Hosts Known exogenous deltaretroviruses infect cattle and primates. The two complete endogenous ones were found in bats and dolphins; the others in Solenodon, mongoose, and fossa. These endogenous examples fill in the large gap in the host range. Clinical relevance References External links Viralzone: Deltaretrovirus Deltaretroviruses Virus genera
Deltaretrovirus
[ "Biology" ]
324
[ "Virus stubs", "Viruses" ]
2,239,772
https://en.wikipedia.org/wiki/Spumaretrovirinae
Spumaretrovirinae, commonly called spumaviruses (from Latin spuma, "foam") or foamy viruses, is a subfamily of the family Retroviridae. Spumaviruses are exogenous viruses that have a specific morphology with prominent surface spikes. The virions contain significant amounts of double-stranded full-length DNA, and assembly is rather unusual in these viruses. Spumaviruses are unlike most enveloped viruses in that the envelope membrane is acquired by budding through the endoplasmic reticulum instead of the cytoplasmic membrane. Some spumaviruses, however, including the equine foamy virus (EFV), bud from the cytoplasmic membrane. Examples of these viruses are the simian foamy virus and the human foamy virus. While spumaviruses form characteristic large vacuoles in their host cells in vitro, there is no disease association in vivo. References Further reading External links Viralzone: "Spumavirus" Spumaviruses
Spumaretrovirinae
[ "Biology" ]
222
[ "Virus stubs", "Viruses" ]