**ROSA26**
ROSA26:
ROSA26 is a locus used for constitutive, ubiquitous gene expression in mice. It was first isolated in 1991 in a gene-trap mutagenesis screen of embryonic stem cells (ESCs). Over 130 knock-in lines have been created based on the ROSA26 locus. The human homolog of the ROSA26 locus has been identified. ROSA stands for Reverse Orientation Splice Acceptor, named after the retroviral gene-trap vector used in the screen. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Area codes 809, 829, and 849**
Area codes 809, 829, and 849:
Area codes 809, 829, and 849 are telephone area codes in the North American Numbering Plan (NANP) for the Dominican Republic. As with all NANP members, the Dominican Republic uses country code 1 and follows the standard procedures for dialing ten-digit national telephone numbers, which consist of the area code, a three-digit central office code, and a four-digit line number. The three area codes of the country are organized as an overlay plan for a single numbering plan area (NPA) comprising the entire country. Thus, ten-digit dialing is mandatory.
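The ten-digit structure described above (area code, central office code, line number) can be illustrated with a short parsing sketch; the helper name and sample numbers are hypothetical.

```python
# Sketch: split a ten-digit NANP national number into its three parts.
# The set of Dominican Republic overlay codes comes from the text;
# the helper name and sample numbers are illustrative.

DR_AREA_CODES = {"809", "829", "849"}

def parse_nanp(number):
    """Return (area_code, central_office_code, line_number) for a
    ten-digit NANP number, optionally prefixed with country code 1."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # strip country code 1
    if len(digits) != 10:
        raise ValueError("expected a ten-digit national number")
    return digits[:3], digits[3:6], digits[6:]

area, office, line = parse_nanp("1-809-555-1234")
print(area, office, line)          # 809 555 1234
print(area in DR_AREA_CODES)       # True
```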
Area code 809:
Area code 809 was assigned in 1958 to Bermuda and the Caribbean islands. However, Cuba, Haiti, the Netherlands Antilles, and the French West Indies decided not to participate in the North American Numbering Plan. Beginning with Bermuda in November 1994, and The Bahamas, Puerto Rico, and Barbados in 1995, several countries in the Caribbean received individual area code assignments from the NANPA, effectively splitting area code 809. By 1999, after Saint Vincent and the Grenadines stopped using it, area code 809 was retained only by the Dominican Republic.
Area codes 829 and 849:
Area code 829 was added for all of the Dominican Republic to form an all-services distributed overlay on January 31, 2005. Earliest central office assignments were possible on October 1, 2005. The relief was needed because of the growth of mobile phone communication in the Dominican Republic, starting in the mid-1990s with prepaid telephone cards, and growing quickly through the early 2000s with the launch of two cellphone carriers, Orange (now Altice) and Centennial (now Viva), in addition to the preexisting CODETEL (now Claro Dominicana) and TRICOM (now Altice Dominicana).
Area codes 829 and 849:
The expansion in telecommunication services continued, and further relief for the numbering resources was needed in 2009, when an additional area code, 849, was assigned for the numbering plan area. The earliest central office code assignments were possible on July 1, 2009, but did not occur until 2010.
Calling scam:
Telephone fraud scams once involved area code 809, which was exploited because international calls from the United States are charged at a higher rate than domestic calls. The charge is set jointly by the originating and terminating countries; the foreign portion of the charge could be very high and was not regulated. There may have been a resurgence with wireless telephones. The victim received a message on an answering machine asking them to call a number with an 809 area code. The number dialed, however, was an international call, with a share of the revenue going from the foreign telephone company to the operator of the number. The victim could be put on hold indefinitely and billed for each minute.
Calling scam:
More recently, a similar scheme known as the "one ring scam" has emerged due to the prevalence of wireless phones, which display callback numbers automatically. The perpetrator calls the victim via a robodialer or similar means, sometimes at night, then hangs up after the call is answered, hoping that the receiver will be curious enough to call back; the callback incurs an automatic $19.95 international fee, as well as $9.00/min thereafter. Similar scams have been linked to Grenada (area code 473), Antigua (area code 268), Jamaica (area code 876), and the British Virgin Islands (area code 284).
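The fee structure quoted above amounts to a flat connection fee plus a per-minute rate; this sketch uses the figures from the text, though actual scam tariffs varied.

```python
# Sketch of the callback charge described above: a flat international
# connection fee plus a per-minute rate. The figures are the ones
# quoted in the text; actual scam tariffs varied.

def callback_charge(minutes, connect_fee=19.95, per_minute=9.00):
    """Total billed for a callback lasting `minutes` minutes."""
    return round(connect_fee + per_minute * minutes, 2)

print(callback_charge(0))   # 19.95 -- hanging up immediately still costs the fee
print(callback_charge(5))   # 64.95 -- five minutes on hold
```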
**Mimetic isomorphism**
Mimetic isomorphism:
Mimetic isomorphism in organization theory refers to the tendency of an organization to imitate another organization's structure because of the belief that the structure of the latter organization is beneficial. This behavior happens primarily when an organization's goals or means of achieving those goals are unclear. In this case, mimicking another organization perceived as legitimate becomes a "safe" way to proceed. An example is a struggling regional university hiring a star faculty member in order to be perceived as more similar to organizations that are revered (e.g., an Ivy League institution). Mimetic isomorphism contrasts with coercive isomorphism, where organizations are forced to change by external forces, and normative isomorphism, where professional standards or networks influence change.
Mimetic isomorphism:
The term has been applied by consulting firms such as McKinsey & Co. as part of their recommendations to companies undergoing restructuring or other organizational transformations.
**3H domain**
3H domain:
In molecular biology, the 3H domain is a protein domain named after its three highly conserved histidine residues. The 3H domain appears to be a small molecule-binding domain, based on its occurrence with other domains. Several proteins carrying this domain are transcriptional regulators from the biotin repressor family. The transcription regulator TM1602 from Thermotoga maritima is a DNA-binding protein thought to belong to a family of de novo NAD synthesis pathway regulators. TM1602 has an N-terminal DNA-binding domain and a C-terminal 3H regulatory domain. The N-terminal domain appears to bind to the NAD promoter region and repress the de novo NAD biosynthesis operon, while the C-terminal 3H domain may bind to nicotinamide, nicotinic acid, or other substrates/products. The 3H domain has a 2-layer alpha/beta sandwich fold.
**SPG14**
SPG14:
Spastic paraplegia 14 (autosomal recessive) is a protein that in humans is encoded by the SPG14 gene.
**Transmissible gastroenteritis virus**
Transmissible gastroenteritis virus:
Transmissible gastroenteritis virus or transmissible gastroenteritis coronavirus (TGEV) is a coronavirus which infects pigs. It is an enveloped, positive-sense, single-stranded RNA virus which enters its host cell by binding to the APN receptor. The virus is a member of the genus Alphacoronavirus, subgenus Tegacovirus, species Alphacoronavirus 1. Proteins that contribute to the overall structure of TGEV include the spike (S), envelope (E), membrane (M) and nucleocapsid (N). The TGEV genome is approximately 28.6 kilobases in size. Other coronaviruses that belong to the species Alphacoronavirus 1 are Feline coronavirus, Canine coronavirus and Feline infectious peritonitis virus.
Biology:
TGEV belongs to the family Coronaviridae, genus Alphacoronavirus, species Alphacoronavirus 1. It is an enveloped virus with a positive-sense, single-stranded RNA genome. TGEV has three major structural proteins: the phosphoprotein (N), the integral membrane protein (E1), and the large glycoprotein (E2, the spike protein). The N protein encapsulates the genomic RNA, and the spike protein forms the viral surface projections.
Biology:
The 3' segment of about 8,000 nucleotides is expressed through subgenomic RNAs, while the remaining part of the genome encodes the viral replicase. The three largest genes run from 5' to 3' in the order E2, E1, N. There are about seven other open reading frames that are not structurally related. The genes overlap very little, and the genome is densely packed. A negative strand is synthesized to serve as a template for transcribing RNAs of genome size and several subgenome-sized RNAs.
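The nested transcription pattern described above can be sketched schematically: each subgenomic mRNA joins the common 5' leader to everything from one downstream gene to the 3' end. The gene layout below is a simplified placeholder, not the real TGEV sequence.

```python
# Schematic sketch of coronavirus subgenomic mRNA synthesis as
# described above: each sgRNA joins the common 5' leader to the genome
# segment running from one gene to the 3' end, yielding a 3'-nested
# set. The gene layout is a placeholder, not the real TGEV sequence.

LEADER = "leader-"
GENOME = "replicase|S|3a|3b|E|M|N"   # 5' -> 3', '|' marks gene boundaries

def subgenomic_mrnas(genome, leader):
    """Yield one sgRNA per gene downstream of the replicase:
    leader + everything from that gene to the 3' end."""
    genes = genome.split("|")
    for i in range(1, len(genes)):   # skip replicase (translated from genomic RNA)
        yield leader + "|".join(genes[i:])

for rna in subgenomic_mrnas(GENOME, LEADER):
    print(rna)   # leader-S|3a|3b|E|M|N, then leader-3a|..., ..., leader-N
```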
Biology:
The E2 protein forms a petal-shaped, 20 nm long projection from the virus's surface. The E2 protein is thought to be involved in pathogenesis by helping the virus enter the host cytoplasm. The E2 protein initially has 1,447 residues, after which a short hydrophobic sequence is cleaved. After glycosylation in the Golgi, the protein is incorporated into the new virus. There are several functional domains within the E2 protein. A 20-residue hydrophobic segment at the C-terminus anchors the protein in the lipid membrane. The rest of the protein is divided into two parts: a hydrophilic stretch that is inside the virus and a cysteine-rich stretch containing possible fatty acylation sites. The E1 protein is mostly embedded in the lipid envelope and hence plays an essential role in virus architecture. The E1 protein is postulated to interact with the lymphocyte membrane, which leads to the induction of IFN-coding genes.
Biology:
Coronaviruses enter the host by first attaching to the host cell using the spike glycoprotein. The S protein interacts with porcine aminopeptidase N (pAPN), a cellular receptor, to aid its entry. The same cell receptor is also a point of contact for human coronaviruses. A domain in the S spike protein is recognized by pAPN, and transfection of pAPN into otherwise nonpermissive cells renders them susceptible to TGEV infection.
Morphology:
The morphology of TGEV was mostly determined by electron microscopy techniques. The morphology is similar to that of myxoviruses and oncogenic viruses in that they have surface projections and an envelope. The virions are mainly circular in shape, with a diameter ranging from 100 to 150 nm including the surface projections. The projections are mainly petal-shaped and attached by a very narrow stalk; they detach easily from the virus and are found only on select areas of the surface.
Pathology:
TGEV infects pigs. In piglets less than 1 week old, the mortality rate is close to 100%. The pathology of TGEV is similar to that of other coronaviruses. Once the virus infects the host, it multiplies in the cell lining of the small intestine resulting in the loss of absorptive cells that in turn leads to shortening of villi. The infected swine then have reduced capability for digesting food and die from dehydration.
Occurrence:
TGE was prevalent in the US when it was originally discovered in the early 20th century. It became scarcer in the late 1980s with the rise of porcine respiratory coronavirus (PRCV). It is thought that PRCV provides some immunity to TGE.
Engineering TGEV coronavirus:
The transmissible gastroenteritis virus has been engineered as an expression vector. The vector was constructed by replacing the nonessential 3a and 3b ORFs with green fluorescent protein, whose expression is driven by the transcription-regulating sequences (TRS). The resulting construct was still enteropathogenic, but with reduced growth. Infection of cells with this altered virus elicits a specific lactogenic immune response against the heterologous protein. The application of this vector is in the development of a vaccine or even gene therapy. The motivation for engineering the TGEV genome is that coronaviruses have large genomes, so they have room for insertion of foreign genes. Coronaviruses also infect the respiratory tract, so they can be used to target antigens to that area and generate an immune response.
**Rainbow rose**
Rainbow rose:
The rainbow rose is a rose whose petals have been artificially colored. The method exploits the rose's natural process of drawing water up the stem. By splitting the stem and dipping each part in differently colored water, the colors are drawn into the petals, resulting in a multicolored rose. These changes cause the roses not to live as long as uncolored ones. The colors are artificial.
Rainbow rose:
Besides roses, other cut flowers like the chrysanthemum, carnation, hydrangea, and some species of orchids can also be colored using the same method.
History:
Agatha Christie's notorious Poirot mystery “The Gilded Lily” popularized the practice of artificially altering floral color patterns in cultured society only as recently as the 20th century.
Contrary to unfounded claims (likely of Dutch origin), no rainbow roses or other artificially colored flower arrangements appeared at any expos in Holland in or before 2007.
Cultivars:
A commonly used cultivar is "Vendela", a cream colored Hybrid Tea cultivated in the Netherlands, Colombia and Ecuador, as this cultivar absorbs the different dyes perfectly. "Vendela" has a flower diameter of 6 cm in full bloom, a stem length of 40 to 100 cm, and is not scented. Other cultivars that can be used for this coloring process are Rosa La Belle and Rosa Avalanche+. Some vendors use the cultivar name to describe their products, e.g., Vendela Rainbow Rose, or Rose Avalanche Crystal Green.
Color combinations:
The Original Rainbow Rose has the seven colors of the rainbow and is the most popular rose in this category. However, there are also the tropical variant with combinations of red/pink and yellow, and the ocean variant with combinations of green and blue. Other color combinations are also possible, though black and white are impossible to make.
**Ship cradle**
Ship cradle:
A ship cradle is a rig designed to hold a ship or boat upright on dry land to allow the vessel to be built or repaired. The vessel is held in place in the cradle by wooden chocks, cables, sandbags or restraining fixtures on the cradle. Ship cradles are made of timber or steel and are usually built adjacent to the seashore, lakeside or riverside, or on the floor of a dry dock.
Overview:
"Cradle" may refer to the whole rig or sometimes each section of it. The cradle may be fixed to the dock floor, relying on the tides or a dry dock to drain it, or be equipped with wheels, running on an inclined track to allow the ship to be moved out of the water to a dry parking area. Large or heavy ships require steel railway wheels running on fixed steel tracks; cradles designed for smaller boats may have rubber-tyred wheels, usually running on a concrete slipway, and can be moved anywhere in the boatyard.
Movement:
Most cradles with steel wheels can move only in one direction, following the cradle rail track, and are designed to lift the vessel out of the water either longitudinally (bow-stern) or transversely (across the beam). The empty cradle shown top right extracts the ship longitudinally, but its wheels can then be rotated 90°, allowing it to park the ship transversely and freeing up the slipway for another vessel. The ferry bottom right can also be transported in the longitudinal as well as the transverse direction but uses a separate transverse carriage to change direction.
**Iconograph**
Iconograph:
An iconograph is a picture formed by a word or words. It can take the form of irregularly shaped letters or (especially in the case of poems) irregularly aligned text.
American poet May Swenson popularized such poems in her 1970 book Iconographs, which contained a number of poems laid out to resemble their subjects (e.g. a butterfly).
**Heme A**
Heme A:
Heme A (or haem A) is a heme, a coordination complex consisting of a macrocyclic ligand called a porphyrin chelating an iron atom. Heme A is a biomolecule and is produced naturally by many organisms. Heme A, which often appears dichroic green/red in solution, is a structural relative of heme B, a component of hemoglobin, the red pigment in blood.
Relationship to other hemes:
Heme A differs from heme B in that a methyl side chain at ring position 8 is oxidized to a formyl group and a hydroxyethylfarnesyl group, an isoprenoid chain, has been attached to the vinyl side chain at ring position 2 of the iron tetrapyrrole heme. Heme A is similar to heme O in that both have this farnesyl addition at position 2, but heme O lacks the formyl group at position 8, retaining the methyl group there. The correct structure of heme A, based upon NMR and IR experiments on the reduced, Fe(II) form of the heme, was published in 1975. The structure was confirmed by synthesis of the dimethyl ester of the iron-free form.
History:
Heme A was first isolated by the German biochemist Otto Warburg in 1951 and shown by him to be the active component of the integral membrane metalloprotein cytochrome c oxidase.
Stereochemistry:
The final structural question, the exact geometric configuration about the first carbon at ring position 3 of ring I (the carbon bound to the hydroxyl group), has been shown to be the chiral S configuration. Like heme B, heme A is often attached to the apoprotein through a coordinate bond between the heme iron and a conserved amino acid side chain. In the important respiratory protein cytochrome c oxidase (CCO), this fifth ligand for the heme A at the oxygen reaction center is a histidyl group. Histidine is a common ligand for many hemeproteins, including hemoglobin and myoglobin.
Stereochemistry:
An example of a metalloprotein that contains heme A is cytochrome c oxidase, in which the heme A of the cytochrome a portion is bound by two histidine residues. This very complicated protein contains heme A at two different sites, each with a different function. The iron of the heme A of cytochrome a is hexacoordinated, that is, bound to 6 other atoms. The iron of the heme A of cytochrome a3 is sometimes bound by only 5 other atoms, leaving the sixth site available to bind dioxygen (molecular oxygen). In addition, this enzyme binds 3 copper ions, as well as magnesium, zinc, and several potassium and sodium ions. The two heme A groups in CCO are thought to readily exchange electrons between each other, the copper ions, and the closely associated protein cytochrome c.
Stereochemistry:
Both the formyl group and the isoprenoid side chain are thought to play important roles in conserving the energy of oxygen reduction by cytochrome c oxidase. CCO is thought to conserve the energy of dioxygen reduction by pumping protons into the mitochondrial intermembrane space, and both the formyl and hydroxyethylfarnesyl groups of heme A are thought to play important roles in this critical process, as published by the influential group of S. Yoshikawa.
**PaTaank**
PaTaank:
PaTaank is a video game developed and published by PF Magic for the 3DO.
Gameplay:
PaTaank is a pinball game where the player steers the pinball, hitting it into targets.
Reception:
Next Generation reviewed the game, rating it two stars out of five, and stated that "It's an interesting idea, done badly." Entertainment Weekly gave the game an A−.
**GNU Go**
GNU Go:
GNU Go is a free software program by the Free Software Foundation that plays Go. Its source code is quite portable, and can be easily compiled for Linux, as well as other Unix-like systems, Microsoft Windows and macOS; ports exist for other platforms.
The program plays Go against the user, at about 5 to 7 kyu strength on the 9×9 board. Multiple board sizes are supported, from 5×5 to 19×19.
Strength:
At this level of performance, GNU Go was between six and seven stones weaker than the top commercial programs on good hardware as of early 2009, but comparable in strength to the strongest programs not using Monte Carlo methods. It did well at many computer Go tournaments. For instance, it took the gold medal at the 2003 and 2006 Computer Olympiads and second place at the 2006 Gifu Challenge.
Protocols:
Although ASCII-based, GNU Go supports two protocols—the Go Modem Protocol and the Go Text Protocol—by which GUIs can interface with it to give a graphical display. Several such GUIs exist. GTP also allows it to play online on Go servers (through the use of bridge programs), and copies can be found running on NNGS, KGS, and probably others.
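The Go Text Protocol mentioned above is a simple line-oriented text protocol, which is what makes bridging GNU Go to GUIs and Go servers straightforward. The sketch below shows the basic GTP framing; the helper names are illustrative, not part of any real library.

```python
# Minimal sketch of the Go Text Protocol (GTP) framing that GNU Go
# speaks: a command is a plain text line (optionally prefixed with a
# numeric id); a response starts with '=' on success or '?' on error
# and is terminated by a blank line. Helper names are illustrative.

def format_command(cmd, cmd_id=None):
    """Render a GTP command line, optionally with a numeric id."""
    return f"{cmd_id} {cmd}\n" if cmd_id is not None else f"{cmd}\n"

def parse_response(raw):
    """Split a GTP response into (success, payload)."""
    raw = raw.strip()
    if raw.startswith("="):
        return True, raw[1:].strip()
    if raw.startswith("?"):
        return False, raw[1:].strip()
    raise ValueError("not a GTP response")

print(format_command("genmove black", 1))   # 1 genmove black
print(parse_response("= D4\n\n"))           # (True, 'D4')

# With GNU Go installed, the same framing drives the engine, e.g.:
#   import subprocess
#   engine = subprocess.Popen(["gnugo", "--mode", "gtp"], text=True,
#                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
```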
Versions:
The current (stable) version of GNU Go is 3.8. The latest experimental release was 3.9.1. There is also an experimental feature for using Monte Carlo methods for 9×9 board play.
A version called Pocket GNU Go, based on GNU Go 2.6, is available for the Windows CE operating system (Pocket PC). Versions based on the much weaker 1.2 engine also exist for the Game Boy Advance and Palm Pilot.
**Telluroxide**
Telluroxide:
A telluroxide is a type of organotellurium compound with the formula R2TeO. These compounds are analogous to sulfoxides in some respects. Reflecting the decreased tendency of Te to form multiple bonds, telluroxides exist as both the monomer and the polymer, which are favored in solution and the solid state, respectively:
(R2TeO)n ⇌ n R2TeO
Telluroxides are prepared from telluroethers by halogenation followed by base hydrolysis:
R2Te + Br2 → R2TeBr2
R2TeBr2 + 2 NaOH → R2TeO + 2 NaBr + H2O
**Gametangiogamy**
Gametangiogamy:
Gametangiogamy is the fusion or copulation of whole gametangia in certain members of the phyla Zygomycota and Ascomycota. The union of the multinuclear cells is followed, after a more or less long dikaryophase, by pairwise fusion (karyogamy) of sexually different nuclei. In this case, karyogamy takes place simultaneously between many pairs of nuclei, not, as in gametogamy, between two gametic nuclei (polyfertilization).
**Radical 141**
Radical 141:
Radical 141 or radical tiger (虍部) meaning "tiger" is one of the 29 Kangxi radicals (214 radicals in total) composed of 6 strokes.
In the Kangxi Dictionary, there are 114 characters (out of 49,030) to be found under this radical.
虍 is also the 130th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China, with 虎 being its associated indexing component.
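As an aside not drawn from the text above, the radical's place in Unicode can be checked programmatically: radical 141 has a dedicated code point in the Kangxi Radicals block (which starts at U+2F00 with radical 1), and 虍 and 虎 are adjacent CJK ideographs.

```python
# Not stated in the article, but easy to verify programmatically: the
# radical exists both as an ordinary CJK ideograph and as a dedicated
# code point in Unicode's Kangxi Radicals block, which starts at
# U+2F00 with radical 1, so radical 141 sits at U+2F00 + 140.

import unicodedata

radical_141 = chr(0x2F00 + 140)
print(unicodedata.name(radical_141))    # KANGXI RADICAL TIGER
print(hex(ord("虍")), hex(ord("虎")))    # 0x864d 0x864e (adjacent)
```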
Literature:
Fazzioli, Edoardo (1987). Chinese Calligraphy: From Pictograph to Ideogram: The History of 214 Essential Chinese/Japanese Characters. Calligraphy by Rebecca Hon Ko. New York: Abbeville Press. ISBN 0-89659-774-1.
Lunde, Ken (Jan 5, 2009). "Appendix J: Japanese Character Sets" (PDF). CJKV Information Processing: Chinese, Japanese, Korean & Vietnamese Computing (Second ed.). Sebastopol, Calif.: O'Reilly Media. ISBN 978-0-596-51447-1.
**MX-2900ZOOM**
MX-2900ZOOM:
Made by Fujifilm, the MX-2900 was an early consumer-level digital camera with a 2.3-megapixel CCD sensor and optical resolution up to 1800 × 1200 pixels. As with all technology, the MX-2900 was surpassed by newer, faster and higher-resolution digital cameras and is now an obsolete model no longer in production.
**Medium Extended Air Defense System**
Medium Extended Air Defense System:
The Medium Extended Air Defense System (MEADS) is a ground-mobile air and missile defense system intended to replace the Patriot missile system through a NATO-managed development. The program is a development of the United States, Germany and Italy. MEADS is designed to address the shortcomings of fielded systems and to permit full interoperability between U.S. and allied forces. Germany chose MEADS to replace their MIM-104 Patriot systems in June 2015.
Description:
MEADS provides ground-mobile air and missile defense with expanded coverage. The system provides enhanced force protection against a broad array of third-dimension threats. Improved interoperability, mobility, and full 360-degree defense capability against the evolving threat are key aspects. MEADS is the first air and missile defense (AMD) system that provides continuous on-the-move protection for maneuver forces. MEADS also provides area defense, homeland defense, and weighted asset protection.
MEADS incorporates the Lockheed Martin hit-to-kill PAC-3 Missile Segment Enhancement (MSE) missile in a system including 360-degree surveillance and fire control sensors, netted-distributed tactical operations centers, and lightweight launchers. A single MEADS battery is able to defend up to 8 times the area of a Patriot battery through use of advanced 360-degree sensors, near-vertical launch capability, and the longer-range PAC-3 MSE missile. The MEADS radars – using active phased arrays and digital beam forming – enable full use of the PAC-3 MSE missile's extended range.
Description:
Truck-mounted MEADS elements drive or roll on and off C-130 and A400M transport aircraft so they are quickly deployed to a theater of operations. Because MEADS uses fewer system assets, it permits a substantial reduction in deployed personnel and equipment. MEADS reduces demand for airlift, so it can deploy to theater faster.
The minimum MEADS engagement capability requires only one launcher, one battle manager, and one fire control radar to provide 360-degree defense of troops or critical assets. As more system elements arrive, they automatically and seamlessly join the MEADS network and build out capability.
Description:
The prime contractor, MEADS International, is a multinational joint venture headquartered in Orlando, Florida. Its participating companies are MBDA Italia, MBDA Deutschland GmbH, and Lockheed Martin. The company initially won a competitive downselect to develop the MEADS system in 1999, but the program could not be started because the losing competitor filed two successive suits. In 2001, a $216 million Risk Reduction Effort contract was awarded to incorporate a new interceptor approach. In May 2005, MEADS International signed a definitized contract valued at $2 billion plus €1.4 billion for MEADS design and development. This development contract was completed in 2014. The U.S. funded 58 percent of the MEADS Design and Development program, with European partners Germany and Italy providing 25 percent and 17 percent respectively.
Description:
The German Bundeswehr completed an analysis of air defense alternatives in 2010 and strongly recommended MEADS as the basis for improving Germany's missile defense shield and as Germany's contribution to the European Phased Adaptive Approach. In February 2011, the U.S. Department of Defense announced that it intended to fulfill its commitment to complete the design and development effort, but that it would not procure the MEADS system for budgetary reasons. Lockheed Martin developed an interactive life cycle cost and capabilities application based on their Dynamic Comparative Analysis Methodology (DCAM) approach to more fully evaluate and communicate the performance and cost advantage of MEADS as compared to alternative systems. The DCAM application further reinforced the value of MEADS and is credited with helping ensure continued funding.
Description:
In October 2011, the National Armaments Directors of Germany, Italy, and the U.S. approved a contract amendment to fund two flight intercept tests, a launcher/missile characterization test, and a sensor characterization test conducted to complete the planned development scope.
In September 2013, MEADS received operating certification for Mode 5 interrogation in its identification friend or foe (IFF) system. Mode 5 is more secure and provides positive line-of-sight identification of friendly platforms equipped with an IFF transponder to better protect allied forces.
In June 2015, MEADS was selected as the basis for the German Taktisches Luftverteidigungssystem (TLVS), a new generation of air and missile defense that requires a flexible architecture based on strong networking capabilities. MEADS was a candidate for Poland's Wisła medium-range air defense system procurement, but was eliminated in June 2014 when the competition was downselected to the US Patriot system and the French/Italian SAMP/T system. However, Lockheed Martin began renewed discussions with the Polish Ministry of Defense in February 2016, leading to a formal request for information in September 2016. MEADS remains a candidate for Poland's Narew short-range air defense system procurement.
Major equipment items:
The MEADS air and missile defense system is composed of six major equipment items. The MEADS radars, battle manager, and launchers are designed for high reliability so that the system will be able to maintain sustained operations much longer than legacy systems, resulting in overall lower operation and support costs.
Major equipment items:
Multifunction Fire Control Radar (MFCR) An X-band, solid-state, active electronically scanned array (AESA) radar using element-level transmit/receive modules developed in Germany. The MMIC is supplied by the Selex Sistemi Integrati foundry in Rome. The photonics foundry in Rome supplies lithium niobate (LiNbO3) components for the radar. The MFCR radar provides precision tracking and wideband discrimination and classification capabilities. For extremely rapid deployments, the MEADS MFCR can provide both surveillance and fire control capabilities until a surveillance radar joins the network. The MFCR uses its main beam for uplink and downlink missile communications. An advanced Mode 5 identification friend-or-foe subsystem supports improved threat identification and typing.
Major equipment items:
Surveillance Radar (SR) The UHF MEADS Surveillance Radar is a 360-degree active electronically steered array radar that provides extended range coverage. It provides threat detection capability against highly maneuverable low-signature threats, including short- and medium-range ballistic missiles, cruise missiles, and other air-breathing threats.
Major equipment items:
Tactical Operations Center (TOC) The MEADS TOC provides battle management and C4I (command, control, computers, communications, and intelligence). It controls an advanced network-centric open architecture that allows any combination of sensors and launchers to be organized into a single air and missile defense battle element. The system is netted and distributed. Every MEADS battle manager, radar, and launcher is a wireless node on the network. By virtue of multiple communications paths, the network can be expanded or contracted as the situation dictates and precludes single point failure if one node becomes inoperable. It also has a plug-and-fight capability that allows MEADS launchers and radars to seamlessly enter and leave the network without shutting it down and interrupting ongoing operations. MEADS uses open, non-proprietary standardized interfaces to extend plug-and-fight to non-MEADS elements. This flexibility is new for ground-based AMD systems.
Major equipment items:
Launcher and Reloader The lightweight MEADS launcher is easily transportable, tactically mobile, and capable of rapid reload. It carries up to eight PAC-3 Missile Segment Enhancement (MSE) Missiles and achieves launch readiness in minimum time. A MEADS reloader is similar but lacks launcher electronic systems.
Major equipment items:
Certified Missile Round PAC-3 MSE The PAC-3 Missile Segment Enhancement (MSE) missile is the baseline interceptor for MEADS. The interceptor increases the system's range and lethality over the baseline PAC-3 missile, which was selected as the primary missile for MEADS when the design and development program began in 2004. The MSE missile increases the engagement envelope and defended area by using more responsive control surfaces and a more powerful rocket motor.
Major equipment items:
IRIS-T SL In Germany, the PAC-3 MSE missile is expected to be supplemented by IRIS-T SLM as secondary missile for ground-based medium range air defense. It is based on the IRIS-T air-to-air missile equipped with an enlarged rocket motor, datalink, and jettisonable drag-reducing nose cone.
Plug-and-fight:
In the BMC4I TOC, plug-and-fight flexibility lets MEADS exchange data with non-MEADS sensors and shooters. The same capability lets MEADS move with ground forces and interoperate with allied forces. Through interoperability features designed into the system, MEADS will dramatically improve combat effectiveness and situational awareness, reducing the possibility of friendly fire incidents. MEADS system elements can seamlessly integrate into each nation's, or NATO's, combat architecture as required.
Plug-and-fight:
Units can be dispersed over a wide area. Command and control of launchers and missiles can be handed over to a neighboring battle management unit while the initial systems are moved, maintaining maneuver force protection. Plug-and-fight connectivity lets MEADS elements attach to and detach from the network at will, with no requirement to shut the system down.
The MEADS plug-and-fight capability enables command and control over other air and missile defense system elements through open, non-proprietary standardized interfaces. MEADS implements a unique ability to work with secondary missile systems if selected, and to evolve as other capabilities are developed.
Integration and test history:
In July 2010, the MEADS BMC4I demonstrated its interoperability with the NATO Air Command and Control System (ACCS) during tests using the Active Layer Theatre Ballistic Missile Defense (ALTBMD) Integration Test Bed being developed by NATO. The test was an early maturity demonstration for the MEADS BMC4I capability. In August 2010, the MEADS program completed an extensive series of Critical Design Review (CDR) events with a Summary CDR at MEADS International. Reviewers from Germany, Italy, the U.S., and the NATO Medium Extended Air Defense System Management Agency (NAMEADSMA) evaluated the MEADS design criteria in a comprehensive series of 47 reviews. In December 2010, the first MEADS launcher and Tactical Operations Center were displayed in ceremonies in Germany and Italy before initiating system integration tests at Pratica di Mare Air Force Base in Italy. In November 2011, it was announced that the MEADS Multifunction Fire Control Radar had been integrated with a MEADS TOC and launcher at Pratica di Mare Air Force Base. The objectives of the integration test series were to demonstrate that the MEADS TOC could control the MEADS MFCR in coordination with the MEADS launcher as initial operational proof of the plug-and-fight capability. The MFCR demonstrated key functionalities including 360-degree target acquisition and track using both dedicated flights and other air traffic. Then, at White Sands Missile Range, MEADS demonstrated a first-ever over-the-shoulder launch of the PAC-3 MSE missile against a simulated target attacking from behind. It required a unique sideways maneuver, demonstrating a 360-degree capability. The missile executed a planned self-destruct sequence at the end of the mission after successfully engaging the simulated threat. In November 2012 at White Sands Missile Range, MEADS detected, tracked, intercepted, and destroyed an air-breathing target in an intercept flight test.
The test configuration included a networked MEADS Tactical Operations Center, a lightweight launcher firing a PAC-3 MSE, and a 360-degree MEADS Multifunction Fire Control Radar, which tracked the MQM-107 target and guided the missile to a successful intercept. Several progress milestones were demonstrated during 2013, culminating in a 360-degree dual-intercept test that went beyond initial contract objectives. In April, the MEADS Surveillance Radar acquired and tracked a small test aircraft and relayed its location to a MEADS TOC, which generated cue search commands. The MFCR, in full 360-degree rotating mode, searched the cued area, acquired the target, and established a dedicated track. In June 2013, during six days of testing, MEADS demonstrated network interoperability with NATO systems during Joint Project Optic Windmill (JPOW) exercises. MEADS demonstrated battle management capability to transmit, receive, and process Link 16 messages and to conduct threat engagements. In November 2013, MEADS intercepted and destroyed two simultaneous targets attacking from opposite directions during a stressing demonstration of its 360-degree AMD capabilities at White Sands Missile Range, New Mexico. All elements of the MEADS system were tested, including the 360-degree MEADS Surveillance Radar, a networked MEADS battle manager, two lightweight launchers firing PAC-3 Missile Segment Enhancement (MSE) missiles, and a 360-degree MEADS Multifunction Fire Control Radar (MFCR). The flight test achieved all criteria for success.
Integration and test history:
The first target, a QF-4 air-breathing target, approached from the south. Simultaneously a MGM-52 Lance missile, flying a tactical ballistic missile trajectory, attacked from the north. The Surveillance Radar acquired both targets and provided target cues to the MEADS battle manager, which generated cue commands for the MFCR. The MFCR tracked both targets successfully and guided missiles from launchers in the Italian and German configurations to successful intercepts. At White Sands Missile Range, Lockheed Martin and Northrop Grumman also demonstrated plug-and-fight connectivity between MEADS and the U.S. Army's Integrated Battle Command System (IBCS). IBCS demonstrated the ability to plug-and-fight a 360-degree MEADS Surveillance Radar and Multifunction Fire Control Radar. In July 2014, MEADS completed a comprehensive system demonstration at Pratica di Mare Air Base, Italy. The tests, including operational demonstrations run by German and Italian military personnel, were designed to seamlessly add and subtract system elements under representative combat conditions, and to blend MEADS with other systems in a larger system architecture. All criteria for success were achieved.
Integration and test history:
During the test, the plug-and-fight capability to rapidly attach and control an external Italian deployable air defense radar was demonstrated. Also demonstrated was engage-on-remote flexibility, which allows operators to engage threats at greater distances even when they are masked by terrain. By reassigning workload, MEADS demonstrated the ability to maintain defense capabilities if any system element is lost or fails.
Integration and test history:
Interoperability with German and Italian air defense assets was demonstrated through exchange of standardized NATO messages. Italian air-defense assets were integrated into a test bed at an Italian national facility, while the Surface to Air Missile Operations Centre and Patriot assets were integrated into a test bed at the German Air Force Air Defense Center at Fort Bliss, Texas. MEADS further demonstrated the capability to perform engagement coordination with other systems, which fielded systems are unable to do. In September 2014, MEADS MFCRs completed a six-week performance test at Pratica di Mare Air Base, Italy, and MBDA Deutschland's air defense center in Freinhausen. During the tests, the MEADS MFCR successfully demonstrated several advanced capabilities, many of which are critical for ground-mobile radar systems. Capabilities tested included tracking and canceling of jamming signals; searching, cueing, and tracking in ground clutter; and successfully classifying target data using kinematic information. On 9 June 2015, Defense Minister Ursula von der Leyen announced that Germany had selected MEADS as the foundation for its Taktisches Luftverteidigungssystem (TLVS), which is planned to replace Germany's Patriot systems. In January 2017, MEADS International presented an updated offer for Poland's medium-range air defense (Wisła) program to Poland's Ministry of National Defense. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital Library of Mathematical Functions**
Digital Library of Mathematical Functions:
The Digital Library of Mathematical Functions (DLMF) is an online project at the National Institute of Standards and Technology (NIST) to develop a database of mathematical reference data for special functions and their applications. It is intended as an update of Abramowitz and Stegun's Handbook of Mathematical Functions (A&S). It was released online on 7 May 2010, though some chapters appeared earlier. In the same year it was published by Cambridge University Press under the title NIST Handbook of Mathematical Functions. In contrast to A&S, whose initial print run was produced by the U.S. Government Printing Office and was in the public domain, NIST asserts that it holds copyright to the DLMF under Title 17 USC 105 of the U.S. Code. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thrombus perviousness**
Thrombus perviousness:
Thrombus perviousness is an imaging biomarker used to estimate clot permeability from CT imaging. It reflects the ability of artery-occluding thrombi to let fluid seep into and through them: the more pervious a thrombus, the more fluid it lets through. Thrombus perviousness can be measured using radiological imaging routinely performed in the clinical management of acute ischemic stroke: CT scans without intravenous contrast (also called non-contrast CT, in short NCCT) combined with CT scans after intravenously administered contrast fluid (CT-angiography, in short CTA). Pervious thrombi may let more blood pass through to the ischemic brain tissue, and/or have a larger contact surface and a histopathology more sensitive to thrombolytic medication. Thus, patients with pervious thrombi may have less brain tissue damage by stroke. The value of thrombus perviousness in acute ischemic stroke treatment is currently being researched.
Etymology:
Emilie Santos et al. introduced the term thrombus perviousness in 2016 to estimate thrombus permeability in ischemic stroke patients. Previously, Mishra et al. used ‘residual flow within the clot’, and Frölich et al. used ‘antegrade flow across incomplete vessel occlusions’, to describe an estimate of thrombus permeability. Permeability is the physical measure of the ability of a material to transmit fluids over time. To measure thrombus permeability, one needs to measure contrast flow through a clot over time and the pressure drop caused by the occlusion, which is commonly not possible in the acute management of a patient with acute ischemic stroke. The current standard diagnostic protocol for acute ischemic stroke requires only single-phase imaging, visualizing the thrombus at a single snapshot in time. Therefore, thrombus perviousness was introduced as a derivative measure of permeability.
Measurement:
The amount of contrast that seeps into a thrombus can be quantified by the density difference of thrombi between non-contrast computed tomography (NCCT) and CT angiography (CTA) images. Two measures for thrombus perviousness have been introduced: (1) the void fraction and (2) thrombus attenuation increase (TAI).
Measurement:
Void fraction (ε) The void fraction represents the ratio of the void volume within a thrombus, filled with a volume of blood (Vblood), to the total thrombus volume, which also includes the volume of thrombus material (Vthrombus): ε = Vblood / (Vblood + Vthrombus). The void fraction can be estimated by measuring the attenuation increase (Δ) between NCCT and CTA in the thrombus (Δthrombus) and in the contralateral artery, which fills with contrast on CTA (Δblood), and subsequently computing the ratio of these Δs: ε = Δthrombus / Δblood. Thrombus attenuation increase To measure TAI, the mean attenuation (density, in Hounsfield units) of a clot is measured on NCCT (ρthrombusNCCT) and subtracted from the thrombus density measured on CTA (ρthrombusCTA); CTA thrombus density increases after administration of the high-density contrast fluid used in CTA: Δthrombus = ρthrombusCTA − ρthrombusNCCT. A manual (region of interest [ROI]-based) and a semi-automated (full thrombus segmentation) method have been described to measure thrombus density.
Measurement:
Manual 3-ROI TAI assessment In the manual thrombus perviousness assessment, spherical ROIs with a diameter of 2 mm are manually placed in the thrombus, both on NCCT and CTA. To improve reflection of possible thrombus heterogeneity, three ROIs are placed per imaging modality rather than one. The average of every three ROIs is calculated and used as ρthrombusNCCT and ρthrombusCTA.
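The manual measurement above can be sketched as follows. This is a minimal Python illustration; the function names and the density values are hypothetical, not from a real dataset:

```python
# Sketch of the TAI and void-fraction calculations described above.

def thrombus_attenuation_increase(rois_ncct, rois_cta):
    """TAI: mean CTA density minus mean NCCT density (Hounsfield units),
    each averaged over the three manually placed 2-mm ROIs."""
    rho_ncct = sum(rois_ncct) / len(rois_ncct)
    rho_cta = sum(rois_cta) / len(rois_cta)
    return rho_cta - rho_ncct

def void_fraction(delta_thrombus, delta_blood):
    """epsilon: attenuation increase in the thrombus divided by that in
    the contrast-filled contralateral artery."""
    return delta_thrombus / delta_blood

# Three ROI densities per modality (illustrative values):
tai = thrombus_attenuation_increase([45.0, 50.0, 55.0], [60.0, 65.0, 70.0])
eps = void_fraction(tai, 200.0)  # delta_blood from the contralateral artery
print(tai, eps)  # 15.0 0.075
```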
Measurement:
Semi-automated full thrombus segmentation In automated measurements, the thrombus on CTA images is semi-automatically segmented in three steps.
Measurement:
An observer places four seed points. The first two are placed in the vasculature ipsilateral to (on the same side as) the occlusion, one proximal and one distal to the clot. The second two are placed in the contralateral vasculature (on the opposite side), both at approximately the same height as the first two points. The automated method subsequently segments the contralateral vasculature using these seed points.
Measurement:
The segmentation of the contralateral side is mapped to the occluded artery, using mirror symmetry, to segment the occluded artery.
The thrombus is segmented using intensity based region growing.
Finally, the density distribution of the entire thrombus in NCCT is compared to that in CTA to calculate thrombus attenuation increase (Δ).
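Step 3 above, intensity-based region growing, can be sketched in two dimensions as follows. This is a toy Python illustration; real implementations operate on 3-D CT volumes with calibrated Hounsfield-unit tolerances:

```python
# Minimal intensity-based region growing: starting from a seed pixel,
# repeatedly add 4-connected neighbors whose intensity is within `tol`
# of the seed intensity.
from collections import deque

def region_grow(image, seed, tol):
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - seed_val) <= tol:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# Toy "image": low values are thrombus-like, high values vessel lumen.
image = [
    [50, 52, 90, 91],
    [51, 53, 92, 90],
    [50, 51, 50, 52],
]
print(sorted(region_grow(image, (0, 0), 5)))
```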
Measurement:
Comparison between 3-ROI and semi-automated full thrombus measurement It has been shown that manual measurement tends to overestimate actual whole-thrombus density, especially in low-density thrombi. Measurements based on the full thrombus show a wider variety of thrombus densities, better discrimination of high- and low-density thrombi, and a stronger correlation with outcome measures than measurements based on 3 ROIs.
Influence of imaging parameters:
TAI measurements performed on CT scans with thicker slices will be less accurate, because volume averaging results in a reduction of thrombus density on NCCT. Therefore, it has been suggested to only use thin-slice CT images (≤2.5 mm) to measure thrombus perviousness.
Additional permeability measures:
Alternative measures of similar thrombus permeability characteristics have been introduced and are still being introduced. Mishra et al. introduced the residual flow grade, which distinguishes no contrast penetration (grade 0); contrast permeating diffusely through thrombus (grade 1); and tiny hairline lumen or streak of well-defined contrast within the thrombus extending either through its entire length or part of the thrombus (grade 2).
Clinical relevance:
Currently, treatment for acute ischemic stroke due to an occlusion of one of the arteries of the proximal anterior intracranial circulation consists of intravenous thrombolysis followed by endovascular thrombectomy for patients that arrive at the hospital within 4.5 hours of stroke onset. Patients that arrive later than 4.5 hours after onset, or have contra-indications for intravenous thrombolysis can still be eligible for endovascular thrombectomy only. Even with treatment, not all patients recover after their stroke; many are left with permanent brain damage. Increased thrombus perviousness may decrease brain damage during stroke by allowing more blood to reach the ischemic tissue. Furthermore, level of perviousness may reflect histopathological composition of clots or size of contact surface for thrombolytic medication, thereby influencing effectiveness of thrombolysis.
Thrombus perviousness in research:
A number of studies have been conducted on the effects of thrombus perviousness on NCCT and CTA. In addition, dynamic imaging modalities have been used to investigate thrombus perviousness/permeability in animal and laboratory studies and in humans using digital subtraction angiography (DSA) and CT Perfusion/4D-CTA. 4D-CTA may enable more accurate measurement of TAI, since it overcomes the influence of varying scan timing and contrast arrival in single-phase CTA. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mainline (aeronautics)**
Mainline (aeronautics):
A mainline flight is a flight operated by an airline's main operating unit, rather than by regional alliances, regional code-shares, regional subsidiaries, or wholly owned subsidiaries offering low-cost operations. Mainline carriers typically operate between hub airports within their network and on international or long-haul services, using narrow-body and wide-body aircraft. This is in contrast to regional airlines, which provide feeder services to hub airports with smaller turboprop or regional jet aircraft, and to low-cost carrier subsidiaries serving leisure markets.
Mainline (aeronautics):
In the United States, examples of mainline passenger airline flights include those operated by American Airlines, Delta Air Lines, and United Airlines, but not flights operated by the regional airlines Envoy Air, Mesa Airlines, Executive Airlines, Piedmont Airlines, or PSA Airlines with regional jets, nor the services of regional airline marketing brands such as American Eagle, Delta Connection, or United Express, which are flown aboard lower-capacity narrow-body jets and turboprop aircraft, such as those produced by Embraer or Bombardier, that do not have transcontinental range.
Mainline (aeronautics):
U.S. legacy carriers may operate branded mainline services using the same flight crews and air operator's certificate (AOC) as their mainline operations. For example, United p.s. and American Flagship Service cater to the medium-haul transcontinental business segment. Short-haul air shuttles, such as Delta Shuttle, operate at high frequency intervals between busy city pairs. Previously, U.S. legacy carriers operated low-cost air services within their mainline operations to compete with low-cost carriers; these operations were short-lived and included brands such as Continental Lite, Song (Delta), and Ted (United). Outside the U.S., low-cost carrier subsidiary airlines are more common, with examples including Air Canada Rouge, Jetstar Airways (subsidiary of Qantas), and Eurowings (subsidiary of Lufthansa).
Mainline (aeronautics):
An airline carrier's collective bargaining agreement with flight crews stipulates the maximum seating capacity of regional aircraft; as such, any aircraft that exceeds this capacity must operate as a mainline flight. The converse is not the case; mainline flight crews, with proper type ratings, may operate aircraft that are smaller than typical mainline aircraft.
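The capacity rule above amounts to a simple threshold check, sketched here in Python. The 76-seat cap is only an example of a common U.S. scope-clause limit; actual caps vary by contract:

```python
# Illustrative scope-clause check: any aircraft above the contractual
# regional seat cap must operate as a mainline flight.
def must_fly_mainline(seat_count, regional_seat_cap=76):
    # 76 seats is used here purely as an example cap, not a universal value.
    return seat_count > regional_seat_cap

print(must_fly_mainline(90))  # True
print(must_fly_mainline(70))  # False
```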
Mainline subsidiary carriers and airline within an airline brands:
Notes: 1 Though not part of the main "legacy airline" or "flag carrier", these particular airlines are often described as "regional airlines" by the mainline airline counterparts they are affiliated with or owned by. 2 These airline businesses, which resulted from airline liberalization in Europe, do not really have a "mainline brand", but do have unified branding across multiple individual airline certificates, forming "virtual airlines" much like the American Eagle, Delta Connection, and United Express banner-branded regional airlines in the United States.
Mainline subsidiary carriers and airline within an airline brands:
North American mainline carriers' regional affiliates Notes: 1 Branding used for regional feeder service and commuter flights, operated either by a regional subsidiary or under contract by an independent regional airline. 2 These airlines are independent and not subsidiaries of mainline air carriers. 3 These independent airlines operate regional aircraft under codeshare agreements with a mainline carrier. 4 Independent airlines operating under a capacity purchase agreement with their mainline partner. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Weapon effects simulation**
Weapon effects simulation:
Weapon Effects Simulation (WES) is the creation of artificial weapons effects such as flashes, bangs and smoke during military training exercises. It is used in combination with Tactical engagement simulation (TES), which uses laser projection for training purposes instead of bullets and missiles. Typically, an accurate laser "shot" hitting a target such as a tank, will trigger cartridge-based WES equipment fitted to the tank which will give a flash, bang and smoke, signifying a hit in the exercise scenario. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Slip angle**
Slip angle:
In vehicle dynamics, slip angle or sideslip angle is the angle between the direction in which a wheel is pointing and the direction in which it is actually traveling (i.e., the angle between the forward velocity vector vx and the vector sum of the wheel's forward velocity vx and lateral velocity vy). This slip angle results in a force, the cornering force, which is in the plane of the contact patch and perpendicular to the intersection of the contact patch and the midplane of the wheel. This cornering force increases approximately linearly for the first few degrees of slip angle, then increases non-linearly to a maximum before beginning to decrease. The slip angle, α, is defined as α = arctan(vy / vx).
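The definition can be illustrated numerically with a short Python sketch; the velocity values are arbitrary:

```python
# Slip angle from the wheel-frame velocity components, per the definition
# above: alpha = arctan(vy / vx).
import math

def slip_angle(vx, vy):
    """Angle between the wheel heading (along vx) and the travel direction."""
    return math.atan2(vy, vx)

# A wheel moving forward at 20 m/s with 1.5 m/s of lateral slip:
alpha = slip_angle(20.0, 1.5)
print(round(math.degrees(alpha), 2))  # 4.29
```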
Causes:
A non-zero slip angle arises because of deformation in the tire carcass and tread. As the tire rotates, the friction between the contact patch and the road results in individual tread 'elements' (finite sections of tread) remaining stationary with respect to the road. If a side-slip velocity u is introduced, the contact patch will be deformed. When a tread element enters the contact patch, the friction between the road and the tire causes the tread element to remain stationary, yet the tire continues to move laterally. Thus the tread element will be ‘deflected’ sideways. While it is equally valid to frame this as the tire/wheel being deflected away from the stationary tread element, convention is for the co-ordinate system to be fixed around the wheel mid-plane.
Causes:
While the tread element moves through the contact patch it is deflected further from the wheel mid-plane. This deflection gives rise to the slip angle, and to the cornering force. The rate at which the cornering force builds up is described by the relaxation length.
Effects:
The ratios between the slip angles of the front and rear axles (a function of the slip angles of the front and rear tires respectively) will determine the vehicle's behavior in a given turn. If the ratio of front to rear slip angles is greater than 1:1, the vehicle will tend to understeer, while a ratio of less than 1:1 will produce oversteer. Actual instantaneous slip angles depend on many factors, including the condition of the road surface, but a vehicle's suspension can be designed to promote specific dynamic characteristics. A principal means of adjusting developed slip angles is to alter the relative roll couple (the rate at which weight transfers from the inside to the outside wheel in a turn) front to rear by varying the relative amount of front and rear lateral load transfer. This can be achieved by modifying the height of the roll centers, or by adjusting roll stiffness, either through suspension changes or the addition of an anti-roll bar.
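The front/rear slip-angle-ratio rule stated above can be sketched as follows; the function name and tolerance are hypothetical:

```python
# Illustrative classification of handling tendency from the ratio of
# front to rear axle slip angles, per the 1:1 rule described above.
def handling_tendency(front_slip_deg, rear_slip_deg, tol=1e-9):
    ratio = front_slip_deg / rear_slip_deg
    if ratio > 1 + tol:
        return "understeer"
    if ratio < 1 - tol:
        return "oversteer"
    return "neutral"

print(handling_tendency(4.0, 3.0))  # understeer
print(handling_tendency(2.5, 3.5))  # oversteer
```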
Effects:
Because of asymmetries in the side-slip along the length of the contact patch, the resultant force of this side-slip occurs away from the geometric center of the contact patch, a distance described as the pneumatic trail, and so creates a torque on the tire, the so-called self aligning torque.
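The relation implied above, a cornering force acting at the pneumatic-trail distance behind the contact-patch center, can be sketched as a simple moment calculation; the names and values are hypothetical:

```python
# Illustrative self-aligning torque: lateral (cornering) force times the
# pneumatic trail gives the restoring moment about the steering axis.
def self_aligning_torque(lateral_force_n, pneumatic_trail_m):
    return lateral_force_n * pneumatic_trail_m

# e.g. 3000 N of cornering force acting 3 cm behind the patch center:
print(self_aligning_torque(3000.0, 0.03))  # ~90 N*m
```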
Measurement of slip angle:
There are two main ways to measure slip angle of a tire: on a vehicle as it moves, or on a dedicated testing device.
There are a number of devices which can be used to measure slip angle on a vehicle as it moves; some use optical methods, some use inertial methods, some GPS and some both GPS and inertial.
Measurement of slip angle:
Various test machines have been developed to measure slip angle in a controlled environment. A motorcycle tire test machine at the University of Padua uses a 3-meter-diameter disk that rotates under a tire held at a fixed steer and camber angle (up to 54 degrees). Sensors measure the force and moment generated, and a correction is made to account for the curvature of the track. Other devices use the inner or outer surface of rotating drums, sliding planks, conveyor belts, or a trailer that presses the test tire against an actual road surface. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hadamard space**
Hadamard space:
In geometry, an Hadamard space, named after Jacques Hadamard, is a non-linear generalization of a Hilbert space. In the literature they are also equivalently defined as complete CAT(0) spaces.
A Hadamard space is defined to be a nonempty complete metric space (X, d) such that, given any points x and y, there exists a point m such that for every point z, d(z, m)² ≤ ½ d(z, x)² + ½ d(z, y)² − ¼ d(x, y)². The point m is then the midpoint of x and y: d(x, m) = d(y, m) = d(x, y)/2.
Hadamard space:
In a Hilbert space, the above inequality is an equality (with m = (x + y)/2), and in general a Hadamard space is said to be flat if the above inequality is an equality. A flat Hadamard space is isometric to a closed convex subset of a Hilbert space. In particular, a normed space is a Hadamard space if and only if it is a Hilbert space.
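The equality in the Hilbert-space case, d(z, m)² = ½ d(z, x)² + ½ d(z, y)² − ¼ d(x, y)² with m = (x + y)/2, can be checked numerically with arbitrary points in the Euclidean plane (a Python sketch):

```python
# Numeric check of the flat (Hilbert-space) case of the Hadamard
# inequality in Euclidean R^2, with m taken as the usual midpoint.
import math

def d2(p, q):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

x, y, z = (0.0, 0.0), (4.0, 2.0), (1.0, 5.0)
m = tuple((a + b) / 2 for a, b in zip(x, y))

lhs = d2(z, m)
rhs = d2(z, x) / 2 + d2(z, y) / 2 - d2(x, y) / 4
print(math.isclose(lhs, rhs))  # True
```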
Hadamard space:
The geometry of Hadamard spaces resembles that of Hilbert spaces, making it a natural setting for the study of rigidity theorems. In a Hadamard space, any two points can be joined by a unique geodesic between them; in particular, it is contractible. Quite generally, if B is a bounded subset of a metric space, then the center of the closed ball of the minimum radius containing it is called the circumcenter of B.
Hadamard space:
Every bounded subset of a Hadamard space is contained in a smallest closed ball (which is the same as the closure of its convex hull). If Γ is the group of isometries of a Hadamard space leaving a bounded subset B invariant, then Γ fixes the circumcenter of B (Bruhat–Tits fixed point theorem). The basic result for a non-positively curved manifold is the Cartan–Hadamard theorem. The analog holds for a Hadamard space: a complete, connected metric space which is locally isometric to a Hadamard space has a Hadamard space as its universal cover. A variant applies to non-positively curved orbifolds (cf. Lurie). Examples of Hadamard spaces are Hilbert spaces, the Poincaré disc, complete real trees (for example, complete Bruhat–Tits buildings), (p,q)-spaces with p, q ≥ 3 and 2pq ≥ p + q, and Hadamard manifolds, that is, complete simply connected Riemannian manifolds of nonpositive sectional curvature. Important examples of Hadamard manifolds are simply connected nonpositively curved symmetric spaces.
Hadamard space:
Applications of Hadamard spaces are not restricted to geometry. In 1998, Dmitri Burago and Serge Ferleger used CAT(0) geometry to solve a problem in dynamical billiards: in a gas of hard balls, is there a uniform bound on the number of collisions? The solution begins by constructing a configuration space for the dynamical system, obtained by joining together copies of the corresponding billiard table, which turns out to be a Hadamard space. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thiocyanic acid**
Thiocyanic acid:
Thiocyanic acid is a chemical compound with the formula HSCN and structure H−S−C≡N, which exists as a tautomer with isothiocyanic acid (H−N=C=S). The iso- form tends to dominate with the material being about 95% isothiocyanic acid in the vapor phase.
Thiocyanic acid:
It is a moderately strong acid, with a pKa of 1.1 at 20 °C, extrapolated to zero ionic strength. HSCN is predicted to have a triple bond between carbon and nitrogen. It has been observed spectroscopically but has not been isolated as a pure substance. The salts and esters of thiocyanic acid are known as thiocyanates. The salts are composed of the thiocyanate ion (SCN−) and a suitable metal cation (e.g., potassium thiocyanate, KSCN). The esters of thiocyanic acid have the general structure R–SCN.
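From the quoted pKa, the fraction of the acid dissociated at a given pH follows from the Henderson–Hasselbalch relation. This is an illustrative Python sketch assuming ideal dilute-solution behavior:

```python
# Fraction of a monoprotic acid dissociated at a given pH:
# f = 1 / (1 + 10**(pKa - pH)), using the pKa of 1.1 quoted above.
def fraction_dissociated(ph, pka=1.1):
    return 1.0 / (1.0 + 10 ** (pka - ph))

print(round(fraction_dissociated(1.1), 2))  # 0.5 (half-dissociated at pH = pKa)
print(fraction_dissociated(7.0) > 0.999)    # True (essentially fully ionized)
```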
Thiocyanic acid:
Isothiocyanic acid, HNCS, is a Lewis acid whose free energy, enthalpy and entropy changes for its 1:1 association with a variety of Lewis bases in carbon tetrachloride solution at 25 °C have been reported. HNCS acceptor properties are discussed in the ECW model. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Congruence lattice problem**
Congruence lattice problem:
In mathematics, the congruence lattice problem asks whether every algebraic distributive lattice is isomorphic to the congruence lattice of some lattice. The problem was posed by Robert P. Dilworth, and for many years it was one of the most famous and long-standing open problems in lattice theory; it had a deep impact on the development of lattice theory itself. The conjecture that every distributive algebraic lattice is isomorphic to a congruence lattice is true for all distributive algebraic lattices with at most ℵ1 compact elements, but F. Wehrung provided a counterexample for distributive lattices with ℵ2 compact elements using a construction based on Kuratowski's free set theorem.
Preliminaries:
We denote by Con A the congruence lattice of an algebra A, that is, the lattice of all congruences of A under inclusion.
The following is a universal-algebraic triviality. It says that for a congruence, being finitely generated is a lattice-theoretical property.
Lemma.
A congruence of an algebra A is finitely generated if and only if it is a compact element of Con A.
As every congruence of an algebra is the join of the finitely generated congruences below it (e.g., every submodule of a module is the union of all its finitely generated submodules), we obtain the following result, first published by Birkhoff and Frink in 1948.
Theorem (Birkhoff and Frink 1948).
The congruence lattice Con A of any algebra A is an algebraic lattice.
While congruences of lattices lose something in comparison with those of groups, modules, and rings (they cannot be identified with subsets of the universe), they also have a property unique among all the other structures encountered so far.
Theorem (Funayama and Nakayama 1942).
The congruence lattice of any lattice is distributive.
This says that α ∧ (β ∨ γ) = (α ∧ β) ∨ (α ∧ γ) for any congruences α, β, and γ of a given lattice. The analogue of this result fails, for instance, for modules, as A ∩ (B + C) ≠ (A ∩ B) + (A ∩ C), as a rule, for submodules A, B, C of a given module.
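The failure for submodules can be checked concretely with lines through the origin in ℝ², taking A = span{(1,1)}, B = span{(1,0)}, C = span{(0,1)}. Then B + C = ℝ², so A ∩ (B + C) = A, while A ∩ B = A ∩ C = {0}. A small Python sketch:

```python
# Concrete failure of distributivity for subspaces (hence submodules) of R^2.
def same_line(u, v):
    """Two nonzero direction vectors span the same line iff parallel
    (zero 2x2 determinant)."""
    return u[0] * v[1] - u[1] * v[0] == 0

def dim_intersection_of_lines(u, v):
    """dim(span{u} ∩ span{v}): 1 if the lines coincide, else 0."""
    return 1 if same_line(u, v) else 0

A, B, C = (1, 1), (1, 0), (0, 1)

# B and C are distinct lines, so B + C = R^2 and A ∩ (B + C) = A (dim 1).
dim_lhs = 1 if not same_line(B, C) else dim_intersection_of_lines(A, B)

# A ∩ B and A ∩ C are both {0}; their sum (inside the line A) has the
# larger of the two dimensions, here 0.
dim_rhs = max(dim_intersection_of_lines(A, B), dim_intersection_of_lines(A, C))

print(dim_lhs, dim_rhs)  # 1 0: the two sides of the distributive law differ
```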
Soon after this result, Dilworth proved the following result. He did not publish the result, but it appears as an exercise credited to him in Birkhoff 1948. The first published proof is in Grätzer and Schmidt 1962.
Theorem (Dilworth ≈1940, Grätzer and Schmidt 1962).
Every finite distributive lattice is isomorphic to the congruence lattice of some finite lattice.
Preliminaries:
It is important to observe that the solution lattice found in Grätzer and Schmidt's proof is sectionally complemented, that is, it has a least element (true for any finite lattice) and for all elements a ≤ b there exists an element x with a ∨ x = b and a ∧ x = 0. It is also in that paper that CLP is first stated in published form, although it seems that the earliest attempts at CLP were made by Dilworth himself. Congruence lattices of finite lattices have been given an enormous amount of attention, for which a reference is Grätzer's 2005 monograph.
Preliminaries:
The congruence lattice problem (CLP): Is every distributive algebraic lattice isomorphic to the congruence lattice of some lattice? The problem CLP has been one of the most intriguing and longest-standing open problems of lattice theory. Some related results of universal algebra are the following.
Theorem (Grätzer and Schmidt 1963).
Every algebraic lattice is isomorphic to the congruence lattice of some algebra.
The lattice Sub V of all subspaces of a vector space V is certainly an algebraic lattice. As the next result shows, these algebraic lattices are difficult to represent.
Theorem (Freese, Lampe, and Taylor 1979).
Let V be an infinite-dimensional vector space over an uncountable field F. Then Con A isomorphic to Sub V implies that A has at least card F operations, for any algebra A.
As V is infinite-dimensional, the largest element (unit) of Sub V is not compact. However innocuous it sounds, the compact unit assumption is essential in the statement of the result above, as demonstrated by the following result.
Theorem (Lampe 1982).
Every algebraic lattice with compact unit is isomorphic to the congruence lattice of some groupoid.
Semilattice formulation of CLP:
The congruence lattice Con A of an algebra A is an algebraic lattice. The (∨,0)-semilattice of compact elements of Con A is denoted by Conc A, and it is sometimes called the congruence semilattice of A. Then Con A is isomorphic to the ideal lattice of Conc A. By using the classical equivalence between the category of all (∨,0)-semilattices and the category of all algebraic lattices (with suitable definitions of morphisms), we obtain the following semilattice-theoretical formulation of CLP.
Semilattice-theoretical formulation of CLP: Is every distributive (∨,0)-semilattice isomorphic to the congruence semilattice of some lattice? Say that a distributive (∨,0)-semilattice is representable, if it is isomorphic to Conc L, for some lattice L. So CLP asks whether every distributive (∨,0)-semilattice is representable.
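For finite examples, the distributivity condition for (∨,0)-semilattices (whenever c ≤ a ∨ b, there are a′ ≤ a and b′ ≤ b with c = a′ ∨ b′) can be tested exhaustively. The following Python sketch, a hypothetical illustration not taken from the cited papers, checks it for a small Boolean semilattice and for the join-semilattice of the diamond M3:

```python
from itertools import product

def is_distributive(elems, join, leq):
    """Check the (∨,0)-semilattice distributivity condition:
    whenever c ≤ a ∨ b, there are a' ≤ a and b' ≤ b with c = a' ∨ b'."""
    for a, b, c in product(elems, repeat=3):
        if leq(c, join(a, b)):
            if not any(join(a1, b1) == c
                       for a1 in elems if leq(a1, a)
                       for b1 in elems if leq(b1, b)):
                return False
    return True

# Example 1: the powerset of {0, 1} under union -- distributive.
P = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
print(is_distributive(P, lambda x, y: x | y, lambda x, y: x <= y))  # True

# Example 2: M3 (three atoms p, q, r below a top 1) as a join-semilattice.
# Here r ≤ p ∨ q = 1, but no element below p and element below q join to r.
M3 = ["0", "p", "q", "r", "1"]
def join_m3(x, y):
    if x == y: return x
    if x == "0": return y
    if y == "0": return x
    return "1"
def leq_m3(x, y):
    return join_m3(x, y) == y
print(is_distributive(M3, join_m3, leq_m3))  # False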
Many investigations around this problem involve diagrams of semilattices or of algebras. A most useful folklore result about these is the following.
Theorem.
The functor Conc, from the category of all algebras of a given signature to the category of all (∨,0)-semilattices, preserves direct limits.
Schmidt's approach via distributive join-homomorphisms:
We say that a (∨,0)-semilattice satisfies Schmidt's Condition, if it is isomorphic to the quotient of a generalized Boolean semilattice B under some distributive join-congruence of B. One of the deepest results about representability of (∨,0)-semilattices is the following.
Theorem (Schmidt 1968).
Any (∨,0)-semilattice satisfying Schmidt's Condition is representable.
This raised the following problem, stated in the same paper.
Problem 1 (Schmidt 1968).
Does every distributive (∨,0)-semilattice satisfy Schmidt's Condition? Partial positive answers are the following.
Theorem (Schmidt 1981).
Every distributive lattice with zero satisfies Schmidt's Condition; thus it is representable.
This result has been improved further as follows, via a very long and technical proof, using forcing and Boolean-valued models.
Theorem (Wehrung 2003).
Every direct limit of a countable sequence of distributive lattices with zero and (∨,0)-homomorphisms is representable.
Other important representability results are related to the cardinality of the semilattice. The following result was prepared for publication by Dobbertin after Huhn's death in 1985. The two corresponding papers were published in 1989.
Theorem (Huhn 1985).
Every distributive (∨,0)-semilattice of cardinality at most ℵ1 satisfies Schmidt's Condition. Thus it is representable.
By using different methods, Dobbertin obtained the following result.
Theorem (Dobbertin 1986).
Every distributive (∨,0)-semilattice in which every principal ideal is at most countable is representable.
Problem 2 (Dobbertin 1983). Is every conical refinement monoid measurable?
Pudlák's approach; lifting diagrams of (∨,0)-semilattices:
The approach to CLP suggested by Pudlák in his 1985 paper is different. It is based on the following result, Fact 4, p. 100 in Pudlák's 1985 paper, obtained earlier by Yuri L. Ershov as the main theorem in Section 3 of the Introduction of his 1977 monograph.
Theorem (Ershov 1977, Pudlák 1985).
Every distributive (∨,0)-semilattice is the directed union of its finite distributive (∨,0)-subsemilattices.
This means that every finite subset of a distributive (∨,0)-semilattice S is contained in some finite distributive (∨,0)-subsemilattice of S. Now suppose that we are trying to represent a given distributive (∨,0)-semilattice S as Conc L, for some lattice L. Writing S as a directed union S = ⋃(Si | i ∈ I) of finite distributive (∨,0)-subsemilattices, we hope to represent each Si as the congruence semilattice of a lattice Li, with lattice homomorphisms fij : Li → Lj, for i ≤ j in I, such that the diagram S of all Si with all inclusion maps Si → Sj, for i ≤ j in I, is naturally equivalent to the diagram of all Conc Li with all maps Conc fij, for i ≤ j in I; we then say that the diagram (Li, fij | i ≤ j in I) lifts S (with respect to the Conc functor). If this can be done, then, as we have seen that the Conc functor preserves direct limits, the direct limit L = lim→i∈I Li satisfies Conc L ≅ S. While the problem whether this could be done in general remained open for about 20 years, Pudlák proved it for distributive lattices with zero, thus extending one of Schmidt's results by providing a functorial solution.
Theorem (Pudlák 1985).
There exists a direct limits preserving functor Φ, from the category of all distributive lattices with zero and 0-lattice embeddings to the category of all lattices with zero and 0-lattice embeddings, such that ConcΦ is naturally equivalent to the identity. Furthermore, Φ(S) is a finite atomistic lattice, for any finite distributive (∨,0)-semilattice S.
This result was improved further, by a far more complex construction, to locally finite, sectionally complemented modular lattices by Růžička in 2004 and 2006.
Pudlák asked in 1985 whether his result above could be extended to the whole category of distributive (∨,0)-semilattices with (∨,0)-embeddings. The problem remained open until it was solved in the negative by Tůma and Wehrung.
Theorem (Tůma and Wehrung 2006).
There exists a diagram D of finite Boolean (∨,0)-semilattices and (∨,0,1)-embeddings, indexed by a finite partially ordered set, that cannot be lifted, with respect to the Conc functor, by any diagram of lattices and lattice homomorphisms.
In particular, this implies immediately that CLP has no functorial solution.
Furthermore, it follows from deep 1998 results in universal algebra by Kearnes and Szendrei on the commutator theory of varieties that the result above can be extended from the variety of all lattices to any variety V such that all Con A, for A ∈ V, satisfy a fixed nontrivial identity in the signature (∨,∧) (in short, any variety with a nontrivial congruence identity).
Many attempts at CLP were also based on the following result, first proved by Bulman-Fleming and McDowell in 1978 by using a categorical 1974 result of Shannon; see also Goodearl and Wehrung in 2001 for a direct argument.
Theorem (Bulman-Fleming and McDowell 1978).
Every distributive (∨,0)-semilattice is a direct limit of finite Boolean (∨,0)-semilattices and (∨,0)-homomorphisms.
It should be observed that while the transition homomorphisms used in the Ershov-Pudlák Theorem are (∨,0)-embeddings, the transition homomorphisms used in the result above are not necessarily one-to-one, for example when one tries to represent the three-element chain. In practice this does not cause much trouble, and it makes it possible to prove the following results.
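The three-element chain illustrates why one-to-one transition maps cannot always be arranged: no Boolean (∨,0)-subsemilattice of the chain contains its middle and top elements together, so the chain is not a directed union of Boolean (∨,0)-subsemilattices. A short brute-force check (illustrative only, not from the source) confirms this:

```python
from itertools import combinations

chain = [0, 1, 2]          # the three-element chain 0 < 1 < 2; join = max
join = max

def subsemilattices(elems):
    """All (∨,0)-subsemilattices: subsets containing 0 and closed under join."""
    subs = set()
    for r in range(len(elems) + 1):
        for s in combinations(elems, r):
            sub = frozenset(s) | {0}
            if all(join(x, y) in sub for x in sub for y in sub):
                subs.add(sub)
    return subs

def is_boolean(sub):
    """On a chain, meet is min; Boolean here means every element has a
    complement: x ∨ y = top and x ∧ y = 0."""
    top = max(sub)
    return all(any(max(x, y) == top and min(x, y) == 0 for y in sub)
               for x in sub)

boolean_subs = [sorted(s) for s in subsemilattices(chain) if is_boolean(s)]
print(sorted(boolean_subs))  # [[0], [0, 1], [0, 2]]
# No Boolean subsemilattice contains both the middle element 1 and the top 2.
```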
Theorem.
Every distributive (∨,0)-semilattice of cardinality at most ℵ1 is isomorphic to
(1) Conc L, for some locally finite, relatively complemented modular lattice L (Tůma 1998; Grätzer, Lakser, and Wehrung 2000).
(2) The semilattice of finitely generated two-sided ideals of some (not necessarily unital) von Neumann regular ring (Wehrung 2000).
(3) Conc L, for some sectionally complemented modular lattice L (Wehrung 2000).
(4) The semilattice of finitely generated normal subgroups of some locally finite group (Růžička, Tůma, and Wehrung 2007).
(5) The semilattice of finitely generated submodules of some right module over a (non-commutative) ring (Růžička, Tůma, and Wehrung 2007).
Congruence lattices of lattices and nonstable K-theory of von Neumann regular rings:
We recall that for a (unital, associative) ring R, we denote by V(R) the (conical, commutative) monoid of isomorphism classes of finitely generated projective right R-modules. Recall that if R is von Neumann regular, then V(R) is a refinement monoid. Denote by Idc R the (∨,0)-semilattice of finitely generated two-sided ideals of R. We denote by L(R) the lattice of all principal right ideals of a von Neumann regular ring R. It is well known that L(R) is a complemented modular lattice.
The following result was observed by Wehrung, building on earlier works mainly by Jónsson and Goodearl.
Theorem (Wehrung 1999).
Let R be a von Neumann regular ring. Then the (∨,0)-semilattices Idc R and Conc L(R) are both isomorphic to the maximal semilattice quotient of V(R).
Bergman proves in a well-known unpublished note from 1986 that any at most countable distributive (∨,0)-semilattice is isomorphic to Idc R, for some locally matricial ring R (over any given field). This result was extended in 2000 by Wehrung to semilattices of cardinality at most ℵ1, retaining only the regularity of R (the ring constructed by the proof is not locally matricial). The question whether R could be taken locally matricial in the ℵ1 case remained open for a while, until it was disproved by Wehrung in 2004. Translating back to the lattice world by using the theorem above, together with a lattice-theoretical analogue of the V(R) construction, called the dimension monoid, introduced by Wehrung in 1998, yields the following result.
Theorem (Wehrung 2004).
There exists a distributive (∨,0,1)-semilattice of cardinality ℵ1 that is not isomorphic to Conc L, for any modular lattice L every finitely generated sublattice of which has finite length.
Problem 3 (Goodearl 1991). Is the positive cone of any dimension group with order-unit isomorphic to V(R), for some von Neumann regular ring R?
A first application of Kuratowski's free set theorem:
The abovementioned Problem 1 (Schmidt), Problem 2 (Dobbertin), and Problem 3 (Goodearl) were solved simultaneously in the negative in 1998.
Theorem (Wehrung 1998).
There exists a dimension vector space G over the rationals with order-unit whose positive cone G+ is not isomorphic to V(R), for any von Neumann regular ring R, and is not measurable in Dobbertin's sense. Furthermore, the maximal semilattice quotient of G+ does not satisfy Schmidt's Condition. Furthermore, G can be taken of any given cardinality greater than or equal to ℵ2.
It follows from the previously mentioned works of Schmidt, Huhn, Dobbertin, Goodearl, and Handelman that the ℵ2 bound is optimal in all three negative results above.
As the ℵ2 bound suggests, infinite combinatorics are involved. The principle used is Kuratowski's free set theorem, first published in 1951. Only the case n=2 is used here.
The semilattice part of the result above is achieved via an infinitary semilattice-theoretical statement URP (Uniform Refinement Property). To disprove Schmidt's problem, the idea is (1) to prove that any generalized Boolean semilattice satisfies URP (which is easy), (2) that URP is preserved under homomorphic images by weakly distributive homomorphisms (which is also easy), and (3) that there exists a distributive (∨,0)-semilattice of cardinality ℵ2 that does not satisfy URP (which is difficult, and uses Kuratowski's free set theorem).
Schematically, the construction in the theorem above can be described as follows. For a set Ω, we consider the partially ordered vector space E(Ω) defined by generators 1 and ai,x, for i<2 and x in Ω, and relations a0,x+a1,x=1, a0,x ≥ 0, and a1,x ≥ 0, for any x in Ω. By using a Skolemization of the theory of dimension groups, we can embed E(Ω) functorially into a dimension vector space F(Ω). The vector space counterexample of the theorem above is G=F(Ω), for any set Ω with at least ℵ2 elements.
This counterexample was modified subsequently by Ploščica and Tůma to a direct semilattice construction. For a (∨,0)-semilattice S, the larger semilattice R(S) is the (∨,0)-semilattice freely generated by S together with new elements t(a,b,c), for a, b, c in S such that c ≤ a ∨ b, subject only to the relations c = t(a,b,c) ∨ t(b,a,c) and t(a,b,c) ≤ a. Iterating this construction gives the free distributive extension D(S) = ⋃(Rn(S) | n < ω) of S. Now, for a set Ω, let L(Ω) be the (∨,0)-semilattice defined by generators 1 and ai,x, for i < 2 and x in Ω, and relations a0,x ∨ a1,x = 1, for any x in Ω. Finally, put G(Ω) = D(L(Ω)).
In most related works, the following uniform refinement property is used. It is a modification of the one introduced by Wehrung in 1998 and 1999.
Definition (Ploščica, Tůma, and Wehrung 1998).
Let e be an element in a (∨,0)-semilattice S. We say that the weak uniform refinement property WURP holds at e, if for all families (ai)i∈I and (bi)i∈I of elements in S such that ai ∨ bi = e for all i in I, there exists a family (ci,j | (i,j) ∈ I×I) of elements of S such that the relations
• ci,j ≤ ai and ci,j ≤ bj,
• ci,j ∨ aj ∨ bi = e,
• ci,k ≤ ci,j ∨ cj,k
hold for all i, j, k in I. We say that S satisfies WURP, if WURP holds at every element of S.
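For small finite semilattices, WURP can be verified exhaustively. The sketch below (an illustration under finite assumptions, not part of the cited papers) checks WURP at the top element of the Boolean semilattice of subsets of a 2-element set, for all families indexed by a 2-element set I:

```python
from itertools import product

# S: the Boolean semilattice of subsets of {0, 1}; join = union.
S = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
e = frozenset({0, 1})

def leq(x, y):
    return x <= y

def wurp_holds_at(e, a, b):
    """Search for a family c[i][j] witnessing WURP at e for families a, b."""
    n = len(a)
    I = range(n)
    for flat in product(S, repeat=n * n):
        c = [[flat[i * n + j] for j in I] for i in I]
        if all(leq(c[i][j], a[i]) and leq(c[i][j], b[j]) for i in I for j in I) \
           and all(c[i][j] | a[j] | b[i] == e for i in I for j in I) \
           and all(leq(c[i][k], c[i][j] | c[j][k])
                   for i in I for j in I for k in I):
            return True
    return False

# Try every pair of families (a_i), (b_i) with a_i ∨ b_i = e, for |I| = 2.
pairs = [(x, y) for x in S for y in S if x | y == e]
ok = all(wurp_holds_at(e, [a0, a1], [b0, b1])
         for (a0, b0) in pairs for (a1, b1) in pairs)
print(ok)  # True: WURP holds at the top of this Boolean semilattice
```

In a Boolean semilattice the witnesses can in fact be chosen as ci,j = ai ∧ bj, which the brute-force search is guaranteed to rediscover.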
By building on Wehrung's abovementioned work on dimension vector spaces, Ploščica and Tůma proved that WURP does not hold in G(Ω), for any set Ω of cardinality at least ℵ2. Hence G(Ω) does not satisfy Schmidt's Condition. All negative representation results mentioned here make use of some uniform refinement property, including the first one about dimension vector spaces.
However, the semilattices used in these negative results are relatively complicated. The following result, proved by Ploščica, Tůma, and Wehrung in 1998, is more striking, because it shows examples of representable semilattices that do not satisfy Schmidt's Condition. We denote by FV(Ω) the free lattice on Ω in V, for any variety V of lattices.
Theorem (Ploščica, Tůma, and Wehrung 1998).
The semilattice Conc FV(Ω) does not satisfy WURP, for any set Ω of cardinality at least ℵ2 and any non-distributive variety V of lattices. Consequently, Conc FV(Ω) does not satisfy Schmidt's Condition.
It is proved by Tůma and Wehrung in 2001 that Conc FV(Ω) is not isomorphic to Conc L, for any lattice L with permutable congruences. By using a slight weakening of WURP, this result is extended to arbitrary algebras with permutable congruences by Růžička, Tůma, and Wehrung in 2007. Hence, for example, if Ω has at least ℵ2 elements, then Conc FV(Ω) is not isomorphic to the normal subgroup lattice of any group, or the submodule lattice of any module.
Solving CLP: the Erosion Lemma:
The following recent theorem solves CLP.
Theorem (Wehrung 2007).
The semilattice G(Ω) is not isomorphic to Conc L for any lattice L, whenever the set Ω has at least ℵω+1 elements.
Hence the counterexample to CLP had been known for nearly ten years; it is just that nobody knew why it worked! All the results prior to the theorem above made use of some form of permutability of congruences. The difficulty was to find enough structure in congruence lattices of non-congruence-permutable lattices.
We shall denote by ε the `parity function' on the natural numbers, that is, ε(n)=n mod 2, for any natural number n.
We let L be an algebra possessing a structure of semilattice (L,∨) such that every congruence of L is also a congruence for the operation ∨. We put U ∨ V = {u ∨ v | (u,v) ∈ U × V}, for all U, V ⊆ L, and we denote by Conc^U L the (∨,0)-subsemilattice of Conc L generated by all principal congruences Θ(u,v) ( = least congruence of L that identifies u and v), where (u,v) belongs to U × U. We put Θ+(u,v) = Θ(u ∨ v, v), for all u, v in L.
The Erosion Lemma (Wehrung 2007).
Let x0, x1 in L and let Z = {z0, z1, …, zn}, for a positive integer n, be a finite subset of L with ⋁i<n zi ≤ zn. Put αj = ⋁(Θ(zi, zi+1) | i < n, ε(i) = j), for all j < 2.
Then there are congruences θj ∈ Conc^({xj} ∨ Z) L, for j < 2, such that z0 ∨ x0 ∨ x1 ≡ zn ∨ x0 ∨ x1 (mod θ0 ∨ θ1) and θj ⊆ αj ∩ Θ+(zn, xj), for all j < 2.
(Observe the faint formal similarity with first-order resolution in mathematical logic. Could this analogy be pushed further?) The proof of the theorem above runs by setting a structure theorem for congruence lattices of semilattices, namely the Erosion Lemma, against non-structure theorems for free distributive extensions G(Ω), the main one being called the Evaporation Lemma. While the latter are technically difficult, they are, in some sense, predictable. Quite to the contrary, the proof of the Erosion Lemma is elementary and easy, so it is probably the strangeness of its statement that explains why it remained hidden for so long.
More is, in fact, proved in the theorem above: For any algebra L with a congruence-compatible structure of join-semilattice with unit and for any set Ω with at least ℵω+1 elements, there is no weakly distributive homomorphism μ: Conc L → G(Ω) containing 1 in its range. In particular, CLP was, after all, not a problem of lattice theory, but rather of universal algebra—even more specifically, semilattice theory! These results can also be translated in terms of a uniform refinement property, denoted by CLR in Wehrung's paper presenting the solution of CLP, which is noticeably more complicated than WURP.
Finally, the cardinality bound ℵω+1 has been improved to the optimal bound ℵ2 by Růžička.
Theorem (Růžička 2008).
The semilattice G(Ω) is not isomorphic to Conc L for any lattice L, whenever the set Ω has at least ℵ2 elements.
Růžička's proof follows the main lines of Wehrung's proof, except that it introduces an enhancement of Kuratowski's Free Set Theorem, called there existence of free trees, which it uses in the final argument involving the Erosion Lemma.
A positive representation result for distributive semilattices:
The proof of the negative solution of CLP shows that the problem of representing distributive semilattices by compact congruences of lattices already appears for congruence lattices of semilattices. The question whether the structure of partially ordered set would cause similar problems is answered by the following result.
Theorem (Wehrung 2008).
For any distributive (∨,0)-semilattice S, there are a (∧,0)-semilattice P and a map μ : P × P → S such that the following conditions hold:
(1) x ≤ y implies that μ(x,y)=0, for all x, y in P.
(2) μ(x,z) ≤ μ(x,y) ∨ μ(y,z), for all x, y, z in P.
(3) For all x ≥ y in P and all α, β in S such that μ(x,y) ≤ α ∨ β, there are a positive integer n and elements x=z0 ≥ z1 ≥ ... ≥ z2n=y such that μ(zi,zi+1) ≤ α (resp., μ(zi,zi+1) ≤ β) whenever i < 2n is even (resp., odd).
(4) S is generated, as a join-semilattice, by all the elements of the form μ(x,0), for x in P.
Furthermore, if S has a largest element, then P can be assumed to be a lattice with a largest element.
It is not hard to verify that conditions (1)–(4) above imply the distributivity of S, so the result above gives a characterization of distributivity for (∨,0)-semilattices. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vitamalt**
Vitamalt:
Vitamalt is a brand of non-alcoholic malt beverages manufactured in Denmark, where the brand originated; its taste might be described as that of sweet, unfermented beer. High in nutrients and vitamins, Vitamalt is a drink designed as an energy supplement. It is available in about 70 countries, but it is most widely known in the West Indies, where over time it has attained the status of a cultural symbol. The manufacturers have sponsored sporting events and clubs throughout the Caribbean. Cycling, running, basketball, football and amateur sports are activities that Vitamalt is usually associated with.
Because of its high nutritional value and its vitamin, mineral, protein and antioxidant content, the malt drink is sometimes consumed as an alternative to sports drinks or energy drinks. Because it contains no alcohol, it is halal.
The Vitamalt product range includes Vitamalt Plus, which contains acai, guarana and aloe vera, Vitamalt Ginger and Vitamalt Light, which has lower nutritional value.
The official tagline is "Vitamalt takes care of you". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Diffuser-augmented wind turbine**
Diffuser-augmented wind turbine:
A diffuser-augmented wind turbine (DAWT) is a wind turbine modified with a cone-shaped wind diffuser that is used to increase the efficiency of converting wind power to electrical power. The increased efficiency is possible due to the increased wind speeds the diffuser can provide. In traditional bare turbines, the rotor blades are vertically mounted at the top of a support tower or shaft. In a DAWT, the rotor blades are mounted within the diffuser, which is then placed on the top of the support tower. Additional modifications can be made to the diffuser in order to further increase efficiency.
Mechanics:
Wind power measures how much energy is available in the wind, and it can be represented by the equation P = (1/2)ρAv³, where ρ is air density, A is the rotor swept area, and v is wind velocity. This means that the amount of energy available in the wind is directly proportional to the cube of the wind speed. For example, assuming that all other variables are held constant, doubling the wind speed increases the available power by a factor of 8. A slight increase in wind speed results in dramatic increases in wind power. Conversely, if the wind speed slows even slightly, the available wind power drops drastically.
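The cube law is easy to check numerically. The following sketch (illustrative values only; the 5 m rotor radius and sea-level air density are assumptions, not from the source) computes the available power at two wind speeds:

```python
def wind_power(rho, area, v):
    """Power available in the wind (W): P = 0.5 * rho * A * v**3."""
    return 0.5 * rho * area * v ** 3

rho = 1.225              # air density at sea level, kg/m^3
area = 3.14159 * 5 ** 2  # swept area of a hypothetical 5 m radius rotor, m^2

p1 = wind_power(rho, area, 10.0)
p2 = wind_power(rho, area, 20.0)
print(p2 / p1)  # 8.0 -- doubling the wind speed gives 8x the power
```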
Designs:
Most designs include a cone-shaped diffuser with the purpose of increasing the velocity of the air as it travels through the turbine. In order for this to be possible, the exit hole of the diffuser must be larger than the entrance hole to properly diffuse the air. As wind flows through the diffuser, it travels along the walls, which causes the exiting wind to form vortices of wind when exiting. These vortices cause most of the air to be diffused away from the center of the exit, which creates a low pressure segment of air behind the turbine. The pressure difference accelerates the high pressure air in the front towards the low pressure air in the back, causing a significant increase in speed. If the diffuser were to instead have an exit hole smaller than its entrance, then the opposite effects would be achieved. A high-pressure area would be formed at the exit, severely restricting airflow through the diffuser. Additional designs take the basic diffuser and make additional modifications to further increase power generation.
Wind lens
A design by Yuji Ohya, a professor at Kyushu University, further modifies the diffuser by adding a broad ring around the exit hole and an inlet shroud at the entrance, forming a "wind lens". This design amplifies the positive effects of a normal diffuser shroud, resulting in a more efficient diffuser. The brimmed exit hole creates stronger vortices than a regular diffuser, so the pressure difference is greater than it would be with a normal diffuser. As a result, wind is able to reach higher speeds. In addition, the inlet shroud at the entrance makes it easier for air to enter, so incoming air is not slowed down as much.
Multi-rotor design
Other designs are very similar to a diffuser but contain multiple rotors within it to capture more electrical energy from the wind. One way to generate more energy is to increase the rotor area, which can be done in two ways. One is to increase the diameter of a single rotor; however, this causes unfavorable gains in mass. Another is to increase the number of rotors per turbine, which does not cause undesirable increases in weight. Systems with up to 45 rotors in one turbine have been tested, and no negative interference has been found between the rotors.
Results:
Turbines equipped with a diffuser-shaped shroud and a broad exit ring generate 2–5 times more power than bare wind turbines for any given wind speed or turbine diameter. Further analysis concludes that Betz's limit can be exceeded if the wind turbine is equipped with a diffuser. For multi-rotor turbines equipped with a diffuser, the power augmentation is smaller, but still favorable at around a 5%–9% increase.
Limitations of traditional turbines:
Bare wind turbines have several limitations that decrease their efficiency in generating electricity. These limitations become significant in large-scale energy production.
Manufacturing
The amount of energy that a bare wind turbine can generate depends largely on how big the rotor is, which implies that the bigger a turbine is, the more energy it will produce. However, using large turbines results in heavy overall weights and high manufacturing costs. Heavier turbines are also prone to higher malfunction rates, which results in higher maintenance costs. In addition, the bigger the turbine, the more resources must be invested in transporting the massive parts from the factory to the deployment site. Such costs are rarely viable, since they undermine the goal of affordable alternative energy.
Betz's law
In addition to manufacturing limitations, there are limits within the laws of physics that govern how much energy can be generated. Traditional open turbine designs are limited by Betz's law, which states that for a bare turbine in open wind, no more than 16/27 (about 59.3%) of the total wind kinetic energy can be converted to electrical energy. Several designs have been made to get around this limitation, including the addition of a "wind lens" or the use of multiple rotors within the diffuser.
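The Betz coefficient is a fixed constant, 16/27. The small sketch below (illustrative only, not from the source) combines it with the wind power formula to bound the extractable power of a bare turbine:

```python
BETZ_LIMIT = 16 / 27   # maximum fraction of wind kinetic energy extractable
                       # by a bare turbine in open wind, per Betz's law

def max_extractable_power(rho, area, v):
    """Upper bound on electrical power (W) for a bare turbine in open wind."""
    return BETZ_LIMIT * 0.5 * rho * area * v ** 3

print(round(BETZ_LIMIT * 100, 1))  # 59.3 (percent)
```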
**Protofection**
Protofection:
Protofection is a protein-mediated transfection of foreign mitochondrial DNA (mtDNA) into the mitochondria of cells in a tissue to supplement or replace the native mitochondrial DNA already present. The complete mtDNA genome, or just fragments of mtDNA generated by polymerase chain reaction, can be transferred into the target mitochondria through this technique. Scientists have hypothesized for the last couple of decades that protofection could benefit patients with mitochondrial diseases. The technique is a recent development and is continuously being improved. As mitochondrial DNA becomes progressively more damaged with age, protofection may provide a method of at least partially rejuvenating mitochondria in old tissue, restoring them to their original, youthful function.
Method:
Protofection is a developing technique and is continuously being improved. A specific protein transduction system has been created that is complexed with mtDNA, which enables the mtDNA to move across the targeted cell's membrane and specifically target mitochondria. The transduction system consists of a protein transduction domain, mitochondrial localization sequences, and mitochondrial transcription factor A. Each of these plays a specific role in protofection: A protein transduction domain is needed because such domains are small regions of proteins that can cross the cell membrane independently.
A specific mitochondrial localization sequence is used for protofection because it permits mtDNA to enter the mitochondria.
Mitochondrial transcription factor A is used because it unwinds the mtDNA that enters the mitochondria, which is critical for mtDNA replication. This process can lead to an increase in the amount of mtDNA present in the mitochondria of the target cells. The transduction system has been tweaked and modified since the first use of protofection. To shorten the name of the complex, which was previously called the PTD-MLS-TFAM complex, it is now named MTD-TFAM. MTD stands for mitochondrial transduction domain, and it includes the protein transduction domain and the mitochondrial localization sequences.
Possible therapeutic uses:
One hypothesis for mitochondrial diseases is that mitochondrial damage and dysfunction play an important role in aging. Protofection is being researched as a possibly viable laboratory technique for constructing gene therapies for inherited mitochondrial diseases, such as Leber's hereditary optic neuropathy. Studies have shown that protofection can lead to improved mitochondrial function in targeted cells. Protofection could also be applied to modified or artificial mitochondria; mitochondria could be modified to produce few or no free radicals without compromising energy production. Recent studies have demonstrated that mitochondrial transplants may be useful for rejuvenating dead or dying tissue, such as in heart attacks, in which the mitochondria are among the first parts of the cell to die.
**Martingale (tack)**
Martingale (tack):
A martingale is any of several designs of tack that are used on horses to control head carriage. Martingales may be seen in a wide variety of equestrian disciplines, both riding and driving. Rules for their use vary widely; in some disciplines they are never used, others allow them for schooling but not in judged performance, and some organizations allow certain designs in competition. The two most common types of martingale, the standing and the running, are used to control the horse's head height, and to prevent the horse from throwing its head so high that the rider gets hit in the face by the horse's poll or upper neck. When a horse's head gets above a desired height, the martingale places pressure on the head so that it becomes more difficult or impossible to raise it higher.
The standing martingale:
The standing martingale, also known as a "tiedown" or a "head check", has a single strap which is attached to the girth, passes between the horse's front legs and is fixed to the back of the noseband. To prevent it from catching on other objects, it also has a neck strap. A variation is attached to a breastplate in lieu of a neck strap. When correctly fitted for English riding, it should be possible to push the martingale strap up to touch the horse's throatlatch.
A variation of the standing martingale, called a tiedown, is seen almost exclusively in the western riding disciplines. A tiedown is adjusted much shorter than a standing martingale and is intended primarily to prevent the horse from flipping its head up when asked to abruptly stop or turn in speed events. Users also claim that it gives the horse something to brace against for balance. It consists of an adjustable strap, one end which attaches to the horse's breastplate and the other which attaches to a noseband on the bridle. The noseband can be of leather, but may also be of lariat rope, or even plastic-covered cable, which can make the western tiedown considerably harsher than the English-style standing martingale. It is properly adjusted when it puts no pressure on the horse's nose when held at a normal position, but will immediately act if the horse raises its nose more than a few inches.
With both pieces of equipment, the slack is taken up out of the strap when the horse raises its head above the desired point, and pressure is placed on the horse's nose. The standing martingale is competition legal for show hunter and hunt seat equitation riders over fences in the US, show jumping competitions in the UK, and is permissible and in common use in fox hunting, polocrosse, horseball, and polo. It is also seen on some military and police horses, partly for style and tradition, but also in the event of an emergency that may require the rider to handle the horse in an abrupt manner. It is not legal for flat classes. The tiedown is commonly seen in rodeo and speed events such as gymkhana games, but is not show legal in any other western-style horse show competition.
Safety and risks:
The standing martingale is more restrictive than the running martingale because it cannot be loosened in an emergency. A horse that trips in a standing martingale could potentially fall more easily because its range of motion is restricted. If a horse falls wearing an incorrectly fitted standing martingale, the animal cannot extend its neck fully and will have a more difficult time getting back up.
Due to the risk of injury to the cartilage of the nose, the martingale strap is never attached to a drop noseband. Because of the risk of both nose and jaw injuries, it also should not be attached to any type of "figure 8" or "grackle" noseband. A standing martingale can be attached to the cavesson (the upper, heavier strap) of a flash noseband, but not to the lower, "flash" or "drop" strap. Any martingale may cause pain to the horse if misused in combination with certain other equipment. If used in conjunction with a gag bit, a standing martingale can trap the head of the horse, simultaneously asking the horse to raise and lower its head and providing no source of relief in either direction. This combination is sometimes seen in polo, in some rodeo events, and occasionally in the lower levels of jumping. Overuse or misuse of a martingale or tiedown, particularly as a means to prevent a horse from head-tossing, can lead to the overdevelopment of the muscles on the underside of the neck, creating an undesirable "upside down" neck that makes it more difficult for the horse to work properly under saddle. It may also lead to the horse tensing the back muscles and moving incorrectly, especially over fences. This may put excessive pressure on the horse's spine, reduce the shock-absorbing capacity of the leg anatomy, and can over time lead to lameness. There is also a risk of accidents: if a horse is sufficiently "trapped" by a combination of a too-short martingale and too-harsh bit, the horse may attempt to rear and, inhibited by the action of the martingale, fall, potentially injuring both horse and rider.
The running martingale and German martingale:
The running martingale consists of a strap which is attached to the girth and passes between the horse's front legs before dividing into two pieces. At the end of each of these straps is a small metal ring through which the reins pass. It is held in the correct position by a neck strap or breastplate.
A running martingale is adjusted so that each of the "forks" has about an inch of slack when the horse holds its head in the normal position. When correctly adjusted, the reins make a straight line from the rider's hand to the bit ring when the horse's head is at the correct height, and the running martingale is not in effect.
When the horse raises its head above the desired point, the running martingale adds leverage through the reins to the bit on the bars of the horse's mouth. The leverage created by this pressure encourages the horse to lower its head. A running martingale provides more freedom for the horse than a standing martingale, as the rider can release pressure as soon as the desired result is achieved. Additionally, if a horse happens to trip on landing after a fence, the rider can loosen the reins and the horse will have full use of its head and neck. Because of this safety factor, the running martingale is the only style of martingale permitted for use in eventing competitions and horse racing. Some show jumpers also prefer the running martingale due to the extra freedom it provides. Running martingales are also used outside of the competition arena on young horses being trained in the Saddle seat, western riding, and many other disciplines. The German martingale, also called a Market Harborough, consists of a split fork that comes up from the chest, runs through the rings of the bit and attaches to rings on the reins of the bridle between the bit and the rider's hand. It acts in a manner similar to a running martingale, but with additional leverage. It is not show legal and is used primarily as a training aid.
Safety and risks:
A running martingale is generally used with rein stops, which are rubber or leather stops slipped onto the rein between the bit and the ring of the martingale. Rein stops are compulsory at Pony Club and British Eventing events. They are an important safety feature that stops the martingale from sliding too far forward and getting caught on the bit ring or on the buckles or studs that attach the reins to the bit. Sanctioning organizations require a running martingale to be used in conjunction with rein stops if the reins are buckled to the bit. The primary difficulty in use of a running martingale is the inability to raise the horse's head in the event of the animal bucking. If adjusted too short, lateral use of the reins may be impeded. If used improperly, the force exerted by the running martingale on the horse's mouth can be severe, and for this reason the standing martingale is preferred in some circles. Improper use includes use on the reins of a curb bit, or adjustment so short that the equipment pulls the horse's head below the proper position.
The Irish martingale:
The Irish martingale is not a true martingale in the sense of a device that affects the rider's control over the horse. Thus, it is sometimes known as a semi-martingale. It is a simple short strap with a ring on either end. The reins are each run through a ring on either side before being buckled. The Irish martingale's purpose is not to control the head, but to prevent the reins from coming over the horse's head, risking entanglement, should a rider fall. It is used mostly in European horse racing.
**Tool and cutter grinder**
Tool and cutter grinder:
A tool and cutter grinder is used to sharpen milling cutters and tool bits, along with a host of other cutting tools.
It is an extremely versatile machine used to perform a variety of grinding operations: surface, cylindrical, or complex shapes. The image shows a manually operated setup; however, highly automated Computer Numerical Control (CNC) machines are becoming increasingly common due to the complexities involved in the process.
The operation of this machine (in particular, the manually operated variety) requires a high level of skill. The two main skills needed are understanding of the relationship between the grinding wheel and the metal being cut and knowledge of tool geometry. The illustrated set-up is only one of many combinations available. The huge variety in shapes and types of machining cutters requires flexibility in usage. A variety of dedicated fixtures are included that allow cylindrical grinding operations or complex angles to be ground. The vise shown can swivel in three planes.
The table moves longitudinally and laterally, the head can swivel as well as being adjustable in the horizontal plane, as visible in the first image. This flexibility in the head allows the critical clearance angles required by the various cutters to be achieved.
CNC tool and cutter grinder:
Today's tool and cutter grinder is typically a CNC machine tool, usually with 5 axes, which produces end mills, drills, step tools, etc. that are widely used in the metal cutting and woodworking industries.
Modern CNC tool and cutter grinders enhance productivity by typically offering features such as automatic tool loading as well as the ability to support multiple grinding wheels. High levels of automation, as well as automatic in-machine tool measurement and compensation, allow extended periods of unmanned production. With careful process configuration and appropriate tool support, tolerances less than 5 micrometres (0.0002") can be consistently achieved even on the most complex parts.
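As a quick sanity check on the quoted tolerance, the metric figure can be converted to inches in a couple of lines (the 25.4 mm-per-inch factor is exact by definition; this is only an arithmetic cross-check, not part of any machine specification):

```python
# Convert the quoted 5-micrometre tolerance to inches.
METRES_PER_INCH = 0.0254   # exact by international definition

tolerance_m = 5e-6         # 5 micrometres, from the text
tolerance_inch = tolerance_m / METRES_PER_INCH

# ~0.000197", which the text rounds to 0.0002"
print(round(tolerance_inch, 4))
```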
Apart from manufacturing, in-machine tool measurement using touch-probe or laser technology allows cutting tools to be reconditioned. During normal use, cutting edges wear and/or chip. The geometric features of cutting tools can be automatically measured within the CNC tool grinder and the tool ground to return cutting surfaces to optimal condition. Significant software advancements have allowed CNC tool and cutter grinders to be utilized in a wide range of industries. Advanced CNC grinders feature sophisticated software that allows geometrically complex parts to be designed either parametrically or by using third party CAD/CAM software. 3D simulation of the entire grinding process and the finished part is possible, as well as detection of any potential mechanical collisions and calculation of production time. Such features allow parts to be designed and verified, as well as the production process optimized, entirely within the software environment. Tool and cutter grinders can also be adapted to manufacturing precision machine components. When used for these purposes, the machine would more likely be called a CNC grinding system.
CNC Grinding Systems are widely used to produce parts for aerospace, medical, automotive, and other industries. Extremely hard and exotic materials are generally no problem for today's grinding systems and the multi-axis machines are capable of generating quite complex geometries.
Radius grinder:
A radius grinder (or radius tool grinder) is a special grinder used for grinding the most complex tool forms, and is the historical predecessor to the CNC tool and cutter grinder. Like the CNC grinder, it may be used for other tasks where grinding spherical surfaces is necessary. The tool itself consists of three parts: the grinder head, work table, and holding fixture. The grinder head has three degrees of freedom: vertical movement, movement into the workpiece, and tilt. These are generally set statically and left fixed throughout operations. The work table is a T-slotted X-axis table mounted on top of a radial fixture. Mounting the X axis on top of the radius table, as opposed to the other way around, allows for complex and accurate radius grinds. The holding fixtures can be anything one can mount on a slotted table, but most commonly used is a collet or chuck fixture that indexes and has a separate Y movement to allow accurate depth setting and endmill sharpening. The dressers used on these grinders are usually quite expensive, and can dress the grinding wheel itself with a particular radius.
D-bit grinder:
The D-bit (after Deckel, the brand of the original manufacturer) grinder is a tool bit grinder designed to produce single-lip cutters for pantograph milling machines. Pantographs are a variety of milling machine used to create cavities for the dies used in the molding process; they are largely obsolete and replaced by CNC machining centers in modern industry.
With the addition of accessory holders, the single-lip grinding capability may also be applied to grinding lathe cutting bits, and simple faceted profiles on tips of drill bits or end mills. The machine is sometimes advertised as a "universal cutter-grinder", but the "universal" term refers only to the range of compound angles available, not that the machine is capable of sharpening the universe of tools. The machine is not capable of sharpening drill bits in the standard profiles, or generating any convex or spiral profiles.
**Andrew Troelsen**
Andrew Troelsen:
Andrew W. Troelsen is currently a technology manager at Thomson Reuters in the Enterprise Content Platform (ECP - Big Data) division. He is the author of several books in the Microsoft technology space, including books on Microsoft (D)COM, ATL, .NET, C#, VB (4.0 - modern) and COM & .NET Interoperability. The latest edition of his C# book covers the .NET Core platform and each C# 7.0 update. He has over 18 years' experience authoring software development (3-5 day) workshops for engineers on MS platform technologies.
Books:
Pro C# With the .Net 3.0 Extensions
Pro C# 3.0 and the .Net 3.5 Framework
Pro VB 2005 and the .NET 2.0 Platform, Second Edition
Pro VB 2008 and the .NET 3.5 Platform, Third Edition
Pro VB 2010 and the .NET 4 Platform (co-written with Vidya Vrat Agarwal)
Pro C# 2005 and the .Net 2.0 Platform, Third Edition
Pro C# 2008 and the .NET 3.5 Platform, Fourth Edition
Pro C# 2010 and the .NET 4 Platform, Fifth Edition
Pro C# 5.0 and the .NET 4.5 Framework, Sixth Edition
C# 6.0 and the .NET 4.6 Framework, Seventh Edition
Exploring .Net (with Jason Bock)
Com and .Net Interoperability
Expert Asp.net 2.0: Advanced Application Design (with many others)
Developer's Workshop to Com and Atl 3.0
Visual Basic .Net and the .Net Platform: An Advanced Guide
Pro Vb With the .net 3.0 Extensions
C# and the .Net Platform
**Euphemism**
Euphemism:
A euphemism () is an innocuous word or expression used in place of one that is deemed offensive or suggests something unpleasant. Some euphemisms are intended to amuse, while others use bland, inoffensive terms for concepts that the user wishes to downplay. Euphemisms may be used to mask profanity or refer to topics some consider taboo such as disability, sex, excretion, or death in a polite way.
Etymology:
Euphemism comes from the Greek word euphemia (εὐφημία) which refers to the use of 'words of good omen'; it is a compound of eû (εὖ), meaning 'good, well', and phḗmē (φήμη), meaning 'prophetic speech; rumour, talk'. Eupheme is a reference to the female Greek spirit of words of praise and positivity, etc. The term euphemism itself was used as a euphemism by the ancient Greeks; with the meaning "to keep a holy silence" (speaking well by not speaking at all).
Purpose:
Avoidance:
Reasons for using euphemisms vary by context and intent. Commonly, euphemisms are used to avoid directly addressing subjects that might be deemed negative or embarrassing, e.g., death, sex, excretory bodily functions. They may be created for innocent, well-intentioned purposes or nefariously and cynically, intentionally to deceive and confuse.
Mitigation:
Euphemisms are also used to mitigate, soften or downplay the gravity of large-scale injustices, war crimes, or other events that warrant a pattern of avoidance in official statements or documents. For instance, one reason for the comparative scarcity of written evidence documenting the exterminations at Auschwitz, relative to their sheer number, is "directives for the extermination process obscured in bureaucratic euphemisms". Another example is the 2022 Russian invasion of Ukraine, during which Russian President Vladimir Putin, in his speech starting the invasion, called the invasion a "special military operation". Euphemisms are sometimes used to lessen the opposition to a political move. For example, according to linguist Ghil'ad Zuckermann, Israeli Prime Minister Benjamin Netanyahu used the neutral Hebrew lexical item פעימות peimót ("beatings (of the heart)"), rather than נסיגה nesigá ("withdrawal"), to refer to the stages in the Israeli withdrawal from the West Bank (see Wye River Memorandum), in order to lessen the opposition of right-wing Israelis to such a move. The lexical item פעימות peimót, which literally means "beatings (of the heart)", is thus a euphemism for "withdrawal".
Rhetoric:
Euphemism may be used as a rhetorical strategy, in which case its goal is to change the valence of a description.
Controversial use:
The act of labeling a term as a euphemism can in itself be controversial, as in the following two examples: Affirmative action, meaning a preference for minorities or the historically disadvantaged, usually in employment or academic admissions. This term is sometimes said to be a euphemism for reverse discrimination, or, in the UK, positive discrimination, which suggests an intentional bias that might be legally prohibited, or otherwise unpalatable.
Enhanced interrogation is a euphemism for torture. For example, columnist David Brooks called the use of this term for practices at Abu Ghraib, Guantánamo, and elsewhere an effort to "dull the moral sensibility".
Formation methods:
Phonetic modification:
Phonetic euphemism is used to replace profanities and blasphemies, diminishing their intensity. Modifications include:
Shortening or "clipping" the term, such as Jeez (Jesus) and what the— ("what the hell").
Mispronunciations, such as oh my gosh ("oh my God"), frickin ("fucking"), darn ("damn") or oh shoot ("oh shit"). This is also referred to as a minced oath.
Using acronyms as replacements, such as SOB ("son of a bitch"). Sometimes, the word "word" or "bomb" is added after it, such as F-word ("fuck"), etc. Also, the letter can be phonetically respelled.
Pronunciation:
To alter the pronunciation or spelling of a taboo word (such as a swear word) to form a euphemism is known as taboo deformation, or a minced oath. Feck is a minced oath originating in Hiberno-English and popularised outside of Ireland by the British sitcom Father Ted. Some examples of Cockney rhyming slang may serve the same purpose: to call a person a berk sounds less offensive than to call a person a cunt, though berk is short for Berkeley Hunt, which rhymes with cunt.
Understatement:
Euphemisms formed from understatements include asleep for dead and drinking for consuming alcohol. "Tired and emotional" is a notorious British euphemism for "drunk", one of many recurring jokes popularized by the satirical magazine Private Eye; it has been used by MPs to avoid unparliamentary language.
Substitution:
Pleasant, positive, worthy, neutral, or nondescript terms are often substituted for explicit or unpleasant ones, with many substituted terms deliberately coined by sociopolitical movements, marketing, public relations, or advertising initiatives, including: "meat packing company" for "slaughterhouse" (avoids entirely the subject of killing); "natural issue" or "love child" for "bastard"; "let go" for "fired", etc. Over time, it becomes socially unacceptable to use the latter word, as one is effectively downgrading the matter concerned to its former lower status, and the euphemism becomes dominant, due to a wish not to offend; see euphemism treadmill.
Metaphor:
Metaphors (beat the meat, choke the chicken, or jerkin' the gherkin for masturbation; take a dump and take a leak for defecation and urination, respectively)
Comparisons (buns for buttocks, weed for cannabis)
Metonymy (men's room for "men's toilet")
Slang:
The use of a term with a softer connotation, though it shares the same meaning. For instance, screwed up is a euphemism for fucked up; hook-up and laid are euphemisms for sexual intercourse.
Foreign words:
Expressions or words from a foreign language may be imported for use as euphemism. For example, the French word enceinte was sometimes used instead of the English word pregnant; abattoir for "slaughterhouse", although in French the word retains its explicit violent meaning "a place for beating down", conveniently lost on non-French speakers. "Entrepreneur" for "businessman" adds glamour; "douche" (French: shower) for vaginal irrigation device; "bidet" (French: little pony) for "vessel for intimate ablutions". Ironically, although in English physical "handicaps" are almost always described with euphemism, in French the English word "handicap" is used as a euphemism for their problematic words "infirmité" or "invalidité".
Periphrasis/circumlocution:
Periphrasis, or circumlocution, is one of the most common: to "speak around" a given word, implying it without saying it. Over time, circumlocutions become recognized as established euphemisms for particular words or ideas.
Doublespeak:
Bureaucracies frequently spawn euphemisms intentionally, as doublespeak expressions. For example, in the past, the US military used the term "sunshine units" for contamination by radioactive isotopes. Into the present, the United States Central Intelligence Agency refers to systematic torture as "enhanced interrogation techniques". An effective death sentence in the Soviet Union during the Great Purge often used the clause "imprisonment without right to correspondence": the person sentenced would be shot soon after conviction. As early as 1939, Nazi official Reinhard Heydrich used the term Sonderbehandlung ("special treatment") to mean summary execution of persons viewed as "disciplinary problems" by the Nazis even before commencing the systematic extermination of the Jews. Heinrich Himmler, aware that the word had come to be known to mean murder, replaced that euphemism with one in which Jews would be "guided" (to their deaths) through the slave-labor and extermination camps after having been "evacuated" to their doom. Such was part of the formulation of Endlösung der Judenfrage (the "Final Solution to the Jewish Question"), which became known to the outside world during the Nuremberg Trials.
Lifespan:
Frequently, over time, euphemisms themselves become taboo words, through the linguistic process of semantic change known as pejoration, which University of Oregon linguist Sharon Henderson Taylor dubbed the "euphemism cycle" in 1974, also frequently referred to as the "euphemism treadmill". For instance, the act of human defecation is possibly the neediest candidate for a euphemism in all eras. Toilet is an 18th-century euphemism, replacing the older euphemism house-of-office, which in turn replaced the even older euphemisms privy-house and bog-house. In the 20th century, where the old euphemisms lavatory (a place where one washes) or toilet (a place where one dresses) had grown through widespread usage (e.g., in the United States) to be synonymous with the crude act they sought to deflect, they were sometimes replaced with bathroom (a place where one bathes), washroom (a place where one washes), or restroom (a place where one rests), or even by the extreme form powder room (a place where one applies facial cosmetics). The form water closet, which in turn became euphemised to W.C., is a less deflective form. The word shit appears to have originally been a euphemism for defecation in Pre-Germanic, as the Proto-Indo-European root *sḱeyd-, from which it was derived, meant 'to cut off'. Another example in American English is the replacement of "colored people" with "Negro" (euphemism by foreign language), which itself came to be replaced by either "African American" or "Black".
Also in the United States, the term "ethnic minorities" has in the 2010s been replaced by "people of color". Venereal disease, which associated shameful bacterial infection with a seemingly worthy ailment emanating from Venus, the goddess of love, soon lost its deflective force in the post-classical-education era as "VD", which was replaced by the three-letter initialism "STD" (sexually transmitted disease); later, "STD" was replaced by "STI" (sexually transmitted infection). Intellectually disabled people were originally defined with words such as "morons" or "imbeciles", which then became commonly used insults. The medical diagnosis was changed to "mentally retarded", which morphed into a pejorative against those with intellectual disabilities. To avoid the negative connotations of their diagnoses, students who need accommodations because of such conditions are often labeled as "special needs" instead, although the word "special" or "sped" (short for "special education") has begun to crop up as a schoolyard insult. As of August 2013, the Social Security Administration replaced the term "mental retardation" with "intellectual disability". Since 2012, that change in terminology has been adopted by the National Institutes of Health and the medical industry at large. There are numerous disability-related euphemisms that have negative connotations.
**PET100**
PET100:
PET100 homolog is a protein that in humans is encoded by the PET100 gene. Mitochondrial complex IV, or cytochrome c oxidase, is a large transmembrane protein complex that is part of the respiratory electron transport chain of mitochondria. The small protein encoded by the PET100 gene plays a role in the biogenesis of mitochondrial complex IV. This protein localizes to the inner mitochondrial membrane and is exposed to the intermembrane space. Mutations in this gene are associated with mitochondrial complex IV deficiency. This gene has a pseudogene on chromosome 3. Alternative splicing results in multiple transcript variants.
Structure:
The PET100 gene is located on the p arm of chromosome 19 in position 13.2 and spans 1,839 base pairs. The gene produces a 9.1 kDa protein composed of 73 amino acids. The encoded protein localizes to the inner mitochondrial membrane and is exposed to the intermembrane space. This protein's N-terminus is essential for mitochondrial localization. It assembles into a 300 kDa complex in a manner dependent on the mitochondrial membrane potential, accumulating over time.
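The quoted mass and length figures are mutually consistent: 9.1 kDa over 73 residues implies an average residue mass of about 125 Da, within the range spanned by the standard amino-acid residues (roughly 57 Da for glycine up to 186 Da for tryptophan; these bounds are standard biochemistry values, not from the text). A minimal sketch of that plausibility check:

```python
# Plausibility check: does a 9.1 kDa mass fit a 73-residue protein?
mass_da = 9100.0    # 9.1 kDa, from the text
n_residues = 73     # amino-acid count, from the text

avg_residue_mass = mass_da / n_residues   # ~124.7 Da per residue

# Lightest and heaviest standard residue masses (glycine, tryptophan).
GLY_DA, TRP_DA = 57.0, 186.1
assert GLY_DA < avg_residue_mass < TRP_DA  # within the plausible range
```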
Function:
The protein encoded by PET100 is involved in Complex IV biogenesis as a COX chaperone; it is required for interaction between MR-1S, PET117, and Complex IV.
Clinical significance:
In 8 patients of Lebanese origin living in Australia, a c.3G>C mutation in the PET100 gene caused Complex IV deficiency and Leigh syndrome. Symptoms included delayed psychomotor development, seizures, hypotonia, brain abnormalities, and elevated blood and cerebrospinal fluid lactate levels. In another patient of Pakistani origin, a homozygous c.142C>T mutation resulted in Complex IV deficiency with intrauterine growth retardation, metabolic and lactic acidosis, hypoglycemia, coagulopathy, elevated serum creatine kinase levels, seizures, and intraventricular cysts.
Interactions:
The encoded protein interacts with MR-1S and COX7A2.
This protein is required for MR-1S, PET117, and Complex IV to interact.
**Reach cast**
Reach cast:
The reach cast is a casting technique used in fly fishing. The reach cast involves casting the fly lure over flowing water, such as a stream, and then just before the fly lands, moving the arm and fly rod in the upstream direction to arrange the fishing line so that it produces less apparent drag in the water. The technique allows the lure to more closely resemble a free-floating insect, resulting in a greater chance of it being taken by a fish. Reach casting also allows an experienced caster to pitch curved casts in order to get the lures into difficult places. Reach casting is most commonly used in fishing freshwater streams for trout, although the reach cast is also used in some saltwater fishing where one can stand in the shallows and there is a consistent current moving in one direction.
A reach cast is considered a type of mend during the casting stroke, an in-air mend prior to the fly landing in the water. Without this cast adjustment, the line would grow taut immediately upon impact with the moving water's surface and would pull the fly against the current or across it, making its motion become more unnatural to the fish seeking an insect that has just landed on the water.
In many streams, current may flow more slowly along the edges where it is shallower and there is drag introduced by the shore, and surface-feeding trout and other fish tend to linger in the still part of the water. When casting a line across a stream, the line can land in the swifter-running portion of the current, and would pull against the fly lure that lands in the slower-moving water. The reach cast introduces some slack to compensate for the faster-moving water, allowing the fly to land and move more like a floating insect.
**Combustion chamber**
Combustion chamber:
A combustion chamber is part of an internal combustion engine in which the fuel/air mix is burned. For steam engines, the term has also been used for an extension of the firebox which is used to allow a more complete combustion process.
Internal combustion engines:
In an internal combustion engine, the pressure caused by the burning air/fuel mixture applies direct force to part of the engine (e.g. for a piston engine, the force is applied to the top of the piston), which converts the gas pressure into mechanical energy (often in the form of a rotating output shaft). This contrasts with an external combustion engine, where the combustion takes place in a separate part of the engine from where the gas pressure is converted into mechanical energy.
Spark-ignition engines:
In spark-ignition engines, such as petrol (gasoline) engines, the combustion chamber is usually located in the cylinder head. The engines are often designed such that the bottom of the combustion chamber is roughly in line with the top of the engine block.
Modern engines with overhead valves or overhead camshaft(s) use the top of the piston (when it is near top dead centre) as the bottom of the combustion chamber. Above this, the sides and roof of the combustion chamber include the intake valves, exhaust valves and spark plug. This forms a relatively compact combustion chamber without any protrusions to the side (i.e. all of the chamber is located directly above the piston). Common shapes for the combustion chamber are typically similar to one or more half-spheres (such as the hemi, pent-roof, wedge or kidney-shaped chambers).
The older flathead engine design uses a "bathtub"-shaped combustion chamber, with an elongated shape that sits above both the piston and the valves (which are located beside the piston). IOE engines combine elements of overhead valve and flathead engines; the intake valve is located above the combustion chamber, while the exhaust valve is located below it.
The shape of the combustion chamber, intake ports and exhaust ports is key to achieving efficient combustion and maximising power output. Cylinder heads are often designed to achieve a certain "swirl" pattern (a rotational component to the gas flow) and turbulence, which improve the mixing and increase the flow rate of gases. The shape of the piston top also affects the amount of swirl.
Another design feature to promote turbulence for good fuel/air mixing is squish, where the fuel/air mix is "squished" at high pressure by the rising piston. The location of the spark plug is also an important factor, since this is the starting point of the flame front (the leading edge of the burning gases), which then travels downwards towards the piston. Good design should avoid narrow crevices where stagnant "end gas" can become trapped, reducing the power output of the engine and potentially leading to engine knocking. Most engines use a single spark plug per cylinder; however, some (such as the 1986-2009 Alfa Romeo Twin Spark engine) use two spark plugs per cylinder.
Compression-ignition engines:
Compression-ignition engines, such as diesel engines, are typically classified as either:
Direct injection, where the fuel is injected into the combustion chamber. Common varieties include unit direct injection and common rail injection.
Indirect injection, where the fuel is injected into a swirl chamber or pre-combustion chamber. The fuel ignites as it is injected into this chamber and the burning air/fuel mixture spreads into the main combustion chamber.
Direct injection engines usually give better fuel economy, but indirect injection engines can use a lower grade of fuel.
Harry Ricardo was prominent in developing combustion chambers for diesel engines, the best known being the Ricardo Comet.
Gas turbine:
In a continuous flow system, for example a jet engine combustor, the pressure is controlled and the combustion creates an increase in volume. The combustion chamber in gas turbines and jet engines (including ramjets and scramjets) is called the combustor.
The combustor is fed high-pressure air by the compression system; it adds fuel, burns the mixture, and feeds the hot, high-pressure exhaust into the turbine components of the engine or out the exhaust nozzle.
Different types of combustors exist, mainly: Can type: Can combustors are self-contained cylindrical combustion chambers. Each "can" has its own fuel injector, liner, interconnectors and casing, and each "can" receives air through an individual opening.
Cannular type: Like the can combustor, cannular combustors have discrete combustion zones contained in separate liners with their own fuel injectors. Unlike the can combustor, all the combustion zones share a common air casing.
Annular type: Annular combustors do away with the separate combustion zones and simply have a continuous liner and casing in a ring (the annulus).
Rocket engine:
If the gas velocity changes, thrust is produced, as in the nozzle of a rocket engine.
Steam engines:
Considering the definition of combustion chamber used for internal combustion engines, the equivalent part of a steam engine would be the firebox, since this is where the fuel is burned. However, in the context of a steam engine, the term "combustion chamber" has also been used for a specific area between the firebox and the boiler. This extension of the firebox is designed to allow more complete combustion of the fuel, improving fuel efficiency and reducing the build-up of soot and scale. This type of combustion chamber is used in large steam locomotive engines, as it allows the use of shorter firetubes.
Micro combustion chambers:
Micro combustion chambers are devices in which combustion takes place in a very small volume; the resulting high surface-to-volume ratio plays a vital role in stabilizing the flame.
**Control array**
Control array:
In Visual Basic, a control array is a group of related controls in a Visual Basic form that share the same event handlers. Control arrays are always single-dimensional arrays, and controls can be added to or deleted from control arrays at runtime. One application of control arrays is to hold menu items, as the shared event handler can be used for code common to all the menu items in the control array. Control arrays are a convenient way to handle groups of controls that perform a similar function. All the events available to a single control are still available to the array of controls, the only difference being that an argument indicating the index of the selected array element is passed to the event. Hence, instead of writing individual procedures for each control (i.e. not using control arrays), you only have to write one procedure for each array.
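Control arrays are a Visual Basic feature, but the underlying pattern — one shared handler that receives the index of the control that raised the event — can be sketched in any language. A minimal Python analogue (all class and handler names here are illustrative, not part of any real GUI toolkit):

```python
# Sketch of the control-array pattern: one shared event handler serves a
# whole group of "controls" and receives the index of the firing control.
class ControlArray:
    def __init__(self, handler):
        self.handler = handler      # the shared event handler
        self.controls = []

    def add(self, control):
        """Controls can be added at runtime; returns the new index."""
        self.controls.append(control)
        return len(self.controls) - 1

    def fire(self, index):
        """Simulate an event on the control at `index`."""
        return self.handler(index, self.controls[index])

def on_menu_click(index, caption):
    # Code common to every menu item in the array.
    return f"clicked item {index}: {caption}"

menu = ControlArray(on_menu_click)
for caption in ("Open", "Save", "Quit"):
    menu.add(caption)

print(menu.fire(1))   # clicked item 1: Save
```

Only one procedure, `on_menu_click`, is written for the whole group; the index argument distinguishes which item was selected, just as in the VB6 mechanism described above.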
Control array:
Control arrays are no longer supported in Visual Basic .NET, as "changes to the event model" made them unnecessary. The Visual Basic Upgrade Wizard can convert code that uses control arrays into Visual Basic .NET code that uses more recent structures.
**Dexel**
Dexel:
The term Dexel has two common uses: Dexel ("depth pixel") is a concept used for a discretized representation of functions defined on surfaces, used in geometric modeling and physical simulation and sometimes also referred to as a multilevel Z-map. A dexel is a nodal value of a scalar or vector field on a meshed surface. Dexels are used in the simulation of manufacturing processes (such as turning, milling or rapid prototyping) in which workpiece surfaces are subject to modification. It is practical to express the surface evolution by dexels especially when the scale of the surface evolution is very different from the discretization step of the structural finite element 3D model (e.g. in machining, the variation in depth of cut is often several orders of magnitude smaller (1–10 µm) than the FE model mesh step (1 mm)).
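The first sense can be illustrated with a toy sketch: a dexel field stored as a grid of depth values over a surface mesh, updated by tool passes whose depth of cut is far smaller than the mesh step. Grid size, units and function names below are invented for illustration:

```python
# Minimal dexel ("depth pixel") field: a regular surface grid in which
# each node stores the depth of material removed so far at that point.
import numpy as np

def make_dexel_field(nx, ny):
    return np.zeros((nx, ny))       # all zeros: untouched workpiece stock

def apply_tool_pass(field, row, depth_of_cut):
    """One straight milling pass along a row of surface nodes.

    The depth of cut (here 5 um, i.e. 0.005 mm) can be orders of
    magnitude smaller than the mesh step (say 1 mm) -- exactly the scale
    separation that dexel representations handle well.
    """
    field[row, :] += depth_of_cut
    return field

field = make_dexel_field(4, 5)                      # 4 x 5 surface nodes
apply_tool_pass(field, row=2, depth_of_cut=0.005)   # 5 um cut, in mm
```

The expensive FE model keeps its coarse mesh, while the dexel field tracks the fine-grained surface evolution on top of it.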
Dexel:
Dexel ("detector element") is the analog of a pixel ("picture element") but native to a detector rather than a visible picture. That is, it describes the elements in a detector, which may be processed, combined, resampled, or otherwise mangled, before creating a picture. As such, there may not be a one-to-one correspondence between the pixels in an image, and the dexels used to create that image. For example, cameras labeled as "10-megapixel" can be used to create a 640x480 picture. Using dexel terminology, the camera actually uses 10 million dexels to create a picture with 640x480 pixels. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**View synthesis**
View synthesis:
View synthesis aims to create new views of a specific subject starting from a number of pictures taken from given points of view.
An active branch of computer science research, it draws on computer vision and artificial intelligence to define suitable approaches to the problem.
View synthesis:
One way of using view synthesis is to take a number of images of a specific subject from certain points, with specific camera orientations and settings, and then use that data to build a synthetic image that looks as if it were taken from a virtual camera placed at a different point with the same settings.
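When the 3D geometry of the subject and the camera parameters are known, producing the virtual view reduces to re-projecting the scene through a camera at the new position; recovering that geometry from the input pictures is the hard part. A toy sketch of just the projection step, under strong simplifying assumptions (idealized pinhole cameras, no rotation, all values illustrative):

```python
# Toy view-synthesis geometry: with known 3D points, the "virtual
# camera" image is obtained by re-projecting the points through an
# idealized pinhole camera placed at the new viewpoint.
import numpy as np

def project(points, cam_pos, f=1.0):
    """Pinhole projection of Nx3 world points for a camera at cam_pos
    looking down +z (no rotation, focal length f)."""
    p = points - cam_pos             # world -> camera coordinates
    return f * p[:, :2] / p[:, 2:3]  # perspective divide

subject = np.array([[0.0, 0.0, 4.0],     # two 3D points of the "subject"
                    [1.0, 0.0, 4.0]])

real_cam = np.array([0.0, 0.0, 0.0])     # where a real photo was taken
virtual_cam = np.array([0.5, 0.0, 0.0])  # the novel viewpoint

img_real = project(subject, real_cam)
img_virtual = project(subject, virtual_cam)   # geometry of the new view
```

Real view-synthesis systems must additionally estimate depth and camera poses from the images themselves, and fill in surfaces that were occluded in every input picture.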
View synthesis:
Consider two people interacting through their computers, each using a webcam. View synthesis could be used to render corrected images, as if taken from a virtual webcam positioned behind the application window. This would solve the long-standing eye contact problem experienced in this environment: a double illusion is perceived by the users, in which each of them looks at the other's face, but neither of them gets the proper feeling of eye contact.
View synthesis:
An example application of view synthesis is Free viewpoint television. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Environmental impact-minimizing vehicle tuning**
Environmental impact-minimizing vehicle tuning:
Environmental impact-minimizing vehicle tuning is the modification (or tuning) of cars to reduce energy consumption.
General tuning:
Hybridization: change to a hybrid electric vehicle. One can use an aftermarket kit for the powertrain or use a hybrid adapter trailer.
General tuning:
Modifying key engine-selection parameters in the Battery Management System of a hybrid vehicle. Vehicles such as mild hybrids have a parameter for the threshold speed at which the vehicle switches from electric propulsion to the internal combustion engine. Setting a higher threshold speed can reduce emissions and increase fuel efficiency (although it may increase strain on the battery).
General tuning:
Pluginization of hybrid or electric vehicles. A plug-in hybrid electric vehicle (PHEV) is a hybrid which has additional battery capacity and the ability to be recharged from an external electrical outlet. A plug-in electric vehicle is basically the same, without an extra internal combustion engine. In addition, modifications are made to the vehicle's control software. The vehicle can be used for short trips of moderate speed without needing the internal combustion engine (ICE) component of the vehicle, thereby saving fuel costs. In this mode of operation the vehicle operates as a pure battery electric vehicle with a weight penalty (the ICE). The long range and additional power of the ICE power train is available when needed.
General tuning:
Electric vehicle conversion. An electric vehicle conversion is the modification of a conventional internal combustion engine (ICE) driven vehicle to battery electric propulsion, creating a battery electric vehicle. In some cases the vehicle may be built by the converter, or assembled from a kit car. In some countries, the user can choose to buy a converted vehicle of any model in the automaker dealerships only paying the cost of the batteries and motor, with no installation costs (it is called preconversion or previous conversion).
General tuning:
Modifying the engine to run on an alternative fuel. These include natural gas conversion of gasoline-powered cars and vegetable oil conversion of diesel cars. Cars with diesel engines can be converted reasonably cheaply and easily to run on 100% vegetable oil. Vegetable oil is often cheaper and cleaner than petrodiesel, but local laws often levy harsh fines on users who fail to pay fuel taxes when acquiring their fuel outside regular distribution channels. Liquid nitrogen, hydrogen fuel and ethanol conversions are other alternative fuel conversions that can be done with internal combustion engines. The first two will eliminate all vehicle emissions, while the third will only slightly decrease emissions.
General tuning:
Replacing the internal combustion engine of a hybrid vehicle with a hydrogen fuel cell to make the vehicle completely emissionless, even in recharging mode.
Adding a hydrogen fuel cell to a battery electric vehicle to increase its driving range.
Adding more electric batteries to a battery electric vehicle to increase driving range. Besides placing more batteries, this operation often requires additional modification of the Battery Management System. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pickering series**
Pickering series:
The Pickering series (also known as the Pickering–Fowler series) consists of lines of singly ionized helium found, usually in absorption, in the spectra of hot stars like Wolf–Rayet stars. The name comes from Edward Charles Pickering and Alfred Fowler. The lines are produced by transitions from a higher energy level of an electron to a level with principal quantum number n = 4. The lines have wavelengths: 4339 Å (n = 10 to n = 4), 4541 Å (n = 9 to n = 4), 4859 Å (n = 8 to n = 4), 5412 Å (n = 7 to n = 4), 6560 Å (n = 6 to n = 4), and 10124 Å (n = 5 to n = 4). The transitions from the even-n states overlap with hydrogen lines and are therefore masked in typical absorption stellar spectra. However, they are seen in emission in the spectra of Wolf–Rayet stars, as these stars have little or no hydrogen.
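The listed wavelengths can be checked against the Rydberg formula for a hydrogen-like ion with Z = 2 and lower level n = 4. The sketch below uses the infinite-nuclear-mass Rydberg constant and returns vacuum wavelengths, so the results land within a couple of ångströms of the quoted (air) values:

```python
# Rydberg-formula check of the Pickering series: He+ (Z = 2),
# transitions from level n down to n = 4.
R_INF = 1.0973731568e7          # Rydberg constant in m^-1
Z, N_LOW = 2, 4

def pickering_wavelength(n):
    """Vacuum wavelength in angstroms of the He+ transition n -> 4."""
    inv_lam = R_INF * Z**2 * (1.0 / N_LOW**2 - 1.0 / n**2)
    return 1e10 / inv_lam

for n in range(5, 11):          # n = 5 ... 10, as in the series above
    print(n, round(pickering_wavelength(n), 1))
```

The small residual offsets from the tabulated values come from the finite nuclear mass of helium and the vacuum-to-air wavelength correction, both neglected here.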
Pickering series:
In 1896, Pickering published observations of previously unknown lines in the spectrum of the star Zeta Puppis. Pickering attributed the observation to a new form of hydrogen with half-integer transition levels. Fowler managed to produce similar lines from a hydrogen–helium mixture in 1912, and supported Pickering's conclusion as to their origin. Niels Bohr, however, included an analysis of the series in his 'trilogy' on atomic structure and concluded that Pickering and Fowler were wrong and that the spectral lines arise instead from ionised helium, He+. Fowler was initially skeptical but was ultimately convinced that Bohr was correct, and by 1915 "spectroscopists had transferred [the Pickering series] definitively [from hydrogen] to helium." Bohr's theoretical work on the Pickering series had demonstrated the need for "a re-examination of problems that seemed already to have been solved within classical theories" and provided important confirmation for his atomic theory.
**Lovevery**
Lovevery:
Lovevery is an American company producing play-kit subscription boxes for children.
Description:
Lovevery produces educational toys, books, and games via play-kit subscription boxes "designed to meet the developmental needs and brain development of toddlers and babies". The toys, produced in consultation with child development experts, physical therapists, and cognitive developmental psychologists, follow the Montessori educational model. The company offers subscription boxes and off-the-shelf toys from birth through age three; age-appropriate play kits are sent to subscribers every two to three months. Play guides with each product suggest play ideas and developmental milestones, and a parenting app also accompanies the subscription. Lovevery's products are made from organic and sustainably-sourced materials. Lovevery products are also available at Target and other retailers. The company is headquartered in Boise, Idaho and is a certified B Corporation.
Reviews:
Reviews of the company products, including one in The Wall Street Journal, express the opinions of users, consumer experts, and psychologists. The Wall Street Journal asked, "...are these services worth the money, or can pillaging the toy aisles at Target work just as well?" Parent Susie Allison of @busytoddler on Instagram answered, "...kids don't need fancy to have fun… The toy or the book or whatever it is that comes is not ever going to reach every single child... I think it's more important to curate something specific to your child". Dana L. Suskind, who wrote Thirty Million Words: Building a Child's Brain, focused on the transformative educational effect of parents and children interacting together—even if the child is not yet verbal. Of Lovevery products, she said "My feeling is if they ...help generate interaction between parent and child, that's an amazing thing." According to Harvey Karp, author of The Happiest Baby on the Block, Lovevery products can alleviate parental stress of finding the right toys for kids' ages: "Other than the expense, I don't really see a downside." He said, "I think they're well done... They're really trying to be supportive and to be educational as well as being helpful for the child."
History:
Lovevery was founded in 2015 by Jessica Rolph and Roderick Morris. In 2019, Maveron led a $20 million funding cycle for Lovevery, along with Google Ventures and the Chan Zuckerberg Initiative. In October 2021, Lovevery raised $100 million in new investments, led by TCG. Other investors include Reach Capital and SoGal Ventures, as well as the Collaborative Fund.
Awards:
2018 — Gold Parents' Choice Award; 2018 — listed by Time Magazine as a "Best Invention"; 2018 and 2020 — Red Dot award; 2019 — finalist for the Fast Company designation of Most Innovative Company by Design; 2021 — founders received the Ernst & Young Entrepreneur of the Year Award, Utah division.
**Enduro motorcycle**
Enduro motorcycle:
An enduro motorcycle is an off-road racing motorcycle used in enduros, which are long-distance, cross-country time trial competitions held largely on trails.
Types and features of enduro motorcycles:
Enduro motorcycles closely resemble motocross, or "MX" bikes (upon which they are often based). They may have special features such as oversized gas tanks, engines tuned for reliability and longevity, sump protectors, and more durable (and heavier) components. Enduro bikes combine the long-travel suspension of an off-road motocross bike with engines that are reliable and durable over long distances, and may be fitted with oversize gas tanks for adequate range. Some enduro bikes have street-legal features such as headlights and quiet mufflers to enable them to use public roadways. The engine of an enduro bike is usually a single cylinder 2-stroke between 125 cc and 360 cc, or 4-stroke between 195 and 650 cc.
Types and features of enduro motorcycles:
A large and powerful engine is not always an advantage, and riders may prefer smaller bikes that are lighter and more maneuverable. In the UK, where enduros are often held in wet, boggy areas such as the Welsh hills, 250 cc may be sufficient. In drier climates, where the dirt surface is firmer (albeit dusty), good riders can benefit from having a heavier bike with more power. Several design differences may exist between enduro motorcycles and moto/supercross bikes, according to the rules of the particular competition. For an enduro event such as endurocross (Enduro-X), these may include: a headlight for on-road and after-dark use; a brake light/tail light for on-road use; protective hardware such as brake and clutch handguards ("bark busters") for protection against branches and leaves; an exhaust system that is street legal and meets regulations for noise and spark arresting; a wide-ratio gearbox; narrower handlebars, so that the bike can fit between branches and trees easily; a roll chart holder or enduro computer; and a heavy flywheel.
Manufacturers:
Past and present enduro manufacturers include AJP, ATK, Beta, Bultaco, CCM, Fantic, Gas Gas, Hodaka, Honda, Husaberg, Husqvarna, Indian, Kawasaki, KTM, Maico, Montesa, MZ, Ossa, Penton, Sherco, Suzuki, SWM and Yamaha.
History:
Motorcycles specifically intended for enduro competition first appeared at the International Six Day Trial (ISDT), now called the International Six Days Enduro (ISDE). The event was first held in 1913 at Carlisle, England. The ISDE requires an enduro motorcycle to withstand over six days and upwards of 1250 km (777 miles) of competition; repairs are limited to those performed by the rider with limited parts. The event has been held annually, apart from interruptions due to World War I and World War II, at various locations throughout the world. The early events were a test of rider skill and motorcycle reliability. The earliest courses used the dirt roads common in that era. Today, most of the routes are off-road. In 1980, the ISDT was renamed the International Six Days Enduro (ISDE).
History:
Until 1973, the event was always held in Europe. In 1973 it was held in the United States, and since then it has been held outside Europe more frequently: twice in Australia (1992 and 1998), again in the USA (1994), and in Brazil (2003), New Zealand (2006) and Chile (2007). The ISDE has attracted national teams from as many as 32 countries in recent years. In the 1970s, the term "enduro" was used in US marketing for dual-purpose motorcycles regardless of their suitability for competition.
History:
In the U.S., enduro motorcycles appeared in light and heavyweight classes during the Greenhorn Enduro hosted by the Pasadena Motorcycle Club (PMC). The Greenhorn Enduro was a nationally recognized 500-mile, two-day desert off-road competition that pounded both rider and machine. Veterans of the early Greenhorn Enduro included Bud Ekins and Steve McQueen.
History:
Many current enduro motorcycles are built along the basic lines of a World Championship (WEC) machine, as used in the World Enduro Championship. The WEC is a time-card enduro, whereby a number of stages are raced in a time trial against the clock over a course of at least 200 km (124 miles) consisting of both paved and unpaved trails and roads (up to 30% of the course may be on public or private asphalted roads).
History:
Another popular type of enduro competition that has spurred enduro motorcycle development is endurocross, a hybrid event combining enduro and supercross.
History:
In the UK, most enduro clubman bikes were 2-strokes, but many ACU events had a separate class for 4-stroke machines, such as the Honda XR series. Such 4-strokes tended to be effectively "sporty trail bikes" rather than de-tuned scramblers, but their much improved fuel economy (compared to 2-strokes) meant that they could complete lengthy laps of thirty miles or more without refuelling. This obviated the need for a back-up team to man refueling stops, allowing 4-stroke clubmen to compete without a support entourage.
Technical developments:
MX racing bikes have often been used as platforms for building enduro bikes. This was partially driven by the conversion of MX from 2-stroke to 4-stroke engine designs to comply with regulatory trends, as well as the development of hybrid competition races such as Enduro-X. Compared to MX bikes, enduro and dual-sport bikes traditionally had a much higher proportion of 4-stroke motors. Though powerful, MX-based off-road motorcycles can experience problems running full enduro courses, where an over-emphasis on light weight and high power may cause engine reliability problems when racing over distances that are much longer than an MX circuit.
Registration requirements for on-road use:
As some enduro courses have portions on public roads as well as off-road tracks, competitors in such events need enduro bikes that comply with local registration requirements for on-road use. Modern enduro bikes are closely related to their MX counterparts, and the manufacturers may state that enduro machines are for non-highway use only. Some countries refuse the registration of enduro bikes for on-highway use unless they are altered to meet local road-legal specifications. However, some enduro bikes, such as the Husqvarna TE250 and some Husabergs, are built to comply with on-road requirements, and it is fairly straightforward to register these machines for on-road use.
**Linopristin/flopristin**
Linopristin/flopristin:
Linopristin/flopristin (development codes NXL103 and XRP 2868) is an experimental drug candidate under development by Novexel. It is an oral streptogramin antibiotic that has potent in vitro activity against certain Gram-positive bacteria, including methicillin-resistant Staphylococcus aureus (MRSA), as well as important respiratory pathogens, including penicillin-, macrolide- and quinolone-resistant strains. It is a combination of linopristin and flopristin.
Clinical trials:
Positive results have been reported from a phase II trial comparing it with amoxicillin. Another phase II trial began in 2010 comparing it with linezolid for treatment of acute bacterial skin and skin structure infections (ABSSSI). No development activity has been reported since 2015. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Exterior angle theorem**
Exterior angle theorem:
The exterior angle theorem is Proposition 1.16 in Euclid's Elements, which states that the measure of an exterior angle of a triangle is greater than either of the measures of the remote interior angles. This is a fundamental result in absolute geometry because its proof does not depend upon the parallel postulate.
Exterior angle theorem:
In several high school treatments of geometry, the term "exterior angle theorem" has been applied to a different result, namely the portion of Proposition 1.32 which states that the measure of an exterior angle of a triangle is equal to the sum of the measures of the remote interior angles. This result, which depends upon Euclid's parallel postulate, will be referred to as the "High school exterior angle theorem" (HSEAT) to distinguish it from Euclid's exterior angle theorem.
Exterior angle theorem:
Some authors refer to the "High school exterior angle theorem" as the strong form of the exterior angle theorem and "Euclid's exterior angle theorem" as the weak form.
Exterior angles:
A triangle has three corners, called vertices. The sides of a triangle (line segments) that come together at a vertex form two angles (four angles if you consider the sides of the triangle to be lines instead of line segments). Only one of these angles contains the third side of the triangle in its interior, and this angle is called an interior angle of the triangle. In the picture below, the angles ∠ABC, ∠BCA and ∠CAB are the three interior angles of the triangle. An exterior angle is formed by extending one of the sides of the triangle; the angle between the extended side and the other side is the exterior angle. In the picture, angle ∠ACD is an exterior angle.
Euclid's exterior angle theorem:
The proof of Proposition 1.16 given by Euclid is often cited as one place where Euclid gives a flawed proof. Euclid proves the exterior angle theorem by: constructing the midpoint E of segment AC, drawing the ray BE, constructing the point F on ray BE so that E is (also) the midpoint of BF, and drawing the segment FC. By congruent triangles we can conclude that ∠ BAC = ∠ ECF, and ∠ ECF is smaller than ∠ ECD; since ∠ ECD = ∠ ACD, it follows that ∠ BAC is smaller than ∠ ACD. The same can be done for the angle ∠ CBA by bisecting BC.
Euclid's exterior angle theorem:
The flaw lies in the assumption that a point (F, above) lies "inside" angle (∠ ACD). No reason is given for this assertion, but the accompanying diagram makes it look like a true statement. When a complete set of axioms for Euclidean geometry is used (see Foundations of geometry) this assertion of Euclid can be proved.
Euclid's exterior angle theorem:
Invalidity in spherical geometry:
The exterior angle theorem is not valid in spherical geometry, nor in the related elliptic geometry. Consider a spherical triangle one of whose vertices is the North Pole and whose other two vertices lie on the equator. The sides of the triangle emanating from the North Pole (great circles of the sphere) both meet the equator at right angles, so this triangle has an exterior angle that is equal to a remote interior angle. The other interior angle (at the North Pole) can be made larger than 90°, further emphasizing the failure of this statement. However, since Euclid's exterior angle theorem is a theorem in absolute geometry, it is automatically valid in hyperbolic geometry.
High school exterior angle theorem:
The high school exterior angle theorem (HSEAT) says that the size of an exterior angle at a vertex of a triangle equals the sum of the sizes of the interior angles at the other two vertices of the triangle (remote interior angles). So, in the picture, the size of angle ACD equals the size of angle ABC plus the size of angle CAB.
High school exterior angle theorem:
The HSEAT is logically equivalent to the Euclidean statement that the sum of the angles of a triangle is 180°. Let d be the measure of an exterior angle, b the measure of the adjacent interior angle, and a and c the measures of the remote interior angles. If it is known that the sum of the measures of the angles in a triangle is 180°, then the HSEAT is proved as follows: b + d = 180° and a + b + c = 180°, so b + d = b + a + c, and therefore d = a + c.
On the other hand, if the HSEAT is taken as a true statement, then d = a + c together with b + d = 180° gives a + b + c = 180°, proving that the sum of the measures of the angles of a triangle is 180°.
High school exterior angle theorem:
The Euclidean proof of the HSEAT (and simultaneously the result on the sum of the angles of a triangle) starts by constructing the line parallel to side AB passing through point C, and then using the properties of corresponding angles and alternate interior angles of parallel lines to get the conclusion as in the illustration. The HSEAT can be extremely useful when trying to calculate the measures of unknown angles in a triangle.
**Peak bone mass**
Peak bone mass:
Peak bone mass is the maximum amount of bone a person has during their life. It typically occurs in the early 20s in females and the late 20s in males. Peak bone mass is typically lower in females than in males, and is also lower in White and Asian populations than in Black populations. Bone mass can be assessed from the size and density of the mineralized tissue within the periosteal envelope, and a person's bone mineral density (BMD) can be used to determine the strength of that bone. Research has shown that puberty strongly affects bone size, because during this time males typically undergo a longer bone maturation period than females, which is one reason women are more prone to osteoporosis than men.
**Right ascension**
Right ascension:
Right ascension (abbreviated RA; symbol α) is the angular distance of a particular point measured eastward along the celestial equator from the Sun at the March equinox to the (hour circle of the) point in question above the Earth. When paired with declination, these astronomical coordinates specify the location of a point on the celestial sphere in the equatorial coordinate system.
Right ascension:
An old term, right ascension (Latin: ascensio recta) refers to the ascension, or the point on the celestial equator that rises with any celestial object as seen from Earth's equator, where the celestial equator intersects the horizon at a right angle. It contrasts with oblique ascension, the point on the celestial equator that rises with any celestial object as seen from most latitudes on Earth, where the celestial equator intersects the horizon at an oblique angle.
Explanation:
Right ascension is the celestial equivalent of terrestrial longitude. Both right ascension and longitude measure an angle from a primary direction (a zero point) on an equator. Right ascension is measured from the Sun at the March equinox, i.e. the First Point of Aries, which is the place on the celestial sphere where the Sun crosses the celestial equator from south to north at the March equinox and is currently located in the constellation Pisces. Right ascension is measured continuously in a full circle from that alignment of Earth and Sun in space, that equinox, the measurement increasing towards the east. As seen from Earth (except at the poles), objects noted to have 12h RA are longest visible (appear throughout the night) at the March equinox; those with 0h RA (apart from the Sun) do so at the September equinox. On those dates at midnight, such objects will reach ("culminate" at) their highest point (their meridian). How high depends on their declination; if 0° declination (i.e. on the celestial equator), then at Earth's equator they are directly overhead (at zenith).
Explanation:
Any units of angular measure could have been chosen for right ascension, but it is customarily measured in hours (h), minutes (m), and seconds (s), with 24h being equivalent to a full circle. Astronomers have chosen this unit to measure right ascension because they measure a star's location by timing its passage through the highest point in the sky as the Earth rotates. The line which passes through the highest point in the sky, called the meridian, is the projection of a longitude line onto the celestial sphere. Since a complete circle contains 24h of right ascension or 360° (degrees of arc), 1/24 of a circle is measured as 1h of right ascension, or 15°; 1/1440 of a circle is measured as 1m of right ascension, or 15 minutes of arc (also written as 15′); and 1/86400 of a circle contains 1s of right ascension, or 15 seconds of arc (also written as 15″). A full circle, measured in right-ascension units, contains 24 × 60 × 60 = 86400s, or 24 × 60 = 1440m, or 24h.
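The unit relationships above — 24h = 360°, so 1h = 15°, 1m = 15′ and 1s = 15″ — make conversion to degrees mechanical:

```python
# Right ascension unit conversions: a full circle is 24h = 360 degrees,
# so 1h = 15 deg, 1m = 15 arcmin, 1s = 15 arcsec.
def ra_to_degrees(h, m, s):
    """Convert RA given as (hours, minutes, seconds) to degrees."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

print(ra_to_degrees(1, 30, 0))   # 22.5 degrees
print(ra_to_degrees(24, 0, 0))   # 360.0 degrees: the full circle
```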
Explanation:
Because right ascensions are measured in hours (of rotation of the Earth), they can be used to time the positions of objects in the sky. For example, if a star with RA = 1h 30m 00s is at its meridian, then a star with RA = 20h 00m 00s will be at its meridian (its apparent highest point) 18.5 sidereal hours later.
Explanation:
Sidereal hour angle, used in celestial navigation, is similar to right ascension but increases westward rather than eastward. Usually measured in degrees (°), it is the complement of right ascension with respect to 24h. It is important not to confuse sidereal hour angle with the astronomical concept of hour angle, which measures the angular distance of an object westward from the local meridian.
Effects of precession:
The Earth's axis traces a small circle (relative to its celestial equator) slowly westward about the celestial poles, completing one cycle in about 26,000 years. This movement, known as precession, causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates (including right ascension) are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch. Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch. Right ascension for "fixed stars" on the equator increases by about 3.1 seconds per year or 5.1 minutes per century, but for fixed stars away from the equator the rate of change can be anything from negative infinity to positive infinity. (To this must be added the proper motion of a star.) Over a precession cycle of 26,000 years, "fixed stars" that are far from the ecliptic poles increase in right ascension by 24h, or about 5.6 minutes per century, whereas stars within 23.5° of an ecliptic pole undergo a net change of 0h. The right ascension of Polaris is increasing quickly—in AD 2000 it was 2.5h, but when it gets closest to the north celestial pole in 2100 its right ascension will be 6h. The North Ecliptic Pole in Draco and the South Ecliptic Pole in Dorado are always at right ascension 18h and 6h respectively.
The currently used standard epoch is J2000.0, which is January 1, 2000 at 12:00 TT. The prefix "J" indicates that it is a Julian epoch. Prior to J2000.0, astronomers used the successive Besselian epochs B1875.0, B1900.0, and B1950.0.
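The Julian epoch is defined from the Julian date by a simple linear relation: J2000.0 corresponds to JD 2451545.0, and a Julian year is exactly 365.25 days. A minimal sketch:

```python
def julian_epoch(jd):
    """Julian epoch corresponding to Julian date jd.
    J2000.0 is defined as JD 2451545.0 (2000 January 1, 12:00 TT)."""
    return 2000.0 + (jd - 2451545.0) / 365.25

print(julian_epoch(2451545.0))           # 2000.0
print(julian_epoch(2451545.0 + 365.25))  # 2001.0
```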
History:
The concept of right ascension has been known at least as far back as Hipparchus, who measured stars in equatorial coordinates in the 2nd century BC. But Hipparchus and his successors made their star catalogs in ecliptic coordinates, and the use of RA was limited to special cases.
With the invention of the telescope, it became possible for astronomers to observe celestial objects in greater detail, provided that the telescope could be kept pointed at the object for a period of time. The easiest way to do that is to use an equatorial mount, which allows the telescope to be aligned with one of its two pivots parallel to the Earth's axis. A motorized clock drive often is used with an equatorial mount to cancel out the Earth's rotation. As the equatorial mount became widely adopted for observation, the equatorial coordinate system, which includes right ascension, was adopted at the same time for simplicity. Equatorial mounts could then be accurately pointed at objects with known right ascension and declination by the use of setting circles. The first star catalog to use right ascension and declination was John Flamsteed's Historia Coelestis Britannica (1712, 1725). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bach's algorithm**
Bach's algorithm:
Bach's algorithm is a probabilistic polynomial time algorithm for generating random numbers along with their factorizations, named after its discoverer, Eric Bach. It is of interest because no algorithm is known that efficiently factors numbers, so the straightforward method, namely generating a random number and then factoring it, is impractical.
The algorithm performs, in expectation, O(log n) primality tests.
A simpler, but less efficient algorithm (performing, in expectation, O(log² n) primality tests) is due to Adam Kalai. Bach's algorithm may theoretically be used within cryptographic algorithms.
Overview:
Bach's algorithm produces a number x uniformly at random in the range N/2 < x ≤ N (for a given input N), along with its factorization. It does this by picking a prime number p and an exponent a such that p^a ≤ N, according to a certain distribution. The algorithm then recursively generates a number y in the range M/2 < y ≤ M, where M = N/p^a, along with the factorization of y. It then sets x = p^a · y, and appends p^a to the factorization of y to produce the factorization of x. This gives x with logarithmic distribution over the desired range; rejection sampling is then used to get a uniform distribution.
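Bach's algorithm itself is intricate, but Kalai's simpler variant mentioned above is easy to sketch. The version below (a sketch with illustrative names; the Miller-Rabin primality helper is my own) generates a uniform random integer in [1, N] together with its factorization by drawing a non-increasing random sequence, keeping its prime entries, and rejection-sampling:

```python
import random

def is_prime(n):
    # deterministic Miller-Rabin for n below ~3.3e24
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def kalai_random_factored(N):
    """Return (r, factors): r uniform in [1, N], factors its prime factorization."""
    while True:
        # draw a non-increasing sequence N >= s1 >= s2 >= ... until reaching 1
        seq = []
        s = random.randint(1, N)
        while s > 1:
            seq.append(s)
            s = random.randint(1, s)
        primes = [s for s in seq if is_prime(s)]  # repeats give multiplicity
        r = 1
        for p in primes:
            r *= p
        # accept r <= N with probability r/N; otherwise restart
        if r <= N and random.randint(1, N) <= r:
            return r, sorted(primes)

r, factors = kalai_random_factored(10**6)
print(r, factors)
```

Each restart costs one pass of primality tests over the sequence, which is how the expected O(log² n) bound arises.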
**1602 AM**
1602 AM:
Copies of the World Radio TV Handbook (including the 1991 edition) have identified 1602 kHz as a local frequency, akin to the Class C (former Class IV) radio stations in North America, which are limited to 1 kW.
The following radio stations broadcast on AM frequency 1602 kHz:
Australia:
2CP in Cooma, NSW (ABC SE New South Wales)
5LC in Leigh Creek, SA (ABC North & West)
3WL in Warrnambool, VIC (ABC South West Victoria)
Japan:
JOCC in Asahikawa
JODD in Fukuyama
JOFD in Fukushima
JOKC in Kofu
JOSB in Kitakyushu
**Pollenizer**
Pollenizer:
A pollenizer (or polleniser), sometimes pollinizer (or polliniser, see spelling differences) is a plant that provides pollen.
The word pollinator is often used when pollenizer is more precise. A pollinator is the biotic agent that moves the pollen, such as bees, moths, bats, and birds. Bees are thus often referred to as 'pollinating insects'.
The verb form to pollenize is to be the source of pollen, or to be the sire of the next plant generation.
While some plants are capable of self-pollenization, pollenizer is more often used in pollination management for a plant that provides abundant, compatible, and viable pollen at the same flowering time as the pollinated plant. For example, most crabapple varieties are good pollenizers for any apple tree that blooms at the same time, and are often used in apple orchards for the purpose. Some apple cultivars produce very little pollen or pollen that is sterile or incompatible with other apple varieties. These are poor pollenizers.
A pollenizer can also be the male plant in dioecious species (where entire plants are of a single sex), such as with kiwifruit or holly.
Nursery catalogs often specify that a cultivar should be planted as a "pollinator" for another cultivar, when they actually should be referring to it as a pollenizer. Strictly, a plant can only be a pollinator when it is self-fertile and it physically pollinates itself without the aid of an external pollinator. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gateway Handbook**
Gateway Handbook:
The HandBook was a very small and lightweight subnotebook originally introduced by Gateway 2000 in 1992. It quickly achieved critical acclaim and a cult-like following, especially in Japan.
It was designed by IQV and Tottori Sanyo and manufactured by Tottori Sanyo in Japan. The lead engineer on the product was Howard Fullmer and other significant contributors included Bob Burnett and Rick Murayama.
The product was only 9.7 in (250 mm) wide, 5.9 in (150 mm) deep, and 1.6 in (41 mm) high, and weighed less than 3 lb (1.4 kg). While it used a Chips and Technologies 8680 microprocessor, it was marketed as having 286-level performance. The C&T chip set included hardware emulation of the Intel 80186 processor and the HandBook used a special feature of the chip set called SuperSet whereby 80286 instructions were trapped and then emulated in software. This same feature was used to emulate the 8051 keyboard controller, serial port and numerous other I/O functions. Intel worked closely with IQV to include similar capabilities in the SL chip sets which were introduced in the mid-90s.
The HandBook had 640 KB of RAM, a 20 MB hard drive, and a monochrome blue-white CGA-compatible display. The unit could be powered by a rechargeable NiMH battery or six AA batteries in a special battery pack. The rechargeable batteries were unusual in that they could be charged without actually being in the laptop. A floppy disk drive was attached through a proprietary parallel port connector. A tremendous engineering effort went into the design of the HandBook's keyboard. It featured 17.8 mm center-to-center key spacing and 2 mm travel for a firm feel.
After the success of the original Gateway HandBook, Gateway came out with a 486 model. The HandBook 486 (as it was called) was originally available as two models: A 486SX/25 and a 486DX/40 model. Gateway later on came out with HandBook 486 models utilizing a 486SX/33 or 486DX/50 processor. All of these handbooks used a grayscale 640x480 VGA display. Because of the small size of the unit, the display was distorted — what appear as circles on other displays come out as ovals on the HandBook 486.
The built-in hard disk for the HandBook 486 was usually 120 MB in size. The HandBook 486 was produced between 1993 and 1995. The HandBook 486 had 4 MB of built-in RAM, which can be expanded to 20 MB. As of 2005, it is still possible to buy memory for the HandBook 486, although one should test the memory with memtest since memory for older computers is more likely to be defective.
It was possible to install Linux or OpenBSD on these computers; the HandBook 486 is probably the earliest Linux-compatible subnotebook released. It was even possible to run the X Window System after the memory was expanded. The HandBook 486 has a PCMCIA II interface. While modern CardBus cards do not work with this interface, most older PCMCIA II cards (as long as they use no more than 250 mA of power) work fine. The HandBook 486 also has a pointing device similar to the IBM TrackPoint, located on the right-hand side of the keyboard just above the enter key.
The Gateway HandBook remains one of the smallest laptops ever produced and was a precursor to Netbooks such as the Asus Eee PC, the Dell Inspiron Mini Series, and the Acer Aspire One. The Acer Aspire One is about the same size as the HandBook, and exists in a Gateway-branded form as the Gateway LT1004u. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nucleogenic**
Nucleogenic:
A nucleogenic isotope, or nuclide, is one that is produced by a natural terrestrial nuclear reaction, other than a reaction beginning with cosmic rays (the latter nuclides by convention are called by the different term cosmogenic). The nuclear reaction that produces nucleogenic nuclides is usually interaction with an alpha particle or the capture of fission or thermal neutrons. Some nucleogenic isotopes are stable and others are radioactive.
Example:
An example of a nucleogenic nuclide is neon-21, produced from neon-20 that absorbs a thermal neutron (though some neon-21 is also primordial). Other nucleogenic reactions that produce heavy neon isotopes are (n,α) reactions (fast-neutron capture followed by alpha emission) starting with magnesium-24 and magnesium-25, which yield neon-21 and neon-22 respectively. The source of the neutrons in these reactions is often secondary neutrons produced by alpha radiation from natural uranium and thorium in rock.
Types:
Because nucleogenic isotopes have been produced later than the birth of the solar system (and the nucleosynthetic events that preceded it), nucleogenic isotopes, by definition, are not primordial nuclides. However, nucleogenic isotopes should not be confused with much more common radiogenic nuclides that are also younger than primordial nuclides, but which arise as simple daughter isotopes from radioactive decay. Nucleogenic isotopes, as noted, are the result of a more complicated nuclear reaction, although such reactions may begin with a radioactive decay event.
Alpha particles that produce nucleogenic reactions come from natural alpha particle emitters in the uranium and thorium decay chains. Neutrons to produce nucleogenic nuclides may be produced by a number of processes, but due to the short half-life of free neutrons, all of these reactions must occur on Earth. Among the most common is cosmic-ray spallation production of neutrons from elements near the surface of the Earth. Alpha particles from radioactive decay also produce neutrons, via (α,n) knockout reactions on neutron-rich isotopes such as oxygen-18. Neutrons are also produced by neutron emission (a form of radioactive decay in some neutron-rich nuclides) and by spontaneous fission of heavy isotopes on Earth (particularly uranium-238).
Nucleogenesis:
Nucleogenesis (also known as nucleosynthesis) as a general phenomenon is a process usually associated with production of nuclides in the Big Bang or in stars, by nuclear reactions there. Some of these neutron reactions (such as the r-process and s-process) involve absorption by atomic nuclei of high-temperature (high-energy) neutrons from the star. These processes produce most of the chemical elements in the universe heavier than zirconium (element 40), because nuclear fusion processes become increasingly inefficient and unlikely for elements heavier than this. By convention, such heavier elements, produced in normal elemental abundance, are not referred to as "nucleogenic". Instead, this term is reserved for nuclides (isotopes) made on Earth from natural nuclear reactions.
Also, the term "nucleogenic" by convention excludes artificially produced radionuclides, for example tritium, many of which are produced in large amounts by similar artificial processes, but using the copious neutron flux produced by conventional nuclear reactors.
**Geophagia**
Geophagia:
Geophagia, also known as geophagy, is the intentional practice of eating earth or soil-like substances such as clay, chalk, or termite mounds. It is a behavioural adaptation that occurs in many non-human animals and has been documented in more than 100 primate species. Geophagy in non-human primates is primarily used for protection from parasites, to provide mineral supplements and to help metabolize toxic compounds from leaves. Geophagy also occurs in humans and is most commonly reported among children and pregnant women. Human geophagia is a form of pica – the craving and purposive consumption of non-food items – and is classified as an eating disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM) if not socially or culturally appropriate. Sometimes geophagy is a consequence of carrying a hookworm infection. Although its etiology remains unknown, geophagy has many potential adaptive health benefits as well as negative consequences.
Animals:
Geophagia is widespread in the animal kingdom. Galen, the Greek philosopher and physician, was the first to record the use of clay by sick or injured animals in the second century AD. This type of geophagia has been documented in "many species of mammals, birds, reptiles, butterflies and isopods, especially among herbivores".
Birds:
Many species of South American parrots have been observed at clay licks, and sulphur-crested cockatoos have been observed ingesting clays in Papua New Guinea. Analysis of soils consumed by wild birds shows that they often prefer soils with high clay content, usually with the smectite clay families being well represented. The preference for certain types of clay or soil can lead to unusual feeding behaviour. For example, Peruvian Amazon rainforest parrots congregate not just at one particular bend of the Manu River but at one specific layer of soil which runs hundreds of metres horizontally along that bend. The parrots avoid eating the substrate in layers one metre above or below the preferred layer. These parrots regularly eat seeds and unripe fruits containing alkaloids and other toxins that render the seeds and fruits bitter and even lethal. Because many of these chemicals become positively charged in the acidic stomach, they bind to clay minerals which have negatively charged cation-exchange sites, and are thereby rendered safe. Their preferred soils have a much higher cation-exchange capacity than the adjacent, rejected layers of soils because they are rich in the minerals smectite, kaolin, and mica. The preferred soils surpass the pure mineral kaolinite and surpass or approach pure bentonite in their capacity to bind quinine and tannic acid. In vitro and in vivo tests of these soils and many others from southeastern Peru indicate that they also release nutritionally important quantities of minerals such as calcium and sodium. In the Manu River example cited above, the preferred soil bands had much higher levels of sodium than those that were not chosen. Repeated studies have shown that the soils consumed most commonly by parrots in South America have higher sodium contents than those that are not consumed.
It is unclear which factor is driving avian geophagy. However, evidence is mounting that sodium is the most important driver among parrots in southeastern Peru. Parrots are known to eat toxic foods globally, but geophagy is concentrated in very specific regions. Researchers Lee et al. show that parrot geophagy in South America is positively correlated to a significant degree with distance from the ocean. This suggests that overall lack of sodium in the ecosystem, not variation in food toxicity, is a better predictor of the spatial distribution of geophagy. This work, coupled with the recent findings of consistently high sodium levels in consumed soils, make it highly likely that sodium is the primary driver of avian geophagy among parrots (and possibly other taxa) in the western Amazon Basin. This supplemental nutrients hypothesis is further supported by peak geophagy occurring during the parrots' breeding season.
Primates:
There are several hypotheses about the importance of geophagia in bats and primates. Chimpanzees in Kibale National Park, Uganda, have been observed to consume soil rich in kaolinite clay shortly before or after consuming plants including Trichilia rubescens, which possesses antimalarial properties in the laboratory. Geophagy is a behavioural adaptation seen in 136 species of nonhuman primates from the suborders Haplorrhini (81%) and Strepsirrhini (19%). The most commonly ingested soils are soils from mounds, tree bases, and termite mounds, 'Pong' soils, and forest-floor soils. Studies have reported several non-exclusive benefits of geophagy, such as protection from parasites (4.9%), mineral supplementation (19.5%), and help in metabolizing toxic compounds from leaves (12.2%). Soil analysis has shown that one of the main components of the earth consumed by these primates is clay minerals containing kaolinite, which is commonly used in medications for diarrheal and intestinal problems. Geophagic behaviour plays an important role in nonhuman primate health, and this kind of zoopharmacognosy differs from one species to another. For example, mountain gorillas from Rwanda tend to ingest clay soil during the dry season, when changing vegetation forces them to feed on plants with more toxic compounds; in this case the ingested clay absorbs the toxins, providing digestive benefits. A similar seasonal behavioural adaptation is seen in the red-handed howler monkeys of western Brazilian Amazonia, which also have to adapt to a shift toward feeding on leaves that contain more toxic compounds. In other cases, geophagy is used by ring-tailed lemurs as a preventive and therapeutic behaviour for parasite control and intestinal infection. These benefits from clay ingestion can also be observed among rhesus macaques.
In a study carried out on the island of Cayo Santiago, it was observed that the rhesus macaques there carried intestinal parasites, yet their health was not affected and they showed few gastrointestinal effects from these parasites. The data suggest this was due to the consumption of clay soil by this species. On the other hand, observations have shown that geophagy provides mineral supplements, as seen among Cambodia's colobines. That study was done at the salt licks in Veun Sai-Siem Pang Conservation Area, a site visited by various species of nonhuman primates. More in-depth research needs to be carried out in order to better understand this behavioural adaptation of geophagy among nonhuman primates.
Bats:
There is debate over whether geophagia in bats is primarily for nutritional supplementation or detoxification. It is known that some species of bats regularly visit mineral or salt licks to increase mineral consumption. However, Voigt et al. demonstrated that both mineral-deficient and healthy bats visit salt licks at the same rate. Therefore, mineral supplementation is unlikely to be the primary reason for geophagia in bats. Additionally, bat presence at salt licks increases during periods of high energy demand. Voigt et al. concluded that the primary purpose for bat presence at salt licks is for detoxification purposes, compensating for the increased consumption of toxic fruit and seeds.
Humans:
Anthropological and historical evidence:
Evidence for the likely origin of geophagy was found in the remains of early humans in Africa: The oldest evidence of geophagy practised by humans comes from the prehistoric site at Kalambo Falls on the border between Zambia and Tanzania (Root-Bernstein & Root-Bernstein, 2000). Here, a calcium-rich white clay was found alongside the bones of Homo habilis (the immediate predecessor of Homo sapiens).
Geophagia is nearly universal around the world in tribal and traditional rural societies (although apparently it has not been documented in Japan or Korea). In the ancient world, several writers noted the phenomenon of geophagia. Pliny is said to have noted the ingestion of soil on Lemnos, an island of Greece, and the use of the soils from this island was noted until the 14th century. The textbook of Hippocrates (460–377 BCE) mentions geophagia, and the famous medical textbook titled De Medicina edited by A. Cornelius Celsus (14–37 CE) seems to link anaemia to geophagia. The existence of geophagy among Native Americans was noted by early explorers in the Americas, including Gabriel Soares de Sousa, who in 1587 reported a tribe in Brazil using it in suicide, and Alexander von Humboldt, who said that a tribe called the Otomacs ate large amounts of soil. In Africa, David Livingstone wrote about slaves eating soil in Zanzibar, and it is also thought that large numbers of slaves brought with them soil-eating practices when they were trafficked to the New World as part of the transatlantic slave trade. Slaves who practised geophagia were nicknamed "clay-eaters" because they were known to consume clay, as well as spices, ash, chalk, grass, plaster, paint, and starch. In more recent times, according to Dixie's Forgotten People: the South's Poor Whites, geophagia was common among poor whites in the Southeastern United States in the 19th and early 20th centuries, and was often ridiculed in popular literature. The literature also states, "Many men believed that eating clay increased sexual prowess, and some females claimed that eating clay helped pregnant women to have an easy delivery." Geophagia among Southerners may have been caused by the high prevalence of hookworm disease, of which the desire to consume soil is a symptom. Geophagia has become less prevalent as rural Americans assimilate into urban culture.
However, cooked, baked, and processed dirt and clay are sold in health food stores and rural flea markets in the American South.
Contemporary practices:
In Africa, kaolinite, sometimes known as kalaba (in Gabon and Cameroon), calaba, and calabachop (in Equatorial Guinea), is eaten for pleasure or to suppress hunger. Kaolin for human consumption is sold at most markets in Cameroon and is often flavoured with spices such as black pepper and cardamom. Consumption is greatest among women, especially to cure nausea during pregnancy, despite possibly dangerous levels of arsenic and lead for the unborn child. Another example of geophagia was reported in Mangaung, Free State Province in South Africa, where the practice was geochemically investigated. Calabash chalk is also eaten in west Africa.
In Haiti, poor people are known to eat bonbon tè, made from soil, salt, and vegetable shortening. These biscuits hold minimal nutritional value, but manage to keep the poor alive. However, long-term consumption of the biscuits is reported to cause stomach pains and malnutrition, and is not recommended by doctors. In Central Java and East Java, Indonesia, a food made of soil called ampo is eaten as a snack or light meal. It consists of pure clay, without any other ingredients mixed in. Bentonite clay is available worldwide as a digestive aid; kaolin is also widely used as a digestive aid and as the base for some medicines. Attapulgite, another type of clay, is an active ingredient in many anti-diarrheal medicines.
Impact on health:
Clay minerals have been reported to have beneficial microbiological effects, such as protecting the stomach against toxins, parasites, and pathogens. Humans cannot synthesize vitamin B12 (cobalamin), so geophagia may be a behavioural adaptation to obtain it from bacteria in the soil. Mineral content in soils varies by region, but many soils contain high levels of calcium, copper, magnesium, iron, and zinc, minerals that are critical for developing fetuses; deficiencies in them may underlie the cravings for metallic tastes, soil, or ice reported in pregnant women. To the extent that such cravings and the resulting mineral consumption (or, in the case of ice or other cold foods, the vasoconstriction of neck veins that may increase brain oxygenation) are therapeutically effective in decreasing infant mortality, the genetic predispositions and associated environmental triggers are likely to be found in the infant as well. Likewise, multigenerationally impoverished villages and other socioeconomically closed, genetically homogeneous communities are more likely to have favoured the expression of soil- or clay-consumption cravings, which would increase the likelihood of survival through multiple pregnancies for both sexes. There are obvious health risks in the consumption of soil that is contaminated by animal or human feces; in particular, helminth eggs, such as those of Ascaris, can stay viable in the soil for years and lead to helminth infections. Tetanus poses a further risk. Lead poisoning is also associated with soil ingestion, and zinc exposure can be problematic among people who eat soils on a regular basis. Gestational geophagia (geophagia in pregnancy) has been associated with various homeostatic disruptions and oxidative damage in rats.
**Beer boot**
Beer boot:
A beer boot (German: Bierstiefel) is a boot-shaped beer glass. Beer boots exist in sizes ranging from 0.5 litres (17 US fl oz) up to 5 litres (1+3⁄8 US gal), but 2 litres (1⁄2 US gal) is a more typical size. Beer boots are commonly consumed communally and are popular with younger people as part of drinking games.
Production:
Because of their shape, beer boots are often made from blown or pressed glass.
Origin:
Shoe- or boot-shaped drinking containers have a long tradition; archaeologists have found examples at Urnfield culture sites in Unterhautzenthal near Korneuburg in Lower Austria and at the Glauberg in Hesse, Germany. In Asia Minor, shoe-shaped drinking vessels have been found dating to the early 2nd millennium BCE; others, dating to the early 1st millennium BCE, have been found in Azerbaijan, Armenia, and at Urartu sites near Lake Van. Similar glasses are attested into the Middle Ages. The modern beer boot takes its form from the Hessian boot, which saw military use into the 19th century. Drinking from shoes was a common hazing ritual in the military, a practice that spread further through German student fraternities.
Use:
Due to the size and volume, a beer boot is usually consumed communally. When drinking, if the toe of the boot is facing away from the drinker, a portion of the beer is held at low pressure in the toe. When the air reaches the toe, the beer can rush out into the face of the drinker.
The use of beer boots featured prominently in the 2006 film Beerfest. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Muscle relaxant**
Muscle relaxant:
A muscle relaxant is a drug that affects skeletal muscle function and decreases muscle tone. It may be used to alleviate symptoms such as muscle spasms, pain, and hyperreflexia. The term "muscle relaxant" is used to refer to two major therapeutic groups: neuromuscular blockers and spasmolytics. Neuromuscular blockers act by interfering with transmission at the neuromuscular end plate and have no central nervous system (CNS) activity. They are often used during surgical procedures and in intensive care and emergency medicine to cause temporary paralysis. Spasmolytics, also known as "centrally acting" muscle relaxants, are used to alleviate musculoskeletal pain and spasms and to reduce spasticity in a variety of neurological conditions. While both neuromuscular blockers and spasmolytics are often grouped together as muscle relaxants, the term is commonly used to refer to spasmolytics only.
History:
The earliest known use of muscle relaxant drugs was by natives of the Amazon Basin in South America, who used poison-tipped arrows that produced death by skeletal muscle paralysis. This was first documented in the 16th century, when European explorers encountered it. This poison, known today as curare, led to some of the earliest scientific studies in pharmacology. Its active ingredient, tubocurarine, as well as many synthetic derivatives, played a significant role in scientific experiments to determine the function of acetylcholine in neuromuscular transmission. By 1943, neuromuscular blocking drugs had become established as muscle relaxants in the practice of anesthesia and surgery. The U.S. Food and Drug Administration (FDA) approved the use of carisoprodol in 1959, metaxalone in August 1962, and cyclobenzaprine in August 1977. Other skeletal muscle relaxants of that type used around the world come from a number of drug categories; other drugs used primarily for this indication include orphenadrine (anticholinergic), chlorzoxazone, tizanidine (a clonidine relative), diazepam, tetrazepam and other benzodiazepines, mephenoxalone, methocarbamol, dantrolene, and baclofen. Drugs once used to relax skeletal muscles but now rarely or no longer used include meprobamate, barbiturates, methaqualone, glutethimide and the like; some subcategories of opioids have muscle relaxant properties, and some are marketed in combination drugs with skeletal and/or smooth muscle relaxants, such as whole opium products, some ketobemidone, piritramide and fentanyl preparations, and Equagesic.
Neuromuscular blockers:
Muscle relaxation and paralysis can theoretically occur by interrupting function at several sites, including the central nervous system, myelinated somatic nerves, unmyelinated motor nerve terminals, nicotinic acetylcholine receptors, the motor end plate, and the muscle membrane or contractile apparatus. Most neuromuscular blockers function by blocking transmission at the end plate of the neuromuscular junction. Normally, a nerve impulse arrives at the motor nerve terminal, initiating an influx of calcium ions, which causes the exocytosis of synaptic vesicles containing acetylcholine. Acetylcholine then diffuses across the synaptic cleft. It may be hydrolysed by acetylcholinesterase (AChE) or bind to the nicotinic receptors located on the motor end plate. The binding of two acetylcholine molecules results in a conformational change in the receptor that opens the sodium-potassium channel of the nicotinic receptor. This allows Na+ and Ca2+ ions to enter the cell and K+ ions to leave the cell, causing a depolarization of the end plate, resulting in muscle contraction. Following depolarization, the acetylcholine molecules are removed from the end plate region and enzymatically hydrolysed by acetylcholinesterase. Normal end plate function can be blocked by two mechanisms. Nondepolarizing agents, such as tubocurarine, block the agonist, acetylcholine, from binding to nicotinic receptors and activating them, thereby preventing depolarization. Alternatively, depolarizing agents, such as succinylcholine, are nicotinic receptor agonists which mimic ACh; they block muscle contraction by depolarizing the end plate to such an extent that the receptor desensitizes and can no longer initiate an action potential to cause muscle contraction.
Both of these classes of neuromuscular blocking drugs are structurally similar to acetylcholine, the endogenous ligand, in many cases containing two acetylcholine molecules linked end-to-end by a rigid carbon ring system, as in pancuronium (a nondepolarizing agent).
Spasmolytics:
The generation of the neuronal signals in motor neurons that cause muscle contractions is dependent on the balance of synaptic excitation and inhibition the motor neuron receives. Spasmolytic agents generally work by either enhancing the level of inhibition or reducing the level of excitation. Inhibition is enhanced by mimicking or enhancing the actions of endogenous inhibitory substances, such as GABA.
Terminology:
Because they may act at the level of the cortex, brain stem, or spinal cord, or all three areas, they have traditionally been referred to as "centrally acting" muscle relaxants. However, it is now known that not every agent in this class has CNS activity (e.g. dantrolene), so this name is inaccurate. Most sources still use the term "centrally acting muscle relaxant". According to MeSH, dantrolene is usually classified as a centrally acting muscle relaxant. The World Health Organization, in its ATC, uses the term "centrally acting agents", but adds a distinct category of "directly acting agents" for dantrolene. Use of this terminology dates back to at least 1973. The term "spasmolytic" is also considered a synonym for antispasmodic.
Clinical use:
Spasmolytics such as carisoprodol, cyclobenzaprine, metaxalone, and methocarbamol are commonly prescribed for low back pain or neck pain, fibromyalgia, tension headaches, and myofascial pain syndrome. However, they are not recommended as first-line agents; in acute low back pain, they are no more effective than paracetamol or nonsteroidal anti-inflammatory drugs (NSAIDs), and in fibromyalgia they are no more effective than antidepressants. Nevertheless, some (low-quality) evidence suggests muscle relaxants can add benefit to treatment with NSAIDs. In general, no high-quality evidence supports their use. No drug has been shown to be better than another, and all of them have adverse effects, particularly dizziness and drowsiness. Concerns about possible abuse and interaction with other drugs, especially if increased sedation is a risk, further limit their use. A muscle relaxant is chosen based on its adverse-effect profile, tolerability, and cost. Muscle relaxants (according to one study) were not advised for orthopedic conditions, but rather for neurological conditions such as spasticity in cerebral palsy and multiple sclerosis. Dantrolene, although thought of primarily as a peripherally acting agent, is associated with CNS effects, whereas baclofen activity is strictly associated with the CNS.
Muscle relaxants are thought to be useful in painful disorders based on the theory that pain induces spasm and spasm causes pain. However, considerable evidence contradicts this theory. In general, muscle relaxants are not approved by the FDA for long-term use. However, rheumatologists often prescribe cyclobenzaprine nightly to increase stage 4 sleep; by increasing this sleep stage, patients feel more refreshed in the morning. Improving sleep is also beneficial for patients who have fibromyalgia. Muscle relaxants such as tizanidine are prescribed in the treatment of tension headaches. Diazepam and carisoprodol are not recommended for older adults, pregnant women, people who have depression, or those with a history of drug or alcohol addiction.
Mechanism:
Because of the enhancement of inhibition in the CNS, most spasmolytic agents have the side effects of sedation and drowsiness, and may cause dependence with long-term use. Several of these agents also have abuse potential, and their prescription is strictly controlled. The benzodiazepines, such as diazepam, interact with the GABAA receptor in the central nervous system. While diazepam can be used in patients with muscle spasm of almost any origin, it produces sedation in most individuals at the doses required to reduce muscle tone. Baclofen is considered to be at least as effective as diazepam in reducing spasticity, and causes much less sedation. It acts as a GABA agonist at GABAB receptors in the brain and spinal cord, resulting in hyperpolarization of neurons expressing this receptor, most likely due to increased potassium ion conductance. Baclofen also inhibits neural function presynaptically by reducing calcium ion influx, thereby reducing the release of excitatory neurotransmitters in both the brain and spinal cord. It may also reduce pain in patients by inhibiting the release of substance P in the spinal cord. Clonidine and other imidazoline compounds have also been shown to reduce muscle spasms through their central nervous system activity. Tizanidine is perhaps the most thoroughly studied clonidine analog; it is an agonist at α2-adrenergic receptors, but reduces spasticity at doses that cause significantly less hypotension than clonidine. Neurophysiologic studies show that it depresses excitatory feedback from muscles that would normally increase muscle tone, thereby minimizing spasticity. Furthermore, several clinical trials indicate that tizanidine has efficacy similar to other spasmolytic agents, such as diazepam and baclofen, with a different spectrum of adverse effects. The hydantoin derivative dantrolene is a spasmolytic agent with a unique mechanism of action outside of the CNS.
It reduces skeletal muscle strength by inhibiting excitation-contraction coupling in the muscle fiber. In normal muscle contraction, calcium is released from the sarcoplasmic reticulum through the ryanodine receptor channel, which causes the tension-generating interaction of actin and myosin. Dantrolene interferes with this process by binding to the ryanodine receptor and blocking the release of calcium through the channel. Muscle that contracts rapidly is more sensitive to dantrolene than muscle that contracts slowly; cardiac muscle and smooth muscle are depressed only slightly, most likely because the release of calcium by their sarcoplasmic reticulum involves a slightly different process. Major adverse effects of dantrolene include general muscle weakness, sedation, and occasionally hepatitis. Other common spasmolytic agents include methocarbamol, carisoprodol, chlorzoxazone, cyclobenzaprine, gabapentin, metaxalone, and orphenadrine.
Thiocolchicoside is a muscle relaxant with anti-inflammatory and analgesic effects and an unknown mechanism of action. It acts as a competitive antagonist at GABAA and glycine receptors with similar potencies, as well as at nicotinic acetylcholine receptors, albeit to a much lesser extent. It has powerful proconvulsant activity and should not be used in seizure-prone individuals.
Side effects:
Patients most commonly report sedation as the main adverse effect of muscle relaxants. Usually, people become less alert when they are under the effects of these drugs. People are normally advised not to drive vehicles or operate heavy machinery while under the effects of muscle relaxants.
Cyclobenzaprine produces confusion and lethargy, as well as anticholinergic side effects. When taken in excess or in combination with other substances, it may also be toxic. While the body adjusts to this medication, patients may experience dry mouth, fatigue, lightheadedness, constipation, or blurred vision. Some serious but unlikely side effects may be experienced, including mental or mood changes, confusion and hallucinations, and difficulty urinating. In very few cases, rare but very serious side effects may occur: irregular heartbeat, yellowing of the eyes or skin, fainting, abdominal pain including stomach ache, nausea or vomiting, lack of appetite, seizures, dark urine, or loss of coordination. Patients taking carisoprodol for a prolonged time have reported dependence, withdrawal, and abuse, although most of these cases were reported by patients with a history of addiction. These effects were also reported by patients who took it in combination with other drugs with abuse potential, and, in fewer cases, reports of carisoprodol-associated abuse appeared when it was used without other drugs with abuse potential. Common side effects caused by metaxalone include dizziness, headache, drowsiness, nausea, irritability, nervousness, upset stomach, and vomiting. Severe side effects of metaxalone, such as severe allergic reactions (rash, hives, itching, difficulty breathing, tightness in the chest, swelling of the mouth, face, lips, or tongue), chills, fever, and sore throat, may require medical attention. Other severe side effects include unusual or severe tiredness or weakness, as well as yellowing of the skin or the eyes. When baclofen is administered intrathecally, it may cause CNS depression accompanied by cardiovascular collapse and respiratory failure. Tizanidine may lower blood pressure; this effect can be controlled by administering a low dose at the beginning and increasing it gradually.
| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Time Stamp Counter**
The Time Stamp Counter (TSC) is a 64-bit register present on all x86 processors since the Pentium. It counts the number of CPU cycles since reset. The instruction RDTSC returns the TSC in EDX:EAX. In x86-64 mode, RDTSC also clears the upper 32 bits of RAX and RDX. Its opcode is 0F 31. Pentium competitors such as the Cyrix 6x86 did not always have a TSC and may treat RDTSC as an illegal instruction. Cyrix included a Time Stamp Counter in their MII.
Use:
The Time Stamp Counter was once an excellent high-resolution, low-overhead way for a program to get CPU timing information. With the advent of multi-core/hyper-threaded CPUs, systems with multiple CPUs, and hibernating operating systems, the TSC cannot be relied upon to provide accurate results unless great care is taken to correct for the possible flaws: the rate of tick, and whether all cores (processors) have identical values in their time-keeping registers. There is no promise that the timestamp counters of multiple CPUs on a single motherboard will be synchronized. Therefore, a program can get reliable results only by limiting itself to run on one specific CPU. Even then, the CPU speed may change because of power-saving measures taken by the OS or BIOS, or the system may be hibernated and later resumed, resetting the TSC. In those latter cases, to remain accurate, the program must re-calibrate the counter periodically.
Relying on the TSC also reduces portability, as other processors may not have a similar feature. Recent Intel processors include a constant rate TSC (identified by the kern.timecounter.invariant_tsc sysctl on FreeBSD or by the "constant_tsc" flag in Linux's /proc/cpuinfo). With these processors, the TSC ticks at the processor's nominal frequency, regardless of the actual CPU clock frequency due to turbo or power saving states. Hence TSC ticks are counting the passage of time, not the number of CPU clock cycles elapsed.
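On Linux, the "constant_tsc" flag mentioned above can be checked programmatically from /proc/cpuinfo. A minimal sketch, assuming a Linux system (the helper name and the exact set of flags checked here are illustrative choices, not part of any standard API):

```python
# Sketch: report the TSC-related CPU flags that the Linux kernel exposes in
# /proc/cpuinfo. Linux-only; on other systems the function returns an empty set.

def tsc_flags(path="/proc/cpuinfo"):
    """Return the subset of TSC-related flags found on the first 'flags' line."""
    wanted = {"tsc", "constant_tsc", "nonstop_tsc", "rdtscp"}
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return wanted & set(line.split(":", 1)[1].split())
    except OSError:
        pass  # /proc/cpuinfo absent (non-Linux) or unreadable
    return set()

if __name__ == "__main__":
    flags = tsc_flags()
    print("constant_tsc present:", "yes" if "constant_tsc" in flags else "no")
```

"nonstop_tsc" (the TSC keeps counting in deep sleep states) is checked alongside "constant_tsc" because, on Linux, the two together are what make the TSC usable as a wall-clock source.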
On Windows platforms, Microsoft strongly discourages using the TSC for high-resolution timing for exactly these reasons, providing instead the Windows APIs QueryPerformanceCounter and QueryPerformanceFrequency (which itself uses RDTSCP if the system has an invariant TSC, i.e. the frequency of the TSC does not vary with the current core's frequency). On Linux systems, a program can get similar functionality by reading the value of the CLOCK_MONOTONIC_RAW clock using the clock_gettime function. Starting with the Pentium Pro, Intel processors have supported out-of-order execution, where instructions are not necessarily performed in the order they appear in the program. This can cause the processor to execute RDTSC earlier than a simple program expects, producing a misleading cycle count. The programmer can solve this problem by inserting a serializing instruction, such as CPUID, to force every preceding instruction to complete before allowing the program to continue. The RDTSCP instruction is a variant of RDTSC that provides partial serialization of the instruction stream, but it should not be considered fully serializing.
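The CLOCK_MONOTONIC_RAW approach described above can be used from Python as well as C. A minimal sketch, assuming a Unix-like system with clock_gettime support (the time_block helper is illustrative, and the fallback to CLOCK_MONOTONIC is an assumption for platforms that lack the RAW clock):

```python
import time

def time_block(fn, *args):
    """Return (result, elapsed_seconds), timed with the raw monotonic clock."""
    # CLOCK_MONOTONIC_RAW is Linux-specific; fall back to the NTP-adjusted
    # monotonic clock elsewhere.
    clock = getattr(time, "CLOCK_MONOTONIC_RAW", time.CLOCK_MONOTONIC)
    start = time.clock_gettime(clock)
    result = fn(*args)
    return result, time.clock_gettime(clock) - start

if __name__ == "__main__":
    total, dt = time_block(sum, range(1_000_000))
    print(f"summing took {dt:.6f} s")
```

Unlike raw RDTSC, this clock is maintained by the kernel across frequency changes and suspend/resume, which is exactly why it is the recommended interface for portable timing.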
Implementation in various processors:
Intel processor families increment the time-stamp counter differently. For Pentium M processors (family [06H], models [09H, 0DH]), for Pentium 4 processors and Intel Xeon processors (family [0FH], models [00H, 01H, or 02H]), and for P6 family processors, the time-stamp counter increments with every internal processor clock cycle. The internal processor clock cycle is determined by the current core-clock to bus-clock ratio. Intel SpeedStep technology transitions may also impact the processor clock.
For Pentium 4 processors, Intel Xeon processors (family [0FH], models [03H and higher]); for Intel Core Solo and Intel Core Duo processors (family [06H], model [0EH]); for the Intel Xeon processor 5100 series and Intel Core 2 Duo processors (family [06H], model [0FH]); for Intel Core 2 and Intel Xeon processors (family [06H], display_model [17H]); for Intel Atom processors (family [06H], display_model [1CH]): the time-stamp counter increments at a constant rate. That rate may be set by the maximum core-clock to bus-clock ratio of the processor or may be set by the maximum resolved frequency at which the processor is booted. The maximum resolved frequency may differ from the maximum qualified frequency of the processor.The specific processor configuration determines the behavior. Constant TSC behavior ensures that the duration of each clock tick is uniform and makes it possible to use the TSC as a wall-clock timer even if the processor core changes frequency. This is the architectural behavior for all later Intel processors.
AMD processors up to the K8 core always incremented the time-stamp counter every clock cycle. As a result, power-management features that change the clock speed also changed the rate at which the counter incremented, and the values could get out of sync between different cores or processors in the same system. For Windows, AMD provides a utility to periodically synchronize the counters on multiple-core CPUs.
Since the family 10h (Barcelona/Phenom), AMD chips feature a constant TSC, which can be driven either by the HyperTransport speed or the highest P state. A CPUID bit (Fn8000_0007:EDX_8) advertises this; Intel-CPUs also report their invariant TSC on that bit.
Operating system use:
An operating system may provide both TSC-based and non-TSC-based methods of time keeping, under administrator control. For example, on some versions of the Linux kernel, seccomp sandboxing mode disables RDTSC. It can also be disabled using the PR_SET_TSC argument to the prctl() system call.
Use in exploiting cache side-channel attacks:
The time stamp counter can be used to time instructions precisely, which can be exploited in the Meltdown and Spectre security vulnerabilities. However, if it is not available, other counters or timers can be used instead, as is the case with the ARM processors vulnerable to this type of attack.
Other architectures:
Other processors also have registers which count CPU clock cycles, but with different names. For instance, on the AVR32, it is called the Performance Clock Counter (PCCNT) register. SPARC V9 provides the TICK register. PowerPC provides the 64-bit TBR register.
ARMv7 and ARMv8-A architectures provide a generic counter which counts at a constant frequency. ARMv7 also provides the Cycle Counter Register (CCNT), which can be read and written, but access to it is privileged.
**Transposition of the great vessels**
Transposition of the great vessels (TGV) is a group of congenital heart defects involving an abnormal spatial arrangement of any of the great vessels: superior and/or inferior venae cavae, pulmonary artery, pulmonary veins, and aorta. Congenital heart diseases involving only the primary arteries (pulmonary artery and aorta) belong to a sub-group called transposition of the great arteries (TGA), which is considered the most common congenital heart lesion that presents in neonates.
Types:
Transposed vessels can present with atrioventricular and/or ventriculoarterial discordance. The effects may range from a slight change in blood pressure to an interruption in circulation, depending on the nature and degree of the misplacement and on which specific vessels are involved. Although "transposed" literally means "swapped", many types of TGV involve vessels that are in abnormal positions without actually being swapped with each other. The terms TGV and TGA are most commonly used in reference to dextro-TGA, in which the two main arteries are in swapped positions; however, both terms are also commonly used, though to a slightly lesser extent, in reference to levo-TGA, in which both the arteries and the ventricles are swapped, while other defects in this category are almost never referred to by either of these terms.
Dextro-Transposition of the great arteries
Dextro-Transposition of the great arteries (also known as dextro-TGA) is a cyanotic heart defect in which the aorta arises from the right ventricle and the pulmonary artery arises from the left ventricle. This switch causes deoxygenated blood from the right heart to be pumped immediately through the aorta and circulated throughout the body and the heart itself, bypassing the lungs altogether. In this same condition, the left heart continuously pumps oxygenated blood back into the lungs through the pulmonary artery, instead of out into the body's circulation as it normally would. In effect, two separate "parallel" circulatory systems are created. It is called a cyanotic congenital heart defect (CHD) because the newborn infant turns blue (cyanotic) from the lack of oxygen.
Levo-Transposition of the great arteries
Levo-Transposition of the great arteries (also known as levo-TGA, congenitally corrected TGA, double discordance, or ventricular inversion) is a rare, acyanotic heart defect in which the primary arteries are transposed, with the aorta anterior and to the left of the pulmonary artery, and the morphological left and right ventricles, with their corresponding atrioventricular valves, are also transposed. In other words, the right ventricle is on the left side of the heart and the left ventricle is on the right side. The systemic and the pulmonary circulation remain connected in this condition. Complications can arise from the pressure change, because the morphological right ventricle, which is adapted for pumping blood into the low-pressure pulmonary circulation, is tasked with pumping blood against the much higher resistance of the systemic circulation, since it now occupies the position where the left ventricle is typically located.
Simple and complex TGV
In many cases, TGV is accompanied by other heart defects, the most common type being intracardiac shunts such as atrial septal defect (including patent foramen ovale), ventricular septal defect, and patent ductus arteriosus. Stenosis, or other defects, of valves and/or vessels may also be present. When no other heart defects are present, it is called 'simple' TGV; when other defects are present, it is called 'complex' TGV.
Symptoms and signs:
Symptoms may appear at birth or after birth. The severity of symptoms depends on the type of TGV and the type and size of other heart defects that may be present (ventricular septal defect, atrial septal defect, or patent ductus arteriosus). Most babies with TGA have blue skin color (cyanosis) in the first hours or days of their lives, since dextro-TGA is the more common type. Other symptoms include:
•Fast breathing (tachypnea)
•Difficulty breathing (dyspnea)
•Fast heart rate (tachycardia)
•Poor feeding
Risk factors:
Preexisting diabetes mellitus of a pregnant mother is a risk factor that has been described for the fetus having TGV.
Diagnosis:
•Electrocardiogram: An electrocardiogram (ECG) records the electrical activity of the heart through the use of electrodes that are placed on the body. The findings through this diagnostic method are not specific to only TGA. If TGA is present, rightward deviation of the QRS complex and right ventricular hypertrophy or biventricular hypertrophy may be noted.
•Chest X-ray: On chest X-ray (CXR), transposition of the great vessels typically shows a cardio-mediastinal silhouette appearing as an "egg on a string", in which the enlarged heart represents an egg on its side and the narrowed, atrophic thymus of the superior mediastinum represents the string.
•Echocardiogram: An echocardiogram is an ultrasound of the heart that accurately assesses the heart's structure and function, and can show the specific features of TGA, if present. This imaging modality allows for the definitive diagnosis of TGA to be made.
•Cardiac catheterization: Catheterization is done if other diagnostic tests do not provide enough information to make a diagnosis, or if a neonate is unstable. During this procedure, a catheter is inserted into an artery or vein in the groin and advanced up to the heart. Dye is used to visualize the heart's structures on X-ray. The procedure can also measure the pressures in the heart and lungs.
Treatment:
All infants with TGA will need surgery to correct the defect. Life expectancy is only a few months if corrective surgery is not performed.
Before surgery: For newborns with transposition, prostaglandins can be given to keep the ductus arteriosus open, which allows for the mixing of the otherwise isolated pulmonary and systemic circuits. Thus, oxygenated blood that recirculates back to the lungs can mix with blood that circulates throughout the body and can keep the body oxygenated until surgery can be performed. Atrial septostomy can also be performed, usually with a cardiac catheter instead of surgery, to enlarge a natural connection between the heart's upper chambers (atria). This allows the oxygen-rich and oxygen-poor blood to mix, resulting in improved oxygen delivery to the baby's body.
Surgery: The arterial switch operation is a surgery in which the pulmonary artery and the aorta are moved to their normal positions. This is the most common surgery done to correct dextro-TGA, and is considered the definitive treatment. The atrial switch operation is an alternative surgical option when the arterial switch is not feasible due to the particular coronary artery anatomy. This operation creates a tunnel (baffle) between the heart's two upper chambers (atria).
After surgery: Lifelong follow-up care with a cardiologist is needed. Most infants who undergo surgery have their symptoms relieved and are able to live a normal life. Potential complications include coronary artery problems, heart valve problems, or irregular heart rhythms (arrhythmias).
History:
Transposition of the Great Vessels was first described in 1797 by Matthew Baillie.
**Código F.A.M.A.**
Código F.A.M.A. is the first reality television show for children in Mexico.
Format:
From thousands of auditions, 40 (season 1), 16 (season 2), or 17 (season 3) children are chosen to form the first "phase" of the show, which is called "Código Bronce" or "Code Bronze".
In the second phase, the finalists — 8 (season 1), 6 (seasons 2 and 3) — are revealed. The second phase is called "Código Plata" or "Code Silver".
The third and final phase is reached as a winner is announced and this level is known as "Código Oro" or "Code Gold".
Every participant who reaches a certain "code" receives a medal of the respective metal, i.e., bronze, silver or gold.
Season 1: Código F.A.M.A.:
Winner: Código Oro (Code Gold)
1st place: Miguel Martínez
Finalists: Código Plata (Code Silver)
2nd place: Adán Nieves
3rd place: Gladys Gallegos
4th place: Sergio Guerrero
5th place: Diego Boneta (then credited as Diego González)
6th place: Xitlali Rodríguez
7th place: María Chacón
8th place: Jesús Zavala
Alegrijes y Rebujos
The winner, Miguel Martínez, went on to star in the Mexican soap opera Alegrijes y Rebujos, along with seven others from the final group of eight (María Chacón, Jesús Zavala, Diego González, Nora Cano, Michelle Álvarez, Antonio Hernández and Allisson Lozano). The soap opera was a success, as were the two soundtrack albums that were released from the show. All eight of the actors continued to perform as a musical group with the same name as the soap opera. They toured Mexico and were also involved in the second season of Código F.A.M.A..
Season 2: Código F.A.M.A. 2:
Winner: Código Oro (Code Gold)
1st place: Jonathan Becerra
Finalists: Código Plata (Code Silver)
2nd place: Marijose Salazar
3rd place: Jorge Escobedo
4th place: Alex Rivera
5th place: Brissia Mayagoitia
6th place: J. Sergio Ortiz Pérez
Eliminated: Código Bronce (Code Bronze)
7th: José Alberto Inzunza
8th: Anhuar Escalante
9th: Elisabet Martínez Saldívar
10th: Claudia Ledón Olguín
11th: Paula Gutierrez D'Esesarte
12th: Viviana Ramos Macouzet
13th: María Fernanda González
14th: Israel Salas Hernández
15th: Ricardo Lorenzo Balderas
16th: Mónica López Alonso
Misión S.O.S.
The winner went on to star in the novela Misión S.O.S. with three other contestants (Marijose Salazar, Alex Rivera and Anhuar Escalante). Also appearing were contestants from the first season of Código F.A.M.A., including Miguel Martínez (the winner) and Gladys Gallegos in her first TV role. The novela was also a success. The cast also formed a group with the same name as the novela, toured Mexico, and released a soundtrack with original music made for the telenovela.
Season 3: Código F.A.M.A. 3:
Winner: Código Oro (Code Gold)
1st place: Adriana Ahumada
Finalists: Código Plata (Code Silver)
Miguel Jiménez
Fernanda Jiménez
Rodrigo Salas
Eliminated: Código Bronce (Code Bronze)
Jesús Trejo
Alann Mora
Evelin Acosta
Maritza Barraza
Joel Bernal
Cecilia Camacho
Ricardo Ceceña
Estefania Contreras
Mariana Dávila
Iván Félix
Juan José Huerta
Alejandra Leza
Mónica López
Claritze Rodríguez
La Fea Más Bella
Unlike the previous winners, there was not a children's telenovela starring Adriana Ahumada. She had a small role in the telenovela by the same producer of the show. She played the daughter of Lola, who is part of Letty's "Ugly Squad". Also appearing were Miguel Jiménez and Fernanda Jiménez, who also had small roles in La Fea Más Bella, as the son of Paula María and the daughter of Martha (Paula and Martha are also part of Letty's "Ugly Squad").
The winner of Código F.A.M.A. Internacional, Elizabeth Suárez, was the only one who could not participate in La Fea Más Bella, because she was living in the Dominican Republic. She was supposed to star in a soap opera in Mexico, but the project never continued.
International: Código F.A.M.A. Internacional:
This fourth installment began immediately after CF3 concluded. Twenty participants, representing Latin American countries, the United States and Spain, were brought to Mexico to compete for the Code Diamond prize: a recording/acting contract, a music tour of all the twenty participating countries, and scholarships.
Winner: Código Diamante (Code Diamond)
Elizabeth María Suárez Rosario, Dominican Republic
Finalists
2nd place: Felipe Morales Saez, Chile
3rd place: Fabiola Rodas Valladare, Guatemala
4th place: Priscila Alcántara Fonseca, Brazil
5th place: Miguel Darío Narváez Romero, Paraguay
Semi-finalists
Laura Natalia Esquivel, Argentina
Oscar Mario Paz Hurtado, Bolivia
Steve Alberto Cabrera Ortega, Ecuador
Daniela Hernández, El Salvador
Gabriel Morales, United States
Eliminated participants
Adriana Ahumada, Mexico
Jessie Gabriela Flores Madrid, Honduras
Lucila María Morena Arana, Nicaragua
Kevin Alberca Alarcón, Peru
Erika Lisbeth Loaiza Ramírez, Colombia
Génesis Díaz Bejarano, Costa Rica
Javier Vidal Martínez, Spain
Nallybeth Araúz Martínez, Panama
Nicolás Aquino Goicoechea, Uruguay
Asly D'Janine Toro Álvarez, Venezuela
Telenovela
The winner, Elizabeth María Suárez Rosario, has yet to appear in a TV part or record an album.
**Indole-3-acetaldehyde reductase (NADPH)**
In enzymology, an indole-3-acetaldehyde reductase (NADPH) (EC 1.1.1.191) is an enzyme that catalyzes the chemical reaction

(indol-3-yl)ethanol + NADP+ ⇌ (indol-3-yl)acetaldehyde + NADPH + H+

Thus, the two substrates of this enzyme are (indol-3-yl)ethanol and NADP+, whereas its three products are (indol-3-yl)acetaldehyde, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (indol-3-yl)ethanol:NADP+ oxidoreductase. Other names in common use include indoleacetaldehyde (reduced nicotinamide adenine dinucleotide phosphate) reductase, indole-3-acetaldehyde reductase (NADPH), and indole-3-ethanol:NADP+ oxidoreductase. This enzyme participates in tryptophan metabolism.
**Polaroid art**
Polaroid art is a type of alternative photography which consists of modifying an instant picture, usually while it is being developed. The most common types of Polaroid art are the emulsion lift, the Polaroid transfer and emulsion manipulation.
Emulsion lift:
An emulsion lift, or emulsion transfer, is a process used to remove the photographic emulsion from an instant print. The emulsion can then be transferred to another material, such as glass, wood or paper. The emulsion lift technique can be performed on peel-apart film and Polaroid Originals integral film, but not on Fujifilm Instax film. The procedure, for integral type film, involves cutting off the picture's border, separating the negative layer from the positive layer and submerging the positive layer in warm water. The emulsion will start to come free from the plastic layer and float on the water. While it is still wet, it can be placed on another material and shaped. It can be laid flat, or it can be folded, ripped or otherwise customized as desired. When done with Fujifilm FP-100C, the picture is placed in water near the boiling point and then submerged in cold water. This will release the emulsion, which resembles cellophane and is harder to manipulate than Polaroid emulsions.
Polaroid transfer:
A Polaroid transfer, sometimes known as an image transfer, is a technique used to develop a peel-apart film picture onto a different material, such as drawing paper. In a Polaroid transfer, the image is peeled apart prematurely and the negative is placed face-down on the desired material. A roller is sometimes used to ensure the negative is lying flat on the material. After a certain amount of time, the negative is peeled back.
Emulsion manipulation:
Emulsion manipulation is used to modify integral film pictures while they are developing. The technique yields the best results with the original SX-70 Time Zero film, which was discontinued in 2005; the currently manufactured Polaroid Originals film is less manipulable. As the picture develops, modifications can be made by applying pressure to the surface of the film, using tools that do not scratch the outer plastic layer. Alternatively, a pattern can be superimposed on the image by laying the film face-down on a textured surface and applying pressure.
As development finishes, the emulsion hardens; to continue the manipulation, it can be softened by warming it up. The technique was used to make the cover of Peter Gabriel's third self-titled album.
**Canadian Bioinformatics Workshops**
Canadian Bioinformatics Workshops (CBW) are a series of advanced training workshops in bioinformatics, founded in 1999 in response to an identified need for a skilled bioinformatics workforce in Canada.
1999-2007:
The Canadian Bioinformatics Workshops series began offering one- and two-week short courses in bioinformatics, genomics and proteomics in 1999, in response to an identified need for a skilled bioinformatics workforce in Canada. The CBW series was established in partnership with the Canadian Genetics Diseases Network and Human Resources Development Canada, under the scientific direction of Francis Ouellette.
For eight years, the series offered short courses in bioinformatics, genomics and proteomics in various cities across Canada. The courses were taught by top faculty from Canada and the US, and offered small classes and hands-on instruction.
2007-Present:
In 2007, the Canadian Bioinformatics Workshops moved to Toronto, where it is now hosted by the Ontario Institute for Cancer Research. A new format and series of workshops were designed in the fall of 2007. It was recognized that with the introduction of new technologies and scientific approaches to research, having the computational biology capacity and skill to deal with this new data has become an even greater asset.
The new series of workshops focuses on training the experts and users of these advanced technologies on the latest approaches used in computational biology to deal with the new data. The Canadian Bioinformatics Workshops began offering the 2-day advanced topic workshops in 2008.
All workshop material is licensed under a Creative Commons-Share Alike 2.5 license and is available on the Bioinformatics.ca website.
The CBW is sponsored by the Canadian Institutes of Health Research and the Ontario Institute for Cancer Research.
**Tenoroon**
Tenoroon:
The tenor bassoon or tenoroon is a member of the bassoon family of double reed woodwind instruments. Similar to the alto bassoon, also called octave bassoon, it is relatively rare.
Nomenclature:
There has been much debate over the nomenclature of the smaller bassoons. All small bassoons have at one time or another been called fagottino (pl. fagottini), but this term is historically usually applied only to the octave bassoon. The terms quart-bassoon (Quartfagott) and quint-bassoon (Quintfagott) are applied respectively to the instruments pitched a fourth above and a fifth above the normal bassoon. To add to the confusion, these terms can also be applied to instruments a fifth lower (quint-bassoon in F) and a fourth lower (quart-bassoon in G), known as semi-contrabassoons. Note that the keys of the lower and higher versions are reversed. Often the terms bass and tenor or high are added to clarify which instrument is meant, e.g. quart-bass bassoon or high quint-bassoon (Hochquintfagott). One of the most common terms for these instruments is tenoroon, a contraction of "tenor bassoon"; tenor bassoon is the more correct title, although tenoroon is widely accepted nowadays. Altoon (a combination of alto and bassoon), as a moniker for the smaller octave bassoon, has yet to catch on. A recently introduced alternative is the fagonello, which is of similar size and weight to these smaller bassoons but plays at normal pitch, albeit with a slightly reduced range.
History:
During the Renaissance, instruments were made in every available size, from sopranos, sopraninos, and garkleins down to bass, great bass, and contrabass. The bassoon (or more properly in this era, the dulcian or curtal) was to be found in at least six sizes. The larger sizes, the bass and the great bass, were more popular, but the smaller sizes were still used, being found in several of Heinrich Schütz's motets; they were also quite popular in Spain, where they were known as "bajoncillos". Smaller bassoons appeared throughout the later Baroque and Classical eras, although their exact use is somewhat clouded; virtually no literature exists for the smaller bassoons. A notable exception is a partita by Johann Kaspar Frost (not Trost, as sometimes listed) which is scored for two octave bassoons, two tenor bassoons, two bassoons, and two horns. The same appears to have been true during the nineteenth century. Such notable names as Carl Almenräder advocated the use of the smaller bassoons for teaching purposes, and it is said that Jancourt would often perform solos on one during recitals. Hector Berlioz lamented its non-use in his Treatise on Instrumentation and even specified that his ideal orchestra would contain five tenor bassoons (though he never wrote for the instrument himself).
In the late nineteenth century, several improved models of tenor bassoon were unveiled in Paris, but were not very well received, as the real need at the time was a working contrabassoon (and not the sarrusophone that was currently in use). But the tenor bassoon was eventually used, despite its obscurity. After an absence of about one hundred years, the tenor bassoon made its comeback in 1989 when Guntram Wolf of Kronach made the first modern, Heckel system tenor bassoon. Since that reintroduction the tenor bassoon has flourished, being used as a children's instrument in Germany (and in locales all over the world) and is being looked at by professionals as a serious instrument worthy of use.
Manufacturers:
In many regards the smaller bassoons play much like the full-size bassoon. Currently there are three sizes available from four different makers. Moosmann makes an instrument in F (a fourth higher than the normal bassoon) with simplified fingerings that descends only to low C and is intended for young children. The company Bassetto in Switzerland produces instruments in G (a fifth higher), with the added bonus of a model with an altissimo vent in the bocal, but no whisper key. Bruno Salenson in Nîmes, France, produces a petit basson in E-flat with simplified French or German keywork, specifically for children. Howarth of London markets instruments designed and manufactured by Guntram Wolf both in F and in G (respectively called by the company "tenoroon" and "mini-bassoon"). Guntram Wolf makes and sells his own F and G models, plus an octave bassoon one full octave higher than the normal bassoon. He also offers all three instruments with extra keywork for professional players: the F and G instruments have a full whisper key mechanism and F–F♯ link, and the octave bassoon (called "Fagottino" by Wolf) can have up to nine keys, adding a wing speaker key and C♯, B♭, F♯, and low E♭ keys to the basic children's model. By far the most used of these would appear to be the Wolf instruments. The model in G, or quint-bassoon, is marketed more toward children, as its slightly smaller size suits them better. The F, or quart-bassoon, is more suited to older children or professionals due to its slightly bigger size and its feeling and sounding more like a full-sized bassoon, although in its basic children's model it has limited professional use due to the lack of full modern keywork; indeed, some professional players using this instrument have opted to have alternative F♯ and G♯ keys (standard features on the regular bassoon) added to Wolf's professional instrument to facilitate the playing of more advanced music.
Octave bassoon:
Instruments pitched an octave above the bassoon are, like all smaller bassoons, historically quite old instruments. Virtually no literature exists for this size of bassoon other than a partita by Frost (or Trost?) and a cantata by Zachau (which specifies "Bassonetti", which would appear to be small bassoons). The instrument has enjoyed something of a revival in the past decade. Once again both historically accurate copies and modern instruments are being constructed. The modern instruments are very simple in their fingering, needing only four keys (although as many as nine or more can be had), considerably fewer than the full-size bassoon. The instrument is not, however, fully chromatic. It lacks the bottom B♮ and C♯, which is akin to Baroque and Classical instruments. This instrument is not a remedy for high notes on the bassoon, nor can it extend the range considerably. The octave bassoon can generally only reach the written F above the bass clef (but sounding an octave higher), although professionals may be able to extend this range. The sound is thin and would not be out of place in a Renaissance or Baroque wind ensemble. Due to the somewhat smaller size and abundance of cross-fingerings, technique on the octave bassoon may be somewhat challenging for a non-professional. Octave bassoons (alto bassoons) have been made in various keys: D, D♭ (an octave above the lowest and largest tenoroon), and C.
Tenor bassoons:
The tenor bassoon is a historically very old instrument, evolved from the tenor dulcian or curtal. There is virtually no literature for the instrument aside from a few pieces written in the late Baroque by relatively obscure composers (namely a work by Frost). An old theory that the exposed English horn part in Rossini's overture to his opera William Tell was originally written for the tenor bassoon (due to its being written in old Italian notation in bass clef) has now been widely debunked. There are also many names by which the instrument is known: tenoroon (a contraction of tenor bassoon), quart- and quint-bassoon (the former for the instrument in F and the latter for the one in G), fagottino, and mini-bassoon. Tenor bassoons have been made in various keys: D♭, E♭, F, and G. Only the E♭, F and G instruments are currently available. These smaller instruments are often used for young children to begin on, as the normal-sized bassoon would be far too large for anyone under about the age of 10. Naturally, due to the smaller size of the instrument, the tone is much lighter and reedier than that of the bassoon. The instruments are remarkably quick in response, and with some practice one could have faster technique on the tenor bassoon than on the full-size instrument. Most tenor bassoons have a somewhat simplified fingering system, with most of the alternate keys on the butt joint removed for space reasons. A light and narrow bassoon reed is preferred on the tenor bassoon so that a wholly different reed is not needed. However, a shorter and narrower reed will tend to favor the higher notes. The upper-register fingerings are somewhat different from the bassoon's, and its scale ascends only to B♭ (a B and C, and even C♯ and D, are possible, although their response is questionable). In general professionals prefer the F instrument, as it feels and responds more like a bassoon, while the smaller G instrument is used more for children.
The sound of the tenor bassoon can be compared to that of a dull English horn and has been described as somewhat saxophone-like. It can make an excellent tenor or alto voice in a wind ensemble or orchestra (the latter could benefit from having a true tenor instrument in the woodwind department). It could effectively bridge the octave (or octave and a half, if the bass oboe is omitted) gap between the bassoon family and the oboe family. Although it does not effectively extend the range of the bassoon, it can give more flexibility in a register where the bassoon lacks mobility. The D♭ instrument's sounding range is from B1 to B4, the E♭ instrument's from D♭2 to D♭5, the F instrument's from E♭2 to E♭5, and the G instrument's from F2 to F5.
Notable works:
Johann Kaspar Frost (Trost?) – Parthia No. IV for 2 horns in C, 2 octave bassoons, 2 tenor bassoons in F and 2 bassoons
Victor Bruns – Sonatina for tenor bassoon in F and piano, Op. 96
Victor Bruns – Trio for tenor bassoon in F, bassoon and contrabassoon, Op. 97, dedicated to William Waterhouse
Timothy Raymond – "Lost Music" for tenor bassoon in F and piano (1992)
Bret Newton – "Osiris: Lord of the Duat", symphonic poem for solo tenor bassoon in F and orchestra (2005)
Bret Newton – "Forest Scenes" for tenor bassoon in F and 3 marimbas (2006)
Robert Harvey – "Runes" for tenor bassoon in F and piano (2009)
Graham Waterhouse – "The Akond of Swat" for tenor bassoon in F, bassoon and piano (2009)
Elliott Schwartz – "Tenor Variations" for tenor bassoon in F and piano (2013)
Vincenzo Toscano – "Prosopon" for tenor bassoon in G, 2 bassoons and contrabassoon (2018)
Carla Magnan – "Il magnifico canocchiale" for 2 tenor bassoons in G, bassoon and contrabassoon (2018)
**Driving factors**
Driving factors:
In energy monitoring and targeting, a driving factor is something recurrent and measurable whose variation explains variation in energy consumption. The term independent variable is sometimes used as a synonym.
One of the most common driving factors is the weather, expressed usually as heating or cooling degree days. In energy-intensive processes, production throughputs would usually be used. For electrical circuits feeding outdoor lighting, the number of hours of darkness can be employed. For a borehole pump, the quantity of water delivered would be used; and so on. What these examples all have in common is that on a weekly basis (say) numerical values can be recorded for each factor and one would expect particular streams of energy consumption to correlate with them either singly or in a multivariate model.
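The single-factor case described above can be sketched as an ordinary least-squares fit of weekly consumption against heating degree days. The weekly figures below are invented purely for illustration:

```python
# Illustrative only: weekly heating degree days (driving factor) and
# metered energy consumption (kWh) -- invented example figures.
degree_days = [52, 60, 45, 70, 38, 66, 58, 49]
consumption = [1290, 1410, 1180, 1565, 1075, 1500, 1385, 1240]

n = len(degree_days)
mean_x = sum(degree_days) / n
mean_y = sum(consumption) / n

# Ordinary least squares for the single-factor model: y = base + rate * x
sxx = sum((x - mean_x) ** 2 for x in degree_days)
sxy = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(degree_days, consumption))
rate = sxy / sxx                 # kWh per degree day
base = mean_y - rate * mean_x    # weather-independent "base load"

# Coefficient of determination: how much of the variation in
# consumption the driving factor explains
ss_tot = sum((y - mean_y) ** 2 for y in consumption)
ss_res = sum((y - (base + rate * x)) ** 2
             for x, y in zip(degree_days, consumption))
r_squared = 1 - ss_res / ss_tot

print(f"base load = {base:.0f} kWh/week, rate = {rate:.1f} kWh/degree day")
print(f"R^2 = {r_squared:.3f}")
```

An R² close to 1 indicates that the chosen driving factor explains most of the week-to-week variation; a multivariate model would simply add more factor columns to the regression.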
Correlation is arguably more important than causality. Variation in the driving factor merely has to explain variation in consumption; it does not necessarily have to cause it, although in most scenarios it will. Driving factors differ from static factors, such as building floor areas, which determine energy consumption but change only rarely (if at all).
**Topology (electrical circuits)**
Topology (electrical circuits):
The circuit topology of an electronic circuit is the form taken by the network of interconnections of the circuit components. Different specific values or ratings of the components are regarded as being the same topology. Topology is not concerned with the physical layout of components in a circuit, nor with their positions on a circuit diagram; similarly to the mathematical concept of topology, it is only concerned with what connections exist between the components. There may be numerous physical layouts and circuit diagrams that all amount to the same topology.
Strictly speaking, replacing a component with one of an entirely different type is still the same topology. In some contexts, however, these can loosely be described as different topologies. For instance, interchanging inductors and capacitors in a low-pass filter results in a high-pass filter. These might be described as high-pass and low-pass topologies even though the network topology is identical. A more correct term for these classes of object (that is, a network where the type of component is specified but not the absolute value) is prototype network.
Electronic network topology is related to mathematical topology. In particular, for networks which contain only two-terminal devices, circuit topology can be viewed as an application of graph theory. In a network analysis of such a circuit from a topological point of view, the network nodes are the vertices of graph theory, and the network branches are the edges of graph theory.
Standard graph theory can be extended to deal with active components and multi-terminal devices such as integrated circuits. Graphs can also be used in the analysis of infinite networks.
Circuit diagrams:
The circuit diagrams in this article follow the usual conventions in electronics; lines represent conductors, filled small circles represent junctions of conductors, and open small circles represent terminals for connection to the outside world. In most cases, impedances are represented by rectangles. A practical circuit diagram would use the specific symbols for resistors, inductors, capacitors etc., but topology is not concerned with the type of component in the network, so the symbol for a general impedance has been used instead.
Circuit diagrams:
The Graph theory section of this article gives an alternative method of representing networks.
Topology names:
Many topology names relate to their appearance when drawn diagrammatically. Most circuits can be drawn in a variety of ways and consequently have a variety of names. For instance, the three circuits shown in Figure 1.1 all look different but have identical topologies.
This example also demonstrates a common convention of naming topologies after a letter of the alphabet to which they have a resemblance. Greek alphabet letters can also be used in this way, for example Π (pi) topology and Δ (delta) topology.
Series and parallel topologies:
For a network with two branches, there are only two possible topologies: series and parallel.
Even for these simplest of topologies, there are variations in the way the circuit can be presented.
For a network with three branches, there are four possible topologies.
Note that the parallel-series topology is another representation of the Delta topology discussed later.
Series and parallel topologies can continue to be constructed with greater and greater numbers of branches ad infinitum. The number of unique topologies that can be obtained from n ∈ ℕ series or parallel branches is 1, 2, 4, 10, 24, 66, 180, 522, 1532, 4624, … (sequence A000084 in the OEIS).
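The counts above can be reproduced by brute-force enumeration. The canonical encoding below is an illustrative sketch of my own, not a standard routine: 'e' is a single branch, ('S', kids) and ('P', kids) are series and parallel combinations whose children form a sorted tuple (a multiset), and a series node never has a series child (nor a parallel node a parallel child), so every network has exactly one encoding.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def nets(n):
    """All canonical series-parallel networks with n branches."""
    if n == 1:
        return frozenset({'e'})
    found = set()
    for root in ('S', 'P'):
        for kids in child_multisets(n, root):
            found.add((root, kids))
    return frozenset(found)

def child_multisets(n, root):
    """Sorted tuples of two or more sub-networks, none rooted at
    `root`, whose branch counts sum to n."""
    out = set()

    def extend(remaining, chosen, floor):
        if remaining == 0:
            if len(chosen) >= 2:
                out.add(tuple(chosen))
            return
        # the first part must leave room for at least one more part
        max_k = remaining - 1 if not chosen else remaining
        for k in range(1, max_k + 1):
            for part in nets(k):
                if part != 'e' and part[0] == root:
                    continue        # no S directly under S, nor P under P
                key = (k, repr(part))
                if key >= floor:    # keep parts in non-decreasing order
                    extend(remaining - k, chosen + [part], key)

    extend(n, [], (0, ''))
    return out

print([len(nets(n)) for n in range(1, 7)])  # [1, 2, 4, 10, 24, 66]
```

The multiset ordering (by branch count, then by textual representation) guarantees that each distinct topology is counted exactly once, matching the opening terms of A000084.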
Y and Δ topologies:
Y and Δ are important topologies in linear network analysis due to these being the simplest possible three-terminal networks. A Y-Δ transform is available for linear circuits. This transform is important because there are some networks that cannot be analysed in terms of series and parallel combinations. These networks arise often in 3-phase power circuits as they are the two most common topologies for 3-phase motor or transformer windings. An example of this is the network of figure 1.6, consisting of a Y network connected in parallel with a Δ network. Say it is desired to calculate the impedance between two nodes of the network. In many networks this can be done by successive applications of the rules for combination of series or parallel impedances. This is not, however, possible in this case where the Y-Δ transform is needed in addition to the series and parallel rules.
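The Δ-to-Y step used in such an analysis follows the standard transform formulas: each Y arm is the product of the two adjacent Δ impedances divided by their total. A minimal sketch, with invented resistor values:

```python
# Standard delta-to-star (Δ-Y) impedance transform.  The resistor
# values in the example are invented purely for illustration.

def delta_to_y(r_ab, r_bc, r_ca):
    """Convert a Δ of three impedances into the equivalent Y.

    Returns (r_a, r_b, r_c), the Y arms at nodes A, B and C.
    """
    total = r_ab + r_bc + r_ca
    r_a = r_ab * r_ca / total   # arm at the node shared by r_ab and r_ca
    r_b = r_ab * r_bc / total   # arm at the node shared by r_ab and r_bc
    r_c = r_bc * r_ca / total   # arm at the node shared by r_bc and r_ca
    return r_a, r_b, r_c

# Example: a delta of 3, 6 and 9 ohms
r_a, r_b, r_c = delta_to_y(3.0, 6.0, 9.0)
print(r_a, r_b, r_c)  # 1.5 1.0 3.0
```

Once the Δ has been replaced by its Y equivalent (or vice versa), the ordinary series and parallel combination rules suffice to reduce networks like the one in figure 1.6.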
The Y topology is also called star topology. However, star topology may also refer to the more general case of many branches connected to the same node rather than just three.
Simple filter topologies:
The topologies shown in figure 1.7 are commonly used for filter and attenuator designs. The L-section is identical in topology to the potential divider topology. The T-section is identical in topology to the Y topology. The Π-section is identical in topology to the Δ topology.
All these topologies can be viewed as a short section of a ladder topology. Longer sections would normally be described as ladder topology. These kinds of circuits are commonly analysed and characterised in terms of a two-port network.
Bridge topology:
Bridge topology is an important topology with many uses in both linear and non-linear applications, including, amongst many others, the bridge rectifier, the Wheatstone bridge and the lattice phase equaliser. There are several ways that bridge topology is rendered in circuit diagrams. The first rendering in figure 1.8 is the traditional depiction of a bridge circuit. The second rendering clearly shows the equivalence between the bridge topology and a topology derived by series and parallel combinations. The third rendering is more commonly known as lattice topology. It is not so obvious that this is topologically equivalent. It can be seen that this is indeed so by visualising the top left node moved to the right of the top right node.
It is normal to call a network bridge topology only if it is being used as a two-port network with the input and output ports each consisting of a pair of diagonally opposite nodes. The box topology in figure 1.7 can be seen to be identical to bridge topology but in the case of the filter the input and output ports are each a pair of adjacent nodes. Sometimes the loading (or null indication) component on the output port of the bridge will be included in the bridge topology as shown in figure 1.9.
Bridged T and twin-T topologies:
Bridged T topology is derived from bridge topology in a way explained in the Zobel network article. There are many derivative topologies also discussed in the same article.
There is also a twin-T topology which has practical applications where it is desirable to have the input and output share a common (ground) terminal. This may be, for instance, because the input and output connections are made with co-axial topology. Connecting together an input and output terminal is not allowable with normal bridge topology and for this reason Twin-T is used where a bridge would otherwise be used for balance or null measurement applications. The topology is also used in the twin-T oscillator as a sine wave generator. The lower part of figure 1.11 shows twin-T topology redrawn to emphasise the connection with bridge topology.
Infinite topologies:
Ladder topology can be extended without limit and is much used in filter designs. There are many variations on ladder topology, some of which are discussed in the Electronic filter topology and Composite image filter articles.
The balanced form of ladder topology can be viewed as being the graph of the side of a prism of arbitrary order. The side of an anti-prism forms a topology which, in this sense, is an anti-ladder. Anti-ladder topology finds an application in voltage multiplier circuits, in particular the Cockcroft-Walton generator. There is also a full-wave version of the Cockcroft-Walton generator which uses a double anti-ladder topology. Infinite topologies can also be formed by cascading multiple sections of some other simple topology, such as lattice or bridge-T sections. Such infinite chains of lattice sections occur in the theoretical analysis and artificial simulation of transmission lines, but are rarely used as a practical circuit implementation.
Components with more than two terminals:
Circuits containing components with three or more terminals greatly increase the number of possible topologies. Conversely, the number of different circuits represented by a topology diminishes and in many cases the circuit is easily recognisable from the topology even when specific components are not identified.
With more complex circuits the description may proceed by specification of a transfer function between the ports of the network rather than the topology of the components.
Graph theory:
Graph theory is the branch of mathematics dealing with graphs. In network analysis, graphs are used extensively to represent a network being analysed. The graph of a network captures only certain aspects of a network; those aspects related to its connectivity, or, in other words, its topology. This can be a useful representation and generalisation of a network because many network equations are invariant across networks with the same topology. This includes equations derived from Kirchhoff's laws and Tellegen's theorem.
History:
Graph theory has been used in the network analysis of linear, passive networks almost from the moment that Kirchhoff's laws were formulated. Gustav Kirchhoff himself, in 1847, used graphs as an abstract representation of a network in his loop analysis of resistive circuits. This approach was later generalised to RLC circuits, replacing resistances with impedances. In 1873 James Clerk Maxwell provided the dual of this analysis with node analysis. Maxwell is also responsible for the topological theorem that the determinant of the node-admittance matrix is equal to the sum of all the tree admittance products. In 1900 Henri Poincaré introduced the idea of representing a graph by its incidence matrix, hence founding the field of algebraic topology. In 1916 Oswald Veblen applied the algebraic topology of Poincaré to Kirchhoff's analysis. Veblen is also responsible for the introduction of the spanning tree to aid choosing a compatible set of network variables.
Comprehensive cataloguing of network graphs as they apply to electrical circuits began with Percy MacMahon in 1891 (with an engineer-friendly article in The Electrician in 1892), who limited his survey to series and parallel combinations. MacMahon called these graphs yoke-chains. Ronald M. Foster in 1932 categorised graphs by their nullity or rank and provided charts of all those with a small number of nodes. This work grew out of an earlier survey by Foster while collaborating with George Campbell in 1920 on 4-port telephone repeaters, and produced 83,539 distinct graphs. For a long time topology in electrical circuit theory remained concerned only with linear passive networks. The more recent developments of semiconductor devices and circuits have required new tools in topology to deal with them. Enormous increases in circuit complexity have led to the use of combinatorics in graph theory to improve the efficiency of computer calculation.
Graphs and circuit diagrams:
Networks are commonly classified by the kind of electrical elements making them up. In a circuit diagram these element-kinds are specifically drawn, each with its own unique symbol. Resistive networks are one-element-kind networks, consisting only of R elements. Likewise capacitive or inductive networks are one-element-kind. The RC, RL and LC circuits are simple two-element-kind networks. The RLC circuit is the simplest three-element-kind network. The LC ladder network commonly used for low-pass filters can have many elements but is another example of a two-element-kind network. Conversely, topology is concerned only with the geometric relationship between the elements of a network, not with the kind of elements themselves. The heart of a topological representation of a network is the graph of the network. Elements are represented as the edges of the graph. An edge is drawn as a line, terminating on dots or small circles from which other edges (elements) may emanate. In circuit analysis, the edges of the graph are called branches. The dots are called the vertices of the graph and represent the nodes of the network. Node and vertex are terms that can be used interchangeably when discussing graphs of networks. Figure 2.2 shows a graph representation of the circuit in figure 2.1. Graphs used in network analysis are usually, in addition, both directed graphs, to capture the direction of current flow and voltage, and labelled graphs, to capture the uniqueness of the branches and nodes. For instance, a graph consisting of a square of branches would still be the same topological graph if two branches were interchanged, unless the branches were uniquely labelled. In directed graphs, the two nodes that a branch connects to are designated the source and target nodes. Typically, these will be indicated by an arrow drawn on the branch.
Incidence:
Incidence is one of the basic properties of a graph. An edge that is connected to a vertex is said to be incident on that vertex. The incidence of a graph can be captured in matrix format with a matrix called an incidence matrix. In fact, the incidence matrix is an alternative mathematical representation of the graph which dispenses with the need for any kind of drawing. Matrix rows correspond to nodes and matrix columns correspond to branches. The elements of the matrix are either zero, for no incidence, or one, for incidence between the node and branch. Direction in directed graphs is indicated by the sign of the element.
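The row/column layout just described can be sketched as follows. The three-node example graph and the +1/−1 sign convention for branch direction are illustrative assumptions (the sign convention varies between texts):

```python
# A minimal sketch of an incidence matrix for a directed graph.
# Branches are (source_node, target_node) pairs; the example graph
# is invented for illustration.
def incidence_matrix(num_nodes, branches):
    """Rows = nodes, columns = branches; +1 where the branch leaves a
    node, -1 where it enters, 0 for no incidence."""
    m = [[0] * len(branches) for _ in range(num_nodes)]
    for col, (src, tgt) in enumerate(branches):
        m[src][col] = 1    # branch directed away from its source node
        m[tgt][col] = -1   # and towards its target node
    return m

# Three nodes connected in a loop: 0 -> 1, 1 -> 2, 2 -> 0
A = incidence_matrix(3, [(0, 1), (1, 2), (2, 0)])
for row in A:
    print(row)
```

Every column of such a matrix sums to zero, since each branch leaves exactly one node and enters exactly one other.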
Equivalence:
Graphs are equivalent if one can be transformed into the other by deformation. Deformation can include the operations of translation, rotation and reflection; bending and stretching the branches; and crossing or knotting the branches. Two graphs which are equivalent through deformation are said to be congruent. In the field of electrical networks, there are two additional transforms that are considered to result in equivalent graphs which do not produce congruent graphs. The first of these is the interchange of series connected branches. This is the dual of interchange of parallel connected branches, which can be achieved by deformation without the need for a special rule. The second is concerned with graphs divided into two or more separate parts, that is, a graph with two sets of nodes which have no branches incident to a node in each set. Two such separate parts are considered an equivalent graph to one where the parts are joined by combining a node from each into a single node. Likewise, a graph that can be split into two separate parts by splitting a node in two is also considered equivalent.
Trees and links:
A tree is a graph in which all the nodes are connected, either directly or indirectly, by branches, but without forming any closed loops. Since there are no closed loops, there are no currents in a tree. In network analysis, we are interested in spanning trees, that is, trees that connect every node present in the graph of the network. In this article, an unqualified tree means a spanning tree unless otherwise stated. A given network graph can contain a number of different trees. The branches removed from a graph in order to form a tree are called links; the branches remaining in the tree are called twigs. For a graph with n nodes, the number of branches in each tree, t, must be:
t = n − 1
An important relationship for circuit analysis is:
b = ℓ + t
where b is the number of branches in the graph and ℓ is the number of links removed to form the tree.
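The relations t = n − 1 and b = ℓ + t can be checked on a small graph. The four-node example below is invented, and the Kruskal-style twig/link split is one possible sketch of forming a spanning tree:

```python
# Verifying the tree relations t = n - 1 and b = l + t on an invented
# example graph (nodes 0-3, five branches).
def spanning_tree(num_nodes, branches):
    """Greedily keep branches that do not close a loop (Kruskal-style,
    via union-find); the kept branches are twigs, the rest are links."""
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    twigs, links = [], []
    for u, v in branches:
        ru, rv = find(u), find(v)
        if ru == rv:
            links.append((u, v))   # this branch would close a loop
        else:
            parent[ru] = rv        # merge the two connected parts
            twigs.append((u, v))
    return twigs, links

branches = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
twigs, links = spanning_tree(4, branches)
print(len(twigs), len(links))  # t = n - 1 = 3 twigs, l = b - t = 2 links
```

Whichever branches happen to be picked as twigs, the counts always satisfy t = n − 1 and b = ℓ + t for a connected graph.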
Tie sets and cut sets:
The goal of circuit analysis is to determine all the branch currents and voltages in the network. These network variables are not all independent. The branch voltages are related to the branch currents by the transfer function of the elements of which they are composed. A complete solution of the network can therefore be either in terms of branch currents or branch voltages only. Nor are all the branch currents independent from each other. The minimum number of branch currents required for a complete solution is ℓ. This is a consequence of the fact that a tree has ℓ links removed and there can be no currents in a tree. Since the remaining branches of the tree have zero current, they cannot be independent of the link currents. The branch currents chosen as a set of independent variables must be a set associated with the links of a tree: one cannot choose any ℓ branches arbitrarily. In terms of branch voltages, a complete solution of the network can be obtained with t branch voltages. This is a consequence of the fact that short-circuiting all the branches of a tree results in the voltage being zero everywhere. The link voltages cannot, therefore, be independent of the tree branch voltages.
Graph theory:
A common analysis approach is to solve for loop currents rather than branch currents. The branch currents are then found in terms of the loop currents. Again, the set of loop currents cannot be chosen arbitrarily. To guarantee a set of independent variables the loop currents must be those associated with a certain set of loops. This set of loops consists of those loops formed by replacing a single link of a given tree of the graph of the circuit to be analysed. Since replacing a single link in a tree forms exactly one unique loop, the number of loop currents so defined is equal to l. The term loop in this context is not the same as the usual meaning of loop in graph theory. The set of branches forming a given loop is called a tie set. The set of network equations are formed by equating the loop currents to the algebraic sum of the tie set branch currents.It is possible to choose a set of independent loop currents without reference to the trees and tie sets. A sufficient, but not necessary, condition for choosing a set of independent loops is to ensure that each chosen loop includes at least one branch that was not previously included by loops already chosen. A particularly straightforward choice is that used in mesh analysis in which the loops are all chosen to be meshes. Mesh analysis can only be applied if it is possible to map the graph on to a plane or a sphere without any of the branches crossing over. Such graphs are called planar graphs. Ability to map onto a plane or a sphere are equivalent conditions. Any finite graph mapped onto a plane can be shrunk until it will map onto a small region of a sphere. Conversely, a mesh of any graph mapped onto a sphere can be stretched until the space inside it occupies nearly all of the sphere. The entire graph then occupies only a small region of the sphere. 
This is the same as the first case, hence the graph will also map onto a plane.
There is an approach to choosing network variables with voltages which is analogous and dual to the loop current method. Here the voltages associated with pairs of nodes are the primary variables and the branch voltages are found in terms of them. In this method also, a particular tree of the graph must be chosen in order to ensure that all the variables are independent. The dual of the tie set is the cut set. A tie set is formed by allowing all but one of the graph links to be open circuit. A cut set is formed by allowing all but one of the tree branches to be short circuit. The cut set consists of the tree branch which was not short-circuited and any of the links which are not short-circuited by the other tree branches. A cut set of a graph produces two disjoint subgraphs, that is, it cuts the graph into two parts, and is the minimum set of branches needed to do so. The set of network equations are formed by equating the node pair voltages to the algebraic sum of the cut set branch voltages. The dual of the special case of mesh analysis is nodal analysis.
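The link-and-tree argument above can be sketched in code. The following Python sketch (function name and the example graph are illustrative) grows a spanning tree of a connected graph, identifies the l = b − n + 1 links, and recovers the tie set (fundamental loop) that each link defines:

```python
from collections import defaultdict, deque

def fundamental_loops(nodes, branches):
    """Grow a spanning tree of a connected graph, then return the tree
    (as a set of branch indices), the remaining links, and the tie set
    (fundamental loop) that each link defines."""
    adj = defaultdict(list)                  # node -> [(neighbour, branch index)]
    for i, (u, v) in enumerate(branches):
        adj[u].append((v, i))
        adj[v].append((u, i))
    root = nodes[0]
    parent = {root: (None, None)}            # node -> (parent node, branch to parent)
    tree = set()
    queue = deque([root])
    while queue:                             # breadth-first search builds the tree
        u = queue.popleft()
        for v, i in adj[u]:
            if v not in parent:
                parent[v] = (u, i)
                tree.add(i)
                queue.append(v)

    def path_to_root(n):                     # branches on the tree path n -> root
        edges = []
        while parent[n][0] is not None:
            edges.append(parent[n][1])
            n = parent[n][0]
        return set(edges)

    links = [i for i in range(len(branches)) if i not in tree]
    loops = {}
    for i in links:
        u, v = branches[i]
        # The symmetric difference of the two root paths is the unique
        # tree path between u and v; adding the link closes the loop.
        loops[i] = {i} | (path_to_root(u) ^ path_to_root(v))
    return tree, links, loops

# A square of branches 0..3 with one diagonal (branch 4): b = 5, n = 4,
# so there are l = b - n + 1 = 2 independent loop currents.
tree, links, loops = fundamental_loops(
    [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
```

Each returned loop contains exactly one link plus tree branches, matching the statement that replacing a single link in a tree forms exactly one unique loop.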
Graph theory:
Nullity and rank:
The nullity, N, of a graph with s separate parts and b branches is defined by:
N = b − n + s
where n is the number of nodes. The nullity of a graph represents the number of degrees of freedom of its set of network equations. For a planar graph, the nullity is equal to the number of meshes in the graph.
The rank, R, of a graph is defined by:
R = n − s
Rank plays the same role in nodal analysis as nullity plays in mesh analysis. That is, it gives the number of node voltage equations required. Rank and nullity are dual concepts and are related by:
R + N = b
Solving the network variables:
Once a set of geometrically independent variables has been chosen, the state of the network is expressed in terms of these. The result is a set of independent linear equations which need to be solved simultaneously in order to find the values of the network variables. This set of equations can be expressed in a matrix format which leads to a characteristic parameter matrix for the network. Parameter matrices take the form of an impedance matrix if the equations have been formed on a loop-analysis basis, or of an admittance matrix if the equations have been formed on a node-analysis basis.
These equations can be solved in a number of well-known ways. One method is the systematic elimination of variables. Another method involves the use of determinants. This is known as Cramer's rule and provides a direct expression for the unknown variable in terms of determinants. This is useful in that it provides a compact expression for the solution. However, for anything more than the most trivial networks, a greater calculation effort is required for this method when working manually.
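As an illustration, here is a small Python sketch of both formulas and of Cramer's rule applied to a hypothetical two-mesh resistive circuit (the component values are invented for the example):

```python
from fractions import Fraction

def nullity(b, n, s):
    """N = b - n + s: the number of independent loop equations."""
    return b - n + s

def rank(n, s):
    """R = n - s: the number of independent node-voltage equations."""
    return n - s

def cramer_2x2(a, rhs):
    """Solve [[a00, a01], [a10, a11]] x = rhs by determinants."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x0 = (rhs[0] * a[1][1] - a[0][1] * rhs[1]) / det
    x1 = (a[0][0] * rhs[1] - rhs[0] * a[1][0]) / det
    return x0, x1

# Hypothetical two-mesh resistive circuit: a 10 V source with R1 = 2 ohms
# in mesh 1, R2 = 6 ohms in mesh 2 and R3 = 4 ohms shared by both meshes.
# Loop equations in impedance-matrix form:
#   (R1 + R3) I1 -       R3 I2 = 10
#        -R3 I1 + (R2 + R3) I2 = 0
Z = [[Fraction(6), Fraction(-4)],
     [Fraction(-4), Fraction(10)]]
I1, I2 = cramer_2x2(Z, [Fraction(10), Fraction(0)])
```

Exact rationals are used so the determinant solution can be checked without rounding; for larger networks the systematic elimination mentioned above is cheaper than expanding determinants.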
Graph theory:
Duality:
Two graphs are dual when the relationship between branches and node pairs in one is the same as the relationship between branches and loops in the other. The dual of a graph can be found entirely by a graphical method. The dual of a graph is another graph. For a given tree in a graph, the complementary set of branches (i.e., the branches not in the tree) forms a tree in the dual graph. The set of current loop equations associated with the tie sets of the original graph and tree are identical to the set of voltage node-pair equations associated with the cut sets of the dual graph. The following table lists dual concepts in topology related to circuit theory.
Graph theory:
The dual of a tree is sometimes called a maze. It consists of spaces connected by links in the same way that the tree consists of nodes connected by tree branches.
Duals cannot be formed for every graph. Duality requires that every tie set has a dual cut set in the dual graph. This condition is met if and only if the graph is mappable on to a sphere with no branches crossing. To see this, note that a tie set is required to "tie off" a graph into two portions and its dual, the cut set, is required to cut a graph into two portions. The graph of a finite network which will not map on to a sphere will require an n-fold torus. A tie set that passes through a hole in a torus will fail to tie the graph into two parts. Consequently, the dual graph will not be cut into two parts and will not contain the required cut set. Hence, only planar graphs have duals.
Duals also cannot be formed for networks containing mutual inductances since there is no corresponding capacitive element. Equivalent circuits can be developed which do have duals, but the dual cannot be formed of a mutual inductance directly.
Graph theory:
Node and mesh elimination:
Operations on a set of network equations have a topological meaning which can aid visualisation of what is happening. Elimination of a node voltage from a set of network equations corresponds topologically to the elimination of that node from the graph. For a node connected to three other nodes, this corresponds to the well-known Y-Δ transform. The transform can be extended to greater numbers of connected nodes and is then known as the star-mesh transform.
The inverse of this transform is the Δ-Y transform, which analytically corresponds to the elimination of a mesh current and topologically corresponds to the elimination of a mesh. However, elimination of a mesh current whose mesh has branches in common with an arbitrary number of other meshes will not, in general, result in a realisable graph. This is because the graph of the transform of the general star is a graph which will not map on to a sphere (it contains star polygons and hence multiple crossovers). The dual of such a graph cannot exist, but it is the graph required to represent a generalised mesh elimination.
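For the three-node case, the two transforms can be stated compactly for resistances. A Python sketch (variable names illustrative), using the standard Y-Δ formulas:

```python
def y_to_delta(ra, rb, rc):
    """Star (Y) resistances at nodes a, b, c -> equivalent delta resistances.
    r_ab is the delta resistor between nodes a and b, and so on."""
    s = ra * rb + rb * rc + rc * ra
    return s / rc, s / ra, s / rb          # r_ab, r_bc, r_ca

def delta_to_y(r_ab, r_bc, r_ca):
    """Delta resistances -> equivalent star resistances (the inverse map)."""
    s = r_ab + r_bc + r_ca
    return r_ab * r_ca / s, r_ab * r_bc / s, r_bc * r_ca / s   # ra, rb, rc

# Round trip: transforming a star to a delta and back recovers the
# original values, confirming the two maps are inverses.
r_ab, r_bc, r_ca = y_to_delta(1.0, 2.0, 3.0)
ra, rb, rc = delta_to_y(r_ab, r_bc, r_ca)
```

The round trip is exact (up to floating-point error), which is the analytical counterpart of the topological statement that mesh elimination inverts node elimination in the three-node case.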
Graph theory:
Mutual coupling:
In the conventional graph representation of circuits, there is no means of explicitly representing mutual inductive couplings, such as occur in a transformer, and such components may result in a disconnected graph with more than one separate part. For convenience of analysis, a graph with multiple parts can be combined into a single graph by unifying one node in each part into a single node. This makes no difference to the theoretical behaviour of the circuit, so analysis carried out on it is still valid. It would, however, make a practical difference if a circuit were to be implemented this way, in that it would destroy the isolation between the parts. An example would be a transformer earthed on both the primary and secondary side. The transformer still functions as a transformer with the same voltage ratio but can now no longer be used as an isolation transformer.
More recent techniques in graph theory are able to deal with active components, which are also problematic in conventional theory. These new techniques are also able to deal with mutual couplings.
Graph theory:
Active components:
There are two basic approaches available for dealing with mutual couplings and active components. In the first of these, Samuel Jefferson Mason in 1953 introduced signal-flow graphs. Signal-flow graphs are weighted, directed graphs. He used these to analyse circuits containing mutual couplings and active networks. The weight of a directed edge in these graphs represents a gain, such as possessed by an amplifier. In general, signal-flow graphs, unlike the regular directed graphs described above, do not correspond to the topology of the physical arrangement of components.
The second approach is to extend the classical method so that it includes mutual couplings and active components. Several methods have been proposed for achieving this. In one of these, two graphs are constructed, one representing the currents in the circuit and the other representing the voltages. Passive components will have identical branches in both trees, but active components may not. The method relies on identifying spanning trees that are common to both graphs. An alternative method of extending the classical approach which requires only one graph was proposed by Chen in 1965. Chen's method is based on a rooted tree.
Graph theory:
Hypergraphs:
Another way of extending classical graph theory for active components is through the use of hypergraphs. Some electronic components are not represented naturally using graphs. The transistor has three connection points, but a normal graph branch may only connect to two nodes. Modern integrated circuits have many more connections than this. This problem can be overcome by using hypergraphs instead of regular graphs.
Graph theory:
In a conventional representation, components are represented by edges, each of which connects to two nodes. In a hypergraph, components are represented by hyperedges which can connect to an arbitrary number of nodes. Hyperedges have tentacles which connect the hyperedge to the nodes. The graphical representation of a hyperedge may be a box (compared to the edge, which is a line) and the representations of its tentacles are lines from the box to the connected nodes. In a directed hypergraph, the tentacles carry labels which are determined by the hyperedge's label. A conventional directed graph can be thought of as a hypergraph with hyperedges each of which has two tentacles. These two tentacles are labelled source and target and usually indicated by an arrow. In a general hypergraph with more tentacles, more complex labelling will be required.
Hypergraphs can be characterised by their incidence matrices. A regular graph containing only two-terminal components will have exactly two non-zero entries in each row. Any incidence matrix with more than two non-zero entries in any row is a representation of a hypergraph. The number of non-zero entries in a row is the rank of the corresponding branch, and the highest branch rank is the rank of the incidence matrix.
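A minimal sketch of this characterisation in Python, assuming rows index branches and columns index nodes; note that "rank" here is the hypergraph notion just described (most non-zero entries in any row), not the linear-algebra rank of the matrix:

```python
def branch_ranks(incidence):
    """Rows index branches (hyperedges), columns index nodes; a non-zero
    entry marks a tentacle.  Returns each branch's rank (its number of
    non-zero entries) and the matrix rank in the hypergraph sense used
    here: the highest branch rank (not the linear-algebra rank)."""
    ranks = [sum(1 for x in row if x != 0) for row in incidence]
    return ranks, max(ranks)

# A two-terminal resistor and a three-terminal transistor over four nodes.
ranks, matrix_rank = branch_ranks([
    [1, 1, 0, 0],   # resistor between nodes 0 and 1
    [0, 1, 1, 1],   # transistor touching nodes 1, 2 and 3
])
```

A matrix rank above 2, as here, signals that the incidence matrix describes a hypergraph rather than a regular graph.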
Graph theory:
Non-homogeneous variables:
Classical network analysis develops a set of network equations whose network variables are homogeneous in either current (loop analysis) or voltage (node analysis). The set of network variables so found is not necessarily the minimum necessary to form a set of independent equations. There may be a difference between the number of variables in a loop analysis and in a node analysis. In some cases the minimum number possible may be less than either of these if the requirement for homogeneity is relaxed and a mix of current and voltage variables is allowed. A result from Kishi and Kajitani in 1967 is that the absolute minimum number of variables required to describe the behaviour of the network is given by the maximum distance between any two spanning forests of the network graph.
Graph theory:
Network synthesis:
Graph theory can be applied to network synthesis. Classical network synthesis realises the required network in one of a number of canonical forms. Examples of canonical forms are the realisation of a driving-point impedance by Cauer's canonical ladder network or Foster's canonical form, or Brune's realisation of an immittance from his positive-real functions. Topological methods, on the other hand, do not start from a given canonical form. Rather, the form is a result of the mathematical representation. Some canonical forms require mutual inductances for their realisation. A major aim of topological methods of network synthesis has been to eliminate the need for these mutual inductances. One theorem to come out of topology is that a realisation of a driving-point impedance without mutual couplings is minimal if and only if there are no all-inductor or all-capacitor loops.
Graph theory is at its most powerful in network synthesis when the elements of the network can be represented by real numbers (one-element-kind networks such as resistive networks) or by binary states (such as switching networks).
Graph theory:
Infinite networks:
Perhaps the earliest network with an infinite graph to be studied was the ladder network used to represent transmission lines, developed in its final form by Oliver Heaviside in 1881. Certainly all early studies of infinite networks were limited to periodic structures such as ladders or grids with the same elements repeated over and over. It was not until the late 20th century that tools for analysing infinite networks with an arbitrary topology became available.
Infinite networks are largely of only theoretical interest and are the plaything of mathematicians. Infinite networks that are not constrained by real-world restrictions can have some very unphysical properties. For instance, Kirchhoff's laws can fail in some cases, and infinite resistor ladders can be defined which have a driving-point impedance which depends on the termination at infinity. Another unphysical property of theoretical infinite networks is that, in general, they will dissipate infinite power unless constraints are placed on them in addition to the usual network laws such as Ohm's and Kirchhoff's laws. There are, however, some real-world applications. The transmission line example is one of a class of practical problems that can be modelled by infinitesimal elements (the distributed-element model). Other examples are launching waves into a continuous medium, fringing field problems, and measurement of resistance between points of a substrate or down a borehole.
Transfinite networks extend the idea of infinite networks even further. A node at an extremity of an infinite network can have another branch connected to it leading to another network. This new network can itself be infinite. Thus, topologies can be constructed which have pairs of nodes with no finite path between them. Such networks of infinite networks are called transfinite networks. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Online quiz**
Online quiz:
Online quizzes are quizzes that are published on the Internet and are generally for entertainment purposes.
Introduction:
Online quizzes are a popular form of entertainment for web surfers. Online quizzes are generally free to play and for entertainment purposes only though some online quiz websites offer prizes. Websites feature online quizzes on many subjects. One popular type of online quiz is a personality quiz or relationship quiz which is similar to what can be found in many women's or teen magazines. Websites hosting quizzes include Quizilla, FunTrivia, OkCupid, Sporcle, Quizlet, and JetPunk.
Blog quizzes:
Blog quizzes (also known as quiz blog) refer to a specific genre of quizzes which are conducted by the quizzers on blogs. Blog quizzes may be about verbs or a wide range of other topics.
Educational quizzes:
Quizzes are one of the most common eLearning patterns in online courses. Some companies and schools use online quizzes as a means to educate their employees or students respectively. Popular websites hosting quizzes for this purpose include Quizlet and Revision Quiz Maker.
Practical applications:
Many online quizzes are set up to actually test knowledge or identify a person's attributes. Some companies use online quizzes as an efficient way of testing a potential hire's knowledge without that candidate needing to travel. Online dating services often use personality quizzes to find a match between similar members.
Other:
Most online quizzes are to be taken lightly. The results do not often reflect the true personality or relationship. They are also rarely psychometrically valid. However, they may occasion reflection on the subject of the quiz and provide a springboard for a person to explore his or her emotions, beliefs, or actions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PLCB3**
PLCB3:
1-Phosphatidylinositol-4,5-bisphosphate phosphodiesterase beta-3 is an enzyme that in humans is encoded by the PLCB3 gene. The gene codes for the enzyme phospholipase C β3. The enzyme catalyzes the formation of inositol 1,4,5-trisphosphate and diacylglycerol from phosphatidylinositol 4,5-bisphosphate. This reaction uses calcium as a cofactor and plays an important role in the intracellular transduction of many extracellular signals. The enzyme is activated by two G-protein alpha subunits, alpha-q and alpha-11, as well as by G-beta gamma subunits.
Interactions:
PLCB3 has been shown to interact with Sodium-hydrogen exchange regulatory cofactor 2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Table of prime factors**
Table of prime factors:
The tables contain the prime factorization of the natural numbers from 1 to 1000.
When n is a prime number, the prime factorization is just n itself, written in bold below.
The number 1 is called a unit. It has no prime factors and is neither prime nor composite.
Properties:
Many properties of a natural number n can be seen or directly computed from the prime factorization of n.
Properties:
The multiplicity of a prime factor p of n is the largest exponent m for which p^m divides n. The tables show the multiplicity for each prime factor. If no exponent is written then the multiplicity is 1 (since p = p^1). The multiplicity of a prime which does not divide n may be called 0 or may be considered undefined.
Properties:
Ω(n), the big Omega function, is the number of prime factors of n counted with multiplicity (so it is the sum of all prime factor multiplicities).
A prime number has Ω(n) = 1. The first: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37 (sequence A000040 in the OEIS). There are many special types of prime numbers.
A composite number has Ω(n) > 1. The first: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21 (sequence A002808 in the OEIS). All numbers above 1 are either prime or composite. 1 is neither.
A semiprime has Ω(n) = 2 (so it is composite). The first: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34 (sequence A001358 in the OEIS).
A k-almost prime (for a natural number k) has Ω(n) = k (so it is composite if k > 1).
An even number has the prime factor 2. The first: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 (sequence A005843 in the OEIS).
An odd number does not have the prime factor 2. The first: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23 (sequence A005408 in the OEIS). All integers are either even or odd.
A square has even multiplicity for all prime factors (it is of the form a^2 for some a). The first: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144 (sequence A000290 in the OEIS).
A cube has all multiplicities divisible by 3 (it is of the form a^3 for some a). The first: 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, 1331, 1728 (sequence A000578 in the OEIS).
A perfect power has a common divisor m > 1 for all multiplicities (it is of the form a^m for some a > 1 and m > 1). The first: 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 81, 100 (sequence A001597 in the OEIS). 1 is sometimes included.
A powerful number (also called squareful) has multiplicity above 1 for all prime factors. The first: 1, 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 72 (sequence A001694 in the OEIS).
A prime power has only one prime factor. The first: 2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19 (sequence A000961 in the OEIS). 1 is sometimes included.
An Achilles number is powerful but not a perfect power. The first: 72, 108, 200, 288, 392, 432, 500, 648, 675, 800, 864, 968 (sequence A052486 in the OEIS).
A square-free integer has no prime factor with multiplicity above 1. The first: 1, 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17 (sequence A005117 in the OEIS). A number where some but not all prime factors have multiplicity above 1 is neither square-free nor squareful.
The Liouville function λ(n) is 1 if Ω(n) is even, and is -1 if Ω(n) is odd.
The Möbius function μ(n) is 0 if n is not square-free. Otherwise μ(n) is 1 if Ω(n) is even, and is −1 if Ω(n) is odd.
A sphenic number has Ω(n) = 3 and is square-free (so it is the product of 3 distinct primes). The first: 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154 (sequence A007304 in the OEIS).
a0(n) is the sum of primes dividing n, counted with multiplicity. It is an additive function.
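Several of the functions above (Ω, λ, μ, a0) follow directly from the factorization. A Python sketch using trial division, which is adequate for numbers in the range of these tables:

```python
def prime_factors(n):
    """Prime factorization as {prime: multiplicity}, by trial division."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def big_omega(n):
    """Omega(n): number of prime factors counted with multiplicity."""
    return sum(prime_factors(n).values())

def liouville(n):
    """lambda(n) = 1 if Omega(n) is even, -1 if odd."""
    return 1 if big_omega(n) % 2 == 0 else -1

def mobius(n):
    """mu(n) = 0 if n is not square-free, else +-1 by parity of Omega(n)."""
    f = prime_factors(n)
    if any(m > 1 for m in f.values()):
        return 0
    return 1 if len(f) % 2 == 0 else -1

def a0(n):
    """Sum of primes dividing n, counted with multiplicity."""
    return sum(p * m for p, m in prime_factors(n).items())
```

As a spot check, a0(714) = a0(715) = 29, which is why (714, 715) appears in the Ruth-Aaron list below.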
Properties:
A Ruth-Aaron pair is two consecutive numbers (x, x+1) with a0(x) = a0(x+1). The first (by x value): 5, 8, 15, 77, 125, 714, 948, 1330, 1520, 1862, 2491, 3248 (sequence A039752 in the OEIS); under another definition each prime is counted only once, in which case the first (by x value): 5, 24, 49, 77, 104, 153, 369, 492, 714, 1682, 2107, 2299 (sequence A006145 in the OEIS).
A primorial x# is the product of all primes from 2 to x. The first: 2, 6, 30, 210, 2310, 30030, 510510, 9699690, 223092870, 6469693230, 200560490130, 7420738134810 (sequence A002110 in the OEIS). 1# = 1 is sometimes included.
Properties:
A factorial x! is the product of all numbers from 1 to x. The first: 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, 39916800, 479001600 (sequence A000142 in the OEIS). 0! = 1 is sometimes included.
A k-smooth number (for a natural number k) has largest prime factor ≤ k (so it is also j-smooth for any j > k).
m is smoother than n if the largest prime factor of m is below the largest of n.
A regular number has no prime factor above 5 (so it is 5-smooth). The first: 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16 (sequence A051037 in the OEIS).
A k-powersmooth number has all p^m ≤ k, where p is a prime factor with multiplicity m.
A frugal number has more digits than the number of digits in its prime factorization (when written as in the tables below, with multiplicities above 1 as exponents). The first in decimal: 125, 128, 243, 256, 343, 512, 625, 729, 1024, 1029, 1215, 1250 (sequence A046759 in the OEIS).
An equidigital number has the same number of digits as its prime factorization. The first in decimal: 1, 2, 3, 5, 7, 10, 11, 13, 14, 15, 16, 17 (sequence A046758 in the OEIS).
An extravagant number has fewer digits than its prime factorization. The first in decimal: 4, 6, 8, 9, 12, 18, 20, 22, 24, 26, 28, 30 (sequence A046760 in the OEIS).
An economical number has been defined as a frugal number, but also as a number that is either frugal or equidigital.
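The frugal/equidigital/extravagant distinction can be checked mechanically. A Python sketch, assuming base 10 and the convention above that exponents are written only for multiplicities above 1 (the edge case n = 1 is left out, since its classification depends on convention):

```python
def classify(n):
    """Classify n > 1 as frugal, equidigital or extravagant in base 10 by
    comparing the digits of n with the digits of its prime factorization
    (exponents written only when the multiplicity exceeds 1)."""
    factors, m, p = {}, n, 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    fact_digits = sum(len(str(q)) + (len(str(e)) if e > 1 else 0)
                      for q, e in factors.items())
    n_digits = len(str(n))
    if fact_digits < n_digits:
        return "frugal"
    if fact_digits == n_digits:
        return "equidigital"
    return "extravagant"
```

For example, 125 = 5^3 is written "53" (2 digits versus 3), so it is frugal, while 4 = 2^2 is written "22" (2 digits versus 1), so it is extravagant.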
gcd(m, n) (greatest common divisor of m and n) is the product of all prime factors which are both in m and n (with the smallest multiplicity for m and n).
m and n are coprime (also called relatively prime) if gcd(m, n) = 1 (meaning they have no common prime factor).
lcm(m, n) (least common multiple of m and n) is the product of all prime factors of m or n (with the largest multiplicity for m or n).
gcd(m, n) × lcm(m, n) = m × n. Finding the prime factors is often harder than computing gcd and lcm using other algorithms which do not require known prime factorization.
m is a divisor of n (also called m divides n, or n is divisible by m) if all prime factors of m have at least the same multiplicity in n. The divisors of n are all products of some or all prime factors of n (including the empty product 1 of no prime factors).
The number of divisors can be computed by increasing all multiplicities by 1 and then multiplying them.
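These gcd/lcm and divisor-count rules translate directly into code. A Python sketch operating on factorizations given as {prime: multiplicity} dictionaries:

```python
def gcd_lcm_from_factors(fm, fn):
    """gcd: common primes at the smaller multiplicity;
    lcm: all primes at the larger multiplicity."""
    gcd = 1
    for p in fm.keys() & fn.keys():
        gcd *= p ** min(fm[p], fn[p])
    lcm = 1
    for p in fm.keys() | fn.keys():
        lcm *= p ** max(fm.get(p, 0), fn.get(p, 0))
    return gcd, lcm

def divisor_count(factors):
    """Increase every multiplicity by 1 and multiply."""
    count = 1
    for m in factors.values():
        count *= m + 1
    return count
```

With m = 12 = 2^2 × 3 and n = 18 = 2 × 3^2 this gives gcd = 6 and lcm = 36, and gcd × lcm = 216 = 12 × 18, as stated above.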
Divisors and properties related to divisors are shown in table of divisors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Standard cubic feet per minute**
Standard cubic feet per minute:
Standard cubic feet per minute (SCFM) is the molar flow rate of a gas expressed as a volumetric flow at a "standardized" temperature and pressure thus representing a fixed number of moles of gas regardless of composition and actual flow conditions. It is related to the mass flow rate of the gas by a multiplicative constant which depends only on the molecular weight of the gas. There are different standard conditions for temperature and pressure, so care is taken when choosing a particular standard value. Worldwide, the "standard" condition for pressure is variously defined as an absolute pressure of 101,325 pascals (Atmospheric pressure), 1.0 bar (i.e., 100,000 pascals), 14.73 psia, or 14.696 psia and the "standard" temperature is variously defined as 68 °F, 60 °F, 0 °C, 15 °C, 20 °C, or 25 °C. The relative humidity (e.g., 36% or 0%) is also included in some definitions of standard conditions.
Standard cubic feet per minute:
In Europe, the standard temperature is most commonly defined as 0 °C, but not always. In the United States, the standard temperature is most commonly defined as 60 °F or 70 °F, but again, not always. A variation in standard temperature can result in a significant volumetric variation for the same mass flow rate. For example, a mass flow rate of 1,000 kg/h of air at 1 atmosphere of absolute pressure is 455 SCFM when defined at 32 °F (0 °C) but 481 SCFM when defined at 60 °F (16 °C).
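The 455/481 figures can be reproduced from the ideal-gas law. A Python sketch, assuming dry air with a molar mass of 28.9647 g/mol and a standard pressure of 1 atm (both are assumptions, since the text notes that standards vary):

```python
R = 8.314462618          # molar gas constant, J/(mol K)
ATM = 101325.0           # 1 atmosphere in Pa
FT3 = 0.028316846592     # one cubic foot in m^3
M_AIR = 28.9647          # molar mass of dry air, g/mol (assumed composition)

def scfm_air(mass_kg_per_h, t_std_k, p_std_pa=ATM):
    """Ideal-gas volumetric flow, in cubic feet per minute, at the chosen
    standard conditions for a given mass flow of air."""
    mol_per_h = mass_kg_per_h * 1000.0 / M_AIR
    m3_per_h = mol_per_h * R * t_std_k / p_std_pa
    return m3_per_h / FT3 / 60.0

flow_0c = scfm_air(1000, 273.15)    # standard temperature 32 F (0 C)
flow_60f = scfm_air(1000, 288.71)   # standard temperature 60 F
```

Rounding the two results reproduces the 455 SCFM and 481 SCFM quoted above, illustrating how much the chosen standard temperature matters.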
Standard cubic feet per minute:
In countries using the SI metric system of units, the term "normal cubic metre" (Nm3) is very often used to denote gas volumes at some normalized or standard condition. Again, as noted above, there is no universally accepted set of normalized or standard conditions.
Actual cubic feet per minute:
Actual cubic foot per minute (ACFM) is the volume of gas flowing anywhere in a system, taking into account its temperature and pressure. If the system were moving a gas at exactly the "standard" condition, then ACFM would equal SCFM. This usually is not the case as the most important change between these two definitions is the pressure. To move a gas, a positive pressure or a vacuum must be created. When positive pressure is applied to a standard cubic foot of gas, it is compressed. When a vacuum is applied to a standard cubic foot of gas, it expands. The volume of gas after it is pressurized or rarefied is referred to as its "actual" volume.
Actual cubic feet per minute:
SCF and ACF for an ideal gas are related in accordance with the combined gas law:
P1V1/T1 = P2V2/T2
Defining standard conditions by the subscript 1 and actual conditions by the subscript 2, then:
SCF = ACF × (P_actual/P_standard) × (T_standard/T_actual)
where P is in absolute pressure units and T is in absolute temperature units (i.e., either kelvins or degrees Rankine).
This is only valid at pressures and temperatures close to standard conditions. For non-ideal gases (most gases) a compressibility factor "Z" is introduced to allow for non-ideality. To introduce the compressibility factor to the equation, divide ACF by "Z".
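A minimal Python sketch of this conversion, with the compressibility correction folded in as an optional Z argument (Z = 1 recovers the ideal-gas form):

```python
def acf_to_scf(acf, p_actual, t_actual, p_standard, t_standard, z=1.0):
    """SCF = ACF * (P_actual / P_standard) * (T_standard / T_actual),
    with pressures and temperatures in absolute units.  Dividing by the
    compressibility factor Z corrects for non-ideality; Z = 1 recovers
    the ideal-gas form."""
    return acf * (p_actual / p_standard) * (t_standard / t_actual) / z

# Gas compressed to twice standard pressure at standard temperature
# packs twice as many standard cubic feet into each actual cubic foot.
doubled = acf_to_scf(100.0, 2 * 101325.0, 273.15, 101325.0, 273.15)
```

Any consistent absolute units may be used, since only the pressure and temperature ratios enter the formula.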
Cubic feet per minute:
Cubic feet per minute (CFM) is an often confusing term because it has no single definition that applies to all instances. Gases are compressible, which means that a figure in cubic feet per minute cannot necessarily be compared with another figure when it comes to the mass of the gas. To further confuse the issue, a centrifugal fan is a constant-CFM device, or a constant-volume device. This means that, provided the fan speed remains constant, a centrifugal fan will pump a constant volume of air. This is not the same as pumping a constant mass of air. Again, the fan will pump the same volume, though not mass, at any other air density. This means that the air velocity in a system is the same even though the mass flow rate through the fan is not.
**Lapsed listener problem**
Lapsed listener problem:
In computer programming, the lapsed listener problem is a common source of memory leaks for object-oriented programming languages, among the most common ones for garbage collected languages. It originates in the observer pattern, where observers (or listeners) register with a subject (or publisher) to receive events. In a basic implementation, this requires both explicit registration and explicit deregistration, as in the dispose pattern, because the subject holds strong references to the observers, keeping them alive. The leak happens when an observer fails to unsubscribe from the subject when it no longer needs to listen. Consequently, the subject still holds a reference to the observer which prevents it from being garbage collected — including all other objects it is referring to — for as long as the subject is alive, which could be until the end of the application.
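A minimal Python sketch of the weak-reference remedy discussed in this article (class names are illustrative): the subject stores its observers in a WeakSet, so a listener that lapses without deregistering can still be collected:

```python
import gc
import weakref

class Subject:
    """Publisher holding only weak references to observers, so a listener
    that forgets to deregister can still be garbage collected."""
    def __init__(self):
        self._observers = weakref.WeakSet()

    def register(self, observer):
        self._observers.add(observer)

    def publish(self, event):
        # Observers that have been collected have silently left the set.
        for observer in self._observers:
            observer.notify(event)

class Observer:
    def __init__(self):
        self.received = []

    def notify(self, event):
        self.received.append(event)

subject = Subject()
listener = Observer()
subject.register(listener)
subject.publish("tick")
delivered = list(listener.received)

# The listener lapses: it is dropped without ever deregistering.
del listener
gc.collect()   # CPython usually frees it immediately; collect() makes it certain
```

After collection the subject's observer set is empty, so no stale events are delivered and no memory is retained: the lapsed listener problem does not arise.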
Lapsed listener problem:
This causes not only a memory leak, but also a performance degradation with an "uninterested" observer receiving and acting on unwanted events. This can be prevented by the subject holding weak references to the observers, allowing them to be garbage collected as normal without needing to be unregistered. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Neurotensin receptor 1**
Neurotensin receptor 1:
Neurotensin receptor type 1 is a protein that in humans is encoded by the NTSR1 gene. For a crystal structure of NTS1, see PDB code 4GRV. In addition, high-resolution crystal structures have been determined in complex with the peptide full agonist NTS8-13, the non-peptide full agonist SRI-9829, the partial agonist RTI-3a, and the antagonists/inverse agonists SR48692 and SR142948A, as well as in the ligand-free apo state; see PDB codes 6YVR (NTSR1-H4X:NTS8–13), 6Z4V (NTSR1-H4bmX:NTS8–13), 6Z8N (NTSR1-H4X:SRI-9829), 6ZA8 (NTSR1-H4X:RTI-3a), 6Z4S (NTSR1-H4bmX:SR48692), 6ZIN (NTSR1-H4X:SR48692), 6Z4Q (NTSR1-H4X:SR142948A), and 6Z66 (apo NTSR1-H4X).
Function:
Neurotensin receptor 1, also called NTR1, belongs to the large superfamily of G-protein coupled receptors and is considered a class-A GPCR. NTSR1 mediates multiple biological processes through modulation by neurotensin, such as low blood pressure, high blood sugar, low body temperature, antinociception, anti-neuronal damage and regulation of intestinal motility and secretion.
Ligands:
ML314 – β-arrestin-biased agonist
Neurotensin (NT1)
**Computational Infrastructure for Geodynamics**
Computational Infrastructure for Geodynamics:
The Computational Infrastructure for Geodynamics (CIG) is a community-driven organization that advances Earth science by developing and disseminating software for geophysics and related fields. It is a National Science Foundation-sponsored collaborative effort to improve geodynamic modelling and develop, support, and disseminate open-source software for the geodynamics research and higher education communities.
CIG is located at the University of California, Davis, and is a member-governed consortium with 62 US institutional members and 15 international affiliates.
History:
CIG was established in 2005 in response to the need for coordinated development and dissemination of software for geodynamics applications. Founded under an NSF cooperative agreement to Caltech, CIG moved in 2010 to UC Davis under a new cooperative agreement from NSF.
Software:
CIG hosts open source software in a wide range of disciplines and topic areas, such as geodynamics, computational science, seismology, mantle convection, long-term tectonics, and short-term crustal dynamics.
Software Attribution for Geoscience Applications (SAGA):
CIG started the SAGA project with an NSF EAGER award from the SBE Office of Multidisciplinary Activities for "Development of Software Citation Methodology for Open Source Computational Science". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pseudomon-1 RNA motif**
Pseudomon-1 RNA motif:
The Pseudomon-1 RNA motif is a conserved RNA identified by bioinformatics. It is used by most species whose genomes have been sequenced and that are classified within the genus Pseudomonas, and is also present in Azotobacter vinelandii, a closely related species. It is presumed to function as a non-coding RNA. Pseudomon-1 RNAs consistently have a downstream rho-independent transcription terminator.
Pseudomon-1 RNA motif:
The intergenic region containing Pseudomon-1 RNAs was also detected later by an independent study based on deep sequencing and called SPA0122. The genes surrounding this region suggest a similarity to Spot 42 RNA, but the Pseudomonas RNA functions to regulate the AlgC enzyme. Based on this information, the new name ErsA for the RNA was adopted. ErsA negatively regulates expression of the major porin OprD, which is responsible for uptake of carbapenem antibiotics, by base pairing with the oprD 5′ UTR (leading to increased bacterial resistance to meropenem). ErsA also positively regulates amrZ mRNA and contributes to biofilm formation and motility. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Liquid crystal on silicon**
Liquid crystal on silicon:
Liquid crystal on silicon (LCoS or LCOS) is a miniaturized reflective active-matrix liquid-crystal display or "microdisplay" using a liquid crystal layer on top of a silicon backplane. It is also referred to as a spatial light modulator. LCoS was initially developed for projection televisions but is now used for wavelength selective switching, structured illumination, near-eye displays and optical pulse shaping. By way of comparison, some LCD projectors use transmissive LCD, allowing light to pass through the liquid crystal.
Liquid crystal on silicon:
In an LCoS display, a complementary metal–oxide–semiconductor (CMOS) chip controls the voltage on square reflective aluminium electrodes buried just below the chip surface, each controlling one pixel. For example, a chip with XGA resolution will have 1024x768 plates, each with an independently addressable voltage. Typical cells are about 1–3 centimeters square and about 2 mm thick, with pixel pitch as small as 2.79 μm. A common voltage for all the pixels is supplied by a transparent conductive layer made of indium tin oxide on the cover glass.
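The pixel-count arithmetic above is easy to make concrete. A minimal sketch, assuming a hypothetical XGA backplane built at the 2.79 μm pitch quoted above (real devices pair resolutions and pitches differently):

```python
# Back-of-the-envelope active-area calculation for an LCoS backplane.
# The XGA resolution and 2.79 um pitch both appear in the text, but
# combining them in a single device is an illustrative assumption.

def active_area_mm(cols: int, rows: int, pitch_um: float) -> tuple[float, float]:
    """Return (width_mm, height_mm) of the pixel array."""
    return (cols * pitch_um / 1000.0, rows * pitch_um / 1000.0)

width, height = active_area_mm(1024, 768, 2.79)
print(f"{width:.2f} mm x {height:.2f} mm")  # 2.86 mm x 2.14 mm
```

At such a fine pitch the entire XGA array fits in a few square millimetres, which is why LCoS suits near-eye and pico-projector optics.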
Displays:
History:
The history of LCoS projectors dates back to the late 1980s, when the technology was first developed. At the time, the primary use of LCoS projectors was in the military and scientific fields due to their large and bulky size. However, in the late 1990s, companies like JVC and Hughes Electronics began developing smaller and more affordable LCoS projectors for commercial use.
Displays:
The early LCoS projectors had their challenges. They suffered from a phenomenon called "image sticking", in which a faint residue of a previous image remained visible after it was supposed to be gone, resulting in ghosting on the screen. Manufacturers continued to refine the technology, however, and today's LCoS projectors have largely overcome this issue.
Displays:
One of the biggest milestones in the history of LCoS projectors came in 2004, when Sony introduced its SXRD (Silicon X-tal Reflective Display) technology. SXRD was an evolution of LCoS technology that used even smaller pixels and a higher resolution, resulting in an even more accurate image. The SXRD technology was used in Sony's high-end home theater projectors, and it quickly gained a reputation for its exceptional picture quality.
Displays:
Another significant development in the history of LCoS projectors came in 2006, when JVC introduced its D-ILA (Direct-Drive Image Light Amplifier) technology. D-ILA was an improvement over traditional LCoS projectors in that it eliminated the need for a polarizing filter, resulting in a brighter and more vibrant image. The D-ILA technology has since become a popular choice for home theater enthusiasts.
Displays:
In recent years, LCoS projectors have continued to evolve, with manufacturers introducing features like 4K resolution and HDR (High Dynamic Range) support. LCoS projectors are now available at a range of price points, from affordable models for home theater use to high-end professional models used in commercial installations.
Overall, the history of LCoS projectors is one of innovation and improvement. From their early days as bulky military and scientific equipment to today's sleek and affordable consumer models, LCoS projectors have come a long way. Their high image quality, accurate color reproduction, and other advantages have made them a popular choice for home theater enthusiasts and professionals alike.
Display system architectures:
LCoS (liquid crystal on silicon) display technology is a type of microdisplay that has gained popularity due to its high image quality and ability to display high-resolution images. LCoS display systems typically consist of three main components: the LCoS panel, the light source, and the optical system.
Displays:
The LCoS panel is the heart of the display system. It consists of an array of pixels arranged in a grid pattern. Each pixel is made up of a liquid crystal layer, a reflective layer, and a silicon substrate. The liquid crystal layer controls the polarization of light that passes through it, while the reflective layer reflects the light back towards the optical system. The silicon substrate is used to control the individual pixels and provides the necessary electronics to drive the LCoS panel.
Displays:
The light source is used to provide the necessary illumination for the LCoS panel. The most common light source used in LCoS display systems is a high-intensity lamp. This lamp emits a broad spectrum of light that is filtered through a color wheel or other optical components to provide the necessary color gamut for the display system.
Displays:
The optical system is responsible for directing the light from the light source onto the LCoS panel and projecting the resulting image onto a screen or other surface. The optical system consists of a number of lenses, mirrors, and other optical components that are carefully designed and calibrated to provide the necessary magnification, focus, and color correction for the display system.
Displays:
There are two main types of LCoS display systems: transmissive and reflective. Transmissive LCoS displays use a backlight behind the LCoS panel to provide illumination, while reflective LCoS displays use the ambient light in the environment to illuminate the LCoS panel. Reflective LCoS displays are more power-efficient and offer better contrast ratios than transmissive displays, but are typically more expensive to manufacture.
Displays:
Three-panel designs:
The white light is separated into three components (red, green and blue) and then combined back after modulation by the three LCoS devices. The light is additionally polarized by beam splitters.
Displays:
One-panel designs:
Both Toshiba's and Intel's single-panel LCoS display programs were discontinued in 2004 before any units reached final-stage prototype. There were single-panel LCoS displays in production: one by Philips and one by Microdisplay Corporation. Forth Dimension Displays continues to offer a ferroelectric LCoS display technology (known as Time Domain Imaging), available in QXGA, SXGA and WXGA resolutions, which today is used for high-resolution near-eye applications such as training and simulation, and for structured light pattern projection in AOI. Citizen Finedevice (CFD) also continues to manufacture single-panel RGB displays using FLCoS technology (ferroelectric liquid crystals). They manufacture displays in multiple resolutions and sizes that are currently used in pico-projectors, electronic viewfinders for high-end digital cameras, and head-mounted displays.
Displays:
Pico projectors, near-eye and head-mounted displays:
Whilst initially developed for large-screen projectors, LCoS displays have found a consumer niche in the area of pico-projectors, where their small size and low power consumption are well matched to the constraints of such devices.
Displays:
LCoS devices are also used in near-eye applications such as electronic viewfinders for digital and film cameras, and in head-mounted displays (HMDs). These devices are made using ferroelectric liquid crystals (so the technology is named FLCoS), which are inherently faster than other types of liquid crystals and can produce high-quality images. Google's initial foray into wearable computing, Google Glass, also uses a near-eye LCoS display.
Displays:
At CES 2018, Hong Kong Applied Science and Technology Research Institute Company Limited (ASTRI) and OmniVision showcased a reference design for a wireless augmented reality headset that could achieve 60 degree field of view (FoV). It combined a single-chip 1080p LCOS display and image sensor from OmniVision with ASTRI's optics and electronics. The headset is said to be smaller and lighter than others because of its single-chip design with integrated driver and memory buffer.
Wavelength-selective switches:
LCoS is particularly attractive as a switching mechanism in a wavelength-selective switch (WSS). LCoS-based WSS were initially developed by the Australian company Engana, now part of Finisar. The LCoS can be employed to control the phase of light at each pixel to produce beam steering, where the large number of pixels allows a near-continuous addressing capability. Typically, a large number of phase steps are used to create a highly efficient, low-insertion-loss switch. This simple optical design incorporates polarisation diversity, control of mode size, and 4-f wavelength optical imaging in the dispersive axis of the LCoS, providing integrated switching and optical power control.

In operation, the light passes from a fibre array through the polarisation imaging optics, which physically separates and aligns the orthogonal polarisation states into the high-efficiency s-polarisation state of the diffraction grating. The input light from a chosen fibre of the array is reflected from the imaging mirror and then angularly dispersed by the grating, which is at near-Littrow incidence, reflecting the light back to the imaging optics, which directs each channel to a different portion of the LCoS. The path for each wavelength is then retraced upon reflection from the LCoS, with the beam-steering image applied on the LCoS directing the light to a particular port of the fibre array. As the wavelength channels are separated on the LCoS, the switching of each wavelength is independent of all others and can be performed without interfering with the light on other channels. There are many different algorithms that can be implemented to achieve a given coupling between ports, including less efficient "images" for attenuation or power splitting.
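The per-pixel phase control described above amounts to writing a blazed (sawtooth) phase grating across the pixels assigned to a channel; the grating period sets the steering angle. A minimal sketch, with all parameter values chosen for illustration rather than taken from any real WSS:

```python
import numpy as np

# Illustrative beam-steering pattern for an LCoS wavelength-selective switch:
# a linear phase ramp, wrapped modulo 2*pi, acts as a blazed grating whose
# first diffraction order leaves at sin(theta) = wavelength / period.

def phase_ramp(n_pixels: int, pitch_um: float, wavelength_um: float,
               theta_deg: float) -> np.ndarray:
    """Wrapped per-pixel phase (radians) that steers light by theta_deg."""
    period_um = wavelength_um / np.sin(np.radians(theta_deg))  # grating period
    x = np.arange(n_pixels) * pitch_um                         # pixel positions
    return np.mod(2 * np.pi * x / period_um, 2 * np.pi)

# Example: 256 pixels at 8 um pitch, 1550 nm light, 2 degree steering angle.
ramp = phase_ramp(n_pixels=256, pitch_um=8.0, wavelength_um=1.55, theta_deg=2.0)
```

Because each wavelength channel lands on its own block of pixels, writing a different ramp slope per block steers each channel to a different output port independently, as described above.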
Wavelength-selective switches:
WSS based on MEMS and/or liquid crystal technologies allocate a single switching element (pixel) to each channel, which means the bandwidth and centre frequency of each channel are fixed at the time of manufacture and cannot be changed in service. In addition, many designs of first-generation WSS (particularly those based on MEMS technology) show pronounced dips in the transmission spectrum between channels due to the limited spectral 'fill factor' inherent in these designs. This prevents the simple concatenation of adjacent channels to create a single broader channel. LCoS-based WSS, however, permit dynamic control of channel centre frequency and bandwidth through on-the-fly modification of the pixel arrays via embedded software. The control of channel parameters can be very fine-grained, with independent control of the centre frequency and of either the upper or lower band edge of a channel with better than 1 GHz resolution. This is advantageous from a manufacturability perspective, with different channel plans able to be created from a single platform, and even different operating bands (such as C and L) able to use an identical switch matrix. Additionally, it is possible to take advantage of this ability to reconfigure channels while the device is operating. Products have been introduced allowing switching between 50 GHz channels and 100 GHz channels, or a mix of channels, without introducing any errors or "hits" to the existing traffic. More recently, this has been extended to support the whole concept of flexible or elastic networks under ITU G.694.1 through products such as Finisar's Flexgrid™ WSS.
Other LCoS applications:
Optical pulse shaping:
The ability of an LCoS-based WSS to independently control both the amplitude and phase of the transmitted signal leads to the more general ability to manipulate the amplitude and/or phase of an optical pulse through a process known as Fourier-domain pulse shaping. This process requires full characterisation of the input pulse in both the time and spectral domains.
Other LCoS applications:
As an example, an LCoS-based Programmable Optical Processor (POP) has been used to broaden a mode-locked laser output into a 20 nm supercontinuum source whilst a second such device was used to compress the output to 400 fs, transform-limited pulses. Passive mode-locking of fiber lasers has been demonstrated at high repetition rates, but inclusion of an LCoS-based POP allowed the phase content of the spectrum to be changed to flip the pulse train of a passively mode-locked laser from bright to dark pulses. A similar approach uses spectral shaping of optical frequency combs to create multiple pulse trains. For example, a 10 GHz optical frequency comb was shaped by the POP to generate dark parabolic pulses and Gaussian pulses, at 1540 nm and 1560 nm, respectively.
Other LCoS applications:
Light structuring:
Structured light using a fast ferroelectric LCoS is used in 3D-superresolution microscopy techniques and in fringe projection for 3D automated optical inspection.
Other LCoS applications:
Modal switching in space-division multiplexed optical communications systems:
One of the interesting applications of LCoS is the ability to transform between modes of few-mode optical fibers, which have been proposed as the basis of higher-capacity transmission systems in the future. Similarly, LCoS has been used to steer light into selected cores of multicore fiber transmission systems, again as a form of space-division multiplexing.
Other LCoS applications:
Tunable lasers:
LCoS has been used as a filtering technique, and hence a tuning mechanism, for both semiconductor diode and fiber lasers.
**2,5-Dimethoxy-4-trifluoromethylamphetamine**
2,5-Dimethoxy-4-trifluoromethylamphetamine:
2,5-Dimethoxy-4-trifluoromethylamphetamine (DOTFM) is a psychedelic drug of the phenethylamine and amphetamine chemical classes. It was first synthesized in 1994 by a team at Purdue University led by David E. Nichols. DOTFM is the alpha-methylated analogue of 2C-TFM, and is around twice as potent in animal studies. It acts as an agonist at the 5-HT2A and 5-HT2C receptors. In drug-substitution experiments in rats, DOTFM fully substituted for LSD and was slightly more potent than DOI.
**Cypress forest**
Cypress forest:
A Cypress forest is a western United States plant association typically dominated by one or more cypress species. Example species comprising the canopy include Cupressus macrocarpa. In some cases these forests have been severely damaged by goats, cattle and other grazing animals. While cypress species are clearly dominant within a Cypress forest, other trees such as California Buckeye, Aesculus californica, are found in some Cypress forests.
Examples:
The Guadalupe Island Cypress Forest is situated on Guadalupe Island, offshore from Baja California. This forest was largely destroyed by the introduction of grazing goats, but conservation biology efforts have been conducted to assist in restoring the forest. Another example, on the Pacific Coast mainland of Northern California, is the Sargent's cypress forest located in coastal Marin County, California.
**Subaerial eruption**
Subaerial eruption:
A subaerial eruption is any sort of volcanic eruption that occurs on the Earth's surface, in the open air ("subaerial" literally means "under the air"), and not underwater or underground. Subaerial eruptions generally produce pyroclastic flows, lava fountains and lava flows, which are commonly classified into different subaerial eruption types, including Plinian, Peléan and Hawaiian eruptions. Subaerial eruptions contrast with subaqueous, submarine and subglacial eruptions, which all originate below some form of water surface.
**Oxygen rebound mechanism**
Oxygen rebound mechanism:
In biochemistry, the oxygen rebound mechanism is the pathway for hydroxylation of organic compounds by iron-containing oxygenases. Many enzymes effect the hydroxylation of hydrocarbons as a means for biosynthesis, detoxification, gene regulation, and other functions. These enzymes often utilize Fe-O centers that convert C-H bonds into C-OH groups. The oxygen rebound mechanism starts with abstraction of H from the hydrocarbon, giving an organic radical and an iron hydroxide. In the rebound step, the organic radical attacks the Fe-OH center to give an alcohol group, which is bound to Fe as a ligand. Dissociation of the alcohol from the metal allows the cycle to start anew. This mechanistic scenario is an alternative to the direct insertion of an O center into a C-H bond. The pathway is an example of C-H activation.
Oxygen rebound mechanism:
Three main classes of these enzymes are cytochrome P450, alpha-ketoglutarate-dependent hydroxylases, and nonheme-diiron hydroxylases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Royale (brand)**
Royale (brand):
Royale is a Canadian brand of consumer household paper products such as facial tissue, bathroom tissue, paper towel, and paper napkins.
History:
In 1929, the New York-based National Cellulose Company dissolved a relationship with its Canadian distributor and opened its first Canadian office in downtown Toronto. In 1936, Toronto businessman William S. Gibson and a team of investors bought out National Cellulose, and Dominion Cellulose was formed. Dominion Cellulose continued to sell its Facelle tissue in Canada until 1961, when the company was sold to Canadian International Paper Company and was renamed Facelle Company. Facelle Company launched its Royale brand in 1963 with two products: 3-ply facial tissue and 2-ply bathroom tissue.
History:
In August 1991 the Royale brand was sold to Procter & Gamble where it remained until 2001 when Irving Tissue purchased P&G’s Weston Road plant in Toronto, Ontario along with the rights to the Royale brand.
The Royale Kittens:
The Royale brand is represented by the Royale Kittens, two white Persian kittens who embody the kitten-y softness of Royale products. They first appeared in a 1973 television commercial, which ran until 1984. Since then, the Kittens have appeared in television, print, and Internet marketing material for Royale. In 2010, an official Facebook community page was created in the name of the Royale Kittens.
Advertising:
Royale’s longest-running television ad campaign ran from 1973 to 1984, and featured the Royale Kittens playing on a white shag rug and unwinding rolls of bathroom tissue. Other memorable campaigns include Royale’s “The Nose” spot featuring pro hockey player Eddie Shack.
**Clinical formulation**
Clinical formulation:
A clinical formulation, also known as case formulation and problem formulation, is a theoretically based explanation or conceptualisation of the information obtained from a clinical assessment. It offers a hypothesis about the cause and nature of the presenting problems and is considered an adjunct or alternative approach to the more categorical approach of psychiatric diagnosis. In clinical practice, formulations are used to communicate a hypothesis and provide a framework for developing the most suitable treatment approach. Formulation is most commonly used by clinical psychologists and is deemed to be a core component of that profession. Mental health nurses, social workers, and some psychiatrists may also use formulations.
Types of formulation:
Different psychological schools or models utilize clinical formulations, including cognitive behavioral therapy (CBT) and related therapies: systemic therapy, systemic hypothesising, psychodynamic therapy, and applied behavior analysis. The structure and content of a clinical formulation are determined by the psychological model. Most systems of formulation contain the following broad categories of information: symptoms and problems; precipitating stressors or events; predisposing life events or stressors; and an explanatory mechanism that links the preceding categories together and offers a description of the precipitants and maintaining influences of the person's problems.

Behavioral case formulations used in applied behavior analysis and behavior therapy are built on a ranked list of problem behaviors, from which a functional analysis is conducted, sometimes based on relational frame theory. Such functional analysis is also used in third-generation behavior therapy or clinical behavior analysis, such as acceptance and commitment therapy and functional analytic psychotherapy. Functional analysis looks at setting events (ecological variables, history effects, and motivating operations), antecedents, behavior chains, the problem behavior, and the short- and long-term consequences of the behavior.

A model of formulation that is more specific to CBT is described by Jacqueline Persons. This has seven components: problem list, core beliefs, precipitants and activating situations, origins, working hypothesis, treatment plan, and predicted obstacles to treatment.
Types of formulation:
A psychodynamic formulation would consist of a summarizing statement, a description of nondynamic factors, a description of core psychodynamics using a specific model (such as ego psychology, object relations, or self psychology), and a prognostic assessment which identifies the potential areas of resistance in therapy.

One school of psychotherapy which relies heavily on the formulation is cognitive analytic therapy (CAT). CAT is a fixed-term therapy, typically of around 16 sessions. At around session four, a formal written reformulation letter is offered to the patient, which forms the basis for the rest of the treatment. This is usually followed by a diagrammatic reformulation to amplify and reinforce the letter.

Many psychologists use an integrative psychotherapy approach to formulation. This takes advantage of resources from each model the psychologist is trained in, according to the patient's needs.
Critical evaluation of formulations:
The quality of specific clinical formulations, and the quality of the general theoretical models used in those formulations, can be evaluated with criteria such as:
Clarity and parsimony: Is the model understandable and internally consistent, and are key concepts discrete, specific, and non-redundant?
Precision and testability: Does the model produce testable hypotheses, with operationally defined and measurable concepts?
Empirical adequacy: Are the posited mechanisms within the model empirically validated?
Comprehensiveness and generalizability: Is the model holistic enough to apply across a range of clinical phenomena?
Utility and applied value: Does it facilitate shared meaning-making between clinician and client, and are interventions based on the model shown to be effective?
Formulations can vary in temporal scope from case-based to episode-based or moment-based, and formulations may evolve during the course of treatment. Therefore, ongoing monitoring, testing, and assessment during treatment are necessary: monitoring can take the form of session-by-session progress reviews using quantitative measures, and formulations can be modified if an intervention is not as effective as hoped.
History:
Psychologist George Kelly, who developed personal construct theory in the 1950s, noted his complaint against traditional diagnosis in his book The Psychology of Personal Constructs (1955): "Much of the reform proposed by the psychology of personal constructs is directed towards the tendency for psychologists to impose preemptive constructions upon human behaviour. Diagnosis is all too frequently an attempt to cram a whole live struggling client into a nosological category." (p. 154) In place of nosological categories, Kelly used the word "formulation" and mentioned two types of formulation (p. 337): a first stage of structuralization, in which the clinician tentatively organizes clinical case information "in terms of dimensions rather than in terms of disease entities" (p. 192) while focusing on "the more important ways in which the client can change, and not merely ways in which the psychologist can distinguish him from other persons" (p. 154), and a second stage of construction, in which the clinician seeks a kind of negotiated integration of the clinician's organization of the case information with the client's personal meanings. Psychologists Hans Eysenck, Monte B. Shapiro, Vic Meyer, and Ira Turkat were also among the early developers of systematic individualized alternatives to diagnosis (p. 4). Meyer has been credited with providing perhaps the first training course of behaviour therapy based on a case formulation model, at the Middlesex Hospital Medical School in London in 1970 (p. 13). Meyer's original choice of words for clinical formulation was "behavioural formulation" or "problem formulation" (p. 14).
**Interpolation inequality**
Interpolation inequality:
In the field of mathematical analysis, an interpolation inequality is an inequality of the form

$\|u_0\|_0 \leq C \, \|u_1\|_1^{\alpha_1} \|u_2\|_2^{\alpha_2} \cdots \|u_n\|_n^{\alpha_n}, \qquad n \geq 2,$

where, for $0 \leq k \leq n$, $u_k$ is an element of some particular vector space $X_k$ equipped with norm $\|\cdot\|_k$, $\alpha_k$ is some real exponent, and $C$ is some constant independent of $u_0, \ldots, u_n$. The vector spaces concerned are usually function spaces, and many interpolation inequalities assume $u_0 = u_1 = \cdots = u_n$ and so bound the norm of an element in one space by a combination of norms in other spaces, such as Ladyzhenskaya's inequality and the Gagliardo–Nirenberg interpolation inequality, both given below. Nonetheless, some important interpolation inequalities involve distinct elements $u_0, \ldots, u_n$, including Hölder's inequality and Young's inequality for convolutions, which are also presented below.
Applications:
The main applications of interpolation inequalities lie in fields of study, such as partial differential equations, where various function spaces are used. Important examples are the Sobolev spaces, consisting of functions whose weak derivatives up to some (not necessarily integer) order lie in $L^p$ spaces for some $p$. There, interpolation inequalities are used, roughly speaking, to bound derivatives of some order with a combination of derivatives of other orders. They can also be used to bound products, convolutions, and other combinations of functions, often with some flexibility in the choice of function space. Interpolation inequalities are fundamental to the notion of an interpolation space, such as the space $W^{s,p}$, which loosely speaking is composed of functions whose $s$th-order weak derivatives lie in $L^p$. Interpolation inequalities are also applied when working with Besov spaces $B^s_{p,q}(\Omega)$, which are a generalization of the Sobolev spaces. Another class of spaces admitting interpolation inequalities are the Hölder spaces.
Examples:
A simple example of an interpolation inequality, one in which all the $u_k$ are the same $u$ but the norms $\|\cdot\|_k$ differ, is Ladyzhenskaya's inequality for functions $u \colon \mathbb{R}^2 \to \mathbb{R}$, which states that whenever $u$ is a compactly supported function such that both $u$ and its gradient $\nabla u$ are square integrable, it follows that the fourth power of $u$ is integrable, and

$\int_{\mathbb{R}^2} |u(x)|^4 \, dx \leq 2 \int_{\mathbb{R}^2} |u(x)|^2 \, dx \int_{\mathbb{R}^2} |\nabla u(x)|^2 \, dx,$

i.e. $\|u\|_{L^4}^2 \leq \sqrt{2} \, \|u\|_{L^2} \|\nabla u\|_{L^2}$.
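Ladyzhenskaya's inequality can be sanity-checked numerically. A sketch using the hypothetical test function $u(x,y) = e^{-(x^2+y^2)/2}$, which is effectively compactly supported on a moderately large box and whose integrals have known closed forms ($\int |u|^4 = \pi/2$ and $\int |u|^2 = \int |\nabla u|^2 = \pi$):

```python
import numpy as np

# Numerical check of Ladyzhenskaya's inequality in 2D for a Gaussian,
# approximating the integrals by Riemann sums on a uniform grid.

n, L = 512, 16.0                              # grid resolution and box size
h = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-(X**2 + Y**2) / 2)

ux, uy = np.gradient(u, h, h)                 # finite-difference gradient
int_u4 = np.sum(u**4) * h**2                  # ~ pi/2
int_u2 = np.sum(u**2) * h**2                  # ~ pi
int_grad2 = np.sum(ux**2 + uy**2) * h**2      # ~ pi

assert int_u4 <= 2 * int_u2 * int_grad2       # the inequality holds
```

Here the left side is about $\pi/2 \approx 1.57$ while the right side is about $2\pi^2 \approx 19.7$, so the bound holds with considerable room to spare, consistent with the Gaussian being far from an extremizer.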
Examples:
A slightly weaker form of Ladyzhenskaya's inequality applies in dimension 3, and Ladyzhenskaya's inequality is actually a special case of a general result that subsumes many of the interpolation inequalities involving Sobolev spaces, the Gagliardo–Nirenberg interpolation inequality.

The following example, which allows interpolation of non-integer Sobolev spaces, is also a special case of the Gagliardo–Nirenberg interpolation inequality. Denoting the $L^2$ Sobolev spaces by $H^k = W^{k,2}$, and given real numbers $1 \leq k < \ell < m$ and a function $u \in H^m$, we have

$\|u\|_{H^\ell} \leq \|u\|_{H^k}^{\frac{m-\ell}{m-k}} \|u\|_{H^m}^{\frac{\ell-k}{m-k}}.$

An example of an interpolation inequality where the elements differ is Young's inequality for convolutions. Given exponents $1 \leq p, q, r \leq \infty$ such that $\frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r}$, and functions $f \in L^p$, $g \in L^q$, their convolution lies in $L^r$ and

$\|f * g\|_{L^r} \leq \|f\|_{L^p} \|g\|_{L^q}.$

The well-known Hölder's inequality is another of this type: given $1 \leq p, q \leq \infty$ with $1/p + 1/q = 1$ and functions $f \in L^p(\Omega)$, $g \in L^q(\Omega)$, their product is in $L^1(\Omega)$ and

$\|fg\|_{L^1} \leq \|f\|_{L^p} \|g\|_{L^q}.$
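The Sobolev interpolation estimate above can be verified spectrally: on the circle, the $H^s$ norm is a weighted $\ell^2$ sum of Fourier coefficients, and the inequality (with constant 1) follows from Hölder applied to those sums. A sketch, with an arbitrary illustrative test function:

```python
import numpy as np

# Spectral check of ||u||_{H^l} <= ||u||_{H^k}^{(m-l)/(m-k)} ||u||_{H^m}^{(l-k)/(m-k)}
# for periodic u, defining ||u||_{H^s}^2 = sum_k (1 + k^2)^s |u_hat(k)|^2.

def h_norm(u: np.ndarray, s: float) -> float:
    n = u.size
    u_hat = np.fft.fft(u) / n                  # normalized Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    return float(np.sqrt(np.sum((1 + k**2) ** s * np.abs(u_hat) ** 2)))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.cos(5 * x)            # illustrative test function

k_, l_, m_ = 1.0, 2.0, 3.0                     # orders with k < l < m
lhs = h_norm(u, l_)
rhs = h_norm(u, k_) ** ((m_ - l_) / (m_ - k_)) * h_norm(u, m_) ** ((l_ - k_) / (m_ - k_))
assert lhs <= rhs                              # interpolation inequality holds
```

The inequality is strict here because the spectrum of $u$ is spread over two distinct wavenumbers; equality would require the spectral weight to concentrate at a single frequency.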
Examples of interpolation inequalities:
Agmon's inequality
Gagliardo–Nirenberg interpolation inequality
Ladyzhenskaya's inequality
Landau–Kolmogorov inequality
Marcinkiewicz interpolation theorem
Nash's inequality
Riesz–Thorin theorem
Young's inequality for convolutions
**Good spanning tree**
Good spanning tree:
In the mathematical field of graph theory, a good spanning tree T of an embedded planar graph G is a rooted spanning tree of G whose non-tree edges satisfy the following conditions.
Good spanning tree:
There is no non-tree edge $(u, v)$ where $u$ and $v$ both lie on a path from the root of $T$ to a leaf. The edges incident to a vertex $v$ can be divided into three sets $X_v$, $Y_v$ and $Z_v$, where: $X_v$ is a set of non-tree edges that terminate in the red zone; $Y_v$ is a set of tree edges whose endpoints are children of $v$; and $Z_v$ is a set of non-tree edges that terminate in the green zone.
Formal definition:
Let $G_\phi$ be a plane graph. Let $T$ be a rooted spanning tree of $G_\phi$. Let $P(r,v) = (r = u_1), u_2, \ldots, (v = u_k)$ be the path in $T$ from the root $r$ to a vertex $v \neq r$. The path $P(r,v)$ divides the children of $u_i$ ($1 \leq i < k$), except $u_{i+1}$, into two groups: the left group $L$ and the right group $R$. A child $x$ of $u_i$ is in group $L$, and denoted by $u_i^L$, if the edge $(u_i, x)$ appears before the edge $(u_i, u_{i+1})$ in the clockwise ordering of the edges incident to $u_i$, when the ordering is started from the edge $(u_i, u_{i+1})$. Similarly, a child $x$ of $u_i$ is in group $R$, and denoted by $u_i^R$, if the edge $(u_i, x)$ appears after the edge $(u_i, u_{i+1})$ in the clockwise ordering of the edges incident to $u_i$, when the ordering is started from the edge $(u_i, u_{i+1})$. The tree $T$ is called a good spanning tree of $G_\phi$ if every vertex $v$ ($v \neq r$) of $G_\phi$ satisfies the following two conditions with respect to $P(r,v)$:

[Cond1] $G_\phi$ does not have a non-tree edge $(v, u_i)$, $i < k$; and

[Cond2] the edges of $G_\phi$ incident to the vertex $v$, excluding $(u_{k-1}, v)$, can be partitioned into three disjoint (possibly empty) sets $X_v$, $Y_v$ and $Z_v$ satisfying the following conditions (a)–(c):

(a) Each of $X_v$ and $Z_v$ is a set of consecutive non-tree edges, and $Y_v$ is a set of consecutive tree edges.
Formal definition:
(b) Edges of the sets $X_v$, $Y_v$ and $Z_v$ appear clockwise in this order from the edge $(u_{k-1}, v)$.

(c) For each edge $(v, v') \in X_v$, $v'$ is contained in $T_{u_i^L}$, $i < k$, and for each edge $(v, v') \in Z_v$, $v'$ is contained in $T_{u_i^R}$, $i < k$.
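Of the two conditions, [Cond1] depends only on the tree, not on the embedding: it forbids any non-tree edge from joining a vertex to a proper ancestor on its root-to-leaf path. A sketch of a checker for [Cond1] alone (the adjacency-dict representation and the tiny example graphs are illustrative assumptions; checking [Cond2] would additionally require the rotation system of the embedding):

```python
# Checker for [Cond1] only: no non-tree edge (v, u_i) may join v to a vertex
# u_i on the root-to-v path, i.e. to a proper ancestor of v in T.

def violates_cond1(tree_children, non_tree_edges, root):
    """Return True if some non-tree edge joins a vertex to one of its ancestors."""
    parent = {root: None}
    stack = [root]
    while stack:                                # DFS recording parents
        u = stack.pop()
        for w in tree_children.get(u, []):
            parent[w] = u
            stack.append(w)

    def ancestors(v):
        result = set()
        while parent[v] is not None:
            v = parent[v]
            result.add(v)
        return result

    return any(a in ancestors(b) or b in ancestors(a)
               for (a, b) in non_tree_edges)

# Path r - a - b with non-tree edge (b, r): r is an ancestor of b -> violated.
print(violates_cond1({"r": ["a"], "a": ["b"]}, [("b", "r")], "r"))   # True
# Star r with children a, b and non-tree edge (a, b): no ancestor pair -> OK.
print(violates_cond1({"r": ["a", "b"]}, [("a", "b")], "r"))          # False
```

This matches the intuition of the informal definition above: back edges to ancestors are exactly the edges [Cond1] rules out.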
Applications:
In monotone drawing of graphs, in 2-visibility representation of graphs.
Finding good spanning tree:
Every planar graph $G$ has an embedding $G_\phi$ such that $G_\phi$ contains a good spanning tree. A good spanning tree and a suitable embedding can be found from $G$ in linear time. Not all embeddings of $G$ contain a good spanning tree.
**Thalamocortical radiations**
Thalamocortical radiations:
In neuroanatomy, thalamocortical radiations, also known as thalamocortical fibres, are the efferent fibres that project from the thalamus to distinct areas of the cerebral cortex. They form fibre bundles that emerge from the lateral surface of the thalamus.
Structure:
Thalamocortical fibers (TC fibres) have been referred to as one of the two constituents of the isothalamus, the other being microneurons. Thalamocortical fibers have a bush- or tree-like appearance as they extend into the internal capsule and project to the layers of the cortex. The main thalamocortical fibers extend from different nuclei of the thalamus and project to the visual cortex, somatosensory (and associated sensori-motor) cortex, and the auditory cortex in the brain. Thalamocortical radiations also innervate gustatory and olfactory pathways, as well as pre-frontal motor areas. Visual input from the optic tract is processed by the lateral geniculate nucleus of the thalamus, auditory input in the medial geniculate nucleus, and somatosensory input in the ventral posterior nucleus of the thalamus. Thalamic nuclei project to cortical areas of distinct architectural organization and relay the processed information back to the area of original activity in the thalamus via corticothalamic fibers (CT fibres). The thalamic reticular nucleus (TRN) receives incoming signals via corticothalamic pathways and regulates activity within the thalamus accordingly. Cortico-thalamic feedback neurons are mostly found in layer VI of the cortex. Reciprocal CT projections to the thalamus are of a higher order than, and synapse with the TRN in much greater numbers than, thalamocortical projections to the cortex. This suggests that the cortex has a much greater role in top-down processing and regulation of thalamic activity than do the processes originating in thalamic interneurons. Large-scale frequency oscillations and electrical rhythms have also been shown to regulate TC activity for long periods of time, as is evident during the sleep cycle. Other evidence suggests CT modulation of TC rhythms can occur over different time scales, adding even more complexity to their function.
Relay cells:
Thalamic interneurons process sensory information and signal different regions of the thalamic nuclei. These nuclei extend to relay cells, which in turn innervate distinct areas of the cortex via thalamocortical fibers. TC relay cells project either specifically, directly to organized areas of the cortex, or nonspecifically, to large areas of cortex through the innervation of many interconnected collateral axons.
According to Jones (2001), there are two primary types of relay neurons in the thalamus of primates, core cells and matrix cells, each creating distinct pathways to various parts and layers of the cerebral cortex. Matrix cells of the thalamus, or calbindin-immunoreactive neurons (CIR neurons), are widely distributed and diffusely dispersed in each of the nuclei of the dorsal thalamus. In comparison, parvalbumin-immunoreactive neurons (PIR neurons) can be found only in the principal sensory and motor relay nuclei and in the pulvinar and intralaminar nuclei. The PIR neurons cluster together, creating "densely terminating afferent fibers…forming a core imposed on a diffuse background matrix of PIR cells" (Jones 2001). PIR cells tend to project upon the cerebral cortex and terminate in an organized topographic manner in specifically localized zones (in deep layer III and in the middle layer IV). In contrast, CIR cells have dispersed projections, wherein various adjacent cells connect to different, non-specific cortical areas. CIR axons seem to terminate primarily in the superficial layers of the cortex: layers I, II, and upper III.
Function:
Thalamocortical signaling is primarily excitatory, causing the activation of corresponding areas of the cortex, but is mainly regulated by inhibitory mechanisms. The specific excitatory signaling is based upon glutamatergic signaling, and is dependent on the nature of the sensory information being processed. Recurrent oscillations in thalamocortical circuits also provide large-scale regulatory feedback inputs to the thalamus via GABAergic neurons that synapse in the TRN.
In a study by Gibbs, Zhang, Shumate, and Coulter (1998), it was found that endogenously released zinc blocked GABA responses within the TC system, specifically by interrupting communication between the thalamus and the connected TRN. Computational neuroscientists are particularly interested in thalamocortical circuits because they represent a structure that is disproportionately larger and more complex in humans than in other mammals (when body size is taken into account), which may contribute to humans' special cognitive abilities. Evidence from one study (Arcelli et al. 1996) offers partial support for this claim by suggesting that thalamic GABAergic local-circuit neurons in mammalian brains relate more to processing ability than to sensorimotor ability, as they reflect an increasing complexity of local information processing in the thalamus. It has been proposed that core relay cells and matrix cells projecting from the dorsal thalamus allow for synchronization of cortical and thalamic cells during "high-frequency oscillations that underlie discrete conscious events", though this is a heavily debated area of research.
Projections:
The majority of thalamocortical fibers project to layer IV of the cortex, where sensory information is directed to other layers, in which the fibers either terminate or connect collaterally with other axons, depending on the type of projection and the type of initial activation. Activation of the thalamocortical neurons relies heavily on the direct and indirect effects of glutamate, which causes excitatory postsynaptic potentials (EPSPs) at terminal branches in the primary sensory cortices.
Somatosensory areas:
Primarily, thalamocortical somatosensory radiation from the VPL, VPM, and LP nuclei extends to the primary and secondary somatosensory areas, terminating in cortical layers of the lateral postcentral gyrus. S1 receives parallel thalamocortical radiations from the posterior medial nucleus and the VPN. Projections from the VPN to the postcentral gyrus account for the transfer of sensory information concerning touch and pain. Several studies indicate that parallel innervations of S1 and S2 via thalamocortical pathways result in the processing of nociceptive and non-nociceptive information. Non-specific projections to sensori-motor areas of the cortex may in part reflect the relationship between non-nociceptive processing and motor functions. Past research shows a link between S1 and M1, creating a thalamocortical sensori-motor circuit. When this circuit becomes disrupted, symptoms are produced similar to those that accompany multiple sclerosis, suggesting that thalamocortical rhythms are involved in regulating sensori-motor pathways in a highly specialized manner. TC-CT rhythms evident during sleep act to inhibit these thalamocortical fibers so as to maintain the tonic cycling of low-frequency waves and the subsequent suppression of motor activity.
Visual areas:
The lateral geniculate nucleus and the pulvinar nuclei project to and terminate in V1, carrying motor information from the brain stem as well as other sensory input from the optic tract. The visual cortex connects with other sensory areas, which allows for the integration of cognitive tasks such as selective and directed attention and pre-motor planning in relation to the processing of incoming visual stimuli. Models of the pulvinar projections to the visual cortex have been proposed by several imaging studies, though their mapping has been difficult because the pulvinar subdivisions are not conventionally organized and have been difficult to visualize using structural MRI. Evidence from several studies supports the idea that the pulvinar nuclei and superior colliculus receive descending projections from CT fibers, while TC fibers extending from the LGN carry visual information to the various areas of the visual cortex near the calcarine fissure.
Auditory areas:
Thalamocortical axons project primarily from the medial geniculate nucleus via the sublenticular region of the internal capsule, and terminate in an organized topographic manner in the transverse temporal gyri. MMGN radiations terminate in specific locations, while thalamocortical fibers from the VMGN terminate in nonspecific clusters of cells and form collateral connections to neighboring cells. Research done by staining the brains of macaque monkeys reveals projections from the ventral nucleus mainly terminating in layers IV and IIIB, with some nonspecific clusters of PIR cells terminating in layers I, II, IIIA, and VI. Fibers from the dorsal nuclei were found to project more directly to the primary auditory area, with most axons terminating in layer IIIB. The magnocellular nucleus projected a small number of PIR cells, with axons mainly terminating in layer I, though large regions of the middle cortical layers were innervated through collaterally connected CIR neurons. Past research suggests that the thalamocortical auditory pathway may be the only neural correlate that can explain a direct translation of frequency information to the cortex via specific pathways.
Motor areas:
The primary motor cortex receives terminating thalamocortical fibers from the VL nucleus of the thalamus. This is the primary pathway involved in the transfer of cerebellar input to the primary motor cortex. The VA projects widely across the inferior parietal and premotor cortex. Other non-specific thalamocortical projections, those that originate in the dorsal-medial nuclei of the thalamus, terminate in the prefrontal cortex and have subsequent projections to associative premotor areas via collateral connections. The cortico-basal ganglia-thalamo-cortical loop has traditionally been associated with reward learning, though some researchers have also noted a modulatory effect on thalamocortical network functioning, attributed to the inherent activation of the premotor areas connecting the VA nucleus with the cortex.
Clinical significance:
Absence seizures:
Thalamocortical radiations have been researched extensively in the past due to their relationship with attention, wakefulness, and arousal. Past research has shown how an increase in spike-and-wave activity within the TC network can disrupt normal rhythms involved with the sleep-wakefulness cycle, ultimately causing absence seizures and other forms of epileptic behavior. Burst firing within part of the TC network stimulates GABA receptors within the thalamus, causing moments of increased inhibition that lead to frequency spikes, which offset oscillation patterns. Another study, done on rats, suggests that during spike-and-wave seizures, thalamic rhythms are mediated by local thalamic connections, while the cortex controls the synchronization of these rhythms over extended periods of time. Thalamocortical dysrhythmia is a term associated with spontaneously recurring low-frequency spike-and-wave activity in the thalamus, which causes symptoms normally associated with impulse control disorders such as obsessive-compulsive disorder, Parkinson's disease, attention deficit hyperactivity disorder, and other forms of chronic psychosis. Other evidence has shown how reductions in the distribution of connections of nonspecific thalamocortical systems are heavily associated with loss of consciousness, as can be seen in individuals in a vegetative state or coma.
Prefrontal lobotomy:
The bilateral interruption or severing of the connection between the thalamocortical radiations and the medial and anterior thalamic nuclei constitutes a prefrontal lobotomy, which causes a drastic personality change and a subdued behavioral disposition without cortical injury.
Research:
Evolutionary theories of consciousness:
Theories of consciousness have been linked to thalamocortical rhythm oscillations in TC-CT pathway activity. One such theory, the dynamic core theory of conscious experience, proposes four main pillars in support of conscious awareness as a consequence of dorsal thalamic activity:
the results of cortical computations underlie consciousness
vegetative states and general anesthetics work primarily by disrupting normal thalamic functioning
the anatomy and physiology of the thalamus imply consciousness
neural synchronization accounts for the neural basis of consciousness
This area of research is still developing, and most current theories are either partial or incomplete.
**Touch DNA**
Touch DNA:
Touch DNA, also known as trace DNA, is a forensic method for analyzing DNA left at the scene of a crime. It is called "touch DNA" because it only requires very small samples, for example from the skin cells left on an object after it has been touched or casually handled, or from footprints. Touch DNA analysis requires only seven or eight cells from the outermost layer of human skin. The technique has been criticized for high rates of false positives due to contamination; for example, fingerprint brushes used by crime scene investigators can transfer trace amounts of skin cells from one surface to another, leading to inaccurate results. Because of the risk of false positives, it is more often used by the defense to help exclude a suspect than by the prosecution. The technique is very similar to Low Copy Number (LCN) DNA analysis, to the extent that court rulings have sometimes confused the two. However, in LCN DNA analysis, the DNA goes through additional cycles of PCR amplification.
Method:
Touch DNA relies on STR (short tandem repeat) analysis of cells collected from objects. Upon collection, the cells' DNA is extracted, and 13 genomic locations that vary among individuals are assessed to confirm suspects or exonerate the innocent.
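As a sketch of the comparison step described above, matching two STR profiles can be illustrated in a few lines of Python. The locus names follow the CODIS core set, but the allele repeat counts and profiles below are invented for illustration; real casework involves all 13 loci plus statistical weighting of partial profiles and mixtures.

```python
# Illustrative STR profile comparison. Profiles map locus name -> allele pair
# (repeat counts); all numbers here are made up for the example.

def match(evidence: dict, suspect: dict) -> bool:
    """A profile 'matches' only if every typed locus carries the same
    (unordered) pair of alleles in both profiles."""
    return all(
        frozenset(evidence[locus]) == frozenset(suspect[locus])
        for locus in evidence
    )

evidence = {"TH01": (6, 9.3), "FGA": (21, 24), "D8S1179": (12, 13)}
suspect_a = {"TH01": (9.3, 6), "FGA": (21, 24), "D8S1179": (12, 13)}
suspect_b = {"TH01": (7, 9.3), "FGA": (21, 24), "D8S1179": (12, 13)}

print(match(evidence, suspect_a))  # True: same alleles at every locus
print(match(evidence, suspect_b))  # False: TH01 differs, suspect excluded
```

A single mismatching locus is enough to exclude a suspect, which is why the method is often more useful to the defense than to the prosecution.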
Notable use cases:
The duct tape found with the remains of Caylee Anthony was tested for the presence of touch DNA during the criminal case against her mother, Casey Anthony. Richard Eikelenboom testified for the defense that none of Caylee's DNA was found on the tape.
Touch DNA was introduced in the third trial of David Camm by the defense. The DNA profile of another man, Charles Boney, was found on a number of objects at the crime scene, including the panties of Camm's wife Kim and a fingernail that is thought to have broken off during the struggle. The DNA evidence aided in his acquittal of the murders.
In 2008, the parents of JonBenet Ramsey were cleared as suspects in her 1996 murder following an analysis of touch DNA on her clothing. The family had long been the target of suspicion by the media, the police, and the public in the death of six-year-old JonBenet. The DNA also cleared John Mark Karr, a teacher who was arrested for the murder in 2006. The DNA was determined to belong to an unknown male. The case remains unsolved.
The prosecution used touch DNA to help build their case against James Biela for the murder of Brianna Denison. Touch DNA was collected from the doorknob of the residence where Brianna was staying when she was abducted. A DNA sample obtained from panties found near the body was later matched to the touch DNA and to Biela himself.
In December 2012, a homeless man named Lukis Anderson was charged with the murder of Raveesh Kumra, a Silicon Valley multimillionaire, based on DNA evidence. Anderson was drunk and nearly comatose, hospitalized under constant medical supervision, on the night of the murder. His DNA had been accidentally transferred to the crime scene by paramedics who had treated him earlier the same day and then responded to Kumra's residence hours later. The case was presented at the annual American Academy of Forensic Sciences meeting in Las Vegas as a definitive example of a DNA transfer implicating an innocent person.
In December 2015, police officer Daniel Holtzclaw was convicted of 18 felony counts stemming from allegations of sexual assault, based on touch DNA found on one of the alleged victims.
**Data corruption**
Data corruption:
Data corruption refers to errors in computer data that occur during writing, reading, storage, transmission, or processing, which introduce unintended changes to the original data. Computer, transmission, and storage systems use a number of measures to provide end-to-end data integrity, or lack of errors.
In general, when data corruption occurs, a file containing that data will produce unexpected results when accessed by the system or the related application. Results can range from a minor loss of data to a system crash. For example, if a document file is corrupted, a person trying to open it with a document editor may get an error message; the file might not open at all, or might open with some of the data corrupted (or, in some cases, completely corrupted, leaving the document unintelligible). The adjacent image is a corrupted image file in which most of the information has been lost.
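As a minimal illustration of this failure mode, the following sketch (Python standard library; the payload and corrupted offset are arbitrary) flips a single stored byte in a compressed stream, which is enough to make the "file" unreadable to the application:

```python
# Simulate a one-byte storage corruption in a compressed document.
import zlib

original = b"An important document. " * 100
stored = bytearray(zlib.compress(original))
stored[10] ^= 0xFF  # one byte silently flipped "on disk"

try:
    zlib.decompress(bytes(stored))
    print("file opened cleanly")
except zlib.error as exc:
    # the application surfaces an error instead of the document
    print("corrupted file:", exc)
```

Formats without internal integrity checks may instead open silently with garbled content, which is the more dangerous, undetected case.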
Some types of malware may intentionally corrupt files as part of their payloads, usually by overwriting them with inoperative or garbage code, while a non-malicious virus may also unintentionally corrupt files when it accesses them. If a virus or trojan with this payload method manages to alter files critical to the running of the computer's operating system software or physical hardware, the entire system may be rendered unusable.
Some programs can give a suggestion to repair the file automatically (after the error), and some programs cannot repair it. It depends on the level of corruption, and the built-in functionality of the application to handle the error. There are various causes of the corruption.
Overview:
There are two types of data corruption associated with computer systems: undetected and detected. Undetected data corruption, also known as silent data corruption, results in the most dangerous errors as there is no indication that the data is incorrect. Detected data corruption may be permanent with the loss of data, or may be temporary when some part of the system is able to detect and correct the error; there is no data corruption in the latter case.
Data corruption can occur at any level in a system, from the host to the storage medium. Modern systems attempt to detect corruption at many layers and then recover or correct it; this is almost always successful, but in rare cases the information arriving in the system's memory is corrupted, which can cause unpredictable results.
Data corruption during transmission has a variety of causes. Interruption of data transmission causes information loss. Environmental conditions can interfere with data transmission, especially when dealing with wireless transmission methods. Heavy clouds can block satellite transmissions. Wireless networks are susceptible to interference from devices such as microwave ovens.
Hardware and software failure are the two main causes for data loss. Background radiation, head crashes, and aging or wear of the storage device fall into the former category, while software failure typically occurs due to bugs in the code.
Cosmic rays cause most soft errors in DRAM.
Silent:
Some errors go unnoticed, without being detected by the disk firmware or the host operating system; these errors are known as silent data corruption. There are many error sources beyond the disk storage subsystem itself. For instance, cables might be slightly loose, the power supply might be unreliable, external vibrations such as a loud sound can interfere, the network might introduce undetected corruption, cosmic radiation can cause soft memory errors, and so on. In 39,000 storage systems that were analyzed, firmware bugs accounted for 5–10% of storage failures. All in all, the error rates observed in a CERN study on silent corruption are far higher than one in every 10^16 bits. The online retailer Amazon.com has acknowledged similar high data corruption rates in their systems. In 2021, faulty processor cores were identified as an additional cause in publications by Google and Facebook; cores were found to be faulty at a rate of several per thousands of cores. One problem is that hard disk drive capacities have increased substantially, but their error rates remain unchanged. The data corruption rate has always been roughly constant in time, meaning that modern disks are not much safer than old disks. In old disks the probability of data corruption was very small because they stored tiny amounts of data. In modern disks the probability is much larger because they store much more data, whilst not being safer. Thus, silent data corruption was not a serious concern while storage devices remained relatively small and slow; with the advent of larger drives and very fast RAID setups, users are capable of transferring 10^16 bits in a reasonably short time, easily reaching the data corruption thresholds. As an example, ZFS creator Jeff Bonwick stated that the fast database at Greenplum, a database software company specializing in large-scale data warehousing and analytics, faces silent corruption every 15 minutes.
As another example, a real-life study performed by NetApp on more than 1.5 million HDDs over 41 months found more than 400,000 silent data corruptions, of which more than 30,000 were not detected by the hardware RAID controller (they were only found during scrubbing). Another study, performed by CERN over six months and involving about 97 petabytes of data, found that about 128 megabytes of data became permanently and silently corrupted somewhere in the pathway from network to disk. Silent data corruption may result in cascading failures, in which the system may run for a period of time with an undetected initial error causing increasingly more problems until the corruption is ultimately detected. For example, a failure affecting file system metadata can result in multiple files being partially damaged or made completely inaccessible as the file system is used in its corrupted state.
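A quick back-of-the-envelope check, using the figures quoted above (the 1 GB/s array in the last step is a hypothetical, chosen only to make the transfer-rate point concrete), shows why these rates matter at scale:

```python
# NetApp: >400,000 silent corruptions on 1.5 million drives over 41 months
events_per_drive_per_year = 400_000 / 1_500_000 / (41 / 12)

# CERN: ~128 MB permanently corrupted out of ~97 PB handled
byte_error_rate = 128e6 / 97e15  # fraction of bytes corrupted

# Time for a hypothetical 1 GB/s array to move 10^16 bits
days = (1e16 / 8) / 1e9 / 86400

print(f"~{events_per_drive_per_year:.3f} silent events per drive per year")
print(f"~1 corrupted byte per {1 / byte_error_rate:.1e} bytes at CERN")
print(f"10^16 bits pass a 1 GB/s array in ~{days:.1f} days")
```

At that throughput a busy array crosses the 10^16-bit threshold in roughly two weeks, which is why silent corruption stopped being a negligible concern.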
Countermeasures:
When data corruption behaves as a Poisson process, where each bit of data has an independently low probability of being changed, data corruption can generally be detected by the use of checksums, and can often be corrected by the use of error-correcting codes (ECC).
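Both mechanisms can be sketched minimally (Python standard library; CRC32 stands in for the checksum, and a trivial three-copy repetition code stands in for real ECC schemes such as Hamming or Reed–Solomon codes):

```python
# Detection via checksum, correction via a toy repetition code.
import zlib

payload = bytearray(b"hello, disk")
checksum = zlib.crc32(payload)

payload[0] ^= 0x01                      # a single bit flips in storage
assert zlib.crc32(payload) != checksum  # detection: checksum mismatch

# Correction: store three copies, recover by majority vote per byte
copies = [bytearray(b"hello, disk") for _ in range(3)]
copies[1][0] ^= 0x01                    # one copy corrupted
recovered = bytes(
    max(set(col), key=col.count) for col in map(list, zip(*copies))
)
print(recovered)  # b'hello, disk'
```

A checksum alone can only say "something changed"; a code with redundancy (here, two extra copies) can also say what the original value was.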
If an uncorrectable data corruption is detected, procedures such as automatic retransmission or restoration from backups can be applied. Certain levels of RAID disk arrays have the ability to store and evaluate parity bits for data across a set of hard disks and can reconstruct corrupted data upon the failure of a single or multiple disks, depending on the level of RAID implemented. Some CPU architectures employ various transparent checks to detect and mitigate data corruption in CPU caches, CPU buffers, and instruction pipelines; an example is Intel Instruction Replay technology, which is available on Intel Itanium processors. Many errors are detected and corrected by hard disk drives using the ECC codes which are stored on disk for each sector. If the disk drive detects multiple read errors on a sector, it may make a copy of the failing sector on another part of the disk by remapping the failed sector to a spare sector without the involvement of the operating system (though this may be delayed until the next write to the sector). This "silent correction" can be monitored using S.M.A.R.T. and tools, available for most operating systems, that automatically check the disk drive for impending failures by watching for deteriorating SMART parameters.
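The parity idea behind RAID reconstruction can be shown briefly (an illustrative Python sketch; real RAID 5 stripes much larger blocks and rotates the parity block across disks):

```python
# XOR parity across data blocks: any single lost block can be rebuilt
# from the surviving blocks plus the parity block.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"1234"
parity = xor_blocks(d0, d1, d2)      # written to the parity disk

# the disk holding d1 fails; rebuild it from the survivors plus parity
rebuilt = xor_blocks(d0, d2, parity)
print(rebuilt == d1)  # True
```

This works because XOR is its own inverse: XOR-ing the parity with all remaining blocks cancels them out, leaving exactly the missing block.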
Some file systems, such as Btrfs, HAMMER, ReFS, and ZFS, use internal data and metadata checksumming to detect silent data corruption. In addition, if a corruption is detected and the file system uses integrated RAID mechanisms that provide data redundancy, such file systems can also reconstruct corrupted data in a transparent way. This approach allows improved data integrity protection covering the entire data path, which is usually known as end-to-end data protection, in contrast to other data integrity approaches that do not span different layers in the storage stack and allow data corruption to occur while the data passes boundaries between the different layers. Data scrubbing is another method to reduce the likelihood of data corruption, as disk errors are caught and recovered from before multiple errors accumulate and overwhelm the number of parity bits. Instead of parity being checked on each read, the parity is checked during a regular scan of the disk, often done as a low-priority background process. The "data scrubbing" operation activates a parity check. If a user simply runs a normal program that reads data from the disk, then the parity would not be checked unless parity-check-on-read was both supported and enabled on the disk subsystem.
If appropriate mechanisms are employed to detect and remedy data corruption, data integrity can be maintained. This is particularly important in commercial applications (e.g. banking), where an undetected error could either corrupt a database index or change data in a way that drastically affects an account balance, and in the use of encrypted or compressed data, where a small error can make an extensive dataset unusable.
**Dess–Martin periodinane**
Dess–Martin periodinane:
Dess–Martin periodinane (DMP) is a chemical reagent used in the Dess–Martin oxidation, oxidizing primary alcohols to aldehydes and secondary alcohols to ketones. This periodinane has several advantages over chromium- and DMSO-based oxidants, including milder conditions (room temperature, neutral pH), shorter reaction times, higher yields, simplified workups, high chemoselectivity, tolerance of sensitive functional groups, and a long shelf life. However, use on an industrial scale is made difficult by its cost and its potentially explosive nature. It is named after the American chemists Daniel Benjamin Dess and James Cullen Martin, who developed the reagent in 1983. It is based on IBX but, owing to the acetate groups attached to the central iodine atom, DMP is much more reactive than IBX and is much more soluble in organic solvents.
Preparation:
The most convenient synthesis of IBX is to treat 2-iodobenzoic acid with Oxone in water at elevated temperature for 3 hours. IBX is then acylated using Ireland and Liu's modifications of the original procedure. These modifications allow for higher yields and a simplified work-up; the resulting solid can be obtained by filtration and washing with ether. Ireland and Liu used a catalytic amount of tosylic acid, which allowed the reaction to complete in less than 2 hours (compared with the classic synthesis, which takes 24 hours) and in yields exceeding 90%.
The classic method presented by R. K. Boeckman and J. J. Mullins involved heating a solution of potassium bromate, sulfuric acid, and 2-iodobenzoic acid to afford IBX (1-hydroxy-1,2-benziodoxol-3(1H)-one 1-oxide, 2-iodoxybenzoic acid). IBX was then acylated using acetic acid and acetic anhydride.
Structure:
Dess–Martin periodinane has square pyramidal geometry, with four heteroatoms in basal positions and one apical phenyl group.
Oxidation mechanism:
Dess–Martin periodinane is mainly used as an oxidant for complex, sensitive and multifunctional alcohols. One of the reasons for its effectiveness is its high selectivity towards complexation of the hydroxyl group, which allows alcohols to rapidly perform ligand exchange; the first step in the oxidation reaction.
Proton NMR has indicated that using one equivalent of alcohol forms the intermediate diacetoxyalkoxyperiodinane. The acetate then acts as a base to deprotonate the α-H from the alcohol to afford the carbonyl compound, iodinane, and acetic acid.
When a diol or more than one equivalent of alcohol is used, acetoxydialkoxyperiodinane is formed instead. Due to the labile nature of this particular periodinane, oxidation occurs much faster.
Schreiber and coworkers have shown that water increases the rate of the oxidation reaction. Dess and Martin had originally observed that the oxidation of ethanol was increased when there was an extra equivalent of ethanol. It is believed that the rate of dissociation of the final acetate ligand from the iodine is increased, because of the electron-donating ability of the hydroxyl group (thus weakening the I-OAc bond).
Chemoselectivity:
Using the standard Dess–Martin periodinane conditions, alcohols can be oxidized to aldehydes/ketones without affecting furan rings, sulfides, vinyl ethers, and secondary amides. Allylic alcohols, which are typically difficult to convert to their respective carbonyls using the usual oxidants, are easily oxidized using DMP. Myers and coworkers determined that DMP could oxidize N-protected amino alcohols without epimerization (unlike most other oxidants, including the Swern oxidation). These protected amino alcohols can be very important in the pharmaceutical industry. Benzylic and allylic alcohols react faster than saturated alcohols, and DMP oxidizes aldoximes and ketoximes to their respective aldehydes and ketones faster than it oxidizes a primary, secondary, or benzylic alcohol to its respective carbonyl. One example of the Dess–Martin oxidation involves transforming a sensitive α,β-unsaturated alcohol to its corresponding aldehyde. This moiety has been found in several natural products and, due to its high functionality, could be a valuable synthetic building block in organic synthesis. Thongsornkleeb and Danheiser oxidized this sensitive alcohol by employing the Dess–Martin oxidation and altering the work-up procedure (diluting with pentanes, washing with poly(4-vinylpyridine) to remove the acetic acid generated during the reaction, filtering, and concentrating via distillation).
t-Butyl DMP:
Difluoro and monofluoro alcohols are more difficult to oxidize. The Swern oxidation has been used, but a large excess of the oxidant had to be employed, and in some cases it did not give reproducible results. Linderman and Graves found that DMP was successful in most cases but could not tolerate the presence of nucleophilic functional groups in the alcohol, as these reacted with DMP by displacing acetate. Using the compound shown below produced the desired carbonyls in high yields, as the addition of the tert-butoxy group, due to its steric bulk, minimizes these side reactions.
**Hypalon**
Hypalon:
Hypalon is a chlorosulfonated polyethylene (CSPE) synthetic rubber (CSM) noted for its resistance to chemicals, temperature extremes, and ultraviolet light. It was a product of DuPont Performance Elastomers, a subsidiary of DuPont. The material marketed as Hypalon in the marine industry today is a remarketed version of the original Hypalon with an additional layer of neoprene (CR), so the new chemical formulation is CSM/CR.
Chemical structure:
Polyethylene is treated with a mixture of chlorine and sulfur dioxide under UV radiation. The product contains 20–40% chlorine. The polymer also contains a few percent of chlorosulfonyl (ClSO2-) groups. These reactive groups allow for vulcanization, which strongly affects the physical durability of the products. An estimated 110,000 tons per year were produced in 1991.
Discontinuance:
DuPont Performance Elastomers announced on May 7, 2009, that it intended to close its manufacturing plant in Beaumont, Texas, by June 30, 2009. This was DPE's sole plant for CSM materials. The company was therefore exiting the business for Hypalon and its related product, Acsium. The plant closure was delayed until April 20, 2010, in response to customer requests.
**Ground rules**
Ground rules:
Ground rules are rules applying to the field, objects on and near it, and special situations relating to them, in the game of baseball. Major League Baseball has defined a set of "universal ground rules" that apply to all MLB ballparks; individual ballparks have the latitude to set ground rules above and beyond the universal ground rules, as long as they do not directly contradict each other. Additionally, a set of universal ground rules exists for the six MLB stadiums with retractable roofs, with the individual ballparks able to set additional rules.
Unlike the well-defined playing field of most other sports, the playing area of a baseball field extends to an outfield fence in fair territory and the stadium seating in foul territory. The unique design of each ballpark, including fences, dugouts, bullpens, railings, stadium domes, photographer's wells and TV camera booths, requires that rules be defined to handle situations in which these objects may interact or interfere with the ball in play or with the players, and adaptable by ballpark within the universal rules.
The term is familiar to most fans through the ground rule double, a batted ball that bounces fair, then over the outfield fence in fair or foul territory for a two-base hit.
MLB:
Universal:
Ball on the top step (lip) of the dugout is in play.
No equipment is permitted to be left on the top step (lip) of the dugout. If a ball hits equipment left on the top step it is dead.
A player is not permitted to step or go into a dugout to make a catch.
A player is permitted to reach into a dugout to make a catch. If a player makes a catch outside the dugout and the player's momentum carries him into the dugout, then the catch is allowed and the ball remains alive as long as the player does not fall while in the dugout.
A batted ball in flight can be caught between or under railings and around screens.
A catch may be made on the field tarp.
A batted or thrown ball lodging in the rotating signage behind home plate or along the first-base or third-base stands is out of play.
A batted or thrown ball resting on top of the rotating signage behind home plate or along the first-base or third-base stands is in play.
The facings of railings surrounding the dugout and photographers areas are in play.
Any cameras or microphones permanently attached on railings are treated as part of the railings and are in play.
Any recessed railings or poles in the dugout and photographers' areas are out of play and should be marked in red to indicate that they are out of play.
Robotic cameras attached to the facing of the backstop screen are considered part of the screen.
A batted ball striking the backstop camera is considered a dead ball.
A thrown ball striking the backstop camera is considered in play.
A ball striking the guy wires that support the backstop is a dead ball.
A ball lodging behind or under canvas on field tarp is out of play.
A ball striking the field tarp and rebounding onto the playing field is in play.
No chairs can be brought out of the dugout or bullpen and onto the playing field.
All yellow lines are in play.
A live ball striking the backstop screen or protective netting located on the field boundaries along the first and third base lines is in play.
A ball striking protective netting located behind out-of-play areas such as dugouts and photographer areas is dead even if it rebounds onto the field.
Where a roof is present, a batted ball that becomes lodged in the roof above fair territory is dead and the runners including batter-runner are awarded two bases. Ballpark-specific ground rules may supersede this rule.
On outfield walls composed of sections with different heights (e.g., Fenway Park, Oracle Park), a batted ball in flight that strikes a taller section of the wall in fair territory at a point higher than the top of the adjacent shorter wall, then bounds out of play over the shorter wall, is a home run.
Conversely, a batted ball in flight that strikes the shorter wall in fair territory then bounds out of play over the adjacent taller wall is a dead ball and the runners including batter-runner are awarded two bases.
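The two-height wall ruling above reduces to comparing the strike point against the top of the wall section the ball ultimately leaves the field over. A minimal sketch (function name and heights are illustrative, not from any rulebook; the ball is assumed batted, in flight, striking fair territory, and bounding out of play):

```python
def wall_ruling(strike_height_ft: float, exit_wall_height_ft: float) -> str:
    """Ruling for a batted ball in flight that strikes one section of a
    multi-height outfield wall in fair territory and then bounds out of
    play over an adjacent section (a sketch of the universal rule)."""
    if strike_height_ft > exit_wall_height_ft:
        # Struck the wall above the top of the section it exits over: home run.
        return "home run"
    # Otherwise the ball is dead; runners, including the batter-runner,
    # are awarded two bases.
    return "dead ball, two bases"

print(wall_ruling(20.0, 8.0))   # strikes taller section, exits over shorter wall
print(wall_ruling(5.0, 12.0))   # strikes shorter wall, exits over taller wall
```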
Individual ballpark: Individual ballpark ground rules vary greatly from ballpark to ballpark. For the 2017 season, Citi Field, Kauffman Stadium, Target Field, Yankee Stadium, and Guaranteed Rate Field were the only MLB ballparks without individual ground rules beyond the universal set. Examples of ground rules that have been or are still in force in major league ballparks include: Fenway Park (Boston Red Sox) – A fly ball that strikes the top of the ladder on the Green Monster and then bounces out of play is two (2) bases.
Minute Maid Park (Houston Astros) – A batted ball striking the flagpole in center field and bouncing onto the field is in play; a ball striking the flagpole while in flight and leaving the playing field is a home run. The flagpole and the hill that it was on were removed following the 2016 season, and the rule has been removed from the specific ballpark rules list.
Tropicana Field (Tampa Bay Rays) – A batted ball that hits either of the two lower catwalks (C Ring and D Ring) between the yellow foul poles is ruled a home run. The two upper catwalks (the A Ring and B Ring) are considered in play; a ball that touches either can drop for a hit or be caught for an out.
Wrigley Field (Chicago Cubs) – A fair ball becoming lodged in the ivy on the outfield fence awards two bases to the batter and all runners; if the ball falls out of the ivy, it remains in play.
Citi Field (New York Mets) – Any fair ball in flight hitting the overhanging Pepsi-Cola sign is ruled an automatic home run. The sign has since been changed to "Coca-Cola" following new sponsorship in 2016, and no longer overhangs. The rule has since been removed from the specific ballpark rules.
Movement of retractable roofs: These ground rules apply only at ballparks featuring retractable roofs. As of the 2021 season, these are: Rogers Centre, Chase Field, T-Mobile Park, American Family Field, Minute Maid Park, and LoanDepot Park. Rules governing batted balls striking the roof are defined in each individual ballpark's ground rules.
Universal: The decision as to whether a game begins with the roof open or closed rests solely with the home club.
If the game begins with the roof open: It shall be closed only in the event of impending rain or other adverse weather conditions. The decision to close the roof shall be made by the home club, after consultation with the Umpire Crew Chief.
The Umpire Crew Chief shall notify the visiting club, which may challenge the closing of the roof if it feels that a competitive imbalance will arise. In such an event, the Umpire Crew Chief shall make a final decision based on the merits of the challenge.
Ballpark-specific: All ballpark-specific retractable roof ground rules concern opening of the roof after a game has started.
If the game starts with the roof closed: American Family Field, Chase Field, Minute Maid Park, and T-Mobile Park permit opening it during the game if weather conditions warrant, as long as the following procedure is followed: The roof may be opened only once during the game.
The Umpire Crew Chief will be notified at the beginning of the inning that the roof will be opened at the inning's end.
The Umpire Crew Chief shall notify the visiting club, which may challenge the opening of the roof. In such an event, the Umpire Crew Chief shall make a final decision based on the merits of the challenge.
The opening of the roof shall only begin between innings.
Chase Field requires that the roof be opened in two 2-minute-and-15-second intervals, at the conclusion of one inning and the conclusion of the following inning. If the game starts with the roof open and it is closed during the game: American Family Field permits re-opening during the game as long as the above procedure is followed.
At Chase Field, Minute Maid Park, T-Mobile Park and Rogers Centre, once the roof is closed during a game, it shall not be reopened. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
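The roof procedures above amount to a small amount of per-game state. A toy sketch of those rules (class and method names invented here; park names as given in the text):

```python
# The one park the text says permits re-opening after an in-game closure.
REOPEN_ALLOWED = {"American Family Field"}

class RoofState:
    """Tracks whether the roof may still be opened under the rules above."""

    def __init__(self, park: str, starts_open: bool):
        self.park = park
        self.is_open = starts_open
        self.opened_during_game = False
        self.closed_during_game = False

    def close(self):
        # Closing (for impending weather) is decided by the home club
        # after consulting the umpire crew chief.
        if self.is_open:
            self.is_open = False
            self.closed_during_game = True

    def may_open(self) -> bool:
        if self.opened_during_game:
            return False  # the roof may be opened only once during the game
        if self.closed_during_game and self.park not in REOPEN_ALLOWED:
            return False  # once closed in-game, it stays closed at these parks
        return True

    def open(self):
        if self.may_open():
            self.is_open = True
            self.opened_during_game = True
```

For example, Rogers Centre closing mid-game locks the roof shut for the rest of the game, while American Family Field may re-open it once.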
**Posterior vein of the left ventricle**
Posterior vein of the left ventricle:
The posterior vein of the left ventricle runs on the diaphragmatic surface of the left ventricle to the coronary sinus, but may end in the great cardiac vein. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CXBN-2**
CXBN-2:
Cosmic X-ray Background Nanosatellite-2 (CXBN-2 or CXBN 2) was a satellite and mission developed by Morehead State University to follow up on the CXBN mission launched in 2012. An improved version of the previous spacecraft, it increased the precision of measurements of the cosmic X-ray background in the 30–50 keV range and helped improve understanding of the early universe.
Objectives:
The CXBN-2 mission was created in order to map the extragalactic cosmic X-ray background with the use of a Cadmium Zinc Telluride (CZT) detector. Compared to its predecessor, its CZT detector had twice the detection area. It allowed for a new, high-precision measurement of the X-ray background. It helped improve understanding of the origin and evolution of the universe through research on high-energy background radiation. It collected 3 million seconds of data throughout its lifetime.
Design:
The CXBN-2 satellite was a Sun-pointing, spin-stabilized 2U CubeSat with four solar panels providing 15 W of power. It had a two-wall structure and braces to reinforce its body. In its compact form, it occupied a volume of 10 × 10 × 20 cm.
It had two transceivers, in the ultra-high-frequency (UHF) and S bands, for radio communication.
Instruments:
CXBN-2 contained a Cadmium Zinc Telluride Array as its X-ray detector and a magnetometer on board.
Launch and mission:
Cygnus OA-7 launched on April 18, 2017, as the eighth flight of the Cygnus Orbital ATK uncrewed orbital spacecraft and its seventh flight to the International Space Station (ISS) under NASA's Commercial Resupply Services. On April 22, 2017, the Cygnus spacecraft docked with the ISS. On May 16, 2017, the CXBN-2 satellite was deployed from the ISS via the Nanoracks CubeSat Deployer along with several other CubeSats. On March 1, 2019, it re-entered the Earth's atmosphere. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
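From the deployment and re-entry dates above and the stated 3 million seconds of data, one can estimate the mission's observing duty cycle. A back-of-the-envelope check (the duty-cycle framing is an illustration, not a figure from the source):

```python
from datetime import date

deploy = date(2017, 5, 16)    # deployed from the ISS
reentry = date(2019, 3, 1)    # atmospheric re-entry

mission_days = (reentry - deploy).days
mission_seconds = mission_days * 86_400
data_seconds = 3_000_000      # "3 million seconds of data" per the text

duty_cycle = data_seconds / mission_seconds
print(f"{mission_days} days on orbit; observing duty cycle ~ {duty_cycle:.1%}")
```

That works out to 654 days on orbit and a duty cycle of roughly 5%.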
**Casting on (knitting)**
Casting on (knitting):
In knitting, casting on is a family of techniques for adding new stitches that do not depend on earlier stitches, i.e., having an independent lower edge. In principle, it is the opposite of binding off, but the techniques involved are generally unrelated.
The cast-on can also be decorated with various stitch patterns, especially picots. The cast-on stitches can also be twisted clockwise or counterclockwise as they are added to the needle; this is commonly done for the single cast-on described below to give it a neater, more uniform look.
Casting on is sometimes done with doubled-up needles or a needle of larger size than for the main pattern; the extra bit of yarn in each stitch makes the edge less tight and gives it more flexibility.
When casting on at the beginning, one end of the yarn is usually secured to the knitting needle by knotting it, typically with a slip knot. This knot is unnecessary when casting on in the middle of the fabric (e.g., when making the upper edge of a buttonhole) since the yarn is already secured to the fabric. The original slip knot can be pulled out after a few rows have been knitted without damaging the knitted fabric. It is also possible to cast on using a simple twisted loop.
Once one loop has been secured around the needle, or if it is already secured to the fabric, there are several different methods for adding others.
Methods for casting on in handknitting:
Knit-on cast-on: Perhaps the most straightforward method, in which a new loop is drawn through the previous loop and then added to the needle. However, this method is often criticized for giving an untidy edge. It can also be done in a purl version or even a rib version.
Cable cast-on: A closely related technique, in which a new loop is drawn through the space between the two previous loops and then added to the needle. This edge is firm and has a neat, corded look, although it may be too bulky with thick yarns.
Single cast-on: An even simpler method, also called the simple cast-on or "backward loop cast-on," which involves adding a series of half hitches to the needle. This creates a very stretchy, flexible edge. It is a common approach for adding several stitches to the edge in the middle of a knitted fabric, but it is difficult to knit from and make even. A variation is the twisted simple cast-on, where one twists the new loop around the thumb, with the yarn going around the back of the thumb to the front as in the simple cast-on, but picking up the new loop from the back side of the loop. This is tighter and neater but has less elasticity.
Long tail cast-on: A common method, in which all the loops are made with one yarn, while the other end (the dangling end from the original slip knot) is used to secure the base of each loop. The loops will appear like knit stitches. This method is also called the "knit half-hitch cast-on". Although popular, this method requires that the knitter estimate the length of the dangling yarn before the stitches are cast on; if the dangling yarn is too short, the knitter will run out of yarn with which to secure the stitches before the full number have been cast on. In that case, the knitter will have to pull everything out, re-position the slip knot to give a longer tail, and begin anew. Despite this shortcoming, it is a good all-around method for casting on.
Another variation of the long tail method is to use two different yarns, one being the main yarn used in the project, and the second a piece of contrasting waste yarn. One attaches the two with a slip knot, and then, using the waste or contrast yarn as the long tail, starts the row. This is useful for picking up stitches on the cast-on edge in order to knit in the opposite direction. One can also use it decoratively, making the contrast or waste yarn part of the pattern design. To execute it, start by figuring out how much yarn is required for the cast-on row, and pull out that amount of yarn. With that, put a slip knot on the needle (this is not strictly necessary, since the first cast-on stitch will create a slip knot in the process, but it is generally more secure to start with one). Hold the needle in the right hand and the yarn in the left, with the long tail pulled around the thumb and hanging in front, and the yarn from the ball around the first or second finger, with the ball tail heading toward the back. Then take the needle under the front of the long tail, picking up a half hitch, then back to the yarn over the finger from the top side, pulling the loop through the half hitch formed. This cast-on can also be done in purl and twisted-stitch versions.
Tubular cast-on: Involves knitting onto a cast-on row knitted in a contrasting yarn with half as many stitches. Each knit stitch into the contrasting stitches is followed by a yarn-over to double the number of stitches. After several rows, a tuck is formed by knitting together the first and third rows, forming a tube through which elastic can be pulled. A neat edge, nicely suited for 1x1 ribbing.
Provisional cast-on: Also known as an "invisible cast-on," since the waste yarn used can be pulled out later to allow the knitter to continue the knitting in the opposite direction.
This cast-on is also the best method for double-knit fabrics, since the knitting has no boundary; the knitting is continuous from one side of the fabric to the other. Holding the ends of a waste yarn and the working yarn, make an overhand knot. Place a needle held in the left hand between the two yarns, with the knot below, the waste yarn held underneath and parallel to the needle out to the right, and the working yarn up and in front of the needle. Bring the working yarn down behind the needle and in front of the waste yarn; up behind the waste yarn and over and up, then down in front of the needle; down behind the waste yarn; then up in front of the needle. Repeat for each two stitches. When the desired number of stitches is reached, loosely fasten the waste yarn and work as usual with the working yarn. To take out the provisional cast-on, unfasten the end of the waste yarn and carefully pull it out, picking up the now-loose loops on a needle and working from the opposite direction of previous work.
Two-needle cast-on: Similar to a long tail cast-on, but using two needles held together. The half-hitch part is formed around the lower needle, while the loop is only wrapped around the upper needle. The second needle is removed before the first round.
Braided cast-on: Frequently used in mitten edges ...
Chain cast-on: Uses a crochet hook or two knitting needles. To execute, hold a knitting needle in the left hand and a crochet hook or second knitting needle in the right hand. Make a slip knot in the yarn and put it on the crochet hook or right-hand needle. Wrap the yarn from the back of the left-hand needle over to the front, over the crochet hook or right needle, and pass the slip-knot loop over the wrap, leaving the new loop on the crochet hook or right needle. Repeat, wrapping the yarn over the left-hand needle before passing it over the crochet hook or right needle to make a new loop, until there is one less stitch than required.
Place the last loop on the left-hand needle as the first stitch that will be worked. This cast-on creates an edge that mimics a standard bind-off edge.
Crochet chain cast-on: For this, work a simple crochet chain. Once the chain is large enough to equal the number of stitches needed, plus a few extra, turn the chain over so that the bumps that formed as the yarn was pulled through the hole are visible. Put the knitting needle through those bumps and knit through them as normal. This produces the same edge as knitting on.
Provisional chain cast-on: Simply the crochet chain cast-on using waste yarn; this is also an "invisible cast-on" that can be pulled out later to allow knitting in the opposite direction. Work a crochet chain in waste yarn, loosely fastening the tail end. With the working yarn, pick up the chain bumps, as for the crochet chain cast-on, to create the working stitches. To take out the cast-on, simply pull out the tail of the waste yarn at the fastened end and "zip off" the crochet chain. Pick up the now-loose loops and work from the opposite direction of previous work. This is used in toe-up socks and in shawls or scarves with directional patterns that need to start from a center edge.
Turkish cast-on: Used for circular beginnings, often for the toes of socks made toe-up. It is invisible (as with the provisional cast-on). Begin with two circular needles held one above the other from above (the upper called A, the lower called B). Place a slip knot on B, and wrap the yarn up behind A. Then begin wrapping it around both needles, down in front and up in back, until the number of wraps equals half the number of stitches needed. Slide B along, through the wraps, until they sit on the cable and the ends dangle on either side. Then bring the other, loose end of A up, and knit into the wraps still on A. Once all those wraps are knitted, pull A until the wraps are on the cable, and pull B so that the tip of the needle holds the wraps, pointed to the end with the working yarn.
Bring up the other end of B and knit across the wraps again. This completes one round. From here, continue to work around the stitches on the two circular needles, increasing as desired, or switch to double-pointed needles or a single circular needle for the Magic Loop method of knitting circularly.
Magic cast-on: Developed by Judy Becker; also known as the "magic toe-up cast-on," due to its popular use in beginning toe-up sock construction. Instructions were first published in an issue of the online knitting magazine knitty.com.
Circular cast-on: Popularized by Elizabeth Zimmermann as "Emily Ocker's Circular Beginning."
Old Norwegian cast-on: Also known as the "German twisted cast-on" and similar to the long-tail cast-on, but uses a longer tail due to a second twist in the thumb loop, giving the cast-on edge more stretch than the long-tail cast-on. Leaving a tail of the necessary length, make a slip knot and place it on a needle held in the right hand. The slip knot counts as the first stitch. Place the thumb and index finger of the left hand between the yarn ends so that the strand connected to the ball is around the index finger and the tail end is around the thumb. Secure the yarn ends with the other fingers and hold the palm upwards, making a V of yarn. Bring the needle in front of the thumb, under both yarns around the thumb, down into the center of the thumb loop, back forward, and over the top of the yarn around the index finger. Use the needle to catch this yarn, then bring the needle back down through the thumb loop, turning the thumb slightly to make room for the needle to pass through. Drop the loop off the thumb and place the thumb back in the V configuration while tightening up the resulting stitch on the needle. Instructions published on knittingdaily.com. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
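The long-tail method's main pitfall, as noted above, is underestimating the tail. A small calculator sketch using a common knitters' rule of thumb (roughly three times the knitted width of yarn per stitch, plus a margin; the rule and the padding numbers are conventional folklore, not from the text):

```python
def long_tail_cm(stitches: int, stitches_per_cm: float,
                 slack: float = 1.2, margin_cm: float = 15.0) -> float:
    """Estimate the dangling-tail length needed for a long-tail cast-on.

    Assumes each cast-on stitch consumes about three times its knitted
    width of yarn; 'slack' and 'margin_cm' pad the estimate so the tail
    does not run out before the last stitch.
    """
    stitch_width_cm = 1.0 / stitches_per_cm
    return stitches * 3.0 * stitch_width_cm * slack + margin_cm

# e.g. 100 stitches at a gauge of 2 stitches/cm
print(f"{long_tail_cm(100, 2.0):.0f} cm")
```

Over-estimating costs only a little wasted yarn; under-estimating means pulling everything out and starting over.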
**Urticarial erythema multiforme**
Urticarial erythema multiforme:
Urticarial erythema multiforme is an unusual reaction virtually always associated with antibiotic ingestion, characterized by skin lesions consisting of urticarial papules and plaques, some of which clear centrally to form annular lesions, but with no true urticarial lesions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Middle meningeal artery**
Middle meningeal artery:
The middle meningeal artery (Latin: arteria meningea media) is typically the third branch of the first portion of the maxillary artery. After branching off the maxillary artery in the infratemporal fossa, it runs through the foramen spinosum to supply the dura mater (the outer meningeal layer) and the calvaria. The middle meningeal artery is the largest of the three (paired) arteries that supply the meninges, the others being the anterior meningeal artery and the posterior meningeal artery.
The anterior branch of the middle meningeal artery runs beneath the pterion. It is vulnerable to injury at this point, where the skull is thin. Rupture of the artery may give rise to an epidural hematoma. In the dry cranium, the middle meningeal, which runs within the dura mater surrounding the brain, makes a deep groove in the calvarium.
The middle meningeal artery is intimately associated with the auriculotemporal nerve, which wraps around the artery making the two easily identifiable in the dissection of human cadavers and also easily damaged in surgery.
Structure:
It ascends between the sphenomandibular ligament and the lateral pterygoid muscle, and between the two roots of the auriculotemporal nerve to the foramen spinosum of the sphenoid bone, through which it enters the cranium; it then runs forward in a groove on the great wing of the sphenoid bone, and divides into two branches, anterior and posterior.
The anterior branch, the larger, crosses the great wing of the sphenoid, reaches the groove, or canal, in the sphenoidal angle of the parietal bone, and then divides into branches that spread out between the dura mater and internal surface of the cranium, some passing upward as far as the vertex, and others backward to the occipital region.
The posterior branch curves backward on the squamous part of the temporal bone, and, reaching the parietal bone some distance in front of its mastoid angle, divides into branches that supply the posterior part of the dura mater and cranium.
The branches of the middle meningeal artery are distributed partly to the dura mater, but chiefly to the bones; they anastomose with the arteries of the opposite side, and with the anterior and posterior meningeal arteries. The very smallest distal branches anastomose through the skull with small arterioles from the scalp.
On entering the cranium, the middle meningeal artery gives off the following branches: Numerous small vessels supply the trigeminal ganglion and the dura mater A superficial petrosal branch enters the hiatus of the facial canal, supplies the facial nerve, and anastomoses with the stylomastoid branch of the posterior auricular artery.
A superior tympanic artery runs in the canal of the tensor tympani muscle, and supplies this muscle and the lining of the canal.
Orbital branches pass through the superior orbital fissure or through separate canals in the great wing of the sphenoid, to anastomose with the lacrimal or other branches of the ophthalmic artery.
Temporal branches pass through foramina in the great wing of the sphenoid, and anastomose in the temporal fossa with the deep temporal arteries.
Variation: In approximately half of subjects, it gives off an accessory meningeal artery.
Very rarely the ophthalmic artery may arise as a branch of the middle meningeal artery.
The middle meningeal artery may arise not only from the maxillary artery but also from the ophthalmic artery, or lacrimal artery.
Clinical relevance:
An injured middle meningeal artery is the most common cause of an epidural hematoma. A head injury (e.g., from a road traffic accident or sports injury) is required to rupture the artery. Emergency treatment requires decompression of the haematoma, usually by craniotomy. Subdural bleeding is usually venous in nature, rather than arterial.
Clinical relevance:
The middle meningeal artery runs in a groove on the inside of the cranium. This can clearly be seen on a lateral skull X-ray, where it may be mistaken for a fracture of the skull. On a dry specimen, the groove is easy to see. This means that the artery is easy to study, even in specimens centuries old, and several classifications of the branches have been proposed, e.g. Adachi's classification of 1928. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Erythronolide synthase**
Erythronolide synthase:
In enzymology, an erythronolide synthase (also 6-deoxyerythronolide B synthase or DEBS) is an enzyme that catalyzes the chemical reaction 6 (2S)-methylmalonyl-CoA + propanoyl-CoA ⇌ 7 CoA + 6 CO2 + 6-deoxyerythronolide B. Thus, the substrates of this enzyme are (2S)-methylmalonyl-CoA and propanoyl-CoA, whereas its products are CoA, CO2, and 6-deoxyerythronolide B. This enzyme participates in the biosynthesis of 12-, 14- and 16-membered macrolides.
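The stoichiometry can be sanity-checked with simple carbon bookkeeping (a quick sketch, not a simulation; the C21 formula of 6-deoxyerythronolide B is standard, and the extender identity follows the AT specificity described later in this article):

```python
# Carbon bookkeeping for DEBS: the propionyl starter contributes 3
# carbons; each of the 6 methylmalonyl-derived extender units
# contributes 3 carbons, the fourth being lost as CO2 during the
# decarboxylative condensation.
starter_carbons = 3
carbons_per_extension = 3
extensions = 6

product_carbons = starter_carbons + extensions * carbons_per_extension
coa_released = 1 + extensions   # one CoA on loading, one per extension

print(product_carbons)  # 21, matching 6-deoxyerythronolide B (C21H38O6)
print(coa_released)     # 7, matching the reaction stoichiometry
```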
This enzyme belongs to the family of transferases and is a type I polyketide synthase. DEBS is found in Saccharopolyspora erythraea and other actinobacteria, and is responsible for the synthesis of the macrolide ring that is the precursor of the antibiotic erythromycin. Three categories of polyketide synthases have been identified to date: types I, II, and III. Type I synthases are large multidomain proteins containing, on a single polypeptide, all the sites necessary for polyketide synthesis; type II synthases have their active sites distributed among several smaller, discrete polypeptides; and type III synthases are comparatively small homodimeric enzymes that use a single active site iteratively. In the case of DEBS, there are three large multifunctional proteins, DEBS 1, 2, and 3, each of which functions as a dimer and carries two modules. Each module consists of a minimum of a ketosynthase (KS), an acyltransferase (AT), and an acyl carrier protein (ACP), but may also contain a ketoreductase (KR), a dehydratase (DH), and an enoylreductase (ER) for additional reduction reactions. The DEBS complex also contains a loading domain, preceding module 1, consisting of an acyltransferase and an acyl carrier protein. The terminal thioesterase (TE) acts solely to terminate DEBS polyketide synthesis and cyclize the macrolide ring.
Module components and functions:
Essential components
Ketosynthase: The active site of this enzyme has a very broad specificity, which allows for the synthesis of long chains of carbon atoms by joining, via a thioester linkage, small organic acids such as acetic and malonic acid. The KS domain receives the growing polyketide chain from the upstream module and subsequently catalyzes formation of the C–C bond between this substrate and an ACP-bound extender unit that is selected by the AT domain.
Acyltransferase: Each AT domain is specific for an α-carboxylated CoA thioester (i.e., methylmalonyl-CoA). This specificity prevents incorporation of incorrect extender units within the module. The AT captures a nucleophilic β-carboxyacyl-CoA extender unit and transfers it to the phosphopantetheine arm of the ACP domain. It functions by catalyzing acyl transfer from methylmalonyl-CoA to the ACP domain within the same module via a covalent acyl-AT intermediate. The importance of the AT for the stringent incorporation of specific extender units into polyketide building blocks makes it vital that the mechanism and structure of these domains be well elucidated, in order to develop efficient strategies for the regiospecific engineering of extender-unit incorporation in polyketide biosynthesis.
Acyl carrier protein: The ACP is not substrate-specific, which allows it to interact with every domain present within its module. This protein collaborates with the ketosynthase (KS) domain of the same module to catalyze polyketide chain elongation, and subsequently engages with the KS domain of the next module to facilitate forward chain transfer. The ACP first accepts the extender unit from the AT, then collaborates with the KS domain in chain elongation, and finally anchors the newly elongated chain as it undergoes modification at the β-keto position. In order to carry out their function, the ACP domains require post-translational addition of a phosphopantetheine group to a conserved serine residue of the ACP. The terminal sulfhydryl group of the phosphopantetheine is the site of attachment of the growing polyketide chain.
Thioesterase: Located at the C-terminus of the furthest-downstream module, the thioesterase releases the mature polyketide, either as the free acid or as a cyclized product, via lactonization. (Note: as stated above, the first module of DEBS contains an additional acyltransferase and ACP for initiation of the reactions.)
Non-essential components: A module may have any one or a combination of the following:
Ketoreductase – uses NADPH to stereospecifically reduce the β-keto group to a hydroxyl group.
Dehydratase – eliminates the hydroxyl group as water, creating a carbon–carbon double bond.
Enoylreductase – uses NADPH to reduce that double bond.
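The domain logic above can be summarized as a lookup from a module's optional reductive domains to the β-carbon state it leaves behind. A sketch (the per-module domain sets reflect the commonly cited DEBS layout and are an assumption here, not stated in the text):

```python
def beta_carbon_state(domains: set) -> str:
    """State of the β-carbon after one module, per the domain roles above."""
    if {"KR", "DH", "ER"} <= domains:
        return "methylene (fully reduced)"
    if {"KR", "DH"} <= domains:
        return "enoyl (double bond)"
    if "KR" in domains:
        return "hydroxyl"
    return "ketone"

# Assumed DEBS extension-module layout (illustrative):
debs = {
    1: {"KS", "AT", "KR", "ACP"},
    2: {"KS", "AT", "KR", "ACP"},
    3: {"KS", "AT", "ACP"},                    # no active KR: ketone retained
    4: {"KS", "AT", "DH", "ER", "KR", "ACP"},  # full reduction
    5: {"KS", "AT", "KR", "ACP"},
    6: {"KS", "AT", "KR", "ACP", "TE"},        # TE releases/cyclizes the chain
}

for module, domains in debs.items():
    print(module, beta_carbon_state(domains))
```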
Comparison between fatty acid synthesis and polyketide synthesis:
Fatty acid synthesis in most prokaryotes occurs by a type II synthase, composed of many separable enzymes located in the cytoplasm. However, some bacteria, such as Mycobacterium smegmatis, as well as mammals and yeast, use a type I synthase, a large multifunctional protein similar to the synthase used for polyketide synthesis. This type I synthase includes discrete domains on which individual reactions are catalyzed.
In both fatty acid synthesis and polyketide synthesis, the intermediates are covalently bound to ACP, or acyl carrier protein. However, in fatty acid synthesis the original molecules are Acyl-CoA or Malonyl-CoA but polyketide synthases can use multiple primers including acetyl-CoA, propionyl-CoA, isobutyryl-CoA, cyclohexanoyl-CoA, 3-amino-5-hydroxybenzoyl-CoA, or cinnamyl-CoA. In both fatty acid synthesis and polyketide synthesis these CoA carriers will be exchanged for ACP before they are incorporated into the growing molecule.
During the elongation steps of fatty acid synthesis, ketosynthase, ketoreductase, dehydratase, and enoylreductase are all used in sequence to create a saturated fatty acid then postsynthetic modification can be done to create an unsaturated or cyclo fatty acid. However, in polyketide synthesis these enzymes can be used in different combinations to create segments of polyketide that are saturated, unsaturated, or have a hydroxyl or carbonyl functional group. There are also enzymes used in both fatty acid synthesis and polyketide synthesis that can make modifications to the molecule after it has been synthesized.
As far as regulating the length of the molecule being synthesized, the specific mechanism controlling fatty acid chain length remains unknown, but it is expected that ACP-bound fatty acid chains of the correct length act as allosteric inhibitors of the fatty acid synthesis enzymes. In polyketide synthesis, the synthases are composed of modules in which the order of enzymatic reactions is defined by the structure of the protein complex. This means that once the molecule reaches the last reaction of the last module, the polyketide is released from the complex by a thioesterase enzyme. Therefore, regulation of fatty acid chain length is most likely due to allosteric regulation, while regulation of polyketide length is due to a specific enzyme within the polyketide synthase.
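The two termination modes described above can be contrasted in a toy model: an iterative fatty acid synthase stops when a length condition is met, while a modular PKS stops structurally, when the chain reaches the thioesterase at the end of the last module (the carbon counts are illustrative):

```python
def iterative_fas(target_carbons: int, starter: int = 2, per_cycle: int = 2) -> int:
    """Iterative synthase: termination depends on chain length (the
    hypothesized allosteric check), not on which enzyme acts last."""
    chain = starter
    while chain < target_carbons:
        chain += per_cycle        # one condensation cycle adds 2 carbons
    return chain

def modular_pks(extension_modules: int, starter: int = 3, per_module: int = 3) -> int:
    """Modular synthase: termination is structural; after the final
    module, the thioesterase releases whatever the chain has become."""
    chain = starter
    for _ in range(extension_modules):
        chain += per_module       # each module performs exactly one extension
    return chain

print(iterative_fas(16))  # 16: a palmitate-like product
print(modular_pks(6))     # 21: a DEBS-like backbone
```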
Application:
Since research on polyketide synthases (PKSs) began in the late 1980s and early 1990s, a number of strategies for the genetic modification of such PKSs have been developed and elucidated. Such changes in PKSs are of particular interest to the pharmaceutical industry, as new compounds with antibiotic or other antimicrobial effects are commonly synthesized after changes to the structure of the PKS have been made. Engineering the PKS complex is a much more practical method than synthesizing each product via chemical reactions in vitro, owing to the cost of reagents and the number of reactions that must take place. To exemplify the potential rewards of synthesizing new and effective antimicrobials: in 1995, the worldwide sales of erythromycin and its derivatives exceeded 3.5 billion dollars. This section examines modifications of structure in the DEBS PKS to create new products, both erythromycin derivatives and completely new polyketides generated by various means of engineering the modular complex.
There are five general methods by which DEBS is regularly modified:
1. Deletion or inactivation of active sites and modules
2. Substitution or addition of active sites and modules
3. Precursor-directed biosynthesis
4. KR replacement for altered stereospecificity
5. Tailoring enzyme modifications
Deletion or inactivation of active sites and modules:
The first reported instance of genetic engineering of DEBS came in 1991 from the Katz group, which deleted the activity of the KR in module 5 of DEBS, producing a 5-keto macrolide instead of the usual 5-hydroxy macrolide. Since then, deletion or inactivation (often via introduction of point mutations) of many active sites has been used to skip reduction and/or dehydration reactions. Such modifications target the various KR, DH, and ER active sites on different modules of DEBS. Whole modules can also be deleted to reduce the chain length of the polyketide and alter the cycle of reduction/dehydration normally seen.
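The deletion strategy can be illustrated with a small data model. The module/domain annotation below is simplified and illustrative, not a validated description of DEBS; it only shows how inactivating the module-5 KR turns the 5-hydroxy position into a 5-keto group.

```python
# Hypothetical, simplified annotation of the reductive domains in the
# six extension modules of DEBS (module number -> active domains).
debs = {
    1: {"KR"}, 2: {"KR"}, 3: set(), 4: {"KR", "DH", "ER"},
    5: {"KR"}, 6: {"KR"},
}

def product_groups(modules):
    """Map each module to the functional group it leaves in the product."""
    def group(domains):
        if "ER" in domains: return "CH2"
        if "DH" in domains: return "C=C"
        if "KR" in domains: return "OH"
        return "C=O"
    return {m: group(d) for m, d in modules.items()}

# Katz-style engineering: inactivate the KR of module 5.
engineered = dict(debs)
engineered[5] = set()

assert product_groups(debs)[5] == "OH"         # wild type: 5-hydroxy
assert product_groups(engineered)[5] == "C=O"  # mutant: 5-keto macrolide
```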
Substitution or addition of active sites and modules:
In one of the first reorganizations of DEBS, a copy of the terminal TE was placed at the end of each module in separate trials, which, as predicted, resulted in the cleavage and release of the correspondingly shortened products. Following this, ever more complex methods were devised for the addition or substitution of single or multiple active sites in the DEBS complex.
The most common method of engineering DEBS as of 2005 is AT substitution, in which the native AT domain is replaced with an AT specific for a different primer or extender molecule. Under normal circumstances, DEBS has a "loading" or priming AT specific for predominantly propionyl-CoA, while all six subsequent ATs are specific for the extender molecule, methylmalonyl-CoA. The native ATs of DEBS have all been successfully substituted with ATs from other modular PKSs, such as the rapamycin-producing PKS; replacing a methylmalonyl-CoA-specific AT with a malonyl-CoA-specific AT produces a non-methylated erythromycin derivative. This mode of engineering in particular shows the versatility that can be achieved, as both the priming molecule and the extender molecule can be changed to produce many new products.
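AT substitution can be sketched in the same spirit: each module's AT selects the extender unit, so swapping one AT changes the side chain at the corresponding position of the product. The extender names come from the text; the module layout and helper function are hypothetical illustrations.

```python
WILD_TYPE_AT = "methylmalonyl-CoA"   # native extender specificity of DEBS
RAP_AT = "malonyl-CoA"               # specificity borrowed from the rapamycin PKS

def side_chain(extender):
    # methylmalonyl-CoA leaves a methyl branch; malonyl-CoA leaves hydrogen
    return "CH3" if extender == "methylmalonyl-CoA" else "H"

modules = [WILD_TYPE_AT] * 6         # the six extension modules of DEBS
modules[3] = RAP_AT                  # substitute the AT of one module (example)

branches = [side_chain(at) for at in modules]
assert branches[3] == "H"            # desmethyl position in the derivative
assert branches.count("CH3") == 5    # the other positions are unchanged
```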
In addition to the AT sites, any of the reductive/dehydrating enzyme active sites may be replaced with one or more additional reductive/dehydrating active sites. For example, in one study, the KR of module 2 of DEBS was replaced by a full set of reductive domains (DH, ER and KR) derived from module 1 of the rapamycin PKS (Figure 2). There is at least one report of a whole-module substitution, in which module 2 of DEBS was replaced with module 5 of the rapamycin PKS. The activities of the two modules are identical, and the same erythromycin precursor (6-deoxyerythronolide B) was produced by the chimeric PKS; nonetheless, this demonstrates the possibility of creating PKSs with modules from two or even several different PKSs in order to produce a multitude of new products. There is one problem with connecting heterologous modules, though: there is recent evidence that the amino acid sequence between the ACP domain and the subsequent KS domain of downstream modules plays an important role in the transfer of the growing polyketide from one module to another. These regions have been labeled "linkers", and although they have no direct catalytic role, any substitution of a linker region that is not structurally compatible with the wild-type PKS may cause poor yields of the expected product.
Precursor-directed biosynthesis:
Using a semi-synthetic approach, a diketide intermediate may be added either in vitro or in vivo to a DEBS complex in which the activity of the first KS has been deleted. The diketide then loads onto the second KS (in module 2 of DEBS) and is processed all the way to the end as normal. It has been shown that this second KS is fairly nonspecific, and a large variety of synthetic diketides can be accepted and subsequently fully elongated and released. However, this KS is not highly tolerant of structural changes at the C2 and C3 positions, especially if the stereochemistry is altered. To date, this has been the most successful approach to making macrolides with potency equal to or greater than that of erythromycin.
Ketoreductase replacement to alter stereospecificity:
In modular PKSs, KR active sites catalyze the stereospecific reduction of polyketides. Inversion of an alcohol stereocenter to the opposite stereoisomer is possible by replacing a wild-type KR with a KR of the opposite specificity. This has rarely been done successfully, and only at the terminal KR of the DEBS complex. It has been theorized that changing the stereospecificity of a KR in an earlier module would also require the concurrent modification of all downstream KSs. Recent studies of the amino acid sequences of the two types of stereospecific KR have found a perfect correlation between these residues and the predicted stereochemical outcome. This is particularly useful in situations where the gene sequence of a modular PKS is known but the final product structure has not yet been elucidated.
Tailoring enzyme modifications:
Enzymes that act on the macrolide after it has been released and cyclized by DEBS are called tailoring enzymes. Many such enzymes are involved in the production of erythromycin from 6-deoxyerythronolide B, the final product of unmodified DEBS. These enzymes are mainly oxidoreductases and glycosyl transferases, and they are essential for the antibiotic activity of erythromycin. Thus far, few attempts have been made to modify tailoring pathways; however, the enzymes that participate in such pathways are currently being characterized and are of great interest. Studies are facilitated by the fact that their genes are located adjacent to the PKS genes, making many of them readily identifiable. In the future, alteration of tailoring enzymes could produce many new and effective antimicrobials.
Structural studies:
As of late 2007, 8 structures have been solved for this class of enzymes, with PDB accession codes 1KEZ, 1MO2, 1PZQ, 1PZR, 2HG4, 2JU1, 2JU2, and 2QO3.
Another name for this enzyme class is malonyl-CoA:propanoyl-CoA malonyltransferase (cyclizing); other names in common use include erythronolide condensing enzyme and malonyl-CoA:propionyl-CoA malonyltransferase (cyclizing). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Restriction fragment mass polymorphism**
Restriction fragment mass polymorphism:
Restriction Fragment Mass Polymorphism (RFMP) is a technology which digests DNA into oligonucleotide fragments, and detects variation of DNA sequences by molecular weight of the fragments. RFMP is a proprietary technology of GeneMatrix and can be utilized for genotyping viruses and microorganisms, and for human genome research. It is relatively restricted in usage due to the existence of many other genotyping products.
Overview:
Restriction fragment mass polymorphism (RFMP) is an application of matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry used to identify individual nucleotides in a DNA fragment, most commonly for typing single nucleotide polymorphisms (SNPs). RFMP was developed as a successor to the similar restriction fragment length polymorphism (RFLP) technique, with the intent of detecting a larger range of SNPs. Rather than reading out fragment lengths as RFLP does, RFMP reads out the individual nucleotides using MALDI-TOF, which distinguishes same-length fragments that would be indistinguishable by size alone.
Methodology:
Like RFLP, the basic mechanism of RFMP is to run the polymerase chain reaction (PCR) over a test sample. Modified PCR primers are used to create known restriction sites for enzymatic digestion. Because the resulting fragment lengths are known, size selection can then isolate the DNA of interest. Finally, MALDI-TOF is run on the fragments of interest to produce an m/z (mass-to-charge ratio) spectrum that identifies the individual nucleotides.
A specific process, for example, is Hong's 2008 strategy, outlined as follows: primers are modified with a GGATG recognition site and amplified with PCR. The Fok-I enzyme is then used to cut 9 (3’) and 13 (5’) bases upstream of the recognition site, leaving an overhang; BstF5I similarly cuts upstream at distances 2 (3’) and 0 (3’), making an additional overhang. This produces two oligonucleotide strands, a 7-mer and a 13-mer. Strands of either length are subjected to MALDI-TOF mass spectrometry to determine the individual nucleotides. These steps, like any experimental methodology, are case-specific and can vary with the goals and constraints of the experimental setup.
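As a rough illustration of the mass readout, the average mass of a short single-stranded fragment pins down its base composition. The residue masses below are approximate average values for DNA nucleotide monophosphate residues, and end-group chemistry is simplified to a single +18.02 Da term; this is a back-of-the-envelope sketch, not the exact mass model used in the assay.

```python
# Approximate average residue masses (Da) for DNA; assumed values.
RESIDUE_MASS = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}

def fragment_mass(seq):
    """Approximate average mass of a single-stranded DNA fragment."""
    return sum(RESIDUE_MASS[b] for b in seq) + 18.02  # terminal H/OH

# A single A -> G substitution in a 7-mer shifts the peak by ~16 Da,
# which MALDI-TOF resolves easily:
shift = fragment_mass("ACGTAGA") - fragment_mass("ACGTAAA")
assert abs(shift - 16.0) < 0.1
```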
Application:
RFMP is still primarily limited to South Korean medical literature, as it is an array assay competing with many specialized detection systems (whereas RFMP offers more general functionality). In recent years there has been a focus on using RFMP for HPV detection, motivated by the fact that its sensitivity is two log10-fold better than the standard of care. However, this still does not make RFMP the clear top choice in the HPV landscape, as others, such as the Roche Linear Array, Abbott RealTime genotype II, and Sysmex HISCL HCV Gr, experimentally outperform RFMP in terms of detection accuracy. Other limitations that hinder RFMP's spread in the medical world are attributed to its lack of information on SNP mutation rate (e.g. masses have no correspondence to mutagenesis), as well as a general increase in user-handling difficulty compared to its peers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Horseshoe theory**
Horseshoe theory:
In popular discourse, the horseshoe theory asserts that the far-left and the far-right, rather than being at opposite and opposing ends of a linear continuum of the political spectrum, closely resemble each other, analogous to the way that the opposite ends of a horseshoe are close together. The theory is attributed to the French philosopher and writer Jean-Pierre Faye in his 2002 book, Le Siècle des idéologies ("The Century of Ideologies"). Several political scientists, psychologists, and sociologists have criticized the horseshoe theory. Proponents point to a number of perceived similarities between the extremes and allege that both have a tendency to support authoritarianism or totalitarianism; this does not appear to be supported by scholars in the field. Peer-reviewed research on the subject is scarce, and existing studies and comprehensive reviews often find only limited support for the theory, and only under certain conditions, while generally contradicting its central premises.
Origin:
The horseshoe metaphor was used as early as the Weimar Republic to describe the ideology of the Black Front. A later use of the term in political theory appeared in Le Siècle des idéologies. Faye's book discussed the use of ideologies (he noted that the word is formed from a pair of Greek words joined in French) that he argued are rooted in philosophy by totalitarian regimes, with specific reference to Friedrich Nietzsche and Adolf Hitler, and Karl Marx and Joseph Stalin; for instance, Faye used the horseshoe metaphor to describe the political position of German political parties in 1932, from the Communist Party of Germany to the Nazi Party. Others have attributed the theory, also called centrist/extremist theory and sometimes referred to as the Pluralist School, to the American sociologists Seymour Martin Lipset and Daniel Bell, and others who became part of the neoconservative movement in the United States; according to critics, who formed complex social movement theories in response, it is a legacy of Cold War liberal politics. Because the theory is also popular in Germany, the German political scientist Eckhard Jesse is considered a co-contributor to it.
Modern usage:
In his 2006 book, Where Did the Party Go?, the American political scientist Jeff Taylor wrote: "It may be more useful to think of the Left and the Right as two components of populism, with elitism residing in the Center. The political spectrum may be linear, but it is not a straight line. It is shaped like a horseshoe." In the same year, the term was used in discussing a resurgent hostility toward Jews and a new antisemitism from both the far-left and the far-right. In an essay from 2008, Josef Joffe, a visiting fellow at the Hoover Institution, an American conservative think tank, wrote: "Will globalization survive the gloom? The creeping revolt against globalization actually preceded the Crash of '08. Everywhere in the West, populism began to show its angry face at mid-decade. The two most dramatic instances were Germany and Austria, where populist parties scored big with a message of isolationism, protectionism and redistribution. In Germany, it was left-wing populism ('Die Linke'); in Austria it was a bunch of right-wing parties that garnered almost 30% in the 2008 election. Left and right together illustrated once more the 'horseshoe' theory of modern politics: As the iron is bent backward, the two extremes almost touch." In a 2015 article for The Daily Beast, "The Left's Witchunt Against Muslims", the reformist Muslim Maajid Nawaz invoked the horseshoe theory while lamenting a common tendency on both extremes toward blacklisting, such as the McCarthyist compiling and publishing of "lists of our political foes". He wrote: "As the political horseshoe theory attributed to Jean-Pierre Faye highlights, if we travel far-left enough, we find the very same sneering, nasty and reckless bully-boy tactics used by the far-right. The two extremes of the political spectrum end up meeting like a horseshoe, at the top, which to my mind symbolizes totalitarian control from above. 
In their quest for ideological purity, Stalin and Hitler had more in common than modern neo-Nazis and far-left agitators would care to admit." In a 2018 article for Eurozine, "How Right Is the Left?", political scientist Kyrylo Tkachenko wrote about the common cause found recently between both extremes in Ukraine. He said: "The pursuit of a common political agenda is a trend discernible at both extremes of the political spectrum. Though this phenomenon manifests itself primarily through content-related overlaps, I believe there are good reasons to refer to it as a red-brown alliance. Its commonalities are based on shared anti-liberal resentment. Of course, there remain palpable differences between far left and the far right. But we should not underestimate the dangers already posed by these left-right intersections, as well as what we might lose if the resentment-driven backlash becomes mainstream." In a 2021 Reason article, "Let's Play Horseshoe Theory", Katherine Mangu-Ward, the American libertarian magazine's editor-in-chief, wrote: "The [horseshoe] theory is typically used to explain why 20th century communists and fascists seemed to have so much in common, though it likely predates the last century. But in the United States in 2021, a softer version of this iron law is at play, with the center-left and the center-right mushily converging toward expensive authoritarian policies that look astonishingly similar despite their supposedly opposite goals. Still a horseshoe, but more like one of the marshmallow ones you can find in bowls of Lucky Charms." 
In a December 2022 article for The Atlantic, "The Crunchy-to-Alt-Right Pipeline", examining the connections between "natural-food-and-body community and white-power and militant-right online spaces", historian Kathleen Belew wrote that an examination of documents connected with the white power movement indicated that a horseshoe is not quite right as a visual metaphor for the relationship of the far-left and the far-right, that, in fact, the archive showed that it was more like a circle, at least in the specific case she examined. The theory has also been cited when referring to American far-right and far-left organizations both supporting Vladimir Putin in the Russian invasion of Ukraine. The probability of autocratization in the year after election shows a horseshoe behavior along the economic left–right axis but not along the cultural dimension.
Academic studies and criticism:
The horseshoe theory does not enjoy wide support within academic circles; peer-reviewed research by political scientists on the subject is scarce, and existing studies and comprehensive reviews have often contradicted its central premises, or found only limited support for the theory under certain conditions. A 2011 study about the far-left and the far-right within the context of the 2007 French presidential election concluded: "Divergent social and political logics explain the electoral support for these two candidates: their voters do not occupy the same political space, they do not have the same social background, and they do not hold the same values." A 2012 study concluded: "The present results thus do not corroborate the idea that adherents to extreme ideologies on the left-wing and right-wing sides resemble each other but instead support the alternative perspective that different extreme ideologies attract different people. In other words, extremists should be distinguished on the basis of the ideology to which they adhere, and there is no universal extremist type that feels at home in any extreme ideology." A 2019 study concluded that "our findings suggest that speaking of 'extreme left-wing values' or 'extreme right-wing values' may not be meaningful, as members of both groups are heterogeneous in the values that they endorse." A 2022 study about antisemitism concluded: "On all items, the far left has lower agreement with these statements relative to moderates, and the far right has higher agreement with these statements compared to moderates. Contrary to a 'horseshoe' theory, the evidence reveals increasing antisemitism moving from left to right." Paul H. P. Hanel, a research associate at the University of Essex, et al. summarized some of those studies. They wrote: "Likewise, some even argue that all extremists, across the political left and right, in fact, support similar policies, in a view known as 'horseshoe theory'. 
However, not only do recent studies fail to support such beliefs, they also contradict them ... Van Hiel also found that left-wing respondents reported significantly lower endorsement of values associated with conservation, self-enhancement, and anti-immigration attitudes compared to both moderate and right-wing activists, with individuals on the right reporting greater endorsement of such values and attitudes ... Overall, van Hiel provided evidence demonstrating that Western European extremist groups are far from being homogenous, and left- and right-wing groups represent distinct ideologies." Several scholars have dismissed the theory as an oversimplification and generalization that ignores fundamental differences between the far-left and far-right, and have questioned its general premises, citing significant differences between the left and right on the political spectrum and in governance. Chip Berlet, an expert on right-wing movements, has dismissed perceived far-left–far-right flirtations as an oversimplification of political ideologies that ignores fundamental differences between them. In a 2000 book about the radical right in the United States, Right-Wing Populism in America: Too Close for Comfort, he and Matthew N. Lyons, another expert on right-wing movements, dismissed a Southern Poverty Law Center report that "relied heavily on centrist/extremist analysis", as well as the claim that the far-right had played a major role in events such as the 1999 Seattle protests, which they described as false. Within the context of the anti-globalization movement, they also noted that those on the political left were concerned about the far-right infiltrating their own anti-WTO groups, which they characterized as very broad, including centrist liberals and social democrats, and that those groups took the problem seriously because they did not want to be associated with "right-wing nationalists and bigots". 
Some, such as the Peoples' Global Action, amended their own manifestos to specifically reject any such alliances on principle. In a 2014 paper, Vassilis Pavlopoulos, a professor in social psychology at the University of Athens, argued: "The so-called centrist/extremist or horseshoe theory points to notorious similarities between the two extremes of the political spectrum (e.g., authoritarianism). It remains alive though many sociologists consider it to have been thoroughly discredited (Berlet & Lyons, 2000). Furthermore, the ideological profiles of the two political poles have been found to differ considerably (Pavlopoulos, 2013). The centrist/extremist hypothesis narrows civic political debate and undermines progressive organizing. Matching the neo-Nazi with the radical left leads to the legitimization of far-right ideology and practices." Simon Choat, a senior lecturer in political theory at Kingston University, has criticized the horseshoe theory. In a 2017 article for The Conversation, "'Horseshoe theory' is nonsense – the far right and far left have little in common", he argues that far-left and far-right ideologies share similarities only in the vaguest sense, in that both oppose the liberal democratic status quo, but that the two sides have very different reasons and very different aims for doing so. Choat uses the issue of globalization as an example: both the far-left and the far-right attack neoliberal globalization and its elites but have conflicting views on who those elites are and conflicting reasons for attacking them. Additionally, Choat argues that although proponents of the horseshoe theory may cite examples of an alleged history of collusion between fascists and communists, those on the far-left usually oppose the rise of far-right or fascist regimes in their countries. 
Instead, he argues that it has been centrists who have supported far-right and fascist regimes, preferring them in power over socialist ones, and that the horseshoe theory is biased towards centrists, who he says use it to smear or attack the left more than the right. He cites the examples of the 2016 United States presidential election and the 2017 French presidential election, in which supporters of Bernie Sanders and Jean-Luc Mélenchon were alleged to have preferred or voted for Donald Trump and Marine Le Pen. In this sense, he argues that the horseshoe theory is used to engage in red-baiting or reductio ad Hitlerum, which allows its users to "discredit the left while disavowing their own complicity with the far right." Choat says that "it is patently absurd to compare Stalin to present-day leftists like Mélenchon or Corbyn", and concludes: "If liberals genuinely want to understand and confront the rise of the far right, then rather than smearing the left they should perhaps reflect on their own faults." While this formal academic analysis is fairly recent, criticism of horseshoe theory and its antecedents is long-standing, and a frequent basis for criticism has been the tendency of an observer from one position to group opposing movements together. As early as 1938, the Marxist theorist and politician Leon Trotsky wrote "Their Morals and Ours", which became the basis for his 1939 book, Their Morals and Ours: Marxist Versus Liberal Views on Morality. In the 1938 article, first published in the United States in the theoretical journal of the Socialist Workers Party of the International Left Opposition, he wrote: "The fundamental feature of [arguments comparing disparate political movements] lies in their completely ignoring the material foundation of the various currents, that is, their class nature and by that token their objective historical role. Instead they evaluate and classify different currents according to some external and secondary manifestation ... 
To Hitler, liberalism and Marxism are twins because they ignore 'blood and honour'. To a democrat, fascism and Bolshevism are twins because they do not bow before universal suffrage ... Different classes in the name of different aims may in certain instances utilise similar means. Essentially it cannot be otherwise. Armies in combat are always more or less symmetrical; were there nothing in common in their methods of struggle they could not inflict blows upon each other." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Composing stick**
Composing stick:
In letterpress printing and typesetting, a composing stick is a tray-like tool used to assemble pieces of metal type into words and lines, which are then transferred to a galley before being locked into a forme and printed. Many composing sticks have one adjustable end, allowing the length of the lines and consequent width of the page or column to be set, with spaces and quadrats of different sizes being used to make up the exact width. Early composing sticks often had a fixed measure, as did many used in setting type for newspapers, which were fixed to the width of a standard column, when newspapers were still composed by hand.
The compositor takes the pieces of type from the boxes (compartments) of the type case and places them in the composing stick, working from left to right and placing the letters upside-down with the nick to the top.
Early composing sticks were made of wood, but later iron, brass, steel, aluminium, pewter and other metals were used. Wooden composing sticks continued to be made in large sizes into the nineteenth century, for setting wood letter and other large sizes of type for display. In the industrial age, composing sticks were manufactured by many companies, but notably in America by the H. B. Rouse company, which made composing sticks that were adjustable to the half pica, as well as a stick containing a micrometer that was infinitely adjustable. Some sticks were marked in agates as well, to aid in newspaper and advertisement composition. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tinkerforge**
Tinkerforge:
Tinkerforge is an open source hardware platform of stackable microcontroller building blocks (Bricks) that can control different modules (Bricklets). The primary communication interface of the building blocks can be extended using Master Extensions. The hardware can be controlled by external programs written in C, C++, C#, Object Pascal, Java, Perl, PHP, Python, Ruby, Shell and VB.NET over a USB, Wi-Fi or Ethernet connection, running on Windows, Linux and macOS. This non-embedded programming approach eliminates the typical requirements and limitations (development tools, limited availability of RAM and processing power) of conventional embedded software development (such as Arduino). Tinkerforge hardware and software are both open source, and all files are hosted on GitHub.
In 2012, the computer magazine Chip gave Tinkerforge its "Product of the Year" award.
Bricks:
Bricks are 4x4 cm circuit boards. They can evaluate measurements, control motors and communicate with other building blocks. Each Brick has a 32-bit ARM microcontroller, a USB connector, and connectors for additional Bricks and Bricklets.
It is possible to stack several Bricks onto each other. The bottom Brick of such Stacks needs to be a Master Brick.
Bricklets:
Bricklets extend the features of Bricks by providing means for input and output of data. Many Bricklets are sensors, but there are also LCD Bricklets and Bricklets for digital and analog input and output.
Master Extensions:
Master Extensions extend the communication interfaces of Bricks. Like Bricks, Master Extensions are 4x4 cm circuit boards. There are Extensions for Wi-Fi, Ethernet and RS-485. From a programming perspective, the different interfaces are transparent: a stack with a Master Extension behaves as if every board in the stack were directly connected to the PC over a USB connection. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Airborne wind shear detection and alert system**
Airborne wind shear detection and alert system:
The airborne wind shear detection and alert system, fitted in an aircraft, detects a wind shear condition and alerts the pilot both visually and aurally. A reactive wind shear detection system is activated by the aircraft flying into an area with a wind shear condition of sufficient force to pose a hazard to the aircraft. A predictive wind shear detection system is activated by the presence of a wind shear condition ahead of the aircraft. In 1988, the U.S. Federal Aviation Administration (FAA) mandated that all turbine-powered commercial aircraft have on-board wind shear detection systems by 1993. Airlines successfully lobbied to have commercial turbo-prop aircraft exempted from this requirement. In the predictive wind shear detection mode, the weather radar processor of the aircraft detects the presence of a microburst, a type of vertical wind shear condition, by measuring the Doppler frequency shift of the microwave pulses caused by the microburst ahead of the aircraft, and displays the area where it is present on the Navigation Display Unit (of the Electronic Flight Instrument System) along with an aural warning.
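The predictive mode rests on the Doppler relation f_d = 2v/λ for a scatterer moving at radial speed v relative to the radar. A quick sketch of the magnitudes involved, with an assumed 9.3 GHz X-band carrier (a typical airborne weather radar value, not taken from the text):

```python
C = 3.0e8            # speed of light, m/s
f_radar = 9.3e9      # X-band carrier frequency, Hz (assumed value)
wavelength = C / f_radar

def doppler_shift(radial_speed_ms):
    """Doppler frequency shift (Hz) of a return from a moving scatterer."""
    return 2.0 * radial_speed_ms / wavelength

# A 20 m/s microburst outflow produces a shift on the order of a kilohertz,
# well within what the radar processor can resolve:
fd = doppler_shift(20.0)
assert 1200 < fd < 1300
```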
History of development:
In June 1975, Eastern Air Lines Flight 66 crashed on approach to New York JFK Airport due to microburst-induced wind shear. Then, in July 1982, Pan Am Flight 759 crashed on takeoff from New Orleans International Airport in similar weather conditions. Finally, in August 1985, wind shear and inadequate reactions by the pilots caused the crash of Delta Air Lines Flight 191 on approach to Dallas/Fort Worth International Airport in a thunderstorm.
On July 24, 1986, the FAA and NASA signed a memorandum of agreement to formally begin the Airborne Wind-Shear Detection and Avoidance Program (AWDAP). As a result, a wind-shear program was established in the Flight Systems Directorate of NASA's Langley Research Center. After five years of intensely studying various weather phenomena and sensor technologies, the researchers decided to validate their findings in actual flight conditions. They chose an extensively modified Boeing 737, which was equipped with a rear research cockpit in place of the forward section of the passenger cabin. A modified Rockwell Collins model 708 X-band ground-based radar unit was used in the AWDAP experiments. The real-time radar processor system used during the 1992 flight experiments was a VMEbus-based system with a Motorola 68030 host processor and three DSP boards. On September 1, 1994, the Allied-Signal/Bendix (now Honeywell) weather radar model RDR-4B became the first predictive wind-shear system to be certified for commercial airline operations. In the same year, Continental Airlines became the first commercial carrier to install an airborne predictive wind-shear detection system on its aircraft. By June 1996, Rockwell Collins and Westinghouse's Defense and Electronics Group (now part of Northrop Grumman) had also produced FAA-certified predictive wind-shear detection systems. The IEEE Intelligent Transportation Systems Society is conducting research for further development of this system. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SGI Indigo**
SGI Indigo:
The Indigo, introduced as the IRIS Indigo, is a line of workstation computers developed and manufactured by Silicon Graphics, Inc. (SGI). SGI first announced the system in July 1991. The Indigo is considered one of the most capable graphics workstations of its era, and was essentially peerless in the realm of hardware-accelerated three-dimensional graphics rendering. For use as a 2D graphics workstation, the Indigo was equipped with a two-dimensional framebuffer; for use as a 3D graphics workstation, it was equipped with the Elan graphics subsystem, including one to four Geometry Engines (GEs). SGI sold a server version with no video adapter.
SGI Indigo:
The Indigo's design is based on a simple cube motif in indigo hue. Graphics and other peripheral expansions are accomplished via the GIO32 expansion bus.
The Indigo was superseded generally by the SGI Indigo2, and in the low-cost market segment by the SGI Indy.
Technical specifications:
The first Indigo model (code-named Hollywood) was introduced on July 22, 1991. It is based on the IP12 processor board, which contains a 32-bit MIPS R3000A microprocessor soldered on the board and proprietary memory slots supporting up to 96 MB of RAM.
Technical specifications:
The later version (code-named Blackjack) is based on the IP20 processor board, which has a removable processor module (PM1 or PM2) containing a 64-bit MIPS R4000 (100 MHz) or R4400 processor (100 MHz or 150 MHz) that implements the MIPS-III instruction set. The IP20 uses standard 72-pin SIMMs with parity, and has 12 SIMM slots for a total of 384 MB of RAM at maximum.
Technical specifications:
A Motorola 56000 DSP is used for audio I/O, giving the system 4-channel 16-bit audio. Ethernet is supported on board by the SEEQ 80C03 chipset coupled with the HPC (High-performance Peripheral Controller), which provides the DMA engine. The HPC interfaces primarily between the GIO bus and the Ethernet, SCSI (WD33C93 chipset) and the 56000 DSP. The GIO bus interface is implemented by the PIC (Processor Interface Controller) on IP12 and the MC (Memory Controller) on IP20.
Technical specifications:
Much of the hardware design can be traced back to the SGI IRIS 4D/3x series, which shares the same memory controller, Ethernet, SCSI, and (optionally) DSP as the IP12 Indigo. The 4D/30, 4D/35 and Indigo R3000 are all considered IP12 machines and run the same IRIX kernel. The Indigo R3000 is effectively a reduced-cost 4D/35 without a VME bus. The PIC supports a VME expansion bus (used on the 4D/3x series) and GIO expansion slots (used on the Indigo). In all IP12, IP20, and IP22/IP24 (see SGI Indigo2) systems, the HPC is attached to the GIO bus.
Graphics options:
Entry graphics The entry-level 8-bit color frame buffer comes in three versions. One uses the system's GIO expansion bus. Another uses the main backplane, like the XS, XZ, and Elan graphics options. The third is the same as the second but adds a second video output, allowing the computer to drive two "heads" (monitors).
Graphics options:
XS Graphics The Indigo's XS Graphics option has a single GE7 Geometry Engine (GE), an RE3 Raster Engine, an HQ2 Command Engine, a VC1, and an XMAP5. It is aimed at low-cost wireframe operations, as opposed to the more powerful and more expensive options for textured graphics. Part of SGI's Express line of graphics, four XS graphics options were produced for the Indigo: the XS-8 offers 8-bit color, with one VM2 video RAM module; the XS-Z adds the ZB-4 Z buffer; the XS-24 adds two VM2 modules and offers 24 bits of color (32 bits per pixel in total); and the XS-24Z adds a Z buffer to the XS-24.
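The differences between the 8-bit and 24-bit options come down largely to video RAM. The following back-of-the-envelope sizing sketch assumes the Indigo's usual 1280x1024 display and a 24-bit Z buffer; both figures are simplifications of the real VRAM layout, not exact hardware specifications.

```python
WIDTH, HEIGHT = 1280, 1024  # assumed display resolution

def framebuffer_bytes(color_bits, z_bits=0):
    """Bytes needed for one full screen of color (and optional Z) planes."""
    bits_per_pixel = color_bits + z_bits
    return WIDTH * HEIGHT * bits_per_pixel // 8

entry_8bit = framebuffer_bytes(8)             # 8-bit color (Entry / XS-8 style)
xs24 = framebuffer_bytes(32)                  # 24-bit color stored in 32 bits
xs24z = framebuffer_bytes(32, z_bits=24)      # adds an assumed 24-bit Z buffer

# The 24-bit option needs 4x the pixel memory of the 8-bit one,
# and adding a Z buffer grows it further still.
print(entry_8bit, xs24, xs24z)
```

This is why the XS-24 requires two additional VM2 video RAM modules over the XS-8, and why the Z-buffered variants were separate, costlier options.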
Graphics options:
XZ Graphics The XZ graphics option is also a member of SGI's Express graphics line. It is similar to the XS-24Z, but it includes a second GE7 Geometry Engine ASIC, doubling its geometry performance.
Elan Graphics The highest-performance graphics option offered for the Indigo, and also a member of SGI's Express graphics line. It is like the XS-24Z and XZ, but has four GE7 Geometry Engine ASICs, giving it twice the performance of the XZ option.
Operating system:
The Indigo was designed to run IRIX, SGI's version of Unix. Indigos with R3000 processors are supported up to IRIX version 5.3, and Indigos equipped with an R4000 or R4400 processor can run up to IRIX 6.5.22.
Additionally, the free Unix-like operating system NetBSD has support for both the IP12 and IP20 Indigos as part of the sgimips port. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Esaote**
Esaote:
Esaote SpA is an Italian company operating in the biomedical sector that designs, produces, sells and maintains equipment for medical diagnostics.
History:
Esaote was founded in Genoa in 1982 as the biomedical electronics division of Ansaldo SpA. In 1984 its activities were merged with those of Elsag SpA into Esacontrol. In 1986 STET acquired the Tuscan company Officine Elettromeccaniche Biomedica (OTE Biomedica Elettronica SpA), and in 1988 it passed to Finmeccanica, which merged it with the biomedical business of Esacontrol, giving rise to Esaote Biomedica SpA, created by Elsag, Selenia and Ansaldo (hence the acronym ESA). Privatized by the state in 1994, it changed its name to Esaote SpA and two years later was listed on the Milan Stock Exchange, from which it was delisted in 2003 following a takeover bid for 100% of the share capital sponsored by Bracco SpA, already the majority shareholder since 1998 through Bracco Holding NV.
History:
In May 2016, Esaote opened a new office at the Erzelli science and technology park of Genoa.
History:
In March 2017, the company inaugurated in Florence a new Center of Excellence for the production of probes and transducers for ultrasound diagnostic systems, together with a new worldwide hub located in Sesto Fiorentino (Florence). In May, Esaote moved the production of dedicated magnetic resonance systems from its "historic" headquarters in Genoa, in via Siffredi, to a new, modern production plant built in Genoa Multedo. The Group's research and development laboratories for magnetic resonance imaging, the ultrasound diagnostic system repair center and the spare parts center for all equipment are also concentrated at the same site. In December 2017, 100% of Esaote was sold for between 300 and 400 million euros to a consortium of Chinese investors made up of six leading companies in the Chinese medical technology sector and investment funds with experience in healthcare, among them a hi-tech fund founded by Jack Ma, founder of Alibaba, together with David Yu. Under the agreement, finalized in April 2018 after the favorable opinion of the Italian government in February, the headquarters of Esaote remained in Genoa and Karl-Heinz Lumpi stayed at the helm of the company. In May 2019 Franco Fontana, with the company since 2008, became the new CEO. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**(Cyclopentadienyl)titanium trichloride**
(Cyclopentadienyl)titanium trichloride:
(Cyclopentadienyl)titanium trichloride is an organotitanium compound with the formula (C5H5)TiCl3. It is a moisture-sensitive orange solid. The compound adopts a piano stool geometry.
Preparation and reactions:
(C5H5)TiCl3 is prepared by the redistribution reaction of titanocene dichloride and titanium tetrachloride:
(C5H5)2TiCl2 + TiCl4 → 2 (C5H5)TiCl3
The complex is electrophilic, readily forming alkoxide complexes upon treatment with alcohols. Reduction of (cyclopentadienyl)titanium trichloride with zinc powder gives the polymeric Ti(III) derivative (cyclopentadienyl)titanium dichloride:
(C5H5)TiCl3 + 0.5 Zn → 1/n [(C5H5)TiCl2]n + 0.5 ZnCl2 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
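The stoichiometry of both reactions can be machine-checked by tallying atoms on each side; the sketch below does exactly that (the reduction is doubled to avoid fractional coefficients). It verifies atom balance only, not the chemistry itself.

```python
from collections import Counter

def atoms(formula_counts, coeff=1):
    """Scale an element tally {element: count} by a stoichiometric coefficient."""
    return Counter({el: n * coeff for el, n in formula_counts.items()})

CP2TICL2 = {"C": 10, "H": 10, "Ti": 1, "Cl": 2}  # (C5H5)2TiCl2
TICL4    = {"Ti": 1, "Cl": 4}
CPTICL3  = {"C": 5, "H": 5, "Ti": 1, "Cl": 3}    # (C5H5)TiCl3
CPTICL2  = {"C": 5, "H": 5, "Ti": 1, "Cl": 2}    # (C5H5)TiCl2 repeat unit
ZN       = {"Zn": 1}
ZNCL2    = {"Zn": 1, "Cl": 2}

# Redistribution: (C5H5)2TiCl2 + TiCl4 -> 2 (C5H5)TiCl3
assert atoms(CP2TICL2) + atoms(TICL4) == atoms(CPTICL3, 2)

# Reduction (doubled): 2 (C5H5)TiCl3 + Zn -> 2 (C5H5)TiCl2 + ZnCl2
assert atoms(CPTICL3, 2) + atoms(ZN) == atoms(CPTICL2, 2) + atoms(ZNCL2)

print("both equations balance")
```

The second assertion also makes the redox bookkeeping visible: each titanium goes from Ti(IV) to Ti(III) while zinc supplies the two electrons, consistent with the 0.5 Zn coefficient in the equation above.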