**Leica M mount**

The Leica M mount is a camera lens mount introduced in 1954 with the Leica M3 and an accompanying range of lenses. It has been used on all Leica M-series cameras and certain accessories (e.g. the Visoflex reflex viewing attachment) up to the current film Leica M-A and digital Leica M11. The mount has also been used by Epson, Ricoh, Minolta, Konica, Cosina Voigtländer, Carl Zeiss AG and Rollei Fototechnic on some of their cameras.

Overview: The Leica M mount was introduced at the 1954 Photokina show, with the Leica M3 as its first camera. The "M" stands for Messsucher, German for rangefinder. The new camera abandoned the M39 screw mount in favour of a bayonet mount, which allowed lenses to be changed more quickly and made the fitting more secure. Other innovations introduced by the M3 included a single window combining the viewfinder (for composition) and the rangefinder (for focusing), and a double-stroke film advance lever (later models have a single-stroke lever). The M3 was a success: over 220,000 units were sold by the time production ended in 1966, and it remains the best-selling M mount camera ever made. The M3 uses 135 (35 mm) film, with the canister loaded behind a detachable bottom plate. The M3 was followed by many other M mount cameras released over the next 40 years, most retaining its basic concepts; the most notable innovations since then have been through-the-lens (TTL) metering in the Leica M5 and the digital sensor of the Leica M8. The lenses for the M mount were also introduced in 1954 and were based on the earlier M39 thread-mount designs. Almost all M mount lenses are prime lenses. Leica divides these lenses by maximum aperture (f-number) and distinguishes them by name.

M mount camera bodies: Leica's own bodies comprise film cameras and digital cameras, the latter spanning professional and entry-level models as well as Monochrom, no-display and increased-resolution variants. Bodies from other manufacturers include the Epson R-D1 (Epson), Minolta CLE (Minolta), Hexar RF (Konica), Bessa R2A, R3A, R2M, R3M, R4M and R4A (Cosina Voigtländer), Rollei 35 RF (Rollei Fototechnic), the more recent Zeiss Ikon rangefinder camera (Carl Zeiss AG), Ricoh GXR (Ricoh) and PIXII (Pixii SAS).

M mount lenses: Besides Leica, M mount lenses have been made by Minolta, Carl Zeiss, Cosina Voigtländer and Zenit (Zenit M).
**Lifshitz theory of van der Waals force**

In condensed matter physics and physical chemistry, the Lifshitz theory of van der Waals forces, sometimes called the macroscopic theory of van der Waals forces, is a method proposed by Evgeny Mikhailovich Lifshitz in 1954 for treating van der Waals forces between bodies that does not assume pairwise additivity of the individual intermolecular forces; that is to say, the theory takes into account the influence of neighboring molecules on the interaction between every pair of molecules located in the two bodies, rather than treating each pair independently.

Need for a non-pairwise additive theory: The van der Waals force between two molecules, in this context, is the sum of the attractive or repulsive forces between them; these forces are primarily electrostatic in nature, and in their simplest form might consist of a force between two charges, two dipoles, or between a charge and a dipole. Thus, the strength of the force often depends on the net charge, electric dipole moment, or electric polarizability $\alpha$ of the molecules (see for example the London force), with highly polarizable molecules contributing to stronger forces, and so on. The total force between two bodies, each consisting of many molecules, is in the classical van der Waals theory simply the sum of the intermolecular van der Waals forces, where pairwise additivity is assumed. That is to say, the forces are summed as though each pair of molecules interacts completely independently of its surroundings (see Van der Waals forces between macroscopic objects for an example of such a treatment). This assumption is usually correct for gases, but presents a problem for many condensed materials, as it is known that the molecular interactions may depend strongly on their environment and neighbors. For example, in a conductor, a point-like charge might be screened by the electrons in the conduction band, and the polarizability of a condensed material may be vastly different from that of an individual molecule. In order to correctly predict the van der Waals forces of condensed materials, a theory that takes into account their total electrostatic response is needed.

General principle: The problem of pairwise additivity is completely avoided in the Lifshitz theory, where the molecular structure is ignored and the bodies are treated as continuous media. The forces between the bodies are then derived in terms of their bulk properties, such as dielectric constant and refractive index, which already contain all the necessary information from the original molecular structure. The original Lifshitz 1955 paper proposed this method relying on quantum field theory principles, and is, in essence, a generalization of the Casimir effect from two parallel, flat, ideally conducting surfaces to two surfaces of any material. Later papers by Langbein, Ninham, Parsegian and Van Kampen showed that the essential equations could be derived using much simpler theoretical techniques, an example of which is presented here.

Hamaker constant: The Lifshitz theory can be expressed as an effective Hamaker constant in the van der Waals theory. Consider, for example, the interaction between an ion of charge $Q$ and a nonpolar molecule with polarizability $\alpha_2$ at distance $r$.
In a medium with dielectric constant $\epsilon_3$, the interaction energy between a charge and an electric dipole $p$ is given by

$$U(r) = -\frac{Qp}{4\pi\epsilon_0\epsilon_3 r^2}$$

with the dipole moment of the polarizable molecule given by $p = \alpha_2 E$, where $E$ is the strength of the electric field at distance $r$ from the ion. According to Coulomb's law,

$$E = \frac{Q}{4\pi\epsilon_0\epsilon_3}\,\frac{1}{r^2},$$

so we may write the interaction energy as

$$U(r) = -\frac{Q^2\alpha_2}{(4\pi\epsilon_0\epsilon_3)^2\,r^4}.$$

Consider now how the interaction energy changes if the right-hand molecule is replaced with a medium of density $\rho_2$ of such molecules. According to the "classical" van der Waals theory, the total force is simply the summation over individual molecules. Integrating over the volume of the medium, we might expect the total interaction energy with the charge to be

$$U(D) = -\frac{2\pi Q^2\alpha_2\rho_2}{(4\pi\epsilon_0\epsilon_3)^2}\int_{z=D}^{\infty}\mathrm{d}z\int_{x=0}^{\infty}\mathrm{d}x\,\frac{x}{(z^2+x^2)^2} = -\frac{\pi Q^2\alpha_2\rho_2}{(4\pi\epsilon_0\epsilon_3)^2}\,\frac{1}{D}.$$

But this result cannot be correct, since it is well known that a charge $Q$ in a medium of dielectric constant $\epsilon_3$ at a distance $D$ from the plane surface of a second medium of dielectric constant $\epsilon_2$ experiences a force as if there were an "image" charge of strength $Q' = -Q(\epsilon_2-\epsilon_3)/(\epsilon_2+\epsilon_3)$ at distance $D$ on the other side of the boundary. The force between the real and image charges must then be

$$F(D) = -\frac{Q^2}{(4\pi\epsilon_0\epsilon_3)(2D)^2}\,\frac{\epsilon_2-\epsilon_3}{\epsilon_2+\epsilon_3}$$

and the energy, therefore,

$$U(D) = -\frac{Q^2}{(4\pi\epsilon_0\epsilon_3)\,4D}\,\frac{\epsilon_2-\epsilon_3}{\epsilon_2+\epsilon_3}.$$

Equating the two expressions for the energy, we define a new effective polarizability that must obey

$$\rho_2\alpha_2 = \epsilon_0\epsilon_3\,\frac{\epsilon_2-\epsilon_3}{\epsilon_2+\epsilon_3}.$$

Similarly, replacing the real charge $Q$ with a medium of density $\rho_1$ and polarizability $\alpha_1$ gives an expression for $\rho_1\alpha_1$. Using these two relations, we may restate the theory in terms of an effective Hamaker constant. Specifically, using McLachlan's generalized theory of van der Waals forces, the Hamaker constant for an interaction potential of the form $U(r) = -C/r^6$ between two bodies at temperature $T$ is

$$A = \frac{3}{2}k_BT\sum_{n=0,1,\ldots}^{\infty}{}'\;\frac{\rho_1\alpha_1(i\nu_n)\,\rho_2\alpha_2(i\nu_n)}{\epsilon_0^2\,\epsilon_3^2(i\nu_n)},$$

with $\nu_n = 2\pi n k_BT/h$, where $k_B$ and $h$ are Boltzmann's and Planck's constants respectively, and where the prime indicates that the $n=0$ term is taken with half weight. Inserting our relations for $\rho\alpha$ and approximating the sum as an integral,

$$k_BT\sum_{n=0,1,\ldots}^{\infty} \rightarrow \frac{h}{2\pi}\int_{\nu_1}^{\infty}\mathrm{d}\nu,$$

the effective Hamaker constant in the Lifshitz theory may be approximated as

$$A \approx \frac{3}{4}k_BT\left(\frac{\epsilon_1-\epsilon_3}{\epsilon_1+\epsilon_3}\right)\left(\frac{\epsilon_2-\epsilon_3}{\epsilon_2+\epsilon_3}\right) + \frac{3h}{4\pi}\int_{\nu_1}^{\infty}\mathrm{d}\nu\left(\frac{\epsilon_1(i\nu)-\epsilon_3(i\nu)}{\epsilon_1(i\nu)+\epsilon_3(i\nu)}\right)\left(\frac{\epsilon_2(i\nu)-\epsilon_3(i\nu)}{\epsilon_2(i\nu)+\epsilon_3(i\nu)}\right).$$

We note that the $\epsilon(i\nu)$ are real functions related to measurable properties of the medium; thus, the Hamaker constant in the Lifshitz theory can be expressed in terms of observable properties of the physical system.

Experimental validation: The macroscopic theory of van der Waals forces has many experimental validations. Among the most notable are Derjaguin (1960), Derjaguin, Abrikosova and Lifshitz (1956), and Israelachvili and Tabor (1973), who measured the balance of forces between macroscopic bodies of glass, or glass and mica; Haydon and Taylor (1968), who measured the forces across bilayers by measuring their contact angle; and lastly Shih and Parsegian (1975), who investigated van der Waals potentials between heavy alkali-metal atoms and gold surfaces using atomic-beam deflection.
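To make the final two-term approximation for $A$ concrete, here is a minimal numerical sketch in Python. Only the formula itself is taken from the text; the single-oscillator dielectric model and all of its parameters are illustrative assumptions, not values for any real material.

```python
import numpy as np
from scipy.integrate import quad

# Physical constants (SI units)
k_B = 1.380649e-23    # Boltzmann constant, J/K
h   = 6.62607015e-34  # Planck constant, J*s

def hamaker(eps1, eps2, eps3, T=298.0):
    """Approximate Hamaker constant A (J) from the two-term Lifshitz
    expression, given dielectric functions eps_i(i*nu) on the imaginary
    frequency axis (callables of nu, in Hz)."""
    def delta(ea, eb, nu):
        return (ea(nu) - eb(nu)) / (ea(nu) + eb(nu))
    # Zero-frequency (entropic) term: (3/4) k_B T Delta1(0) Delta2(0)
    A0 = 0.75 * k_B * T * delta(eps1, eps3, 0.0) * delta(eps2, eps3, 0.0)
    # Dispersion term: (3h/4pi) * integral from nu_1 of Delta1 * Delta2
    nu1 = 2.0 * np.pi * k_B * T / h  # first Matsubara frequency nu_1
    integrand = lambda nu: delta(eps1, eps3, nu) * delta(eps2, eps3, nu)
    I, _ = quad(integrand, nu1, np.inf, limit=200)
    return A0 + 3.0 * h / (4.0 * np.pi) * I

# Toy single-oscillator model eps(i*nu) = 1 + (n0^2 - 1)/(1 + (nu/nu_e)^2);
# the refractive indices and absorption frequency below are hypothetical.
def make_eps(n0, nu_e=3.0e15):
    return lambda nu: 1.0 + (n0**2 - 1.0) / (1.0 + (nu / nu_e) ** 2)

eps_body = make_eps(1.5)   # two identical interacting bodies
eps_gap  = make_eps(1.33)  # intervening medium
print(f"A = {hamaker(eps_body, eps_body, eps_gap):.3e} J")
```

Because the two bodies are identical here, both factors in each term are equal, so the sketch returns a positive (attractive) Hamaker constant, in line with the theory's prediction that like bodies in a medium always attract.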
**Pad see ew**

Pad see ew (phat si-io or pad siew, Thai: ผัดซีอิ๊ว, RTGS: phat si-io, pronounced [pʰàt sīːʔíw]) is a stir-fried noodle dish that is commonly eaten in Thailand. It can be found easily among street food vendors and is also quite popular in Thai restaurants around the world. The origins of the dish can be traced to China, from where the noodle stir-frying technique was brought. The dish is prepared in a wok, which allows the black soy sauce added at the end of the cooking process to stick to the noodles for a pronounced caramelizing and charring effect. The dish may look a little burnt, but the charred, smoky flavor is its defining feature. The name of the dish translates to "fried with soy sauce". Variations of the dish can be found in other countries as well: it is very similar to the char kway teow of Malaysia and Singapore and to Cantonese chow fun. It is also similar to rat na (Thai) and lard na (Lao); the difference is that pad see ew is normally stir-fried dry and made with beef, while the aforementioned dishes are served in a thickened sauce and generally have a lighter taste. Pad see ew is made with light soy sauce (si-io khao, similar to regular soy sauce), dark soy sauce (si-io dam, which has a more syrupy consistency), garlic, broad rice noodles called kuaitiao sen yai in Thai, Chinese broccoli, egg, and tofu or some form of thinly sliced meat, commonly pork, chicken, beef, shrimp, or mixed seafood. It is generally garnished with ground white pepper. Pad see ew is sometimes also called kuaitiao phat si-io, which reflects the general practice of using fresh flat rice noodles as the main ingredient. However, thin rice noodles may also be used, in which case the dish is called sen mi phat si-io. Egg noodles are also used in Southern Thailand, where the dish is called mi lueang phat si-io (mi lueang meaning "yellow noodle").
**CleanBrowsing**

CleanBrowsing is a free public DNS resolver with content filtering, founded by Daniel B. Cid and Tony Perez. In addition to standard DNS over port 53, it supports DNS over TLS (DoT) on port 853 and DNS over HTTPS (DoH) on port 443. CleanBrowsing filters can be used by parents to protect their children from adult and inappropriate content online.

Services: CleanBrowsing offers three standard filters, each reachable at its own anycast IP addresses:
Family Filter: blocks access to adult content, proxies and VPNs, and phishing and malicious domains; it also enforces Safe Search on Google, Bing and YouTube.
Adult Filter: less restrictive than the Family Filter; blocks only adult content and malicious/phishing domains.
Security Filter: blocks access to malicious and phishing domains.
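As a rough sketch of how a client queries such a resolver over the transports mentioned above, here is a Python example using the dnspython package. The resolver IP below is a documentation-range placeholder, to be replaced with the anycast address of the chosen CleanBrowsing filter from the service's own setup pages.

```python
# Requires: pip install dnspython
import dns.message
import dns.query

# Placeholder (TEST-NET-3 documentation range, not a real resolver):
# substitute the anycast IP of the CleanBrowsing filter you want to use.
RESOLVER_IP = "203.0.113.53"

query = dns.message.make_query("example.com", "A")

# Plain DNS on port 53
reply = dns.query.udp(query, RESOLVER_IP, timeout=3)
print(reply.answer)

# DNS over TLS on port 853 (the DoT transport mentioned above)
reply_tls = dns.query.tls(query, RESOLVER_IP, port=853, timeout=3)
print(reply_tls.answer)
```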
**Zussmanite**

Zussmanite is a hydrated iron-rich silicate mineral with the chemical formula K(Fe2+,Mg,Mn)13[AlSi17O42](OH)14. It occurs as pale green crystals with perfect cleavage.

Discovery and occurrence: It was first described in 1960 by Stuart Olof Agrell from the Laytonville quarry, Mendocino County, California. Zussmanite is named in honor of Jack Zussman (born 1924), Head of the University of Manchester's Department of Geology and co-author of Rock-Forming Minerals. In the Laytonville quarry, zussmanite occurs in metamorphosed shales, siliceous ironstones and impure limestones of the Franciscan Formation. The locality is one of high pressure and low temperature where blueschist-facies metamorphic rocks occur; it is also the locality in which deerite and howieite were first discovered. Localities of this type also produce micas, which have a structure similar to that of zussmanite. The conditions there range from high to ultrahigh pressure at low temperature. Such metamorphic types are usually distinguished by their P/T ratio rather than by the separate ranges of pressure and temperature (Miyashiro 1973); the three principal types are the low P/T, medium P/T, and high P/T types. The high P/T type, referred to as glaucophanic metamorphism, is characterized by the presence of glaucophane and forms glaucophane schists (Miyashiro 1973). Glaucophane schists, commonly referred to as blueschist facies, result from metamorphism of basaltic rocks and are usually located in folded geosynclinal terranes (Deer, Howie & Zussman 1993). They are characterized by low-temperature (100–250 °C), high-pressure (4–9 kbar) metamorphism (Deer, Howie & Zussman 1993). Zussmanite is commonly found with stilpnomelane and quartz, usually forming abundant porphyroblasts up to 1 mm in size, in the newly discovered locality in southern central Chile (Massonne et al. 1998).

Composition: This blueschist-facies phyllosilicate occurs as a result of subduction of oceanic crustal rocks and oceanic-continental margin sediments along convergent plate boundaries. The ideal formula for zussmanite is KFe13Si17AlO42(OH)14, with possible substitution of sodium (Na) for potassium (K) in extremely small amounts (Lopes-Vieira & Zussman 1969). The possible substitutes for iron (Fe2+) are mainly magnesium (Mg), with trace amounts that may include manganese (Mn), aluminium (Al), ferric iron (Fe3+) and titanium (Ti) (Lopes-Vieira & Zussman 1969). Zussmanite was discovered together with deerite and howieite, two other new minerals from the Franciscan Formation, Mendocino County, California. Deerite and howieite have since been found at other locations, while zussmanite has only been found at this type locality, making it a rare mineral. Experiments have shown that zussmanite is stable up to 600 °C at pressures between 10 kbar and 30 kbar and that it breaks down to orthoferrosilite, biotite and quartz: KFe13[AlSi17O42](OH)14 (zussmanite) yields 10 FeSiO3 (orthoferrosilite) + 1⁄2 K2Fe6Si6Al2O20(OH)4 (biotite) + 4 SiO2 (quartz) + 6 H2O (water) (Dempsey 1981). The manganese analogue of zussmanite, coombsite, has been found in manganese-rich siliceous rocks in the Otago Schist in New Zealand.

Structure: The space group of zussmanite is R3̄, with hexagonal cell parameters a = 11.66 Å and c = 28.69 Å (Agrell, Bown & McKie 1965).
The structure of zussmanite contains continuous sheets of rhombohedrally stacked layers of Fe–O octahedra parallel to (0001) (Lopes-Vieira & Zussman 1967), to either side of which are attached (Si,Al)–O tetrahedra in a way that produces a rhombohedral unit cell (Lopes-Vieira & Zussman 1969). These layers are linked to one another by potassium (K) atoms and also by three-membered rings of tetrahedra that share oxygens with the six-membered rings (Lopes-Vieira & Zussman 1967). Zussmanite's structure has a close affinity to that of the trioctahedral micas, which have a layer of Fe–O octahedra sandwiched between inward-pointing tetrahedra. It differs from the micas in that its (Si,Al):O ratio is 9:21, which results in a sharing coefficient of 1.83, as compared with 2:5 and 1.75 for micas, and 1:2 and 2.0 for framework silicates (Lopes-Vieira & Zussman 1969). The mean Fe–(O,OH) distance is 2.10 Å in the first octahedron, 2.14 Å in the second and 2.17 Å in the third; the mean Si–O distances are 1.61 Å in the first tetrahedron, 1.61 Å in the second and 1.65 Å in the third (Lopes-Vieira & Zussman 1969). The six-membered rings are not directly linked to one another, which allows adjustment by outward tilting of all tetrahedra, as opposed to many micas, where rotations and tilts are used to achieve the larger dimensions of the octahedral layer. The flattening of the octahedral layer perpendicular to the layer is pronounced in zussmanite because of shared and unshared edges; this flattening could be due to the tendency of shared oxygens to come closer, which shields iron (Fe) atoms from neighboring iron (Fe) atoms.

Physical properties: Zussmanite occurs as pale green tabular crystals with perfect cleavage. It tends to be uniaxial and weakly pleochroic, with a specific gravity of 3.146 (Agrell, Bown & McKie 1965). Other zussmanite found at Laytonville, in fine-grained samples, is assumed to be a late-stage metamorphic product. The perfect cleavage is a result of the continuous sheets of (Fe,Mg)–(O,OH) octahedra parallel to (0001). The optical properties were determined on virtually pure zussmanite separated from thin sections, approximately 200 micrometers thick, under a polarizing microscope by means of a microdrill. The indices of refraction compare well with those determined by Agrell, Bown & McKie (1965) for the chemically different zussmanite from the Laytonville quarry (Massonne et al. 1998).
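As a quick arithmetic check of the cell parameters quoted in the Structure section, the hexagonal cell volume follows from the standard formula V = (√3/2) a² c; the short Python sketch below simply evaluates it (the formula is standard crystallography, not taken from the cited papers).

```python
# Hexagonal unit-cell volume from the parameters quoted above:
# V = (sqrt(3)/2) * a^2 * c
import math

a, c = 11.66, 28.69  # Angstroms (Agrell, Bown & McKie 1965)
V = math.sqrt(3.0) / 2.0 * a * a * c
print(f"V = {V:.0f} cubic Angstroms")  # ~3378 A^3
```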
**Isonocardicin synthase**

In enzymology, an isonocardicin synthase (EC 2.5.1.38) is an enzyme that catalyzes the chemical reaction

S-adenosyl-L-methionine + nocardicin E ⇌ 5'-methylthioadenosine + isonocardicin A

Thus, the two substrates of this enzyme are S-adenosyl-L-methionine and nocardicin E, whereas its two products are 5'-methylthioadenosine and isonocardicin A. This enzyme belongs to the family of transferases, specifically those transferring aryl or alkyl groups other than methyl groups. The systematic name of this enzyme class is S-adenosyl-L-methionine:nocardicin-E 3-amino-3-carboxypropyltransferase. This enzyme is also called nocardicin aminocarboxypropyltransferase.
**Python Server Pages**

Python Server Pages (PSP) is a name used by several different implementations of server-side script engines for creating dynamically generated web pages by embedding Python in HTML. For example, an implementation of Python Server Pages was released with mod_python 3.1 in 2004. Spyce, which also claims the phrase "Python Server Pages", was first released in 2002. The Webware for Python suite also contains an implementation of Python Server Pages, released as early as 2000. An earlier tool with a similar function, also called Python Server Pages but based on Java and JPython, was first released in 1999. PSP was among the earliest forms of web development support in Python and has long since been surpassed in popularity by frameworks such as Django and Flask.
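As a rough illustration of the embedding style, a minimal page in the `<% %>` code and `<%= %>` expression delimiter syntax used by mod_python's PSP implementation might look like the following; other implementations use their own delimiters and semantics, and the page content here is invented for illustration.

```html
<html>
  <body>
    <%
    # Python statements run on the server when the page is rendered
    import time
    greeting = "Hello from PSP"
    %>
    <h1><%= greeting %></h1>
    <p>Rendered at <%= time.ctime() %> on the server.</p>
  </body>
</html>
```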
**Pitch (baseball)**

In baseball, the pitch is the act of throwing the baseball toward home plate to start a play. The term comes from the Knickerbocker Rules. Originally, the ball had to be thrown underhand, much like "pitching in horseshoes"; overhand pitching was not allowed in baseball until 1884. The biomechanics of pitching have been studied extensively. The phases of pitching include the windup, early cocking, late cocking, early acceleration, late acceleration, deceleration, and follow-through.

Pitchers throw a variety of pitches, each of which has a slightly different velocity, trajectory, movement, hand position, wrist position and/or arm angle. These variations are introduced to confuse the batter and ultimately aid the defensive team in getting the batter or baserunners out. To obtain variety, and therefore enhance defensive baseball strategy, the pitcher manipulates the grip on the ball at the point of release. Variations in the grip cause the seams to catch the air differently, thereby changing the trajectory of the ball and making it harder for the batter to hit. The selection of which pitch to use can depend on a wide variety of factors, including the type of hitter being faced, whether there are any base runners, how many outs have been made in the inning, and the current score. Pitchers may bounce their pitches in the dirt before they reach the batter, but these pitches are called balls even if they pass through the strike zone.

Signaling: The responsibility for selecting the type of pitch traditionally belongs to the catcher, who gives hand signals to the pitcher with his fingers, usually one finger for the fastball or the pitcher's best pitch; the pitcher can ask for another selection by shaking his head. Alternatively, the manager or a coach relays the pitch selection to the catcher via secret hand signals, to prevent the opposing team from having the advantage of knowing what the next pitch will be.

Fastballs: The fastball is the most common pitch in baseball, and most pitchers have some form of fastball in their arsenal. Most pitchers throw four-seam fastballs. The fastball is essentially a pitch thrown very fast, generally as hard as a given pitcher can throw while maintaining control. Some variations involve movement or breaking action; others are simply straight, high-speed pitches. Proper mechanics are especially important when throwing the fastball, because they increase the chance of getting the ball to its highest velocity, making the pitch difficult for the opposing player to hit. The cut fastball, split-finger fastball, and forkball are variations on the fastball with extra movement, and are sometimes called sinking fastballs because of their trajectories. The most common fastball pitches are the cutter, four-seam fastball, sinker, split-finger fastball, and two-seam fastball.

Breaking balls: Well-thrown breaking balls have movement, usually sideways or downward. The ball moves because its spin changes the pressure of the surrounding air (the Magnus effect), so the ball is continually deflected along the path of least resistance, which itself keeps changing. For example, the spin from a properly thrown slider (thrown by a right-handed pitcher) results in lower air pressure on the pitcher's left side, making the ball move to the left (from the pitcher's perspective). The goal is usually to make the ball difficult to hit by confusing the batter.
Most breaking balls are considered off-speed pitches. The most common breaking pitches are the 12–6 curveball, curveball, knuckle curve, screwball, slider, and slurve.

Changeups: The changeup is an off-speed pitch, usually thrown to look like a fastball but arriving much more slowly at the plate. Its reduced speed coupled with its deceptive delivery is meant to confuse the batter's timing. It is thrown with the same motion as a fastball, but held farther back in the hand, which makes it leave the hand more slowly while retaining the look of a fastball. A changeup is generally thrown 8–15 miles per hour slower than a fastball. If thrown correctly, the changeup confuses the batter because the human eye cannot discern that the ball is coming significantly slower until it is around 30 feet from the plate. For example, a batter who swings as if at a 90 mph fastball when the pitch actually arrives at 75 mph is swinging too early to hit the ball well, which makes the changeup very effective. The most common changeups are the circle changeup, forkball, fosh, palmball, straight changeup, and Vulcan changeup.

Other pitches: Other pitches which are or have been used in baseball include the gyroball; junk pitches such as the Eephus pitch, knuckleball, and shuuto; illegal pitches such as the beanball, emery ball, shine ball, and spitball; and purpose pitches such as the brushback pitch, pickoff, and pitchout.

Pitching deliveries: The most common pitching delivery is the three-quarters delivery. Other deliveries include the submarine (underhand) and sidearm deliveries. There is also the crossfire, which only works with a sidearm delivery. A pickoff move is the motion the pitcher goes through when making a pickoff attempt.

Pitching positions: There are two legal pitching positions: the windup, and the set, which is often referred to as the stretch. Typically, pitchers pitching from the set use a high leg kick, but they may instead release the ball more quickly by using the slide step.
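Returning to the changeup example above, the timing claim can be checked with a little arithmetic: over the regulation 60 feet 6 inches from the pitching rubber to home plate (drag and the release point in front of the rubber ignored for simplicity), the speed gap translates into roughly a tenth of a second, as this Python sketch shows.

```python
# Toy timing comparison for the changeup example: time for a pitch to
# cover the 60 ft 6 in between the rubber and home plate, ignoring drag
# and the fact that the release point is in front of the rubber.
MOUND_TO_PLATE_FT = 60.5
MPH_TO_FTPS = 5280.0 / 3600.0  # 1 mph = about 1.467 ft/s

def time_to_plate(mph: float) -> float:
    return MOUND_TO_PLATE_FT / (mph * MPH_TO_FTPS)

for mph in (90, 75):
    print(f"{mph} mph -> {time_to_plate(mph) * 1000:.0f} ms to the plate")
# 90 mph arrives in ~458 ms, 75 mph in ~550 ms: a swing timed for the
# fastball is roughly a tenth of a second early against the changeup.
```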
**Chillcuring**

Chillcuring is a grain-ventilating process, especially for fresh-harvested shelled corn.

Process: As described in the inventions of Sylvester L. Steffen, chillcuring is an electrically driven ventilating process that facilitates the after-ripening of bulk-stored seeds. Ventilation is controlled by monitoring the wet-bulb temperature of the air around the grain (a measure of evaporative cooling), by which the grain is brought to equilibrium moisture and temperature with the atmospheric air. Seed dormancy is better maintained at cooler (chill) atmospheric temperatures, and grain weight and seed vigor are better preserved. The after-ripening of seeds is a biochemical process of carbohydrate/protein stabilization associated with the release of water from the seeds. Water, like carbon dioxide, is a good absorber of infrared light. The process uses "grainlamps", which produce infrared light that is readily absorbed by water vapor in the air under high-humidity conditions.

History: The Steffen patents were at issue in federal lawsuits in Minnesota, Indiana and Iowa, and the validity of the Steffen process patents was upheld. The chillcuring process was marketed from the 1960s into the late 1990s by Harvestall Industries, Inc., an agribusiness of the late Vincent B. Steffen, a former Speaker of the Iowa House of Representatives.
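The patents' actual control logic is not reproduced here; purely as a hypothetical sketch of the general idea of wet-bulb-gated ventilation described above, a threshold test might look like the following, where the sensor interface, setpoint and deadband are invented for illustration.

```python
# Hypothetical sketch of a wet-bulb-setpoint ventilation check of the
# general kind described above; the deadband and the reading structure
# are illustrative assumptions, not taken from the Steffen patents.
from dataclasses import dataclass

@dataclass
class AirReading:
    dry_bulb_c: float  # ambient dry-bulb temperature, deg C
    wet_bulb_c: float  # wet-bulb temperature (evaporative cooling), deg C

def should_ventilate(air: AirReading, grain_temp_c: float,
                     deadband_c: float = 1.0) -> bool:
    """Run the fans only when the incoming air's wet-bulb temperature is
    far enough below the grain temperature to move the bulk toward
    equilibrium moisture and temperature with the atmosphere."""
    return air.wet_bulb_c < grain_temp_c - deadband_c

# Example: 10 C wet-bulb air against 18 C grain -> ventilate (True)
print(should_ventilate(AirReading(14.0, 10.0), grain_temp_c=18.0))
```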
**Cytoskeleton associated protein 2 like**

Cytoskeleton associated protein 2 like is a protein that in humans is encoded by the CKAP2L gene.

Function: The protein encoded by this gene is thought to be a mitotic spindle protein important to neural stem or progenitor cells. Mutations in this gene have been associated with spindle organization defects, including mitotic spindle defects, lagging chromosomes, and chromatin bridges. There is evidence that mutations in this gene are associated with Filippi syndrome, characterized by growth defects, microcephaly, intellectual disability, facial feature defects, and syndactyly. There is a pseudogene of this gene on chromosome 20. Alternative splicing results in multiple transcript variants. [provided by RefSeq, Jan 2015]
**Personality changes**

Personality change refers to the different forms of change in various aspects of personality: how we experience things, how our perception of experiences changes, and how we react in situations. An individual's personality may stay somewhat consistent throughout their life; still, more often than not, everyone undergoes some form of change to their personality in their lifetime. Personality refers to individual differences in characteristic patterns of thinking, feeling, and behaving. Personality is like a puzzle: each piece can come from internal or external factors, such as events, circumstances, genetics, or life experiences, and together the pieces make up the personality as a whole.

Every person has their own individual differences in particular personality characteristics that separate them from others. The overall study of personality focuses on two broad areas: the first is understanding individual differences in personality characteristics; the second is understanding how the various parts of a person come together as a whole. This article explains why an individual's personality may change: how social interactions shape us, how our personality evolves with age, how our experiences can influence us, and how significant events (especially traumatic events) may alter our perceptions.

Each person has their own unique personality, and as a result the many differences and changes that occur may be confusing. Even psychologists are still studying and researching to fully understand what personality means and why personality changes. The development of personality is often dependent on the stage of life a person is in: most development occurs in the earlier stages of life, and personality becomes more stable as one grows into adulthood. While still uncertain, research suggests that genetics play a role in the change and stability of certain personality traits, and that environmental sources affect personality too; the debate over nature versus nurture has pervaded the field of psychology since its beginning. Culture is also a large factor in personality trait differences.

Definition of personality: Personality, one's characteristic way of feeling, behaving and thinking, is often conceptualized as a person's standing on each Big Five personality trait (extraversion, neuroticism, openness to experience, agreeableness and conscientiousness). A person's personality profile is thus gauged from their standing on five broad concepts which predict, among other life outcomes, behavior and the quality of interpersonal relationships. Initially, it was believed that one's Big Five profile was static and dichotomous, in that one stood at one extreme of each trait or the other; for example, people were typically categorized as introverted or extraverted. Personality was therefore assessed in terms of generalities or averages. Noticing the strong inconsistencies in how people behaved across situations, some psychologists dismissed personality as nonexistent. This school of thought attributes human behavior to environmental factors, relegating individual differences to situational artifacts and contesting the existence of individual predispositions. It was led by situationists like Walter Mischel (1968), whose contention held that personality was a fictitious concept.
For them, the discrepancies observed across one's behaviors were evidence that inter-individual differences did not exist. Some aspects of the situationist perspective even suggest that all human beings are the same and that the differences we observe are simply illusory byproducts of the environment. However, personality experts (sometimes referred to as personologists) soon integrated these inconsistencies into their conceptualization of personality. They modified the old, more monolithic construct by measuring how people differ across situations. Their new methods of personality assessment describe fluctuations in personality characteristics that are consistent and predictable for each person, based on their predispositions and the environment they are in. Some work suggests that people can adopt different levels of a personality dimension as the social situation and time of day change: someone is not conscientious all of the time, but can be conscientious at work and much less so at home. This work also suggests that intrapersonal variation on a trait can be even larger than interpersonal variation; extraversion, for example, varies more within a person than across individuals. This work was based on individual self-ratings made during the day across a long period of time, which allowed researchers to assess moment-to-moment and day-to-day variations in personality attributes.

The impact of social roles: In addition, social roles (e.g. employee) have been identified as potential sources of personality change. Researchers have found strong correspondences between the demands of a social role and one's personality profile. If the role requires that the person enacting it be conscientious, their standing on this trait is more likely to be high; conversely, once they leave that role or take on another which entails less conscientiousness, they will manifest a lower standing on that trait. Longitudinal research demonstrates that people's personality trajectories can often be explained by the social roles they adopted and relinquished throughout their life stages. Thus social roles are often studied as fundamental predictors of personality: the goals associated with them elicit the appropriation of certain personality profiles by the people enacting them. For example, employees judged effective by their peers and superiors are often described as conscientious. Personality also changes through life stages. This may be due to physiological changes associated with development, but also to experiences that impact behavior. Adolescence and young adulthood have been found to be prime periods of personality change, especially in the domains of extraversion and agreeableness. It has long been believed that personality development is shaped by life experiences that intensify the propensities that led individuals to those experiences in the first place, which is known as the corresponsive principle. Subsequent research endeavors have integrated these findings in their methods of investigation. Researchers distinguish between mean-level and rank-order changes in trait standing during old age; their study of personality trajectories is thus contingent on time and on age considerations. Mõttus, Johnson and Deary (2012) found that instability engendered by aging does not necessarily affect one's standing within an age cohort. Hence, fluctuations and stability coexist, so that one changes relative to one's former self but not relative to one's peers.
Similarly, other psychologists have found that Neuroticism, Extraversion (only in men), and Openness decrease with age after 70, while Conscientiousness and Agreeableness increase with age (the latter only in men). Moreover, they suggest that there is a decline in each trait after the age of 81.

Inconsistency as a trait: Personality inconsistency has become such a prevalent consideration for personologists that some even conceptualize it as a predisposition in itself. Fleisher and Woehr (2008) suggest that consistency across the Big Five is a fairly stable construct that contributes to the predictive validity of personality measures. Hence, inconsistency is quantifiable much like a trait, and it both indexes and enhances the fit of psychological models. To accommodate the inconsistency demonstrated on personality tests, researchers developed the frame of reference (FOR) principle. A frame of reference is the set of conjectures an individual or group of individuals uses to judge ideas, actions, and experiences and to create meaning. FORs include beliefs, values, schemas, preferences and culture; they can lead to prejudice, biases, and stereotypes because of the limited view an individual has. According to this theory, people tend to think of their personality in terms of a specific social context when they are asked to rate it: whichever environment is cognitively salient at the time of the personality measurement will influence the respondent's ratings on a trait measure. If, for example, a person is thinking in terms of their student identity, then the personality ratings they report will most likely reflect the profile they espouse in the context of student life. Accounting for the FOR principle aims at increasing the validity of personality measures: the predictive validity of personality measures which specify a social context is much higher than that of measures which take a more generic approach. This point is substantiated by another body of work suggesting that FOR instructions moderated the link between extraversion and openness scores and manager ratings of employee performance. This research thus recognizes that the importance of intrapersonal fluctuations in personality is context-specific and not necessarily generalizable across social domains and time. There are several different FORs: compensatory; rehabilitative; biomechanical; psychoanalytic; psychodynamic, which is based on Freud's theories of interpersonal relationships and unconscious drives; developmental; behavioral; cognitive-behavioral; psychospiritual integration, which stresses the nature of spirituality, its expression in professional and work-related behaviors, and how it affects an individual's health and well-being, with six elements (becoming, meaning, being, centeredness, connectedness, and transcendence); occupational adaptation; social participation; and acquisitional.

Process of change: If "personality... is one of the strongest and most consistent predictors of subjective well-being", does personality then not change? In fact, "personality does change". But what makes that happen?
Most people, in their lifetime, will experience an event that opens their eyes to a new understanding of the world. For example, someone who is carefree and happy might become more serious and stern after experiencing abuse in a relationship, while another who is serious and stern might become happier and more interested in life after finding a religion that provides them with closure and answers. Each day of life brings events and situations that elicit a response from those who experience them, and sometimes these events can change who we are and how we think at the core. Research has also found a correlation between being multilingual and personality, specifically in how one may change personality based on the language currently being spoken. Those who are raised bilingual, or who lived for a number of years in a foreign country and learned its language, not only experience personality change but often adopt different personalities depending on the language they are speaking; these changes tend to track the cultural norms of each language's place of origin. A study published in 2012 found that "personality does change and that the extent to which personality changes is comparable to other characteristics, such as income, unemployment and marital status". Some of the biggest concerns faced in life are exactly those factors: how much money one makes (income), whether one has a job (unemployment), and whether one has a lifelong companion (marital status). These situations can lead to bigger, more complex ones: if one seeks to be married but is not, one may become cold; if one has no job but then gets hired somewhere, one may become grateful and filled with hope. When positive changes happen, "personality... meaningfully predicts changes to life satisfaction". Simply put, when one experiences a personality change, it can strongly shape how that person then feels about life.

Change over a lifetime: There are two very specific types of change that researchers tend to focus on: rank-order change and mean-level change. A rank-order change is a change in an individual's standing on a personality trait relative to other individuals; such changes do not occur very often. A mean-level change is an absolute change in the individual's level of a certain trait over time. Longitudinal research shows that mean-level change does occur, though some traits tend to change while others stay stable. During adolescence there are rapid changes in hormones, societal pressures, and environmental factors, among other things, which theoretically factor into significant personality changes as one progresses through adolescence. As a person progresses through adulthood, their personality becomes more stable and predictable because they establish patterns of thinking, behaving, and feeling. Personality does not stop changing at a specific age. Biological and social transitions in life may also be a factor for change: biological transitions are stages like puberty or giving birth for the first time, while social transitions might be changes in social roles, like becoming a parent or working at a first job. These life transitions do not necessarily cause change, but they may be reasons for change. As humans, we do not adapt just in our bodies; our minds also change in order to thrive in our environment.
One theory says that whether or not these life transitions cause personality change depends on whether the transition was expected based on age or was unforeseen. Expected events cause personality change because those events have common scripts, whereas unexpected events give prominence to the traits that already exist in the individual. Historical context also affects personality change: major life events can lead to changes in personality that persist for more than a decade. A longitudinal study that followed women over 30 years found that they showed increases in individualism, which may have been due to the changes occurring in their country at the time.

Stressful life events and trauma: Negative life events, long-term difficulties, and deteriorating life quality all predict small but persistent increases in neuroticism, while positive life events and improving life quality predict small but persistent decreases in neuroticism. There appears to be no point during the lifespan at which neuroticism is immutable, which is known as the plasticity principle. While an extreme case, traumatic brain injury can impact a person's personality, with effects that may last the rest of their life.

Mechanisms of change: There are multiple ways for an individual's personality to change. Individuals change their behavior based on the ideas in their environment that carry rewards and punishments. Some of these ideas are implicit, like social roles: the individual changes his or her personality to fit a social role if doing so is favorable. Other ideas are more explicit, like a parent trying to change a child's behavior. An individual may also decide to actively try to change his or her own behavior or personality after reflecting on his or her own actions. Therapy involves the same type of introspection: the individual, along with the therapist, identifies the behaviors that are inappropriate and then self-monitors in order to change them. Eventually the individual internalizes the desired behavior, and that trait generalizes to other areas of the individual's life. Personality change also occurs when individuals observe the actions of others: individuals may mimic the behaviors of others and then internalize those behaviors, and once internalized, the behaviors are said to be part of that person's personality. Individuals also receive feedback from other individuals or groups about their own personality. This is a driving force of change, because the individual has social motivations to change his or her personality; people often act a certain way based on the popular or majority opinion of the people they are around. For example, a girl who likes country music may say she hates it when she learns that all her peers dislike it. It has also been shown that major positive and negative life events can predict changes in personality. Some of the largest changes are observed in individuals with psychiatric or neurodegenerative disorders, such as Alzheimer's disease and related dementias: a meta-analysis found consistent evidence that large increases in neuroticism and large declines on the other major personality traits occur in individuals with dementia.

Meditation: Studies have shown that mindfulness-meditation therapies have a positive effect on personality maturity.
Cognitive behavioral therapy: Cognitive behavioral therapy has been tested and shown to be effective in the treatment of adults with anxiety disorders.

Psilocybin therapy: Following psilocybin therapy, one study reports that Neuroticism scores dropped substantially while Extraversion increased.

The Big Five personality traits: The Big Five personality traits are often used to measure change in personality, and there is mean-level change in the Big Five traits between ages 10 and 65. The trends seen in adulthood differ from those seen in childhood and adolescence. Some research suggests that during adolescence rank-order change does occur, so personality is relatively unstable in that period; gender differences also appear before adulthood. Conscientiousness drops from late childhood to adolescence, but then picks back up from adolescence into adulthood. A meta-analysis by Melissa C. O'Connor and Sampo V. Paunonen ("Big Five Personality Predictors of Post-Secondary Academic Performance", 2006) showed that "... conscientiousness, in particular, [is] most strongly and consistently associated with academic success". Agreeableness also drops from late childhood to adolescence and then picks back up from adolescence into adulthood. Neuroticism shows different trends for males and females in childhood and adolescence: for females, Neuroticism increases from childhood to adolescence, then levels off from adolescence into adulthood and continues the adult trend of decreasing; males tend to decrease gradually in Neuroticism from childhood through adolescence and into adulthood. Extraversion drops from childhood to adolescence and then does not change significantly. Openness to experience also shows different trends for the two genders: females tend to decrease in Openness to experience from childhood to early adulthood and then increase gradually throughout adulthood, while males tend to decrease in Openness to experience from childhood to adolescence, after which it increases through adulthood. In the same study, O'Connor and Paunonen found that "Openness to Experience was sometimes positively associated with scholastic achievement...". In adulthood, Neuroticism tends to decrease, while Conscientiousness and Agreeableness tend to increase; Extraversion and Openness to experience do not seem to change much during adulthood. Cross-cultural research shows that German, British, Czech, and Turkish people display similar trends in these personality traits, and similar trends seem to exist in other countries. A study by Deborah A. Cobb-Clark and Stefanie Schurer ("The Stability of Big-Five Personality Traits", 2011) showed that "On average, individuals report slightly higher levels of agreeableness, emotional stability, and conscientiousness than extraversion and openness to experience. [On top of that], women report higher scores on each trait except for openness to experience". For clarification, openness to experience can be referred to simply as openness; it is often seen as one's willingness to embrace new things, new ideas, and new activities. The Big Five personality traits can also be broken down into facets, and different facets of each personality trait are often correlated with different behavioral outcomes. Breaking the personality traits down into facets is difficult, and there is not yet a consensus on how to do it.
However, it is important to look at change in facets over a lifetime separately from change in the broad traits, because different facets of the same trait show different trends. For example, openness to values decreases substantially with age, while openness to aesthetics is more stable. Neuroticism can be broken into the two facets of anxiety and depression. Anxiety follows the same trend as Neuroticism for both males and females: for females, anxiety increases from childhood to adolescence, levels out in emerging adulthood, and then starts to decrease into and throughout middle age; anxiety in males tends to decrease from late childhood through adulthood. Depression (not clinical depression, but rather susceptibility to negative affect) shows two peaks in females, who tend to have higher levels of this kind of depression in adolescence and then again in early adulthood, though depression does have a negative trend through adulthood. For males, depression tends to increase from childhood to early adulthood and then to decrease slightly through middle age. Four facets accompany Extraversion: social self-esteem, liveliness, social boldness, and sociability. Social self-esteem, liveliness, and social boldness start to increase during the mid-teens and continue increasing throughout early adulthood and into late adulthood. Sociability follows a different trend: it is fairly high during the early teens but tends to decrease in early adulthood and then stabilize around the age of 39.

Late life changes: Although there is debate surrounding whether personality can change in the late stages of life, more evidence is being discovered about how environmental factors affect people of all ages. Changes in health are regarded as an influential source of personality stability and change. Across multiple facets of health, including cognitive, physical, and sensory functioning, older adults' ability to maintain their everyday routine and lifestyle is challenged. There are notable findings of reverse trends in maturity-related traits, such as increases in neuroticism and declines in conscientiousness. The debate in this area mainly revolves around whether the health consequences of old age can be linked to changes in traits and whether these changes can, in turn, impair health and functioning.
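To make the rank-order versus mean-level distinction from the "Change over a lifetime" section concrete, the following toy Python simulation (entirely synthetic numbers) shows how a whole cohort can shift upward on a trait while everyone's relative standing stays essentially fixed.

```python
# Toy illustration (synthetic data) of the two change types defined under
# "Change over a lifetime": mean-level change is a shift in the group
# average of a trait; rank-order change is a reshuffling of who is high
# or low relative to peers, gauged here by a rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
t1 = rng.normal(3.0, 0.5, size=200)              # trait scores at time 1
t2 = t1 + 0.4 + rng.normal(0.0, 0.1, size=200)   # everyone rises ~0.4

mean_level_change = t2.mean() - t1.mean()
rank_order_stability, _ = spearmanr(t1, t2)

print(f"mean-level change:    {mean_level_change:+.2f}")
print(f"rank-order stability: {rank_order_stability:.2f}")
# A clear mean-level increase coexists with near-perfect rank-order
# stability: people change relative to their former selves, not peers.
```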
**Rayleigh–Ritz method**

The Rayleigh–Ritz method is a direct numerical method of approximating eigenvalues, originated in the context of solving physical boundary value problems and named after Lord Rayleigh and Walther Ritz. It is used in all applications that involve approximating eigenvalues and eigenvectors, often under different names. In quantum mechanics, where a system of particles is described using a Hamiltonian, the Ritz method uses trial wave functions to approximate the ground state eigenfunction with the lowest energy. In the finite element method context, mathematically the same algorithm is commonly called the Ritz–Galerkin method. The Rayleigh–Ritz method or Ritz method terminology is typical in mechanical and structural engineering to approximate the eigenmodes and resonant frequencies of a structure.

Naming and attribution: The name Rayleigh–Ritz is debated against the plain Ritz method, after Walther Ritz, since the numerical procedure was published by Walther Ritz in 1908-1909. According to A. W. Leissa, Lord Rayleigh wrote a paper in 1911 congratulating Ritz on his work, but stating that he himself had used Ritz's method in many places in his book and in another publication. This statement, although later disputed, and the fact that the method in the trivial case of a single vector results in the Rayleigh quotient, make the arguable misnomer persist. According to S. Ilanko, citing Richard Courant, both Lord Rayleigh and Walther Ritz independently conceived the idea of utilizing the equivalence between boundary value problems of partial differential equations on the one hand and problems of the calculus of variations on the other for numerical calculation of the solutions, by substituting for the variational problems simpler approximating extremum problems in which a finite number of parameters need to be determined; see the article Ritz method for details. Ironically for the debate, the modern justification of the algorithm drops the calculus of variations in favor of the simpler and more general approach of orthogonal projection, as in the Galerkin method named after Boris Galerkin, thus also leading to the Ritz–Galerkin naming.

For matrix eigenvalue problems: In numerical linear algebra, the Rayleigh–Ritz method is commonly applied to approximate the eigenvalue problem for a matrix $A \in \mathbb{C}^{N \times N}$ of size $N$ using a projected matrix of smaller size $m < N$, generated from a given matrix $V \in \mathbb{C}^{N \times m}$ with orthonormal columns. The matrix version of the algorithm is the simplest:

1. Compute the $m \times m$ matrix $V^*AV$, where $V^*$ denotes the complex-conjugate transpose of $V$.
2. Solve the eigenvalue problem $V^*AV y_i = \mu_i y_i$.
3. Compute the Ritz vectors $\tilde{x}_i = V y_i$ and the Ritz values $\tilde{\lambda}_i = \mu_i$.
4. Output the approximations $(\tilde{\lambda}_i, \tilde{x}_i)$, called Ritz pairs, to eigenvalues and eigenvectors of the original matrix $A$.

If the subspace with the orthonormal basis given by the columns of the matrix $V \in \mathbb{C}^{N \times m}$ contains $k \le m$ vectors that are close to eigenvectors of the matrix $A$, the Rayleigh–Ritz method above finds $k$ Ritz vectors that approximate these eigenvectors well. The easily computable quantity $\|A\tilde{x}_i - \tilde{\lambda}_i\tilde{x}_i\|$ determines the accuracy of such an approximation for every Ritz pair.
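The four steps translate directly into a few lines of numpy. The sketch below assumes a Hermitian $A$ (so numpy's eigh applies; a general matrix would use eig instead) and uses a small test matrix of our own choosing with eigenvalues 1, 2, 3, in the spirit of the example discussed next.

```python
# Minimal numpy sketch of the four algorithm steps above (Hermitian case).
import numpy as np

def rayleigh_ritz(A, V):
    """Ritz pairs of A projected onto the orthonormal columns of V."""
    B = V.conj().T @ A @ V        # step 1: the m-by-m matrix V* A V
    mu, Y = np.linalg.eigh(B)     # step 2: solve the small eigenproblem
    X = V @ Y                     # step 3: Ritz vectors x_i = V y_i
    return mu, X                  # step 4: Ritz values and Ritz vectors

# Illustrative 3x3 matrix (our own choice) with eigenvalues 1, 2, 3:
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
# The columns of V span exactly the eigenvectors belonging to the
# eigenvalues 1 and 3, so the Ritz values come out exactly as 1 and 3.
V = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
mu, X = rayleigh_ritz(A, V)
residual = np.linalg.norm(A @ X - X * mu, axis=0)  # accuracy per Ritz pair
print(mu)        # [1. 3.]
print(residual)  # ~[0. 0.]
```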
For matrix eigenvalue problems: In the easiest case $m=1$, the $N \times 1$ matrix $V$ turns into a unit column-vector $v$, the $1 \times 1$ matrix $V^*AV$ is a scalar equal to the Rayleigh quotient $\rho(v) = v^*Av/v^*v$, the only $i=1$ solution to the eigenvalue problem is $y_1 = 1$ with $\mu_1 = \rho(v)$, and the only Ritz vector is $v$ itself. Thus, for $m=1$ the Rayleigh–Ritz method turns into computing the Rayleigh quotient. Another useful connection to the Rayleigh quotient is that $\mu_i = \rho(\tilde{x}_i)$ for every Ritz pair $(\tilde{\lambda}_i, \tilde{x}_i)$, allowing one to derive some properties of the Ritz values $\mu_i$ from the corresponding theory for the Rayleigh quotient. For example, if $A$ is a Hermitian matrix, its Rayleigh quotient (and thus every one of its Ritz values) is real and lies within the closed interval between the smallest and largest eigenvalues of $A$.

Example: Take a $3 \times 3$ matrix $A$ with eigenvalues $1, 2, 3$ and corresponding eigenvectors, and let $V$ be a $3 \times 2$ matrix with orthonormal columns whose column space is spanned by the eigenvectors for the eigenvalues $1$ and $3$. Then $V^*AV$ has eigenvalues $1$ and $3$, so the Ritz values are $1, 3$, and each Ritz vector is exactly one of the eigenvectors of $A$ for the given $V$; the Ritz values give exactly two of the three eigenvalues of $A$. The mathematical explanation for the exact approximation is that the column space of the matrix $V$ is exactly the same as the subspace spanned by the two eigenvectors $x_{\lambda=1}$ and $x_{\lambda=3}$.

For matrix singular value problems: Truncated singular value decomposition (SVD) in numerical linear algebra can also use the Rayleigh–Ritz method to find approximations to left and right singular vectors of a matrix $M \in \mathbb{C}^{M \times N}$ of size $M \times N$ in given subspaces, by turning the singular value problem into an eigenvalue problem.

Using the normal matrix: The defining relations for a singular value $\sigma$ and the corresponding left and right singular vectors are $Mv = \sigma u$ and $M^*u = \sigma v$. Having found one set (left or right) of approximate singular vectors and singular values by naively applying the Rayleigh–Ritz method to the Hermitian normal matrix $M^*M \in \mathbb{C}^{N \times N}$ or $MM^* \in \mathbb{C}^{M \times M}$, whichever is smaller, one could determine the other set of (left or right) singular vectors simply by dividing by the singular values, i.e. $u = Mv/\sigma$ and $v = M^*u/\sigma$. However, the division is unstable or fails for small or zero singular values.

An alternative approach, e.g. defining the normal matrix as $A = M^*M \in \mathbb{C}^{N \times N}$ of size $N \times N$, takes advantage of the fact that, for a given $N \times m$ matrix $W \in \mathbb{C}^{N \times m}$ with orthonormal columns, the eigenvalue problem of the Rayleigh–Ritz method for the $m \times m$ matrix $W^*AW = (MW)^*MW$ can be interpreted as a singular value problem for the $M \times m$ matrix $MW$. This interpretation allows simple simultaneous calculation of both left and right approximate singular vectors as follows.

1. Compute the $M \times m$ matrix $MW$.
2. Compute the thin, or economy-sized, SVD $MW = U\Sigma V_h$, with $M \times m$ matrix $U$, $m \times m$ diagonal matrix $\Sigma$, and $m \times m$ matrix $V_h$.
3. Compute the matrices of the Ritz left singular vectors $\tilde{U} = U$ and right singular vectors $\tilde{V}_h = V_hW^*$.
For matrix singular value problems: Output the approximations U, Σ, V_h, called the Ritz singular triplets, to selected singular values and the corresponding left and right singular vectors of the original matrix M, representing an approximate truncated singular value decomposition (SVD) with right singular vectors restricted to the column-space of the matrix W. The algorithm can be used as a post-processing step where the matrix W is an output of an eigenvalue solver, e.g., such as LOBPCG, approximating numerically selected eigenvectors of the normal matrix A = M^*M. Example The matrix has its normal matrix, singular values 1, 2, 3, 4, and the corresponding thin SVD, where the columns of the first multiplier are from the complete set of the left singular vectors of the matrix M, the diagonal entries of the middle term are the singular values, and the columns of the last multiplier transposed (although the transposition does not change it) are the corresponding right singular vectors. For matrix singular value problems: Let us take W with the column-space spanned by the two exact right singular vectors corresponding to the singular values 1 and 2. For matrix singular value problems: Following step 1 of the algorithm, we compute MW, and on step 2 its thin SVD MW = UΣV_h. Thus we already obtain the singular values 2 and 1 from Σ and, from U, the corresponding two left singular vectors u as [0,1,0,0,0]^* and [1,0,0,0,0]^*; since the column-space of the matrix W is spanned by the corresponding exact right singular vectors, the approximations are exact for the given W. Finally, step 3 computes the matrix V_h = V_h W^*, recovering from its rows the two right singular vectors v as [0,1,0,0]^* and [1,0,0,0]^*. We validate the first vector: Mv = σu and M^*u = σv. Thus, for the given matrix W, with its column-space spanned by two exact right singular vectors, we determine these right singular vectors, as well as the corresponding left singular vectors and the singular values, all exactly. For an arbitrary matrix W, we obtain approximate singular triplets which are optimal given W in the sense of optimality of the Rayleigh–Ritz method.
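The three-step singular value procedure above translates directly into a short NumPy sketch; the specific 5×4 matrix M (with singular values 1, 2, 3, 4) and the basis W below are constructed for illustration in the spirit of the example, not copied from it.

```python
import numpy as np

def ritz_singular_triplets(M, W):
    """Approximate singular triplets of M with right singular vectors
    restricted to the column space of W (orthonormal columns), following
    the three-step procedure described above."""
    # Step 1: form the product MW.
    MW = M @ W
    # Step 2: thin (economy-sized) SVD of MW.
    U, S, Vh = np.linalg.svd(MW, full_matrices=False)
    # Step 3: Ritz left singular vectors are the columns of U; Ritz right
    # singular vectors are the rows of Vh @ W^*.
    Vh = Vh @ W.conj().T
    return U, S, Vh

# Usage sketch: a 5x4 matrix with singular values 1..4 and a subspace W spanned
# by two exact right singular vectors (unit coordinate vectors).
M = np.zeros((5, 4))
M[1, 0], M[0, 1], M[3, 2], M[2, 3] = 1.0, 2.0, 3.0, 4.0
W = np.eye(4)[:, :2]            # spans the right singular vectors for sigma = 1, 2
U, S, Vh = ritz_singular_triplets(M, W)
print(S)                        # approximately [2., 1.]
# Check the defining relations M v = sigma u and M* u = sigma v for each triplet.
for k in range(len(S)):
    u, s, v = U[:, k], S[k], Vh[k, :]
    assert np.allclose(M @ v, s * u) and np.allclose(M.conj().T @ u, s * v)
```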
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Boronic acid** Boronic acid: A boronic acid is an organic compound related to boric acid (B(OH)3) in which one of the three hydroxyl groups (−OH) is replaced by an alkyl or aryl group (represented by R in the general formula R−B(OH)2). As compounds containing a carbon–boron bond, members of this class belong to the larger class of organoboranes. Boronic acid: Boronic acids act as Lewis acids. Their unique feature is that they are capable of forming reversible covalent complexes with sugars, amino acids, hydroxamic acids, etc. (molecules with vicinal (1,2) or occasionally (1,3) substituted Lewis base donors (alcohol, amine, carboxylate)). The pKa of a boronic acid is ~9, but boronic acids can form tetrahedral boronate complexes with pKa ~7. They are occasionally used in the area of molecular recognition to bind to saccharides for fluorescent detection or selective transport of saccharides across membranes. Boronic acid: Boronic acids are used extensively in organic chemistry as chemical building blocks and intermediates, predominantly in the Suzuki coupling. A key concept in their chemistry is transmetallation of their organic residue to a transition metal. Boronic acid: The compound bortezomib with a boronic acid group is a drug used in chemotherapy. The boron atom in this molecule is a key substructure because through it certain proteasomes are blocked that would otherwise degrade proteins. Boronic acids are known to bind to active site serines and are part of inhibitors for porcine pancreatic lipase, subtilisin and the protease Kex2. Furthermore, boronic acid derivatives constitute a class of inhibitors for human acyl-protein thioesterase 1 and 2, which are cancer drug targets within the Ras cycle. The boronic acid functional group is reputed to have low inherent toxicity. This is one of the reasons for the popularity of the Suzuki coupling in the development and synthesis of pharmaceutical agents. However, a significant fraction of commonly used boronic acids and their derivatives were recently found to give a positive Ames test and act as chemical mutagens. The mechanism of mutagenicity is thought to involve the generation of organic radicals via oxidation of the boronic acid by atmospheric oxygen. Structure and synthesis: In 1860, Edward Frankland was the first to report the preparation and isolation of a boronic acid. Ethylboronic acid was synthesized by a two-stage process. First, diethylzinc and triethyl borate reacted to produce triethylborane. This compound then oxidized in air to form ethylboronic acid. Several synthetic routes are now in common use, and many air-stable boronic acids are commercially available. Structure and synthesis: Boronic acids typically have high melting points. They are prone to forming anhydrides by loss of water molecules, typically to give cyclic trimers. Structure and synthesis: Synthesis Boronic acids can be obtained via several methods. The most common way is reaction of organometallic compounds based on lithium or magnesium (Grignards) with borate esters. For example, phenylboronic acid is produced from phenylmagnesium bromide and trimethyl borate followed by hydrolysis: PhMgBr + B(OMe)3 → PhB(OMe)2 + MeOMgBr PhB(OMe)2 + 2 H2O → PhB(OH)2 + 2 MeOH. Another method is reaction of an arylsilane (RSiR3) with boron tribromide (BBr3) in a transmetallation to RBBr2 followed by acidic hydrolysis.
Structure and synthesis: A third method is by palladium-catalysed reaction of aryl halides and triflates with diboronyl esters in a coupling reaction known as the Miyaura borylation reaction. An alternative to esters in this method is the use of diboronic acid or tetrahydroxydiboron ([B(OH)2]2). Boronic esters (also named boronate esters): Boronic esters are esters formed between a boronic acid and an alcohol. The compounds can be obtained from borate esters by condensation with alcohols and diols. Phenylboronic acid can be self-condensed to the cyclic trimer called triphenyl anhydride or triphenylboroxin. Compounds with 5-membered cyclic structures containing the C–O–B–O–C linkage are called dioxaborolanes and those with 6-membered rings dioxaborinanes. Organic chemistry applications: Suzuki coupling reaction Boronic acids are used in organic chemistry in the Suzuki reaction. In this reaction the boron atom exchanges its aryl group with an alkoxy group from palladium. Organic chemistry applications: Chan–Lam coupling In the Chan–Lam coupling the alkyl, alkenyl or aryl boronic acid reacts with an N–H or O–H containing compound with Cu(II) such as copper(II) acetate and oxygen and a base such as pyridine, forming a new carbon–nitrogen bond or carbon–oxygen bond, for example in this reaction of 2-pyridone with trans-1-hexenylboronic acid: The reaction mechanism sequence is deprotonation of the amine, coordination of the amine to the copper(II), transmetallation (transferring the alkyl boron group to copper and the copper acetate group to boron), oxidation of Cu(II) to Cu(III) by oxygen and finally reductive elimination of Cu(III) to Cu(I) with formation of the product. Direct reductive elimination of Cu(II) to Cu(0) also takes place but is very slow. In catalytic systems oxygen also regenerates the Cu(II) catalyst. Organic chemistry applications: Liebeskind–Srogl coupling In the Liebeskind–Srogl coupling a thiol ester is coupled with a boronic acid to produce a ketone. Organic chemistry applications: Conjugate addition The organic residue of a boronic acid is a nucleophile in conjugate addition, also in conjunction with a metal. In one study the pinacol ester of allylboronic acid is reacted with dibenzylidene acetone in such a conjugate addition: The catalyst system in this reaction is tris(dibenzylideneacetone)dipalladium(0) / tricyclohexylphosphine. Another conjugate addition is that of gramine with phenylboronic acid catalyzed by cyclooctadiene rhodium chloride dimer: Oxidation Boronic esters are oxidized to the corresponding alcohols with base and hydrogen peroxide (for an example see: carbenoid). Homologation In boronic ester homologation an alkyl group shifts from boron in a boronate to carbon: In this reaction dichloromethyllithium converts the boronic ester into a boronate. A Lewis acid then induces a rearrangement of the alkyl group with displacement of the chlorine group. Finally, an organometallic reagent such as a Grignard reagent displaces the second chlorine atom, effectively leading to insertion of an RCH2 group into the C–B bond. Another reaction featuring a boronate alkyl migration is the Petasis reaction. Organic chemistry applications: Electrophilic allyl shifts Allyl boronic esters engage in electrophilic allyl shifts very much like their silicon counterparts in the Sakurai reaction. In one study a diallylation reagent combines both: Hydrolysis Hydrolysis of boronic esters back to the boronic acid and the alcohol can be accomplished in certain systems with thionyl chloride and pyridine.
Aryl boronic acids or esters may be hydrolyzed to the corresponding phenols by reaction with hydroxylamine at room temperature. Organic chemistry applications: C–H coupling reactions The diboron compound bis(pinacolato)diboron reacts with aromatic heterocycles or simple arenes to give an arylboronate ester, with the iridium catalyst [IrCl(COD)]2 (a modification of Crabtree's catalyst) and the base 4,4′-di-tert-butyl-2,2′-bipyridine, in a C–H coupling reaction, for example with benzene: In one modification the arene reacts using only a stoichiometric equivalent rather than a large excess, using the cheaper pinacolborane: Unlike in ordinary electrophilic aromatic substitution (EAS), where electronic effects dominate, the regioselectivity in this reaction type is solely determined by the steric bulk of the iridium complex. This is exploited in a meta-bromination of m-xylene which by standard EAS would give the ortho product: Protonolysis Protodeboronation is a chemical reaction involving the protonolysis of a boronic acid (or other organoborane compound) in which a carbon–boron bond is broken and replaced with a carbon–hydrogen bond. Protodeboronation is a well-known undesired side reaction, and frequently associated with metal-catalysed coupling reactions that utilise boronic acids (see Suzuki reaction). For a given boronic acid, the propensity to undergo protodeboronation is highly variable and dependent on various factors, such as the reaction conditions employed and the organic substituent of the boronic acid: Supramolecular chemistry: Saccharide recognition The covalent pair-wise interaction between boronic acids and hydroxy groups as found in alcohols and acids is rapid and reversible in aqueous solutions. The equilibrium established between boronic acids and the hydroxyl groups present on saccharides has been successfully employed to develop a range of sensors for saccharides. One of the key advantages of this dynamic covalent strategy lies in the ability of boronic acids to overcome the challenge of binding neutral species in aqueous media. If arranged correctly, the introduction of a tertiary amine within these supramolecular systems will permit binding to occur at physiological pH and allow signalling mechanisms such as photoinduced electron transfer mediated fluorescence emission to report the binding event. Supramolecular chemistry: Potential applications for this research include blood glucose monitoring systems to help manage diabetes mellitus. As the sensors employ an optical response, monitoring could be achieved using minimally invasive methods; one such example is the investigation of a contact lens that contains a boronic acid based sensor molecule to detect glucose levels within ocular fluids.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Medical statistics** Medical statistics: Medical statistics deals with applications of statistics to medicine and the health sciences, including epidemiology, public health, forensic medicine, and clinical research. Medical statistics has been a recognized branch of statistics in the United Kingdom for more than 40 years, but the term has not come into general use in North America, where the wider term 'biostatistics' is more commonly used. However, "biostatistics" more commonly connotes all applications of statistics to biology. Medical statistics is a subdiscipline of statistics. "It is the science of summarizing, collecting, presenting and interpreting data in medical practice, and using them to estimate the magnitude of associations and test hypotheses. It has a central role in medical investigations. It not only provides a way of organizing information on a wider and more formal basis than relying on the exchange of anecdotes and personal experience, but also takes into account the intrinsic variation inherent in most biological processes." Pharmaceutical statistics: Pharmaceutical statistics is the application of statistics to matters concerning the pharmaceutical industry. This can range from issues of design of experiments, to analysis of drug trials, to issues of commercialization of a medicine. There are many professional bodies concerned with this field, including: European Federation of Statisticians in the Pharmaceutical Industry (EFSPI) and Statisticians In The Pharmaceutical Industry (PSI). There are also journals, including: Statistics in Medicine and Pharmaceutical Statistics. Clinical biostatistics: Clinical biostatistics is concerned with research into the principles and methodology used in the design and analysis of clinical research, and with applying statistical theory to clinical medicine. There is a society for Clinical Biostatistics with annual conferences since its founding in 1978. Clinical Biostatistics is taught in postgraduate biostatistical and applied statistical degrees, for example as part of the BCA Master of Biostatistics program in Australia. Basic concepts: For describing situations: Incidence (epidemiology) vs. Prevalence vs. Cumulative incidence Many medical tests (such as pregnancy tests) have two possible results: positive or negative. However, tests will sometimes yield incorrect results in the form of false positives or false negatives. False positives and false negatives can be described by the statistical concepts of type I and type II errors, respectively, where the null hypothesis is that the patient will test negative. The precision of a medical test is usually calculated in the form of positive predictive values (PPVs) and negative predictive values (NPVs). PPVs and NPVs of medical tests depend on intrinsic properties of the test as well as the prevalence of the condition being tested for. For example, if any pregnancy test was administered to a population of individuals who were biologically incapable of becoming pregnant, then the test's PPV will be 0% and its NPV will be 100%, simply because true positives and false negatives cannot exist in this population. Basic concepts: Transmission rate vs. force of infection Mortality rate vs. standardized mortality ratio vs. age-standardized mortality rate Pandemic vs. epidemic vs. endemic vs. syndemic Serial interval vs.
incubation period Cancer cluster Sexual network Years of potential life lost Maternal mortality rate Perinatal mortality rate Low birth weight ratio For assessing the effectiveness of an intervention: Absolute risk reduction Control event rate Experimental event rate Number needed to harm Number needed to treat Odds ratio Relative risk reduction Relative risk Relative survival Minimal clinically important difference Related statistical theory: Survival analysis Proportional hazards models Active control trials: clinical trials in which a new treatment is compared with some other active agent rather than a placebo. ADLS (Activities of daily living scale): a scale designed to measure physical ability/disability that is used in investigations of a variety of chronic disabling conditions, such as arthritis. This scale is based on scoring responses to questions about self-care, grooming, etc. Actuarial statistics: the statistics used by actuaries to calculate liabilities, evaluate risks and plan the financial course of insurance, pensions, etc.
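Because PPVs and NPVs depend on the prevalence of the condition as well as on the test's intrinsic sensitivity and specificity, a small worked calculation can make that dependence visible; the sensitivity, specificity and prevalence figures below are invented for illustration only.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV from a test's sensitivity/specificity and the
    prevalence of the condition, using Bayes' theorem."""
    true_pos = sensitivity * prevalence                 # P(test+ and diseased)
    false_pos = (1 - specificity) * (1 - prevalence)    # P(test+ and healthy)
    true_neg = specificity * (1 - prevalence)           # P(test- and healthy)
    false_neg = (1 - sensitivity) * prevalence          # P(test- and diseased)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Illustrative (made-up) numbers: a test with 99% sensitivity and 98% specificity.
for prevalence in (0.001, 0.05, 0.30):
    ppv, npv = predictive_values(0.99, 0.98, prevalence)
    print(f"prevalence={prevalence:.3f}  PPV={ppv:.3f}  NPV={npv:.3f}")
# As prevalence approaches 0, the PPV approaches 0 (and the NPV approaches 1),
# as in the pregnancy-test example above, because true positives and false
# negatives cannot occur in such a population.
```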
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Falu red** Falu red: Falu red or falun red (FAH-loo; Swedish: falu rödfärg, pronounced [ˈfɑ̂ːlɵ ˈrø̂ː(d)færj]) is a permeable red paint commonly used on wooden cottages and barns in Sweden, Finland, and Norway. History: Following hundreds of years of mining in Falun, large piles of residual product were deposited above ground in the vicinity of the mines. History: By the 16th century, mineralization of the mine's tailings and slag added by smelters began to produce a red-coloured sludge rich in copper, limonite, silicic acid, and zinc. When the sludge was heated for a few hours and then mixed with linseed oil and rye flour, it was found to form an excellent anti-weathering paint. During the 17th century, falu red began to be daubed onto wooden buildings to mimic the red-brick façades built by the upper classes. History: In Sweden's built-up areas, wooden buildings were often painted with falu red until the early 19th century, when authorities began to oppose use of the paint. History: Resurgence Falu red saw a resurgence in popularity in the Swedish countryside during the 19th century, when poorer farmers and crofters began to paint their houses. Falu red is still widely used in the countryside. The Finnish expression punainen tupa ja perunamaa, "a red cottage and a potato patch", referring to idyllic home and life, is a direct allusion to a country house painted in falu red. Composition: The paint consists of water, rye flour, linseed oil, silicates, iron oxides, copper compounds, and zinc. As falu red ages the binder deteriorates, leaving the color granules loose, but restoration is easy since simply brushing the surface is sufficient before repainting. The actual color may be different depending on the degree to which the oxide is burnt, ranging from almost black to a bright, light red. Different tones of red have been popular at different times.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interpolation attack** Interpolation attack: In cryptography, an interpolation attack is a type of cryptanalytic attack against block ciphers. Interpolation attack: After the two attacks, differential cryptanalysis and linear cryptanalysis, were presented on block ciphers, some new block ciphers were introduced, which were proven secure against differential and linear attacks. Among these there were some iterated block ciphers such as the KN-Cipher and the SHARK cipher. However, Thomas Jakobsen and Lars Knudsen showed in the late 1990s that these ciphers were easy to break by introducing a new attack called the interpolation attack. Interpolation attack: In the attack, an algebraic function is used to represent an S-box. This may be a simple quadratic, or a polynomial or rational function over a Galois field. Its coefficients can be determined by standard Lagrange interpolation techniques, using known plaintexts as data points. Alternatively, chosen plaintexts can be used to simplify the equations and optimize the attack. Interpolation attack: In its simplest version an interpolation attack expresses the ciphertext as a polynomial of the plaintext. If the polynomial has a relatively low number of unknown coefficients, then with a collection of plaintext/ciphertext (p/c) pairs the polynomial can be reconstructed. With the polynomial reconstructed, the attacker then has a representation of the encryption, without exact knowledge of the secret key. Interpolation attack: The interpolation attack can also be used to recover the secret key. It is easiest to describe the method with an example. Example: Let an iterated cipher be given by c_i = (c_{i−1} ⊕ k_i)^3, where c_0 is the plaintext, c_i the output of the i-th round, k_i the secret i-th round key (derived from the secret key K by some key schedule), and, for an r-round iterated cipher, c_r is the ciphertext. Consider the 2-round cipher. Let x denote the message and c denote the ciphertext. Then the output of round 1 becomes c_1 = (x + k_1)^3 = (x^2 + k_1^2)(x + k_1) = x^3 + k_1^2 x + x^2 k_1 + k_1^3, and the output of round 2 becomes c_2 = c = (c_1 + k_2)^3 = (x^3 + k_1^2 x + x^2 k_1 + k_1^3 + k_2)^3 = x^9 + x^8 k_1 + x^6 k_2 + x^4 k_1^2 k_2 + x^3 k_2^2 + x^2 (k_1 k_2^2 + k_1^4 k_2) + x (k_1^2 k_2^2 + k_1^8) + k_1^6 k_2 + k_1^3 k_2^2 + k_1^9 + k_2^3. Expressing the ciphertext as a polynomial of the plaintext yields p(x) = a_1 x^9 + a_2 x^8 + a_3 x^6 + a_4 x^4 + a_5 x^3 + a_6 x^2 + a_7 x + a_8, where the a_i's are key-dependent constants. Using as many plaintext/ciphertext pairs as there are unknown coefficients in the polynomial p(x), we can construct the polynomial. This can for example be done by Lagrange interpolation (see Lagrange polynomial). When the unknown coefficients have been determined, we have a representation p(x) of the encryption, without knowledge of the secret key K. Existence: Considering an m-bit block cipher, there are 2^m possible plaintexts, and therefore 2^m distinct p/c pairs. Let there be n unknown coefficients in p(x). Since we require as many p/c pairs as the number of unknown coefficients in the polynomial, an interpolation attack exists only if n ≤ 2^m. Time complexity: Assume that the time to construct the polynomial p(x) using p/c pairs is small, in comparison to the time to encrypt the required plaintexts. Let there be n unknown coefficients in p(x). Then the time complexity for this attack is n, requiring n known distinct p/c pairs.
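A small Python sketch can make the simplest version of the attack concrete for the toy two-round cipher above. The field GF(2^5), its reduction polynomial and the key values are arbitrary choices for the illustration; the point is that, after interpolating through enough known plaintext/ciphertext pairs, the attacker can reproduce the encryption of any other plaintext without ever learning the key.

```python
# Toy interpolation attack on the 2-round cipher c_i = (c_{i-1} XOR k_i)^3,
# worked in GF(2^5) with reduction polynomial x^5 + x^2 + 1 (0b100101).
M_BITS, REDUCTION = 5, 0b100101

def gf_mul(a, b):
    """Carry-less multiplication modulo the reduction polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> M_BITS:
            a ^= REDUCTION
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_inv(a):
    return gf_pow(a, 2**M_BITS - 2)     # a^(2^m - 2) = a^(-1) for a != 0

def encrypt(x, k1, k2):
    c1 = gf_pow(x ^ k1, 3)              # round 1: XOR the key, then cube
    return gf_pow(c1 ^ k2, 3)           # round 2

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x
    (addition in GF(2^m) is XOR)."""
    y = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = gf_mul(num, x ^ xj)
                den = gf_mul(den, xi ^ xj)
        y ^= gf_mul(yi, gf_mul(num, gf_inv(den)))
    return y

# Secret round keys, known only to the encryption "oracle" above.
k1, k2 = 0b10110, 0b01011

# The ciphertext polynomial has degree 9, so 10 known p/c pairs suffice.
known = [(x, encrypt(x, k1, k2)) for x in range(10)]

# The attacker now predicts ciphertexts for unseen plaintexts without the key.
for x in range(10, 32):
    assert lagrange_eval(known, x) == encrypt(x, k1, k2)
print("interpolated polynomial reproduces the cipher on all plaintexts")
```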
Interpolation attack by Meet-In-The-Middle: Often this method is more efficient. Here is how it is done. Interpolation attack by Meet-In-The-Middle: Given an r-round iterated cipher with block length m, let z be the output of the cipher after s rounds, with s < r. We will express the value of z as a polynomial of the plaintext x, and as a polynomial of the ciphertext c. Let g(x) ∈ GF(2^m)[x] be the expression of z via x, and let h(c) ∈ GF(2^m)[c] be the expression of z via c. The polynomial g(x) is obtained by computing forward using the iterated formula of the cipher until round s, and the polynomial h(c) is obtained by computing backwards from the iterated formula of the cipher starting from round r until round s+1. So it should hold that g(x) = h(c), and if both g and h are polynomials with a low number of coefficients, then we can solve the equation for the unknown coefficients. Interpolation attack by Meet-In-The-Middle: Time complexity Assume that g(x) can be expressed by p coefficients, and h(c) can be expressed by q coefficients. Then we would need p+q known distinct p/c pairs to solve the equation by setting it up as a matrix equation. However, this matrix equation is solvable only up to a multiplication and an addition. So, to make sure that we get a unique and non-zero solution, we set the coefficient corresponding to the highest degree to one, and the constant term to zero. Therefore, p+q−2 known distinct p/c pairs are required. So the time complexity for this attack is p+q−2, requiring p+q−2 known distinct p/c pairs. Interpolation attack by Meet-In-The-Middle: By the Meet-In-The-Middle approach the total number of coefficients is usually smaller than with the normal method. This makes the method more efficient, since fewer p/c pairs are required. Key-recovery: We can also use the interpolation attack to recover the secret key K. If we remove the last round of an r-round iterated cipher with block length m, the output of the cipher becomes ỹ = c_{r−1}. Call this cipher the reduced cipher. The idea is to make a guess on the last round key k_r, such that we can decrypt one round to obtain the output ỹ of the reduced cipher. Then, to verify the guess, we use the interpolation attack on the reduced cipher, either by the normal method or by the Meet-In-The-Middle method. Here is how it is done. Key-recovery: By the normal method we express the output ỹ of the reduced cipher as a polynomial of the plaintext x. Call the polynomial p(x) ∈ GF(2^m)[x]. If we can express p(x) with n coefficients, then using n known distinct p/c pairs we can construct the polynomial. To verify the guess of the last round key, check with one extra p/c pair whether it holds that p(x) = ỹ. Key-recovery: If yes, then with high probability the guess of the last round key was correct. If no, then make another guess of the key. Key-recovery: By the Meet-In-The-Middle method we express the output z from round s < r as a polynomial of the plaintext x and as a polynomial of the output of the reduced cipher ỹ. Call the polynomials g(x) and h(ỹ), and let them be expressed by p and q coefficients, respectively. Then with q+p−2 known distinct p/c pairs we can find the coefficients. To verify the guess of the last round key, check with one extra p/c pair whether it holds that g(x) = h(ỹ). Key-recovery: If yes, then with high probability the guess of the last round key was correct. If no, then make another guess of the key. Once we have found the correct last round key, we can continue in a similar fashion on the remaining round keys. Time complexity With a secret round key of length m, there are 2^m different keys.
Each is correct with probability 1/2^m if chosen at random. Therefore, we will on average have to make (1/2)·2^m = 2^{m−1} guesses before finding the correct key. Hence, the normal method has average time complexity 2^{m−1}(n+1), requiring n+1 known distinct c/p pairs, and the Meet-In-The-Middle method has average time complexity 2^{m−1}(p+q−1), requiring p+q−1 known distinct c/p pairs. Real world application: The Meet-in-the-middle attack can be used in a variant to attack S-boxes which use the inverse function, because with an m-bit S-box, S: f(x) = x^{−1} = x^{2^m−2} in GF(2^m). The block cipher SHARK uses an SP-network with S-box S: f(x) = x^{−1}. The cipher is resistant against differential and linear cryptanalysis after a small number of rounds. However, it was broken in 1996 by Thomas Jakobsen and Lars Knudsen, using the interpolation attack. Denote by SHARK(n,m,r) a version of SHARK with block size nm bits using n parallel m-bit S-boxes in r rounds. Jakobsen and Knudsen found that there exists an interpolation attack on SHARK(8,8,4) (a 64-bit block cipher) using about 21 chosen plaintexts, and an interpolation attack on SHARK 16 ,7) (a 128-bit block cipher) using about 61 chosen plaintexts. Real world application: Thomas Jakobsen also introduced a probabilistic version of the interpolation attack using Madhu Sudan's algorithm for improved decoding of Reed-Solomon codes. This attack can work even when an algebraic relationship between plaintexts and ciphertexts holds for only a fraction of values.
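The key-recovery procedure described above can be illustrated in the same toy setting, reusing the GF(2^5) helpers (gf_mul, gf_pow, gf_inv, lagrange_eval) from the previous sketch. One deliberate assumption of this illustration: the round function is changed to cube-then-XOR, c_i = c_{i−1}^3 ⊕ k_i, because with the XOR-then-cube rounds used earlier a wrong guess of the last round key only shifts the reduced-cipher output by a constant and so cannot be detected by checking the polynomial degree.

```python
# Key recovery on a toy 2-round cipher c = (x^3 XOR k1)^3 XOR k2 over GF(2^5),
# reusing gf_mul, gf_pow, gf_inv and lagrange_eval from the sketch above.

def encrypt2(x, k1, k2):
    c1 = gf_pow(x, 3) ^ k1          # round 1: cube, then XOR the round key
    return gf_pow(c1, 3) ^ k2       # round 2 (the last round)

def cube_root(c):
    # 3 * 21 = 63 = 2*31 + 1, so y^21 inverts y^3 in GF(2^5).
    return gf_pow(c, 21)

k1, k2 = 0b00111, 0b11010           # secret round keys (unknown to the attacker)

# Known plaintext/ciphertext pairs. The reduced (1-round) cipher is a degree-3
# polynomial of the plaintext, so 4 pairs interpolate it; extra pairs verify it.
pairs = [(x, encrypt2(x, k1, k2)) for x in range(9)]
fit, check = pairs[:4], pairs[4:]

candidates = []
for guess in range(32):             # try every possible last round key
    # Peel off the last round under this guess to get the reduced output y~.
    reduced = [(x, cube_root(c ^ guess)) for x, c in fit]
    ok = all(lagrange_eval(reduced, x) == cube_root(c ^ guess) for x, c in check)
    if ok:
        candidates.append(guess)

print("surviving last-round key guesses:", candidates)
assert k2 in candidates             # the correct key always survives; wrong keys
                                    # are eliminated with overwhelming probability
```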
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Olympus Zuiko Digital 14-54mm f/2.8-3.5** Olympus Zuiko Digital 14-54mm f/2.8-3.5: The Zuiko Digital 14–54 mm f/2.8–3.5 is a Four Thirds System High Grade series lens by Olympus Corporation, initially sold in a kit with the Olympus E-1 camera body and also available separately. Three glass aspherical lenses are used in its optical formulation. It was positioned as an upgrade to the 14-45mm kit lens in terms of focal length range while having larger apertures. It was replaced as the premium kit lens by the Olympus Zuiko Digital ED 12-60mm f/2.8-4 SWD with the release of the E-3, and later was directly replaced by the Olympus Zuiko Digital 14-54mm f/2.8-3.5 II, which is more suited for mirror-up or mirrorless operation. Olympus Zuiko Digital 14-54mm f/2.8-3.5: As with all Pro ("High Grade") series lenses by Olympus, it is sealed against water splashes, rain and dust.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ontology (information science)** Ontology (information science): In information science, an ontology encompasses a representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities that substantiate one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of concepts and categories that represent the subject. Ontology (information science): Every academic discipline or field creates ontologies to limit complexity and organize data into information and knowledge. Each uses ontological assumptions to frame explicit theories, research and applications. New ontologies may improve problem solving within that domain. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages. For instance, the definition and ontology of economics is a primary concern in Marxist economics, but also in other subfields of economics. An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management). Ontology (information science): What ontologies in both information science and philosophy have in common is the attempt to represent entities, ideas and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontology engineering (e.g., Quine and Kripke in philosophy, Sowa and Guarino), and debates concerning to what extent normative ontology is possible (e.g., foundationalism and coherentism in philosophy, BFO and Cyc in artificial intelligence). Ontology (information science): Applied ontology is considered a successor to prior work in philosophy, however many current efforts are more concerned with establishing controlled vocabularies of narrow domains than first principles, the existence of fixed essences or whether enduring objects (e.g., perdurantism and endurantism) may be ontologically more primary than processes. Artificial intelligence has retained the most attention regarding applied ontology in subfields like natural language processing within machine translation and knowledge representation, but ontology editors are being used often in a range of fields like education without the intent to contribute to AI. Ontology in Philosophy: Ontology is a branch of philosophy and intersects areas such as metaphysics, epistemology, and philosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality. Metaphysics deals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence. Metaphysics has been an ongoing topic of discussion since recorded history. Etymology: The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e. "being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. 
"logical discourse", see classical compounds for this type of word formation.While the etymology is Greek, the oldest extant record of the word itself, the Neo-Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius). Etymology: The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in Archeologia Philosophica Nova or New Principles of Philosophy by Gideon Harvey. Formal Ontology: Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that knowledge engineering is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning, which was only marginally successful. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge-based systems. In particular, David Powers introduced the word ontology to AI to refer to real world or robotic grounding, publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for a AAAI Summer Symposium Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy. Formal Ontology: In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber used ontology as a technical term in computer science closely related to earlier idea of semantic networks and taxonomies. Gruber introduced the term as a specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy. Formal Ontology: Attempting to distance ontologies from taxonomies and similar efforts in knowledge modeling that rely on classes and inheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions — that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world. To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms. Formal Ontology: As refinement of Gruber's definition Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity." Formal Ontology Components: Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations. 
Formal Ontology Components: Types Domain ontology A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings. Formal Ontology Components: Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.). Formal Ontology Components: At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies, but this area of research is still ongoing, and it is a recent event to see the issue sidestepped by having multiple domain ontologies using the same upper ontology like the OBO Foundry. Formal Ontology Components: Upper ontology An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that overarches the terms and associated object descriptions as they are used in various relevant domain ontologies. Standardized upper ontologies available for use include BFO, BORO method, Dublin Core, GFO, Cyc, SUMO, UMBEL, the Unified Foundational Ontology (UFO), and DOLCE. WordNet has been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies. Hybrid ontology The Gellish ontology is an example of a combination of an upper and a domain ontology. Visualization: A survey of ontology visualization methods is presented by Katifori et al. An updated survey of ontology visualization methods and tools was published by Dudás et al. The most established ontology visualization methods, namely indented tree and graph visualization are evaluated by Fu et al. A visual language for ontologies represented in OWL is specified by the Visual Notation for OWL Ontologies (VOWL). Engineering: Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain. It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. 
Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include: Ensuring the ontology is current with domain knowledge and term use Providing sufficient specificity and concept coverage for the domain of interest, thus minimizing the content completeness problem Ensuring the ontology can support its use cases Editors Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages. Engineering: Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc. Engineering: Learning Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges. Engineering: Research Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come about accepting certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply, epistemological assumptions force researchers to question how they arrive at the knowledge they have. Languages: An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based: Common Algebraic Specification Language is a general logic-based specification language developed within the IFIP working group 1.3 "Foundations of System Specifications" and is a de facto standard language for software specifications. It is now being applied to ontology specifications in order to provide modularity and structuring mechanisms. Languages: Common logic is ISO standard 24707, a specification of a family of ontology languages that can be accurately translated into each other. The Cyc project has its own ontology language called CycL, based on first-order predicate calculus with some higher-order extensions. DOGMA (Developing Ontology-Grounded Methods and Applications) adopts the fact-oriented modeling approach to provide a higher level of semantic stability. The Gellish language includes rules for its own extension and thus integrates an ontology with an ontology language. IDEF5 is a software engineering method to develop and maintain usable, accurate, domain ontologies. KIF is a syntax for first-order logic that is based on S-expressions. SUO-KIF is a derivative version supporting the Suggested Upper Merged Ontology. MOF and UML are standards of the OMG. Olog is a category theoretic approach to ontologies, emphasizing translations between ontologies using functors. OBO, a language used for biological and biomedical ontologies.
OntoUML is an ontologically well-founded profile of UML for conceptual modeling of domain ontologies. OWL is a language for making ontological statements, developed as a follow-on from RDF and RDFS, as well as earlier ontology language projects including OIL, DAML, and DAML+OIL. OWL is intended to be used over the World Wide Web, and all its elements (classes, properties and individuals) are defined as RDF resources, and identified by URIs. Rule Interchange Format (RIF) and F-Logic combine ontologies and rules. Semantic Application Design Language (SADL) captures a subset of the expressiveness of OWL, using an English-like language entered via an Eclipse Plug-in. SBVR (Semantics of Business Vocabularies and Rules) is an OMG standard adopted in industry to build ontologies. TOVE Project, TOronto Virtual Enterprise project Published examples: Arabic Ontology, a linguistic ontology for Arabic, which can be used as an Arabic Wordnet but with ontologically-clean content. AURUM - Information Security Ontology, An ontology for information security knowledge sharing, enabling users to collaboratively understand and extend the domain knowledge body. It may serve as a basis for automated information security risk and compliance management. Published examples: BabelNet, a very large multilingual semantic network and ontology, lexicalized in many languages Basic Formal Ontology, a formal upper ontology designed to support scientific research BioPAX, an ontology for the exchange and interoperability of biological pathway (cellular processes) data BMO, an e-Business Model Ontology based on a review of enterprise ontologies and business model literature SSBMO, a Strongly Sustainable Business Model Ontology based on a review of the systems based natural and social science literature (including business). Includes critique of and significant extensions to the Business Model Ontology (BMO). Published examples: CCO and GexKB, Application Ontologies (APO) that integrate diverse types of knowledge with the Cell Cycle Ontology (CCO) and the Gene Expression Knowledge Base (GexKB) CContology (Customer Complaint Ontology), an e-business ontology to support online customer complaint management CIDOC Conceptual Reference Model, an ontology for cultural heritage COSMO, a Foundation Ontology (current version in OWL) that is designed to contain representations of all of the primitive concepts needed to logically specify the meanings of any domain entity. It is intended to serve as a basic ontology that can be used to translate among the representations in other ontologies or databases. It started as a merger of the basic elements of the OpenCyc and SUMO ontologies, and has been supplemented with other ontology elements (types, relations) so as to include representations of all of the words in the Longman dictionary defining vocabulary. 
Published examples: Computer Science Ontology, an automatically generated ontology of research topics in the field of computer science Cyc, a large Foundation Ontology for formal representation of the universe of discourse Disease Ontology, designed to facilitate the mapping of diseases and associated conditions to particular medical codes DOLCE, a Descriptive Ontology for Linguistic and Cognitive Engineering Drammar, ontology of drama Dublin Core, a simple ontology for documents and publishing Financial Industry Business Ontology (FIBO), a business conceptual ontology for the financial industry Foundational, Core and Linguistic Ontologies Foundational Model of Anatomy, an ontology for human anatomy Friend of a Friend, an ontology for describing persons, their activities and their relations to other people and objects Gene Ontology for genomics Gellish English dictionary, an ontology that includes a dictionary and taxonomy that includes an upper ontology and a lower ontology that focusses on industrial and business applications in engineering, technology and procurement. Published examples: Geopolitical ontology, an ontology describing geopolitical information created by Food and Agriculture Organization(FAO). The geopolitical ontology includes names in multiple languages (English, French, Spanish, Arabic, Chinese, Russian and Italian); maps standard coding systems (UN, ISO, FAOSTAT, AGROVOC, etc.); provides relations among territories (land borders, group membership, etc.); and tracks historical changes. In addition, FAO provides web services of geopolitical ontology and a module maker to download modules of the geopolitical ontology into different formats (RDF, XML, and EXCEL). See more information at FAO Country Profiles. Published examples: GAO (General Automotive Ontology) - an ontology for the automotive industry that includes 'car' extensions GOLD, General Ontology for Linguistic Description GUM (Generalized Upper Model), a linguistically motivated ontology for mediating between clients systems and natural language technology IDEAS Group, a formal ontology for enterprise architecture being developed by the Australian, Canadian, UK and U.S. Defence Depts. Linkbase, a formal representation of the biomedical domain, founded upon Basic Formal Ontology. LPL, Landmark Pattern Language NCBO Bioportal, biological and biomedical ontologies and associated tools to search, browse and visualise NIFSTD Ontologies from the Neuroscience Information Framework: a modular set of ontologies for the neuroscience domain. Published examples: OBO-Edit, an ontology browser for most of the Open Biological and Biomedical Ontologies OBO Foundry, a suite of interoperable reference ontologies in biology and biomedicine OMNIBUS Ontology, an ontology of learning, instruction, and instructional design Ontology for Biomedical Investigations, an open-access, integrated ontology of biological and clinical investigations ONSTR, Ontology for Newborn Screening Follow-up and Translational Research, Newborn Screening Follow-up Data Integration Collaborative, Emory University, Atlanta. Published examples: Plant Ontology for plant structures and growth/development stages, etc. POPE, Purdue Ontology for Pharmaceutical Engineering PRO, the Protein Ontology of the Protein Information Resource, Georgetown University ProbOnto, knowledge base and ontology of probability distributions. 
Program abstraction taxonomy Protein Ontology for proteomics RXNO Ontology, for name reactions in chemistry SCDO, the Sickle Cell Disease Ontology, facilitates data sharing and collaborations within the SDC community, amongst other applications (see list on SCDO website). Published examples: Sequence Ontology, for representing genomic feature types found on biological sequences SNOMED CT (Systematized Nomenclature of Medicine—Clinical Terms) Suggested Upper Merged Ontology, a formal upper ontology Systems Biology Ontology (SBO), for computational models in biology SWEET, Semantic Web for Earth and Environmental Terminology SSN/SOSA, The Semantic Sensor Network Ontology (SSN) and Sensor, Observation, Sample, and Actuator Ontology (SOSA) are W3C Recommendation and OGC Standards for describing sensors and their observations. Published examples: ThoughtTreasure ontology TIME-ITEM, Topics for Indexing Medical Education Uberon, representing animal anatomical structures UMBEL, a lightweight reference structure of 20,000 subject concept classes and their relationships derived from OpenCyc WordNet, a lexical reference system YAMATO, Yet Another More Advanced Top-level OntologyThe W3C Linking Open Data community project coordinates attempts to converge different ontologies into worldwide Semantic Web. Libraries: The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries. The following are libraries of human-selected ontologies. COLORE is an open repository of first-order ontologies in Common Logic with formal links between ontologies in the repository. DAML Ontology Library maintains a legacy of ontologies in DAML. Ontology Design Patterns portal is a wiki repository of reusable components and practices for ontology design, and also maintains a list of exemplary ontologies. Protégé Ontology Library contains a set of OWL, Frame-based and other format ontologies. SchemaWeb is a directory of RDF schemata expressed in RDFS, OWL and DAML+OIL.The following are both directories and search engines. OBO Foundry is a suite of interoperable reference ontologies in biology and biomedicine. Bioportal (ontology repository of NCBO) OntoSelect Ontology Library offers similar services for RDF/S, DAML and OWL ontologies. Ontaria is a "searchable and browsable directory of semantic web data" with a focus on RDF vocabularies with OWL ontologies. (NB Project "on hold" since 2004). Swoogle is a directory and search engine for all RDF resources available on the Web, including ontologies. Open Ontology Repository initiative ROMULUS is a foundational ontology repository aimed at improving semantic interoperability. Currently there are three foundational ontologies in the repository: DOLCE, BFO and GFO. Examples of applications: In general, ontologies can be used beneficially in several fields. Enterprise applications. A more concrete example is SAPPHIRE (Health care) or Situational Awareness and Preparedness for Public Health Incidences and Reasoning Engines which is a semantics-based health information system capable of tracking and evaluating situations and occurrences that may affect public health. Geographic information systems bring together data from different sources and benefit therefore from ontological metadata which helps to connect the semantics of the data. 
Examples of applications: Domain-specific ontologies are extremely important in biomedical research, which requires named entity disambiguation of various biomedical terms and abbreviations that have the same string of characters but represent different biomedical concepts. For example, CSF can represent Colony Stimulating Factor or Cerebral Spinal Fluid, both of which are represented by the same term, CSF, in biomedical literature. This is why a large number of public ontologies are related to the life sciences. Life science data science tools that fail to implement these types of biomedical ontologies will not be able to accurately determine causal relationships between concepts.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lobectomy** Lobectomy: Not to be confused with a lobotomy. Lobectomy means surgical excision of a lobe. This may refer to a lobe of the lung (also simply called a lobectomy), a lobe of the thyroid (hemithyroidectomy), a lobe of the brain (as in anterior temporal lobectomy), or a lobe of the liver (hepatectomy). Lung lobectomy: A lobectomy of the lung is performed in early-stage non-small cell lung cancer patients. It is not performed on patients who have lung cancer that has spread to other parts of the body. Tumor size, type, and location are major factors in whether a lobectomy is performed. The underlying lung damage can be due to cancer or smoking. Lung lobectomies are performed on patients as young as eleven or twelve who have no cancer or smoking history, but have conditions from birth or early childhood that necessitate the operation. Such patients will have reduced lung capacity, which tends to limit their range of activities through life. They often need to use inhalers on a daily basis, and are often classified as being asthmatic.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crown-rump length** Crown-rump length: Crown-rump length (CRL) is the measurement of the length of human embryos and fetuses from the top of the head (crown) to the bottom of the buttocks (rump). It is typically determined from ultrasound imagery and can be used to estimate gestational age. Introduction: The embryo and fetus float in the amniotic fluid inside the uterus of the mother usually in a curved posture resembling the letter C. The measurement can actually vary slightly if the fetus is temporarily stretching (straightening) its body. The measurement needs to be in the natural state with an unstretched body which is actually C shaped. The measurement of CRL is useful in determining the gestational age (menstrual age starting from the first day of the last menstrual period) and thus the expected date of delivery (EDD). Different human fetuses grow at different rates and thus the gestational age is an approximation. Recent evidence has indicated that CRL growth (and thus the approximation of gestational age) may be influenced by maternal factors such as age, smoking, and folic acid intake. Early in pregnancy gestational age 8 weeks, it is accurate within about +/- 5 days but later in pregnancy due to different growth rates, the accuracy is less. In that situation, other parameters can be used in addition to CRL. The length of the umbilical cord is approximately equal to the CRL throughout pregnancy. Introduction: Gestational age is not the same as fertilization age. It takes about 14 days from the first day of the last menstrual period for conception to take place and thus for the conceptus to form. The age from this point in time (conception) is called the fertilization age and is thus 2 weeks shorter than the gestational age. Thus a 6-week gestational age would be a 4-week fertilization age. Some authorities however casually interchange these terms and the reader is advised to be cautious. An average gestational period (duration of pregnancy from the first day of the last menstrual period up to delivery) is 280 days. On average, this is 9 months and 6 days. Gestational age estimation: A commonly used estimate of gestational age in weeks is (as described by Verburg et al.): 0.2313 1.4653 0.001737 ⋅CRL Gestational age estimation in days is carried out according to the equations: 40.9 3.24585 0.5 0.348956 ⋅CRL ; and SD of GA = 2.39102 + (0.0193474 × CRL).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shimura correspondence** Shimura correspondence: In number theory, the Shimura correspondence is a correspondence between modular forms F of half-integral weight k+1/2 and modular forms f of even weight 2k, discovered by Goro Shimura (1973). It has the property that the eigenvalue of a Hecke operator T_{n^2} on F is equal to the eigenvalue of T_n on f. Let f be a holomorphic cusp form with weight (2k+1)/2 and character χ. For any prime number p, let ∑_{n=1}^{∞} Λ(n) n^{−s} = ∏_p (1 − ω_p p^{−s} + χ(p)^2 p^{2k−1−2s})^{−1}, where the ω_p are the eigenvalues of the Hecke operators T(p^2) determined by p. Using the functional equation of the L-function, Shimura showed that F(z) = ∑_{n=1}^{∞} Λ(n) q^n is a holomorphic modular form of weight 2k and character χ^2. Shimura's proof uses the Rankin–Selberg convolution of f(z) with the theta series θ_ψ(z) = ∑_{n=−∞}^{∞} ψ(n) n^ν e^{2iπ n^2 z}, where ν = (1 − ψ(−1))/2, for various Dirichlet characters ψ, and then applies Weil's converse theorem.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Memoirs of Modern Love: Curious Age** Memoirs of Modern Love: Curious Age: Memoirs of Modern Love: Curious Age (現代愛の事典 知りたい年頃, Gendai Ai no Jiten: Shiritai Toshigoro) aka Contemporary Dictionary of Love: Age of Curiosity is a 1967 Japanese pink film directed by Shin'ya Yamamoto and featuring Naomi Tani in one of her earliest starring roles. Synopsis: While an obscene audio tape is played, a young woman has sex. She becomes obsessed with the recording and can only achieve orgasm if she is listening to it. Complications ensue when her boyfriend becomes troubled by the tape and is unable to perform sexually while it is being played. Cast: Naomi Tani Yumiko Matsumoto Miki Hayashi Background and critical appraisal: Director Shin'ya Yamamoto filmed Memoirs of Modern Love: Curious Age for Mamoru Watanabe's Watanabe Pro and it was released theatrically in Japan by Tōkyō Kōei in 1967. Yamamoto and star Naomi Tani worked together in other early pink films such as Degenerate (変質者, Henshitsusha) (also 1967) and Season For Rapists (痴漢の季節, Chikan no Kisetsu) (1968). They both later worked in Nikkatsu's Roman Porno films, but they did not work together in that series.In their Japanese Cinema Encyclopedia: The Sex Films, Thomas and Yuko Mihara Weisser give Memoirs of Modern Love: Curious Age a rating of two-and-a-half out of four stars. They note that the plotline is "thin and ludicrous", and only an excuse for Tani to show her "primo body".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Urban informatics** Urban informatics: Urban informatics refers to the study of people creating, applying and using information and communication technology and data in the context of cities and urban environments. It sits at the conjunction of urban science, geomatics, and informatics, with an ultimate goal of creating more smart and sustainable cities. Various definitions are available, some provided in the Definitions section. Although first mentions of the term date back as early as 1987, urban informatics did not emerge as a notable field of research and practice until 2006 (see History section). Since then, the emergence and growing popularity of ubiquitous computing, open data and big data analytics, as well as smart cities, contributed to a surge in interest in urban informatics, not just from academics but also from industry and city governments seeking to explore and apply the possibilities and opportunities of urban informatics. Definitions: Many definitions of urban informatics have been published and can be found online. The descriptions provided by Townsend in his foreword and by Foth in his preface to the Handbook of Research on Urban Informatics emphasize two key aspects: (1) the new possibilities (including real-time data) for both citizens and city administrations afforded by ubiquitous computing, and (2) the convergence of physical and digital aspects of the city. Definitions: "Urban informatics is the study, design, and practice of urban experiences across different urban contexts that are created by new opportunities of real-time, ubiquitous technology and the augmentation that mediates the physical and digital layers of people networks and urban infrastructures." In this definition, urban informatics is a trans-disciplinary field of research and practice that draws on three broad domains: people, place and technology. Definitions: "People" can refer to city residents, citizens, and community groups, from various socio-cultural backgrounds, as well as the social dimensions of non-profit organisations and businesses. The social research domains that urban informatics draws from include urban sociology, media studies, communication studies, cultural studies, city planning and others. "Place" can refer to distinct urban sites, locales and habitats, as well as to larger-scale geographic entities such as neighbourhoods, public space, suburbs, regions, or peri-urban areas. The place or spatial research domains entail urban studies, architecture, urban design, urban planning, geography, and others. Definitions: "Technology" can refer to various types of information and communication technology and ubiquitous computing / urban computing technology such as mobile phones, wearable devices, urban screens, media façades, sensors, and other Internet of Things devices. 
The technology research domains span informatics, computer science, software engineering, human–computer interaction, and others.In addition to geographic data/spatial data, most common sources of data relevant to urban informatics can be divided into three broad categories: government data (census data, open data, etc.); personal data (social media, quantified self data, etc.); and sensor data (transport, surveillance, CCTV, Internet of Things devices, etc.).Although closely related, Foth differentiates urban informatics from the field of urban computing by suggesting that the former focusses more on the social and human implications of technology in cities (similar to the community and social emphases of how community informatics and social informatics are defined), and the latter focusses more on technology and computing. Urban informatics emphasises the relationship between urbanity, as expressed through the many dimensions of urban life, and technology. Definitions: Later, with the increasing popularity of commercial opportunities under the label of smart city and big data, subsequent definitions became narrow and limited in defining urban informatics mainly as big data analytics for efficiency and productivity gains in city contexts – unless the arts and social sciences are added to the interdisciplinary mix. This specialisation within urban informatics is sometimes referred to as 'data-driven, networked urbanism' or urban science.In the book Urban Informatics published in 2021, the term Urban Informatics has been defined in a systematical and principled way. Definitions: "Urban informatics is an interdisciplinary approach to understanding, managing, and designing the city using systematic theories and methods based on new information technologies, and grounded in contemporary developments of computers and communications. It integrates urban science, geomatics, and informatics: urban science provides studies of activities, places, and flows in the urban area; geomatics provides the science and technologies for measuring spatiotemporal and dynamic urban objects in the real world and managing the data obtained from the measurements; informatics provides the science and technologies of information processing, information systems, computer science, and statistics which support the quest to develop applications to cities." History: One of the first occurrences of the term can be found in Mark E. Hepworth's 1987 article "The Information City", which mentions the term "urban informatics" on page 261. However, Hepworth's overall discussion is more concerned with the broader notion of "informatics planning". Considering the article pre-dates the advent of ubiquitous computing and urban computing, it does contain some visionary thoughts about major changes on the horizon brought about by information and communications technology and the impact on cities. History: The Urban Informatics Research Lab was founded at Queensland University of Technology in 2006, the first research group explicitly named to reflect its dedication to the study of urban informatics. The first edited book on the topic, the Handbook of Research on Urban Informatics, published in 2009, brought together researchers and scholars from three broad domains: people, place, and technology; or, the social, the spatial, and the technical. History: There were many precursors to this transdisciplinarity of "people, place, and technology." From an architecture, planning and design background, there is the work of the late William J. 
Mitchell, Dean of the MIT School of Architecture and Planning, and author of the 1995 book City of Bits: Space, Place, and the Infobahn. Mitchell was influential in suggesting a profound relationship between place and technology at a time when mainstream interest was focused on the promise of the Information Superhighway and what Frances Cairncross called the "Death of Distance". Rather than a decline in the significance of place through remote work, distance education, and e-commerce, the physical / tangible layers of the city started to mix with the digital layers of the internet and online communications. Aspects of this trend have been studied under the terms community informatics and community networks.One of the first texts that systematically examined the impact of information technologies on the spatial and social evolution of cities is Telecommunications and the City: Electronic Spaces, Urban Places, by Stephen Graham and Simon Marvin. The relationship between cities and the internet was further expanded upon in a volume edited by Stephen Graham entitled Cybercities Reader and by various authors in the 2006 book Networked Neighbourhoods: The Connected Community in Context edited by Patrick Purcell. Additionally, contributions from architecture, design and planning scholars are contained in the 2007 journal special issue on "Space, Sociality, and Pervasive Computing" published in the journal Environment and Planning B: Planning and Design, 34(3), guest edited by the late Bharat Dave, as well as in the 2008 book Augmented Urban Spaces: Articulating the Physical and Electronic City, edited by Alessandro Aurigi and Fiorella De Cindio, based on contributions to the Digital Cities 4 workshop held in conjunction with the Communities and Technologies (C&T) conference 2005 in Milan, Italy. History: The first prominent and explicit use of the term "urban informatics" in the sociology and media studies literature appears in the 2007 special issue "Urban Informatics: Software, Cities and the New Cartographies of Knowing Capitalism" published in the journal Information, Communication & Society, 10(6), guest edited by Ellison, Burrows, & Parker. Later on, in 2013, Burrows and Beer argued that the socio-technical transformations described by research studies conducted in the field of urban informatics give reason for sociologists more broadly to not only question epistemological and methodological norms and practices but also to rethink spatial assumptions.In computer science, the sub-domains of human–computer interaction, ubiquitous computing, and urban computing provided early contributions that influenced the emerging field of urban informatics. Examples include the Digital Cities workshop series (see below), Greenfield's 2006 book Everyware: The Dawning Age of Ubiquitous Computing, and the 2006 special issue "Urban Computing: Navigating Space and Context" published in the IEEE journal Computer, 39(9), guest edited by Shklovski & Chang, and the 2007 special issue "Urban Computing" published in the IEEE journal Pervasive Computing, 6(3), guest edited by Kindberg, Chalmers, & Paulos. Digital Cities Workshop Series: The Digital Cities Workshop Series started in 1999 and is the longest running academic workshop series that has focused on, and profoundly influenced, the field of urban informatics. 
The first two workshops in 1999 and 2001 were both held in Kyoto, Japan, with subsequent workshops since 2003 held in conjunction with the biennial International Conference on Communities and Technologies (C&T). Digital Cities Workshop Series: Each Digital Cities workshop proceedings have become the basis for key anthologies listed below, which in turn have also been formative to a diverse set of emerging fields, including urban informatics, urban computing, smart cities, pervasive computing, internet of things, media architecture, urban interaction design, and urban science. Methods: The diverse range of people, groups and organisations involved in urban informatics is reflective of the diversity of methods being used in its pursuit and practice. As a result, urban informatics borrows from a wide range of methodologies across the social sciences, humanities, arts, design, architecture, planning (including geographic information systems), and technology (in particular computer science, pervasive computing, and ubiquitous computing), and applies those to the urban domain. Examples include: Action research and participatory action research Big data analytics and urban science Critical theory Cultural mapping Grounded theory Interaction design Participatory design Spatial analysis, including urban modelling, complex urban systems analysis, geographic information systems, and space syntax analysis User-centred design
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**3M computer** 3M computer: 3M was a goal first proposed in the early 1980s by Raj Reddy and his colleagues at Carnegie Mellon University (CMU) as a minimum specification for academic/technical workstations: at least a megabyte of memory, a megapixel display and a million instructions per second (MIPS) of processing power. It was also often said that it should cost no more than a "megapenny" ($10,000). 3M computer: At that time a typical desktop computer such as an early IBM Personal Computer might have 1/8 of a megabyte of memory (128K), 1/4 of a million pixels (640 × 400 monochrome display), and run at 1/3 million instructions per second (5 MHz 8088). The concept was inspired by the Xerox Alto, which had been designed in the 1970s at the Xerox Palo Alto Research Center. Several Altos were donated to CMU, Stanford, and MIT in 1979. 3M computer: An early 3M computer was the PERQ Workstation made by Three Rivers Computer Corporation. The PERQ had a processor running 1 million P-codes (Pascal instructions) per second, 256 KB of RAM (upgradeable to 1 MB), and a 768×1024 pixel image on a 15-inch (380 mm) display. While not quite a true 3M machine, it was used as the initial 3M machine for the CMU Scientific Personal Integrated Computing Environment (SPICE) workstation project. 3M computer: The Stanford University Network SUN workstation, designed by Andy Bechtolsheim in 1980, is another example. It was then commercialized by Sun Microsystems in 1982. Apollo Computer (in the Route 128 region) announced the Apollo/Domain computer in 1981. By 1986, CMU stated that it expected at least two companies to introduce 3M computers by the end of the year, with academic pricing of $3,000 and retail pricing of $5,000, and Stanford University planned to deploy them in computer labs. The first "megapenny" 3M workstation was the Sun-2/50 diskless desktop workstation with a list price of $8,900 in 1986. 3M computer: The original NeXT Computer was introduced in 1988 as a 3M machine by Steve Jobs, who first heard this term at Brown University. Its so-called "MegaPixel" display had just over 930,000 pixels (with 2 bits per pixel). However, floating-point performance, provided by the Motorola 68882 FPU, was only about 0.25 megaflops. Modern desktop computers exceed the 3M memory and speed requirements by many thousands of times; however, screen pixel counts are only 2 (in the case of 1080p) to 8 (in the case of 4K) times larger (but in full color, so each pixel uses at least 24 times as many bits).
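The fractions quoted above for the early IBM PC translate directly into a simple check against the three minimums. The sketch below is only an illustration of that arithmetic; the spec figures are the ones given in this article.

```python
# Check a machine's specs against the 3M workstation goal:
# >= 1 MB of memory, >= 1 megapixel of display, >= 1 MIPS.

def meets_3m(ram_bytes, pixels, mips):
    """True only if all three 3M minimums are met."""
    return ram_bytes >= 2**20 and pixels >= 1_000_000 and mips >= 1.0

# Early IBM PC figures quoted above: 128K RAM, 640x400 display, ~1/3 MIPS.
ibm_pc = dict(ram_bytes=128 * 1024, pixels=640 * 400, mips=1 / 3)
print(meets_3m(**ibm_pc))  # False: roughly 1/8, 1/4 and 1/3 of each target
```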
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UBC Okanagan Digital Microfluidics** UBC Okanagan Digital Microfluidics: The UBC Okanagan Digital Microfluidics Research Group is an interdisciplinary research group at University of British Columbia Okanagan that develops integrated devices for biochip applications. Lab-on-a-chip digital microfluidic devices are fabricated in digital architectures that merge micrometre-scale electrical circuitry with applications requiring dynamic fluid control, as voltage actuation signals from patterned electrodes are used to direct and actuate fluid flow within the chips. The structures are not application-specific. Fluid actuation signals for droplet mixing, splitting, and routing are set by the control software and can be reconfigured as needed and in real-time (unlike continuous-flow microfluidic structures incorporating micropumps, microvalves, and microchannels which are fabricated as permanent application-specific structures).
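As a rough illustration of the software-defined, reconfigurable actuation described above, the sketch below steps a droplet along a row of electrodes by switching pads on and off in sequence. The electrode indices, the apply_voltage stub, and the timing are hypothetical placeholders, not details of the group's actual control software.

```python
import time

# Hypothetical sketch: move a droplet on a digital microfluidic chip by
# energising the next electrode pad and releasing the previous one.
# apply_voltage() is a placeholder for the real hardware call.

def apply_voltage(electrode_index, on):
    print(f"electrode {electrode_index}: {'ON' if on else 'OFF'}")

def route_droplet(path, dwell_s=0.1):
    """Step a droplet along a list of electrode indices."""
    for prev, nxt in zip(path, path[1:]):
        apply_voltage(nxt, on=True)    # pull the droplet onto the next pad
        apply_voltage(prev, on=False)  # release the pad it came from
        time.sleep(dwell_s)

# Routes are plain data, so they can be changed at run time without
# changing the chip itself (unlike fixed microchannel layouts).
route_droplet([0, 1, 2, 3, 7, 11])
```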
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Branching order of bacterial phyla (Woese, 1987)** Branching order of bacterial phyla (Woese, 1987): There are several models of the branching order of bacterial phyla; one of these was proposed in a 1987 paper by Carl Woese. The branching order proposed by Carl Woese was based on molecular phylogeny, which was considered revolutionary as all preceding models were based on discussions of morphology (v. Monera). Several models have been proposed since, and no consensus has been reached at present as to the branching order of the major bacterial lineages. The gene used was the 16S ribosomal DNA. Tree: The names have been changed to reflect more current nomenclature used by molecular phylogenists. Note on names: Despite the impact of the paper on bacterial classification, it was not a proposal for a change of taxonomy. Consequently, many clades were not given official names; only subsequently did this occur: for example, the "purple bacteria and relatives" were renamed Proteobacteria. Discussion: In 1987, Carl Woese, regarded as the forerunner of the molecular phylogeny revolution, divided Eubacteria into 11 divisions based on 16S ribosomal RNA (SSU) sequences, listed below. Many new phyla have been proposed since then. Discussion: Purple Bacteria and their relatives (later renamed Proteobacteria) alpha subdivision (purple non-sulfur bacteria, rhizobacteria, Agrobacterium, Rickettsiae, Nitrobacter) beta subdivision (Rhodocyclus, (some) Thiobacillus, Alcaligenes, Spirillum, Nitrosovibrio) gamma subdivision (enterics, fluorescent pseudomonads, purple sulfur bacteria, Legionella, (some) Beggiatoa) delta subdivision (Sulfur and sulfate reducers (Desulfovibrio), Myxobacteria, Bdellovibrio) Gram-positive Eubacteria High-G+C species (later renamed Actinobacteria) (Actinomyces, Streptomyces, Arthrobacter, Micrococcus, Bifidobacterium) Low-G+C species (later renamed Firmicutes) (Clostridium, Peptococcus, Bacillus, Mycoplasma) Photosynthetic species (Heliobacteria) Species with Gram-negative walls (Megasphaera, Sporomusa) Cyanobacteria and chloroplasts (Aphanocapsa, Oscillatoria, Nostoc, Synechococcus, Gloeobacter, Prochloron) Spirochetes and relatives Spirochetes (Spirochaeta, Treponema, Borrelia) Leptospiras (Leptospira, Leptonema) Green sulfur bacteria (Chlorobium, Chloroherpeton) Bacteroides, Flavobacteria and relatives (later renamed Bacteroidetes) Bacteroides (Bacteroides, Fusobacterium) Flavobacterium group (Flavobacterium, Cytophaga, Saprospira, Flexibacter) Planctomyces and relatives (later renamed Planctomycetes) Planctomyces group (Planctomyces, Pasteuria [sic]) Thermophiles (Isocystis pallida) Chlamydiae (Chlamydia psittaci, Chlamydia trachomatis) Radioresistant micrococci and relatives (now commonly referred to as Deinococcus–Thermus or Thermi) Deinococcus group (Deinococcus radiodurans) Thermophiles (Thermus aquaticus) Green non-sulfur bacteria and relatives (later renamed Chloroflexi) Chloroflexus group (Chloroflexus, Herpetosiphon) Thermomicrobium group (Thermomicrobium roseum) Thermotogae (Thermotoga maritima) Last universal common ancestor: The root of the tree, i.e. the node of the last universal common ancestor, is placed between the domain Bacteria (or kingdom Eubacteria as it was then known) and the clade formed by the domains Archaea (formerly kingdom Archaebacteria) and Eukaryotes.
This is consistent with all subsequent studies, bar the study by Thomas Cavalier-Smith in 2002 and 2004, which was not based on molecular phylogeny. Eukaryotes are a mosaic of different lineages: The genome in the nucleus descends from the first organelle-less eukaryote, the "urkaryote", a sister species to the ancestral archaeon. The mitochondria are organelles that descended from the proto-mitochondrion, a species of Rickettsiales (Alphaproteobacteria) (v. Reclinomonas and retortamonads). Last universal common ancestor: The chloroplasts are organelles of cyanobacterial origin. Consequently, in Woese (1987) the group is referred to as urkaryote. The clade composed of Archaea and the nuclear genome of eukaryotes is called Neomura by T. Cavalier-Smith.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cell CANARY** Cell CANARY: Cell CANARY (Cellular Analysis and Notification of Antigen Risks and Yields) is a recent technology that uses genetically engineered B cells to identify pathogens. Existing pathogen detection technologies include the Integrated Biological Detection System and the Joint Chemical Agent Detector. History: In 2007, Benjamin Shapiro, Pamela Abshire, Elisabeth Smela, and Denis Wirtz were granted a patent entitled “Cell Canaries for Biochemical Pathogen Detection”. They have successfully manipulated the sensors so that they are sensitive to exposure to certain dangers, such as explosive materials or biological pathogens. What sets CANARY apart from the other methods is that the system is quicker and has a lower number of false readings. History: Existing pathogen detection methods required that a sample be packaged and sent to a lab where techniques such as mass spectrometry and polymerase chain reaction ultimately provided a blueprint of the nucleotide sequences present in a sample. The pathogen was then determined based on a database of pathogen nucleotides on file. This often resulted in a large number of false positives and false negatives due to the non-specific nature of nucleotide binding. These techniques also required time that is not feasible in imminent situations. Method: Cell CANARY is one of the newest, fastest, and most viable approaches to pathogen detection in a sample. It has the ability to detect pathogens in a variety of media, both liquid and air, at a fraction of the concentration that older methods required to produce a viable signal. CANARY uses the B cell, a form of white blood cell that forms the basis for natural human defense. An array of these B cells is attached to a chip. Genes for producing antibodies are naturally on in these B cells, which allows antibodies to coat the exterior surface of the cells. The genes coding for antibodies are then up-regulated in these cells, which allows for greater antibody production and therefore more of the cell surface to be coated in antibodies. This engineering principle allows lower concentrations of antigen to be detected by the cells. Antigens can then bind to the antibodies, resulting in a few naturally occurring B-cell reactions. At the final step of these reactions, Ca2+ ions are released, and in the presence of aequorin, photons are emitted. Aequorin is a photoprotein that can be extracted from luminescent marine organisms such as jellyfish. The emitted photons can then be read by the chip to which the array of modified B cells is attached, ultimately providing a readout of the pathogen(s) present. Method: A unique set of responses is exhibited after exposure to each individual pathogen. Therefore, cells will react differently to the introduction of a specific pathogen; the specific way in which the “canary” cells respond indicates the unique identity of the pathogen that has been introduced. The more responses of a cell to a pathogen that are measured, the more precisely the pathogen can be identified. Finally, after determining the presence and identity of the pathogen, all infected people can be effectively treated. Application: Specific aspects of this complicated process still need improvement.
Some of the challenges include "building circuits that can interact with the cells and transmit alerts about their condition", developing technology to control the position of the cells on the chip, keeping the cells viable once on the chip and creating a living environment that supports the cells but protects the sensitive parts of the sensor. Application: The implications of a faster pathogen detection technology are widespread. A patient would be able to visit a medical professional, provide a sample of blood or urine, and get an analysis within minutes. No longer would the patient and doctor have to wait on lab results to determine the presence of foreign bodies. The military would be able to test air samples and water samples to discover threats immediately before dispatching. High profile and even regular office buildings could have these sensors in every corridor to proactively hunt out air-borne pathogens, leaving enough time for evacuation. This goes back to the idea of “canary in a coal mine”, where the B cells act as the canary to sniff out danger ahead of time.
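To illustrate the identification principle from the Method section, where each pathogen produces a distinctive pattern of responses across the B-cell array, here is a minimal, purely illustrative sketch that matches a measured response signature against a small reference library. The signature values and pathogen names are invented placeholders; CANARY's actual readout and classification are more involved.

```python
# Illustrative only: identify a pathogen by comparing a measured
# light-emission signature (one value per engineered B-cell line)
# against reference signatures. Names and values are placeholders.

REFERENCE = {
    "pathogen_A": [0.9, 0.1, 0.3],
    "pathogen_B": [0.2, 0.8, 0.4],
    "pathogen_C": [0.1, 0.2, 0.9],
}

def identify(measured):
    """Return the reference pathogen whose signature is closest (squared error)."""
    def dist(name):
        return sum((m - r) ** 2 for m, r in zip(measured, REFERENCE[name]))
    return min(REFERENCE, key=dist)

print(identify([0.85, 0.15, 0.25]))  # -> pathogen_A
```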
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Calcium in biology** Calcium in biology: Calcium ions (Ca2+) contribute to the physiology and biochemistry of organisms' cells. They play an important role in signal transduction pathways, where they act as a second messenger, in neurotransmitter release from neurons, in contraction of all muscle cell types, and in fertilization. Many enzymes require calcium ions as a cofactor, including several of the coagulation factors. Extracellular calcium is also important for maintaining the potential difference across excitable cell membranes, as well as proper bone formation. Calcium in biology: Plasma calcium levels in mammals are tightly regulated, with bone acting as the major mineral storage site. Calcium ions, Ca2+, are released from bone into the bloodstream under controlled conditions. Calcium is transported through the bloodstream as dissolved ions or bound to proteins such as serum albumin. Parathyroid hormone secreted by the parathyroid gland regulates the resorption of Ca2+ from bone, reabsorption in the kidney back into circulation, and increases in the activation of vitamin D3 to calcitriol. Calcitriol, the active form of vitamin D3, promotes absorption of calcium from the intestines and bones. Calcitonin secreted from the parafollicular cells of the thyroid gland also affects calcium levels by opposing parathyroid hormone; however, its physiological significance in humans is dubious. Calcium in biology: Intracellular calcium is stored in organelles which repetitively release and then reaccumulate Ca2+ ions in response to specific cellular events: storage sites include mitochondria and the endoplasmic reticulum.Characteristic concentrations of calcium in model organisms are: in E. coli 3mM (bound), 100nM (free), in budding yeast 2mM (bound), in mammalian cell 10-100nM (free) and in blood plasma 2mM. Humans: In 2020, calcium was the 204th most commonly prescribed medication in the United States, with more than 2 million prescriptions. Humans: Dietary recommendations The U.S. Institute of Medicine (IOM) established Recommended Dietary Allowances (RDAs) for calcium in 1997 and updated those values in 2011. See table. The European Food Safety Authority (EFSA) uses the term Population Reference Intake (PRIs) instead of RDAs and sets slightly different numbers: ages 4–10 800 mg, ages 11–17 1150 mg, ages 18–24 1000 mg, and >25 years 950 mg.Because of concerns of long-term adverse side effects such as calcification of arteries and kidney stones, the IOM and EFSA both set Tolerable Upper Intake Levels (ULs) for the combination of dietary and supplemental calcium. From the IOM, people ages 9–18 years are not supposed to exceed 3,000 mg/day; for ages 19–50 not to exceed 2,500 mg/day; for ages 51 and older, not to exceed 2,000 mg/day. The EFSA set UL at 2,500 mg/day for adults but decided the information for children and adolescents was not sufficient to determine ULs. Humans: Labeling For U.S. food and dietary supplement labeling purposes the amount in a serving is expressed as a percent of Daily Value (%DV). For calcium labeling purposes 100% of the Daily Value was 1000 mg, but as of May 27, 2016 it was revised to 1300 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake. 
Humans: Health claims Although as a general rule, dietary supplement labeling and marketing are not allowed to make disease prevention or treatment claims, the FDA has for some foods and dietary supplements reviewed the science, concluded that there is significant scientific agreement, and published specifically worded allowed health claims. An initial ruling allowing a health claim for calcium dietary supplements and osteoporosis was later amended to include calcium and vitamin D supplements, effective January 1, 2010. Examples of allowed wording are shown below. In order to qualify for the calcium health claim, a dietary supplement must contain at least 20% of the Reference Dietary Intake, which for calcium means at least 260 mg/serving. Humans: "Adequate calcium throughout life, as part of a well-balanced diet, may reduce the risk of osteoporosis." "Adequate calcium as part of a healthful diet, along with physical activity, may reduce the risk of osteoporosis in later life." "Adequate calcium and vitamin D throughout life, as part of a well-balanced diet, may reduce the risk of osteoporosis." "Adequate calcium and vitamin D as part of a healthful diet, along with physical activity, may reduce the risk of osteoporosis in later life." In 2005 the FDA approved a Qualified Health Claim for calcium and hypertension, with suggested wording "Some scientific evidence suggests that calcium supplements may reduce the risk of hypertension. However, FDA has determined that the evidence is inconsistent and not conclusive." Evidence for pregnancy-induced hypertension and preeclampsia was considered inconclusive. The same year the FDA approved a QHC for calcium and colon cancer, with suggested wording "Some evidence suggests that calcium supplements may reduce the risk of colon/rectal cancer, however, FDA has determined that this evidence is limited and not conclusive." Evidence for breast cancer and prostate cancer was considered inconclusive. Proposals for QHCs for calcium as protective against kidney stones or against menstrual disorders or pain were rejected. The European Food Safety Authority (EFSA) concluded that "Calcium contributes to the normal development of bones." The EFSA rejected a claim that a cause-and-effect relationship existed between the dietary intake of calcium and potassium and maintenance of normal acid-base balance. The EFSA also rejected claims for calcium and nails, hair, blood lipids, premenstrual syndrome and body weight maintenance. Humans: Food sources The United States Department of Agriculture (USDA) web site has a very complete searchable table of calcium content (in milligrams) in foods, per common measures such as per 100 grams or per a normal serving. Humans: Measurement in blood The amount of calcium in blood (more specifically, in blood plasma) can be measured as total calcium, which includes both protein-bound and free calcium. In contrast, ionized calcium is a measure of free calcium. An abnormally high level of calcium in plasma is termed hypercalcemia and an abnormally low level is termed hypocalcemia, with "abnormal" generally referring to levels outside the reference range. Humans: The main methods to measure serum calcium are: o-Cresolphthalein Complexone Method; A disadvantage of this method is that the volatile nature of the 2-amino-2-methyl-1-propanol used in this method makes it necessary to calibrate the method every few hours in a clinical laboratory setup.
Humans: Arsenazo III Method; This method is more robust, but the arsenic in the reagent is a health hazard. The total amount of Ca2+ present in a tissue may be measured using atomic absorption spectroscopy, in which the tissue is vaporized and combusted. To measure Ca2+ concentration or spatial distribution within the cell cytoplasm in vivo or in vitro, a range of fluorescent reporters may be used. These include cell permeable, calcium-binding fluorescent dyes such as Fura-2 or a genetically engineered variant of green fluorescent protein (GFP) named Cameleon. Humans: Corrected calcium As access to an ionized calcium measurement is not always available, a corrected calcium may be used instead. To calculate a corrected calcium in mmol/L, one takes the total calcium in mmol/L and adds 0.02 multiplied by (40 minus the serum albumin in g/L), as illustrated in the short calculation sketch at the end of this article. There is, however, controversy around the usefulness of corrected calcium as it may be no better than total calcium. It may be more useful to correct total calcium for both albumin and the anion gap. Other animals: Vertebrates In vertebrates, calcium ions, like many other ions, are of such vital importance to many physiological processes that their concentration is maintained within specific limits to ensure adequate homeostasis. This is evidenced by human plasma calcium, which is one of the most closely regulated physiological variables in the human body. Normal plasma levels vary by only 1 to 2% at any given time. Approximately half of all plasma calcium circulates in its free, ionized form, with the other half being complexed with plasma proteins such as albumin, as well as anions including bicarbonate, citrate, phosphate, and sulfate. Other animals: Different tissues contain calcium in different concentrations. For instance, Ca2+ (mostly calcium phosphate and some calcium sulfate) is the most important (and specific) element of bone and calcified cartilage. In humans, the total body content of calcium is present mostly in the form of bone mineral (roughly 99%). In this state, it is largely unavailable for exchange/bioavailability. The way to overcome this is through the process of bone resorption, in which calcium is liberated into the bloodstream through the action of bone osteoclasts. The remainder of calcium is present within the extracellular and intracellular fluids. Other animals: Within a typical cell, the intracellular concentration of ionized calcium is roughly 100 nM, but is subject to increases of 10- to 100-fold during various cellular functions. The intracellular calcium level is kept relatively low with respect to the extracellular fluid, by an approximate magnitude of 12,000-fold. This gradient is maintained through various plasma membrane calcium pumps that utilize ATP for energy, as well as a sizable storage within intracellular compartments. In electrically excitable cells, such as skeletal and cardiac muscles and neurons, membrane depolarization leads to a Ca2+ transient with cytosolic Ca2+ concentration reaching around 1 µM. Mitochondria are capable of sequestering and storing some of that Ca2+. It has been estimated that mitochondrial matrix free calcium concentration rises to the tens of micromolar levels in situ during neuronal activity. Other animals: Effects The effects of calcium on human cells are specific, meaning that different types of cells respond in different ways. However, in certain circumstances, its action may be more general. Ca2+ ions are one of the most widespread second messengers used in signal transduction. 
They make their entrance into the cytoplasm either from outside the cell through the cell membrane via calcium channels (such as calcium-binding proteins or voltage-gated calcium channels), or from some internal calcium storages such as the endoplasmic reticulum and mitochondria. Levels of intracellular calcium are regulated by transport proteins that remove it from the cell. For example, the sodium-calcium exchanger uses energy from the electrochemical gradient of sodium by coupling the influx of sodium into the cell (and down its concentration gradient) with the transport of calcium out of the cell. In addition, the plasma membrane Ca2+ ATPase (PMCA) obtains energy to pump calcium out of the cell by hydrolysing adenosine triphosphate (ATP). In neurons, voltage-dependent, calcium-selective ion channels are important for synaptic transmission through the release of neurotransmitters into the synaptic cleft by vesicle fusion of synaptic vesicles. Other animals: Calcium's function in muscle contraction was found as early as 1882 by Ringer. Subsequent investigations were to reveal its role as a messenger about a century later. Because its action is interconnected with cAMP, they are called synarchic messengers. Calcium can bind to several different calcium-modulated proteins such as troponin-C (the first one to be identified) and calmodulin, proteins that are necessary for promoting contraction in muscle. Other animals: In the endothelial cells which line the inside of blood vessels, Ca2+ ions can regulate several signaling pathways which cause the smooth muscle surrounding blood vessels to relax. Some of these Ca2+-activated pathways include the stimulation of eNOS to produce nitric oxide, as well as the stimulation of KCa channels to efflux K+ and cause hyperpolarization of the cell membrane. Both nitric oxide and hyperpolarization cause the smooth muscle to relax in order to regulate the amount of tone in blood vessels. However, dysfunction within these Ca2+-activated pathways can lead to an increase in tone caused by unregulated smooth muscle contraction. This type of dysfunction can be seen in cardiovascular diseases, hypertension, and diabetes. Calcium coordination plays an important role in defining the structure and function of proteins. An example of a protein with calcium coordination is von Willebrand factor (vWF), which has an essential role in the blood clot formation process. It was discovered using single-molecule optical tweezers measurements that calcium-bound vWF acts as a shear force sensor in the blood. Shear force leads to unfolding of the A2 domain of vWF, whose refolding rate is dramatically enhanced in the presence of calcium. Other animals: Adaptation Ca2+ ion flow regulates several secondary messenger systems in neural adaptation for the visual, auditory, and olfactory systems. It may often be bound to calmodulin such as in the olfactory system to either enhance or repress cation channels. Other times the calcium level change can actually release guanylyl cyclase from inhibition, like in the photoreception system. Ca2+ ion can also determine the speed of adaptation in a neural system depending on the receptors and proteins that have varied affinity for detecting levels of calcium to open or close channels at high concentration and low concentration of calcium in the cell at that time. 
Other animals: Negative effects and pathology Substantial decreases in extracellular Ca2+ ion concentrations may result in a condition known as hypocalcemic tetany, which is marked by spontaneous motor neuron discharge. In addition, severe hypocalcaemia will begin to affect aspects of blood coagulation and signal transduction. Other animals: Ca2+ ions can damage cells if they enter in excessive numbers (for example, in the case of excitotoxicity, or over-excitation of neural circuits, which can occur in neurodegenerative diseases, or after insults such as brain trauma or stroke). Excessive entry of calcium into a cell may damage it or even cause it to undergo apoptosis or death by necrosis. Calcium also acts as one of the primary regulators of osmotic stress (osmotic shock). Chronically elevated plasma calcium (hypercalcemia) is associated with cardiac arrhythmias and decreased neuromuscular excitability. One cause of hypercalcemia is a condition known as hyperparathyroidism. Other animals: Invertebrates Some invertebrates use calcium compounds for building their exoskeleton (shells and carapaces) or endoskeleton (echinoderm plates and poriferan calcareous spicules). Plants: Stomata closing When abscisic acid signals the guard cells, free Ca2+ ions enter the cytosol from both outside the cell and internal stores, reversing the concentration gradient so the K+ ions begin exiting the cell. The loss of solutes makes the cell flaccid and closes the stomatal pores. Plants: Cellular division Calcium is a necessary ion in the formation of the mitotic spindle. Without the mitotic spindle, cellular division cannot occur. Although young leaves have a higher need for calcium, older leaves contain higher amounts of calcium because calcium is relatively immobile through the plant. It is not transported through the phloem because it can bind with other nutrient ions and precipitate out of liquid solutions. Plants: Structural roles Ca2+ ions are an essential component of plant cell walls and cell membranes, and are used as cations to balance organic anions in the plant vacuole. The Ca2+ concentration of the vacuole may reach millimolar levels. The most striking use of Ca2+ ions as a structural element in algae occurs in the marine coccolithophores, which use Ca2+ to form the calcium carbonate plates, with which they are covered. Plants: Calcium is needed to form the pectin in the middle lamella of newly formed cells. Calcium is needed to stabilize the permeability of cell membranes. Without calcium, the cell walls are unable to stabilize and hold their contents. This is particularly important in developing fruits. Without calcium, the cell walls are weak and unable to hold the contents of the fruit. Some plants accumulate Ca in their tissues, thus making them more firm. Calcium is stored as Ca-oxalate crystals in plastids. Plants: Cell signaling Ca2+ ions are usually kept at nanomolar levels in the cytosol of plant cells, and act in a number of signal transduction pathways as second messengers.
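The albumin correction described under "Corrected calcium" above amounts to a one-line calculation, sketched below. The input values are arbitrary illustrative numbers, not clinical guidance, and (as noted above) the usefulness of the corrected figure is debated.

```python
# Albumin-corrected calcium, as described in the "Corrected calcium"
# paragraph above. Example inputs are illustrative only.

def corrected_calcium_mmol_l(total_ca_mmol_l, albumin_g_l):
    """Corrected Ca (mmol/L) = total Ca + 0.02 * (40 - albumin in g/L)."""
    return total_ca_mmol_l + 0.02 * (40 - albumin_g_l)

print(corrected_calcium_mmol_l(total_ca_mmol_l=2.10, albumin_g_l=30))  # 2.3
```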
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flash cut** Flash cut: A flash cut, also called a flash cutover, is an immediate change in a complex system, with no phase-in period. Flash cut: In the United States, some telephone area codes were split or overlaid immediately, rather than being phased in with a permissive dialing period. An example is telephone area code 213, which serves downtown Los Angeles and its immediate environs and which was split into 213 and 714 all at once in January 1951. Another example is an immediate switch from an analog television channel to a digital television channel on the same frequency, where the two cannot operate in parallel without interference. Flash cut: A flash cut can also refer to a procedure in which multiple components of computer infrastructure are upgraded in multiple ways, all at once, with no phase-in period. In film, a flash cut is an extremely brief shot, sometimes as short as one frame, which is nearly subliminal in effect; the term also describes a series of short staccato shots that create a rhythmic effect.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Veltuzumab** Veltuzumab: Veltuzumab is a monoclonal antibody (targeted at CD20) which is being investigated for the treatment of non-Hodgkin's lymphoma. As of December 2011, it was undergoing Phase I/II clinical trials. When used with milatuzumab it showed activity. This drug was developed by Immunomedics, Inc. and was originally known as IMMU-106. In August 2015 the US FDA granted it orphan drug status for immune thrombocytopenia (ITP). A phase II trial is planned to run for 5 years.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Combining Diacritical Marks for Symbols** Combining Diacritical Marks for Symbols: Combining Diacritical Marks for Symbols is a Unicode block containing arrows, dots, enclosures, and overlays for modifying symbol characters. Its block name in Unicode 1.0 was simply Diacritical Marks for Symbols. History: The following Unicode-related documents record the purpose and process of defining specific characters in the Combining Diacritical Marks for Symbols block:
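As a quick illustration of how characters from this block are used, the snippet below attaches combining marks to base symbols. It assumes only that U+20D7 (COMBINING RIGHT ARROW ABOVE) and U+20DD (COMBINING ENCLOSING CIRCLE) are members of the block, which spans U+20D0 to U+20FF.

```python
import unicodedata

# Combining marks from this block follow the base character they modify.
vector_v = "v\u20d7"      # v + COMBINING RIGHT ARROW ABOVE -> vector notation
circled_plus = "+\u20dd"  # + + COMBINING ENCLOSING CIRCLE  -> circled plus

for s in (vector_v, circled_plus):
    base, mark = s
    print(s, f"U+{ord(mark):04X}", unicodedata.name(mark))
```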
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Art dealer** Art dealer: An art dealer is a person or company that buys and sells works of art, or acts as the intermediary between the buyers and sellers of art. An art dealer in contemporary art typically seeks out various artists to represent, and builds relationships with collectors and museums whose interests are likely to match the work of the represented artists. Some dealers are able to anticipate market trends, while some prominent dealers may be able to influence the taste of the market. Many dealers specialize in a particular style, period, or region. They often travel internationally, frequenting exhibitions, auctions, and artists' studios looking for good buys, little-known treasures, and exciting new works. When dealers buy works of art, they resell them either in their galleries or directly to collectors. Those who deal in contemporary art in particular usually exhibit artists' works in their own galleries. They will often take part in preparing the works of art to be revealed or processed. Art dealers' professional associations serve to set high standards for accreditation or membership and to support art exhibitions and shows. History: The art dealer as a distinct profession perhaps emerged in the Italian Renaissance, in particular to feed the new appetite among collectors for classical antiquities, including coins. The somewhat disreputable character of Jacopo Strada is often said to be reflected in his portrait by Titian (1567). Job requirements: Art dealers often study the history of art before entering their careers. Related careers that often cross over include museum curation and work at art auction firms. Gallery owners who do not succeed may seek to work for more successful galleries. Others pursue careers as art critics, academics, curators of museums or auction houses, or practicing artists. Dealers have to understand the business side of the art world. They keep up with trends in the market and are knowledgeable about the style of art people want to buy. They figure out how much they should pay for a piece and then estimate the resale price. They are also often passionate and knowledgeable about art. Those who deal with contemporary art promote new artists, creating a market for the artists' works and securing financial success for themselves. The art world is subject to economic booms and busts just like any other market. Art dealers must be economically conscious in order to maintain their livelihoods. The markups on works of art must be carefully monitored. If prices and profits are too large, then investments may be devalued should an overstock or economic downturn occur. To determine an artwork's value, dealers inspect the objects or paintings closely, and compare the fine details with similar pieces. Some dealers with many years of experience learn to identify unsigned works by examining stylistic features such as brush strokes, color, and form. They recognize the styles of different periods and individual artists. Often art dealers are able to distinguish authentic works from forgeries (although even dealers are sometimes fooled). Contemporary gallery: The term contemporary art gallery refers to a private for-profit commercial gallery. These galleries are found clustered together in large urban centers. Smaller cities are home to at least one gallery, but they may also be found in towns or villages, and remote areas where artists congregate, e.g. the Taos art colony and St Ives, Cornwall. 
Contemporary gallery: Contemporary art galleries are often open to the general public without charge; however, some are semi-private. They profit by taking a portion of art sales; twenty-five to fifty per cent is typical. There are also many non-profit or collective galleries. Some galleries in cities like Tokyo charge the artists a flat rate per day, though this is considered distasteful in some international art markets. Galleries often hang solo shows. Curators often create group shows with a message about a certain theme, trend in art, or group of associated artists. Galleries sometimes choose to represent exclusive artists, giving them opportunities for regular shows. Contemporary gallery: A gallery's definition can also include the artist cooperative or artist-run space, which often (in North America and Western Europe) operates as a space with a more democratic mission and selection process. Such galleries have a board of directors and a volunteer or paid support staff who select and curate shows by committee, or some kind of similar process to choose art often lacking commercial ends. Vanity galleries: A vanity gallery is an art gallery charging fees from artists to show their work, much like a vanity press does for authors. The shows lack legitimate curation and often include as many artists as possible. Most art professionals are able to identify them on an artist's resume. Professional organizations: Antique Tribal Art Dealers Association, Inc. (ATADA) Art Dealers Association of America (ADAA) Art and Antique Dealers League of America (AADLA) Association of Art and Antiques Dealers (LAPADA) British Art Market Federation (BAMF) British Antique Dealers' Association (BADA) Confédération Internationale des Négociants en Oeuvres d'Art (CINOA) Fine Art Dealers Association (FADA) French art dealers committee New Art Dealers Alliance (NADA) Private Art Dealers Association (PADA) Society of London Art Dealers (SLAD) The European Fine Art Foundation (TEFAF)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Timeline of category theory and related mathematics** Timeline of category theory and related mathematics: This is a timeline of category theory and related mathematics. Its scope ("related mathematics") is taken as: Categories of abstract algebraic structures including representation theory and universal algebra; Homological algebra; Homotopical algebra; Topology using categories, including algebraic topology, categorical topology, quantum topology, low-dimensional topology; Categorical logic and set theory in the categorical context such as algebraic set theory; Foundations of mathematics building on categories, for instance topos theory; Abstract geometry, including algebraic geometry, categorical noncommutative geometry, etc. Timeline of category theory and related mathematics: Quantization related to category theory, in particular categorical quantization; Categorical physics relevant for mathematics.In this article, and in category theory in general, ∞ = ω.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Conversational model** Conversational model: The conversational model of psychotherapy was devised by the English psychiatrist Robert Hobson, and developed by the Australian psychiatrist Russell Meares. Hobson listened to recordings of his own psychotherapeutic practice with more disturbed clients, and became aware of the ways in which a patient's self—their unique sense of personal being—can come alive and develop, or be destroyed, in the flux of the conversation in the consulting room. Conversational model: The conversational model views the aim of therapy as allowing the growth of the patient's self through encouraging a form of conversational relating called 'aloneness-togetherness'. This phrase is reminiscent of Winnicott's idea of the importance of being able to be 'alone in the presence of another'. The client comes to eventually feel recognised, accepted and understood as who they are; their sense of personal being, or self, is fostered; and they can start to drop the destructive defenses which disrupt their sense of personal being. Conversational model: The development of the self implies a capacity to embody and span the dialectic of 'aloneness-togetherness'—rather than being disposed toward either schizoid isolation (aloneness) or merging identification with the other (togetherness). Although the therapy is described as psychodynamic, and is accordingly concerned to identify activity and personal meaning in the midst of apparent passivity, it relies more on careful empathic listening and the development of a common 'feeling language' than it does on psychoanalytic interpretation. Psychodynamic Interpersonal Therapy (PIT): In its manualised form ('PIT'), the conversational model is presented as having seven interconnected components. These are: Developing an exploratory rationale: Together with the patient generate an understanding which links emotional or somatic symptoms with interpersonal difficulties Shared understanding: In developing a shared understanding, the therapist uses statements rather than questions, uses mutual ('I' and 'We') language, deploys conditional rather than absolute statements of understanding, allows metaphorical elaborations of the patient's experience to unfold, and makes tentative interpretations or 'hypotheses' about the meaning of the patient's experience. Psychodynamic Interpersonal Therapy (PIT): Focus on the 'here and now': Feelings that are present in the room are encouraged; abstract talk about feelings by the therapist is discouraged. Focus on difficult feelings: Gently commenting on the presence of hidden feelings or the absence of expected feelings. Gaining insight: Interpretations are provided which link the dynamics of the current therapeutic interaction with problematic present and past interactions in the patient's life. Sequencing interpretations: The therapist does not jump in with explanatory interpretations before laying the groundwork of the therapeutic relationship and jointly understanding the emotions present in the room. Acknowledging change: Emotional changes that are made by the patient during therapy are offered positive reinforcement. Research: The conversational model, which has been manualised as Psychodynamic-Interpersonal Therapy, has been subject to outcome research, and has demonstrated effectiveness in the treatment of depression, psychosomatic disorders, self-harm, and borderline personality disorder.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Consed** Consed: Consed is a program for viewing, editing, and finishing DNA sequence assemblies. Originally developed for sequence assemblies created with phrap, recent versions also support other sequence assembly programs like Newbler. History: Consed was originally developed as a contig editing and finishing tool for large-scale cosmid shotgun sequencing in the Human Genome Project. At genome sequencing centers, Consed was used to check assemblies generated by phrap, solve assembly problems like those caused by highly identical repeats, and perform finishing tasks like primer picking and gap closure. Development of Consed has continued after the completion of the Human Genome Project. Current Consed versions support very large projects with millions of reads, enabling its use with newer sequencing methods like 454 sequencing and Solexa sequencing. Consed also has advanced tools for finishing tasks like automated primer picking.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aerobiological engineering** Aerobiological engineering: Aerobiological engineering is the science of designing buildings and systems to control airborne pathogens and allergens in indoor environments. The most common environments include commercial buildings, residences and hospitals. This field of study is important because controlled indoor climates generally tend to favor the survival and transmission of contagious human pathogens as well as certain kinds of fungi and bacteria. Aerobiological engineering in healthcare facilities: Since healthcare facilities can house a number of different types of patients who potentially have weakened immune systems, aerobiological engineering is of significant importance to engineers of hospitals. The aerobiology that concerns designers of hospitals includes viruses, bacteria, fungi, and other microbiological products such as endotoxins, mycotoxins, and microbial volatile organic compounds (MVOCs). Bacteria and viruses, because of their small size, readily become airborne as bacterial aerosols and can remain suspended in the air for hours. Because of this, adequate precautions and mitigation techniques need to be taken with indoor air quality in hospitals dealing with infectious diseases. Aerobiological engineering in healthcare facilities: Ventilation systems At a minimum, ventilation systems provide dilution and removal of airborne contaminants, which in general leads to improved indoor air quality and happier occupants. If filters are checked and replaced as needed, they can form an integral component of an immune building system designed to prevent the spread of diseases by airborne routes. They can also be used for pressurization of areas within buildings to provide contamination control. Aerobiological engineering in healthcare facilities: Biocontamination in ventilation systems Ventilation systems can contribute to the microbial loading of the indoor environment by drawing in microbes from outdoor air and by creating conditions for growth. When microbes land on a wet filter that has been collecting dust, they have the perfect medium on which to grow, and if they grow through the filter they have the potential to be aerosolized and carried throughout the building via the HVAC control system. Aerobiological engineering in healthcare facilities: Dilution rates Bacteria in hospitals can be aerosolized when sick patients cough and sneeze and because of the large number of germs produced it is necessary that the number of air changes per hour (ACH) remain high in treatment and operating rooms. The American Society of Heating, Refrigerating and Air-Conditioning Engineers typically recommends 12-25 ACH in treatment and operating rooms and 4-6 ACH in intensive-care rooms. For rooms containing tuberculosis patients, the Centers for Disease Control and Prevention recommends an ACH of 6 to 12, with exhaust air being sent through high-efficiency-particulate-air (HEPA) filters before being sent outside. Aerobiological engineering in healthcare facilities: Pressurized isolation rooms In order to keep patients safe, hospitals use a range of technologies to combat airborne pathogens. Isolation rooms can be designed to feature positive or negative air-pressure flows. Positive-pressure rooms are used when there are patients who are extremely susceptible to disease, such as HIV patients. For these patients, it is paramount to prevent the ingress of any microorganisms, including common fungi and bacteria that may be harmless to healthy people. 
These systems filter the air before delivery with a HEPA filter and then pump it into the isolation room at high pressure, which forces air from the isolation room out into the hallway. In a negative-pressure system, the focus is on keeping infectious diseases isolated by controlling the airflow and directing harmful aerosols away from health care workers and other occupied areas. Negative pressure isolation rooms keep contaminants and pathogens from reaching external areas. The most common application of these rooms in the health industry today is for isolating tuberculosis patients. To do this, the air is exhausted from the room at a rate greater than that at which it is being delivered. This makes it difficult for airborne disease to go from a contaminated area to a hospital hallway, because air is constantly being drawn into the room rather than escaping from it. Aerobiological engineering in healthcare facilities: Air sterilization processes The normal means for filtration in healthcare facilities is low-efficiency air filters outside the air-handling unit followed by the HEPA (High Efficiency Particulate Air) filters placed after the air-handling unit. To be HEPA-certified, filters must remove particles of 0.3 μm diameter, with at least a 99.97-percent efficiency. Air burners sterilize air that is leaving contaminated isolation rooms by heating it to 300 °C (572 °F) for six seconds. Ultraviolet germicidal irradiation (UVGI) is another technique for special-purpose air sterilization. It is defined as electromagnetic radiation in the range of about 200 to 320 nm, that is used to destroy microorganisms. When HEPA filters are used in conjunction with UV sterilization tools, the results can be extremely effective. The filter will remove the bigger, hardier spores, and all that is left are the smaller microbes which are killed more efficiently by the high-intensity UV treatment.
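The air-change guidance quoted under "Dilution rates" above translates directly into a required airflow for a given room. The sketch below does that arithmetic; the room dimensions and the 12 ACH target are illustrative values, not recommendations.

```python
# Convert an air-changes-per-hour (ACH) target into a required airflow.
# Example numbers are illustrative; 12 ACH is within the range quoted
# above for treatment and operating rooms.

def required_airflow_m3_per_h(room_volume_m3, ach):
    """Airflow needed to achieve the given number of air changes per hour."""
    return room_volume_m3 * ach

room_volume = 5.0 * 6.0 * 3.0          # 5 m x 6 m room, 3 m ceiling = 90 m^3
flow = required_airflow_m3_per_h(room_volume, ach=12)
print(f"{flow:.0f} m^3/h (= {flow * 0.5886:.0f} CFM)")  # 1 m^3/h ~ 0.5886 CFM
```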
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Holmium(III) iodide** Holmium(III) iodide: Holmium(III) iodide is an iodide of holmium, with the chemical formula of HoI3. It is used as a component of metal halide lamps. Preparation: Holmium(III) iodide can be obtained by directly reacting holmium and iodine: 2 Ho + 3 I2 → 2 HoI3. Holmium(III) iodide can also be obtained via the direct reaction between holmium and mercury(II) iodide: 2 Ho + 3 HgI2 → 2 HoI3 + 3 Hg. The mercury produced in the reaction can be removed by distillation. Holmium(III) iodide hydrate can be converted to the anhydrous form by dehydration with a large excess of ammonium iodide (since the compound is prone to hydrolysis). Properties: Holmium(III) iodide is a highly hygroscopic substance that dissolves in water. It forms yellow hexagonal crystals with a crystal structure similar to bismuth(III) iodide. In air, it quickly absorbs moisture and forms hydrates. The corresponding oxide iodide is also readily formed at elevated temperature.
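The preparation reactions above fix the reagent proportions, so the amounts needed follow from simple stoichiometry. The minimal Python sketch below is illustrative only; the atomic masses are standard reference values assumed here rather than figures given in the article.

```python
# Stoichiometry sketch for the direct synthesis 2 Ho + 3 I2 -> 2 HoI3.
# Atomic masses (g/mol) are standard reference values, assumed here.
M_HO = 164.930   # holmium
M_I = 126.904    # iodine
M_HOI3 = M_HO + 3 * M_I  # molar mass of HoI3, about 545.6 g/mol

def reagents_for(target_g_hoi3: float) -> dict:
    """Grams of Ho and I2 needed for target_g_hoi3 grams of HoI3."""
    mol_hoi3 = target_g_hoi3 / M_HOI3
    # Per mole of HoI3: 1 mol Ho and 1.5 mol I2 (from the 2:3:2 ratio).
    return {
        "Ho (g)": mol_hoi3 * M_HO,
        "I2 (g)": mol_hoi3 * 1.5 * (2 * M_I),
    }

print(reagents_for(10.0))  # roughly 3.0 g Ho and 7.0 g I2 for 10 g HoI3
```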
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Psychopia** Psychopia: Psychopia is a small press zine featuring reviews and articles on British comic books and small press comics, together with interviews with cartoonists. Unusually for comix zines, it focussed almost entirely on British comics such as The Beano and The Dandy, ignoring American superhero comics. History and profile: The first issue, #0, was published in 1994. Psychopia was created by cartoonist/writer B. Patston. The fanzine evolved out of his small press comic Oy Mister!!, published in 1992. Like Escape Magazine, it printed comic strips. History and profile: Patston drew comics in his bedroom in Linslade, typing up articles on his manual typewriter and pasting up the final pages on his card table. The zine had a very downbeat, amateurish look due to the underground sensibilities of the editor. It gained a reputation for text mangling, technical typesetting failures and typographical errors, and once misspelled its own name on the cover as "Psycopia". History and profile: Psychopia has printed comics by small press artists including "The Slap Of Doom" by Joe Berger, Ben Hunt, Lee Kennedy and "The People Under The Bed" by Vic Pratt. In addition, Psychopia reprinted artwork by Darryl Cunningham and Marc Baines. From issue #3, Psychopia also featured jam comic strips with many artists including Vic Pratt, Victor Ambrus and Caspar Williams, with underground cartoonists Pete Loveday and Robert Crumb contributing to "TV Funnies" in Psychopia #5. Interviews: Psychopia features interviews with comic artists including Terry Bave, Davy Francis, Trevor Metcalfe, David Mostyn, Bill Ritchie, Lew Stringer and Daniel Clowes. At the same time, Patston was offering comics for sale through a mail order distribution, "Pretentious Mail Art". It also currently exists as a website with an image board and forums.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Burnside problem** Burnside problem: The Burnside problem asks whether a finitely generated group in which every element has finite order must necessarily be a finite group. It was posed by William Burnside in 1902, making it one of the oldest questions in group theory, and it was influential in the development of combinatorial group theory. It is known to have a negative answer in general, as Evgeny Golod and Igor Shafarevich provided a counter-example in 1964. The problem has many refinements and variants (see bounded and restricted below) that differ in the additional conditions imposed on the orders of the group elements, some of which are still open questions. Brief history: Initial work pointed towards the affirmative answer. For example, if a group G is finitely generated and the order of each element of G is a divisor of 4, then G is finite. Moreover, A. I. Kostrikin was able to prove in 1958 that among the finite groups with a given number of generators and a given prime exponent, there exists a largest one. This provides a solution for the restricted Burnside problem for the case of prime exponent. (Later, in 1989, Efim Zelmanov was able to solve the restricted Burnside problem for an arbitrary exponent.) Issai Schur had shown in 1911 that any finitely generated periodic group that was a subgroup of the group of invertible n × n complex matrices was finite; he used this theorem to prove the Jordan–Schur theorem. Nevertheless, the general answer to the Burnside problem turned out to be negative. In 1964, Golod and Shafarevich constructed an infinite group of Burnside type without assuming that all elements have uniformly bounded order. In 1968, Pyotr Novikov and Sergei Adian supplied a negative solution to the bounded exponent problem for all odd exponents larger than 4381. In 1982, A. Yu. Ol'shanskii found some striking counterexamples for sufficiently large odd exponents (greater than 10^10), and supplied a considerably simpler proof based on geometric ideas. Brief history: The case of even exponents turned out to be much harder to settle. In 1992, S. V. Ivanov announced the negative solution for sufficiently large even exponents divisible by a large power of 2 (detailed proofs were published in 1994 and occupied some 300 pages). Later joint work of Ol'shanskii and Ivanov established a negative solution to an analogue of the Burnside problem for hyperbolic groups, provided the exponent is sufficiently large. By contrast, when the exponent is small and different from 2, 3, 4 and 6, very little is known. General Burnside problem: A group G is called periodic if every element has finite order; in other words, for each g in G, there exists some positive integer n such that g^n = 1. Clearly, every finite group is periodic. There exist easily defined groups such as the p∞-group which are infinite periodic groups; but the latter group cannot be finitely generated. General Burnside problem: General Burnside problem. If G is a finitely generated, periodic group, then is G necessarily finite? This question was answered in the negative in 1964 by Evgeny Golod and Igor Shafarevich, who gave an example of an infinite p-group that is finitely generated (see Golod–Shafarevich theorem). However, the orders of the elements of this group are not a priori bounded by a single constant. Bounded Burnside problem: Part of the difficulty with the general Burnside problem is that the requirements of being finitely generated and periodic give very little information about the possible structure of a group.
Therefore, we pose more requirements on G. Consider a periodic group G with the additional property that there exists a least integer n such that for all g in G, g^n = 1. A group with this property is said to be periodic with bounded exponent n, or just a group with exponent n. The Burnside problem for groups with bounded exponent asks: Burnside problem I. If G is a finitely generated group with exponent n, is G necessarily finite? It turns out that this problem can be restated as a question about the finiteness of groups in a particular family. The free Burnside group of rank m and exponent n, denoted B(m, n), is a group with m distinguished generators x1, ..., xm in which the identity x^n = 1 holds for all elements x, and which is the "largest" group satisfying these requirements. More precisely, the characteristic property of B(m, n) is that, given any group G with m generators g1, ..., gm and of exponent n, there is a unique homomorphism from B(m, n) to G that maps the ith generator xi of B(m, n) into the ith generator gi of G. In the language of group presentations, the free Burnside group B(m, n) has m generators x1, ..., xm and the relations x^n = 1 for each word x in x1, ..., xm, and any group G with m generators of exponent n is obtained from it by imposing additional relations. The existence of the free Burnside group and its uniqueness up to an isomorphism are established by standard techniques of group theory. Thus if G is any finitely generated group of exponent n, then G is a homomorphic image of B(m, n), where m is the number of generators of G. The Burnside problem can now be restated as follows: Burnside problem II. For which positive integers m, n is the free Burnside group B(m, n) finite? The full solution to the Burnside problem in this form is not known. Burnside considered some easy cases in his original paper: B(1, n) is the cyclic group of order n. Bounded Burnside problem: B(m, 2) is the direct product of m copies of the cyclic group of order 2 and hence finite. The following additional results are known (Burnside, Sanov, M. Hall): B(m, 3), B(m, 4), and B(m, 6) are finite for all m. The particular case of B(2, 5) remains open: as of 2020 it was not known whether this group is finite. Bounded Burnside problem: The breakthrough in solving the Burnside problem was achieved by Pyotr Novikov and Sergei Adian in 1968. Using a complicated combinatorial argument, they demonstrated that for every odd number n with n > 4381, there exist infinite, finitely generated groups of exponent n. Adian later improved the bound on the odd exponent to 665. The latest improvement to the bound on the odd exponent is 101, obtained by Adian himself in 2015. The case of even exponent turned out to be considerably more difficult. It was only in 1994 that Sergei Vasilievich Ivanov was able to prove an analogue of the Novikov–Adian theorem: for any m > 1 and an even n ≥ 2^48 with n divisible by 2^9, the group B(m, n) is infinite; together with the Novikov–Adian theorem, this implies infiniteness for all m > 1 and n ≥ 2^48. This was improved in 1996 by I. G. Lysënok to m > 1 and n ≥ 8000. Novikov–Adian, Ivanov and Lysënok established considerably more precise results on the structure of the free Burnside groups. In the case of the odd exponent, all finite subgroups of the free Burnside groups were shown to be cyclic groups. In the even exponent case, each finite subgroup is contained in a product of two dihedral groups, and there exist non-cyclic finite subgroups.
Moreover, the word and conjugacy problems were shown to be effectively solvable in B(m, n) both for the cases of odd and even exponents n. Bounded Burnside problem: A famous class of counterexamples to the Burnside problem is formed by finitely generated non-cyclic infinite groups in which every nontrivial proper subgroup is a finite cyclic group, the so-called Tarski Monsters. First examples of such groups were constructed by A. Yu. Ol'shanskii in 1979 using geometric methods, thus affirmatively solving O. Yu. Schmidt's problem. In 1982 Ol'shanskii was able to strengthen his results to establish existence, for any sufficiently large prime number p (one can take p > 10^75), of a finitely generated infinite group in which every nontrivial proper subgroup is a cyclic group of order p. In a paper published in 1996, Ivanov and Ol'shanskii solved an analogue of the Burnside problem in an arbitrary hyperbolic group for sufficiently large exponents. Restricted Burnside problem: Formulated in the 1930s, it asks another, related, question: Restricted Burnside problem. If it is known that a group G with m generators and exponent n is finite, can one conclude that the order of G is bounded by some constant depending only on m and n? Equivalently, are there only finitely many finite groups with m generators of exponent n, up to isomorphism? This variant of the Burnside problem can also be stated in terms of certain universal groups with m generators and exponent n. By basic results of group theory, the intersection of two normal subgroups of finite index in any group is itself a normal subgroup of finite index. Thus, the intersection M of all the normal subgroups of the free Burnside group B(m, n) which have finite index is a normal subgroup of B(m, n). One can therefore define the free restricted Burnside group B0(m, n) to be the quotient group B(m, n)/M. Every finite group of exponent n with m generators is isomorphic to B(m, n)/N where N is a normal subgroup of B(m, n) with finite index. Therefore, by the Third Isomorphism Theorem, every finite group of exponent n with m generators is isomorphic to B0(m, n)/(N/M); in other words, it is a homomorphic image of B0(m, n). Restricted Burnside problem: The restricted Burnside problem then asks whether B0(m, n) is a finite group. Restricted Burnside problem: In the case of the prime exponent p, this problem was extensively studied by A. I. Kostrikin during the 1950s, prior to the negative solution of the general Burnside problem. His solution, establishing the finiteness of B0(m, p), used a relation with deep questions about identities in Lie algebras in finite characteristic. The case of arbitrary exponent has been completely settled in the affirmative by Efim Zelmanov, who was awarded the Fields Medal in 1994 for his work.
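As a concrete illustration of the easy cases mentioned above (Burnside's observation that B(m, 2) is a direct product of m copies of the cyclic group of order 2), the short derivation below, which is standard and not taken verbatim from the source, shows why exponent 2 forces commutativity and hence finiteness.

```latex
% A group of exponent 2 is abelian, so B(m,2) is finite of order 2^m.
% Since x^2 = 1 for every x, each element is its own inverse: x^{-1} = x.
\[
  xy \;=\; (xy)^{-1} \;=\; y^{-1}x^{-1} \;=\; yx
  \qquad\text{for all } x, y \in G,
\]
% so G is abelian and every element has order dividing 2.  A group generated
% by m such elements is therefore a quotient of (Z/2Z)^m, and indeed
\[
  B(m,2) \;\cong\; (\mathbb{Z}/2\mathbb{Z})^{m},
  \qquad |B(m,2)| = 2^{m}.
\]
```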
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gestell** Gestell: Gestell (or sometimes Ge-stell) is a German word used by twentieth-century German philosopher Martin Heidegger to describe what lies behind or beneath modern technology. Heidegger introduced the term in 1954 in The Question Concerning Technology, a text based on the lecture "The Framework" ("Das Gestell") first presented on December 1st 1949, in Bremen. It was derived from the root word stellen, which means "to put" or "to place" and combined with the German prefix Ge-, which denotes a form of "gathering" or "collection". The term encompasses all types of entities and orders them in a certain way. Heidegger's notion of Gestell: Heidegger applied the concept of Gestell to his exposition of the essence of technology. He concluded that technology is fundamentally Enframing (Gestell). As such, the essence of technology is Gestell. Indeed, "Gestell, literally 'framing', is an all-encompassing view of technology, not as a means to an end, but rather a mode of human existence". Heidegger further explained that in a more comprehensive sense, the concept is the final mode of the historical self-concealment of primordial φύσις.In defining the essence of technology as Gestell, Heidegger indicated that all that has come to presence in the world has been enframed. Such enframing pertains to the manner reality appears or unveils itself in the period of modern technology and people born into this "mode of ordering" are always embedded into the Gestell (enframing). Thus what is revealed in the world, what has shown itself as itself (the truth of itself) required first an Enframing, literally a way to exist in the world, to be able to be seen and understood. Concerning the essence of technology and how we see things in our technological age, the world has been framed as the "standing-reserve." Heidegger writes, Enframing means the gathering together of that setting-upon which sets upon man, i.e., challenges him forth, to reveal the real, in the mode of ordering, as standing-reserve. Enframing means that way of revealing which holds sway in the essence of modern technology and which is itself nothing technological. Heidegger's notion of Gestell: Furthermore, Heidegger uses the word in a way that is uncommon by giving Gestell an active role. In ordinary usage the word would signify simply a display apparatus of some sort, like a book rack, or picture frame; but for Heidegger, Gestell is literally a challenging forth, or performative "gathering together", for the purpose of revealing or presentation. If applied to science and modern technology, "standing reserve" is active in the case of a river once it generates electricity or the earth if revealed as a coal-mining district or the soil as a mineral deposit.For some scholars, Gestell effectively explains the violence of technology. This is attributed to Heidegger's explanation that, when Gestell holds sway, "it drives out every other possibility of revealing" and that it "conceals that revealing which, in the sense of poiesis, lets what presences come forth into appearance." Later uses of the concept: Giorgio Agamben drew heavily from Heidegger in his interpretation of Foucault's concept of dispositif (apparatus). In his work, What is an Apparatus, he described apparatus as the "decisive technical term in the strategy of Foucault's thought". Agamben maintained that Gestell is nothing more than what appears as oikonomia. 
Agamben cited cinema as an apparatus of Gestell since films capture and record the gestures of human beings. Albert Borgmann expanded Heidegger's concept of Gestell by offering a more practical conceptualization of the essence of technology. Heidegger's enframing became Borgmann's Device paradigm, which explains the intimate relationship between people, things and technological devices. Claudio Ciborra developed another interpretation, which focused on the analyses of the Information System infrastructure using the concept of Gestell. He based his improvement of the original meaning of "structural" with "processual" on the etymology of Gestell so that it indicates the pervasive process of arranging, regulating, and ordering of resources that involve both human and natural resources. Ciborra has likened information infrastructure with Gestell and this association was used to philosophically ground many aspects of his works such as his description of its inherent self-feeding process.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scaphoid fracture** Scaphoid fracture: A scaphoid fracture is a break of the scaphoid bone in the wrist. Symptoms generally includes pain at the base of the thumb which is worse with use of the hand. The anatomic snuffbox is generally tender and swelling may occur. Complications may include nonunion of the fracture, avascular necrosis of the proximal part of the bone, and arthritis.Scaphoid fractures are most commonly caused by a fall on an outstretched hand. Diagnosis is generally based on a combination of clinical examination and medical imaging. Some fractures may not be visible on plain X-rays. In such cases the affected area may be immobilised in a splint or cast and reviewed with repeat X-rays in two weeks, or alternatively an MRI or bone scan may be performed.The fracture may be preventable by using wrist guards during certain activities. In those in whom the fracture remains well aligned a cast is generally sufficient. If the fracture is displaced then surgery is generally recommended. Healing may take up to six months.It is the most commonly fractured carpal bone. Males are affected more often than females. Signs and symptoms: People with scaphoid fractures generally have snuffbox tenderness. Focal tenderness is usually present in one of three places: 1) volar prominence at the distal wrist for distal pole fractures; 2) anatomic snuff box for waist or midbody fractures; 3) distal to Lister's tubercle for proximal pole fractures. Complications Avascular necrosis (AVN) is one complication of scaphoid fracture. Since the scaphoid receives its arterial supply in a retrograde fashion (i.e. from distal to proximal pole), the part proximal to the fracture is usually affected.Risk of AVN depends on the location of the fracture. Fractures in the proximal third have a high incidence of AVN (~30%) Waist fractures in the middle third is the most frequent fracture site and has moderate risk of AVN. Signs and symptoms: Fractures in the distal third are rarely complicated by AVN.Non union can also occur from undiagnosed or undertreated scaphoid fractures. Arterial flow to the scaphoid enters via the distal pole and travels to the proximal pole. This blood supply is tenuous, increasing the risk of nonunion, particularly with fractures at the wrist and proximal end. If not treated correctly non-union of the scaphoid fracture can lead to wrist osteoarthritis.Symptoms may include aching in the wrist, decreased range of motion of the wrist, and pain during activities such as lifting or gripping. If x-ray results show arthritis due to an old break, the treatment plan will first focus on treating the arthritis through anti-inflammatory medications and wearing a splint when an individual feels pain in the wrist. If these treatments do not help the symptoms of arthritis, steroid injections to the wrist may help alleviate pain. Should these treatments not work, surgery may be required. Mechanism: Fractures of scaphoid can occur either with direct axial compression or with hyperextension of the wrist, such as a fall on the palm on an outstretched hand. Using the Herbert classification system, there are three main types of scaphoid fractures. 10%-20% of fractures are at the proximal pole, 60%-80% are at the waist (middle), and the remainder occur at the distal pole. Diagnosis: Scaphoid fractures are often diagnosed using plain radiographs and multiple views are obtained as standard. However, not all fractures are apparent initially. 
In 1/4 of cases, the clinical examination suggests a fracture, but the X-ray does not show it, even though there is indeed a fracture. Therefore, people with tenderness over the scaphoid (those who exhibit pain to pressure in the anatomic snuff box ) are often splinted in a thumb spica for 7–10 days at which point a second set of X-rays is taken. If a minimally displaced fracture was present initially, healing will now be apparent. Even then a fracture may not be apparent. A CT Scan can then be used to evaluate the scaphoid with greater resolution. The use of MRI, if available, is preferred over CT and can give one an immediate diagnosis. Bone scintigraphy is also an effective method for diagnosis fracture which do not appear on Xray. Treatment: Treatment of scaphoid fractures is guided by the location in the bone of the fracture (proximal, waist, distal), displacement (or instability) of the fracture, and patient tolerance for cast immobilization.For non and minimally displaced fractures (up to 2mm) of the scaphoid waist, cast immobilisation (with surgical fixation for non-united fractures at 6 to 12 weeks) is as effective as immediate surgery fixation. This was demonstrated by the SWIFFT (Scaphoid Waist Internal Fixation for Fractures Trial) study, 439 patients were randomly allocated to either cast immobilisation or surgical fixation. There was no difference in the healing, pain and function or days off work between the two treatment groups, the cast immobilisation group had less complications and this treatment was more cost effective. The choice of short arm, short arm thumb spica or long arm cast is debated in the medical literature and no clear consensus or proof of the benefit of one type of casting or another has been shown; although it is generally accepted to use a short arm or short arm thumb spica for non displaced fractures. In the SWIFFT study most used a short arm cast with the thumb left free. Non displaced or minimally displaced fracture can also be treated with percutaneous or minimal incision surgery which if performed correctly has a high union rate, low morbidity and faster return to activity than closed cast management. However this was not confirmed by the SWIFFT study. Fractures that are more proximal take longer to heal. It is expected the distal third will heal in 6 to 8 weeks, the middle third will take 8–12 weeks, and the proximal third will take 12–24 weeks. The Scaphoid receives its blood supply primarily from lateral and distal branches of the radial artery. Blood flows from the top/distal end of the bone in a retrograde fashion down to the proximal pole; if this blood flow is disrupted by a fracture, the bone may not heal. Surgery is necessary at this point to mechanically mend the bone together.Percutaneous screw fixation is recommended over an open surgical approach when it is possible to achieve acceptable bone alignment closed as minimal incisions can preserves the palmar ligament complex and local vasculature, and help avoid soft tissue complications. This surgery includes screwing the scaphoid bone back together at the most perpendicular angle possible to promote quicker and stronger healing of the bone. 
Internal fixation can be done dorsally with a percutaneous incision and arthroscopic assistance or via a minimal open dorsal approach, or via a volar approach in which case slight excavation of the edge of the trapezium bone may be necessary to reach the scaphoid as 80% of this bone is covered with articular cartilage, which makes it difficult to gain access to the scaphoid. Prognosis: A non-union (pseudarthrosis) can occur in 2 to 5% of cases. In the aftermath, 90% of non-operated individuals return to sports, with 88% reaching their previous level. Among those who underwent surgery, the rate of returning to sports is 98%, and 96% return to their previous level. The average time observed for resuming sports is 14 weeks for non-operated individuals and 7 weeks for those who had surgery. Epidemiology: Fractures of the scaphoid are common in young males. They are less common in children and older adults because the distal radius is weaker contributor to the wrist and more likely to fracture in these age groups. Scaphoid fractures account for 50%-80% of carpal injuries. Terminology: These are also called navicular fractures (the scaphoid also being called the carpal navicular), although this can be confused with the navicular bone in the foot.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Telecommunications Management Network** Telecommunications Management Network: The Telecommunications Management Network is a protocol model defined by ITU-T for managing open systems in a communications network. It is part of the ITU-T Recommendation series M.3000 and is based on the OSI management specifications in ITU-T Recommendation series X.700. Telecommunications Management Network: TMN provides a framework for achieving interconnectivity and communication across heterogeneous operations system and telecommunication networks. To achieve this, TMN defines a set of interface points for elements which perform the actual communications processing (such as a call processing switch) to be accessed by elements, such as management workstations, to monitor and control them. The standard interface allows elements from different manufacturers to be incorporated into a network under a single management control. Telecommunications Management Network: For communication between Operations Systems and NEs (Network Elements), it uses the Common management information protocol (CMIP) or Mediation devices when it uses Q3 interface. The TMN layered organization is used as fundamental basis for the management software of ISDN, B-ISDN, ATM, SDH/SONET and GSM networks. It is not as commonly used for purely packet-switched data networks. Telecommunications Management Network: Modern telecom networks offer automated management functions and are run by operations support system (OSS) software. These manage modern telecom networks and provide the data that is needed in the day-to-day running of a telecom network. OSS software is also responsible for issuing commands to the network infrastructure to activate new service offerings, commence services for new customers, and detect and correct network faults. Architecture: According to ITU-T M.3010 TMN has 3 architectures: Physical architecture Security architecture Logical layered architecture Logical layers: The framework identifies four logical layers of network management: Business management Includes the functions related to business aspects, analyzes trends and quality issues, for example, or to provide a basis for billing and other financial reports. Service management Handles services in the network: definition, administration and charging of services. Network management Distributes network resources, performs tasks of: configuration, control and supervision of the network. Element management Handles individual network elements including alarm management, handling of information, backup, logging, and maintenance of hardware and software.A network element provides agent services, mapping the physical aspects of the equipment into the TMN framework. Recommendations: The TMN M.3000 series includes the following recommendations: M.3000 Tutorial Introduction to TMN M.3010 Principles for a TMN M.3020 TMN Interface Specification Methodology M.3050 Business Process Framework (eTOM) M.3060 Principles for the Management of the Next Generation Networks M.3100 Generic Network Information Model for TMN M.3200 TMN Management Services Overview M.3300 TMN Management Capabilities at the F Interface
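The four logical layers listed above are easiest to picture as a chain of responsibilities. The toy Python sketch below is purely illustrative: the class and method names are invented here and are not part of any TMN recommendation or existing library; it only shows how an alarm raised at the element-management layer might be progressively summarized on its way up toward business management.

```python
# Toy model of TMN's logical layered architecture (illustrative only).

class ElementManagement:
    """Handles individual network elements: alarms, logs, backups."""
    def raise_alarm(self, element: str, fault: str) -> dict:
        return {"element": element, "fault": fault}

class NetworkManagement:
    """Sees the whole network: configuration, control, supervision."""
    def correlate(self, alarm: dict) -> dict:
        return {**alarm, "impact": "link degraded"}

class ServiceManagement:
    """Maps network events onto customer-facing services and charging."""
    def affected_services(self, event: dict) -> dict:
        return {**event, "services": ["leased-line-42"]}

class BusinessManagement:
    """Trends, quality and financial reporting."""
    def report(self, event: dict) -> str:
        return f"Quality report: {event['fault']} affecting {event['services']}"

if __name__ == "__main__":
    alarm = ElementManagement().raise_alarm("switch-7", "laser bias high")
    event = NetworkManagement().correlate(alarm)
    event = ServiceManagement().affected_services(event)
    print(BusinessManagement().report(event))
```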
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Separable partial differential equation** Separable partial differential equation: A separable partial differential equation is one that can be broken into a set of separate equations of lower dimensionality (fewer independent variables) by a method of separation of variables. This generally relies upon the problem having some special form or symmetry. In this way, the partial differential equation (PDE) can be solved by solving a set of simpler PDEs, or even ordinary differential equations (ODEs) if the problem can be broken down into one-dimensional equations. The most common form of separation of variables is simple separation of variables, in which a solution is obtained by assuming a solution of the form given by a product of functions of each individual coordinate. There is a special form of separation of variables called R-separation of variables which is accomplished by writing the solution as a particular fixed function of the coordinates multiplied by a product of functions of each individual coordinate. Laplace's equation on R^n is an example of a partial differential equation which admits solutions through R-separation of variables; in the three-dimensional case this uses 6-sphere coordinates. Separable partial differential equation: (This should not be confused with the case of a separable ODE, which refers to a somewhat different class of problems that can be broken into a pair of integrals; see separation of variables.) Example: For example, consider the time-independent Schrödinger equation [−∇² + V(x)]ψ(x) = Eψ(x) for the function ψ(x) (in dimensionless units, for simplicity). (Equivalently, consider the inhomogeneous Helmholtz equation.) If the function V(x) in three dimensions is of the form V(x₁, x₂, x₃) = V₁(x₁) + V₂(x₂) + V₃(x₃), then it turns out that the problem can be separated into three one-dimensional ODEs for functions ψ₁(x₁), ψ₂(x₂), and ψ₃(x₃), and the final solution can be written as ψ(x) = ψ₁(x₁)·ψ₂(x₂)·ψ₃(x₃). (More generally, the separable cases of the Schrödinger equation were enumerated by Eisenhart in 1948.)
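A compact way to see the separation claimed above (a standard derivation, added here for illustration rather than reproduced from the source) is to substitute the product ansatz into the Schrödinger equation and divide by ψ; the equation then splits into three independent one-dimensional eigenvalue problems.

```latex
% Separation of the time-independent Schrodinger equation when
% V(x1,x2,x3) = V1(x1) + V2(x2) + V3(x3).  Ansatz: psi = psi1*psi2*psi3.
\[
  \Bigl[-\nabla^{2} + V(\mathbf{x})\Bigr]\psi = E\psi
  \quad\xrightarrow{\ \psi=\psi_1\psi_2\psi_3\ }\quad
  \sum_{i=1}^{3}\left(-\frac{\psi_i''(x_i)}{\psi_i(x_i)} + V_i(x_i)\right) = E .
\]
% Each bracketed term depends on a single coordinate only, so each must equal
% a constant E_i with E_1 + E_2 + E_3 = E, giving three one-dimensional ODEs:
\[
  -\psi_i''(x_i) + V_i(x_i)\,\psi_i(x_i) = E_i\,\psi_i(x_i), \qquad i = 1,2,3 .
\]
```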
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FORM (symbolic manipulation system)** FORM (symbolic manipulation system): FORM is a symbolic manipulation system. It reads text files containing definitions of mathematical expressions as well as statements that tell it how to manipulate these expressions. Its original author is Jos Vermaseren of Nikhef, the Dutch institute for subatomic physics. It is widely used in the theoretical particle physics community, but it is not restricted to applications in this specific field. Features: definition of mathematical expressions containing various objects (symbols, functions, indices, ...) with elementary arithmetic operations; arbitrarily long mathematical expressions (limited only by disk space); multi-threaded execution, with a parallelized version for computer clusters; powerful pattern matching and replacing; fast trace calculation, especially of gamma matrices; built-in mathematical functions; output in various formats (plain text, Fortran code, Mathematica code); external communication with other software programs. Example usage: A text file containing "Symbol x,y; Local myexpr = (x+y)^3; Id y = x; Print; .end" would tell FORM to create an expression named myexpr, replace therein the symbol y by x, and print the result on the screen. The result would be printed as "myexpr = 8*x^3;". History: FORM was started in 1984 as a successor to Schoonschip, an algebra engine developed by M. Veltman. It was initially coded in FORTRAN 77, but rewritten in C before the release of version 1.0 in 1989. Version 2.0 was released in 1991. Version 3.0 of FORM was published in 2000. It was made open source on August 27, 2010, under the GPL license. Applications in high-energy physics and other fields: Mincer: a software package using FORM to compute massless propagator diagrams with up to three loops. FORM has been the essential tool to calculate the higher-order QCD beta function. The mathematical structure of multiple zeta values has been researched with dedicated FORM programs. The software package FormCalc, which is widely used in the physics community to calculate Feynman diagrams, is built on top of FORM.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UFO sightings in Brazil** UFO sightings in Brazil: This is a list of alleged sightings of unidentified flying objects or UFOs in Brazil. 1947: On 23 July 1947, topographer José Higgins was working with many laborers in Bauru, São Paulo. Suddenly, they heard an extremely sharp sound. Some moments later, they saw a lens-shaped object landing near them. The workmen run away leaving Higgins alone. The man reported that three humanoid figures emerged from the UFO and spoke to him in an unknown language; after about a half-hour, they returned to the UFO which then took them away. 1952: On 5 May 1952, the journalist Joao Martins and the photographer Eduardo Keffel claimed to have seen a flying disk in the vicinity of Barra da Tijuca. Keffel took some photographs of the UFO, which were published by the magazine O Cruzeiro. 1957: On 13 September 1957 The journalist Ibrahim Sued received an envelope containing a letter and three fragments of metal. The author of the letter wrote that he saw a UFO which exploded in the sky over the beach of Ubatuba; he collected some fragments and sent three of these to the journal together with the letter. Sued sent the fragments to a laboratory which analyzed them and discovered that they consisted of pure magnesium. James Harder and other ufologists came to the conclusion that the fragments of Ubatuba have an extraterrestrial origin, but other investigators think that this story is a hoax. 1957: Antonio Villas Boas claimed to have been abducted by extraterrestrials on 16 October 1957. Though similar stories had circulated for years beforehand, Boas's claims was among the first alien abduction stories to receive wide attention. 1957: On the evening of 4 November 1957, two sentinels at the Itaipu Fort, (Praia Grande, São Paulo) suffered moderate burns after being hit by a heat wave from an unidentified flying object, which allegedly came descending from the sky. The entire electricity of the fort, including the emergency circuits, went down during the incident. Afterwards, Brazilian Army and United States Air Force (USAF) personnel, along with investigators of the Brazilian Air Force, flew to the fort to interview the soldiers. USAF's Donald Keyhoe expressed his opinions on the case:"Such civilization could see that in Earth we now have atomic bombs and that we are quickly improving our rockets. Given the past history of mankind - frequent wars showing a belligerent human race - they must have become alarmed. We should, therefore, expect, especially at these times, to receive such visits. According to this, the main objective of the aliens would be to watch our space improvements, fearing we could become a threat to other planets. If this hypothesis is exact, it could be expanded to link the launching of the Sputniks with the attack to the Fort Itaipu. However, this sounded absurd for all investigators. It would mean that the aliens would be worried about our firsts steps in space, and by small space ships so primitive that would look like a canoe if compared to a transatlantic liner. It would also mean that those burnings had the purpose of demonstrating the superior weapons they could use against the aggressive explorers coming from the Earth. However, we were still far from piloted space flight, even to the Moon. According to human logic, we would not be able to threaten a superior space ship - not now nor later." In 2008, a document reporting the incident was written at the Brazilian Embassy in the United States. 
1958: At 12:00 pm of 16 January 1958, the Brazilian ship Almirante Saldanha, taking part in projects of the Ano Geofísico Internacional, was preparing to sail away from Ilha de Trindade, off the coast of Espírito Santo. Captain Viegas was on the deck with several scientists and members of the crew when he suddenly noticed a flying object, which had a “ring” around it, just like Saturn. Everyone reportedly saw the UFO at the same time. It came to the island from the east, flew towards the Pico Desejado (Wished Peak), made a step turn and went away very quickly to the northwest. As soon as the object was noticed Almiro Baraúna was requested for photographing. After getting the camera and going up the quarter-deck, he stated that he managed to take pictures of the object. 1966: Two men were found dead near Niterói. Both were wearing lead masks. A UFO allegedly was seen flying near the point where both died. According to veteran UFOlogist, Alberto Francisco do Carmo, sightings and encounters northwest of São Paulo led to the development in 1966 of an office called SIOANI (for Serviço de Informação de Objetos Aéreos Não Identificados). Headed by Major Gilberto Zani de Melo, the study and office was shut down years after his retirement. 1977: The Colares flap refers to an outbreak of UFO sightings that occurred in 1977 on the Brazilian island of Colares. During the outbreak, the UFOs allegedly attacked the citizens with intense beams of radiation that left burn marks and puncture wounds. These sightings led to the Brazilian government dispatching a team to investigate under the codename Operation Saucer (Portuguese: Prato, see below), but the government later recalled the team and classified the files until the late 1990s. 1977: This was the first operation of the Brazilian Air Force conducted only to investigate UFO-related issues. This operation was started shortly after the Colares UFO flap. 1979: On the evening of 28 July 1979, security guard Antonio Carlos Ferreira was allegedly abducted from his workplace - a furniture factory in Mirassol, São Paulo. According to his own accounts, he was approached by three humanoid figures who tranquilized and took him aboard a small ship which ferried him to a larger craft further away. There, he said he was positioned in front of a large television-like device and presented with a variety of images before being forced to mate with a female alien, after which he was tranquilized again and returned to the ground. Ferreira described the creatures as being approximately 1.2 metres (3.9 ft; 120 cm) tall with pointed ears, slanted eyes and human-like mouths. They lacked eyebrows or eyelashes and spoke in a language that superficially resembled Japanese. Some were said to have dark skin and red curly hair, while others had light skin and straight black hair. The ship was spherical with three undercarriage-like legs protruding from the bottom, with the interior lit by bright red and green lights. Ferreira states that he encountered the aliens again in 1982, with the craft supposedly landing close enough for him to see the female alien and a childlike alien observing him from a distance. He said he experienced a third encounter later in 1982 in which he was taken into the hangar of an alien craft via a green beam of light before being injected with a yellow substance. He said he was then taken to meet the two aliens once more, the younger of whom he was led to believe was his own child. 
Other encounters are said to have followed, to a total of 16 or 20 between 1979 and 1989. 1980: Elias Seixas de Mattos was a truck driver from Rio de Janeiro in 1980, when he had suffered an unexplainable experience. His story, along with the ones from his two friends, has earned an important page in the history of Brazilian Ufology because of the quantity of details he displayed to describe the situations. 1986: A series of UFO sightings all over the states of southeastern Brazil, which led to several jet fighters being scrambled to intercept. During the attempted interceptions, pilots reported the objects capable of 90-degree turns and hypersonic flight. 1996: The Varginha UFO incident was an incident in Varginha, Brazil, in 1996 involving reports of unidentified flying objects and strange creatures (allegedly extraterrestrials) which were supposedly captured by Brazilian officials. 1996: In Saragonha Island, at Patos Lagoon, Haroldo Westendorff witnessed a cone-shaped UFO, 50 to 60 metres (160 to 200 ft) tall, with a base as big as a soccer stadium. He flew around the object for some 15 minutes, keeping 100 metres (330 ft; 110 yd) of distance. The object was spinning around itself and heading towards the sea and was spotted at the radar of Infraero's room at Pelotas' airport. It was not detected by Cindacta II in Curitiba, Paraná, which was responsible for watching the skies of southern Brazil. Westendorff also reported a smaller object coming out of the top of the big UFO, which climbed into the skies at a very fast speed, with the larger UFO following shortly afterwards. The Ministry of Aeronautics kept a secret investigation of the object seen by Westendorff. 2008: In 2014, a document called "Dossier Riolândia" was produced by the amateur organization Inape (Instituto de Astronomia e Pesquisa Espacial (Institute of Astronomy and Space Research in Portuguese)) which allegedly shows the appearance of UFOs in the city of Riolândia on January 20, 2008. 2013: On 19 June, a light was visible in the sky over one of the protests in Brazil, and seen by thousands attending the event. It was reported to be a UFO, but it is currently believed to have been a drone used by local newspaper Folha de S. Paulo in order to shoot aerial images of the demonstrations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mace-bearer** Mace-bearer: A mace-bearer, or macebearer, is a person who carries a mace, either a real weapon or ceremonial. Armed: When the mace was still in actual use as a weapon, it was deemed fit for close-protection, and hence a mace-bearer could be a bodyguard. Thus in French and Dutch, a massier (armed with a masse d'armes 'weapon-mace') could be a member of a formally so-styled guard corps, as in the court of the Dukes of Brabant. In Spain, a macero were originally an armed guard protecting the King of Castile; they were called macero due to the weapon they wielded, a maza (i.e., a mace). Otherwise, a normally more domestic servant could double (arming trusted household staff was not unusual) as macebearer, as in the case of the prophet Mohammed's first muezzin, Bilal ibn Ribah Ceremonial: As for ceremonial maces, which symbolise the power or status of a monarch, institution or high dignitary, the duty to carry them in procession or other formal occasions may either be occasional and vested in an office otherwise named, or give its name to the office, either as a sinecure or in conjunction with other duties (sometimes indeed as an alternative title, as with some university officers, e.g. at St Andrews, the Bedelis is the chief mace bearer and at Oxford the bedels). His main day-to-day duties may be rather that of a general assistant, like say a driver's. In the Anglo-Saxon tradition, there usually is one post of mace-bearer per mace, and rarely more than one mace in use at the same time per master, or only in specific different contexts. Ceremonial: In French, the above-mentioned title massier is nowadays used for a mere huissier (a lowly post, door-keeper or usher) who occasionally carries a 'masse' when taking part in formal ceremonies, rather like a staff of office, as the mace is not given the same reverence as in the Anglo-Saxon tradition, indeed there may be several ones carried at the same time by staff of the same master, without any symbolism or tradition concerning the individual maces. Ceremonial: In Spain, the macero mentioned above evolved to become the symbol of civil authority, usually associated with extra-royal institutions such as municipalities, local authorities, etc. When these institutions celebrate a formal ceremony, the macero will head any parade or surround the figures of authority. Similar to the French case, albeit they always wield a 'maza' (a mace), the mace itself does not bear the ceremonial reverence it does in the Anglo-Saxon tradition. Typically, the maceros are dressed in characteristic 15th century garment, and wear a tabard with the coat of arms of the institution they represent. Ceremonial: A swordbearer is similar to a mace-bearer but carries a ceremonial sword rather than a ceremonial mace. Sources: Larousse, Pierre (1952) Nouveau petit Larousse illustré : dictionnaire encyclopédique, special edition for the fiftieth anniversary of the book of Claude and Paul Augé: refondue et augmentée par E. Gillon et al., Paris : Larousse, 1791 p. [encyclopaedic dictionary, in French]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Solar eclipse of July 12, 2094** Solar eclipse of July 12, 2094: A partial solar eclipse will occur on July 12, 2094. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. Related eclipses: Solar eclipses 2091–2094 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. Metonic series The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur in nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that or every 3.8 years (1387.94 days). All eclipses in this table occur at the Moon's ascending node.
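The cycle lengths quoted above (a semester of roughly 177 days, the 19-year Metonic cycle of 6939.69 days, and the octon of 1387.94 days) all correspond to whole numbers of synodic months. The short sketch below reproduces them; the mean synodic month of 29.530588 days is a standard value assumed here, not a figure taken from the article.

```python
# Eclipse-cycle lengths as whole numbers of mean synodic months.
SYNODIC_MONTH = 29.530588  # days, standard mean value (assumed)

cycles = {
    "semester (6 lunations)": 6,
    "octon (47 lunations)": 47,
    "Metonic cycle (235 lunations)": 235,
}

for name, lunations in cycles.items():
    days = lunations * SYNODIC_MONTH
    print(f"{name}: {days:.2f} days (~{days / 365.25:.2f} years)")
```

Running it gives about 177.18 days (177 days and roughly 4 hours), 1387.94 days (about 3.8 years) and 6939.69 days, matching the figures quoted in the article.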
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Suberedamine** Suberedamine: Suberedamines are bio-active isolates of Suberea, a marine sponge. The compounds are brominated tyrosine dimer derivatives. Extra reading: Tsuda, Masashi; Sakuma, Yusuke; Kobayashi, Jun'ichi (July 2001). "Suberedamines A and B, New Bromotyrosine Alkaloids from a Sponge Suberea Species". Journal of Natural Products. 64 (7): 980–982. doi:10.1021/np010077g. PMID 11473442. Xiong, Fong. "Total Synthesis of Suberedamine a". acs.confex.com. Retrieved 15 November 2020. Cordell, Geoffrey A. (23 August 2005). The Alkaloids: Chemistry and Biology. Gulf Professional Publishing. p. 86. ISBN 978-0-12-469561-0.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mediastinum** Mediastinum: The mediastinum (from Medieval Latin: mediastinus, lit. 'midway';PL: mediastina) is the central compartment of the thoracic cavity. Surrounded by loose connective tissue, it is an undelineated region that contains a group of structures within the thorax, namely the heart and its vessels, the esophagus, the trachea, the phrenic and cardiac nerves, the thoracic duct, the thymus and the lymph nodes of the central chest. Anatomy: The mediastinum lies within the thorax and is enclosed on the right and left by pleurae. It is surrounded by the chest wall in front, the lungs to the sides and the spine at the back. It extends from the sternum in front to the vertebral column behind. It contains all the organs of the thorax except the lungs. It is continuous with the loose connective tissue of the neck. Anatomy: The mediastinum can be divided into an upper (or superior) and lower (or inferior) part: The superior mediastinum starts at the superior thoracic aperture and ends at the thoracic plane. Anatomy: The inferior mediastinum from this level to the diaphragm. This lower part is subdivided into three regions, all relative to the pericardium – the anterior mediastinum being in front of the pericardium, the middle mediastinum contains the pericardium and its contents, and the posterior mediastinum being behind the pericardium.Anatomists, surgeons, and clinical radiologists compartmentalize the mediastinum differently. For instance, in the radiological scheme of Felson, there are only three compartments (anterior, middle, and posterior), and the heart is part of the middle (inferior) mediastinum. Anatomy: Thoracic plane The transverse thoracic plane, thoracic plane, plane of Louis or plane of Ludwig is an important anatomical plane at the level of the sternal angle and the T4/T5 intervertebral disc. It serves as an imaginary boundary that separates the superior and inferior mediastinum.A number of important anatomical structures and transitions occur at the level of the thoracic plane, including: The carinal bifurcation of the trachea into the left and right main bronchi. Anatomy: The left recurrent laryngeal nerve branching off the left vagus nerve and hooking under the ligamentum arteriosum between the aortic arch above and the pulmonary trunk below. The starting of the cardiac plexus. The azygos vein arching over the right main bronchus and joining into the superior vena cava. The thoracic duct crossing the midline from right to left behind the esophagus The end of the pretracheal and prevertebral fasciae. Anatomy: Superior mediastinum The superior mediastinum is bounded: superiorly by the thoracic inlet, the upper opening of the thorax; inferiorly by the transverse thoracic plane. which is an imaginary plane passing from the sternal angle anteriorly to the lower border of the body of the 4th thoracic vertebra posteriorly; laterally by the pleurae; anteriorly by the manubrium of the sternum; posteriorly by the first four thoracic vertebral bodies. Anatomy: Inferior mediastinum Anterior inferior mediastinum Is bounded: laterally by the pleurae; posteriorly by the pericardium; anteriorly by the sternum , the left transversus thoracis and the fifth, sixth, and seventh left costal cartilages. Middle inferior mediastinum Bounded: pericardial sac – It contains the vital organs and is classified into the serous and fibrous pericardium. 
Anatomy: Posterior inferior mediastinum Is bounded: Anteriorly by (from above downwards): bifurcation of trachea; pulmonary vessels; fibrous pericardium and posterior sloping surface of diaphragm Inferiorly by the thoracic surface of the diaphragm (below); Superiorly by the transverse thoracic plane; Posteriorly by the bodies of the vertebral column from the lower border of the fifth to the twelfth thoracic vertebra (behind); Laterally by the mediastinal pleura (on either side). Clinical significance: The mediastinum is frequently the site of involvement of various tumors: Anterior mediastinum: substernal thyroid goiters, lymphoma, thymoma, and teratoma. Middle mediastinum: lymphadenopathy, metastatic disease such as from small cell carcinoma from the lung. Posterior mediastinum: Neurogenic tumors, either from the nerve sheath (mostly benign) or elsewhere (mostly malignant).Mediastinitis is inflammation of the tissues in the mediastinum, usually bacterial and due to rupture of organs in the mediastinum. As the infection can progress very quickly, this is a serious condition. Pneumomediastinum is the presence of air in the mediastinum, which in some cases can lead to pneumothorax, pneumoperitoneum, and pneumopericardium if left untreated. However, that does not always occur and sometimes those conditions are actually the cause, not the result, of pneumomediastinum. These conditions frequently accompany Boerhaave syndrome, or spontaneous esophageal rupture. Clinical significance: Widening Widened mediastinum/mediastinal widening is where the mediastinum has a width greater than 6 cm on an upright PA chest X-ray or 8 cm on supine AP chest film.A widened mediastinum can be indicative of several pathologies: aortic aneurysm aortic dissection aortic unfolding aortic rupture hilar lymphadenopathy anthrax inhalation - a widened mediastinum was found in 7 of the first 10 victims infected by anthrax (Bacillus anthracis) in 2001. Clinical significance: esophageal rupture - presents usually with pneumomediastinum and pleural effusion. It is diagnosed with water-soluble swallowed contrast. mediastinal mass mediastinitis cardiac tamponade pericardial effusion thoracic vertebrae fractures in trauma patients.
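The width thresholds given above for a widened mediastinum are simple enough to express as a decision rule. The sketch below is purely illustrative: the function name and structure are invented here, and real radiological assessment depends on technique, magnification and clinical context rather than a single cut-off.

```python
def mediastinum_widened(width_cm: float, film: str) -> bool:
    """Flag possible mediastinal widening using the thresholds cited above:
    > 6 cm on an upright PA chest X-ray, > 8 cm on a supine AP film.
    Illustrative only; not a clinical tool."""
    thresholds = {"upright_pa": 6.0, "supine_ap": 8.0}
    return width_cm > thresholds[film]

print(mediastinum_widened(7.2, "upright_pa"))  # True
print(mediastinum_widened(7.2, "supine_ap"))   # False
```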
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Astronomical spectroscopy** Astronomical spectroscopy: Astronomical spectroscopy is the study of astronomy using the techniques of spectroscopy to measure the spectrum of electromagnetic radiation, including visible light, ultraviolet, X-ray, infrared and radio waves that radiate from stars and other celestial objects. A stellar spectrum can reveal many properties of stars, such as their chemical composition, temperature, density, mass, distance and luminosity. Spectroscopy can show the velocity of motion towards or away from the observer by measuring the Doppler shift. Spectroscopy is also used to study the physical properties of many other types of celestial objects such as planets, nebulae, galaxies, and active galactic nuclei. Background: Astronomical spectroscopy is used to measure three major bands of radiation in the electromagnetic spectrum: visible light, radio waves, and X-rays. While all spectroscopy looks at specific bands of the spectrum, different methods are required to acquire the signal depending on the frequency. Ozone (O3) and molecular oxygen (O2) absorb light with wavelengths under 300 nm, meaning that X-ray and ultraviolet spectroscopy require the use of a satellite telescope or rocket mounted detectors.: 27  Radio signals have much longer wavelengths than optical signals, and require the use of antennas or radio dishes. Infrared light is absorbed by atmospheric water and carbon dioxide, so while the equipment is similar to that used in optical spectroscopy, satellites are required to record much of the infrared spectrum. Background: Optical spectroscopy Physicists have been looking at the solar spectrum since Isaac Newton first used a simple prism to observe the refractive properties of light. In the early 1800s Joseph von Fraunhofer used his skills as a glassmaker to create very pure prisms, which allowed him to observe 574 dark lines in a seemingly continuous spectrum. Soon after this, he combined telescope and prism to observe the spectrum of Venus, the Moon, Mars, and various stars such as Betelgeuse; his company continued to manufacture and sell high-quality refracting telescopes based on his original designs until its closure in 1884.: 28–29 The resolution of a prism is limited by its size; a larger prism will provide a more detailed spectrum, but the increase in mass makes it unsuitable for highly detailed work. This issue was resolved in the early 1900s with the development of high-quality reflection gratings by J.S. Plaskett at the Dominion Observatory in Ottawa, Canada.: 11  Light striking a mirror will reflect at the same angle, however a small portion of the light will be refracted at a different angle; this is dependent upon the indices of refraction of the materials and the wavelength of the light. By creating a "blazed" grating which utilizes a large number of parallel mirrors, the small portion of light can be focused and visualized. These new spectroscopes were more detailed than a prism, required less light, and could be focused on a specific region of the spectrum by tilting the grating.The limitation to a blazed grating is the width of the mirrors, which can only be ground a finite amount before focus is lost; the maximum is around 1000 lines/mm. In order to overcome this limitation holographic gratings were developed. Volume phase holographic gratings use a thin film of dichromated gelatin on a glass surface, which is subsequently exposed to a wave pattern created by an interferometer. 
This wave pattern sets up a reflection pattern similar to the blazed gratings but utilizing Bragg diffraction, a process where the angle of reflection is dependent on the arrangement of the atoms in the gelatin. The holographic gratings can have up to 6000 lines/mm and can be up to twice as efficient in collecting light as blazed gratings. Because they are sealed between two sheets of glass, the holographic gratings are very versatile, potentially lasting decades before needing replacement.Light dispersed by the grating or prism in a spectrograph can be recorded by a detector. Historically, photographic plates were widely used to record spectra until electronic detectors were developed, and today optical spectrographs most often employ charge-coupled devices (CCDs). The wavelength scale of a spectrum can be calibrated by observing the spectrum of emission lines of known wavelength from a gas-discharge lamp. The flux scale of a spectrum can be calibrated as a function of wavelength by comparison with an observation of a standard star with corrections for atmospheric absorption of light; this is known as spectrophotometry. Background: Radio spectroscopy Radio astronomy was founded with the work of Karl Jansky in the early 1930s, while working for Bell Labs. He built a radio antenna to look at potential sources of interference for transatlantic radio transmissions. One of the sources of noise discovered came not from Earth, but from the center of the Milky Way, in the constellation Sagittarius. In 1942, JS Hey captured the Sun's radio frequency using military radar receivers.: 26  Radio spectroscopy started with the discovery of the 21-centimeter H I line in 1951. Background: Radio interferometry Radio interferometry was pioneered in 1946, when Joseph Lade Pawsey, Ruby Payne-Scott and Lindsay McCready used a single antenna atop a sea cliff to observe 200 MHz solar radiation. Two incident beams, one directly from the sun and the other reflected from the sea surface, generated the necessary interference. The first multi-receiver interferometer was built in the same year by Martin Ryle and Vonberg. In 1960, Ryle and Antony Hewish published the technique of aperture synthesis to analyze interferometer data. The aperture synthesis process, which involves autocorrelating and discrete Fourier transforming the incoming signal, recovers both the spatial and frequency variation in flux. The result is a 3D image whose third axis is frequency. For this work, Ryle and Hewish were jointly awarded the 1974 Nobel Prize in Physics. Stars and their properties: Chemical properties Newton used a prism to split white light into a spectrum of color, and Fraunhofer's high-quality prisms allowed scientists to see dark lines of an unknown origin. In the 1850s, Gustav Kirchhoff and Robert Bunsen described the phenomena behind these dark lines. Hot solid objects produce light with a continuous spectrum, hot gases emit light at specific wavelengths, and hot solid objects surrounded by cooler gases show a near-continuous spectrum with dark lines corresponding to the emission lines of the gases.: 42–44  By comparing the absorption lines of the Sun with emission spectra of known gases, the chemical composition of stars can be determined. Stars and their properties: The major Fraunhofer lines, and the elements with which they are associated, appear in the following table. Designations from the early Balmer Series are shown in parentheses. Not all of the elements in the Sun were immediately identified. 
Two examples are listed below. Stars and their properties: In 1868 Norman Lockyer and Pierre Janssen independently observed a line next to the sodium doublet (D1 and D2), which Lockyer determined to be due to a new element. He named it helium, but it was not until 1895 that the element was found on Earth.: 84–85  In 1869 the astronomers Charles Augustus Young and William Harkness independently observed a novel green emission line in the Sun's corona during an eclipse. This "new" element was incorrectly named coronium, as it was only found in the corona. It was not until the 1930s that Walter Grotrian and Bengt Edlén discovered that the spectral line at 530.3 nm was due to highly ionized iron (Fe13+). Other unusual lines in the coronal spectrum are also caused by highly charged ions, such as nickel and calcium, the high ionization being due to the extreme temperature of the solar corona.: 87, 297 To date, more than 20 000 absorption lines have been listed for the Sun between 293.5 and 877.0 nm, yet only approximately 75% of these lines have been linked to elemental absorption.: 69 By analyzing the equivalent width of each spectral line in an emission spectrum, both the elements present in a star and their relative abundances can be determined. Using this information, stars can be categorized into stellar populations; Population I stars are the youngest stars and have the highest metal content (the Sun is a Pop I star), while Population III stars are the oldest stars with a very low metal content. Stars and their properties: Temperature and size In 1860 Gustav Kirchhoff proposed the idea of a black body, a material that emits electromagnetic radiation at all wavelengths. In 1894 Wilhelm Wien derived an expression relating the temperature (T) of a black body to its peak emission wavelength (λmax): λmax T = b, where b is a constant of proportionality called Wien's displacement constant, equal to 2.897771955...×10⁻³ m⋅K. This equation is called Wien's law. By measuring the peak wavelength of a star, the surface temperature can be determined. For example, if the peak wavelength of a star is 502 nm, the corresponding temperature will be 5778 kelvins. Stars and their properties: The luminosity of a star is a measure of the electromagnetic energy output in a given amount of time. Luminosity (L) can be related to the temperature (T) of a star by L = 4πR²σT⁴, where R is the radius of the star and σ is the Stefan–Boltzmann constant, with a value of 5.670374419...×10⁻⁸ W⋅m⁻²⋅K⁻⁴. Thus, when both luminosity and temperature are known (via direct measurement and calculation), the radius of a star can be determined. Galaxies: The spectra of galaxies look similar to stellar spectra, as they consist of the combined light of billions of stars. Galaxies: Doppler shift studies of galaxy clusters by Fritz Zwicky in 1937 found that the galaxies in a cluster were moving much faster than seemed to be possible from the mass of the cluster inferred from the visible light. Zwicky hypothesized that there must be a great deal of non-luminous matter in the galaxy clusters, which became known as dark matter. Since his discovery, astronomers have determined that a large portion of galaxies (and most of the universe) is made up of dark matter. 
In 2003, however, four galaxies (NGC 821, NGC 3379, NGC 4494, and NGC 4697) were found to have little to no dark matter influencing the motion of the stars contained within them; the reason behind the lack of dark matter is unknown. In the 1950s, strong radio sources were found to be associated with very dim, very red objects. When the first spectrum of one of these objects was taken, there were absorption lines at wavelengths where none were expected. It was soon realised that what was observed was a normal galactic spectrum, but highly redshifted. These were named quasi-stellar radio sources, or quasars, by Hong-Yee Chiu in 1964. Quasars are now thought to be galaxies formed in the early years of our universe, with their extreme energy output powered by super-massive black holes. The properties of a galaxy can also be determined by analyzing the stars found within it. NGC 4550, a galaxy in the Virgo Cluster, has a large portion of its stars rotating in the opposite direction from the other portion. It is believed that the galaxy is the combination of two smaller galaxies that were rotating in opposite directions to each other. Bright stars in galaxies can also help determine the distance to a galaxy, which may be a more accurate method than parallax or standard candles. Interstellar medium: The interstellar medium is matter that occupies the space between star systems in a galaxy. 99% of this matter is gaseous – hydrogen, helium, and smaller quantities of other ionized elements such as oxygen. The other 1% is dust particles, thought to be mainly graphite, silicates, and ices. Clouds of the dust and gas are referred to as nebulae. Interstellar medium: There are three main types of nebula: absorption, reflection, and emission nebulae. Absorption (or dark) nebulae are made of dust and gas in such quantities that they obscure the starlight behind them, making photometry difficult. Reflection nebulae, as their name suggests, reflect the light of nearby stars. Their spectra are the same as the stars surrounding them, though the light is bluer; shorter wavelengths scatter better than longer wavelengths. Emission nebulae emit light at specific wavelengths depending on their chemical composition. Interstellar medium: Gaseous emission nebulae In the early years of astronomical spectroscopy, scientists were puzzled by the spectrum of gaseous nebulae. In 1864 William Huggins noticed that many nebulae showed only emission lines rather than a full spectrum like stars. From the work of Kirchhoff, he concluded that nebulae must contain "enormous masses of luminous gas or vapour." However, there were several emission lines that could not be linked to any terrestrial element, brightest among them lines at 495.9 nm and 500.7 nm. These lines were attributed to a new element, nebulium, until Ira Bowen determined in 1927 that the emission lines were from highly ionised oxygen (O2+). These emission lines could not be replicated in a laboratory because they are forbidden lines; the low density of a nebula (one atom per cubic centimetre) allows for metastable ions to decay via forbidden line emission rather than collisions with other atoms. Not all emission nebulae are found around or near stars, where radiation from the stars causes ionisation. The majority of gaseous emission nebulae are formed of neutral hydrogen. In the ground state neutral hydrogen has two possible spin states: the electron has either the same spin or the opposite spin of the proton. 
When the atom transitions between these two states, it releases an emission or absorption line of 21 cm. This line is within the radio range and allows for very precise measurements: the velocity of the cloud can be measured via its Doppler shift, the intensity of the 21 cm line gives the density and number of atoms in the cloud, and the temperature of the cloud can be calculated. Using this information, the shape of the Milky Way has been determined to be a spiral galaxy, though the exact number and position of the spiral arms is the subject of ongoing research. Interstellar medium: Complex molecules Dust and molecules in the interstellar medium not only obscure photometry, but also cause absorption lines in spectroscopy. Their spectral features are generated by transitions of component electrons between different energy levels, or by rotational or vibrational spectra. Detection usually occurs in radio, microwave, or infrared portions of the spectrum. The chemical reactions that form these molecules can happen in cold, diffuse clouds or in dense regions illuminated with ultraviolet light. Most known compounds in space are organic, ranging from small molecules, e.g. acetylene (C2H2) and acetone ((CH3)2CO); to entire classes of large molecules, e.g. fullerenes and polycyclic aromatic hydrocarbons; to solids, such as graphite or other sooty material. Motion in the universe: Stars and interstellar gas are bound by gravity to form galaxies, and groups of galaxies can be bound by gravity in galaxy clusters. With the exception of stars in the Milky Way and the galaxies in the Local Group, almost all galaxies are moving away from Earth due to the expansion of the universe. Motion in the universe: Doppler effect and redshift The motion of stellar objects can be determined by looking at their spectrum. Because of the Doppler effect, objects moving towards an observer are blueshifted, and objects moving away are redshifted. The wavelength of redshifted light is longer, appearing redder than the source. Conversely, the wavelength of blueshifted light is shorter, appearing bluer than the source light: (λ − λ0)/λ0 = v0/c, where λ0 is the emitted wavelength, v0 is the velocity of the object, c is the speed of light, and λ is the observed wavelength. Note that v0 < 0 corresponds to λ < λ0, a blueshifted wavelength. A redshifted absorption or emission line will appear more towards the red end of the spectrum than a stationary line. In 1913 Vesto Slipher determined that the Andromeda Galaxy was blueshifted, meaning it was moving towards the Milky Way. He recorded the spectra of 20 other galaxies — all but four of which were redshifted — and was able to calculate their velocities relative to the Earth. Edwin Hubble would later use this information, as well as his own observations, to define Hubble's law: the further a galaxy is from the Earth, the faster it is moving away. Hubble's law can be generalised to v = H0d, where v is the velocity (or Hubble flow), H0 is the Hubble constant, and d is the distance from Earth. Motion in the universe: Redshift (z) can be expressed by the equations z = (λ − λ0)/λ0 and z = (f0 − f)/f, where frequency is denoted by f, wavelength by λ, and the subscript 0 denotes the emitted (rest) values. The larger the value of z, the more redshifted the light and the farther away the object is from the Earth. 
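As a rough illustration of how the Doppler and Hubble relations above are used in practice, the following Python sketch turns an observed line shift into a redshift, a recession velocity, and a Hubble-law distance. The Hubble constant of 70 km/s/Mpc and the example wavelengths are assumptions chosen for the example, not values taken from the text.

```python
# Minimal sketch (illustrative, not from the article): redshift, recession
# velocity and Hubble-law distance from an observed spectral line shift.
# The H0 value and the example wavelengths below are assumptions.

C_KM_S = 299_792.458          # speed of light in km/s
H0 = 70.0                     # assumed Hubble constant in km/s/Mpc

def redshift(lambda_observed_nm: float, lambda_rest_nm: float) -> float:
    """z = (observed - rest) / rest; z > 0 is a redshift, z < 0 a blueshift."""
    return (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

def recession_velocity_km_s(z: float) -> float:
    """Low-z (non-relativistic) approximation v = c * z."""
    return C_KM_S * z

def hubble_distance_mpc(v_km_s: float) -> float:
    """Hubble's law d = v / H0, ignoring any peculiar velocity."""
    return v_km_s / H0

if __name__ == "__main__":
    # H-alpha (rest 656.3 nm) observed at 663.0 nm -- illustrative numbers only.
    z = redshift(663.0, 656.3)
    v = recession_velocity_km_s(z)
    print(f"z = {z:.4f}, v = {v:.0f} km/s, d = {hubble_distance_mpc(v):.1f} Mpc")
```

For the assumed shift this prints roughly z ≈ 0.010, v ≈ 3,060 km/s and d ≈ 44 Mpc; the linear v = cz approximation only holds for small z.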
As of January 2013, the largest galaxy redshift of z~12 was found using the Hubble Ultra-Deep Field, corresponding to an age of over 13 billion years (the universe is approximately 13.82 billion years old). The Doppler effect and Hubble's law can be combined to form the equation z ≈ H0d/c, where c is the speed of light. Motion in the universe: Peculiar motion Objects that are gravitationally bound will rotate around a common center of mass. For stellar bodies, this motion is known as peculiar velocity, and can alter the Hubble flow. Thus, an extra term for the peculiar motion needs to be added to Hubble's law: vtotal = H0d + vpec. This motion can cause confusion when looking at a solar or galactic spectrum, because the expected redshift based on the simple Hubble law will be obscured by the peculiar motion. For example, the shape and size of the Virgo Cluster have been a matter of great scientific scrutiny due to the very large peculiar velocities of the galaxies in the cluster. Motion in the universe: Binary stars Just as planets can be gravitationally bound to stars, pairs of stars can orbit each other. Some binary stars are visual binaries, meaning they can be observed orbiting each other through a telescope. Some binary stars, however, are too close together to be resolved. These two stars, when viewed through a spectrometer, will show a composite spectrum: the spectrum of each star will be added together. This composite spectrum becomes easier to detect when the stars are of similar luminosity and of different spectral class. Spectroscopic binaries can also be detected due to their radial velocity; as they orbit around each other, one star may be moving towards the Earth whilst the other moves away, causing a Doppler shift in the composite spectrum. The orbital plane of the system determines the magnitude of the observed shift: if the observer is looking perpendicular to the orbital plane, there will be no observed radial velocity. For example, a person looking at a carousel from the side will see the animals moving toward and away from them, whereas if they look from directly above, the animals will only appear to move in the horizontal plane. Planets, asteroids, and comets: Planets, asteroids, and comets all reflect light from their parent stars and emit their own light. For cooler objects, including Solar System planets and asteroids, most of the emission is at infrared wavelengths that we cannot see but that are routinely measured with spectrometers. For objects surrounded by gas, such as comets and planets with atmospheres, further emission and absorption happens at specific wavelengths in the gas, imprinting the spectrum of the gas on that of the solid object. In the case of worlds with thick atmospheres or complete cloud cover (such as the gas giants, Venus, and Saturn's satellite Titan), the spectrum is mostly or completely due to the atmosphere alone. Planets, asteroids, and comets: Planets The reflected light of a planet contains absorption bands due to minerals in the rocks present on rocky bodies, or due to the elements and molecules present in the atmosphere. To date, over 3,500 exoplanets have been discovered. These include so-called Hot Jupiters, as well as Earth-like planets. Using spectroscopy, compounds such as alkali metals, water vapor, carbon monoxide, carbon dioxide, and methane have all been discovered. Planets, asteroids, and comets: Asteroids Asteroids can be classified into three major types according to their spectra. The original categories were created by Clark R. 
Chapman, David Morrison, and Ben Zellner in 1975, and further expanded by David J. Tholen in 1984. In what is now known as the Tholen classification, the C-types are made of carbonaceous material, S-types consist mainly of silicates, and X-types are 'metallic'. There are other classifications for unusual asteroids. C- and S-type asteroids are the most common asteroids. In 2002 the Tholen classification was further "evolved" into the SMASS classification, expanding the number of categories from 14 to 26 to account for more precise spectroscopic analysis of the asteroids. Planets, asteroids, and comets: Comets The spectra of comets consist of a reflected solar spectrum from the dusty clouds surrounding the comet, as well as emission lines from gaseous atoms and molecules excited to fluorescence by sunlight and/or chemical reactions. For example, the chemical composition of Comet ISON was determined by spectroscopy due to the prominent emission lines of cyanogen (CN), as well as two- and three-carbon atoms (C2 and C3). Nearby comets can even be seen in X-ray as solar wind ions flying to the coma are neutralized. The cometary X-ray spectra therefore reflect the state of the solar wind rather than that of the comet.
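As a compact illustration of the temperature and size relations from the stellar-properties section above, here is a minimal Python sketch that applies Wien's law and the Stefan–Boltzmann law. The 502 nm peak wavelength echoes the example in the text, while the solar-type luminosity is an assumed value added for illustration.

```python
# Minimal sketch of the stellar temperature/size relations described earlier:
# Wien's law gives the surface temperature from the peak wavelength, and the
# Stefan-Boltzmann law then gives the radius once the luminosity is known.
# The example peak wavelength and luminosity are illustrative assumptions.
import math

WIEN_B = 2.897771955e-3       # Wien's displacement constant, m*K
SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_peak(lambda_max_m: float) -> float:
    """Wien's law: lambda_max * T = b, solved for T."""
    return WIEN_B / lambda_max_m

def radius_from_luminosity(luminosity_w: float, temperature_k: float) -> float:
    """Stefan-Boltzmann law: L = 4 * pi * R^2 * sigma * T^4, solved for R."""
    return math.sqrt(luminosity_w / (4.0 * math.pi * SIGMA * temperature_k**4))

if __name__ == "__main__":
    T = temperature_from_peak(502e-9)          # 502 nm peak, as in the text
    R = radius_from_luminosity(3.828e26, T)    # assumed solar-like luminosity
    print(f"T ~ {T:.0f} K, R ~ {R:.3e} m")
```

With these inputs it returns a temperature of roughly 5,770 K and a radius of about 7×10⁸ m, close to solar values.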
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chalcogel** Chalcogel: A chalcogel, or more properly a metal chalcogenide aerogel, is an aerogel made from chalcogenides. Chalcogels preferentially absorb heavy metals, such as mercury, lead, and cadmium, from water. Sulfide chalcogels are also very good at desulfurization. Metal chalcogenide aerogels can be prepared by thiolysis or nanoparticle condensation and contain crystalline nanoparticles in the structure. The synthetic method can be extended to many thioanions, including tetrathiomolybdate-based chalcogels. Different metal ions have been used as linkers, including Co2+, Ni2+, Pb2+, Cd2+, Bi3+, and Cr3+. When the gels are dried, aerogels with high surface areas are obtained, and the resulting materials are multifunctional. For example, chalcogels are especially promising for gas separation. They have been reported to exhibit high selectivity for adsorbing CO2 and C2H6 over H2 and CH4. This is relevant to the exit-gas composition of the water-gas shift and steam reforming reactions (reactions widely used for H2 production). For example, separation of gas pairs such as CO2/H2, CO2/CH4, and CO2/N2 is a key step in pre-combustion CO2 capture, natural gas sweetening, and post-combustion CO2 capture, ultimately leading to upgrading of the raw gas. This conditioning makes the gas suitable for a number of applications in fuel cells. Chalcogel: Chalcogels have also been shown to be very effective at capturing radionuclides from nuclear waste, such as 99Tc and 238U, and especially 129I.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Galactic plane** Galactic plane: The galactic plane is the plane on which the majority of a disk-shaped galaxy's mass lies. The directions perpendicular to the galactic plane point to the galactic poles. In actual usage, the terms galactic plane and galactic poles usually refer specifically to the plane and poles of the Milky Way, in which Planet Earth is located. Galactic plane: Some galaxies are irregular and do not have any well-defined disk. Even in the case of a barred spiral galaxy like the Milky Way, defining the galactic plane is slightly imprecise and arbitrary since the stars are not perfectly coplanar. In 1959, the IAU defined the position of the Milky Way's north galactic pole as exactly RA = 12h 49m , Dec = 27° 24′ in the then-used B1950 epoch; in the currently-used J2000 epoch, after precession is taken into account, its position is RA 12h 51m 26.282s, Dec 27° 07′ 42.01″. This position is in Coma Berenices, near the bright star Arcturus; likewise, the south galactic pole lies in the constellation Sculptor. Galactic plane: The "zero of longitude" of galactic coordinates was also defined in 1959 to be at position angle 123° from the north celestial pole. Thus the zero longitude point on the galactic equator was at 17h 42m 26.603s, −28° 55′ 00.445″ (B1950) or 17h 45m 37.224s, −28° 56′ 10.23″ (J2000), and its J2000 position angle is 122.932°. The Galactic Center is located at position angle 31.72° (B1950) or 31.40° (J2000) east of north.
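As a worked illustration of how the pole and zero-longitude definitions above are applied, the following Python sketch converts J2000 equatorial coordinates to galactic coordinates using the standard spherical-trigonometry rotation. The constants are the J2000 values quoted in the text (rounded to degrees), and the example input is the J2000 Galactic Center position given above.

```python
# Minimal sketch (an illustration, not from the article text): converting
# J2000 equatorial coordinates (RA, Dec) to galactic coordinates (l, b)
# using the pole and position-angle values quoted above, converted to degrees.
import math

RA_NGP = math.radians(192.8595)   # J2000 RA of the north galactic pole
DEC_NGP = math.radians(27.1283)   # J2000 Dec of the north galactic pole
L_NCP = math.radians(122.932)     # galactic longitude of the north celestial pole

def equatorial_to_galactic(ra_deg: float, dec_deg: float):
    """Return (l, b) in degrees for a J2000 (RA, Dec) position given in degrees."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    sin_b = (math.sin(DEC_NGP) * math.sin(dec)
             + math.cos(DEC_NGP) * math.cos(dec) * math.cos(ra - RA_NGP))
    b = math.asin(sin_b)
    y = math.cos(dec) * math.sin(ra - RA_NGP)
    x = (math.cos(DEC_NGP) * math.sin(dec)
         - math.sin(DEC_NGP) * math.cos(dec) * math.cos(ra - RA_NGP))
    l = (L_NCP - math.atan2(y, x)) % (2.0 * math.pi)
    return math.degrees(l), math.degrees(b)

if __name__ == "__main__":
    # The J2000 Galactic Center position from the text should map to l ~ 0, b ~ 0.
    print(equatorial_to_galactic(266.40510, -28.93618))
```

Running it on the Galactic Center position returns values very close to l = 0°, b = 0°, as expected; the small residuals come from the rounded constants.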
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tolpiprazole** Tolpiprazole: Tolpiprazole (INN, BAN) (developmental code name H-4170) is an anxiolytic drug of the phenylpiperazine group that was never marketed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lumican** Lumican: Lumican, also known as LUM, is an extracellular matrix protein that, in humans, is encoded by the LUM gene on chromosome 12. Structure: Lumican is a proteoglycan Class II member of the small leucine-rich proteoglycan (SLRP) family that includes decorin, biglycan, fibromodulin, keratocan, epiphycan, and osteoglycin. Like the other SLRPs, lumican has a molecular weight of about 40 kilodaltons and has four major intramolecular domains: a signal peptide of 16 amino acid residues; a negatively-charged N-terminal domain containing sulfated tyrosine and disulfide bond(s); ten tandem leucine-rich repeats allowing lumican to bind to other extracellular components such as collagen; and a carboxyl terminal domain of 50 amino acid residues containing two conserved cysteines 32 residues apart. There are four N-linked sites within the leucine-rich repeat domain of the protein core that can be substituted with keratan sulfate. The core protein of lumican (like decorin and fibromodulin) is horseshoe shaped. This enables it to bind to collagen molecules within a collagen fibril, thus helping keep adjacent fibrils apart. Function: Lumican is a major keratan sulfate proteoglycan of the cornea but is ubiquitously distributed in most mesenchymal tissues throughout the body. Lumican is involved in collagen fibril organization and circumferential growth, corneal transparency, and epithelial cell migration and tissue repair. Corneal transparency is possible due to the exact alignment of collagen fibers by lumican (and keratocan) in the intrafibrillar space. Clinical significance: Mice that have the lumican gene knocked out (Lum-/-) develop opacities of the cornea in both eyes and fragile skin. The lumican (LUM) gene was thought to be a candidate susceptibility gene for high myopia; however, a meta-analysis showed no association between LUM polymorphism and high myopia susceptibility in any of the genetic models studied. Lum knockout mice also have abnormal collagen in their heart tissue, with fewer and thicker fibrils. Mice deficient in both lumican and fibromodulin develop severe tendinopathy (tendon pathology), revealing the importance of these SLRPs in the development of correctly sized and aligned collagen fibers in tendon. Along with other extracellular matrix components, lumican expression was increased in equine flexor tendons six weeks after an injury. Lumican is present in the extracellular matrix of uterine tissues in fertile women. There is an increase in lumican during the transition from the proliferative to the secretory phase of the endometrium. In menopausal endometrial tissue, the level of lumican expression decreases and is also lower in pathological than in normal endometrium. Clinical significance: Lumican is highly expressed in pleural effusions (lung fluid) of patients with adenocarcinoma. Its expression was low in cancer cells but high in the extracellular matrix surrounding the tumor. Lumican expression was not associated with tumor grade or stage. In about half the patients with pancreatic ductal adenocarcinoma tested, lumican in the extracellular matrix around the tumor was associated with a reduction in metastatic recurrence after surgery and with three-fold longer survival than in patients without stromal lumican. As lumican can directly bind to and inhibit matrix metalloproteinase-14 (MMP14), lumican may limit tumor progression by preventing extracellular matrix collagen proteolysis by this enzyme.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shopping list** Shopping list: A shopping list is a list of items that need to be purchased by a shopper. Consumers often compile a shopping list of groceries to purchase on the next visit to the grocery store (a grocery list). There are surviving examples of Roman and Bible-era shopping lists. Shopping list: The shopping list itself may be simply a scrap piece of paper or something more elaborate. There are pads with magnets for keeping an incremental list available at the home, typically on the refrigerator, but any magnetic clip with scraps of paper can be used to achieve the same result. There is even a specific device that dispenses a strip of paper from a roll for use as a shopping list. Some shopping carts come with a small clipboard to fit shopping lists on. Psychology: Use of shopping lists may be correlated with personality types. There are "demographic differences between list and non list shoppers; the former are more likely to be female, while the latter are more likely to be childless." Remembering a shopping list is a standard experiment in psychology. Shopping with a list is a commonly employed behavioral weight loss guideline designed to reduce food purchases and therefore food consumption. Studies are divided on the effectiveness of this technique. Some studies show approximately 40% of grocery shoppers use shopping lists, while other studies show 61–67% use lists. Of the items listed, 80% were purchased. However, listed items only accounted for 40% of total items purchased. Use of shopping lists clearly impacts shopping behaviour: "Written shopping lists significantly reduce average expenditure." Incremental lists: The list may be compiled immediately before the shopping trip or incrementally as shopping needs arise throughout the week. Incremental lists typically have no structure and new items are added to the bottom of the list as they come up. If the list is compiled immediately before use, it can be organized by store layout (e.g. frozen foods are grouped together on the list) to minimize time in the store. Preprinted lists can be similarly organized.
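To illustrate the store-layout organization just described, here is a small, purely illustrative Python sketch that groups an unstructured incremental list by store section; the section names, aisle order, and example items are assumptions, not anything prescribed by the article.

```python
# Illustrative sketch (not from the article): reorder an incremental shopping
# list by store section so items in the same aisle appear together.
from collections import defaultdict

STORE_LAYOUT = ["produce", "dairy", "frozen", "household"]  # assumed aisle order

def organize_by_layout(items: dict) -> list:
    """Group item -> section pairs into the store's assumed aisle order."""
    grouped = defaultdict(list)
    for item, section in items.items():
        grouped[section].append(item)
    ordered = []
    for section in STORE_LAYOUT:
        ordered.extend(sorted(grouped.get(section, [])))
    return ordered

if __name__ == "__main__":
    incremental_list = {"milk": "dairy", "peas": "frozen", "apples": "produce",
                        "soap": "household", "ice cream": "frozen"}
    print(organize_by_layout(incremental_list))
```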
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jersey (clothing)** Jersey (clothing): Traditionally, a jersey is an item of knitted clothing, generally made of wool or cotton, with sleeves, worn as a pullover, as it does not open at the front, unlike a cardigan. It is usually close-fitting and machine knitted in contrast to a guernsey that is more often hand knit with a thicker yarn. The word is usually used interchangeably with sweater.Alternatively, the shirt worn by members of a sports team as part of the team uniform is also referred to as a jersey. Etymology: Jersey, in the Channel Islands, was famous for its knitting trade in medieval times, and because of that original fame, the name "jersey" is still applied to many forms of knitted fabric, which transferred to the garments made from the fabric. In sports: A sports jersey is a shirt worn by members of a team to identify their affiliation with the team. Jerseys identify their wearers' names and/or numbers, generally showing the colors and logo of the team. Numbers are frequently used to identify players, since uniforms give players a similar appearance. A jersey may also include the logo of the team's sponsor. In sports: Examples A cycling jersey is a specialised jersey designed to be used in road cycling. Cycling jerseys are usually made of synthetic microfiber material to aid in wicking sweat away from the skin to allow it to evaporate. Specific colours or patterns represent certain statuses in these races, such as the yellow jersey of the leader of the general classification in the Tour de France, or the rainbow jersey for the world champion. In sports: The main garment of an ice hockey uniform, which was traditionally called a sweater, is increasingly known as a hockey jersey. Basketball jerseys are usually sleeveless. Baseball jerseys are usually button up. In Australian rules football, the player's shirt is known as a "guernsey".Other examples are the third jersey, hockey jersey, basketball uniform, baseball uniform and gridiron football uniform.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dinonylnaphthylsulfonic acid** Dinonylnaphthylsulfonic acid: Dinonylnaphthylsulfonic acid (DINNSA) is an organic chemical, an aryl sulfonic acid. Its melting point is 259.5 °C and its boiling point is 600.4 °C. It has very low water solubility. It is a moderate skin irritant and a strong eye irritant. It has low volatility and vapor pressure and is stable above 100 °C. Dinonylnaphthylsulfonic acid is used as an additive in industrial lubricants, greases, cutting fluids, industrial coatings, and corrosion inhibitors. Its calcium and barium salts (CAS numbers 57855-77-3 and 25619-56-1, respectively) have generally the same use. Dinonylnaphthylsulfonic acid: Dinonylnaphthylsulfonic acid is a component of Stadis 450 which is an antistatic agent added to distillate fuels, solvents, commercial jet fuels, and to the military JP-8 fuel to increase the electrical conductivity of the fluid. Fluids with increased conductivity more readily dissipate static charges to mitigate the risk of explosions or fires due to Static Discharge Ignitions Dinonylnaphthylsulfonic acid by itself does not function as an anti-static additive. Dinonylnaphthylsulfonic acid: Dinonylnaphthylsulfonic acid is prepared by reaction of naphthalene with nonene, yielding diisononylnaphthalene. Diisononylnaphthalene then undergoes sulfonation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Apolipoprotein L1** Apolipoprotein L1: Apolipoprotein L1 is a protein that in humans is encoded by the APOL1 gene. Two transcript variants encoding two different isoforms have been found for this gene. Species distribution: This gene is found only in humans, African green monkeys, and gorillas. Structure: The gene that encodes the APOL1 protein is 14,522 base pairs long and found on the human chromosome 22, on the long arm at position 13.1 from base pair 36,253,070 to base pair 36,267,530.The protein is a 398 amino acid protein. It consists of 5 functional domains: S domain-secretory signal MAD (membrane-addressing domain)-ph sensor and regulator of cell death BH3 domain - associated with programmed cell death PFD (pore forming domain) SRA (serum resistance-associated binding domain)- confers resistance to Trypanosoma brucei Mutations Two coding variants, G1 and G2, have been recently identified with relevance to human phenotypes. The G1 is a pair of two non-synonymous single nucleotide polymorphisms (SNPs) in almost complete linkage disequilibrium. G2 is an in-frame deletion of the two amino acid residues, N388 and Y389. Function: Apolipoprotein L1 (apoL1) is a minor apolipoprotein component of HDL cholesterol which is synthesized in the liver and also in many other tissues, including pancreas, kidney, and brain. APOL1 is found in vascular endothelium, liver, heart, lung, placenta, podocytes, proximal tubules, and arterial cells. The protein as a secreted form that allows it to circulate in the blood. It forms a complex, known as a trypanosome lytic factor (TLF), with high-density lipoprotein 3 (HDL3) particles that also contain apolipoprotein A1 (APOA1) and the hemoglobin-binding, haptoglobin-related protein (HPR). The APOL1 protein acts as the main lytic component in this complex. Once uptaken by the trypanosome, the complex is trafficked to acidic endosomes, where the APOL1 protein may insert into the endosomal membrane. If the endosome is then recycled to the plasma membrane, where it encounters neutral pH conditions, APOL1 may form cation-selective channels.APOL1 is a member of a family of apolipoproteins which also includes six other proteins and it is a member of bcl2 genes which are involved in autophagic cell death. In fact an overabundance of APOL1 within a cell results in autophagy.APOL1 may play a role in the inflammatory response. Pro-inflammatory cytokines interferon-γ(IFN), tumor necrosis factor-α (TNF-α) and p53 can increase the expression of APOL1.APOL1 has a role in innate immunity by protecting against Trypanosoma brucei infection, which is a parasite transmitted by the tsetse fly. Trypanosomes endocytose the secreted form of APOL1; APOL1 forms pores on the lysosomal membranes of the trypanosomes which causes in influx of chloride, swelling of the lysosome and lysis of the trypanosome. Clinical significance: African trypanosomiasis (sleeping sickness) Although its intracellular function has not been elucidated, apoL1 circulating in plasma has the ability to kill the trypanosome Trypanosoma brucei that causes sleeping sickness. Recently, two coding sequence variants in APOL1 have been shown to associate with kidney disease in a recessive fashion while at the same time conferring resistance against Trypanosoma brucei rhodesiense. This resistance is due, in part, to decreased binding of the G1 and G2 APOL1 variants to the T. b. rhodesiense virulence factor, serum resistance-associated protein (SRA) as a result of the C-terminal polymorphisms. 
People who have at least one copy of either the G1 or G2 variant are resistant to infection by trypanosomes, but people who have two copies of either variant are at an increased risk of developing a non-diabetic kidney disease. Clinical significance: Kidney disease The distribution of the variants most associated with kidney disease risk was analyzed in African populations and found to be more prevalent in western compared to northeastern African populations and absent in Ethiopia, consistent with the reported protection from forms of kidney disease known to be associated with the APOL1 variants. In the Yoruba people of Nigeria (West Africa), the prevalence of G1 and G2 risk alleles are 40% and 8% respectively. African nations with high frequencies of APOL1 risk alleles also have large populations of Trypanosomes suggesting that the risk alleles underwent positive selection as a defense mechanism. The existence of these variants are only found on African chromosomes and exist in people with recent African ancestry (<10,000 years). Clinical significance: Many African Americans are descendants of people of West African nations and consequently, also have a high prevalence of APOL1 risk alleles as well as APOL1 associated kidney diseases. The frequency of the risk alleles in African Americans is more than 30%. The existence of these alleles has been shown to increase the risk of developing diseases such as Focal Segmental Glomerulosclerosis(FSGS), Hypertension Attributed-End Stage Kidney Disease (ESKD), and HIV-Associated Nephropathy(HIVAN). Clinical significance: The prevalence of the risk alleles in African Americans with these kidney diseases shown in recent studies are 67% in HIVAN, 66% in FSGS, and 47% in hypertension-attributed ESKD. Hispanic populations such as Dominicans and Puerto Ricans demonstrate a mixture of genetic influences that include African ancestry resulting in a prevalence of the APOL1 variants as well. Studies have also determined the prevalence of each individual allele in FSGS cases as well. Clinical significance: Focal segmental glomerulosclerosis (FSGS) The prevalence of the G1 risk allele in African Americans with FSGS is 52% and 18-23% in those without FSGS. The prevalence of the G2 risk allele in African Americans with FSGS is 23% and 15% in those without FSGS. FSGS is a kidney disease that affects younger individuals therefore, its effects are slightly different from the effects of general non-diabetic ESKD. In a recent study, the mean ages of onset of FSGS for African Americans with 2, 1, and 0 APOL1 risk alleles was 32yrs, 36yrs and 39yrs, respectively. APOL1 variants also have a tendency to manifest FSGS at relatively young ages; FSGS begins between the ages of 15 and 39 in 70% of individuals with two APOL1 risk alleles and 42% of individuals with of 0 or 1 risk alleles. Clinical significance: Pathogenesis Although possession of the APOL1 risk variants increases susceptibility to non-diabetic kidney disease, not all people who possess these variants develop kidney disease, which indicates another factor may initiate progression of kidney disease. Similarly in HIV positive patients, although the majority of African-American patients with HIVAN have two APOL1 risk alleles other as yet unknown factors in the host, including genetic risk variants and environmental or viral factors, may influence the development of this disorder in those with zero or one APOL1 risk allele. Kidney Int. 2012 Aug;82(3):338-43. 
The African American population has a total lifetime risk of developing FSGS of 0.8%. For those with 0 risk alleles the risk of developing FSGS is 0.2%, 0.3% with 1 risk allele, 4.25% with 2 risk alleles and a 50% chance of developing HIVAN for untreated HIV infected individuals.People with these allelic variants who develop ESKD begin dialysis at an earlier age than ESKD patients without the risk alleles. On average, those with two risk alleles begin dialysis approximately 10 years earlier than ESKD patients without the risk variants. The mean ages of initiation of dialysis of African American ESKD patients with two risk alleles, one risk allele, or no risk alleles are approximately 48yrs, 53yrs, and 58 yrs, respectively. Compared to African American ESKD patients, Hispanic ESKD patients with two APOL1 risk variants start dialysis at an earlier age, 41 yrs. Clinical significance: Although, the age of initiation of dialysis is earlier with one risk allele this effect is only seen in those with the G1 variant. In a study, ~96% of patients with two risk alleles started dialysis before the age of 75 compared to 94% for G1 heterozygotes, and 84% for those with no risk alleles.Kidneys from donors containing two APOL1 variants experience allograft failure more rapidly than donors with 0 or 1 variants. Kidney recipients who have copies of the APOL1 risk variants, but do not receive kidneys from donors with the risk variants do not have decreased survival rates of the donated kidneys. These observations together suggest that the genotype of the donor only affects allograft survival.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ADP deaminase** ADP deaminase: In enzymology, an ADP deaminase (EC 3.5.4.7) is an enzyme that catalyzes the chemical reaction ADP + H2O ⇌ IDP + NH3Thus, the two substrates of this enzyme are ADP and H2O, whereas its two products are IDP and NH3. This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in cyclic amidines. The systematic name of this enzyme class is ADP aminohydrolase. Other names in common use include adenosine diphosphate deaminase, and adenosinepyrophosphate deaminase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Videostream** Videostream: Videostream was an application that enabled the streaming of video, music, and image files wirelessly to Google's Chromecast. Initially released as a standalone Google Chrome application, it was later transformed into a browser tab following Google's decision to cease support for the Chrome App Store. As of February 2021, the drag-and-drop feature, along with several other functionalities, were removed. Videostream was designed to streamline the process of video streaming to Chromecast. The application was capable of transcoding audio and video of incompatible files into a format supported by Chromecast. Several technology news outlets, including Engadget, Lifehacker and Tekzilla, gave positive feedback on the software's user-friendly interface and its ability to handle various video formats. Technology: Videostream utilized several open-source libraries, including the FFmpeg project. Major Features: Video Casting Videostream supported over 400 different audio and video codecs, offering multiple quality settings to allow playback under different network conditions. However, users have reported issues with casting to Chromecast when using AVG and Avast internet virus software. Mobile File Browsing In February 2015, Videostream introduced a media library feature that allowed file browsing from mobile devices. Monetization: Videostream was available free of charge, although a premium version was offered for additional features. The reliability and support for this premium version have been subjects of user feedback.
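The article does not describe Videostream's actual pipeline, but the kind of FFmpeg-based transcoding it refers to can be sketched as follows. This is a hypothetical example of converting an arbitrary input file to an H.264/AAC MP4, a combination Chromecast devices can play; it is not Videostream's own code or settings.

```python
# Hypothetical sketch of the kind of FFmpeg-based transcoding step the article
# describes; NOT Videostream's actual implementation, just a generic example of
# re-encoding a file into an H.264 + AAC MP4 suitable for Chromecast playback.
import subprocess

def transcode_for_chromecast(src: str, dst: str) -> None:
    """Re-encode src into an H.264 + AAC MP4 container using the ffmpeg CLI."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,                  # input file in any FFmpeg-readable format
            "-c:v", "libx264",          # H.264 video, widely supported by Chromecast
            "-c:a", "aac",              # AAC audio
            "-movflags", "+faststart",  # move metadata to the front for streaming
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    transcode_for_chromecast("input.mkv", "output.mp4")  # example file names
```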
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Symbiosis** Symbiosis: Symbiosis (from Greek συμβίωσις, symbíōsis, "living together", from σύν, sýn, "together", and βίωσις, bíōsis, "living") is any type of a close and long-term biological interaction between two biological organisms of different species, termed symbionts, be it mutualistic, commensalistic, or parasitic. In 1879, Heinrich Anton de Bary defined it as "the living together of unlike organisms". The term is sometimes used in the more restricted sense of a mutually beneficial interaction in which both symbionts contribute to each other's support.Symbiosis can be obligatory, which means that one or more of the symbionts depend on each other for survival, or facultative (optional), when they can generally live independently. Symbiosis: Symbiosis is also classified by physical attachment. When symbionts form a single body it is called conjunctive symbiosis, while all other arrangements are called disjunctive symbiosis. When one organism lives on the surface of another, such as head lice on humans, it is called ectosymbiosis; when one partner lives inside the tissues of another, such as Symbiodinium within coral, it is termed endosymbiosis. Definition: The definition of symbiosis was a matter of debate for 130 years. In 1877, Albert Bernhard Frank used the term symbiosis to describe the mutualistic relationship in lichens. In 1878, the German mycologist Heinrich Anton de Bary defined it as "the living together of unlike organisms". The definition has varied among scientists, with some advocating that it should only refer to persistent mutualisms, while others thought it should apply to all persistent biological interactions (in other words, to mutualism, commensalism, and parasitism, but excluding brief interactions such as predation). In the 21st century, the latter has become the definition widely accepted by biologists.In 1949, Edward Haskell proposed an integrative approach with a classification of "co-actions", later adopted by biologists as "interactions". Definition: Obligate versus facultative Relationships can be obligate, meaning that one or both of the symbionts entirely depend on each other for survival. For example, in lichens, which consist of fungal and photosynthetic symbionts, the fungal partners cannot live on their own. The algal or cyanobacterial symbionts in lichens, such as Trentepohlia, can generally live independently, and their part of the relationship is therefore described as facultative (optional), or non-obligate. When one of the participants in a symbiotic relationship is capable of photosynthesis, as with lichens, it is called photosymbiosis. Ectosymbiosis: Ectosymbiosis is any symbiotic relationship in which the symbiont lives on the body surface of the host, including the inner surface of the digestive tract or the ducts of exocrine glands. Examples of this include ectoparasites such as lice; commensal ectosymbionts such as the barnacles, which attach themselves to the jaw of baleen whales; and mutualist ectosymbionts such as cleaner fish. Competition: Competition can be defined as an interaction between organisms or species, in which the fitness of one is lowered by the presence of another. Limited supply of at least one resource (such as food, water, and territory) used by both usually facilitates this type of interaction, although the competition may also exist over other 'amenities', such as females for reproduction (in the case of male organisms of the same species). 
Mutualism: Mutualism or interspecies reciprocal altruism is a long-term relationship between individuals of different species where both individuals benefit. Mutualistic relationships may be either obligate for both species, obligate for one but facultative for the other, or facultative for both. Mutualism: Many herbivores have mutualistic gut flora to help them digest plant matter, which is more difficult to digest than animal prey. This gut flora comprises cellulose-digesting protozoans or bacteria living in the herbivores' intestines. Coral reefs result from mutualism between coral organisms and various algae living inside them. Most land plants and land ecosystems rely on mutualism between the plants, which fix carbon from the air, and mycorrhyzal fungi, which help in extracting water and minerals from the ground.An example of mutualism is the relationship between the ocellaris clownfish that dwell among the tentacles of Ritteri sea anemones. The territorial fish protects the anemone from anemone-eating fish, and in turn, the anemone stinging tentacles protect the clownfish from its predators. A special mucus on the clownfish protects it from the stinging tentacles.A further example is the goby, a fish which sometimes lives together with a shrimp. The shrimp digs and cleans up a burrow in the sand in which both the shrimp and the goby fish live. The shrimp is almost blind, leaving it vulnerable to predators when outside its burrow. In case of danger, the goby touches the shrimp with its tail to warn it. When that happens both the shrimp and goby quickly retreat into the burrow. Different species of gobies (Elacatinus spp.) also clean up ectoparasites in other fish, possibly another kind of mutualism.A spectacular example of obligate mutualism is the relationship between the siboglinid tube worms and symbiotic bacteria that live at hydrothermal vents and cold seeps. The worm has no digestive tract and is wholly reliant on its internal symbionts for nutrition. The bacteria oxidize either hydrogen sulfide or methane, which the host supplies to them. These worms were discovered in the late 1980s at the hydrothermal vents near the Galapagos Islands and have since been found at deep-sea hydrothermal vents and cold seeps in all of the world's oceans.Mutualism improves both organism's competitive ability and will outcompete organisms of the same species that lack the symbiont.A facultative symbiosis is seen in encrusting bryozoans and hermit crabs. The bryozoan colony (Acanthodesia commensale) develops a cirumrotatory growth and offers the crab (Pseudopagurus granulimanus) a helicospiral-tubular extension of its living chamber that initially was situated within a gastropod shell.Many types of tropical and sub-tropical ants have evolved very complex relationships with certain tree species. Endosymbiosis: Endosymbiosis is any symbiotic relationship in which one symbiont lives within the tissues of the other, either within the cells or extracellularly. Examples include diverse microbiomes: rhizobia, nitrogen-fixing bacteria that live in root nodules on legume roots; actinomycetes, nitrogen-fixing bacteria such as Frankia, which live in alder root nodules; single-celled algae inside reef-building corals; and bacterial endosymbionts that provide essential nutrients to about 10%–15% of insects.In endosymbiosis, the host cell lacks some of the nutrients which the endosymbiont provides. As a result, the host favors endosymbiont's growth processes within itself by producing some specialized cells. 
These cells affect the genetic composition of the host in order to regulate the increasing population of the endosymbionts and ensure that these genetic changes are passed onto the offspring via vertical transmission (heredity).As the endosymbiont adapts to the host's lifestyle, the endosymbiont changes dramatically. There is a drastic reduction in its genome size, as many genes are lost during the process of metabolism, and DNA repair and recombination, while important genes participating in the DNA-to-RNA transcription, protein translation and DNA/RNA replication are retained. The decrease in genome size is due to loss of protein coding genes and not due to lessening of inter-genic regions or open reading frame (ORF) size. Species that are naturally evolving and contain reduced sizes of genes can be accounted for an increased number of noticeable differences between them, thereby leading to changes in their evolutionary rates. When endosymbiotic bacteria related with insects are passed on to the offspring strictly via vertical genetic transmission, intracellular bacteria go across many hurdles during the process, resulting in the decrease in effective population sizes, as compared to the free-living bacteria. The incapability of the endosymbiotic bacteria to reinstate their wild type phenotype via a recombination process is called Muller's ratchet phenomenon. Muller's ratchet phenomenon, together with less effective population sizes, leads to an accretion of deleterious mutations in the non-essential genes of the intracellular bacteria. This can be due to lack of selection mechanisms prevailing in the relatively "rich" host environment. Commensalism: Commensalism describes a relationship between two living organisms where one benefits and the other is not significantly harmed or helped. It is derived from the English word commensal, used of human social interaction. It derives from a medieval Latin word meaning sharing food, formed from com- (with) and mensa (table).Commensal relationships may involve one organism using another for transportation (phoresy) or for housing (inquilinism), or it may also involve one organism using something another created, after its death (metabiosis). Examples of metabiosis are hermit crabs using gastropod shells to protect their bodies, and spiders building their webs on plants. Parasitism: In a parasitic relationship, the parasite benefits while the host is harmed. Parasitism takes many forms, from endoparasites that live within the host's body to ectoparasites and parasitic castrators that live on its surface and micropredators like mosquitoes that visit intermittently. Parasitism is an extremely successful mode of life; about 40% of all animal species are parasites, and the average mammal species is host to 4 nematodes, 2 cestodes, and 2 trematodes. Mimicry: Mimicry is a form of symbiosis in which a species adopts distinct characteristics of another species to alter its relationship dynamic with the species being mimicked, to its own advantage. Among the many types of mimicry are Batesian and Müllerian, the first involving one-sided exploitation, the second providing mutual benefit. Batesian mimicry is an exploitative three-party interaction where one species, the mimic, has evolved to mimic another, the model, to deceive a third, the dupe. In terms of signalling theory, the mimic and model have evolved to send a signal; the dupe has evolved to receive it from the model. 
This is to the advantage of the mimic but to the detriment of both the model, whose protective signals are effectively weakened, and of the dupe, which is deprived of an edible prey. For example, a wasp is a strongly-defended model, which signals with its conspicuous black and yellow coloration that it is an unprofitable prey to predators such as birds which hunt by sight; many hoverflies are Batesian mimics of wasps, and any bird that avoids these hoverflies is a dupe. In contrast, Müllerian mimicry is mutually beneficial as all participants are both models and mimics. For example, different species of bumblebee mimic each other, with similar warning coloration in combinations of black, white, red, and yellow, and all of them benefit from the relationship. Amensalism: Amensalism is a non-symbiotic, asymmetric interaction where one species is harmed or killed by the other, and one is unaffected by the other. There are two types of amensalism, competition and antagonism (or antibiosis). Competition is where a larger or stronger organism deprives a smaller or weaker one of a resource. Antagonism occurs when one organism is damaged or killed by another through a chemical secretion. An example of competition is a sapling growing under the shadow of a mature tree. The mature tree can rob the sapling of necessary sunlight and, if the mature tree is very large, it can take up rainwater and deplete soil nutrients. Throughout the process, the mature tree is unaffected by the sapling. Indeed, if the sapling dies, the mature tree gains nutrients from the decaying sapling. An example of antagonism is Juglans nigra (black walnut), secreting juglone, a substance which destroys many herbaceous plants within its root zone.The term amensalism is often used to describe strongly asymmetrical competitive interactions, such as between the Spanish ibex and weevils of the genus Timarcha which feed upon the same type of shrub. Whilst the presence of the weevil has almost no influence on food availability, the presence of ibex has an enormous detrimental effect on weevil numbers, as they consume significant quantities of plant matter and incidentally ingest the weevils upon it. Cleaning symbiosis: Cleaning symbiosis is an association between individuals of two species, where one (the cleaner) removes and eats parasites and other materials from the surface of the other (the client). It is putatively mutually beneficial, but biologists have long debated whether it is mutual selfishness, or simply exploitative. Cleaning symbiosis is well known among marine fish, where some small species of cleaner fish – notably wrasses, but also species in other genera – are specialized to feed almost exclusively by cleaning larger fish and other marine animals. In a supreme situation, the host species (fish or marine life) will display itself at a designated station deemed the "cleaning station".Cleaner fish play an essential role in the reduction of parasitism on marine animals. Some shark species participate in cleaning symbiosis, where cleaner fish remove ectoparasites from the body of the shark. A study by Raymond Keyes addresses the atypical behavior of a few shark species when exposed to cleaner fish. In this experiment, cleaner wrasse (Labroides dimidiatus) and various shark species were placed in a tank together and observed. The different shark species exhibited different responses and behaviors around the wrasse. For example, Atlantic and Pacific lemon sharks consistently react to the wrasse fish in a fascinating way. 
During the interaction, the shark remains passive and the wrasse swims to it. It begins to scan the shark's body, sometimes stopping to inspect specific areas. Commonly, the wrasse would inspect the gills, labial regions, and skin. When the wrasse makes its way to the mouth of the shark, the shark often ceases breathing for up to two and a half minutes so that the fish is able to scan the mouth. Then, the fish passes further into the mouth to examine the gills, specifically the buccopharyngeal area, which typically holds the most parasites. When the shark begins to close its mouth, the wrasse finishes its examination and goes elsewhere. Male bull sharks exhibit slightly different behavior at cleaning stations: as the shark swims into a colony of wrasse fish, it drastically slows its speed to allow the cleaners to do their job. After approximately one minute, the shark returns to normal swimming speed. Co-evolution and hologenome theory: Symbiosis is increasingly recognized as an important selective force behind evolution; many species have a long history of interdependent co-evolution.Although symbiosis was once discounted as an anecdotal evolutionary phenomenon, evidence is now overwhelming that obligate or facultative associations among microorganisms and between microorganisms and multicellular hosts had crucial consequences in many landmark events in evolution and in the generation of phenotypic diversity and complex phenotypes able to colonise new environments. Co-evolution and hologenome theory: Hologenome development and evolution Evolution originated from changes in development where variations within species are selected for or against because of the symbionts involved. The hologenome theory relates to the holobiont and symbionts genome together as a whole. Microbes live everywhere in and on every multicellular organism. Many organisms rely on their symbionts in order to develop properly, this is known as co-development. In cases of co-development the symbionts send signals to their host which determine developmental processes. Co-development is commonly seen in both arthropods and vertebrates. Co-evolution and hologenome theory: Symbiogenesis One hypothesis for the origin of the nucleus in eukaryotes (plants, animals, fungi, and protists) is that it developed from a symbiogenesis between bacteria and archaea. It is hypothesized that the symbiosis originated when ancient archaea, similar to modern methanogenic archaea, invaded and lived within bacteria similar to modern myxobacteria, eventually forming the early nucleus. This theory is analogous to the accepted theory for the origin of eukaryotic mitochondria and chloroplasts, which are thought to have developed from a similar endosymbiotic relationship between proto-eukaryotes and aerobic bacteria. Evidence for this includes the fact that mitochondria and chloroplasts divide independently of the cell, and that these organelles have their own genome.The biologist Lynn Margulis, famous for her work on endosymbiosis, contended that symbiosis is a major driving force behind evolution. She considered Darwin's notion of evolution, driven by competition, to be incomplete and claimed that evolution is strongly based on co-operation, interaction, and mutual dependence among organisms. According to Margulis and her son Dorion Sagan, "Life did not take over the globe by combat, but by networking." 
Co-evolutionary relationships Mycorrhizas About 80% of vascular plants worldwide form symbiotic relationships with fungi, in particular in arbuscular mycorrhizas. Co-evolution and hologenome theory: Pollination Flowering plants and the animals that pollinate them have co-evolved. Many plants that are pollinated by insects (in entomophily), bats, or birds (in ornithophily) have highly specialized flowers modified to promote pollination by a specific pollinator that is correspondingly adapted. The first flowering plants in the fossil record had relatively simple flowers. Adaptive speciation quickly gave rise to many diverse groups of plants, and, at the same time, corresponding speciation occurred in certain insect groups. Some groups of plants developed nectar and large sticky pollen, while insects evolved more specialized morphologies to access and collect these rich food sources. In some taxa of plants and insects, the relationship has become dependent, where the plant species can only be pollinated by one species of insect. Co-evolution and hologenome theory: Bats and rodents Bats and rodents are two groups of mammals with a widespread presence that are commonly recognized as reservoirs for numerous infectious diseases. Bats have been specifically linked to highly consequential viruses such as Ebola, coronavirus, Henipaviruses, lyssaviruses and many more. Rodents have been linked to arenaviruses, hantaviruses and other zoonoses. Because bats and rodents are considered to be ancient orders of mammals, the diversification of these species has permitted coevolution between them and their pathogens. Co-evolution and hologenome theory: Bartonella is known to infect a vast range of mammals. The infection gains access to erythrocytes and epithelial cells and can present asymptomatically in a range of mammals. This is a distinct host–vector–pathogen relationship. Because the impact of Bartonella disease on mammals is so intricate, it commonly goes unrecognized as a tropical disease. Research has attributed the expansion of this disease within the U.S. and Nigeria to rodents. Co-evolution and hologenome theory: Leptospira has been recognized as a pathogen affecting developing as well as developed countries. The bacteria are able to colonize the kidneys of hosts and are excreted in the mammal's urine. A rodent was the first to be identified as a carrier. Recently, rodents have carried the pathogen in Australia, Peru, Brazil, and even Madagascar. This pathogen is not vector-borne; transfer therefore occurs through water sources and areas heavily polluted with animal urine. Co-evolution and hologenome theory: Acacia ants and acacias The acacia ant (Pseudomyrmex ferruginea) is an obligate plant ant that protects at least five species of "Acacia" (Vachellia) from preying insects and from other plants competing for sunlight, and the tree provides nourishment and shelter for the ant and its larvae. Co-evolution and hologenome theory: Seed dispersal Seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their propagules, including both abiotic vectors such as the wind and living (biotic) vectors like birds. In order to attract animals, these plants evolved a set of morphological characters such as fruit colour, mass, and persistence correlated with particular seed dispersal agents. 
For example, plants may evolve conspicuous fruit colours to attract avian frugivores, and birds may learn to associate such colours with a food resource.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dead cat bounce** Dead cat bounce: In finance, a dead cat bounce is a small, brief recovery in the price of a declining stock. Derived from the idea that "even a dead cat will bounce if it falls from a great height", the phrase is also popularly applied to any case where a subject experiences a brief resurgence during or following a severe decline. This may also be known as a "sucker rally". History: The earliest citation of the phrase in the news media dates to December 1985 when the Singaporean and Malaysian stock markets bounced back after a hard fall during the recession of that year. Journalists Chris Sherwell and Wong Sulong of the London-based Financial Times were quoted as saying the market rise was "what we call a dead cat bounce". Both the Singaporean and Malaysian economies continued to fall after the quote, although both economies recovered in the following years. History: The phrase was used again the following year about falling oil prices. In the San Jose Mercury News, Raymond F. DeVoe Jr. proposed that "Beware the Dead Cat Bounce" be printed on bumper stickers and followed up with a graphic explanation. This quote was referenced throughout the 1990s and became widely used in the 2000s. History: "This applies to stocks or commodities that have gone into free-fall descent and then rallied briefly. If you threw a dead cat off a 50-story building, it might bounce when it hit the sidewalk. But don't confuse that bounce with renewed life. It is still a dead cat. The spot oil price has recovered from under $10 a barrel to over $13 -- but that also should not be confused with renewed life."The phrase is also used in political circles for a candidate or policy that shows a small positive bounce in approval after a hard and fast decline. Variations and usage: The standard usage of the term refers to a short rise in the price of a stock that has suffered a fall. In other instances, the term is used exclusively to refer to securities or stocks that are considered to be of low value. First, the securities have poor past performance. Second, the decline is "correct" in that the underlying business is weak (e.g. declining sales or shaky financials). Along with this, it is doubtful that the security will recover with better conditions (overall market or economy). Variations and usage: Some variations on the definition of the term include: A stock in a severe decline has a sharp bounce off the lows. A small upward price movement in a bear market after which the market continues to fall. Variations and usage: Technical analysis A "dead cat bounce" price pattern may be used as a part of the technical analysis method of stock trading. Technical analysis describes a dead cat bounce as a continuation pattern in which a reversal of the current decline occurs followed by a significant price recovery. The price fails to continue upward and instead falls again downwards, often surpassing the previous low. This phenomenon can be difficult to identify at the time of occurrence, and like market peaks and troughs, it is usually only with hindsight that the pattern is able to be recognised. Behavioural finance perspective: The phenomenon known as a dead cat bounce can be illustrated partly by the irrational and emotional behaviour of the market participants or traders. 
According to this view, investors are not always rational or objective in their timing of the market; instead, they are influenced by cognitive biases and emotions that lead to herd behaviour, overreaction and underreaction. A dead cat bounce might prey on some of the following investor biases: 1. Anchoring Anchoring results from relying too much on a fixed reference point, such as a previous high or low, rather than adjusting expectations based on updated information such as macro-environmental factors. Some investors might take this reference point to mean that the stock or commodity is over- or undervalued after a sharp decline and hope for a quick rebound in the form of a V-bottom. 2. Confirmation Bias Confirmation bias is the tendency to favour and interpret information that confirms an individual's pre-existing beliefs or hypotheses while ignoring or dismissing significant contradictory evidence. This leads investors to selectively choose which information to believe as long as it supports their optimistic outlook for the chosen stock, downplaying crucial negative information as rumour or unimportant. 3. Overconfidence Overconfidence results from overestimating one's ability or knowledge of a particular stock or macro-environment and underestimating the associated uncertainty and risks. It can lead investors to not diversify, to take on too much risk, or to refuse to sell stocks at a stop-loss level. These biases, either by themselves or in combination, can create a feedback loop in which market volatility and unpredictability are amplified as ever larger numbers of investors react to the same signals and sentiments. The dead cat bounce is a prime example of a rebound fuelled by traders and speculators who bet on their optimistic views rather than the intrinsic or actual value of the stock. This creates a false sense of recovery as the stock begins to rally, and the subsequent drop in value reflects the actual supply and demand dynamics of the stock. To counteract these biases, investors can adopt a disciplined, evidence-based approach to investing, look at a longer time frame, or diversify through an index fund. These methods rely more on an objective set of criteria, such as the price-to-earnings ratio, the dividend yield, or the market capitalisation, to assess the relative attractiveness of a stock or market rather than trading based on emotion. Furthermore, they emphasise the importance of diversification, patience, and a longer-term perspective, which can reduce the impact of short-term fluctuations and noise. By understanding the psychological traps that can lead to a dead cat bounce, investors can improve their chances of achieving their financial goals and avoid costly errors. Causes: A dead cat bounce usually results from one or more of the following effects, often acting in combination. Causes: 1. Technical factors Technical factors, such as large short positions held by many market participants, can help in the formation of a dead cat bounce. As many investors buy back shares to close a short position, the stock receives a temporary increase in demand, driving up its price.
Additionally, if the stock has an established history of a stable, continuous and periodic fluctuation between a support price and a resistance price, a low stock price may result in investors buying shares under the belief that the stock price is still following the same trend. This has the same effect as short covering, in which the price receives a short-lived boost. Causes: 2. News and Events Positive news relevant to the stock can provide a temporary boost to investor sentiment. This can be the result of a partnership agreement or a new product. In turn, this provides a short-lived boost to demand for the stock even though the underlying cause of the decline in price has not changed. Causes: 3. Market Sentiment Market sentiment refers to the attitude of investors towards a market. Should many stocks within the same financial market show a positive trend, the entire market may be favored by investors. A "bullish" market is one that is predicted to undergo a positive price movement. Should an entire market experience an increase in demand, even stocks with a falling price can benefit. 4. Market Manipulation Market manipulators may use tactics to temporarily inflate prices for personal gain. Tactics such as spreading false rumors or engaging in a pump-and-dump scheme result in a short-lived increase in price. 5. Irrational Exuberance Overly optimistic investors may become inclined to purchase shares and bid up the price beyond its true value. 6. Technical glitches Errors within trading platform algorithms may cause a temporary increase in the price of an asset. These glitches can stem from an assortment of factors that may result in the unintentional purchase of shares.
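The continuation pattern described under Technical analysis above (a sharp decline, a brief recovery, then a fall that breaks the prior low) can be screened for mechanically. The sketch below is a minimal, hypothetical illustration of that idea applied to a series of closing prices; the drop and bounce thresholds are illustrative assumptions, not part of any standard definition of the pattern.

```python
def find_dead_cat_bounces(closes, drop_pct=0.20, bounce_pct=0.05):
    """Return the indexes at which a dead-cat-bounce pattern completes:
    a decline of at least drop_pct from the running peak, a rally of at
    least bounce_pct off the low, then a close back below that low.
    Thresholds are illustrative assumptions, not standard parameters."""
    events = []
    peak = low = closes[0]
    bounced = False
    for i, price in enumerate(closes[1:], start=1):
        if bounced and price < low:
            events.append(i)          # the rally failed: prior low breached
            bounced, peak, low = False, price, price
            continue
        peak = max(peak, price)
        if price < low:
            low = price               # the decline continues
        elif (low <= peak * (1 - drop_pct)          # decline was big enough
              and price >= low * (1 + bounce_pct)): # and a bounce has begun
            bounced = True
    return events

# Example: a fall from 100 to 70, a brief rally to 76, then a drop to 68
# (below the earlier low of 70) flags index 5 as a completed pattern.
print(find_dead_cat_bounces([100, 80, 70, 76, 73, 68, 65]))   # -> [5]
```

As with the hindsight problem noted earlier, a rule of this kind only labels the pattern after the prior low has already been breached.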
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Deadly Rooms of Death** Deadly Rooms of Death: Deadly Rooms of Death (DROD) is a computer puzzle game. It was created by Erik Hermansen in 1996 and has been regularly extended since then. The original version of the game published by Webfoot Technologies is no longer available. In 2000 the author reacquired the rights to DROD from Webfoot and released the source code; he continues the support and development as "Caravel DROD". Plot: King Dugan has a problem. He let his guards eat their meals down in the dungeon, and they spread crumbs all over the place, so suddenly his lovely dungeons are swarming with cockroaches, not to mention goblins, serpents, evil eyes, and other nasty things. It's really gotten out of hand. Beethro Budkin, dungeon exterminator extraordinaire and the main protagonist, is called to the castle and, after a short briefing by Dugan, thrown into the dungeon with the doors locked securely after him. With only a "Really Big Sword™" at his disposal, it's up to our hero to clear the place, so that the prisoners can receive their torture in a clean and safe environment. Gameplay: The game is entirely tile-based and takes place on a 38×32 rectangular grid. Most monsters and objects take up a single tile, though some monsters (such as serpents) take up multiple connected tiles. Each room is a separate puzzle, and to solve it the player must defeat all the monsters in the room and exit it. The player controls the movement of Beethro Budkin, a dungeon exterminator equipped with a "Really Big Sword". In the fictional world where the game takes place (the Eighth), his job as a Smitemaster is to clear dungeons of invading monsters. Most gameplay stems from, or elaborates on, this concept. Gameplay: Since the game is also turn-based, monsters or objects will only move once per turn. Each type of monster has a different algorithm for its movement, depending on its location relative to the player. As a result, Deadly Rooms of Death requires logical problem-solving rather than reflexes. Each turn, the player can wait, move Beethro into any of the eight bordering squares to his current one (if not already occupied), or rotate his sword 45 degrees. Some rooms simply require finding a sequence of moves that allows Beethro to defeat all monsters without being killed; other rooms require solving more complex puzzles, thanks to game elements such as orbs that open and close doors, trapdoors that fall after being stepped on, and so forth. History: Original The game was developed by Erik Hermansen in the end 1990s. In 1996, the game was commercially released by Webfoot Technologies as version 1.03 of the game. The release was followed shortly after with versions 1.04 and 1.11 to fix some bugs with unsolvable rooms and levels. This early version is commonly known as Webfoot DROD. As the game was commercially unsuccessful, the publisher stopped distributing the game around 1999. History: Remakes In 2000, the original author of the game got permission from Webfoot to open-source the game and he released the source code under the Mozilla Public License 1.1. With the help of several volunteers, he recreated the game from scratch, rewriting the entire game engine and creating improved graphics and new music for it. The main game screen, however, remained mostly the same as the original Webfoot version. 
This version, version 1.5, is commonly known as Caravel DROD, and was first released in late October 2002.Version 1.6, also called DROD: Architects' Edition, included improvements to some of the graphics, but most importantly a level editor, and was released in 2003. Community-designed rooms and levels are grouped together in packages called "holds", and extend the gameplay beyond the community-imposed challenges of previous versions.A commercial remake of the original DROD game was also released under this engine, named DROD: King Dugan's Dungeon. Several commercial add-on holds have also been released for this engine as 'Smitemaster's Selections', such as "The Choice" (2005), "Perfection" (2005), "Halph Stories" (2005), "Beethro's Teacher" (2006), "Journeys End" (2006), "Devilishly Dangerous Dungeons of Doom" (2008), "Smitemaster for Hire" (2009), "Truthlock Method" (2011), "Flood Warning" (2012), "Treacle Stew" (2022).An Adobe Flash version consisting of an updated version of King Dugan's Dungeon was released in June 2012. Currently, there are 5 "Episodes". History: Sequels The second game in the series, DROD: Journey to Rooted Hold, was released on April 1, 2005 for Windows, Linux, and Mac. It follows Beethro, whose nephew suddenly ran off. While searching for Halph, Beethro ended up under the world's surface, being chased by a fiend. Everything is viewed from a top-down perspective, and the player is able to see the monsters and other objects in each room visited. Each level is designed so that there is only one way in and out. The movement is turn-based, meaning that each time a button is pressed, Beethro will move by one step in the desired direction. As soon as an action is done, the nearby foes will make their move as well. Since there are no hit points, any contact with enemies or traps will result in a life loss. Also called DROD 2.0, the game includes many new additions and improvements, such as an expanded plot complete with in-game dialogue, higher resolution graphics; better user interfaces in both the editor and in game; new monsters and puzzle elements; additional customizability for holds, scripting system and connectivity to an online DROD database.The third game in the series, DROD: The City Beneath, or DROD 3.0, was released in April 2007. It includes all the features of Journey to Rooted Hold, plus a complete new official hold with in-game dialog, three new design styles, and further enhanced customizability and networking. Cut scene support, a ray-traced lighting system and variables that allow non-linear plot progression are the most prominent new features of DROD:TCB. History: As an extra game, DROD RPG was released on September 12, 2008. Created by Mike Rimer, DROD RPG is a DROD game coupled with several RPG elements including hitpoints, equipment, and the ability to change weapons. The game takes the DROD franchise in a new direction and features a new character, Tendry, a member of the stalwart army, who tries to find his way to the surface world. Some of the older puzzle elements were changed to reflect the RPG style, including keys to open doors that used to open under other conditions. History: A fifth game named DROD 4: Gunthro and the Epic Blunder was released April 1, 2012 - it included a couple of new game elements, and three new graphical and music styles, as well as the inclusion of some more scripting capabilities. DROD 4 is a prequel to the original story, and, although it was released after the others, is set before the "Journey to Rooted Hold". 
History: A sixth game, titled DROD 5: The Second Sky, was released on June 21, 2014. This is the epic conclusion to the story of Beethro. It features new weapon types, overworld maps, and additional scripting, sound and voice support. Critical reception: DROD has the highest rating amongst puzzle games listed at Home of the Underdogs, and was recommended by Ed Pegg Jr. of the Mathematical Association of America and Tony Delgado of GameSetWatch.
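The tile- and turn-based movement model described in the Gameplay section above lends itself to a compact illustration. The following sketch is not taken from the DROD source code; the 38×32 room size, the eight-direction moves and the 45-degree sword rotation follow the description in this article, while everything else (names, data layout) is a hypothetical simplification.

```python
# A toy model of a single DROD-style turn: the player waits, steps to one of
# the eight neighbouring tiles, or rotates the sword 45 degrees; monsters
# would then each take their own move (their pathing rules are omitted here).
GRID_W, GRID_H = 38, 32            # room dimensions given in the article

MOVES = {                          # eight compass directions plus "wait"
    "N": (0, -1), "NE": (1, -1), "E": (1, 0), "SE": (1, 1),
    "S": (0, 1),  "SW": (-1, 1), "W": (-1, 0), "NW": (-1, -1),
    "wait": (0, 0),
}

def take_turn(player, command, occupied):
    """Advance the player's state by one turn.

    player   -- dict with 'x', 'y' and 'facing' (0-7, in 45-degree steps)
    command  -- a key of MOVES, or 'cw'/'ccw' to rotate the sword
    occupied -- set of (x, y) tiles the player may not enter
    """
    if command in ("cw", "ccw"):
        player["facing"] = (player["facing"] + (1 if command == "cw" else -1)) % 8
    else:
        dx, dy = MOVES[command]
        nx, ny = player["x"] + dx, player["y"] + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H and (nx, ny) not in occupied:
            player["x"], player["y"] = nx, ny      # move only if the tile is free
    return player

# Example: step one tile south-east, then rotate the sword clockwise.
beethro = {"x": 5, "y": 5, "facing": 0}
take_turn(beethro, "SE", occupied=set())
take_turn(beethro, "cw", occupied=set())
print(beethro)                      # -> {'x': 6, 'y': 6, 'facing': 1}
```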
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dentition analysis** Dentition analysis: Dentition analyses are systems of tooth and jaw measurement used in orthodontics to understand arch space and predict any malocclusion (mal-alignment of the teeth and the bite). Example systems of dentition analysis are listed below. Permanent dentition (adult teeth) analysis: Maxillary dentition (upper teeth): Pont's Analysis, Linder Harth Index, Korkhaus Analysis, Arch Perimeter Analysis. Mandibular dentition (lower teeth): Ashley Howe's Analysis, Carey's Analysis. Both Arches (upper and lower teeth): Bolton Analysis. Mixed dentition analysis: Moyer's Mixed Dentition Analysis, Tanaka and Johnston Analysis, Radiographic Analysis, Ballard and Willie Analysis, Huckaba's Analysis, Staley Kerber Analysis, Hixon and Old Father Analysis, Tweed's analysis (cast + cephalometric), Total space analysis (cast + cephalometric + soft tissue). Dental arch analysis: Intermolar Width - the distance between the mesiobuccal cusp tips of the first permanent molars. Intercanine Width - the distance between the cusp tips of the canines. Arch Length - the distance from a line connecting the mesiobuccal cusp tips of the first permanent molars, measured perpendicular to that line, to the midpoint between the mesioincisal points of the central incisors. Arch Perimeter - the distance from the mesial contact of a permanent molar on one side to the mesial contact of the permanent molar on the other side, measured along a line connecting the buccal cusp/incisal tip points of the intervening teeth.
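Of the systems listed above, the Bolton analysis reduces to a straightforward ratio of summed mesiodistal tooth widths between the two arches. The sketch below assumes the commonly cited form of that analysis (an overall ratio over twelve teeth per arch and an anterior ratio over the six anterior teeth); the reference means of roughly 91.3% and 77.2% are approximate values from the orthodontic literature, not figures taken from this article, and the function and variable names are illustrative only.

```python
def bolton_ratios(mandibular_widths, maxillary_widths):
    """Bolton analysis sketch: mesiodistal tooth widths (mm) are listed from
    the right first molar to the left first molar, 12 teeth per arch.
    Returns the overall and anterior ratios as percentages."""
    if len(mandibular_widths) != 12 or len(maxillary_widths) != 12:
        raise ValueError("expected 12 tooth widths per arch")
    overall = 100.0 * sum(mandibular_widths) / sum(maxillary_widths)
    # anterior segment, canine to canine: the middle six entries of each list
    anterior = 100.0 * sum(mandibular_widths[3:9]) / sum(maxillary_widths[3:9])
    return {"overall": round(overall, 1), "anterior": round(anterior, 1)}

# Ratios well above the approximate means (~91.3% overall, ~77.2% anterior)
# suggest mandibular tooth-size excess; ratios well below suggest maxillary excess.
```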
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vapor-compression evaporation** Vapor-compression evaporation: Vapor-compression evaporation is the evaporation method by which a blower, compressor or jet ejector is used to compress the vapor produced and thus increase its pressure. Since the pressure increase of the vapor also generates an increase in the condensation temperature, the same vapor can serve as the heating medium for its "mother" liquid or solution being concentrated, from which the vapor was generated to begin with. If no compression were provided, the vapor would be at the same temperature as the boiling liquid/solution, and no heat transfer could take place. Vapor-compression evaporation: It is also sometimes called vapor compression distillation (VCD). If compression is performed by a mechanically driven compressor or blower, this evaporation process is usually referred to as MVR (mechanical vapor recompression). When compression is performed by high pressure motive steam ejectors, the process is usually called thermocompression, steam compression or ejectocompression. MVR process: Energy input In this case, the energy input to the system lies in the pumping energy of the compressor. The theoretical energy consumption will be equal to E = Q × (H2 − H1), where E is the total theoretical pumping energy, Q is the mass of vapors passing through the compressor, and H1 and H2 are the total heat content per unit mass of vapors upstream and downstream of the compressor, respectively. In SI units, these are respectively measured in kJ, kg and kJ/kg. MVR process: The actual energy input will be greater than the theoretical value and will depend on the efficiency of the system, which is usually between 30% and 60%. For example, suppose the theoretical energy input is 300 kJ and the efficiency is 30%. The actual energy input would be 300 × 100/30 = 1,000 kJ. In a large unit, the compression power is between 35 and 45 kW per metric ton of compressed vapors. MVR process: Equipment for MVR evaporators The compressor is necessarily the core of the unit. Compressors used for this application are usually of the centrifugal type, or positive displacement units such as the Roots blowers, similar to the (much smaller) Roots type supercharger. Very large units (evaporation capacity 100 metric tons per hour or more) sometimes use axial-flow compressors. The compression work delivers the steam superheated relative to the theoretical pressure/temperature equilibrium. For this reason, the vast majority of MVR units feature a desuperheater between the compressor and the main heat exchanger. Thermocompression: Energy input The energy input is here given by the energy of a quantity of steam (motive steam), at a pressure higher than those of both the inlet and the outlet vapors. Thermocompression: The quantity of compressed vapors is therefore higher than at the inlet: Qd = Qs + Qm, where Qd is the steam quantity at ejector delivery, Qs the quantity at ejector suction and Qm the motive steam quantity. For this reason, a thermocompression evaporator often features a vapor condenser, due to the possible excess of steam necessary for the compression if compared with the steam required to evaporate the solution. Thermocompression: The quantity Qm of motive steam per unit suction quantity is a function of both the motive ratio of motive steam pressure vs. suction pressure and the compression ratio of delivery pressure vs. suction pressure.
In principle, the higher the compression ratio and the lower the motive ratio, the higher the specific motive steam consumption, i.e. the less efficient the energy balance. Thermocompression: Thermocompression equipment The heart of any thermocompression evaporator is clearly the steam ejector, exhaustively described on the relevant page. The size of the other pieces of equipment, such as the main heat exchanger, the vapor head, etc. (see evaporator for details), is governed by the evaporation process. Comparison: These two compression-type evaporators have different fields of application, although they do sometimes overlap. Comparison: An MVR unit will be preferable for a large unit, thanks to the reduced energy consumption. The largest single-body MVR evaporator built (in 1968, by Whiting Co., later Swenson Evaporator Co., of Harvey, Ill., at Cirò Marina, Italy) was a salt crystallizer, evaporating approximately 400 metric tons per hour of water and featuring an axial-flow compressor (Brown Boveri, later ABB). This unit was transformed around 1990 to become the first effect of a multiple effect evaporator. MVR evaporators with 10 tons or more evaporating capacity are common. Comparison: The compression ratio in an MVR unit does not usually exceed 1.8. At a compression ratio of 1.8, if the evaporation is performed at atmospheric pressure (0.101 MPa), the condensation pressure after compression will be 0.101 × 1.8 = 0.1818 MPa. At this pressure, the condensation temperature of the water vapor at the heat exchanger will be about 390 K. Taking into account the boiling point elevation of the salt water we wish to evaporate (8 K for a saturated salt solution), this leaves a temperature difference of less than 8 K at the heat exchanger. A small ∆T leads to slow heat transfer, meaning that a very large heating surface is needed to transfer the required heat. Axial-flow and Roots compressors may reach slightly higher compression ratios.
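The worked figures in the preceding paragraphs (the theoretical pumping energy E = Q × (H2 − H1), the 30% efficiency example, and the 1.8 compression ratio starting from atmospheric pressure) can be reproduced with a short calculation. The sketch below simply restates the article's own numbers; the 390 K condensation temperature is taken as given rather than computed from steam tables, and the 8 K boiling point elevation is the article's figure for a saturated salt solution.

```python
def actual_compressor_energy(q_kg, h1_kj_per_kg, h2_kj_per_kg, efficiency):
    """Theoretical pumping energy E = Q * (H2 - H1), divided by the unit's
    overall efficiency (typically 0.30-0.60 according to the text)."""
    theoretical_kj = q_kg * (h2_kj_per_kg - h1_kj_per_kg)
    return theoretical_kj / efficiency

# Worked example from the text: 300 kJ theoretical at 30% efficiency -> 1,000 kJ.
print(actual_compressor_energy(1.0, 0.0, 300.0, 0.30))          # 1000.0

# MVR compression-ratio example, restating the article's figures.
p_evaporation_mpa = 0.101               # boiling at atmospheric pressure
compression_ratio = 1.8                 # usual upper limit for an MVR unit
p_condensation_mpa = p_evaporation_mpa * compression_ratio       # 0.1818 MPa
t_condensation_k = 390.0                # saturation temperature there (given in the text)
t_brine_k = 373.0 + 8.0                 # boiling brine: ~100 degC plus 8 K elevation
delta_t = t_condensation_k - t_brine_k  # small, single-digit margin at the exchanger
print(round(p_condensation_mpa, 4), round(delta_t, 1))
```

With these rounded figures the available margin is only a few kelvin, in line with the small temperature difference the text describes; the smaller this ∆T, the larger the heating surface an MVR unit needs.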
In a salt crystallizer, for example, a typical analysis of the resulting condensate shows a residual salt content not higher than 50 ppm or, in terms of electrical conductance, not higher than 10 μS/cm. This results in drinkable water, if the other sanitary requirements are fulfilled. While this cannot compete in the marketplace with reverse osmosis or demineralization, vapor compression chiefly differs from these thanks to its ability to make clean water from saturated or even crystallizing brines with total dissolved solids (TDS) up to 650 g/L. The other two technologies can make clean water from sources no higher in TDS than approximately 35 g/L. Some uses: For economic reasons evaporators are seldom operated on low-TDS water sources; those applications are filled by reverse osmosis. The already brackish water which enters a typical evaporator is concentrated further. The increased dissolved solids act to increase the boiling point well beyond that of pure water. Seawater with a TDS of approximately 30 g/L exhibits a boiling point elevation of less than 1 K, but saturated sodium chloride solution at 360 g/L has a boiling point elevation of about 7 K. This boiling point elevation represents a challenge for vapor-compression evaporation in that it increases the pressure ratio that the steam compressor must attain to effect vaporization. Since boiling point elevation determines the pressure ratio in the compressor, it is the main overall factor in operating costs. Some uses: Steam-assisted gravity drainage The technology used today to extract bitumen from the Athabasca oil sands is the water-intensive steam-assisted gravity drainage (SAGD) method. In the late 1990s, former nuclear engineer Bill Heins of General Electric Company's RCC Thermal Products conceived an evaporator technology called falling film or mechanical vapor compression evaporation. In 1999 and 2002, Petro-Canada's MacKay River facility was the first to install GE SAGD zero-liquid discharge (ZLD) systems using a combination of the new evaporative technology and a crystallizer system in which all the water was recycled and only solids were discharged off site. This new evaporative technology began to replace older water treatment techniques employed by SAGD facilities, which involved the use of warm lime softening to remove silica and magnesium and weak acid cation ion exchange to remove calcium. The vapor-compression evaporation process replaced the once-through steam generators (OTSG) traditionally used for steam production. OTSGs generally ran on natural gas, which by 2008 had become increasingly valuable. The water quality from evaporators is four times better, which is needed for the drum boilers. The evaporators, when coupled with standard drum boilers, produce steam which is more "reliable, less costly to operate, and less water-intensive." By 2008 about 85 per cent of SAGD facilities in the Alberta oil sands had adopted evaporative technology. "SAGD, unlike other thermal processes such as cyclic steam stimulation (CSS), requires 100 per cent quality steam."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pipe organ** Pipe organ: The pipe organ is a musical instrument that produces sound by driving pressurised air (called wind) through the organ pipes selected from a keyboard. Because each pipe produces a single pitch, the pipes are provided in sets called ranks, each of which has a common timbre and volume throughout the keyboard compass. Most organs have many ranks of pipes of differing pitch, timbre, and volume that the player can employ singly or in combination through the use of controls called stops. Pipe organ: A pipe organ has one or more keyboards (called manuals) played by the hands, and a pedal clavier played by the feet; each keyboard controls its own division, or group of stops. The keyboard(s), pedalboard, and stops are housed in the organ's console. The organ's continuous supply of wind allows it to sustain notes for as long as the corresponding keys are pressed, unlike the piano and harpsichord whose sound begins to dissipate immediately after a key is depressed. The smallest portable pipe organs may have only one or two dozen pipes and one manual; the largest may have over 33,000 pipes and seven manuals. A list of some of the most notable and largest pipe organs in the world can be viewed at List of pipe organs. A ranking of the largest organs in the world—based on the criterion constructed by Michał Szostak, i.e. 'the number of ranks and additional equipment managed from a single console—can be found in the quarterly magazine The Organ and in the online journal Vox Humana.The origins of the pipe organ can be traced back to the hydraulis in Ancient Greece, in the 3rd century BC, in which the wind supply was created by the weight of displaced water in an airtight container. By the 6th or 7th century AD, bellows were used to supply Byzantine organs with wind. A pipe organ with "great leaden pipes" was sent to the West by the Byzantine emperor Constantine V as a gift to Pepin the Short, King of the Franks, in 757. Pepin's son Charlemagne requested a similar organ for his chapel in Aachen in 812, beginning the pipe organ's establishment in Western European church music. In England, "The first organ of which any detailed record exists was built in Winchester Cathedral in the 10th century. It was a huge machine with 400 pipes, which needed two men to play it and 70 men to blow it, and its sound could be heard throughout the city." Beginning in the 12th century, the organ began to evolve into a complex instrument capable of producing different timbres. By the 17th century, most of the sounds available on the modern classical organ had been developed. From that time, the pipe organ was the most complex man-made device—a distinction it retained until it was displaced by the telephone exchange in the late 19th century.Pipe organs are installed in churches, synagogues, concert halls, schools, other public buildings and in private properties. They are used in the performance of classical music, sacred music, secular music, and popular music. In the early 20th century, pipe organs were installed in theaters to accompany the screening of films during the silent movie era; in municipal auditoria, where orchestral transcriptions were popular; and in the homes of the wealthy. The beginning of the 21st century has seen a resurgence in installations in concert halls. The organ boasts a substantial repertoire, which spans over 500 years. 
History and development: Antiquity The organ is one of the oldest instruments still used in European classical music, and it has commonly been credited as having derived from Greece. Its earliest predecessors were built in ancient Greece in the 3rd century BC. The word organ is derived from the Ancient Greek ὄργανον (órganon), a generic term for an instrument or a tool, via the Latin organum, an instrument similar to a portative organ used in ancient Roman circus games. History and development: The Greek engineer Ctesibius of Alexandria is credited with inventing the organ in the 3rd century BC. He devised an instrument called the hydraulis, which delivered a wind supply maintained through water pressure to a set of pipes. The hydraulis was played in the arenas of the Roman Empire. The pumps and water regulators of the hydraulis were replaced by an inflated leather bag in the 2nd century AD, and true bellows began to appear in the Eastern Roman Empire in the 6th or 7th century AD. Some 400 pieces of a hydraulis from the year 228 AD, used as a musical instrument by the Aquincum fire brigade, were revealed during the 1931 archaeological excavations in the former Roman town of Aquincum, in the province of Pannonia (present-day Budapest); a modern replica produces an enjoyable sound. History and development: The 9th-century Persian geographer Ibn Khurradadhbih (d. 913), in his lexicographical discussion of instruments, cited the urghun (organ) as one of the typical instruments of the Eastern Roman (Byzantine) Empire. It was often used in the Hippodrome in the imperial capital of Constantinople. A Syrian visitor describes a pipe organ powered by two servants pumping "bellows like a blacksmith's" as being played while guests ate at the emperor's Christmas dinner in Constantinople in 911. The first Western European pipe organ with "great leaden pipes" was sent from Constantinople to the West by the Byzantine emperor Constantine V as a gift to Pepin the Short, King of the Franks, in 757. Pepin's son Charlemagne requested a similar organ for his chapel in Aachen in 812, beginning its establishment in Western European church music. History and development: Medieval From 800 to the 1400s, the use and construction of organs developed in significant ways, from the invention of the portative and positive organs to the installation of larger organs in major churches such as the cathedrals of Winchester and Notre Dame of Paris. In this period, organs began to be used in secular and religious settings. The timing of the organ's introduction into religious settings is unclear, most likely because the original position of the Church was that instrumental music was not to be allowed. However, by the twelfth century there is evidence for permanently installed organs existing in religious settings such as the Abbey of Fécamp and other locations throughout Europe. History and development: Several innovations were made to organs in the Middle Ages, such as the creation of the portative and the positive organ. The portative organs were small, created for secular use, and made of lightweight, delicate materials that would have been easy for one individual to transport and play on their own. The portative organ was a "flue-piped keyboard instrument, played with one hand while the other operated the bellows." Its portability made the portative useful for the accompaniment of both sacred and secular music in a variety of settings.
The positive organ was larger than the portative organ but was still small enough to be portable and used in a variety of settings like the portative organ. Towards the middle of the 13th century, the portatives represented in the miniatures of illuminated manuscripts appear to have real keyboards with balanced keys, as in the Cantigas de Santa Maria. It is difficult to directly determine when larger organs began to be installed in Europe; however, one of the first eyewitness accounts of organs is from Wulfstan of Winchester. This detailed account gives us an idea of what organs were like prior to the thirteenth century, when there are more records of large organs being placed in churches as well as their uses. In his account, he describes the sound of the organ: "among them bells outstanding in tone and size, and an organ [sounding] through bronze pipes prepared according to the musical proportions." This is one of the earliest accounts of organs in Europe and also indicates that the organ was large and more permanent than other evidence would suggest. The first organ documented to have been permanently installed was one installed in 1361 in Halberstadt, Germany. The first documented permanent organ installation likely prompted Guillaume de Machaut to describe the organ as "the king of instruments", a characterization still frequently applied. The Halberstadt organ was the first instrument to use a chromatic key layout across its three manuals and pedalboard, although the keys were wider than on modern instruments. It had twenty bellows operated by ten men, and the wind pressure was so high that the player had to use the full strength of their arm to hold down a key. Records of other organs permanently installed and used in worship services in the late thirteenth and fourteenth centuries are found in large cathedrals such as Notre Dame, where documents from the 1300s record organists being hired to work for the church, as well as the installation of larger, permanent organs. The earliest record is a payment from 1332 from the clergy of Notre Dame to an organist to perform on the feasts of St. Louis and St. Michael. The Notre Dame School also shows how organs could have been used within the increased use of polyphony, which would have allowed for the use of more instrumental voices within the music. According to documentation from the 9th century by Walafrid Strabo, the organ was also used for music during other parts of the church service—the prelude and postlude being the main examples—and not just for the effect of polyphony with the choir. Other possible instances of this were short interludes played on the organ either in between parts of the church service or during choral songs, but they were not played at the same time as the choir was singing. This shows that by this point in time organs were being fully used within church services and not just in secular settings. There is proof that organs existed earlier in the medieval period, based on the surviving keyboards and casings of some organs; however, no pipes from organs survive from this period. Until the mid-15th century, organs had no stop controls. Each manual controlled ranks at many pitches, known as the "Blockwerk." Around 1450, controls were designed that allowed the ranks of the Blockwerk to be played individually. These devices were the forerunners of modern stop actions.
The higher-pitched ranks of the Blockwerk remained grouped together under a single stop control; these stops developed into mixtures. History and development: Renaissance and Baroque periods During the Renaissance and Baroque periods, the organ's tonal colors became more varied. Organ builders fashioned stops that imitated various instruments, such as the krummhorn and the viola da gamba. Builders such as Arp Schnitger, Jasper Johannsen, Zacharias Hildebrandt and Gottfried Silbermann constructed instruments that were in themselves artistic masterpieces, displaying both exquisite craftsmanship and beautiful sound. These organs featured well-balanced mechanical key actions, giving the organist precise control over the pipe speech. Schnitger's organs featured particularly distinctive reed timbres and large Pedal and Rückpositiv divisions.Different national styles of organ building began to develop, often due to changing political climates. In the Netherlands, the organ became a large instrument with several divisions, doubled ranks, and mounted cornets. The organs of northern Germany also had more divisions, and independent pedal divisions became increasingly common. The divisions of the organ became visibly discernible from the case design. Twentieth-century musicologists have retroactively labelled this the Werkprinzip. History and development: In France, as in Italy, Spain and Portugal, organs were primarily designed to play alternatim verses rather than accompany congregational singing. The French Classical Organ, became remarkably consistent throughout France over the course of the Baroque era, more so than any other style of organ building in history, and standardized registrations developed. It was elaborately described by Dom Bédos de Celles in his treatise L'art du facteur d'orgues (The Art of Organ Building). The Italian Baroque organ was often a single-manual instrument, devoid of pedals. It was built on a full diapason chorus of octaves and fifths. The stop-names indicated the pitch relative to the fundamental ("Principale") and typically reached extremely short nominal pipe-lengths (for example, if the Principale were 8', the "Vigesimanona" was ½'). The highest ranks, however, "broke back", their smallest pipes being replaced by pipes an octave lower in pitch, to produce a kind of composite treble mixture. History and development: In England, many pipe organs were destroyed or removed from churches during the English Reformation of the 16th century and the Commonwealth period. Some were relocated to private homes. At the Restoration, organ builders such as Renatus Harris and "Father" Bernard Smith brought new organ-building ideas from continental Europe. English organs evolved from small one- or two-manual instruments into three or more divisions disposed in the French manner with grander reeds and mixtures, though still without pedal keyboards. The Echo division began to be enclosed in the early 18th century, and in 1712, Abraham Jordan claimed his "swelling organ" at St Magnus-the-Martyr to be a new invention. The swell box and the independent pedal division appeared in English organs beginning in the 18th century. History and development: Romantic period During the Romantic period, the organ became more symphonic, capable of creating a gradual crescendo. This was made possible by voicing stops in such a way that families of tone that historically had only been used separately could now be used together, creating an entirely new way of approaching organ registration. 
New technologies and the work of organ builders such as Eberhard Friedrich Walcker, Aristide Cavaillé-Coll, and Henry Willis made it possible to build larger organs with more stops, more variation in sound and timbre, and more divisions. For instance, as early as 1808, the first 32' contre-bombarde was installed in the great organ of Nancy Cathedral, France. Enclosed divisions became common, and registration aids were developed to make it easier for the organist to manage the great number of stops. The desire for louder, grander organs required that the stops be voiced on a higher wind pressure than before. As a result, a greater force was required to overcome the wind pressure and depress the keys. To solve this problem, Cavaillé-Coll configured the English "Barker lever" to assist in operating the key action. This is, essentially, a servomechanism that uses wind pressure from the air plenum to augment the force that is exerted by the player's fingers. Organ builders began to lean towards specifications with fewer mixtures and fewer high-pitched stops. They preferred to use more 8′ and 16′ stops in their specifications and wider pipe scales. These practices created a warmer, richer sound than was common in the 18th century. Organs began to be built in concert halls (such as the organ at the Palais du Trocadéro in Paris), and composers such as Camille Saint-Saëns and Gustav Mahler used the organ in their orchestral works. History and development: Modern development The development of pneumatic and electro-pneumatic key actions in the late 19th century made it possible to locate the console independently of the pipes, greatly expanding the possibilities in organ design. Electric stop actions were also developed, which allowed sophisticated combination actions to be created. Beginning in the early 20th century in Germany and in the mid-20th century in the United States, organ builders began to build historically inspired instruments modeled on Baroque organs. They returned to building mechanical key actions, voicing with lower wind pressures and thinner pipe scales, and designing specifications with more mixture stops. This became known as the Organ Reform Movement. History and development: In the late 20th century, organ builders began to incorporate digital components into their key, stop, and combination actions. Besides making these mechanisms simpler and more reliable, this also makes it possible to record and play back an organist's performance using the MIDI protocol. In addition, some organ builders have incorporated digital (electronic) stops into their pipe organs. History and development: The electronic organ developed throughout the 20th century. Some pipe organs were replaced by digital organs because of their lower purchase price, smaller physical size, and minimal maintenance requirements. In the early 1970s, Rodgers Instruments pioneered the hybrid organ, an electronic instrument that incorporates real pipes; other builders such as Allen Organs and Johannus Orgelbouw have since built hybrid organs. Allen Organs first introduced the electronic organ in 1937 and in 1971 created the first digital organ, using CMOS technology borrowed from NASA; sound recorded from actual speaking pipes was stored electronically in the instrument's memory, giving real pipe organ sound without the actual organ pipes. Construction: A pipe organ contains one or more sets of pipes, a wind system, and one or more keyboards.
The pipes produce sound when pressurized air produced by the wind system passes through them. An action connects the keyboards to the pipes. Stops allow the organist to control which ranks of pipes sound at a given time. The organist operates the stops and the keyboards from the console. Construction: Pipes Organ pipes are made from either wood or metal and produce sound ("speak") when air under pressure ("wind") is directed through them. As one pipe produces a single pitch, multiple pipes are necessary to accommodate the musical scale. The greater the length of the pipe, the lower its resulting pitch will be. The timbre and volume of the sound produced by a pipe depends on the volume of air delivered to the pipe and the manner in which it is constructed and voiced, the latter adjusted by the builder to produce the desired tone and volume. Hence a pipe's volume cannot be readily changed while playing. Construction: Organ pipes are divided into flue pipes and reed pipes according to their design and timbre. Flue pipes produce sound by forcing air through a fipple, like that of a recorder, whereas reed pipes produce sound via a beating reed, like that of a clarinet or saxophone.Pipes are arranged by timbre and pitch into ranks. A rank is a set of pipes of the same timbre but multiple pitches (one for each note on the keyboard), which is mounted (usually vertically) onto a windchest. The stop mechanism admits air to each rank. For a given pipe to sound, the stop governing the pipe's rank must be engaged, and the key corresponding to its pitch must be depressed. Ranks of pipes are organized into groups called divisions. Each division generally is played from its own keyboard and conceptually comprises an individual instrument within the organ. Construction: Action An organ contains two actions, or systems of moving parts. When a key is depressed, the key action admits wind into a pipe. The stop action allows the organist to control which ranks are engaged. An action may be mechanical, pneumatic, or electrical (or some combination of these, such as electro-pneumatic action). The key action is independent of the stop action, allowing an organ to combine a mechanical key action along with an electric stop action. Construction: A key action which physically connects the keys and the windchests is a mechanical or tracker action. Connection is achieved through a series of rods called trackers. When the organist depresses a key, the corresponding tracker pulls open its pallet, allowing wind to enter the pipe. Construction: In a mechanical stop action, each stop control operates a valve for a whole rank of pipes. When the organist selects a stop, the valve allows wind to reach the selected rank. This control was at first a draw stop knob, which the organist selects by pulling (or drawing) toward himself/herself. This is the origin of the idiom "to pull out all the stops". More modern stop selectors, utilized in electric actions, are tilting tablets or rocker tabs. Construction: Tracker action has been used from antiquity to modern times. Before the pallet opens, wind pressure augments tension of the pallet spring, but once the pallet opens, only the spring tension is felt at the key. This provides a "breakaway" feel.A later development was the tubular-pneumatic action, which uses changes of pressure within lead tubing to operate pneumatic valves throughout the instrument. This allowed a lighter touch, and more flexibility in the location of the console, within a roughly 50-foot (15-m) limit. 
This type of construction was used from the late 19th century to the early 20th century, and has had only rare application since the 1920s. A more recent development is the electric action, which uses low voltage DC to control the key and/or stop mechanisms. Electricity may control the action indirectly through air pressure valves (pneumatics), in which case the action is electro-pneumatic. In such actions, an electromagnet attracts a small pilot valve which lets wind go to a bellows ("pneumatic") which opens the pallet. When electricity operates the action directly without the assistance of pneumatics, it is commonly referred to as direct electric action. In this type, the electromagnet's armature carries a disc pallet. Construction: When electrical wiring alone is used to connect the console to the windchest, electric actions allow the console to be separated at any practical distance from the rest of the organ, and to be movable. Electric stop actions can be controlled at the console by stop knobs, by pivoted tilting tablets, or rocker tabs. These are simple switches, like wall switches for room lights. Some may include electromagnets for setting or resetting when combinations are selected. Construction: The most recent innovations in organ control systems connect the console and windchests via narrow data cables instead of the larger bundles of cables. Embedded computers in the console and near the windchests communicate with each other via various complex multiplexing syntaxes, comparable to MIDI. Construction: Wind system The wind system consists of the parts that produce, store, and deliver wind to the pipes. Pipe organ wind pressures are on the order of 0.10 psi (0.69 kPa). Organ builders traditionally measure organ wind using a water U-tube manometer, which gives the pressure as the difference in water levels in the two legs of the manometer. The difference in water level is proportional to the difference in pressure between the wind being measured and the atmosphere. The 0.10 psi above would register as 2.75 inches of water (70 mmAq). An Italian organ from the Renaissance period may be on only 2.2 inches (56 mm), while (in the extreme) solo stops in some large 20th-century organs may require up to 50 inches (1,300 mm). In isolated, extreme cases, some stops have been voiced on 100 inches (2,500 mm). With the exception of water organs, playing the organ before the invention of motors required at least one person to operate the bellows. When signaled by the organist, a calcant would operate a set of bellows, supplying the organ with wind. Because calcants were expensive, organists would usually practise on other instruments such as the clavichord or harpsichord. By the mid-19th century, bellows were also being operated by water engines, steam engines or gasoline engines. Starting in the 1860s, bellows were gradually replaced by rotating turbines which were later directly connected to electrical motors. This made it possible for organists to practice regularly on the organ. Most organs, both new and historic, have electric blowers, although some can still be operated manually. The wind supplied is stored in one or more regulators to maintain a constant pressure in the windchests until the action allows it to flow into the pipes. Construction: Stops Each stop usually controls one rank of pipes, although mixtures and undulating stops (such as the Voix céleste) control multiple ranks.
The name of the stop reflects not only the stop's timbre and construction, but also the style of the organ in which it resides. For example, the names on an organ built in the north German Baroque style generally will be derived from the German language, while the names of similar stops on an organ in the French Romantic style will usually be French. Most countries tend to use only their own languages for stop nomenclature. English-speaking nations as well as Japan are more receptive to foreign nomenclature. Stop names are not standardized: two otherwise identical stops from different organs may have different names.To facilitate a large range of timbres, organ stops exist at different pitch levels. A stop that sounds at unison pitch when a key is depressed is referred to as being at 8′ (pronounced "eight-foot") pitch. This refers to the speaking length of the lowest-sounding pipe in that rank, which is approximately eight feet (2.4 m). For the same reason, a stop that sounds an octave higher is at 4′ pitch, and one that sounds two octaves higher is at 2′ pitch. Likewise, a stop that sounds an octave lower than unison pitch is at 16′ pitch, and one that sounds two octaves lower is at 32′ pitch. Stops of different pitch levels are designed to be played simultaneously. Construction: The label on a stop knob or rocker tab indicates the stop's name and its pitch in feet. Stops that control multiple ranks display a Roman numeral indicating the number of ranks present, instead of pitch. Thus, a stop labelled "Open Diapason 8′ " is a single-rank diapason stop sounding at 8′ pitch. A stop labelled "Mixture V" is a five-rank mixture. Construction: Sometimes, a single rank of pipes may be able to be controlled by several stops, allowing the rank to be played at multiple pitches or on multiple manuals. Such a rank is said to be unified or borrowed. For example, an 8′ Diapason rank may also be made available as a 4′ Octave. When both of these stops are selected and a key (for example, c′) is pressed, two pipes of the same rank will sound: the pipe normally corresponding to the key played (c′), and the pipe one octave above that (c′′). Because the 8′ rank does not have enough pipes to sound the top octave of the keyboard at 4′ pitch, it is common for an extra octave of pipes used only for the borrowed 4′ stop to be added. In this case, the full rank of pipes (now an extended rank) is one octave longer than the keyboard.Special unpitched stops also appear in some organs. Among these are the Zimbelstern (a wheel of rotating bells), the nightingale (a pipe submerged in a small pool of water, creating the sound of a bird warbling when wind is admitted), and the effet d'orage ("thunder effect", a device that sounds the lowest bass pipes simultaneously). Standard orchestral percussion instruments such as the drum, chimes, celesta, and harp have also been imitated in organ building. Construction: Console The controls available to the organist, including the keyboards, couplers, expression pedals, stops, and registration aids are accessed from the console. The console is either built into the organ case or detached from it. Construction: Keyboards Keyboards played by the hands are known as manuals (from the Latin manus, meaning "hand"). The keyboard played by the feet is a pedalboard. Every organ has at least one manual (most have two or more), and most have a pedalboard. Each keyboard is named for a particular division of the organ (a group of ranks) and generally controls only the stops from that division. 
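The footage convention and the unified (borrowed) rank described above follow a simple rule: halving the nominal pipe length raises the sounding pitch by an octave, so a 4′ stop speaks an octave above an 8′ stop and a 16′ stop an octave below, and a borrowed 4′ stop simply reads the same rank of pipes twelve notes higher. The sketch below is an illustrative model only; the pipe numbering and the 61-note manual compass are assumptions based on the figures given elsewhere in this article.

```python
import math

def octave_offset(footage, unison=8.0):
    """Octaves relative to unison (8') pitch: 4' -> +1, 2' -> +2, 16' -> -1, 32' -> -2."""
    return int(round(math.log2(unison / footage)))

MANUAL_NOTES = 61                        # five octaves, C to c''''

def pipes_sounding(key_index, drawn_footages, rank_footage=8.0):
    """For one key of a unified (extended) rank, return which pipe numbers
    speak for each drawn stop; each octave shift corresponds to 12 pipes."""
    return [key_index + 12 * octave_offset(f, rank_footage) for f in drawn_footages]

# With the 8' Diapason and the borrowed 4' Octave both drawn, one key sounds
# the unison pipe and the pipe an octave above it, so the extended rank needs
# MANUAL_NOTES + 12 pipes -- the extra octave of pipes mentioned in the text.
print(pipes_sounding(24, [8.0, 4.0]))    # -> [24, 36]
```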
The range of the keyboards has varied widely across time and between countries. Most current specifications call for two or more manuals with sixty-one notes (five octaves, from C to c″″) and a pedalboard with thirty or thirty-two notes (two and a half octaves, from C to f′ or g′). Construction: Couplers A coupler allows the stops of one division to be played from the keyboard of another division. For example, a coupler labelled "Swell to Great" allows the stops drawn in the Swell division to be played on the Great manual. This coupler is a unison coupler, because it causes the pipes of the Swell division to sound at the same pitch as the keys played on the Great manual. Coupling allows stops from different divisions to be combined to create various tonal effects. It also allows every stop of the organ to be played simultaneously from one manual.Octave couplers, which add the pipes an octave above (super-octave) or below (sub-octave) each note that is played, may operate on one division only (for example, the Swell super octave, which adds the octave above what is being played on the Swell to itself), or act as a coupler to another keyboard (for example, the Swell super-octave to Great, which adds to the Great manual the ranks of the Swell division an octave above what is being played).In addition, larger organs may use unison off couplers, which prevent the stops pulled in a particular division from sounding at their normal pitch. These can be used in combination with octave couplers to create innovative aural effects, and can also be used to rearrange the order of the manuals to make specific pieces easier to play. Construction: Enclosure and expression pedals Enclosure refers to a system that allows for the control of volume without requiring the addition or subtraction of stops. In a two-manual organ with Great and Swell divisions, the Swell will be enclosed. In larger organs, parts or all of the Choir and Solo divisions may also be enclosed. The pipes of an enclosed division are placed in a chamber generally called the swell box. At least one side of the box is constructed from horizontal or vertical palettes known as swell shades, which operate in a similar way to Venetian blinds; their position can be adjusted from the console. When the swell shades are open, more sound is heard than when they are closed. Sometimes the shades are exposed, but they are often concealed behind a row of facade-pipes or a grill. Construction: The most common method of controlling the louvers is the balanced swell pedal. This device is usually placed above the centre of the pedalboard and is configured to rotate away from the organist from a near-vertical position (in which the shades are closed) to a near-horizontal position (in which the shades are open). An organ may also have a similar-looking crescendo pedal, found alongside any expression pedals. Pressing the crescendo pedal forward cumulatively activates the stops of the organ, starting with the softest and ending with the loudest; pressing it backwards reverses this process. Construction: Combination action Organ stops can be combined in many permutations, resulting in a great variety of sounds. A combination action can be used to switch instantly from one combination of stops (called a registration) to another. Combination actions feature small buttons called pistons that can be pressed by the organist, generally located beneath the keys of each manual (thumb pistons) or above the pedalboard (toe pistons). 
Construction: Enclosure and expression pedals: Enclosure refers to a system that allows for the control of volume without requiring the addition or subtraction of stops. In a two-manual organ with Great and Swell divisions, the Swell will be enclosed. In larger organs, parts or all of the Choir and Solo divisions may also be enclosed. The pipes of an enclosed division are placed in a chamber generally called the swell box. At least one side of the box is constructed from horizontal or vertical palettes known as swell shades, which operate in a similar way to Venetian blinds; their position can be adjusted from the console. When the swell shades are open, more sound is heard than when they are closed. Sometimes the shades are exposed, but they are often concealed behind a row of facade-pipes or a grill. The most common method of controlling the shades is the balanced swell pedal. This device is usually placed above the centre of the pedalboard and is configured to rotate away from the organist from a near-vertical position (in which the shades are closed) to a near-horizontal position (in which the shades are open). An organ may also have a similar-looking crescendo pedal, found alongside any expression pedals. Pressing the crescendo pedal forward cumulatively activates the stops of the organ, starting with the softest and ending with the loudest; pressing it backwards reverses this process. Construction: Combination action: Organ stops can be combined in many permutations, resulting in a great variety of sounds. A combination action can be used to switch instantly from one combination of stops (called a registration) to another. Combination actions feature small buttons called pistons that can be pressed by the organist, generally located beneath the keys of each manual (thumb pistons) or above the pedalboard (toe pistons). The pistons may be divisional (affecting only a single division) or general (affecting all the divisions), and are either preset by the organ builder or can be altered by the organist. Modern combination actions operate via computer memory, and can store several channels of registrations. Construction: Casing: The pipes, action, and wind system are almost always contained in a case, the design of which also may incorporate the console. The case blends the organ's sound and aids in projecting it into the room. The case is often designed to complement the building's architectural style and it may contain ornamental carvings and other decorations. The visible portion of the case, called the façade, will most often contain pipes, which may be either sounding pipes or dummy pipes solely for decoration. The façade pipes may be plain, burnished, gilded, or painted, and are usually referred to as (en) montre within the context of the French organ school. Organ cases occasionally feature a few ranks of pipes protruding horizontally from the case in the manner of a row of trumpets. These are referred to as pipes en chamade and are particularly common in organs of the Iberian peninsula and large 20th-century instruments. Many organs, particularly those built in the early 20th century, are contained in one or more rooms called organ chambers. Because sound does not project from a chamber into the room as clearly as from a freestanding organ case, enchambered organs may sound muffled and distant. For this reason, some modern builders, particularly those building instruments specializing in polyphony rather than Romantic compositions, avoid this unless the architecture of the room makes it necessary. Construction: Tuning and regulation: The goal of tuning a pipe organ is to adjust the pitch of each pipe so that they all sound in tune with each other. How the pitch of each pipe is adjusted depends on the type and construction of that pipe. Construction: Regulation adjusts the action so that all pipes sound correctly. If the regulation is wrongly set, the keys may be at different heights, some pipes may sound when the keys are not pressed (a "cipher"), or pipes may not sound when a key is pressed. Tracker action, for example in the organ of Cradley Heath Baptist Church, includes adjustment nuts on the wire ends of the wooden trackers, which have the effect of changing the effective length of each tracker. Repertoire: The main development of organ repertoire has progressed along with that of the organ itself, leading to distinctive national styles of composition. Because organs are commonly found in churches and synagogues, the organ repertoire includes a large amount of sacred music, which is accompanimental (choral anthems, congregational hymns, liturgical elements, etc.) as well as solo in nature (chorale preludes, hymn versets designed for alternatim use, etc.). The organ's secular repertoire includes preludes, fugues, sonatas, organ symphonies, suites, and transcriptions of orchestral works. Repertoire: Although most countries whose music falls into the Western tradition have contributed to the organ repertoire, France and Germany in particular have produced exceptionally large amounts of organ music. There is also an extensive repertoire from the Netherlands, England, and the United States. Repertoire: Early music: Before the Baroque era, keyboard music generally was not written for one instrument or another, but rather was written to be played on any keyboard instrument.
For this reason, much of the organ's repertoire through the Renaissance period is the same as that of the harpsichord. Pre-Renaissance keyboard music is found in compiled manuscripts that may include compositions from a variety of regions. The oldest of these sources is the Robertsbridge Codex, dating from about 1360. The Buxheimer Orgelbuch, which dates from about 1470 and was compiled in Germany, includes intabulations of vocal music by the English composer John Dunstaple. The earliest Italian organ music is found in the Faenza Codex, dating from 1420. In the Renaissance period, Dutch composers such as Jan Pieterszoon Sweelinck composed both fantasias and psalm settings. Sweelinck in particular developed a rich collection of keyboard figuration that influenced subsequent composers. The Italian composer Claudio Merulo wrote in the typical Italian genres of the toccata, the canzona, and the ricercar. In Spain, the works of Antonio de Cabezón began the most prolific period of Spanish organ composition, which culminated with Juan Cabanilles. Repertoire: Common practice period: Early Baroque organ music in Germany was highly contrapuntal. Sacred organ music was based on chorales: composers such as Samuel Scheidt and Heinrich Scheidemann wrote chorale preludes, chorale fantasias, and chorale motets. Towards the end of the Baroque era, the chorale prelude and the partita became mixed, forming the chorale partita. This genre was developed by Georg Böhm, Johann Pachelbel, and Dieterich Buxtehude. The primary type of free-form piece in this period was the praeludium, as exemplified in the works of Matthias Weckmann, Nicolaus Bruhns, Böhm, and Buxtehude. The organ music of Johann Sebastian Bach fused characteristics of every national tradition and historical style in large-scale preludes and fugues and chorale-based works. Towards the end of the Baroque era, George Frideric Handel composed the first organ concertos. In France, organ music developed during the Baroque era through the music of Jean Titelouze, François Couperin, and Nicolas de Grigny. Because the French organ of the 17th and early 18th centuries was very standardized, a conventional set of registrations developed for its repertoire. The music of French composers (and Italian composers such as Girolamo Frescobaldi) was written for use during the Mass. Very little secular organ music was composed in France and Italy during the Baroque period; the written repertoire is almost exclusively intended for liturgical use. In England, composers such as John Blow and John Stanley wrote multi-sectional free works for liturgical use called voluntaries through the 19th century. Organ music was seldom written in the Classical era, as composers preferred the piano with its ability to create dynamics. In Germany, the six sonatas op. 65 of Felix Mendelssohn (published 1845) marked the beginning of a renewed interest in composing for the organ. Inspired by the newly built Cavaillé-Coll organs, the French organist-composers César Franck, Alexandre Guilmant and Charles-Marie Widor led organ music into the symphonic realm. The development of symphonic organ music continued with Louis Vierne and Charles Tournemire. Widor and Vierne wrote large-scale, multi-movement works called organ symphonies that exploited the full possibilities of the symphonic organ, such as Widor's Symphony for Organ No. 6 and Vierne's Organ Symphony No. 3.
Max Reger's and Sigfrid Karg-Elert's symphonic works made use of the abilities of the large Romantic organs being built in Germany at the time. Repertoire: In the 19th and 20th centuries, organ builders began to build instruments in concert halls and other large secular venues, allowing the organ to be used as part of an orchestra, as in Saint-Saëns' Symphony No. 3 (sometimes known as the Organ Symphony). Frequently the organ is given a soloistic part, such as in Joseph Jongen's Symphonie Concertante for Organ and Orchestra, Francis Poulenc's Concerto for Organ, Strings and Timpani, and Frigyes Hidas' Organ Concerto. Repertoire: Modern and contemporary: Other composers who have used the organ prominently in orchestral music include Gustav Holst, Richard Strauss, Ottorino Respighi, Gustav Mahler, Anton Bruckner, and Ralph Vaughan Williams. Because these concert hall instruments could approximate the sounds of symphony orchestras, transcriptions of orchestral works found a place in the organ repertoire. As silent films became popular, theatre organs were installed in theatres to provide accompaniment for the films. In the 20th century, the symphonic repertoire, both sacred and secular, continued to progress through the music of Marcel Dupré, Maurice Duruflé, and Herbert Howells. Other composers, such as Olivier Messiaen, György Ligeti, Jehan Alain, Jean Langlais, Gerd Zacher, and Petr Eben, wrote post-tonal organ music. Messiaen's music in particular redefined many of the traditional notions of organ registration and technique. Albert Schweitzer was an organist who studied the music of German composer Johann Sebastian Bach and influenced the Organ reform movement. Repertoire: Film composer Hans Zimmer used the pipe organ prominently in the score of the movie Interstellar. The final recording took place in London's Temple Church, on its 1926 four-manual Harrison and Harrison organ.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alpha capture system** Alpha capture system: An alpha capture system is a computer system that enables investment banks and other organizations to submit "trading ideas" or "trade ideas" to clients in a written electronic format, for example TIM Group's TIM Ideas product or Bloomberg LP's Trade Ideas product. Introduction: First used in 2001 by Marshall Wace, alpha capture systems are an alternative to the traditional stockbroking approach of communicating ideas and strategies to clients face-to-face or over the telephone (Financial Services Authority Markets Division: Newsletter on Market Conduct and Transaction Reporting Issues, Issue No. 17, September 2006). The term alpha capture refers to the aim of such systems to help investors find alpha, or market-beating returns on investments. Submitted trade ideas are accompanied by a rationale, timeframe and conviction level, and enable investors to quantify and monitor the performance of different ideas.
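Since the defining feature of these systems is the structured, machine-readable form of each idea, a minimal sketch may help. The Python dataclass below uses illustrative field names (assumptions, not any vendor's actual schema) to capture the attributes the text mentions: rationale, timeframe, and conviction, plus enough pricing data to monitor performance.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class TradeIdea:
    instrument: str          # e.g. a ticker symbol
    direction: str           # "long" or "short"
    rationale: str           # written justification accompanying the idea
    timeframe_days: int      # horizon over which the idea should play out
    conviction: int          # e.g. 1 (low) to 5 (high)
    submitted: date = field(default_factory=date.today)
    entry_price: Optional[float] = None

def simple_return(idea: TradeIdea, current_price: float) -> float:
    """Signed return of the idea so far, for performance monitoring."""
    if idea.entry_price is None:
        raise ValueError("no entry price recorded")
    r = (current_price - idea.entry_price) / idea.entry_price
    return r if idea.direction == "long" else -r

idea = TradeIdea("XYZ", "long", "expects earnings beat", 30, 4, entry_price=100.0)
print(f"{simple_return(idea, 105.0):.1%}")  # 5.0%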
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acetyllysine** Acetyllysine: Acetyllysine (or acetylated lysine) is an acetyl-derivative of the amino acid lysine. There are multiple forms of acetyllysine; this article refers to N-ε-acetyl-L-lysine. The other form is N-α-acetyl-L-lysine. In proteins, the acetylation of lysine residues is an important mechanism of epigenetics. It functions by regulating the binding of histones to DNA in nucleosomes, thereby controlling the expression of genes on that DNA. Non-histone proteins are acetylated as well. Unlike the functionally similar methyllysine, acetyllysine does not carry a positive charge on its side chain. Histone acetyltransferases (HATs) catalyze the addition of acetyl groups from acetyl-CoA onto certain lysine residues of histones and non-histone proteins. Histone deacetylases (HDACs) catalyze the removal of acetyl groups from acetylated lysines. Acetyllysine can be synthesized from lysine by the selective acetylation of the terminal amine group.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lateral masking** Lateral masking: Lateral masking is a problem for human visual perception of identical or similar entities in close proximity. It can be illustrated by the difficulty of counting the vertical bars of a barcode. In linguistics, lateral masking refers to the interference a letter exerts on its neighbors. This is a problem readers encounter when reading a word: the identity of a letter in the middle of a word is obscured by the presence of its neighboring letters. Lateral masking may also be a problem in orthography design. A readable orthography will avoid situations in which a reader is faced with severe lateral masking.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DNAI1** DNAI1: Dynein axonemal intermediate chain 1 is a protein that in humans is encoded by the DNAI1 gene. The inner- and outer-arm dyneins, which bridge between the doublet microtubules in axonemes, are the force-generating proteins responsible for the sliding movement in axonemes. The intermediate and light chains, thought to form the base of the dynein arm, help mediate attachment and may also participate in regulating dynein activity. This gene encodes an intermediate chain dynein, belonging to the large family of motor proteins. Mutations in this gene result in abnormal ciliary ultrastructure and function associated with primary ciliary dyskinesia (PCD) and Kartagener syndrome. The DNAI1 gene is involved in the development of proper respiratory function, motility of spermatozoa, and asymmetrical organization of the viscera during embryogenesis. This gene affects these three very different aspects of development because all three are dependent on proper cilia function. DNAI1 codes for the development of cilia ultrastructure in the upper and lower respiratory tracts, spermatozoa flagellae, and nodal cilia (cilia of the primitive node). DNAI1 specifically encodes an intermediate chain of the outer dynein arm. Each dynein arm of the ciliary axoneme has an inner and outer dynein arm. A mutation in DNAI1 can lead to defective ciliary beating. DNAI1 gene mutations account for 4–10% of all cases of primary ciliary dyskinesia (PCD). The most frequent structural defect in the cilia of PCD patients is abnormal dynein arms. A common mutation of DNAI1 leading to PCD is a hot-spot mutation in intron 1 of the gene. Coding or splicing mutations are found in only 10% of PCD cases.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Amdahl's law** Amdahl's law: In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967. Amdahl's law: Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours to complete using a single thread, but a one-hour portion of the program cannot be parallelized, so that only the remaining 19 hours' (p = 0.95) execution time can be parallelized, then regardless of how many threads are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. Hence, the theoretical speedup is always less than 20 times the single-thread performance. Definition: Amdahl's law can be formulated in the following way: $S_\text{latency}(s) = \frac{1}{(1-p) + \frac{p}{s}}$, where $S_\text{latency}$ is the theoretical speedup of the execution of the whole task; $s$ is the speedup of the part of the task that benefits from improved system resources; $p$ is the proportion of execution time that the part benefiting from improved resources originally occupied. Furthermore, $\lim_{s \to \infty} S_\text{latency}(s) = \frac{1}{1-p}$. Definition: This shows that the theoretical speedup of the execution of the whole task increases with the improvement of the resources of the system and that, regardless of the magnitude of the improvement, the theoretical speedup is always limited by the part of the task that cannot benefit from the improvement. Definition: Amdahl's law applies only to the cases where the problem size is fixed. In practice, as more computing resources become available, they tend to get used on larger problems (larger datasets), and the time spent in the parallelizable part often grows much faster than the inherently serial work. In this case, Gustafson's law gives a less pessimistic and more realistic assessment of the parallel performance.
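The formula is short enough to check numerically. Here is a minimal Python sketch (an illustration, not from any cited source) that evaluates the law for the 20-hour example above, where p = 0.95 and no number of threads can push the speedup past 20.

def amdahl_speedup(p: float, s: float) -> float:
    """Theoretical speedup of the whole task: 1 / ((1 - p) + p / s)."""
    return 1.0 / ((1.0 - p) + p / s)

p = 0.95  # 19 of 20 hours are parallelizable
for s in (2, 8, 64, 1024):
    print(f"s = {s:5d}: speedup = {amdahl_speedup(p, s):.2f}")
print(f"limit as s -> infinity: {1.0 / (1.0 - p):.0f}")  # 20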
Derivation: A task executed by a system whose resources are improved compared to an initial similar system can be split up into two parts: a part that does not benefit from the improvement of the resources of the system; a part that benefits from the improvement of the resources of the system. An example is a computer program that processes files. A part of that program may scan the directory of the disk and create a list of files internally in memory. After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can. Derivation: The execution time of the whole task before the improvement of the resources of the system is denoted as $T$. It includes the execution time of the part that would not benefit from the improvement of the resources and the execution time of the one that would benefit from it. The fraction of the execution time of the task that would benefit from the improvement of the resources is denoted by $p$. The one concerning the part that would not benefit from it is therefore $1 - p$. Then: $T = (1-p)T + pT$. Derivation: It is the execution of the part that benefits from the improvement of the resources that is accelerated by the factor $s$ after the improvement of the resources. Consequently, the execution time of the part that does not benefit from it remains the same, while the part that benefits from it becomes $\frac{p}{s}T$. The theoretical execution time $T(s)$ of the whole task after the improvement of the resources is then: $T(s) = (1-p)T + \frac{p}{s}T$. Amdahl's law gives the theoretical speedup in latency of the execution of the whole task at fixed workload $W$, which yields $S_\text{latency}(s) = \frac{TW}{T(s)W} = \frac{T}{T(s)} = \frac{1}{(1-p) + \frac{p}{s}}$. Parallel programs: If 30% of the execution time may be the subject of a speedup, $p$ will be 0.3; if the improvement makes the affected part twice as fast, $s$ will be 2. Amdahl's law states that the overall speedup of applying the improvement will be: $S_\text{latency} = \frac{1}{1 - 0.3 + \frac{0.3}{2}} \approx 1.18$. Derivation: For example, assume that we are given a serial task which is split into four consecutive parts, whose percentages of execution time are $p_1 = 0.11$, $p_2 = 0.18$, $p_3 = 0.23$, and $p_4 = 0.48$ respectively. Then we are told that the 1st part is not sped up, so $s_1 = 1$, while the 2nd part is sped up 5 times, so $s_2 = 5$, the 3rd part is sped up 20 times, so $s_3 = 20$, and the 4th part is sped up 1.6 times, so $s_4 = 1.6$. By using Amdahl's law, the overall speedup is $S_\text{latency} = \frac{1}{\frac{0.11}{1} + \frac{0.18}{5} + \frac{0.23}{20} + \frac{0.48}{1.6}} \approx 2.19$. Derivation: Notice how the 5 times and 20 times speedup on the 2nd and 3rd parts respectively don't have much effect on the overall speedup when the 4th part (48% of the execution time) is accelerated by only 1.6 times. Derivation: Serial programs: For example, with a serial program in two parts A and B for which $T_A$ = 3 s and $T_B$ = 1 s: if part B is made to run 5 times faster, that is $s = 5$ and $p = T_B/(T_A + T_B) = 0.25$, then $S_\text{latency} = \frac{1}{1 - 0.25 + \frac{0.25}{5}} = 1.25$; if part A is made to run 2 times faster, that is $s = 2$ and $p = T_A/(T_A + T_B) = 0.75$, then $S_\text{latency} = \frac{1}{1 - 0.75 + \frac{0.75}{2}} = 1.60$. Therefore, making part A run 2 times faster is better than making part B run 5 times faster. The percentage improvement in speed can be calculated as $\text{percentage improvement} = 100 \left(1 - \frac{1}{S_\text{latency}}\right)$. Derivation: Improving part A by a factor of 2 will increase overall program speed by a factor of 1.60, which makes it 37.5% faster than the original computation. However, improving part B by a factor of 5, which presumably requires more effort, will achieve an overall speedup factor of 1.25 only, which makes it 20% faster. Optimizing the sequential part of parallel programs: If the non-parallelizable part is optimized by a factor of $O$, then $T(O,s) = (1-p)\frac{T}{O} + \frac{p}{s}T$. It follows from Amdahl's law that the speedup due to parallelism is given by $S_\text{latency}(O,s) = \frac{T(O)}{T(O,s)} = \frac{\frac{1-p}{O} + p}{\frac{1-p}{O} + \frac{p}{s}}$. When $s = 1$, we have $S_\text{latency}(O,s) = 1$, meaning that the speedup is measured with respect to the execution time after the non-parallelizable part is optimized. When $s = \infty$, $S_\text{latency}(O,\infty) = \frac{\frac{1-p}{O} + p}{\frac{1-p}{O}} = 1 + \frac{pO}{1-p}$. If $p = 0.6$, $O = 2$ and $s = 5$, then: $S_\text{latency}(O,s) = \frac{\frac{0.4}{2} + 0.6}{\frac{0.4}{2} + \frac{0.6}{5}} = 2.5$. Transforming sequential parts of parallel programs into parallelizable: Next, we consider the case wherein the non-parallelizable part is reduced by a factor of $O'$, and the parallelizable part is correspondingly increased. Then $T'(O',s) = \frac{1-p}{O'}T + \left(1 - \frac{1-p}{O'}\right)\frac{T}{s}$. It follows from Amdahl's law that the speedup due to parallelism is given by $S'_\text{latency}(O',s) = \frac{T'(O')}{T'(O',s)} = \frac{1}{\frac{1-p}{O'} + \left(1 - \frac{1-p}{O'}\right)\frac{1}{s}}$.
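The worked examples above all follow one generalization: for a task split into parts with time fractions $p_i$, each sped up by $s_i$, the overall speedup is $1 / \sum_i \frac{p_i}{s_i}$. The short Python sketch below (plain illustration, mirroring the numbers in the text) reproduces the 2.19 figure and the serial-program comparison.

def overall_speedup(parts):
    """parts: iterable of (fraction_of_time, speedup_factor) pairs; fractions sum to 1."""
    return 1.0 / sum(p / s for p, s in parts)

# Four consecutive parts: p = 0.11, 0.18, 0.23, 0.48; s = 1, 5, 20, 1.6
print(round(overall_speedup([(0.11, 1), (0.18, 5), (0.23, 20), (0.48, 1.6)]), 2))  # 2.19

# Serial program, parts A (3 s) and B (1 s):
print(round(overall_speedup([(0.75, 2), (0.25, 1)]), 2))  # A made 2x faster -> 1.6
print(round(overall_speedup([(0.75, 1), (0.25, 5)]), 2))  # B made 5x faster -> 1.25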
Relation to the law of diminishing returns: Amdahl's law is often conflated with the law of diminishing returns, whereas only a special case of applying Amdahl's law demonstrates the law of diminishing returns. If one picks optimally (in terms of the achieved speedup) what is to be improved, then one will see monotonically decreasing improvements as one improves. If, however, one picks non-optimally, after improving a sub-optimal component and moving on to improve a more optimal component, one can see an increase in the return. Note that it is often rational to improve a system in an order that is "non-optimal" in this sense, given that some improvements are more difficult or require larger development time than others. Relation to the law of diminishing returns: Amdahl's law does represent the law of diminishing returns if one is considering what sort of return one gets by adding more processors to a machine, if one is running a fixed-size computation that will use all available processors to their capacity. Each new processor added to the system will add less usable power than the previous one. Each time one doubles the number of processors the speedup ratio will diminish, as the total throughput heads toward the limit of $\frac{1}{1-p}$. Relation to the law of diminishing returns: This analysis neglects other potential bottlenecks such as memory bandwidth and I/O bandwidth. If these resources do not scale with the number of processors, then merely adding processors provides even lower returns. Relation to the law of diminishing returns: An implication of Amdahl's law is that to speed up real applications which have both serial and parallel portions, heterogeneous computing techniques are required. There are novel speedup and energy consumption models based on a more general representation of heterogeneity, referred to as the normal form heterogeneity, that support a wide range of heterogeneous many-core architectures. These modelling methods aim to predict system power efficiency and performance ranges, and facilitate research and development at the hardware and system software levels.
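The diminishing return from doubling the processor count is easy to tabulate. A brief self-contained Python sketch (illustrative only) with p = 0.95 shows the gain from each doubling shrinking toward 1 as the speedup approaches the 1/(1 - p) = 20 ceiling.

def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

p = 0.95  # limit is 1 / (1 - p) = 20
prev = 1.0
for n in (1, 2, 4, 8, 16, 32, 64, 128):
    cur = amdahl_speedup(p, n)
    print(f"{n:4d} processors: speedup {cur:6.2f}  (gain from doubling: {cur / prev:.2f}x)")
    prev = cur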
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Provel cheese** Provel cheese: Provel is a white processed cheese product, particularly popular in St. Louis cuisine, that is a combination of cheddar, Swiss, and provolone cheeses. Provel has a low melting point, and therefore has a gooey and almost buttery texture at room temperature. It is the traditional topping for St. Louis-style pizza. It is also often used in the preparation of cheese soup and served on salads, chicken, and the Gerber sandwich. Some restaurants use Provel for their pasta dishes with white sauce instead of the customary fresh Italian cheese and cream. Provel cheese: Popular in the St. Louis area, Provel is rarely used elsewhere. Provel can, however, be purchased at Hy-Vee grocery stores throughout the Midwest. Provel cheese: According to former St. Louis Post-Dispatch food critic Joe Bonwich, Provel was invented specifically for St. Louis-style pizza more than a half-century ago by the downtown firm Costa Grocery (now Roma Grocery on the Hill, a St. Louis neighborhood consisting primarily of people of Italian lineage), in collaboration with the Hoffman Dairy Company of Wisconsin (now part of Kraft Foods). Bonwich states that Provel was developed to meet perceived demand for a pizza cheese with a "clean bite": one that melts well but breaks off nicely when the diner bites down. Neither of Bonwich's sources at Kraft and Roma had a definitive answer for the origin of the name, although one popular theory is that it is a portmanteau of the words provolone and mozzarella, two of the cheeses for which it is substituted. Another rumored origin is that "Provel" comes from the name provolone, with the "-one" removed because the product is made up of more than one type of cheese. Provel cheese: Because of Provel's moisture content, it cannot be labeled as cheese in the U.S.; instead it is labeled as "pasteurized process cheese". Even though it is a processed cheese, it is closer to being a real cheese than most products labeled as American cheese. As a processed cheese, Provel is subject to FDA guidelines on labeling cheese. The trademark on the "Provel" name, first used in 1947, was held by the Churny Company, Inc. of Glenview, Illinois. Churny became a wholly owned subsidiary of Kraft Foods and was closed in 2012.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bolt snap** Bolt snap: A bolt snap is a type of snap hook with a manually operated bolt-action slide gate of medium security, used to clip a light load to a ring, eye, loop or bight to temporarily secure or suspend an object. They are used for a wide variety of applications including dog leads and for clipping scuba equipment to the diving harness. A similar but more secure device used to attach sails to a stay is known as a piston hank. It differs from a snap shackle in that the load is not carried by the gate. The bolt snap must be actively operated by the user to clip or unclip, and is not easily snagged or unintentionally clipped or unclipped by pressing or bumping against the surroundings. The most common type has a single snap hook at one end and a swivel ring at the other, but double-ended bolt snaps and single-ended snaps with a swivel shackle are also available. There are a few variations on the style of the hook, gate opening and swivel style. The characteristic element of the bolt snap is the bolt-action gate. This is a spring-loaded rod which slides longitudinally inside the body of the clip against a compression spring to open the gate of the hook, and returns to rest against the tip of the hook by the action of the spring when released. Bolt snaps are not generally load rated, and are not used to suspend heavy loads. Most applications are in the load range where the user can lift the object to be clipped, or can hold the load manually. Structure: Bolt snaps are made of plastic or metal. The metal used is stainless steel or brass for diving and boating applications, with a stainless steel spring. Chrome-plated zinc and plastic bodies are only suitable for light loads such as key rings, handbag straps, and leads for small dogs. Structure: The single-ended bolt snap has a hook at one end with the opening in line with a hollow shaft, at the other end of which there is a flanged pin for the swivel fitting. The swivel fitting is usually a ring, but can also be a swivel shackle body. The tip of the hook is directly in line with the central axis of the hole in the shaft, so that the piston gate makes contact with the tip when closed, and the hook curves round to point at the hole. The gate is a cylindrical "bolt" with a sliding fit in the hollow shaft. It has a short rounded tab on the side which provides a grip for finger or thumb operation. This tab slides in a short slot in the outer side of the hole in the shaft. Structure: There is a compression coil spring in the hole in the shaft between the gate bolt and the bottom of the hole, which will return the bolt to rest against the tip of the hook when released, preventing passage of anything in either direction through the mouth of the hook when the bolt is in place. The bolt snap does not have a socket in the tip of the hook for the bolt, as the load is carried by the hook in normal service, and this type of closure is unsuited to multi-directional or highly dynamic loading. Structure: The double-ended bolt snap has a hook at each end of the body, with co-axial opposing bolts. The hook gates normally both face the same way. Variations: Various sizes and materials may be available for the style variations listed:
Single end: with round swivel loop (centred loop; offset loop; small loop with oblong loop for webbing; fixed loop; swivel loop), or with swivel shackle.
Double end: short or long body; butterfly gate.
Operation: The bolt snap is usually operated using one hand to manipulate the hook and gate.
If the object to which it is to be clipped is unstable, like the collar on a dog, the other hand may be used to hold it in place. The hook body is generally gripped by the fingers, one of which may be passed through the swivel ring to help support and stabilise the load when applicable, and the thumb is used to pull the bolt back to open the gate. The opening is then passed over the target and the bolt released, so that it snaps back to close the gate. To remove, the same method is used, and the load must be supported to unhook while the gate is open. The clip cannot be removed under normal tensile load conditions even with the gate open. Applications: Scuba diving: Bolt snaps are commonly used in scuba diving to clip equipment to the diver's harness for security and to keep it in place. The bolt snap style of connector is favoured because it is operable with one hand, is quick and easy to use, can support the relevant loads, and is reasonably secure against accidental operation. Animal leads: Bolt snaps are one of the common connectors used for attaching tethers to animal collars or harnesses. Luggage and fashion accessories: Bolt snaps are sometimes used to attach straps and handles to luggage and handbags.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bass–Quillen conjecture** Bass–Quillen conjecture: In mathematics, the Bass–Quillen conjecture relates vector bundles over a regular Noetherian ring $A$ and over the polynomial ring $A[t_1, \dots, t_n]$. The conjecture is named for Hyman Bass and Daniel Quillen, who formulated the conjecture. Statement of the conjecture: The conjecture is a statement about finitely generated projective modules. Such modules are also referred to as vector bundles. For a ring $A$, the set of isomorphism classes of vector bundles over $A$ of rank $r$ is denoted by $\operatorname{Vect}_r(A)$. The conjecture asserts that for a regular Noetherian ring $A$ the assignment $M \mapsto M \otimes_A A[t_1, \dots, t_n]$ yields a bijection $\operatorname{Vect}_r(A) \to \operatorname{Vect}_r(A[t_1, \dots, t_n])$. Known cases: If $A = k$ is a field, the Bass–Quillen conjecture asserts that any projective module over $k[t_1, \dots, t_n]$ is free. This question was raised by Jean-Pierre Serre and was later proved by Quillen and Suslin; see Quillen–Suslin theorem. More generally, the conjecture was shown by Lindel (1981) in the case that $A$ is a smooth algebra over a field $k$. Further known cases are reviewed in Lam (2006). Extensions: The set of isomorphism classes of vector bundles of rank $r$ over $A$ can also be identified with the nonabelian cohomology group $H^1_{\text{Nis}}(\operatorname{Spec}(A), GL_r)$. Positive results about the homotopy invariance of $H^1_{\text{Nis}}(U, G)$ for isotropic reductive groups $G$ have been obtained by Asok, Hoyois & Wendt (2018) by means of $\mathbb{A}^1$-homotopy theory.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pygame** Pygame: Pygame is a cross-platform set of Python modules designed for writing video games. It includes computer graphics and sound libraries designed to be used with the Python programming language. History: Pygame was originally written by Pete Shinners to replace PySDL after its development stalled. It has been a community project since 2000 and is released under the free-software GNU Lesser General Public License (which "provides for Pygame to be distributed with open source and commercial software"). Development of Version 2: Pygame version 2 was planned as "Pygame Reloaded" in 2009, but development and maintenance of Pygame completely stopped until the end of 2016, when version 1.9.1 was released. After the release of version 1.9.5 in March 2019, development of a new version 2 was active on the roadmap. Pygame 2.0 was released on 28 October 2020, Pygame's 20th birthday. Features: Pygame uses the Simple DirectMedia Layer (SDL) library, with the intention of allowing real-time computer game development without the low-level mechanics of the C programming language and its derivatives. This is based on the assumption that the most expensive functions inside games can be abstracted from the game logic, making it possible to use a high-level programming language, such as Python, to structure the game. Beyond what SDL provides, Pygame also includes vector math, collision detection, 2D sprite scene graph management, MIDI support, camera, pixel-array manipulation, transformations, filtering, advanced freetype font support, and drawing. Applications using Pygame can run on Android phones and tablets with the use of Pygame Subset for Android (pgs4a). Sound, vibration, keyboard, and accelerometer are supported on Android. Community: There is a regular competition, called PyWeek, to write games using Python (and usually, but not necessarily, Pygame). The community has created many tutorials for Pygame. Notable games using Pygame: Frets on Fire; Dangerous High School Girls in Trouble; Save the Date, IndieCade 2013 Finalist; Drawn Down Abyss.
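As an illustration of the high-level style Pygame enables, here is a minimal, self-contained example (a generic sketch, not taken from the Pygame documentation) that opens a window and runs the usual event/draw loop until the window is closed.

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
pygame.display.set_caption("Minimal Pygame sketch")
clock = pygame.time.Clock()

x, dx = 0, 4
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:   # window close button
            running = False
    x = (x + dx) % 640                  # move a circle across the screen
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 255, 255), (x, 240), 20)
    pygame.display.flip()               # present the frame
    clock.tick(60)                      # cap at 60 frames per second

pygame.quit()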
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crateology** Crateology: Crateology was the 'science' of identifying the contents of Soviet shipments to the island of Cuba, practised by the Central Intelligence Agency during the Cuban Missile Crisis. Crateology has declined as a discipline in recent years due to globalisation and the decline in the usage of custom-made wooden crates in favour of standard metal shipping containers. Though globalisation has made the world more interconnected and smaller, it has resulted in the loss of not merely a science, but a 'beautiful art form'.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Non-synchronous transmission** Non-synchronous transmission: A non-synchronous transmission, also called a crash gearbox, is a form of manual transmission based on gears that do not use synchronizing mechanisms. They require the driver to manually synchronize the transmission's input speed (engine RPM) and output speed (driveshaft speed). Non-synchronous transmissions are found primarily in various types of industrial machinery, such as tractors and semi-tractors. Non-synchronous manual transmissions are also found on motorcycles, in the form of constant-mesh sequential manual transmissions. Prior to the 1950s and 1960s, most cars used constant-mesh (and also sliding-mesh) but non-synchronous transmissions. History: Most early automobiles were rear-engined, using a single-speed transmission and belt-drive to power the rear wheels. In 1891, the French Panhard et Levassor automobile used a three-speed manual transmission and is considered to have set the template for multi-speed manual transmissions in motor vehicles. This transmission used a sliding-gear design without any form of speed synchronization, causing frequent grinding of the gear teeth during gear shifts. The Panhard design was refined over the years by other manufacturers to include "constant-mesh" gears (instead of sliding gears). The first usage of synchromesh was by Cadillac in 1928. Driving techniques: Trained drivers of vehicles with non-synchronous transmissions sometimes use the techniques listed below. If improperly implemented, these techniques can cause damage to the vehicle or the loss of control of the vehicle.
Double-clutching: releasing the clutch in neutral to synchronize the speeds of the shafts within the transmission.
Float shifting: shifting without using the clutch.
In big rigs and semi-trucks, the driver may have to complete 24 or more gear changes when accelerating from a standstill to 70 mph (110 km/h). Clutch brake: Unlike any other type of transmission, non-synchronous transmissions often have a clutch brake mechanism, which is usually activated by pressing the clutch pedal all the way to the floor or pressing a button on the top of the gear lever. The purpose of the clutch brake is to slow down (or stop) the rotation of the transmission's input shaft, which assists in shifting the transmission into neutral or first gear when the vehicle is at a standstill. The clutch brake not only slows or stops the idle gear axis but can also prevent shifting into gear until the clutch pedal is released a few centimetres (or inches) off the floor. In order to shift into gear, the clutch must be halfway off the floor; otherwise, the clutch brake will prevent the transmission from being shifted into or out of gear. Comparison of transmissions: Any transmission that requires the driver to manually synchronize the engine speed with the speed of the driveshaft is non-synchronous. Non-synchronous transmissions are mostly used in semi-trucks, large industrial machines, older agricultural tractors (e.g. the Massey Ferguson 135) and power take-offs. Sequential manual transmissions, which are commonly used in motorcycles, ATVs, and racecars, are a type of non-synchronous (unsynchronized) manual transmission in which gear ratios must be selected in succession (order); hence direct access to a specific gear ratio is not possible. Most manual transmissions in modern passenger vehicles are fitted with synchromesh to equalize the shaft speeds within the transmission, so they are synchronous transmissions.
All automatic transmissions have synchronizing mechanisms, and semi-automatic transmissions that use dog clutches typically have cone-and-collar synchronizing mechanisms.
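The synchronization the driver performs by ear is ultimately gear-ratio arithmetic: engine speed must match driveshaft speed multiplied by the ratio of the target gear. A minimal Python sketch (illustrative numbers and hypothetical function names, not manufacturer data) computes the engine RPM a driver would rev-match to when double-clutching into a lower gear.

def target_engine_rpm(road_speed_kmh, wheel_circumference_m, gear_ratio, final_drive):
    """Engine RPM at which the input shaft matches driveshaft speed in the chosen gear."""
    wheel_rpm = (road_speed_kmh * 1000.0 / 60.0) / wheel_circumference_m
    return wheel_rpm * final_drive * gear_ratio

# Downshifting at 40 km/h with a 3.2 m wheel circumference and a 4.1 final drive:
for gear, ratio in (("4th", 1.0), ("3rd", 1.6)):
    rpm = target_engine_rpm(40, 3.2, ratio, 4.1)
    print(f"{gear}: rev-match to about {rpm:.0f} rpm")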
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Central simple algebra** Central simple algebra: In ring theory and related areas of mathematics a central simple algebra (CSA) over a field $K$ is a finite-dimensional associative $K$-algebra $A$ which is simple, and for which the center is exactly $K$. (Note that not every simple algebra is a central simple algebra over its center: for instance, if $K$ is a field of characteristic 0, then the Weyl algebra $K[X, \partial_X]$ is a simple algebra with center $K$, but is not a central simple algebra over $K$ as it has infinite dimension as a $K$-module.) For example, the complex numbers C form a CSA over themselves, but not over the real numbers R (the center of C is all of C, not just R). The quaternions H form a 4-dimensional CSA over R, and in fact represent the only non-trivial element of the Brauer group of the reals (see below). Central simple algebra: Given two central simple algebras A ~ M(n,S) and B ~ M(m,T) over the same field F, A and B are called similar (or Brauer equivalent) if their division rings S and T are isomorphic. The set of all equivalence classes of central simple algebras over a given field F, under this equivalence relation, can be equipped with a group operation given by the tensor product of algebras. The resulting group is called the Brauer group Br(F) of the field F. It is always a torsion group. Properties:
According to the Artin–Wedderburn theorem a finite-dimensional simple algebra A is isomorphic to the matrix algebra M(n,S) for some division ring S. Hence, there is a unique division algebra in each Brauer equivalence class.
Every automorphism of a central simple algebra is an inner automorphism (this follows from the Skolem–Noether theorem).
The dimension of a central simple algebra as a vector space over its centre is always a square: the degree is the square root of this dimension.
The Schur index of a central simple algebra is the degree of the equivalent division algebra: it depends only on the Brauer class of the algebra.
The period or exponent of a central simple algebra is the order of its Brauer class as an element of the Brauer group. It is a divisor of the index, and the two numbers are composed of the same prime factors.
If S is a simple subalgebra of a central simple algebra A then dimF S divides dimF A.
Every 4-dimensional central simple algebra over a field F is isomorphic to a quaternion algebra; in fact, it is either a two-by-two matrix algebra, or a division algebra.
If D is a central division algebra over K for which the index has prime factorisation $\operatorname{ind}(D) = \prod_{i=1}^{r} p_i^{m_i}$, then D has a tensor product decomposition $D = \bigotimes_{i=1}^{r} D_i$ where each component $D_i$ is a central division algebra of index $p_i^{m_i}$, and the components are uniquely determined up to isomorphism.
Splitting field: We call a field E a splitting field for A over K if $A \otimes E$ is isomorphic to a matrix ring over E. Every finite-dimensional CSA has a splitting field: indeed, in the case when A is a division algebra, then a maximal subfield of A is a splitting field. In general, by theorems of Wedderburn and Koethe, there is a splitting field which is a separable extension of K of degree equal to the index of A, and this splitting field is isomorphic to a subfield of A. As an example, the field C splits the quaternion algebra H over R with $t + xi + yj + zk \leftrightarrow \begin{pmatrix} t + xi & y + zi \\ -y + zi & t - xi \end{pmatrix}$. Splitting field: We can use the existence of the splitting field to define reduced norm and reduced trace for a CSA A.
Map A to a matrix ring over a splitting field and define the reduced norm and trace to be the composite of this map with determinant and trace respectively. For example, in the quaternion algebra H, the splitting above shows that the element $t + xi + yj + zk$ has reduced norm $t^2 + x^2 + y^2 + z^2$ and reduced trace $2t$. Splitting field: The reduced norm is multiplicative and the reduced trace is additive. An element $a$ of A is invertible if and only if its reduced norm is non-zero: hence a CSA is a division algebra if and only if the reduced norm is non-zero on the non-zero elements. Generalization: CSAs over a field K are a non-commutative analog to extension fields over K – in both cases, they have no non-trivial 2-sided ideals, and have a distinguished field in their center, though a CSA can be non-commutative and need not have inverses (need not be a division algebra). This is of particular interest in noncommutative number theory as generalizations of number fields (extensions of the rationals Q); see noncommutative number field.
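The quaternion example can be checked directly: under the displayed splitting, the reduced norm and reduced trace of $t + xi + yj + zk$ are the determinant and trace of the corresponding 2×2 complex matrix. A short Python sketch using SymPy (assuming SymPy is available) verifies the claimed values symbolically.

import sympy as sp

t, x, y, z = sp.symbols("t x y z", real=True)
i = sp.I

# Image of t + xi + yj + zk under the splitting of H by C
M = sp.Matrix([[t + x*i, y + z*i],
               [-y + z*i, t - x*i]])

print(sp.expand(M.det()))    # t**2 + x**2 + y**2 + z**2  (reduced norm)
print(sp.expand(M.trace()))  # 2*t                        (reduced trace)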
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Water damage** Water damage: Water damage describes various possible losses caused by water intruding where it will enable attack of a material or system by destructive processes such as rotting of wood, mold growth, bacteria growth, rusting of steel, swelling of composite woods, de-laminating of materials such as plywood, short-circuiting of electrical devices, etc. The damage may be imperceptibly slow and minor, such as water spots that could eventually mar a surface, or it may be instantaneous and catastrophic, such as burst pipes and flooding. However fast it occurs, water damage is a major contributor to loss of property. Water damage: An insurance policy may or may not cover the costs associated with water damage and the process of water damage restoration. While a common cause of residential water damage is often the failure of a sump pump, many homeowner's insurance policies do not cover the associated costs without an addendum which adds to the monthly premium of the policy. Often the verbiage of this addendum is similar to "Sewer and Drain Coverage". Water damage: In the United States, individuals who are affected by wide-scale flooding may have the ability to apply for government and FEMA grants through the Individual Assistance program. On a larger level, businesses, cities, and communities can apply to the FEMA Public Assistance program for funds to assist after a large flood. For example, the city of Fond du Lac, Wisconsin received a $1.2 million FEMA grant after flooding in June 2008. The program allows the city to purchase the water-damaged properties, demolish the structures, and turn the former land into public green space. Causes: Water damage can originate from different sources such as a broken dishwasher hose, a washing machine overflow, a dishwasher leakage, broken/leaking pipes, flood waters, groundwater seepage, building envelope failures (leaking roof, windows, doors, siding, etc.) and clogged toilets. According to the Environmental Protection Agency, 13.7% of all water used in the home today can be attributed to plumbing leaks. On average that is approximately 10,000 gallons of water per year wasted by leaks for each US home. A tiny, 1/8-inch crack in a pipe can release up to 250 gallons of water a day. According to Claims Magazine in August 2000, broken water pipes ranked second to hurricanes in terms of both the number of homes damaged and the amount of claims costs in the US (on average $50,000 per insurance claim). Experts suggest that homeowners inspect and replace worn pipe fittings and hose connections to all household appliances that use water at least once a year. This includes washing machines, dishwashers, kitchen sinks, and bathroom lavatories, refrigerator icemakers, water softeners, and humidifiers. A few US companies offer whole-house leak protection systems utilizing flow-based technologies. A number of insurance companies offer policyholders reduced rates for installing a whole-house leak protection system. Causes: As far as insurance coverage is concerned, damage caused by surface water intrusion to the dwelling is considered flood damage and is normally excluded from coverage under traditional homeowners' insurance. Surface water is water that enters the dwelling from the surface of the ground because of inundation or insufficient drainage and causes loss to the dwelling. Coverage for surface water intrusion to the dwelling would usually require a separate flood insurance policy.
Categories: There are three basic categories of water damage, based on the level of contamination.
Category 1 Water - Refers to a source of water that does not pose substantial threat to humans, classified as "clean water". Examples are broken water supply lines, tub or sink overflows, or appliance malfunctions that involve water supply lines.
Category 2 Water - Refers to a source of water that contains a significant degree of chemical, biological or physical contaminants and causes discomfort or sickness when consumed or even exposed to. Known as "grey water", this type carries microorganisms and nutrients of microorganisms. Examples are toilet bowls with urine (no feces), sump pump failures, seepage due to hydrostatic failure, and water discharge from dishwashers or washing machines.
Category 3 Water - Known as "black water", this water is grossly unsanitary: it contains unsanitary agents, harmful bacteria and fungi, causing severe discomfort or sickness. Category 3 covers contaminated water sources that affect the indoor environment, including water from sewage, seawater, rising water from rivers or streams, storm surge, ground surface water or standing water. Category 2 water (grey water) that is not promptly removed from the structure or has remained stagnant may be reclassified as Category 3 water. Toilet backflows that originate from beyond the toilet trap are considered black water contamination regardless of visible content or color.
Classes: The class of water damage is determined by the probable rate of evaporation based on the type of materials affected, or wet, in the room or space that was flooded. Determining the class of water damage is an important first step, and will determine the amount and type of equipment utilized to dry down the structure.
Class 1 - Slow rate of evaporation. Affects only a portion of a room; materials have a low permeance/porosity, and minimal moisture is absorbed by the materials. (The IICRC S500 2016 update adds that Class 1 is indicated when less than 5% of the total square footage of a room (ceiling + walls + floor) is affected.)
Class 2 - Fast rate of evaporation. Water affects the entire room of carpet and cushion, and may have wicked up the walls, but not more than 24 inches. (The IICRC S500 2016 update adds that Class 2 is indicated when 5% to 40% of the total square footage of a room is affected.)
Class 3 - Fastest rate of evaporation. Water generally comes from overhead, affecting the entire area: walls, ceilings, insulation, carpet, cushion, etc. (The IICRC S500 2016 update adds that Class 3 is indicated when more than 40% of the total square footage of a room is affected.)
Class 4 - Specialty drying situations. Involves materials with a very low permeance/porosity, such as hardwood floors, concrete, crawlspaces, gypcrete, plaster, etc. Drying generally requires very low specific humidity to accomplish.
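The quoted IICRC S500 (2016) percentage thresholds lend themselves to a simple rule. The Python sketch below is a toy illustration of those thresholds only; real classification also weighs material porosity, which is why Class 4 is modeled here as a separate flag rather than a percentage.

def water_damage_class(affected_pct: float, specialty_materials: bool = False) -> int:
    """Classify per the quoted IICRC S500 (2016) percentage thresholds.

    affected_pct: affected share of the room's total square footage
    (ceiling + walls + floor), as a percentage.
    specialty_materials: very low-permeance materials (hardwood, concrete,
    plaster...) override the percentage rule and indicate Class 4.
    """
    if specialty_materials:
        return 4
    if affected_pct < 5:
        return 1
    if affected_pct <= 40:
        return 2
    return 3

print(water_damage_class(3))                             # 1
print(water_damage_class(25))                            # 2
print(water_damage_class(60))                            # 3
print(water_damage_class(60, specialty_materials=True))  # 4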
Restoration: Water damage restoration can be performed by property management teams, building maintenance personnel, or by the homeowners themselves; however, contacting a certified professional water damage restoration specialist is often regarded as the safest way to restore water-damaged property. Certified professional water damage restoration specialists utilize psychrometrics to monitor the drying process. Standards and regulation: While there are currently no government regulations in the United States dictating procedures, two certifying bodies, the Institute of Inspection Cleaning and Restoration Certification (IICRC) and the RIA, do recommend standards of care. The current IICRC standard is ANSI/IICRC S500-2021. It is the collaborative work of the IICRC, SCRT, IEI, IAQA, and NADCA. Restoration: Fire and water restoration companies are regulated by the appropriate state's Department of Consumer Affairs, usually the state contractors license board. In California, all fire and water restoration companies must register with the California Contractors State License Board. Presently, the California Contractors State License Board has no specific classification for "water and fire damage restoration." Procedures: Water damage restoration is often prefaced by a loss assessment and evaluation of affected materials. The damaged area is inspected with water-sensing equipment such as probes and other infrared tools in order to determine the source of the damage and the possible extent of the areas affected. Emergency mitigation services are the first order of business: controlling the source of water, removal of non-salvageable materials, water extraction and pre-cleaning of impacted materials are all part of the mitigation process. Restoration services would then be rendered to the property in order to dry the structure, stabilize building materials, sanitize any affected or cross-contaminated areas, and deodorize all affected areas and materials. After the labor is completed, water damage equipment including air movers, air scrubbers, dehumidifiers, wood floor drying systems, and sub-floor drying equipment is left in the residence. The goal of the drying process is to stabilize the moisture content of impacted materials below 15%, the generally accepted threshold for microbial amplification. Industry standards state that drying vendors should return at regular time intervals, preferably every twenty-four hours, to monitor the equipment, temperature, humidity, and moisture content of the affected walls and contents.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**History of Biology (video game)** History of Biology (video game): History of Biology is a browser-based scavenger hunt–style educational game that was created by Spongelab Interactive. It is designed to teach high school students and general interest groups about the history of biology. Details: The game's purpose is to teach about the discoveries and research of over 20 scientists as missions are completed. For instance, in the first mission, "The Art of Imitation", players are taught about the scientist Zacharias Janssen, who is credited with inventing the compound microscope. All the scientists highlighted in this game are: Antonie van Leeuwenhoek, Robert Hooke, Matthias Schleiden, Theodor Schwann, Rudolf Virchow, Carl Linnaeus, Jean-Baptiste Lamarck, Gregor Mendel, Charles Darwin, Alfred Russel Wallace, Johannes Friedrich Miescher, Oswald Avery, Colin MacLeod, Maclyn McCarty, Alfred Hershey, Martha Chase, James D. Watson, Francis Crick, Rosalind Franklin, Frederick Sanger, and Kary Mullis. Various scientific discoveries from the 15th century all the way to the 21st century are featured within this game. For example, cell theory is explored by analyzing letters and stamps. Completing this level requires researching terms such as metabolism, nerve cells, and pepsin in the game. Additionally, to introduce the mechanisms of diversity and the work of Charles Darwin, players explore maps, find GPS coordinates and read about Darwin's research on the evolution of finches. This allows players to use various methods to complete the same mission and move on to the next level. As players complete each mission, they are sent a victory email from one of the game characters. This email usually contains a teaser about what the next mission will entail. Details: The notepad feature in History of Biology, which is visible to teachers, allows them to answer questions or deal with specific content students may be interested in or struggling with, in class. As for the teacher, there is a detailed teacher's guide with a walk-through of each mission. A back-end administration area allows teachers to control which missions are available to students. In the news: Caron, Nathalie (2010-11-30). "Decoding Biology with Spongelab Interactive". gamefwd. Retrieved 2010-11-30. Buckler, Grant (2010-11-30). "Canadian startups find opportunity in educational games". itbusiness.ca. Retrieved 2010-11-30. "Go On A Scavenger Hunt in History of Biology from Spongelab". Village Gamer. 2010-08-23. Retrieved 2010-08-23. Wilson, Joseph (2010-07-28). ""No ma, this game is helping me learn" - Toronto's serious gaming companies have kids thinking". Yonge Street. Retrieved 2010-07-28.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RAB35** RAB35: Ras-related protein Rab-35 is a protein that in humans is encoded by the RAB35 gene. This GTPase participates in the traffic of recycling endosomes toward the plasma membrane.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intravascular lymphomas** Intravascular lymphomas: Intravascular lymphomas (IVL) are rare cancers in which malignant lymphocytes proliferate and accumulate within blood vessels. Almost all other types of lymphoma involve the proliferation and accumulation of malignant lymphocytes in lymph nodes, other parts of the lymphatic system (e.g. the spleen), and various non-lymphatic organs (e.g. bone marrow and liver), but not in blood vessels. IVL fall into three different forms based on the type of lymphocyte causing the disease. Intravascular large B-cell lymphoma (IVBCL), which constitutes ~90% of all IVL, is a lymphoma of malignant B-cell lymphocytes as classified by the World Health Organization, 2016. The remaining IVL types, which have not yet been formally classified by the World Health Organization, are defined based mainly on case reports; these IVL are 1) intravascular NK-cell lymphoma (IVNKL), in which the malignant cells are a type of lymphocyte termed natural killer cells (NK-cells), and 2) intravascular T-cell lymphoma (IVTL), in which the neoplastic cells are primarily, if not exclusively, a type of T-cell termed cytotoxic T-cells. Because of their similarities and extreme rarity, IVL caused by NK-cells and cytotoxic T-cells are often grouped together under the term intravascular NK/T-cell lymphomas (IVNK/TL). The malignant cells in IVNK/TL are typically infected with the Epstein–Barr virus, suggesting that these lymphomas are examples of the Epstein–Barr virus-associated lymphoproliferative diseases. Since infection with this virus is rarely seen in IVBCL, this form of IVL is not typically regarded as one of the Epstein–Barr virus-associated lymphoproliferative diseases. Intravascular large B-cell and intravascular NK/T-cell IVL are typically very aggressive lymphomas that afflict middle-aged and elderly adults. At the time of diagnosis, they accumulate within small-sized and medium-sized, but not large-sized, blood vessels of the skin, central nervous system, and, less frequently, virtually any other organ system. Unlike most lymphomas, however, they generally do not accumulate or infiltrate lymph nodes. All of the IVL are frequently associated with systemic B symptoms such as fever and weight loss, as well as symptoms related to the other organs in which they accumulate in blood vessels, constrict blood flow, and thereby cause severe damage due to infarction, i.e. damage due to the loss of blood flow. Historically, most cases of the intravascular lymphomas responded very poorly to the standard chemotherapy regimens used to treat other types of B-cell lymphoma. With few exceptions, these intravascular lymphomas progressed very rapidly. More recently, however, the addition to these chemotherapy regimens of the immunotherapy agent rituximab, which acts to kill B-cells, has greatly improved their effectiveness and thereby the prognosis of the most common form of these diseases, the intravascular B-cell lymphomas. Unfortunately, no such agent directed against NK-cells or cytotoxic T-cells has yet been reported to be useful in treating the other two types of intravascular lymphoma. History: In 1959, Pfleger and Tappeiner first reported on a cancer in which malignant cells grew uncontrollably within the lumen of blood vessels; the authors suggested that these malignant cells were derived from the endothelial cells lining the vasculature and therefore termed the disorder angioendotheliomatosis proliferans systemisata.
Subsequent studies reported in 1982, 1985, and 1986 led to the conclusion that these malignant cells were derived from lymphocytes rather than endothelial cells. These along with other studies termed the disease angioendotheliomatosis, neoplastic angiotheliomatosis, intravascular lymphomatosis, angioendotheliotropic (intravascular) lymphoma, angiotropic large-cell lymphoma, diffuse large-cell lymphoma, intralymphatic lymphomatosis, and, less specifically, malignant angioendotheliomatosis or intravascular lymphoma. By 2001, the World Health Organization had defined the disease as a malignant B-cell lymphoma termed intravascular large B-cell lymphoma. Santucci et al. first reported a case of IVL that involved malignant NK cells. By 2018, some two dozen other cases of intravascular NK-cell lymphoma had been reported. In 2008, 29 case reports of purported intravascular T-cell lymphoma were reviewed; only two of these cases were associated with evidence strongly suggesting that the malignant cells were cytotoxic T-cells. Subsequently, a few more cases of cytotoxic T-cell-based IVL have been reported. There remains a possibility that future studies will find that other T-cell types may cause IVTL. Intravascular large B-cell lymphoma: Intravascular large B-cell lymphomas fall into three distinct variants, all of which involve the intravascular accumulation of malignant B-cells and appear to have a similar pathophysiology. However, they differ in the distribution of their lesions, types of populations affected, prognoses, and treatments. These three variants are: 1) intravascular large B-cell lymphoma, classical variant, 2) intravascular large B-cell lymphoma, cutaneous variant, and 3) intravascular large B-cell lymphoma, hemophagocytic syndrome-associated variant. The following sections give the common pathophysiology of the three variants and then describe the lesions, populations affected, prognoses, and treatments of each variant in separate sections. Intravascular large B-cell lymphoma: Pathophysiology of the intravascular B-cell lymphomas The gene, chromosome, and gene expression abnormalities in IVBCL have not been fully evaluated. Studies to date indicate that the malignant cells in this disease have mutations in their MYD88 (44% of cases) and CD79B (26% of cases) genes. The exact mutation seen in MYD88 (i.e. L265P) and some of the mutations in CD79B occur in diverse types of lymphoma. Other abnormalities seen in the small numbers of cases that have been studied so far include translocations between chromosomes 14 and 18, and tandem triplications of both the BCL2 gene, located on the long arm of chromosome 18 at position q21, and the KMT2A gene, located on the long arm of chromosome 11 between positions 22 and 25. The product protein of BCL2, viz. Bcl-2, regulates cell survival and apoptosis (i.e. programmed cell death) and the product protein of KMT2A, viz. MLL, regulates cell maturation. Abnormalities in BCL2 and KMT2A are associated with other types of B-cell lymphomas. It seems likely that these or other gene, chromosome, and/or gene expression abnormalities contribute to the development and/or progression of IVBCL. The malignant B-cells in IVBCL fail to express the CD29 protein, while the endothelial cells in close proximity to the intravascular accumulations of the malignant B-cells fail to express key chemokine and chemokine receptor proteins, particularly CXCL12 but also CXCR5, CCR6, and/or CCR7.
The failure of the endothelial cells to express these receptor proteins may be due to the action of nearby malignant B-cells. In any event, all of the cited proteins are involved in the movement of B-cells from the intravascular space across the vascular endothelium and into tissues. The lack of these proteins may explain the accumulation of the malignant B-cells of IVLBC within blood vessels.In about 80% of cases, the malignant B-cells in IVBCL are "non-germinal center B-cells" as defined by the Hans algorithm rather than the "germinal center B-cells" that are commonly found in less aggressive B-cell lymphomas. This factor may contribute to the aggressiveness of IVBCL. Intravascular large B-cell lymphoma: Intravascular large B-cell lymphoma, classical variant Presentation Individuals presenting with the classical variant of IVLBL are typically middle-aged or elderly (39–90 years) that have one or more of the following: systemic symptoms, particularly fever (45% of cases); cutaneous lesions (40%); central nervous system disorders (35%); and clinical and laboratory abnormalities involving the bone marrow (~18%), lung (~6%), and, rarely, endocrine glands (e.g. pituitary, thyroid, adrenal gland), liver, prostate, uterus, eye, intestine, and in individual cases almost any other organ or tissue. These findings are based primarily on studies of 740 patients conducted in Europe; a study conducted in Quebec, Canada on 29 patients gave similar results. Individuals may present with one, two, or more of these abnormalities. Systemic symptoms include not only the most commonly seen one viz., fever, but also malaise, weight loss, and other B symptoms; the cutaneous lesions include singular or multiple plaques, nodules, tumors, and ulcerations, some of which may be painful and most of which are located on the breast, lower abdomen, and/or extremities. Central nervous system defects include sensory and/or motor neuropathy, spinal nerve root pain, paresthesia, hypoesthesia, aphasia, dysarthria, hemiparesis, seizures, myoclonus, transient visual loss, vertigo, altered conscious states, and, particularly in relapsed disease, neurolymphomatosis (i.e. direct invasion of one or more nerves in the peripheral nervous system by the malignant B-cells). Laboratory studies generally show non-specific abnormalities: elevated levels of serum lactate dehydrogenase and soluble IL2RA; anemia, decreases in blood platelet levels, and decreases in white blood cell levels in 25%->50% of cases. Circulating malignant B-cells are not found in 90-95% of cases and laboratory evidence of organ injury is found in those cases involving these organs. Intravascular large B-cell lymphoma: Diagnosis The diagnosis of IVBCL is heavily dependent upon obtaining biopsy specimens from involved tissues, particularly the skin but in cases without skin lesions, other apparently involved tissues. Microscopic examination of these tissues typically shows medium-sized to large-sized lymphocytes located within small- to medium-sized blood vessels of the skin, lung, and other tissues or the sinusoids of the liver, bone marrow, and spleen. On occasion, these malignant cells have the appearance of Reed-Sternberg cells. The lesions should show no or very little extension outside of blood vessels. 
As determined by immunohistochemistry analyses, the intravascular malignant lymphocytes express typical B-cell proteins, particularly CD20, which is found in almost all cases, CD79a and Pax5, which are found in most cases, and MUM1 and Bcl-2, which are found in 95% and 91% of cases, respectively. These B-cells are usually (80% of cases) non-germinal center B-cells (see Pathophysiology section) and may express one or more of the gene, chromosome, and gene expression abnormalities described in the above Pathophysiology section. Since the classical variant can present with a wide range of clinical signs, symptoms, and organ involvements, its presence may not be apparent, particularly in cases that do not exhibit clinically apparent skin lesions. Accordingly, random skin biopsies have been used to obtain evidence of IVL in cases that have signs and/or symptoms of the disease restricted to non-cutaneous sites, even in cases that present with no other finding except unexplained fever. The diagnosis of IVBCL, classical variant, is solidified by finding these pathological features in more than one site. Intravascular large B-cell lymphoma: Treatment and prognosis At diagnosis, IVBCL must be regarded as an aggressive and disseminated malignancy requiring systemic chemotherapy. In the absence of long-term or short-term controlled clinical trials on treating this lymphoma, individuals with IVBCL have been treated with the standard regimen used to treat diffuse large B-cell lymphomas, viz. the CHOP chemotherapy regimen, which consists of cyclophosphamide, hydroxydaunorubicin (also termed doxorubicin or adriamycin), oncovin (also termed vincristine), and a corticosteroid (i.e. either prednisone or prednisolone), plus the monoclonal antibody immunotherapy agent, Rituximab. This immunochemotherapeutic regimen has achieved an overall survival rate at 3 years of 81%; the overall survival rate using CHOP before Rituximab was added to the regimen was only 33%. However, highly toxic reactions to Rituximab such as pulmonary failure may occur and require delaying or interrupting the use of this drug. High-dose chemotherapy regimens followed by autologous stem-cell transplantation have offered clinical improvement similar to that found with CHOP plus Rituximab. However, only a small percentage of patients with IVBCL are young and healthy enough to receive this regimen. Intravenous methotrexate may be a useful addition to the rituximab-CHOP regimen in individuals with central nervous system involvement. Intravascular large B-cell lymphoma: Intravascular large B-cell lymphoma, cutaneous variant Presentation The cutaneous variant, which comprises a small percentage of all IVBCL cases, occurs almost exclusively in females and in individuals who are younger (median age 59 years) than those with the classical variant (median age 72 years). Individuals present with lesions that are exclusively or largely confined to the skin. The clinical features of these lesions are similar to those described in the section on Presentation of the classical variant. Individuals with the cutaneous variant may have systemic symptoms, but these occur less frequently (30% of cases) than in the classical variant (45% of cases). In general, cutaneous variant patients are in much better physical shape than those with other forms of IVBCL and have a better long-term prognosis.
Intravascular large B-cell lymphoma: Diagnosis The diagnosis of IVL, cutaneous variant, depends on finding the pathological picture in the skin as described for the classical variant, except that the lesions occur exclusively or predominantly in the skin. Ideally, these pathological findings should be found in more than one skin site. However, cutaneous involvement is frequently detected in a single site such as the hypervascular lesions of cherry hemangiomas and angiolipomas. Intravascular large B-cell lymphoma: Treatment and prognosis Historically, individuals with the cutaneous variant survive significantly longer than those with the classical variant (3-year overall survival 56% versus 22%). Early intervention in the cutaneous variant would appear to be highly desirable. Virtually all reports on the treatment of the cutaneous variant were made before Rituximab was used to treat IVL. Historically, patients with localized disease obtained prolonged remission with conventional CHOP therapy. However, individuals with single cutaneous lesions were long-term survivors: when treated with just radiation therapy or surgical removal, these single-lesion patients had prolonged remissions both at initial diagnosis and after relapse. In contrast, patients with multiple lesions had a far worse outcome after treatment with CHOP: they had an objective response in 86% of cases, but nonetheless the majority relapsed within a year of treatment, with only a few being successfully managed with salvage chemotherapy. Rituximab may improve the latter situation. Intravascular large B-cell lymphoma: Intravascular large B-cell lymphoma, hemophagocytic syndrome-associated variant Presentation The hemophagocytic syndrome-associated variant of IVBCL is a very rare variant of IVBCL. Its previous name, intravascular large B-cell lymphoma, Asian variant, was recently changed to its current name by the World Health Organization, 2016. Unlike the classical and cutaneous variants, the hemophagocytic syndrome-associated variant presents with the hemophagocytic syndrome. This syndrome is characterized by bone marrow involvement, reduced numbers of circulating blood platelets as well as reduced levels of other circulating blood cells, and an enlarged liver and spleen. Less often, it is also associated with overt hemophagocytosis (i.e. the engulfment by non-malignant histiocytes of red blood cells, white blood cells, platelets, and their precursor cells), which is most commonly found in the bone marrow and other tissues. The syndrome often reflects excessive secretion of inflammatory cytokines and severe systemic inflammation similar to that seen in the cytokine storm syndrome. In general, individuals present with a rapidly progressive disease (median time from onset to diagnosis ~4 weeks, range 2–12 weeks). Patients are often extremely ill and experience multiple organ failures. Intravascular large B-cell lymphoma: Diagnosis The diagnosis of the intravascular large B-cell lymphoma, hemophagocytic syndrome-associated variant depends on the individual presenting with clinical and laboratory findings compatible with the hemophagocytic syndrome (see previous section) and on the histology of biopsied tissues of the bone marrow, spleen, liver, brain, or other organs that clinical and/or laboratory findings suggest are involved in the disease. Its histology is described in the Diagnosis section of the classical variant but also includes the presence of hemophagocytosis, i.e.
the engulfment of red blood cells and/or other mature and immature blood cells. Hemophagocytosis can also be found in sites removed from the intravascular lesions, such as the cerebrospinal fluid in patients with central nervous system involvement. Intravascular large B-cell lymphoma: Treatment and prognosis Prior to the use of rituximab, individuals with this variant generally followed a rapidly (i.e. weeks to months) fatal course even when treated with the CHOP regimen. However, the addition of rituximab to the CHOP regimen appears to have improved the treatment of this disease. Intravenous methotrexate may be a useful addition to the rituximab-CHOP regimen in individuals with central nervous system involvement. Intravascular NK/T cell lymphomas: Pathophysiology Three studies have examined gene mutations and gene expression abnormalities in IVNK/TL. A retrospective study of 25 patients identified numerous gene abnormalities, including tumor-specific splicing alterations in oncogenes and tumor suppressor genes such as HRAS, MDM2, and VEGFA, as well as premature termination mutations or copy number losses in a total of 15 splicing-regulator genes such as SF3B5 and TNPO3. A study of two patients with IVNKL identified mutations in genes that produce histone proteins (HIST1H2BE, HIST1H2BN and H3F3A), the histone deacetylase gene HDAC5, two genes that produce helicase proteins (WRN and DDX3X), two genes that make DNA methylation-related enzymes (TET2 and DNMT1), and a gene in the SWI/SNF family of chromatin remodeling genes, ARID1A. In a third study of a single patient, copy number analysis identified driver gene alterations in the ARID1B, HACE1, and SMAD4 genes and gain of the SOX2 gene. While further studies are needed before conclusions can be made, one or more of these gene abnormalities may contribute to the development and/or progression of IVNK/TL. The malignant NK and T cells that accumulate within the vasculature of individuals with IVNK/TL are usually infected with the Epstein–Barr virus (EBV). This suggests that most IVNK/TL cases are examples of the Epstein-Barr virus-associated lymphoproliferative diseases and, like these diseases, are EBV-driven. About 95% of the world's population is infected with EBV. During the initial infection, the virus may cause infectious mononucleosis, only minor non-specific symptoms, or no symptoms. Regardless of this, the virus enters a latency phase and the infected individual becomes a lifetime asymptomatic carrier of EBV. Weeks, months, years, or decades thereafter, a small percentage of these carriers develop an EBV-associated lymphoproliferative disease, including, in extremely rare cases, IVNK/TL. EBV is well known to infect NK- and T-cells, to express some of its genes that promote the survival and proliferation of the cells it infects, and thereby to cause various and far more common NK- and T-cell lymphomas. It seems likely that the virus acts similarly to cause IVNK/TL. IVNK/TL may differ from the other types of NK- and T-cell lymphomas which EBV produces because its NK- and T-cells and nearby endothelial cells have defects in the expression of proteins required for the NK/T-cells to pass through the endothelium and into the surrounding tissues (see the above section on the pathophysiology of IVBCL). Intravascular NK/T cell lymphomas: Presentation Individuals (age range 23–81 years) with IVNK/TL typically have a rapidly progressive disease.
They commonly present with skin lesions, less commonly symptoms due to central nervous system involvement, and in a minority of cases symptoms due to the involvement of the bone marrow, liver, kidneys, ovaries, and/or cervix. They often show signs of a disseminated disease such as fever, weight loss, night sweats, arthralgias, jaundice, decreased numbers of circulating red blood cells, white blood cells, and/or platelets, bone marrow involvement as determined by biopsy, and signs/symptoms of multiple organ involvement. Intravascular NK/T cell lymphomas: Diagnosis The diagnosis of IVNK/TL depends upon obtaining histology findings in the skin and/or other involved tissue that resemble those seen in IVBCL, except that the malignant lymphocytes are not B-cells but rather: 1) NK-cells, as evidenced by their expression of NK-cell selective marker proteins (e.g. CD3e, CD2, CD30, CD43, CD56, and/or CD79), expression of granule-bound enzymes (e.g. granzyme B), and expression of EBV markers (e.g. Epstein–Barr virus latent membrane protein 1 and EBV-produced small RNAs), but not the expression of B-cell (e.g. CD20, CD79a, and Pax5) or cytotoxic T cell marker proteins; and 2) cytotoxic T-cells, as evidenced by the neoplastic cells' expression of T-cell co-receptor proteins (e.g. CD3, CD4, and/or CD8) as well as EBV marker proteins and/or small RNAs, but usually not B-cell or NK-cell marker proteins. Intravascular NK/T cell lymphomas: Treatment and Prognosis Patients with IVNK/TL have been treated with various chemotherapy regimens, particularly CHOP or, less commonly, hyperCVAD. Rare patients have been treated with chemotherapy followed by hematopoietic stem cell transplantation or chemotherapy plus a proteasome inhibitor, bortezomib. In general, patients have responded poorly to treatment and have had short (i.e. up to 12 months) survival times regardless of the chemotherapy regimen used. Rituximab does not target NK- or T-cells and therefore is not used to treat IVNK/TL.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Node (UML)** Node (UML): A node in the Unified Modeling Language (UML) is a computational resource upon which UML artifacts may be deployed for execution. There are two types of nodes: device nodes and execution environments. A device represents hardware devices: a physical computational resource with processing capability upon which UML artifacts may be deployed for execution. Devices may be complex (i.e., they may consist of other devices). Node (UML): An execution environment represents software containers (such as operating systems, JVM, servlet/EJB containers, application servers, portal servers, etc.) This is a node that offers an execution environment for specific types of components that are deployed on it in the form of deployable artifacts.Execution environments can be nested. Nodes can be interconnected through communication paths to define network structures. A communication path is an "association between two DeploymentTargets, through which they are able to exchange signals and messages". Usage: When modeling devices, it is possible to model them in several different ways: Name a device using the type and make, for instance "IBM RS6000", "HP 9000". Name a device using its intended function, for instance "Database Server", "High Speed Switch" Name a device using the operating system deployed on it, for instance "Linux Server", "Solaris Server".Use tagged values to specify characteristics of devices / execution environments, for instance "Memory=2GB", "Disk Space=32GB", "Version=2.5.1".
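The modeling guidance above can be made concrete with a small, hypothetical sketch that is not part of the UML specification itself: plain Python data classes standing in for a device node, a nested execution environment, tagged values, and a communication path between two deployment targets. All names and values are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Minimal, illustrative model of UML deployment concepts described above:
# nodes (devices and execution environments), nesting, tagged values,
# and communication paths between deployment targets.

@dataclass
class Node:
    name: str
    stereotype: str = "device"  # "device" or "executionEnvironment"
    tagged_values: Dict[str, str] = field(default_factory=dict)
    nested: List["Node"] = field(default_factory=list)  # nodes may be nested

@dataclass
class CommunicationPath:
    """Association between two deployment targets that can exchange signals/messages."""
    a: Node
    b: Node
    protocol: str = ""

# A device named by its intended function, with tagged values for its characteristics
db_server = Node("Database Server",
                 tagged_values={"Memory": "2GB", "Disk Space": "32GB"})

# An execution environment nested inside the device
jvm = Node("JVM", stereotype="executionEnvironment",
           tagged_values={"Version": "2.5.1"})
db_server.nested.append(jvm)

web_server = Node("HP 9000")  # a device named by type and make
lan = CommunicationPath(web_server, db_server, protocol="TCP/IP")

print(f"{lan.a.name} <-> {lan.b.name} over {lan.protocol}")
```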
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Clinical pharmaceutical chemistry** Clinical pharmaceutical chemistry: Clinical pharmaceutical chemistry is a specialty branch of the chemical sciences, consisting of medicinal chemistry with additional training in clinical aspects of translational sciences and medicine. Typically this involves principal training similar to that in general medicine, where inspection of and interaction with patients are a vital part of the training. Typically, students in clinical pharmaceutical chemistry use the same curriculum as medical students, but specialize in medicinal and organic chemistry during and after the theoretical/early clinical studies. In clinical pharmaceutical chemistry the aim is to understand the biological transformations and processes associated with chemical entities inside the human body, and how those processes can be influenced by changes in chemical structure. A further aim of clinical pharmaceutical chemistry is to manage and manipulate the clinical effects of different chemical structures, as well as to manage phenomena recognized in first-in-human studies. Typically, clinical pharmaceutical chemistry has an important role in the discovery, design, and manipulation of new drug entities, and is especially vital in early clinical studies (such as Phase I studies).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fire blanket** Fire blanket: A fire blanket is a safety device designed to extinguish incipient (starting) fires. It consists of a sheet of a fire retardant material that is placed over a fire in order to smother it. Fire blanket: Small fire blankets, such as those for use in kitchens and around the home, are usually made of glass fiber and sometimes kevlar, and are folded into a quick-release contraption for ease of storage. Larger fire blankets, for use in laboratory and industrial situations, are often made of wool – sometimes treated with a flame retardant chemical such as hexafluorozirconate and zirconium acetate. These blankets are usually mounted in vertical quick-release containers so that they can be easily pulled out and wrapped round a person whose clothes are on fire. Fire blanket: Fire blankets, along with fire extinguishers, are fire safety items that can be useful in case of a fire. These nonflammable materials are stable at temperatures up to 1300 °C for Nextel ceramic fibres, 1200 °C for glass fibers, 480 °C for Kevlar, and 570 °C for wool. They are useful in smothering fires by reducing the amount of oxygen available to the fire. Due to its simplicity, a fire blanket may be more helpful for someone who is inexperienced with fire extinguishers. Dangers: Asbestos in old blankets Some older fire blankets were made of woven asbestos fibres and are not NFPA rated. This can pose a hazard during the decommissioning of old equipment. Dangers: Extinguishing oil/fat fires After initial investigation in 2013, and later in 2014, the Netherlands Food and Consumer Product Safety Authority issued a statement that fire blankets should never be used to extinguish an oil/fat fire such as a chip pan fire, even if the icons or text on the blanket indicates the blanket may be used in such a case. This includes fire blankets which have been tested according to BS EN 1869. In the investigation, 16 of the 22 tested fire blankets themselves caught fire. In the other 6, the fire reignited when the blanket was removed after 17 minutes. The Dutch Fire Burn foundation reported several accidents involving the use of fire blankets when extinguishing oil/fat fires. Consumers may send in their existing fire blankets, which will then receive a sticker stating 'niet geschikt voor olie- en vetbranden' ("not suitable for oil and grease fires"). New products will have this text printed, rather than stickered. Operation: For a fire to burn, all three elements of the fire triangle must be present: heat, fuel and oxygen. The fire blanket is used to cut off the oxygen supply to the fire, thereby putting it out. The fire blanket must be sealed closely to a solid surface around the fire. Fire blankets usually have two pull-down tails visible from outside the packaging. The user should place one hand on each tail and pull down simultaneously, removing the blanket from the bag. The tails are located near the top of the fire blanket, which allows the top lip of the fire blanket to fold back over the user's hands, protecting them from heat and direct contact burns. Covering the fire with the blanket helps cut off the oxygen supply and extinguish the fire. The same method can be used when a part of a person's body catches fire. Operation: Electric vehicle fires EV fires can be extremely difficult to extinguish as lithium batteries can self-reignite.
"Up to 150 000 liters of water needed to put out a fire in an electric car ...Teslas may take up to 30,000-40,000 gallons of water, maybe even more, to extinguish the battery pack once it starts burning..." However, a typical larger fire truck carries only a few thousand liters of water. Operation: A fire blanket is so large that a burning vehicle can be completely covered with it (typical size is e.g. 6 m x 9 m to cover large SUVs) - and are extremely heat-resistant (1000 to 1600+ degrees). Also one has to consider the difference between allowed short-term peak temperature and long-term temperature.By putting on a fire blanket, the flames are supposed to be smothered. In a fire test with the fire brigade, experts from the ADAC,the General German Automobile Club, were able to see how the fire blanket actually significantly delays the development of the fire and thus increases the fire brigade's scope for action. Operation: The use of the fire blanket can prevent the fire from spreading to adjacent vehicles or surrounding objects - which is of great importance in an underground car park, for example. In addition, the removal of an electric vehicle that has been involved in an accident or has been extinguished can be secured with a fire blanket. Another field of application of the blanket is the quarantine of crashed electric cars at an accident site of towing companies or workshops. Maintenance: The Fire Industry Association (FIA) publish a "Code of Practice for the Commissioning and Maintenance of Fire Blankets Manufactured to BS EN 1869". The FIA's code of practice recommends that the responsible person ensures that such fire blankets are subject to annual maintenance by a competent service provider. It also recommends that consideration should be given to the replacement of fire blankets after seven years from the date of commissioning (or as otherwise specified by the fire blanket's manufacturer).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Current meter** Current meter: A current meter is an oceanographic device for flow measurement by mechanical, tilt, acoustical or electrical means. Different reference frames: In physics, one distinguishes different reference frames depending on where the observer is located, this is the basics for the Lagrangian and Eulerian specification of the flow field in fluid dynamics: The observer can be either in the Moving frame (as for a Lagrangian drifter) or in a resting frame. Lagrangian current meters measure the displacement of an oceanographic drifter, an unmoored buoy or a non-anchored ship's actual position to the position predicted by dead reckoning. Different reference frames: Eulerian current meters measure current passing a resting current meter. Types: Mechanical Mechanical current meters are mostly based on counting the rotations of a propeller and are thus rotor current meters. A mid-20th-century realization is the Ekman current meter which drops balls into a container to count the number of rotations. The Roberts radio current meter is a device mounted on a moored buoy and transmits its findings via radio to a servicing vessel. Savonius current meters rotate around a vertical axis in order to minimize error introduced by vertical motion. Types: Acoustic There are two basic types of acoustic current meters: Doppler and Travel Time. Both methods use a ceramic transducer to emit a sound into the water. Types: Doppler instruments are more common. An instrument of this type is the Acoustic Doppler current profiler (ADCP), which measures the water current velocities over a depth range using the Doppler effect of sound waves scattered back from particles within the water column. The ADCPs use the traveling time of the sound to determine the position of the moving particles. Single-point devices use again the Doppler shift, but ignoring the traveling times. Such a single-point Doppler Current Sensor (DCS) has a typical velocity range of 0 to 300 cm/s. The devices are usually equipped with additional optional sensors. Types: Travel time instruments determine water velocity by at least two acoustic signals, one up stream and one down stream. By precisely measuring the time to travel from the emitter to the receiver, in both directions, the average water speed can be determined between the two points. By using multiple paths, the water velocity can be determined in three dimensions. Travel time meters are generally more accurate than Doppler meters, but only record the velocity between the transducers. Doppler meters have the advantage that they can determine the water velocity at a considerable range, and in the case of an ADCP, at multiple ranges. Types: Electromagnetic induction This novel approach is for instance employed in the Florida Strait where electromagnetic induction in submerged telephone cable is used to estimate the through-flow through the gateway and the complete setup can be seen as one huge current meter. The physics behind: Charged particles (the ions in seawater) are moving with the ocean currents in the magnetic field of the Earth which is perpendicular to the movement. Using Faraday's law of induction (the third of Maxwell's equations), it is possible to evaluate the variability of the averaged horizontal flow by measuring the induced electric currents. The method has a minor vertical weighting effect due to small conductivity changes at different depths. 
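As a worked illustration of the acoustic travel-time principle described above, the sketch below (hypothetical values, not tied to any particular instrument) computes the average water speed along the acoustic path from the upstream and downstream travel times; conveniently, the speed of sound cancels out of the formula.

```python
# Sketch of the travel-time principle described above (hypothetical values).
# With path length L between the transducers, the downstream pulse travels at (c + v)
# and the upstream pulse at (c - v), so
#   t_down = L / (c + v),   t_up = L / (c - v)
# and the water speed along the path follows without knowing the sound speed c:
#   v = (L / 2) * (1 / t_down - 1 / t_up)

def travel_time_velocity(path_length_m: float, t_down_s: float, t_up_s: float) -> float:
    """Average water speed (m/s) along the acoustic path from the two travel times."""
    return 0.5 * path_length_m * (1.0 / t_down_s - 1.0 / t_up_s)

# Example: 0.5 m path, sound speed ~1500 m/s, true current 0.30 m/s
L, c, v_true = 0.5, 1500.0, 0.30
t_down = L / (c + v_true)
t_up = L / (c - v_true)
print(round(travel_time_velocity(L, t_down, t_up), 3))  # ~0.3
```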
Types: Tilt Tilt current meters operate under the drag-tilt principle and are designed to either float or sink depending on the type. A floating tilt current meter typically consists of a sub-surface buoyant housing that is anchored to the sea floor with a flexible line or tether. A sinking tilt current is similar, but the housing is designed such that the meter hangs from the attachment point. In either case, the housing tilts as a function of its shape, buoyancy (negative or positive) and the water velocity. Once the characteristics of a housing is known, the velocity can be determined by measuring the angle of the housing and direction of tilt. The housing contains a data logger that records the orientation (angle from vertical and compass bearing) of the Tilt Current Meter. Floating tilt current meters are typically deployed on the bottom with a lead or concrete anchor but may be deployed on lobster traps or other convenient anchors of opportunity. Sinking tilt current meters may be attached to an oceanographic mooring, floating dock or fish pen. Tilt current meters have the advantage over other methods of measuring current in that they are generally relatively low-cost instruments and the design and operation is relatively simple. The low-cost of the instrument may allow researchers to use the meters in greater numbers (thereby increasing spatial density) and/or in locations where there is a risk of instrument loss. Depth correction: Current meters are usually deployed within an oceanographic mooring consisting of an anchor weight on the ground, a mooring line with the instrument(s) connected to it and a floating device to keep the mooring line more or less vertical. Like a kite in the wind, the actual shape of the mooring line will not be completely straight, but following a so-called (half-)catenary. Depth correction: Under the influence of water currents (and wind if the top buoy is above the sea surface) the shape of the mooring line can be determined and by this the actual depth of the instruments. If the currents are strong (above 0.1 m/s) and the mooring lines are long (more than 1 km), the instrument position may vary up to 50 m.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OR2A12** OR2A12: Olfactory receptor 2A12 is a protein that in humans is encoded by the OR2A12 gene.Olfactory receptors interact with odorant molecules in the nose, to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aurora's Whole Realms Catalog** Aurora's Whole Realms Catalog: Aurora's Whole Realms Catalog is an accessory for the Dungeons & Dragons fantasy role-playing game, published for the Forgotten Realms setting. Contents: Aurora's Whole Realms Catalogue focuses on whimsical items, offering an eclectic inventory that includes bard's instruments, household items, and 14 types of cheese. The designers modeled the book after an actual turn-of-the-century mail-order catalog, meaning that exaggerated salesmanship accompanies each product description. Publication history: Aurora's Whole Realms Catalogue was written by Anne Brown and J. Robert King, and published by TSR, Inc. Reception: Rick Swan reviewed Aurora's Whole Realms Catalogue for Dragon magazine #192 (April 1993). He calls the catalogue's subject, Aurora, "the fictional proprietress of a medieval Wal-Mart", and points out the "death cheese" as an interesting exotic item, "a rich, delicate addition to the dining table, exotic both in its taste and the method by which it is acquired", in that it is made from catoblepas milk. Swan concludes this review by saying: "Better suited for browsers than hardcore gamers, Aurora's Whole Realms Catalogue is among the least essential of the equipment guides, but it's one of the most entertaining." Reviews: Dragão Brasil (Issue 41 - Aug 1998) (Portuguese)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**8-simplex** 8-simplex: In geometry, an 8-simplex is a self-dual regular 8-polytope. It has 9 vertices, 36 edges, 84 triangle faces, 126 tetrahedral cells, 126 5-cell 4-faces, 84 5-simplex 5-faces, 36 6-simplex 6-faces, and 9 7-simplex 7-faces. Its dihedral angle is cos−1(1/8), or approximately 82.82°. It can also be called an enneazetton, or ennea-8-tope, as a 9-facetted polytope in eight-dimensions. The name enneazetton is derived from ennea for nine facets in Greek and -zetta for having seven-dimensional facets, and -on. As a configuration: This configuration matrix represents the 8-simplex. The rows and columns correspond to vertices, edges, faces, cells, 4-faces, 5-faces, 6-faces and 7-faces. The diagonal numbers say how many of each element occur in the whole 8-simplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element. This self-dual simplex's matrix is identical to its 180 degree rotation. As a configuration:
[ 9  8 28 56 70 56 28  8]
[ 2 36  7 21 35 35 21  7]
[ 3  3 84  6 15 20 15  6]
[ 4  6  4 126 5 10 10  5]
[ 5 10 10  5 126 4  6  4]
[ 6 15 20 15  6 84  3  3]
[ 7 21 35 35 21  7 36  2]
[ 8 28 56 70 56 28  8  9]
Coordinates: The Cartesian coordinates of the vertices of an origin-centered regular enneazetton having edge length 2 are:
(1/6, 1/√28, 1/√21, 1/√15, 1/√10, 1/√6, 1/√3, ±1)
(1/6, 1/√28, 1/√21, 1/√15, 1/√10, 1/√6, −2/√3, 0)
(1/6, 1/√28, 1/√21, 1/√15, 1/√10, −√(3/2), 0, 0)
(1/6, 1/√28, 1/√21, 1/√15, −2√(2/5), 0, 0, 0)
(1/6, 1/√28, 1/√21, −√(5/3), 0, 0, 0, 0)
(1/6, 1/√28, −√(12/7), 0, 0, 0, 0, 0)
(1/6, −√(7/4), 0, 0, 0, 0, 0, 0)
(−4/3, 0, 0, 0, 0, 0, 0, 0)
More simply, the vertices of the 8-simplex can be positioned in 9-space as permutations of (0,0,0,0,0,0,0,0,1). This construction is based on facets of the 9-orthoplex. Coordinates: Another origin-centered construction uses (1,1,1,1,1,1,1,1)/3 and permutations of (1,1,1,1,1,1,1,-11)/12 for edge length √2. Related polytopes and honeycombs: This polytope is a facet in the uniform tessellations 251 and 521 (Coxeter-Dynkin diagrams not reproduced here). This polytope is one of 135 uniform 8-polytopes with A8 symmetry.
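The 9-space construction described above can be checked with a short sketch (illustrative only): the nine permutations of (0,0,0,0,0,0,0,0,1) are mutually √2 apart, so every pair of vertices forms an edge, and the number of k-faces is the binomial coefficient C(9, k+1).

```python
# Sketch verifying the 9-space construction of the 8-simplex given above:
# the 9 vertices are the permutations of (0,0,0,0,0,0,0,0,1), every pair of
# vertices is an edge of length sqrt(2), and the number of k-faces is C(9, k+1).

from itertools import combinations
from math import comb, dist, isclose

vertices = [tuple(1 if i == j else 0 for i in range(9)) for j in range(9)]

assert len(vertices) == 9
assert all(isclose(dist(a, b), 2 ** 0.5) for a, b in combinations(vertices, 2))

# k-face counts: any k+1 of the 9 vertices span a k-face
counts = [comb(9, k + 1) for k in range(8)]
print(counts)  # [9, 36, 84, 126, 126, 84, 36, 9] -> vertices, edges, ..., 7-faces
```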
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Utility system** Utility system: In video game AI, a utility system, or utility AI, is a simple but effective way to model behaviors for non-player characters. Using numbers, formulas, and scores to rate the relative benefit of possible actions, one can assign utilities to each action. A behavior can then be selected based on which one scores the highest "utility" or by using those scores to seed the probability distribution for a weighted random selection. The result is that the character is selecting the "best" behavior for the given situation at the moment based on how those behaviors are defined mathematically. Key concepts: The concept of utility has been around for centuries – primarily in mathematically dependent areas such as economics. However, it has also been used in psychology, sociology, and even biology. Because of this background and the inherent nature of needing to convert things to math for computer programming, it was something that came naturally as a way of designing and expressing behaviors for game characters. Key concepts: Naturally, different AI architectures have their various pros and cons. One of the benefits of utility AI is that it is less "hand-authored" than many other types of game AI architectures. While behaviors in a utility system are often created individually (and by hand), the interactions and priorities between them are not inherently specified. For example, behavior trees (BTs) require the designer to specify priorities in sequence to check if something should be done. Only if that behavior (or tree branch) is NOT executed will the behavior tree system fall through to check the next one. Key concepts: By comparison, behaviors in many utility systems sort themselves out by priority based on the scores generated by any mathematical modeling that defines every given behavior. Because of this, the developer isn't required to determine exactly where the new behavior "fits" in the overall scheme of what could be thousands of behavior "nodes" in a BT. Instead, the focus is on simply defining the specific reasons why the single behavior in question would be beneficial (i.e. its "utility"). The decision system then scores each behavior according to what is happening in the world at that moment and selects the best one. While some care must be taken to ensure that standards are being followed so that all behavior scoring is using the same or similar premises, the "heavy lifting" of determining how to process tens – or even hundreds – of different behaviors is offloaded from the designer and put into the execution of the system itself. Background: Early use Numbers and formulas and scores have been used for decades in games to define behavior. Even something as simple as a defining a set percentage chance for something to happen (e.g. 12% chance to perform Action X) was an early step into utility AI. Only in the early 21st century, however, has that method started to take on more of a formalized approach now referred to commonly as "utility AI". Background: Mathematical modeling of behavior In The Sims (2000) an NPCs current "need" for something (e.g. rest, food, social activity) was combined with a score from an object or activity that could satisfy that same need. The combinations of these values gave a score to the action that told the Sim what it should do. This was one of the first visible uses of utility AI in a game. 
While the player didn't see the calculations themselves, they were made aware of the relative needs of the Sim and the varying degrees of satisfaction that objects in the game would provide. It was, in fact, the core gameplay mechanism. Background: In The Sims 3 (2009), Richard Evans used a modified version of the Boltzmann distribution to choose an action for a Sim, using a temperature that is low when the Sim is happy, and high when the Sim is doing badly to make it more likely that an action with a low utility is chosen. He also incorporated "personalities" into the Sims. This created a sort of 3-axis model — extending the numeric "needs" and "satisfaction values" to include preferences so that different NPCs might react differently from others in the same circumstances based on their internal wants and drives. Background: In his book, Behavioral Mathematics for Game AI, Dave Mark detailed how to mentally think of behavior in terms of math including such things as response curves (converting changing input variables to output variables). He and Kevin Dill went on to give many of the early lectures on utility theory at the AI Summit of the annual Game Developers Conference (GDC) in San Francisco including "Improving AI Decision Modeling Through Utility Theory" in 2010. and "Embracing the Dark Art of Mathematical Modeling in AI" in 2012. These lectures served to inject utility AI as a commonly-referred-to architecture alongside finite state machines (FSMs), behavior trees, and planners. Background: A "Utility System" While the work of Richard Evans, and subsequent AI programmers on the Sims franchise such as David "Rez" Graham were heavily based on utility AI, Dave Mark and his co-worker from ArenaNet, Mike Lewis, went on to lecture at the AI Summit during the 2015 GDC about a full stand-alone architecture he had developed, the Infinite Axis Utility System (IAUS). The IAUS was designed to be a data-driven, self-contained architecture that, once hooked up to the inputs and outputs of the game system, did not require much programming support. In a way, this made it similar to behavior trees and planners where the reasoner (what makes the decisions) was fully established and it was left to the development team to add behaviors into the mix as they saw fit. Background: Utility with other architectures Additionally, rather than a stand-alone architecture, other people have discussed and presented methods of incorporating utility calculations into existing architectures. Bill Merrill wrote a segment in the book, Game AI Pro, entitled "Building Utility Decisions into Your Existing Behavior Tree" with examples of how to re-purpose selectors in BTs to use utility-based math. This made for a powerful hybrid that kept much of the popular formal structure of behavior trees but allowed for some of the non-brittle advantages that utility offered. Background: Utility decision-making is relatively fast, in terms of real-time performance, compared to more computationally expensive planning approaches such as Monte Carlo tree search. This mainly stems from the fact that Utility System is reactive, i.e., chooses decision based on the present state. The planning approaches involve some kind of search that enables to consider various future scenarios at the expense of heavy computations. However, both architectures can be combined. 
In a conference paper about AI in Tactical Troops: Anthracite Shift game, Utility System is responsible for high-level strategical decision making, whereas Monte Carlo Tree Search is responsible for deep tactical situations which require more exact planning.
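A minimal sketch of the scoring-and-selection loop described above (hypothetical behavior names and response curves; this is not the Infinite Axis Utility System itself): each behavior converts the current state into a utility score, and the agent either takes the highest scorer or makes a weighted random choice seeded by the scores.

```python
# Minimal utility-AI sketch (hypothetical names and curves). Each behavior scores
# itself from the current state; the agent picks the top scorer or samples a
# weighted random choice using the scores as the probability weights.

import random
from typing import Callable, Dict

State = Dict[str, float]  # normalized 0..1 inputs such as hunger, health, threat

behaviors: Dict[str, Callable[[State], float]] = {
    # response curves: convert input variables into a 0..1 utility
    "eat":    lambda s: s["hunger"] ** 2,                 # ramps up sharply when very hungry
    "flee":   lambda s: (1.0 - s["health"]) * s["threat"],
    "patrol": lambda s: 0.2,                              # constant low-priority fallback
}

def pick_best(state: State) -> str:
    return max(behaviors, key=lambda name: behaviors[name](state))

def pick_weighted(state: State) -> str:
    names = list(behaviors)
    scores = [max(behaviors[n](state), 0.0) for n in names]
    return random.choices(names, weights=scores, k=1)[0]

state = {"hunger": 0.8, "health": 0.9, "threat": 0.3}
print(pick_best(state))      # "eat" (0.64 beats 0.03 and 0.2)
print(pick_weighted(state))  # usually "eat", occasionally another behavior
```

Because each behavior only describes its own usefulness, adding a new behavior means adding one more scoring function rather than re-slotting priorities by hand, which is the point made above about utility systems sorting themselves out.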
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Unraid** Unraid: Unraid is a proprietary Linux-based operating system designed to run on home media server setups that operates as a network-attached storage device, application server, and virtualization host. Unraid is proprietary software developed and maintained by Lime Technology, Inc. Users of the software are encouraged to write and use plugins and Docker applications to extend the functionality of their systems. Features: Unraid's primary feature is the ability to easily create and manage RAID arrays in hardware-agnostic ways, allowing users to use nearly any combination of hard drives to create an array, regardless of model, capacity, or connection type. Since Unraid saves data to individual drives rather than spreading single files out over multiple drives, users can create shares, which are groups of files that can be written to multiple drives (as determined by the user or system) and allow easy access and management by users. Unraid also allows users to create Docker containers or virtual machines to host applications on the system. For example, a user could use a pre-made Docker container to host applications such as Plex, Jellyfin, and others. Unraid's user license is attached to a specific USB flash drive, which may be linked to a user's forum account. Technical specifications: Unraid is based on Slackware Linux. Supported filesystems: XFS, Btrfs, ZFS and ReiserFS. ReiserFS is supported only for legacy reasons and backward compatibility and, as a rule, should not be used in new installations. GPL compliance: Unraid uses the Linux kernel and its filesystems. It most notably contains a greatly modified version of the Linux md facilities named md_unraid. The source code is distributed as part of the USB system image and is visible in the Unraid OS in /usr/src. binwalk can be used to extract the file from bzroot without booting.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Matrix decoder** Matrix decoder: Matrix decoding is an audio technology where a small number of discrete audio channels (e.g., 2) are decoded into a larger number of channels on play back (e.g., 5). The channels are generally, but not always, arranged for transmission or recording by an encoder, and decoded for playback by a decoder. The function is to allow multichannel audio, such as quadraphonic sound or surround sound to be encoded in a stereo signal, and thus played back as stereo on stereo equipment, and as surround on surround equipment – this is "compatible" multichannel audio. Process: Matrix encoding does not allow one to encode several channels in fewer channels without losing information: one cannot fit 5 channels into 2 (or even 3 into 2) without losing information, as this loses dimensions: the decoded signals are not independent. The idea is rather to encode something that will both be an acceptable approximation of the surround sound when decoded, and acceptable (or even superior) stereo. Notation: The notation for matrix encoding consists of the number of original discrete audio channels separated by a colon from the number of encoded and decoded channels. For example, four channels encoded into two discrete channels and decoded back to four-channels would be notated: 4:2:4 Some methods derive new channels from the existing ones, with no special encoding of the audio source. For example, five discrete channels decoded to six channels would be notated: 5:5:6 Such derived channel "decoders" may take advantage of the Haas effect, as well as audio cues inherent in the source channels. Notation: Many matrix encoding methods have been developed: Hafler circuit (2:2:4): The earliest and simpler form of decoding is the Hafler circuit, deriving back channels out of normal stereo recording (2:2:4). It was used for decoding only (encoding sound was not considered). Decoding matrix Dynaquad matrix (2:2:4) / (4:2:4): The Dynaquad matrix introduced in 1969 was based on the Hafler circuit, but also used for a specific encoding of 4 sound channels in some albums (4:2:4). Encoding matrix Decoding matrix Electro-Voice Stereo-4 matrix (2:2:4) / (4:2:4): The Stereo-4 matrix was invented by Leonard Feldman and Jon Fixler, introduced in 1970, and sold by Electro-Voice and Radio Shack. This matrix was used to encode 4 sound channels on many record albums (4:2:4). Encoding matrix Decoding matrix SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): 90 ∘ phase-shift, 90 ∘ phase-shift The basic SQ matrix had mono/stereo anomalies as well as encoding/decoding problems, heavily criticized by Michael Gerzon and others.An attempt to improve the system lead to the use of other encoders or sound capture techniques, yet the decoding matrix remained unchanged. Position Encoder An N/2 encoder that encoded every position in a 360° circle - it had 16 inputs and each could be dialed to the exact direction desired, generating an optimized encode. Forward-Oriented encoder 90 ∘ phase-shift, 90 ∘ phase-shift The Forward-Oriented encoder caused Center Back to be encoded as Center Front and was recommended for live broadcast use for maximum mono compatibility - it also encoded Center Left/Center Right and both diagonal splits in the optimal manner. SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): Could be used to modify existing 2-channel stereo recordings and create 'synthesized SQ' that when played through a Full-Logic or Tate DES SQ decoder, exhibited a 180° or 270° synthesized quad effect. 
Many stereo FM radio stations broadcasting SQ in the 1970s used their Forward-Oriented SQ encoder for this. For SQ decoders, CBS designed a circuit that produced the 270° enhancement using the 90° phase shifters in the decoder. Sansui's QS Encoders and QS Vario-Matrix Decoders had a similar capability. SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): Backwards-Oriented encoder 90 ∘ phase-shift, 90 ∘ phase-shift The Backwards-Oriented Encoder was the reverse of the Forward-Oriented Encoder - it allowed sounds to be placed optimally in the back half of the room, but mono-compatibility was sacrificed. When used with standard stereo recordings it created "extra wide" stereo with sounds outside the speakers. Some encoding mixers had channel strips switchable between forward-oriented and backwards-oriented encoding. London Box It encoded the Center Back in such a way that it didn't cancel in mono playback, thus its output was usually mixed with that of a Position Encoder or a Forward Oriented encoder. After 1972, the vast majority of SQ Encoded albums were mixed with either the Position Encoder or the Forward-Oriented encoder. Ghent microphone In addition, CBS created the SQ Ghent Microphone, which was a spatial microphone system using the Neumann QM-69 mic. The signals from the QM-69 were differenced, and then phase-matrixed into 2-channel SQ. With the Ghent Microphone, SQ was transformed from a Matrix into a Kernel and an additional signal could be derived to provide N:3:4 performance. Universal SQ In 1976, Ben Bauer integrated matrix and discrete systems into USQ, or Universal SQ. SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): It was a hierarchical 4-4-4 discrete matrix that used the SQ matrix as the baseband for discrete quadraphonic FM broadcasts using additional difference signals called "T" and "Q". For a USQ FM broadcast, the additional "T" modulation was placed at 38 kHz in quadrature to the standard stereo difference signal and the "Q" modulation was placed on a carrier at 76 kHz. For standard 2-channel SQ Matrix broadcasts, CBS recommended that an optional pilot-tone be placed at 19 kHz in quadrature to the regular pilot-tone to indicate SQ encoded signals and activate the listeners Logic decoder. SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): CBS argued that the SQ system should be selected as the standard for quadraphonic FM because, in FCC listening tests of the various four channel broadcast proposals, the 4:2:4 SQ system, decoded with a CBS Paramatrix decoder, outperformed 4:3:4 (without logic) as well as all other 4:2:4 (with logic) systems tested, approaching the performance of a discrete master tape within a very slight margin. At the same time, the SQ "fold" to stereo and mono was preferred to the stereo and mono "fold" of 4:4:4, 4:3:4 and all other 4:2:4 encoding systems. SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): Tate DES decoder The Directional Enhancement System, also known as the Tate DES, was an advanced decoder that enhanced the directionality of the basic SQ matrix. It first matrixed the four outputs of the SQ decoder to derive additional signals, then compared their envelopes to detect the predominant direction and degree of dominance. SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): A processor section, implemented outside of the Tate IC chips, applied variable attack/decay timing to the control signals and determined the coefficients of the "B" (Blend) matrices needed to enhance the directionality. 
These were acted upon by true analog multipliers in the Matrix Multiplier ICs, to multiply the incoming matrix by the "B" matrices and produce outputs in which the directionality of all predominant sounds was enhanced. SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): Since the DES could recognize all three directions of the Energy Sphere simultaneously, and enhance the separation, it had a very open and 'discrete' sounding soundfield. In addition, the enhancement was done with sufficient additional complexity that all non-dominant sounds were kept at their proper levels. SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4): Dolby used the Tate DES ICs in their theater processors until around 1986, when they developed the Pro Logic system. Unfortunately, delays and problems kept the Tate DES ICs from the market until the late 1970s and only two consumer decoders were ever made that employed them, the Audionics Space & Image Composer and the Fosgate Tate II 101A. The Fosgate used a faster, updated version of the IC, called the Tate II, and additional circuitry that provided for separation enhancement around the full 360° soundfield. Unlike the earlier Full Wave-matching Logic decoders for SQ, which varied the output levels to enhance directionality, the Tate DES cancelled SQ signal crosstalk as a function of the predominant directionality, keeping non-dominant sounds and reverberation in their proper spatial locations at their correct level. QS matrix, "Regular Matrix", "Quadraphonic Sound" (4:2:4): (encoding matrix not reproduced here; it uses 90° phase-shifts) Matrix H (4:2:4): (matrix not reproduced here; its coefficients use phase-shifts of j = 20°, k = 25°, l = 55° and m = 115°) Ambisonic UHJ kernel (3:2:4 or more): (matrix not reproduced here; it uses 90° phase-shifts) Dolby Stereo and Dolby Surround (matrix) 4:2:4: Dolby Stereo and Dolby Surround are also known as Dolby MP, Dolby SVA and Pro Logic. Dolby SVA matrix is the original name of the Dolby Stereo 4:2:4 encoding matrix. The term "Dolby Surround" refers to both the encoding and decoding in the home environment, while in the theater it is known as "Dolby Stereo", "Dolby Motion Picture matrix" or "Dolby MP". "Pro Logic" refers to the decoder used; there is no special Pro Logic encoding matrix. The Ultra Stereo system, developed by a different company, is compatible and uses matrices similar to those of Dolby Stereo. Dolby Stereo and Dolby Surround (matrix) 4:2:4: The Dolby Stereo Matrix is straightforward: the four original channels: Left (L), Center (C), Right (R), and Surround (S), are combined into two, known as Left-total (LT) and Right-total (RT), by this formula: LT = L + 0.707C − 0.707(jS), RT = R + 0.707C + 0.707(jS), where j = 90° phase-shift. The center channel information is carried by both LT and RT in phase, and surround channel information by both LT and RT but out of phase. The surround channel is a single limited frequency-range (7 kHz low-pass filtered) mono rear channel, dynamically compressed and placed with a lower volume than the rest. This allows for better separation of signals. Dolby Stereo and Dolby Surround (matrix) 4:2:4: This gives good compatibility with mono playback, which reproduces L, C and R from the mono speaker with C at a level 3 dB higher than L or R, while surround information cancels out. It also gives good compatibility with two-channel stereo playback, where C is reproduced from both left and right speakers to form a phantom center and surround is reproduced from both speakers but in a diffuse manner.
Dolby Stereo and Dolby Surround (matrix) 4:2:4: A simple 4-channel decoder could simply send the sum signal (L+R) to the center speaker, and the difference signal (L-R) to the surrounds. But such a decoder would provide poor separation between adjacent speaker channels, thus anything intended for the center speaker would also reproduce from left and right speakers only 3 dB below the level in the center speaker. Similarly anything intended for the left speaker would be reproduced from both the center and surround speakers, again only 3 dB below the level in the left speaker. There is, however, complete separation between left and right, and between center and surround channels. Dolby Stereo and Dolby Surround (matrix) 4:2:4: To overcome this problem the cinema decoder uses so-called "logic" circuitry to improve the separation. The logic circuitry decides which speaker channel has the highest signal level and gives it priority, attenuating the signals fed to the adjacent channels. Because there already is complete separation between opposite channels there is no need to attenuate those, in effect the decoder switches between L and R priority and C and S priority. This places some limitations on mixing for Dolby Stereo and to ensure that sound mixers mixed soundtracks appropriately they would monitor the sound mix via a Dolby Stereo encoder and decoder in tandem. In addition to the logic circuitry the surround channel is also fed via a delay, adjustable up to 100 ms to suit auditoria of differing sizes, to ensure that any leakage of program material intended for left or right speakers into the surround channel is always heard first from the intended speaker. This exploits the "Precedence effect" to localize the sound to the intended direction. Dolby Pro Logic II matrix (5:2:5): 90 ∘ phase-shift, 90 ∘ phase-shift The Pro Logic II matrix provides for stereo full frequency back channels. Normally a sub-woofer channel is driven by simply filtering and redirecting the existing bass frequencies of the original stereo track.
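The sketch below illustrates a 4:2:4 encode and a simple passive decode in the spirit of the Dolby Stereo matrix described above. It is illustrative only: there is no logic steering, no surround band-limiting or delay, and the 90° phase shift is approximated with a Hilbert transform; signal names and the demo inputs are assumptions.

```python
# Sketch of a 4:2:4 matrix encode and a simple passive decode in the spirit of the
# Dolby Stereo matrix described above (illustrative only; no "logic" steering,
# no 7 kHz low-pass or delay on the surround). The 90-degree phase shift "j" is
# approximated with a Hilbert transform.

import numpy as np
from scipy.signal import hilbert

def encode(L, C, R, S):
    jS = np.imag(hilbert(S))           # approximate 90-degree phase-shifted surround
    Lt = L + 0.707 * C - 0.707 * jS
    Rt = R + 0.707 * C + 0.707 * jS
    return Lt, Rt

def passive_decode(Lt, Rt):
    # Sum signal feeds the center, difference signal feeds the surround;
    # adjacent channels only get ~3 dB of separation, which is why practical
    # decoders add logic steering on top of this.
    L, R = Lt, Rt
    C = 0.707 * (Lt + Rt)
    S = 0.707 * (Lt - Rt)
    return L, C, R, S

# Tiny demo with noise-like test signals
rng = np.random.default_rng(0)
L, C, R, S = (rng.standard_normal(48000) for _ in range(4))
Lt, Rt = encode(L, C, R, S)
print([x.shape for x in passive_decode(Lt, Rt)])
```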
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LOXL2** LOXL2: Lysyl oxidase homolog 2 is an enzyme that in humans is encoded by the LOXL2 gene. Function: This gene encodes a member of the lysyl oxidase gene family. The prototypic member of the family is essential to the biogenesis of connective tissue, encoding an extracellular copper-dependent amine oxidase that catalyses the first step in the formation of crosslinks in collagens and elastin. A highly conserved amino acid sequence at the C-terminal end appears to be sufficient for amine oxidase activity, suggesting that each family member may retain this function. The N-terminus is poorly conserved and may impart additional roles in developmental regulation, senescence, tumor suppression, cell growth control, and chemotaxis to each member of the family. LOXL2 can also crosslink collagen type IV and hence influence the sprouting of new blood vessels. Clinical significance: LOXL2 is an enzyme that is up-regulated in several types of cancer and is associated with a poorer prognosis. LOXL2 changes the structure of histones (proteins that are attached to DNA) and thus changes the shape of the cells, making it easier for the cancer cells to metastasize. An antibody that inhibits the activity of LOXL2, simtuzumab, is currently in clinical trials for the treatment of several types of cancer and of fibrotic diseases such as liver fibrosis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**W Corvi** W Corvi: W Corvi is an eclipsing binary star system in the constellation Corvus, ranging from apparent magnitude 11.16 to 12.5 over 9 hours. Its period has increased by 1/4 second over a century. It is an unusual system in that its two stars are very close to each other yet have different surface temperatures and hence thermal transfer is not taking place as expected.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Getsuku** Getsuku: Getsuku (月9, getsuku, sometimes further shortened to gekku) is a Japanese abbreviation for Getsuyō kuji (月曜9時, Monday at 9 pm). This is traditionally the time when the most popular TV dramas air in Japan. History: Fuji TV, one of the major broadcasting companies in Japan, started the pattern of airing the dramas it predicted would be most popular on Monday nights. Monday is generally the only night on which no baseball games are played, meaning that throughout the year there would be no delays in broadcasting (due to the popularity of baseball, some stations continue to broadcast games that extend beyond their expected finish time, thus delaying the start of subsequent programs). While all of the stations often aired dramas on Monday nights, it was the popularity of the dramas Tokyo Love Story and The 101st Marriage Proposal that established the prominence of the 9 pm slot on Mondays. Fuji TV started branding its 9 pm Monday shows as "Getsuku", and actors and actresses would be interviewed as starring in their first "getsuku". Not long after this branding began, the other stations also started showing their most popular dramas on Monday nights.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cannabis in Benin** Cannabis in Benin: Cannabis in Benin is illegal. The country is not a major drug producer or consumer, but increasingly serves as a transshipment point for drugs produced elsewhere. Cannabis is the only drug produced locally in Benin, though mostly on a small scale. History: The Encyclopedia of Drug Policy noted in 2011 that over the preceding two decades, the sale of cannabis had increasingly fallen under the control of organized crime syndicates operating regionally, particularly from Nigeria. Porto-Novo emerged as a particular transit point, given its proximity to Nigeria, which allowed collaboration with Yoruba smugglers. Enforcement: In 2006, Benin seized 82 kilograms of cannabis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SPEN** SPEN: Msx2-interacting protein is a protein that in humans is encoded by the SPEN gene. This gene encodes a hormone-inducible transcriptional repressor. Repression of transcription by this gene product can occur through interactions with other repressors, by the recruitment of proteins involved in histone deacetylation, or through sequestration of transcriptional activators. The product of this gene contains a carboxy-terminal domain that permits binding to other corepressor proteins. This domain also permits interaction with members of the NuRD complex, a nucleosome remodeling protein complex that contains deacetylase activity. In addition, this repressor contains several RNA recognition motifs that confer binding to a steroid receptor RNA coactivator; this binding can modulate the activity of both liganded and nonliganded steroid receptors. Interactions: SPEN has been shown to interact with HDAC1, SRA1 and Nuclear receptor co-repressor 2.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hydrophobic effect** Hydrophobic effect: The hydrophobic effect is the observed tendency of nonpolar substances to aggregate in an aqueous solution and exclude water molecules. The word hydrophobic literally means "water-fearing", and it describes the segregation of water and nonpolar substances, which maximizes hydrogen bonding between molecules of water and minimizes the area of contact between water and nonpolar molecules. In terms of thermodynamics, the hydrophobic effect is the free energy change of water surrounding a solute. A positive free energy change of the surrounding solvent indicates hydrophobicity, whereas a negative free energy change implies hydrophilicity. Hydrophobic effect: The hydrophobic effect is responsible for the separation of a mixture of oil and water into its two components. It is also responsible for effects related to biology, including: cell membrane and vesicle formation, protein folding, insertion of membrane proteins into the nonpolar lipid environment and protein-small molecule associations. Hence the hydrophobic effect is essential to life. Substances for which this effect is observed are known as hydrophobes. Amphiphiles: Amphiphiles are molecules that have both hydrophobic and hydrophilic domains. Detergents are composed of amphiphiles that allow hydrophobic molecules to be solubilized in water by forming micelles and bilayers (as in soap bubbles). They are also important to cell membranes composed of amphiphilic phospholipids that prevent the internal aqueous environment of a cell from mixing with external water. Folding of macromolecules: In the case of protein folding, the hydrophobic effect is important to understanding the structure of proteins that have hydrophobic amino acids (such as glycine, alanine, valine, leucine, isoleucine, phenylalanine, tryptophan and methionine) clustered together within the protein. Structures of water-soluble proteins have a hydrophobic core in which side chains are buried from water, which stabilizes the folded state. Charged and polar side chains are situated on the solvent-exposed surface where they interact with surrounding water molecules. Minimizing the number of hydrophobic side chains exposed to water is the principal driving force behind the folding process, although formation of hydrogen bonds within the protein also stabilizes protein structure.The energetics of DNA tertiary structure assembly were determined to be driven by the hydrophobic effect, in addition to Watson-Crick base pairing, which is responsible for sequence selectivity, and stacking interactions between the aromatic bases. Protein purification: In biochemistry, the hydrophobic effect can be used to separate mixtures of proteins based on their hydrophobicity. Column chromatography with a hydrophobic stationary phase such as phenyl-sepharose will cause more hydrophobic proteins to travel more slowly, while less hydrophobic ones elute from the column sooner. To achieve better separation, a salt may be added (higher concentrations of salt increase the hydrophobic effect) and its concentration decreased as the separation progresses. Cause: The origin of the hydrophobic effect is not fully understood. Cause: Some argue that the hydrophobic interaction is mostly an entropic effect originating from the disruption of highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute. A hydrocarbon chain or a similar nonpolar region of a large molecule is incapable of forming hydrogen bonds with water. 
Introduction of such a non-hydrogen-bonding surface into water disrupts the hydrogen-bonding network between water molecules. The hydrogen bonds are reoriented tangentially to such a surface to minimize disruption of the hydrogen-bonded 3D network of water molecules, and this leads to a structured water "cage" around the nonpolar surface. The water molecules that form the "cage" (or clathrate) have restricted mobility. In the solvation shell of small nonpolar particles, the restriction amounts to some 10%. For example, in the case of dissolved xenon at room temperature, a mobility restriction of 30% has been found. In the case of larger nonpolar molecules, the reorientational and translational motion of the water molecules in the solvation shell may be restricted by a factor of two to four; thus, at 25 °C the reorientational correlation time of water increases from 2 to 4-8 picoseconds. Generally, this leads to significant losses in translational and rotational entropy of water molecules and makes the process unfavorable in terms of the free energy in the system. By aggregating together, nonpolar molecules reduce the surface area exposed to water and minimize their disruptive effect. Cause: The hydrophobic effect can be quantified by measuring the partition coefficients of non-polar molecules between water and non-polar solvents. The partition coefficients can be transformed into the free energy of transfer, which includes enthalpic and entropic components, ΔG = ΔH - TΔS. These components are experimentally determined by calorimetry. The hydrophobic effect was found to be entropy-driven at room temperature because of the reduced mobility of water molecules in the solvation shell of the non-polar solute; the enthalpic component of the transfer energy, however, was found to be favorable, meaning that the reduced mobility of the water molecules strengthened the water-water hydrogen bonds in the solvation shell. At higher temperatures, when water molecules become more mobile, this energy gain decreases, along with the entropic component. The hydrophobic effect thus depends on temperature, which leads to the "cold denaturation" of proteins. The hydrophobic effect can also be calculated by comparing the free energy of solvation with that of bulk water. In this way, the hydrophobic effect can not only be localized but also decomposed into enthalpic and entropic contributions.
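As a minimal numerical sketch of that decomposition (using the standard relation ΔG° = -RT ln K between a partition coefficient and a transfer free energy; the partition coefficient and enthalpy below are illustrative values, not measured data):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def transfer_free_energy(partition_coefficient: float, temperature_k: float) -> float:
    """Standard free energy of transfer (J/mol) from a water -> nonpolar
    partition coefficient, using dG = -R*T*ln(K)."""
    return -R * temperature_k * math.log(partition_coefficient)

# Illustrative numbers only: a solute that favors the nonpolar phase
# by a factor of 100 at 25 degrees C, with a small favorable enthalpy.
T = 298.15   # K
K = 100.0    # partition coefficient (assumed for this sketch)
dH = -2.0e3  # J/mol, assumed calorimetric enthalpy of transfer

dG = transfer_free_energy(K, T)   # ~ -11.4 kJ/mol
minus_TdS = dG - dH               # dG = dH - T*dS  =>  -T*dS = dG - dH

print(f"dG    = {dG / 1000:6.1f} kJ/mol")
print(f"dH    = {dH / 1000:6.1f} kJ/mol")
print(f"-T*dS = {minus_TdS / 1000:6.1f} kJ/mol  (entropy term dominates at room T)")
```

With these illustrative numbers the -TΔS term dominates ΔG, which is the signature of an entropy-driven effect at room temperature.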
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MIMOS II** MIMOS II: MIMOS II is the miniaturised Mössbauer spectrometer, developed by Dr. Göstar Klingelhöfer at the Johannes Gutenberg University in Mainz, Germany, that is used on the Mars Exploration Rovers Spirit and Opportunity for close-up investigations of the mineralogy of iron-bearing rocks and soils on the Martian surface. MIMOS II uses a cobalt-57 gamma-ray source of about 300 mCi at launch, which gave an acquisition time of 6-12 hours for a standard Mössbauer spectrum during the primary mission on Mars, depending on the total Fe content and which Fe-bearing phases are present. Cobalt-57 has a half-life of only 271.8 days (hence the extended measuring times on Mars after more than a decade). MIMOS II: The MIMOS II sensor heads used on Mars are approximately 9 cm × 5 cm × 4 cm and weigh about 400 g. The MIMOS II system also includes a circuit board of about 100 g.
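A small Python sketch of the decay arithmetic behind that parenthetical remark (it uses only the 300 mCi launch activity and 271.8-day half-life quoted above; the sampled mission years are arbitrary):

```python
HALF_LIFE_DAYS = 271.8       # Co-57 half-life quoted in the text
LAUNCH_ACTIVITY_MCI = 300.0  # source strength at launch, as quoted

def remaining_activity(days_since_launch: float) -> float:
    """Exponential decay of the Co-57 source: A(t) = A0 * 0.5 ** (t / t_half)."""
    return LAUNCH_ACTIVITY_MCI * 0.5 ** (days_since_launch / HALF_LIFE_DAYS)

for years in (0, 1, 2, 5, 10):
    days = years * 365.25
    a = remaining_activity(days)
    print(f"after {years:2d} yr: {a:7.2f} mCi "
          f"({100 * a / LAUNCH_ACTIVITY_MCI:6.3f}% of launch activity)")
```

Because the counting rate scales with the remaining source activity, a spectrum that took 6-12 hours early in the primary mission needs far longer integration times years into the extended mission.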
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**International Academy of Medical and Biological Engineering** International Academy of Medical and Biological Engineering: The International Academy of Medical and Biological Engineering (IAMBE) is a non-profit society of distinguished scholars engaged in medical and biological engineering research to further the field of biomedical engineering or bioengineering. The academy is composed of Fellows who have made significant contributions to and played leadership roles in the field of medical and biological engineering. The academy is affiliated with the International Federation for Medical and Biological Engineering (IFMBE), an international organization consisting of more than 60 national and transnational societies of biomedical engineering, representing over 120,000 members. International Academy of Medical and Biological Engineering: The academy was established by the International Federation for Medical and Biological Engineering in 1997 to honor individuals who have contributed significantly to the theory and practice of medical and biological engineering and made extraordinary leadership efforts in promoting the field of medical and biological engineering. International Academy of Medical and Biological Engineering: The academy has engaged in public debates on identifying grand challenges in engineering life sciences, and has played an advisory role to the leadership of the International Federation for Medical and Biological Engineering. A recent conference endorsed by the academy is the IEEE Life Sciences Grand Challenges Conference, whose distinguished speakers included a Nobel Laureate, a National Medal of Science Laureate, a National Medal of Technology Laureate, the president of the National Academy of Engineering, the director of the National Institute of Biomedical Imaging and Bioengineering of the NIH, and the chair of the International Academy of Medical and Biological Engineering. International Academy of Medical and Biological Engineering: Fellow nominations are accepted either annually or every three years. Fellows may be nominated by a Fellow of the academy or by the International Federation for Medical and Biological Engineering. The nominations are screened by the Membership Committee of the academy. Election to Fellow status is subject to a vote by all current Fellows. International Academy of Medical and Biological Engineering: The founding chair of the academy is Robert Nerem, Parker H. Petit Distinguished Chair for Engineering in Medicine and Institute Professor at the Georgia Institute of Technology. The immediate Past Chair of the academy is Niilo Saranummi, Research Professor at the VTT Technical Research Centre of Finland and Past President of the International Federation for Medical and Biological Engineering. The current chair of the academy is Roger Kamm, Cecil and Ida Green Professor of Biological and Mechanical Engineering at the Massachusetts Institute of Technology.
List of fellows: Metin Akay, '12, University of Houston, USA Joji Ando, '09, Dokkyo Medical University, Japan Lars Arendt-Nielsen, '03, Aalborg University, Denmark Kazuhiko Atsumi, FF, FE, University of Tokyo, Japan Albert Avolio, '12, Macquarie University, Australia Jing Bai, '12, Tsinghua University, China James Bassingthwaighte, '09, University of Washington, USA Rebecca Bergman, '12, Medtronic, USA Marcello Bracale, '03, University of Naples Federico II, Italy Per-Ingvar Branemark, '00, University of Gothenburg, Sweden Colin Caro, '00, Imperial College London, UK Ewart Carson, '06, City University London, UK Sergio Cerutti, '03, Polytechnic University in Milan (Politecnico), Italy Walter H. Chang, '06, Chung Yuan Christian University, Taiwan Shu Chien, '00, University of California San Diego, USA Jean-Louis Coatrieux, '02, University of Rennes 1, France Richard S. C. Cobbold, FF, University of Toronto, Canada Paolo Dario, '03, Sant'Anna School of Advanced Studies of Pisa, Italy Ivan Daskalov, '03 (Deceased), Bulgarian Academy of Sciences, Bulgaria David Delpy, '03, Engineering and Physical Sciences Research Council (EPSRC), UK Jacques Demongeot, '12, University Joseph Fourier, France André Dittmar, '12, Centre national de la recherche scientifique (CNRS), France Takeyoshi Dohi, '06, University of Tokyo, Japan Olaf Doessel, '12, Karlsruhe Institute of Technology, Germany Floyd Dunn, FF, FE, University of Illinois at Urbana-Champaign, USA Shmuel Einav, '05, Tel Aviv University, Israel Ross Ethier, '09, Imperial College London, UK Uwe Faust, FF, FE, University of Stuttgart, Germany Leszek Filipczynski, FF, FE (Deceased), Polish Academy of Sciences, Poland Yuan-Cheng B. Fung, FF, FE, University of California San Diego, USA Leslie Alexander Geddes, FF (Deceased), University of Toronto, Canada Amit Gefen, '14, Tel Aviv University, Israel Morteza Gharib, '12, California Institute of Technology, USA Bin He, '12, University of Minnesota, USA Hiie Hinrikus, '03, Tallinn University of Technology, Estonia Nozomu Hoshimiya, '06, Tohoku Gakuin University, Japan Peter Hunter, '03, University of Auckland, New Zealand Helmut Hutten, '03, Graz University of Technology, Austria Dov Jaron, '06, Drexel University, USA Fumihiko Kajiya, '00, Kawasaki University of Medical Welfare, Japan Akira Kamiya, '02, Nihon University, Japan Roger D. Kamm, '06, Massachusetts Institute of Technology, USA Hiroshi Kanai, FF, FE, Tohoku University, Japan Zhenhuang Kang, FF, Chengdu University of Science and Technology, China Toivo Katila, '03, Helsinki University central Hospital, Finland Richard E. Kerber, '09, University of Iowa, USA Makoto Kikuchi, '12, National Defence Medical College, Japan Yongmin Kim, '09, Pohang University of Science and Technology, South Korea Richard Kitney, '03, Imperial College London, UK Peter Kneppo, '03, Czech Technical University, Czech Republic Pablo Laguna, '12, University of Zaragoza, Spain Daniel Laurent, FF, FE, Universite Marne-La-Vallee, France Raphael Lee, '12, University of Chicago, USA Peter Lewin, '12, Drexel University, USA Pai-Chi Li, '09, National Taiwan University, Taiwan Zhi-Pei Liang, '12, University of Illinois at Urbana-Champaign, USA John H. 
Linehan, '06, Northwestern University, USA Jaakko Malmivuo, '03, Tampere University of Technology, Finland Roman Maniewski, '03, Polish Academy of Sciences, Poland Andrew McCulloch, '05, University of California San Diego, USA Jean-Pierre Morucci, FE, National Institute of Health and Medical Research, France Joachim Nagel, '12, University of Stuttgart, Germany Maciej Nalecz, FF, FE (Deceased), Polish Academy of Sciences, Poland Robert M. Nerem, FF, Georgia Institute of Technology, USA Peter Niederer, '00, ETH Zurich, Switzerland Benno M. Nigg, '00, University of Calgary, Canada Marc Nyssen, '12, Free University Brussels, Belgium P. Ake Oberg, FF, Linkoping University, Sweden Kazuo Ogino, '12, Nihon Kohden Corporation, Japan Nicolas Pallikarakis, '05, University of Patras, Greece John P Paul, FF, FE, University of Strathclyde, UK Antonio Pedotti, '00, Politecnico di Milano, Italy Robert Plonsey, FF, FE, Duke University, USA Leandre Pourcelot, '00, Francois Rabelais University, France Jose Principe, '12, University of Florida, USA Basil Proimos, FE, University of Patras, Greece Buddy D. Ratner, '00, University of Washington, USA Gunter Rau, FF, RWTH Aachen University, Germany Robert S Reneman, '03, Maastricht University, Netherlands James B. Reswick, FF, FE, U.S. Department of Education, USA Nandor Richter, FF, FE, National Institute for Hospital and Medical Engineering, Hungary Laura Roa, '03, University of Seville, Spain Fernand A Roberge, FF, University of Montreal, Canada Colin Roberts, '02, King's College London, UK Peter Rolfe, FF, Oxford BioHorizons Ltd., UK, and Harbin Institute of Technology, China Annelise Rosenfalck, FF (Deceased), Aalborg University, Denmark Christian Roux, '03, Télécom Bretagne, France Masao Saito, FF, Tokyo Denki University, Japan Niilo Saranummi, '00, VTT Technical Research Center, Finland Shunske Sato, '03, Osaka University, Japan Klaus Schindhelm, 'FF, University of New South Wales, Australia Geert W. Schmid-Schoenbe, '05, University of California San Diego, USA Leif Sornmo, '12, Lund University, Sweden Jos AE Spaan, '09, University of Amsterdam, Netherlands Kazuo Tanishita, '09, Keio University Nitish Thakor, '12, Johns Hopkins School of Medicine, USA Jie Tian, '12, Chinese Academy of Sciences, China Tatsuo Togawa, FF, Waseda University, Japan Shoogo Ueno, '06, University of Tokyo, Japan Max E. Valentinuzzi, FF, FE, University of Buenos Aires, Argentina Christopher L. 'Kit' Vaughan, '05, University of Cape Town, South Africa Karin Wardell, '12, Linkoping University, Sweden Peter Wells, FF, FE, Cardiff University, UK Andrzej Werynski, '03, Institute of Biocybernetics and Biomedical Engineering, Poland Nico Westerhof, FE, VU University, Netherlands Erich Wintermantel, '03, Swiss Federal Institute of Technology Zurich, Switzerland Jan Wojcicki, '12, Polish Academy of Sciences, Poland Bernhard Wolf, '12, Technische Universitaet Muenchen, Germany Zi Bin Yang, '02, Peking Union Medical College, China Yuan-Ting Zhang, '06, Chinese University of Hong Kong, Hong Kong FF stands for Founding Fellows FE stands for Fellow Emeritus The numbers beside the name indicate the year they were inducted
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded