**ZNF274** ZNF274: Zinc finger protein 274 is a protein that in humans is encoded by the ZNF274 gene. This gene encodes a zinc finger protein containing five C2H2-type zinc finger domains, one or two Krüppel-associated box A (KRAB A) domains, and a leucine-rich domain. The encoded protein has been suggested to be a transcriptional repressor. It localizes predominantly to the nucleolus. Alternatively spliced transcript variants encoding different isoforms exist. These variants utilize alternative polyadenylation signals.
**Enteropathy** Enteropathy: Enteropathy refers to any pathology of the intestine. Although enteritis specifically refers to an inflammation of the intestine, and is thus a more specific term than "enteropathy", the two terms are sometimes used interchangeably. Types: Specific types of enteropathy include:
- Enteropathy-associated T-cell lymphoma
- Environmental enteropathy, also known as tropical enteropathy: an incompletely defined syndrome of inflammation related to the quality of the environment. Signs and symptoms include reduced absorptive capacity and reduced intestinal barrier function of the small intestine. It is widespread among children and adults in low- and middle-income countries.
- Eosinophilic enteropathy: a condition in which eosinophils (a type of white blood cell) accumulate in the gastrointestinal tract and in the blood. Eosinophil build-up in the gastrointestinal tract can result in polyp formation, tissue breakdown, inflammation, and ulcers.
- Coeliac disease: a malabsorption syndrome precipitated by the ingestion of foods containing gluten in a predisposed individual. It is characterized by inflammation of the small intestine, loss of microvilli structure, deficient nutrient absorption, and malnutrition.
- Human immunodeficiency virus (HIV) enteropathy: characterized by chronic diarrhea of more than one month's duration with no obvious infectious cause in an HIV-positive individual. Thought to be due to direct or indirect effects of HIV on the enteric mucosa.
- Immunodysregulation polyendocrinopathy and enteropathy, X-linked (IPEX syndrome, see FOXP3)
- Protein-losing enteropathy
- Radiation enteropathy
- Chronic enteropathy associated with the SLCO2A1 gene
If the condition also involves the stomach, it is known as "gastroenteropathy". In animals: In pigs, porcine proliferative enteropathy is a diarrheal disease.
**Loewy ring** Loewy ring: In mathematics, a Loewy ring or semi-Artinian ring is a ring in which every non-zero module has a non-zero socle, or equivalently if the Loewy length of every module is defined. The concepts are named after Alfred Loewy. Loewy length: The Loewy length and Loewy series were introduced by Emil Artin, Cecil J. Nesbitt, and Robert M. Thrall (1944). If M is a module, then define the Loewy series M_α for ordinals α by M_0 = 0, M_{α+1}/M_α = socle(M/M_α), and M_α = ⋃_{λ<α} M_λ if α is a limit ordinal. The Loewy length of M is defined to be the smallest α with M = M_α, if it exists. Semiartinian modules: _R M is a semiartinian module if, for all epimorphisms M → N with N ≠ 0, the socle of N is essential in N. Note that if _R M is an artinian module then _R M is a semiartinian module. Clearly 0 is semiartinian. If 0 → M′ → M → M″ → 0 is exact, then M′ and M″ are semiartinian if and only if M is semiartinian. If {M_i}_{i∈I} is a family of R-modules, then ⊕_{i∈I} M_i is semiartinian if and only if M_j is semiartinian for all j ∈ I. Semiartinian rings: R is called left semiartinian if _R R is semiartinian, that is, R is left semiartinian if for any left ideal I, R/I contains a simple submodule. Note that R left semiartinian does not imply that R is left artinian.
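As a concrete illustration (a worked example added here, not taken from the source text), the Loewy series of the ℤ-module M = ℤ/p³ℤ can be computed directly from the definition:

```latex
% Worked example (illustrative): Loewy series of M = \mathbb{Z}/p^3\mathbb{Z} as a \mathbb{Z}-module.
\[
\begin{aligned}
M_0 &= 0, \\
M_1 &= \operatorname{socle}(M) = p^2 M \cong \mathbb{Z}/p\mathbb{Z}, \\
M_2 &= pM, \quad\text{since } M_2/M_1 = \operatorname{socle}(M/M_1) = p\,(M/M_1), \\
M_3 &= M,  \quad\text{since } M_3/M_2 = \operatorname{socle}(M/M_2) = M/M_2 \cong \mathbb{Z}/p\mathbb{Z}.
\end{aligned}
\]
% The smallest ordinal \alpha with M_\alpha = M is 3, so the Loewy length of M is 3.
```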
**Location obfuscation** Location obfuscation: Location obfuscation is a technique used in location-based services or information systems to protect the location of the users by slightly altering, substituting or generalizing their location in order to avoid reflecting their real position. A formal definition of location obfuscation can be "the means of deliberately degrading the quality of information about an individual's location in order to protect that individual's location privacy". Obfuscation techniques: The most common techniques to perform this change are:
- Pseudonyms and the use of third-party location providers
- "Spatial cloaking" techniques, in which a user is k-anonymous if her exact location cannot be distinguished among k−1 other users
- "Invisible cloaking", in which no locations are provided for certain zones
- Adding random noise to the position
- Rounding, which uses landmarks to approximate the location
- Redefinition of possible areas of location
Each technique for obfuscating location has strengths and weaknesses, and it is important to assess them based on each use case. For example, adding random noise is simple to implement, but can inadvertently create a circle of obfuscated values whose center reveals the individual's exact location. One should also consider the level of obfuscation required in urban areas versus rural areas.
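A minimal sketch of two of these techniques, adding random noise and rounding to a coarse grid, might look as follows (illustrative Python; the noise radius and grid size are arbitrary assumptions, not values from the source):

```python
import math
import random

def add_noise(lat, lon, radius_m=500.0):
    """Obfuscate a position by adding uniform random noise within radius_m metres."""
    r = radius_m * math.sqrt(random.random())   # sqrt gives uniform density over the disc
    theta = random.uniform(0, 2 * math.pi)
    dlat = (r * math.cos(theta)) / 111_320.0    # approx. metres per degree of latitude
    dlon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

def round_to_grid(lat, lon, cell_deg=0.01):
    """Obfuscate by snapping the position to a coarse grid (roughly 1 km cells)."""
    return round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg

print(add_noise(48.8584, 2.2945))      # a noisy position near the true one
print(round_to_grid(48.8584, 2.2945))  # a generalized position
```

Note that averaging many independently noised reports converges back on the true position, which is exactly the "circle of obfuscated values" caveat mentioned above.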
**Panoramic tripod head** Panoramic tripod head: A panoramic tripod head is a piece of photographic equipment, mounted to a tripod, which allows photographers to shoot a sequence of images around the entrance pupil of a lens that can be used to produce a panorama. The primary function of the panoramic head is to precisely set the point of rotation about the entrance pupil for a given lens and focal length, eliminating parallax error. Panoramic tripod head: To take a panorama, the camera is rotated at fixed angular increments, taking an image at each point. These images can then be assembled (stitched) using stitching software, which allows the images to be aligned and combined into a single seamless panoramic image, either automatically (using image analysis) or manually (with user supplied control points). The final panoramic image can then be viewed or printed as a flat image or viewed interactively using specific playback software. Panoramic tripod head: Professional models include precision bearings, scales to allow the user to take photos at specific angles, detents to stop at common angles and integrated levels to aid in adjusting the tripod. Robotic panoramic heads are also available. The robotic head performs the rotation and image capture functions automatically under computer control. Robotic heads can also be used with time-lapse photography.
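To make the "fixed angular increments" concrete, here is a small illustrative Python sketch (a back-of-the-envelope estimate under stated assumptions, not a feature of any particular head) that derives a rotation step and shot count from the lens field of view and a desired stitching overlap:

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view of a rectilinear lens (full-frame sensor assumed)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def panorama_steps(focal_length_mm, overlap=0.3):
    """Shots and rotation increment for a 360-degree row, given fractional frame overlap."""
    fov = horizontal_fov_deg(focal_length_mm)
    step = fov * (1 - overlap)            # advance by the non-overlapping part of each frame
    shots = math.ceil(360.0 / step)
    return shots, 360.0 / shots           # spread the shots evenly around the full circle

shots, step = panorama_steps(50)          # e.g. a 50 mm lens with 30% overlap
print(f"{shots} shots at {step:.1f} degree increments")
```

The detents on professional heads serve exactly this purpose: they let the photographer stop reliably at each computed increment.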
**Power kite** Power kite: A power kite or traction kite is a large kite designed to provide significant pull to the user. Types: The two most common forms are the foil and the leading edge inflatable. There are also other, less common types of power kite, including rigid-framed kites and soft single-skin kites. There are several different control systems used with these kites, which have two to five lines and a bar or handles. Types: Foil kites consist of a number of cells with cloth ribs in each cell. It is the profile of these ribs that gives the kite its aerofoil shape and enables it to generate lift. The most common type is the ram-air foil, where each cell has a gauze-covered opening at the front, meaning air is forced in during flight, giving the kite its stiffness and enabling it to hold its profile. Some ram-air foils are closed-cell, where a one-way valve locks the air inside the cells, giving some increased water-relaunch capability. Types: Leading edge inflatable kites (LEIs) are made of a single skin of fabric with, as the name suggests, an inflated tubular leading edge and inflated ribs. The leading edge and ribs are inflated by the user with a pump prior to launching the kite. The profile of an LEI-type kite comes from the inflatable edge and ribs. LEI kites are primarily used for kitesurfing, as they retain their structure when wet and can be easily relaunched from the water after sitting on the surface for an extended period. Conversely, an open-celled foil kite crashed into the sea immediately becomes saturated with water and unflyable. Uses: Power kites are generally used in conjunction with a vehicle or board, such as in:
- kitesurfing on a kiteboard
- kite buggying on a purpose-built 3-wheeled cart
- kite landboarding on an all-terrain/mountain/land board
- kite skating on all-terrain roller skates
- kiteboating, on a boat
- snowkiting on skis or snowboards
Power kites can also be used recreationally without a vehicle or board, as in kite jumping or kite man-lifting, where a harnessed kite flier is moored to the ground or to one or more people to provide tension and lift. Research is also under way in the use of kites to generate electric power to be fed into a power grid. Laddermills are a type of airborne wind turbine. Kites are used to reach high-altitude winds such as a jet stream, which are always present, even if ground-level winds available to wind turbines are absent. Uses: Kites of related design are used for sailing, including speed sailing. Jacob's Ladder, a kite-powered boat, set the C-Class world sailing speed record with a speed of 25 knots (46 km/h) in 1982, a record that stood for six years. A kiteboard was the first sailing craft to exceed a speed of 50 knots (93 km/h), in October 2008. Power kites range in size from 1.2 to 50 m2 (13 to 538 sq ft). All kites are made for specific purposes: some for water, land, power or maneuverability. Bridle configuration: The lift generated by the kite and other flying characteristics are affected by the kite's angle of attack, which is set by the bridle: the arrangement of lines which terminate the main kite lines and attach to a number of points across the kite's surface. Power kites having 4 or 5 lines come in two variants, fixed bridle and depowerable. Bridle configuration: Fixed bridle Fixed bridle kites have a fixed angle of attack which is set by the bridle. Small adjustments may be possible by adjusting the bridle with the kite on the ground; however, the angle of attack is not adjustable whilst the kite is airborne.
A high angle of attack setting results in more power from the kite, but at the expense of speed and ability to fly close to the wind. A low angle of attack results in less power, but speed is increased and the kite can fly a lot closer to the edge of the wind window. Fixed bridle kites may be used with handles or a bar, with handles typically being preferable for activities such as kite jumping and kite buggying, and a bar being preferable for kite landboarding. Bridle configuration: Depowerable Depowerable kites are used with a control bar and harness system, with the kite's primary power lines attached to the user's harness through a hole in the centre of the bar. The bar has a few inches of travel along the lines, and the lines are configured such that the user may pull the bar towards themselves to increase the kite's angle of attack, increasing the lift and thus the power delivered through the harness whilst the kite is in flight. Kites used for kitesurfing are almost invariably depowerable, and some modern kites such as bow kites allow power to be reduced by almost 100% for increased safety and versatility. Safety: Kite safety systems have become more prevalent in recent years, and today almost all 4 and 5 line kites are used with a safety system designed to remove power from the kite in the event that the user becomes overpowered or loses control of the kite. When flying a fixed bridle kite, one or more straps known as 'kite killers' are attached to the user's wrist(s) by bungee cords. When the handles or bar are released, these straps pull on the kite's brake lines at the trailing edge of the kite, allowing the kite to flap in the wind with no structure. Safety: Depowerable kites have safety systems that work in a similar way, but since the kite is semi-permanently attached to the user's harness, a toggle or handle is used to activate the safety system which releases the bar and power lines from the harness. Safety: Some depowerable kites have a 5th line safety system, the 5th line being redundant during normal use until the safety mechanism is activated. Here, all of the usual four lines are slackened, causing the kite to either fold or roll backwards, and lose its profile to the wind and therefore its power. The kite is left attached to the user by the 5th line to allow retrieval. History: 19th century In the 1800s, George Pocock used kites of increased size to propel carts on land and ships on the water, using a four-line control system—the same system in common use today. Both carts and boats were able to turn and sail upwind. The kites could be flown for sustained periods. The intention was to establish kitepower as an alternative to horsepower, partly to avoid the hated "horse tax" that was levied at that time. Aviation pioneer Samuel Cody developed several "man-lifting kites" and in 1903 succeeded in crossing the English Channel in a small collapsible canvas boat powered by a kite. History: 20th century In the late 1970s, the development of Kevlar then Spectra flying lines and more controllable kites with improved efficiency contributed to practical kite traction. In 1978, Ian Day's "FlexiFoil" kite-powered Tornado catamaran exceeded 40 km/h. History: In October 1977 Gijsbertus Adrianus Panhuise (Netherlands) received the first patent for KiteSurfing. The patent covers, specifically, a water sport using a floating board of a surf board type where a pilot standing up on it is pulled by a wind catching device of a parachute type tied to his harness on a trapeze type belt. 
Although this patent did not result in any commercial interest, Gijsbertus Adrianus Panhuise could be considered the originator of kitesurfing. History: On 28 August 1982 Greg Locke and Simon Carter, from Brighton, UK, set the world record for kite traction at sea, travelling nearly 26 miles under wind power alone along the English Channel. This followed a successful crossing of the English Channel from Sussex to France by Locke & Carter the previous year. Through the 1980s, there were occasionally successful attempts to combine kites with canoes, ice skates, snow skis, water skis and roller skates. History: Throughout the 1970s and early 1980s, Dieter Strasilla from Germany developed parachute-skiing and later perfected a kite-skiing system using self-made paragliders and a ball-socket swivel, allowing the pilot to sail upwind and uphill but also to take off into the air at will. Strasilla and his Swiss friend Andrea Kuhn also used this invention in combination with surfboards and snowboards, grass skis and self-made buggies. One of his patents, from 1979, describes the first use of an inflatable kite design for kitesurfing. Two brothers, Bruno Legaignoux and Dominique Legaignoux, from the Atlantic coast of France, developed kites for kitesurfing in the late 1970s and early 1980s and patented an inflatable kite design in November 1984, a design that has since been used by companies to develop their own products. History: In 1990, practical kite buggying was pioneered by Peter Lynn at Argyle Park in Ashburton, New Zealand. Lynn coupled a three-wheeled buggy with a forerunner of the modern parafoil kite. Kite buggying proved to be popular worldwide, with over 14,000 buggies sold up to 1999.
**Nonstandard finite difference scheme** Nonstandard finite difference scheme: Nonstandard finite difference schemes are a general set of methods in numerical analysis that give numerical solutions to differential equations by constructing a discrete model. The general rules for such schemes are not precisely known. Overview: A finite difference (FD) model of a differential equation (DE) can be formed by simply replacing the derivatives with FD approximations. But this is a naive "translation". If we literally translate from English to Japanese by making a one-to-one correspondence between words, the original meaning is often lost. Similarly, the naive FD model of a DE can be very different from the original DE, because the FD model is a difference equation with solutions that may be quite different from solutions of the DE. For a more technical definition see Mickens 2000. A nonstandard (NS) finite difference model is a freer and more accurate "translation" of a differential equation. For example, a parameter (call it v) in the DE may take another value u in the NS-FD model. Example: As an example, let us model the wave equation ∂²φ/∂t² − v² ∂²φ/∂x² = 0. The naive finite difference model, which we now call the standard (S) FD model, is found by approximating the derivatives with FD approximations. The central second-order FD approximation of the first derivative is f′(x) ≈ [f(x+Δx/2) − f(x−Δx/2)]/Δx. Applying this FD approximation to f′(x) twice, we can derive the FD approximation for f″(x): f″(x) ≈ d_x² f(x)/Δx², where we have introduced the shortcut d_x f(x) = f(x+Δx/2) − f(x−Δx/2) for simplicity, such that d_x² f(x) = f(x+Δx) + f(x−Δx) − 2f(x), which can be checked by applying d_x to f(x) twice. Approximating both derivatives in the wave equation leads to the S-FD model [d_t² − (vΔt/Δx)² d_x²] φ(x,t) = 0. If you insert the solution φ(x,t) = e^{i(kx−ωt)} of the wave equation (with ω/k = v) into the S-FD model, you find that [d_t² − (vΔt/Δx)² d_x²] φ(x,t) = ε. In general ε ≠ 0, because the solution of the FD approximation to the wave equation is not the same as the solution of the wave equation itself. To construct an NS-FD model which has the same solution as the wave equation, put a free parameter, call it u, in place of vΔt/Δx and try to find a value of u which makes ε = 0. It turns out that this value is u = sin(ωΔt/2)/sin(kΔx/2). Thus an exact nonstandard finite difference model of the wave equation is [d_t² − u² d_x²] φ(x,t) = 0 with u = sin(ωΔt/2)/sin(kΔx/2). Further details, and extensions to two and three dimensions as well as to Maxwell's equations, can be found in Cole 2002.
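The claim that u = sin(ωΔt/2)/sin(kΔx/2) makes ε vanish can be checked numerically. The sketch below (illustrative Python, with arbitrary sample values for v, k, Δx and Δt) applies the difference operators d_t² and d_x² to the plane wave φ(x,t) = e^{i(kx−ωt)} and compares the residual of the standard choice u = vΔt/Δx with the nonstandard one:

```python
import cmath
import math

v, k = 1.0, 2.0               # wave speed and wavenumber (arbitrary test values)
w = v * k                     # dispersion relation: omega = v k
dx, dt = 0.3, 0.2             # grid spacings (arbitrary)

phi = lambda x, t: cmath.exp(1j * (k * x - w * t))

def d2x(f, x, t):             # central second difference in x: f(x+dx) + f(x-dx) - 2 f(x)
    return f(x + dx, t) + f(x - dx, t) - 2 * f(x, t)

def d2t(f, x, t):             # central second difference in t
    return f(x, t + dt) + f(x, t - dt) - 2 * f(x, t)

def residual(u, x=0.7, t=0.4):
    """Residual epsilon of the scheme [d_t^2 - u^2 d_x^2] phi at a sample point."""
    return d2t(phi, x, t) - u**2 * d2x(phi, x, t)

u_standard = v * dt / dx
u_nonstandard = math.sin(w * dt / 2) / math.sin(k * dx / 2)
print(abs(residual(u_standard)))     # nonzero: the S-FD model does not reproduce the plane wave exactly
print(abs(residual(u_nonstandard)))  # ~1e-16: the NS-FD model is exact for this solution
```

This works because d_t² and d_x² act on the plane wave by multiplication with −4 sin²(ωΔt/2) and −4 sin²(kΔx/2) respectively, so the nonstandard u cancels the two factors exactly.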
**Calcium-dependent chloride channel** Calcium-dependent chloride channel: The calcium-dependent chloride channel (Ca-ClC) proteins (or calcium-activated chloride channels, CaCCs) are heterogeneous groups of ligand-gated ion channels for chloride that have been identified in many epithelial and endothelial cell types as well as in smooth muscle cells. They include proteins from several structurally different families: chloride channel accessory (CLCA), bestrophin (BEST), and calcium-dependent chloride channel anoctamin (ANO or TMEM16) channels. ANO1 is highly expressed in human gastrointestinal interstitial cells of Cajal, which serve as intestinal pacemakers for peristalsis. In addition to their role as chloride channels, some CLCA proteins function as adhesion molecules and may also have roles as tumour suppressors. These eukaryotic proteins are "required for normal electrolyte and fluid secretion, olfactory perception, and neuronal and smooth muscle excitability" in animals. Members of the Ca-ClC family are generally 600 to 1000 amino acyl residues (aas) in length and exhibit 7 to 10 transmembrane segments (TMSs). Function: Tmc1 and Tmc2 (TC#s 1.A.17.4.6 and 1.A.17.4.1, respectively) may play a role in hearing and are required for normal function of cochlear hair cells, possibly as Ca2+ channels or Ca2+ channel subunits (see also family TC# 1.A.82). Mice lacking both channels lack hair cell mechanosensory potentials. There are 8 members of this family in humans, 1 in Drosophila and 2 in C. elegans. One of the latter two is expressed in mechanoreceptors. Tmc1 is a sodium-sensitive cation channel required for salt (Na+) chemosensation in C. elegans, "where it is required for salt-evoked neuronal activity and behavioural avoidance of high concentrations of NaCl". TMEM16A is over-expressed in several tumor types. The role of TMEM16A in gliomas and the potential underlying mechanisms were analyzed by Liu et al. 2014. Knockdown of TMEM16A suppressed cell proliferation, migration and invasion. Function: The reactions believed to be catalyzed by channels of the Ca-ClC family are: Cl− (out) ⇌ Cl− (in) and Cations (e.g., Ca2+) (out) ⇌ Cations (e.g., Ca2+) (in). In humans: CaCCs that are known to occur in humans include: Accessories: CLCA1, CLCA2, CLCA3, and CLCA4. Anoctamins: ANO1 and ANO2 (potentially others). Bestrophins: BEST1, BEST2, BEST3, and BEST4.
**KTN1** KTN1: Kinectin is a protein that in humans is encoded by the KTN1 gene. Function: Various cellular organelles and vesicles are transported along the microtubules in the cytoplasm. Likewise, membrane recycling of the endoplasmic reticulum (ER), Golgi assembly at the microtubule organizing center, and alignment of lysosomes along microtubules are all related processes. The transport of organelles requires a special class of microtubule-associated proteins (MAPs). One of these is the molecular motor kinesin (see MIM 148760 and MIM 600025), an ATPase that moves vesicles unidirectionally toward the plus end of the microtubule. Another such MAP is kinectin, a large integral ER membrane protein. Antibodies directed against kinectin have been shown to inhibit its binding to kinesin.[supplied by OMIM] Interactions: KTN1 has been shown to interact with EEF1D, RhoG and RHOA.
**Minimum energy performance standard** Minimum energy performance standard: A minimum energy performance standard (MEPS) is a specification, containing a number of performance requirements for an energy-using device, that effectively limits the maximum amount of energy that may be consumed by a product in performing a specified task. A MEPS is usually made mandatory by a government energy efficiency body. It may include requirements not directly related to energy; this is to ensure that general performance and user satisfaction are not adversely affected by increasing energy efficiency. A MEPS generally requires use of a particular test procedure that specifies how performance is measured. In North America, when addressing energy efficiency, a MEPS is sometimes referred to simply as a "standard", as in "Co-operation on Labeling and Standards Programs". In Latin America, when addressing energy efficiency, MEPS are sometimes referred to as Normas (translated as "norms"). Examples:
- A refrigerating appliance is required to maintain temperatures inside its compartments within specified limits, and to operate (including defrosting) in a specified ambient temperature while using at most a specified amount of electricity; the energy use allowed varies according to volume, number of doors, the function of the various compartments and other parameters. Electricity use in U.S. refrigerators fell dramatically following the introduction of a series of MEPS, first in California and then nationally, starting in the mid-1970s.
- An electric fan is required to shift air at a specified rate while consuming a limited amount of power.
- A storage water heater providing hot water for sanitary purposes is required to heat up a specified quantity of water to a specified temperature and store it at that temperature for a specified time while consuming a limited amount of energy. In this example, the requirements for heating up and for maintaining the temperature may be applied as two separate energy performance requirements or there may be a single task efficiency.
- An electric induction motor is required to have a specified minimum full-load efficiency.
- A compact fluorescent lamp is required to start and run up to near full brightness in a given time, to have a minimum life of several thousand hours, to maintain its output within specified limits, to withstand a certain number of switchings, and to have a consistent colour appearance and a specified colour rendering. Its energy performance requirement is usually stated in terms of minimum efficacy (light output per electrical input).
California: In the U.S., the state of California was a pioneer in the introduction of MEPS. In order to reduce the growth in electricity use, the California Energy Commission (CEC) was given unique and strong authority to regulate the efficiency of appliances sold in the state. It started to adopt appliance efficiency regulations in 1978, has updated the standards regularly over time, and has expanded the list of covered appliances. In 1988, California's standards became national standards for the U.S. through the enactment of the National Appliance Energy Conservation Act (NAECA). The federal standards preempted state standards (unless the state justified a waiver from federal preemption based on conditions in the state), and since then, the U.S. Department of Energy has had the responsibility to update the federal standards.
California has continued to expand the list of appliances it regulates, adding appliances that are not federally regulated and therefore not preempted. In recent years, the CEC's attention has been focused on consumer electronics, for which energy use has been growing dramatically. Australia: MEPS programs are made mandatory in Australia by state government legislation and regulations which give force to the relevant Australian Standards. It is mandatory for products manufactured in or imported into Australia to meet the MEPS levels specified by the relevant Australian Standards. Brazil: A law was approved in 2001. MEPS have been set for three-phase electric motors and compact fluorescent lamps. New Zealand: On 5 February 2002, New Zealand introduced Minimum Energy Performance Standards (MEPS) with Energy Efficiency Regulations. MEPS and energy rating labels help improve the energy efficiency of products, and enable consumers to choose products that use less energy. Products covered by MEPS must meet or exceed set levels for energy performance before they can be sold to consumers. MEPS have been updated over the years (2002, 2003, 2004, 2008, 2011) to cover a wider range of products at increasing levels of stringency. New Zealand works with Australia to harmonise MEPS levels; almost all of its standards are joint standards with Australia. New Zealand: New Zealand has mandatory energy rating labelling for dishwashers, clothes dryers, fridges, washing machines and room air conditioners. MEPS apply to the following:
- Refrigerators and freezers
- Washing machines
- Air conditioners
- Computer room air conditioners
- Chillers
- Electric storage water heaters
- Gas water heaters
- External power supplies
- Set-top boxes
- Distribution transformers
- Refrigerated display cabinets
- Three-phase electric motors
- Ballasts for fluorescent lamps
- Tubular fluorescent lamps
**Ranking SVM** Ranking SVM: In machine learning, a ranking SVM is a variant of the support vector machine algorithm, which is used to solve certain ranking problems (via learning to rank). The ranking SVM algorithm was published by Thorsten Joachims in 2002. The original purpose of the algorithm was to improve the performance of an internet search engine. However, it was found that ranking SVM can also be used to solve other problems, such as Rank SIFT. Description: The ranking SVM algorithm is a learning retrieval function that employs pairwise ranking methods to adaptively sort results based on how 'relevant' they are for a specific query. The ranking SVM function uses a mapping function to describe the match between a search query and the features of each of the possible results. This mapping function projects each data pair (such as a search query and a clicked web page, for example) onto a feature space. These features are combined with the corresponding click-through data (which can act as a proxy for how relevant a page is for a specific query) and can then be used as the training data for the ranking SVM algorithm. Description: Generally, ranking SVM includes three steps in the training period: It maps the similarities between queries and the clicked pages onto a certain feature space. It calculates the distances between any two of the vectors obtained in step 1. It forms an optimization problem which is similar to a standard SVM classification and solves this problem with the regular SVM solver. Background: Ranking method Suppose C is a data set containing N elements c_i, and r is a ranking method applied to C. Then r on C can be represented as an N×N binary matrix. If the rank of c_i is higher than the rank of c_j, i.e. r(c_i) < r(c_j), the corresponding position of this matrix is set to the value "1". Otherwise the element in that position is set to the value "0". Background: Kendall's tau Kendall's tau also refers to the Kendall tau rank correlation coefficient, which is commonly used to compare two ranking methods for the same data set. Background: Suppose r_1 and r_2 are two ranking methods applied to data set C. The Kendall's tau between r_1 and r_2 can be represented as follows: τ(r_1, r_2) = (P − Q)/(P + Q) = 1 − 2Q/(P + Q), where P is the number of concordant pairs and Q is the number of discordant pairs (inversions). A pair d_i and d_j is concordant if r_1 and r_2 agree in how they order d_i and d_j. It is discordant if they disagree. Background: Information retrieval quality Information retrieval quality is usually evaluated by the following three measurements: precision, recall, and average precision. For a specific query to a database, let P_relevant be the set of relevant information elements in the database and P_retrieved be the set of the retrieved information elements. Then the above three measurements can be represented as follows:
Precision = |P_relevant ∩ P_retrieved| / |P_retrieved|,
Recall = |P_relevant ∩ P_retrieved| / |P_relevant|,
AveragePrecision = ∫₀¹ Prec(Recall) d(Recall),
where Prec(Recall) is the precision at a given level of recall. Let r* and r_f(q) be the expected and proposed ranking methods of a database, respectively; the lower bound of the average precision of method r_f(q) can be represented as follows:
AvgPrec(r_f(q)) ≥ (1/R) [Q + R(R+1)/2]⁻¹ (Σ_{i=1}^{R} √i)²,
where Q is the number of differing elements in the upper triangular parts of the matrices of r* and r_f(q), and R is the number of relevant elements in the data set.
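For concreteness, here is a small illustrative Python sketch (not part of the original article) computing Kendall's τ directly from the concordant and discordant pair counts P and Q defined above:

```python
from itertools import combinations

def kendall_tau(rank1, rank2):
    """Kendall's tau between two rankings given as dicts: item -> rank position."""
    P = Q = 0
    for a, b in combinations(list(rank1), 2):
        agree = (rank1[a] - rank1[b]) * (rank2[a] - rank2[b])
        if agree > 0:
            P += 1   # concordant: both rankings order a and b the same way
        elif agree < 0:
            Q += 1   # discordant: the two rankings disagree on a and b
    return (P - Q) / (P + Q)

r1 = {"d1": 1, "d2": 2, "d3": 3, "d4": 4}
r2 = {"d1": 1, "d2": 3, "d3": 2, "d4": 4}
print(kendall_tau(r1, r2))   # P = 5, Q = 1, so tau = 4/6 ~ 0.667
```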
Background: SVM classifier Suppose (x⃗_i, y_i) is an element of a training data set, where x⃗_i is the feature vector and y_i is the label (which classifies the category of x⃗_i). A typical SVM classifier for such a data set can be defined as the solution of the following optimization problem:
minimize V(w⃗, ξ) = ½ w⃗·w⃗ + C Σ_i ξ_i
subject to y_i (w⃗·x⃗_i − b) ≥ 1 − ξ_i, where b is a scalar; ∀ y_i ∈ {−1, 1}; ∀ ξ_i ≥ 0.
The solution of the above optimization problem can be represented as a linear combination of the feature vectors x⃗_i: w⃗* = Σ_i α_i y_i x⃗_i, where the α_i are the coefficients to be determined. Ranking SVM algorithm: Loss function Let τ_P(f) be the Kendall's tau between the expected ranking method r* and the proposed method r_f(q). It can be proved that maximizing τ_P(f) helps to minimize the lower bound of the average precision of r_f(q). Expected loss function The negative of τ_P(f) can be selected as the loss function, to minimize the lower bound of the average precision of r_f(q):
L_expected = −τ_P(f) = −∫ τ(r_f(q), r*) dPr(q, r*),
where Pr(q, r*) is the statistical distribution of r* for a certain query q. Empirical loss function Since the expected loss function is not applicable in practice, the following empirical loss function is selected for the training data:
L_empirical = −τ_S(f) = −(1/n) Σ_{i=1}^{n} τ(r_f(q_i), r_i*).
Collecting training data: n i.i.d. queries are applied to a database and each query corresponds to a ranking method. The training data set has n elements; each element contains a query and the corresponding ranking method. Feature space: A mapping function Φ(q, d) is required to map each query and element of the database to a feature space. Each point in the feature space is then labelled with a certain rank by the ranking method. Optimization problem: The points generated by the training data lie in the feature space and also carry the rank information (the labels). These labeled points can be used to find the boundary (classifier) that specifies their order. In the linear case, such a boundary (classifier) is a vector. Ranking SVM algorithm: Suppose c_i and c_j are two elements in the database, and denote (c_i, c_j) ∈ r if the rank of c_i is higher than that of c_j in a certain ranking method r. Let the vector w⃗ be the linear classifier candidate in the feature space. Then the ranking problem can be translated into the following SVM classification problem. Note that one ranking method corresponds to one query. Ranking SVM algorithm:
minimize V(w⃗, ξ) = ½ w⃗·w⃗ + C Σ ξ_{i,j,k} (C is a constant)
subject to w⃗·(Φ(q_k, c_i) − Φ(q_k, c_j)) ≥ 1 − ξ_{i,j,k} for all (c_i, c_j) ∈ r_k*, with ξ_{i,j,k} ≥ 0, where k ∈ {1, 2, …, n} and i, j ∈ {1, 2, …}.
The above optimization problem is identical to the classical SVM classification problem, which is the reason why this algorithm is called ranking SVM. Ranking SVM algorithm: Retrieval function The optimal vector w⃗* obtained from the training sample is w⃗* = Σ α*_{k,ℓ} Φ(q_k, c_ℓ), so the retrieval function can be formed from this optimal classifier. For a new query q, the retrieval function first projects all elements of the database onto the feature space, then orders these feature points by the values of their inner products with the optimal vector. The rank of each feature point is the rank of the corresponding element of the database for the query q. Application of ranking SVM: Ranking SVM can be applied to rank pages according to the query. The algorithm can be trained using click-through data, which consists of the following three parts: the query; the present ranking of search results; and the search results clicked on by the user. The combination of the second and third parts cannot provide the full ordering of the training data, which would be needed to apply the full SVM algorithm. Instead, it provides a part of the ranking information of the training data.
So the algorithm can be slightly revised as follows:
minimize V(w⃗, ξ) = ½ w⃗·w⃗ + C Σ ξ_{i,j,k} (C is a constant)
subject to w⃗·(Φ(q_k, c_i) − Φ(q_k, c_j)) ≥ 1 − ξ_{i,j,k} for all (c_i, c_j) ∈ r′_k, with ξ_{i,j,k} ≥ 0, where k ∈ {1, 2, …, n} and i, j ∈ {1, 2, …}.
The method r′ does not provide ranking information for the whole data set; it is a subset of the full ranking method. So the constraints of the optimization problem become more relaxed compared with the original ranking SVM.
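A minimal sketch of this pairwise reduction follows (illustrative Python assuming scikit-learn's LinearSVC as the "regular SVM solver"; the feature vectors and preferences are toy data, not from the source). Each known preference c_i ≻ c_j for a query contributes the difference vector Φ(q, c_i) − Φ(q, c_j) with label +1 (and its negation with label −1), and a linear SVM trained on these differences yields the weight vector w used to rank new results by w·Φ(q, d):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy joint feature vectors Phi(q, d) for one query's candidate results.
features = np.array([[0.9, 0.1],   # d0
                     [0.5, 0.6],   # d1
                     [0.2, 0.8]])  # d2
# Partial preferences from click-through data: (preferred, less preferred).
prefs = [(1, 0), (2, 0)]           # clicked d1 and d2 over the higher-ranked d0

X, y = [], []
for i, j in prefs:
    diff = features[i] - features[j]
    X.append(diff);  y.append(+1)   # Phi(q, c_i) - Phi(q, c_j), labelled +1
    X.append(-diff); y.append(-1)   # mirrored pair keeps the two classes balanced

svm = LinearSVC(C=1.0, fit_intercept=False).fit(np.array(X), np.array(y))
w = svm.coef_.ravel()

# Retrieval: order all candidates by their inner product with w.
scores = features @ w
print(np.argsort(-scores))          # ranking induced by the learned w
```

The mirrored pairs make the problem a standard binary classification over difference vectors, which is exactly why the partial preferences extracted from clicks suffice to train the ranker.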
**1,2-Dichloro-1,1,2-trifluoroethane** 1,2-Dichloro-1,1,2-trifluoroethane: 1,2-Dichloro-1,1,2-trifluoroethane is a volatile liquid chlorofluoroalkane composed of carbon, hydrogen, chlorine and fluorine, with the structural formula CClF2CHClF. It is also known as a refrigerant with the designation R-123a. Formation: 1,1,2-Trichloro-1,2,2-trifluoroethane can be biotransformed in sewage sludge to 1,2-dichloro-1,1,2-trifluoroethane. Properties: The critical temperature of R-123a is 461.6 K (188.5 °C; 371.2 °F). Internal rotation of the molecule appears to be hindered by the presence of chlorine on each carbon atom, but is eased at higher temperatures. Use: Although not deliberately used, R-123a is a significant impurity in its isomer, the widely used 2,2-dichloro-1,1,1-trifluoroethane (R-123).
**Fax art** Fax art: Fax art is art specifically designed to be sent or transmitted by a facsimile machine, where the "fax art" is the received "fax". It is also called telecommunications art or telematic art. According to art historians Annmarie Chandler and Norie Neumark, "Fax art was another means of mediating distances". Fax art was first transmitted in 1980, but this was not documented until 1985. On January 12, 1985, Joseph Beuys, together with Andy Warhol and the Japanese artist Kaii Higashiyama, participated in the "Global-Art-Fusion" project, a fax art project initiated by the conceptual artist Ueli Fuchser, in which a fax with drawings by all three artists was sent around the world within 32 minutes – from Düsseldorf (Germany) via New York (US) to Tokyo (Japan), received at Vienna's Palais Liechtenstein Museum of Modern Art. This fax was a statement of peace during the Cold War of the 1980s. The earliest scholarly note of fax art in art history was in 1990 by Karen O'Rourke.
**Endosymbiont** Endosymbiont: An endosymbiont or endobiont is any organism that lives within the body or cells of another organism, most often, though not always, in a mutualistic relationship. Endosymbiont: (The term endosymbiosis is from the Greek: ἔνδον endon "within", σύν syn "together" and βίωσις biosis "living".) Examples are nitrogen-fixing bacteria (called rhizobia), which live in the root nodules of legumes, single-cell algae inside reef-building corals, and bacterial endosymbionts that provide essential nutrients to insects. The history behind the concept of endosymbiosis stems from the postulates of the endosymbiotic theory. The endosymbiotic theory (symbiogenesis) holds that certain bacteria came to live exclusively within eukaryotic organisms after being engulfed by them. This notion underpins the accepted account of organelle development in eukaryotes. Two major types of organelle in eukaryotic cells, mitochondria and plastids such as chloroplasts, are considered to be derived from bacterial endosymbionts. There are two main types of symbiont transmission. In horizontal transmission, each new generation acquires free-living symbionts from the environment. An example is the nitrogen-fixing bacteria in certain plant roots. Vertical transmission takes place when the symbiont is transferred directly from parent to offspring. An example is pea aphid symbionts. It is also possible for both to be involved in mixed-mode transmission, where symbionts are transferred vertically for some generations before a switch of host occurs and new symbionts are horizontally acquired from the environment. Other examples include Wigglesworthia, nutritional symbionts of tsetse flies, and symbionts in sponges. When a symbiont reaches this stage, it begins to resemble a cellular organelle, similar to mitochondria or chloroplasts. Endosymbiont: Many instances of endosymbiosis are obligate; that is, either the endosymbiont or the host cannot survive without the other, such as the gutless marine worms of the genus Riftia, which obtain nutrition from their endosymbiotic bacteria. The most common examples of obligate endosymbioses are mitochondria and chloroplasts. Some human parasites, e.g. Wuchereria bancrofti and Mansonella perstans, thrive in their intermediate insect hosts because of an obligate endosymbiosis with Wolbachia spp. They can both be eliminated from hosts by treatments that target this bacterium. However, not all endosymbioses are obligate, and some endosymbioses can be harmful to either of the organisms involved. The Origin: Symbiogenesis and Symbiont transmission: Symbiogenesis and organelles Symbiogenesis explains the origins of eukaryotes, whose cells contain two major kinds of organelle: mitochondria and chloroplasts. The theory proposes that these organelles evolved from certain types of bacteria that eukaryotic cells engulfed through phagocytosis. These cells and the bacteria trapped inside them entered an endosymbiotic relationship, meaning that the bacteria took up residence and began living exclusively within the eukaryotic cells. Numerous insect species have endosymbionts at different stages of symbiogenesis. A common theme of symbiogenesis involves the reduction of the genome to only the genes essential for the collective genome of host and symbiont. A remarkable example of this is the fractionation of the Hodgkinia genome of Magicicada cicadas. Because the cicada life cycle takes years underground, natural selection on endosymbiont populations is relaxed for many bacterial generations.
This allows the symbiont genomes to diversify within the host for years, with only punctuated periods of selection when the cicadas reproduce. As a result, the ancestral Hodgkinia genome has split into three groups of primary endosymbiont, each encoding only a fraction of the essential genes for the symbiosis. The host now requires all three sub-groups of symbiont, each with a degraded genome lacking most of the genes essential for bacterial viability. The Origin: Symbiogenesis and Symbiont transmission: Symbiont transmission Symbiont transmission is the process where the host in a symbiotic relationship between two organisms acquires an organism (internally or externally) that serves as its symbiont. Most symbionts are either obligate (require their host to survive) or facultative (do not necessarily need their host to survive). Many instances of endosymbiosis are obligate; that is, either the endosymbiont or the host cannot survive without the other, such as the gutless marine worms of the genus Riftia, which get nutrition from their endosymbiotic bacteria. The most common examples of obligate endosymbiosis are mitochondria and chloroplasts. Some human parasites, e.g. Wuchereria bancrofti and Mansonella perstans, thrive in their intermediate insect hosts because of an obligate endosymbiosis with Wolbachia spp. They can both be eliminated from hosts by treatments that target this bacterium. Horizontal (lateral), vertical, and mixed-mode (a hybrid of horizontal and vertical) transmission are the three paths for symbiont transfer. Horizontal symbiont transfer (horizontal transmission) is a process where a host acquires a facultative symbiont from the environment or from another host. The rhizobia-legume symbiosis (a bacteria-plant endosymbiosis) is a prime example of horizontal symbiont transmission. The rhizobia-legume symbiotic relationship is important for processes like the formation of root nodules. It starts with flavonoids released by the plant host (the legume), which cause the rhizobia species (the endosymbiont) to activate its nod genes. These nod genes generate lipooligosaccharide signals which the legume (host) detects, leading to root nodule formation. This process feeds into other unique processes like nitrogen fixation in plants. The evolutionary advantage of such an interaction is that it allows genetic exchange between both organisms involved, increasing the propensity for novel functions, as seen in the plant-bacterium interaction (holobiont formation). In vertical transmission, the symbionts often have a reduced genome and are no longer able to survive on their own. As a result, the symbiont depends on the host, resulting in a highly intimate co-dependent relationship. For instance, pea aphid symbionts have lost genes for essential molecules and now rely on the host to supply them with nutrients. In return, the symbionts synthesize essential amino acids for the aphid host. Other examples include Wigglesworthia, nutritional symbionts of tsetse flies, and symbionts in sponges. When a symbiont reaches this stage, it begins to resemble a cellular organelle, similar to mitochondria or chloroplasts. As an evolutionary consequence, the host and the symbiont become co-dependent and form a holobiont; in the event of a bottleneck, a decrease in symbiont diversity could adversely affect host-symbiont interactions, as deleterious mutations build up over time. Bacterial endosymbionts of invertebrates: The best-studied examples of endosymbiosis are known from invertebrates.
These symbioses affect organisms with global impact, including Symbiodinium of corals and Wolbachia of insects. Many insect agricultural pests and human disease vectors have intimate relationships with primary endosymbionts. Bacterial endosymbionts of invertebrates: Of insects Scientists classify insect endosymbionts into two broad categories, 'primary' and 'secondary'. Primary endosymbionts (sometimes referred to as P-endosymbionts) have been associated with their insect hosts for many millions of years (from 10 to several hundred million years in some cases). They form obligate associations (see below), and display cospeciation with their insect hosts. Secondary endosymbionts exhibit a more recently developed association, are sometimes horizontally transferred between hosts, live in the hemolymph of the insects (not in specialized bacteriocytes, see below), and are not obligate. Bacterial endosymbionts of invertebrates: Primary Among primary endosymbionts of insects, the best-studied are the pea aphid (Acyrthosiphon pisum) and its endosymbiont Buchnera sp. APS, the tsetse fly Glossina morsitans morsitans and its endosymbiont Wigglesworthia glossinidia brevipalpis, and the endosymbiotic protists in lower termites. As with endosymbiosis in other insects, the symbiosis is obligate in that neither the bacteria nor the insect is viable without the other. Scientists have been unable to cultivate the bacteria in lab conditions outside of the insect. With special nutritionally-enhanced diets, the insects can survive, but are unhealthy, and at best survive only a few generations. In some insect groups, these endosymbionts live in specialized insect cells called bacteriocytes (also called mycetocytes), and are maternally transmitted, i.e. the mother transmits her endosymbionts to her offspring. In some cases, the bacteria are transmitted in the egg, as in Buchnera; in others like Wigglesworthia, they are transmitted via milk to the developing insect embryo. In termites, the endosymbionts reside within the hindguts and are transmitted through trophallaxis among colony members. The primary endosymbionts are thought to help the host either by providing nutrients that the host cannot obtain itself or by metabolizing insect waste products into safer forms. For example, the putative primary role of Buchnera is to synthesize essential amino acids that the aphid cannot acquire from its natural diet of plant sap. Likewise, the primary role of Wigglesworthia, it is presumed, is to synthesize vitamins that the tsetse fly does not get from the blood that it eats. In lower termites, the endosymbiotic protists play a major role in the digestion of lignocellulosic materials that constitute the bulk of the termites' diet. Bacterial endosymbionts of invertebrates: Bacteria benefit from the reduced exposure to predators and competition from other bacterial species, the ample supply of nutrients and the relative environmental stability inside the host. Bacterial endosymbionts of invertebrates: Genome sequencing reveals that obligate bacterial endosymbionts of insects have among the smallest of known bacterial genomes and have lost many genes that are commonly found in closely related bacteria. Several theories have been put forth to explain the loss of genes. It is presumed that some of these genes are not needed in the environment of the host insect cell.
A complementary theory suggests that the relatively small numbers of bacteria inside each insect decrease the efficiency of natural selection in 'purging' deleterious mutations, even small ones, from the population, resulting in a loss of genes over many millions of years. Research in which a parallel phylogeny of bacteria and insects was inferred supports the belief that the primary endosymbionts are transferred only vertically (i.e., from the mother), and not horizontally (i.e., by escaping the host and entering a new host). Attacking obligate bacterial endosymbionts may present a way to control their insect hosts, many of which are pests or carriers of human disease. For example, aphids are crop pests and the tsetse fly carries the organism Trypanosoma brucei that causes African sleeping sickness. Other motivations for their study involve understanding the origins of symbioses in general, as a proxy for understanding e.g. how chloroplasts or mitochondria came to be obligate symbionts of eukaryotes or plants. Bacterial endosymbionts of invertebrates: Secondary The pea aphid (Acyrthosiphon pisum) is known to contain at least three secondary endosymbionts, Hamiltonella defensa, Regiella insecticola, and Serratia symbiotica. Hamiltonella defensa defends its aphid host from parasitoid wasps. This defensive symbiosis improves the survival of aphids, which have lost some elements of the insect immune response. One of the best-understood defensive symbionts is the spiral bacterium Spiroplasma poulsonii. Spiroplasma sp. can be reproductive manipulators, but also defensive symbionts of Drosophila flies. In Drosophila neotestacea, S. poulsonii has spread across North America owing to its ability to defend its fly host against nematode parasites. This defence is mediated by toxins called "ribosome-inactivating proteins" that attack the molecular machinery of invading parasites. These Spiroplasma toxins represent one of the first mechanistically understood examples of a defensive symbiosis between an insect endosymbiont and its host. Sodalis glossinidius is a secondary endosymbiont of tsetse flies that lives inter- and intracellularly in various host tissues, including the midgut and hemolymph. Phylogenetic studies have not indicated a correlation between the evolution of Sodalis and tsetse. Unlike tsetse's primary symbiont Wigglesworthia, though, Sodalis has been cultured in vitro. Many other insects have secondary endosymbionts not reviewed here. Bacterial endosymbionts of invertebrates: Of ants The best-studied endosymbionts of ants are bacteria of the genus Blochmannia, which are the primary endosymbionts of Camponotus ants. In 2018 a new ant-associated symbiont was discovered in Cardiocondyla ants. This symbiont was named Candidatus Westeberhardia Cardiocondylae and it is also believed to be a primary symbiont. Bacterial endosymbionts of invertebrates: Of marine invertebrates Extracellular endosymbionts are also represented in all four extant classes of Echinodermata (Crinoidea, Ophiuroidea, Echinoidea, and Holothuroidea). Little is known of the nature of the association (mode of infection, transmission, metabolic requirements, etc.), but phylogenetic analysis indicates that these symbionts belong to the class Alphaproteobacteria, relating them to Rhizobium and Thiobacillus.
Other studies indicate that these subcuticular bacteria may be both abundant within their hosts and widely distributed among the echinoderms in general. Some marine oligochaeta (e.g., Olavius algarvensis and Inanidrillus spp.) have obligate extracellular endosymbionts that fill the entire body of their host. These marine worms, which lack any digestive or excretory system (no gut, mouth, or nephridia), are nutritionally dependent on their symbiotic chemoautotrophic bacteria. The sea slug Elysia chlorotica lives in an endosymbiotic relationship with the alga Vaucheria litorea, and the jellyfish Mastigias has a similar relationship with an alga. Elysia chlorotica forms this relationship intracellularly with the chloroplasts from the alga. These chloroplasts retain their photosynthetic capabilities and structures for several months after being taken into the cells of the slug. The very simple animal Trichoplax has two bacterial endosymbionts. One of them is called Ruthmannia and lives inside the animal's digestive cells. The other is Grellia, which lives permanently inside the endoplasmic reticulum (ER) of Trichoplax, the first known symbiont to do so. Paracatenula is a flatworm which has lived in symbiosis with endosymbiotic bacteria for 500 million years. The bacteria, which have lost much of their genome as symbionts, produce numerous small, droplet-like vesicles which provide the host with all the nutrients it needs. Bacterial endosymbionts of invertebrates: Dinoflagellate endosymbionts Dinoflagellate endosymbionts of the genus Symbiodinium, commonly known as zooxanthellae, are found in corals, mollusks (esp. giant clams, the Tridacna), sponges, and the unicellular foraminifera. These endosymbionts drive the formation of coral reefs by capturing sunlight and providing their hosts with energy for carbonate deposition. Previously thought to be a single species, molecular phylogenetic evidence over the past couple of decades has shown there to be great diversity in Symbiodinium. In some cases, there is specificity between host and Symbiodinium clade. More often, however, there is an ecological distribution of Symbiodinium, the symbionts switching between hosts with apparent ease. When reefs become environmentally stressed, this distribution of symbionts is related to the observed pattern of coral bleaching and recovery. Thus, the distribution of Symbiodinium on coral reefs and its role in coral bleaching presents one of the most complex and interesting current problems in reef ecology. Of phytoplankton: In marine environments, bacterial endosymbionts have more recently been discovered. These endosymbiotic relationships are especially prevalent in oligotrophic or nutrient-poor regions of the ocean, such as the North Atlantic. In these oligotrophic waters, cell growth of larger phytoplankton such as diatoms is limited by low nitrate concentrations. Endosymbiotic bacteria fix nitrogen for their diatom hosts and in turn receive organic carbon from photosynthesis. These symbioses play an important role in global carbon cycling in oligotrophic regions. One known symbiosis, between the diatom Hemiaulus spp. and the cyanobacterium Richelia intracellularis, has been found in the North Atlantic, Mediterranean, and Pacific Ocean. The Richelia endosymbiont is found within the diatom frustule of Hemiaulus spp., and has a reduced genome, likely having lost genes related to pathways the host now provides. Research by Foster et al.
(2011) measured nitrogen fixation by the cyanobacterial symbiont Richelia intracellularis at levels well above its intracellular requirements, and found the cyanobacterium was likely fixing excess nitrogen for the Hemiaulus host cells. Additionally, both host and symbiont cell growth were much greater than in free-living Richelia intracellularis or symbiont-free Hemiaulus spp. The Hemiaulus-Richelia symbiosis is not obligatory, especially in areas with excess nitrogen (nitrogen-replete waters). Richelia intracellularis is also found in Rhizosolenia spp., a diatom found in oligotrophic oceans. Compared to the Hemiaulus host, the endosymbiosis with Rhizosolenia is much more consistent, and Richelia intracellularis is generally found in Rhizosolenia. There are some asymbiotic Rhizosolenia (occurring without an endosymbiont); however, there appear to be mechanisms limiting the growth of these organisms in low-nutrient conditions. Cell division for both the diatom host and the cyanobacterial symbiont can be uncoupled, and the mechanisms for passing bacterial symbionts to daughter cells during cell division are still relatively unknown. Other endosymbioses with nitrogen fixers in open oceans include Calothrix in Chaetoceros spp. and UCYN-A in a prymnesiophyte microalga. The Chaetoceros-Calothrix endosymbiosis is hypothesized to be more recent, as the Calothrix genome is generally intact, whereas other species, such as the UCYN-A symbiont and Richelia, have reduced genomes. This reduction in genome size occurs within nitrogen metabolism pathways, indicating that the endosymbiont species are generating nitrogen for their hosts and losing the ability to use this nitrogen independently. This endosymbiont reduction in genome size might be a step that occurred in the evolution of organelles (above). Of protists: Mixotricha paradoxa is a protozoan that lacks mitochondria. However, spherical bacteria live inside the cell and serve the function of the mitochondria. Mixotricha also has three other species of symbionts that live on the surface of the cell. Paramecium bursaria, a species of ciliate, has a mutualistic symbiotic relationship with the green alga Zoochlorella. The algae live inside the cell, in the cytoplasm. Platyophrya chlorelligera is a freshwater ciliate which harbors Chlorella that performs photosynthesis. Strombidium purpureum is a marine ciliate which uses endosymbiotic purple non-sulphur bacteria for anoxygenic photosynthesis. Paulinella chromatophora is a freshwater amoeboid which has recently (evolutionarily speaking) taken on a cyanobacterium as an endosymbiont. Of protists: Many foraminifera are hosts to several types of algae, such as red algae, diatoms, dinoflagellates and chlorophyta. These endosymbionts can be transmitted vertically to the next generation via asexual reproduction of the host, but because the endosymbionts are larger than the foraminiferal gametes, they need to acquire new algae again after sexual reproduction. Several species of radiolaria have photosynthetic symbionts. In some species the host will sometimes digest algae to keep their population at a constant level. Hatena arenicola is a flagellate protist with a complicated feeding apparatus that feeds on other microbes. But when it engulfs a green alga from the genus Nephroselmis, the feeding apparatus disappears and it becomes photosynthetic. During mitosis the alga is transferred to only one of the two daughter cells, and the cell without the alga needs to start the cycle all over again. Of protists: In 1966, biologist Kwang W.
Jeon found that a lab strain of Amoeba proteus had been infected by bacteria that lived inside the cytoplasmic vacuoles. This infection killed all the protists except for a few individuals. After the equivalent of 40 host generations, the two organisms gradually became mutually interdependent. Over many years of study, it has been confirmed that a genetic exchange between the prokaryotes and protists had occurred. Of vertebrates: The spotted salamander (Ambystoma maculatum) lives in a relationship with the alga Oophila amblystomatis, which grows in its egg cases. Of plants: Plants are diverse photosynthetic eukaryotes with a wide variety of cell morphologies and lifestyles, and are considered primary producers. Plants, like all photosynthetic eukaryotes, depend on an intracellular organelle known as the plastid, or chloroplast (in the case of plants and green algae). The chloroplast is derived from a cyanobacterial primary endosymbiosis over one billion years ago: an oxygenic, photosynthetic, free-living cyanobacterium was engulfed and kept by a heterotrophic protist and eventually evolved into the present intracellular organelle. Plant symbioses can be categorized into epiphytic, endophytic, and mycorrhizal; the mycorrhizal category is only used for fungi. The endosymbiotic relations of plants and their endosymbionts can also be categorized as beneficial, mutualistic, neutral, or pathogenic. Typically, studies of plant symbioses or plant endosymbionts, such as endophytic bacteria or fungi, focus on a single category or species to better understand the biological processes and functions one at a time. But this approach does little to illuminate the complex endosymbiotic interactions and biological functions found in natural habitats. Microorganisms living in association with plants as endosymbionts can enhance the primary productivity of plants, either by producing or by capturing limiting resources. These endosymbionts can also enhance plant productivity by producing toxic metabolites that support plant defenses against herbivores. However, the role and potential of microorganisms in community regulation have long been neglected, perhaps because of their microscopic size and unseen lifestyle. In principle, all vascular plants harbor endosymbionts (e.g., fungi and bacteria); these endosymbionts colonize plant cells and tissues predominantly but not exclusively. Plant endosymbionts can be categorized into different types based on function, relation and location; some common plant endosymbionts are discussed below. Of plants: Plant endosymbionts, also called endophytes, include bacteria, fungi, viruses, protozoa and even microalgae. Endophytes help the plant in biological processes such as growth and development, nutrient uptake and defense against biotic and abiotic stresses like drought, salinity, heat, and herbivores. Of plants: Fungi as plant endosymbionts All vascular plants have fungal and bacterial endophytes or endosymbionts, which colonize predominantly, but not exclusively, the roots.
Fungal endosymbionts can be found throughout plant tissues and, based on their location in the plant, can be defined in multiple ways: fungi living in plant tissues above the ground are termed endophytes, while fungi living below the ground (in the roots) are known as mycorrhizal. Mycorrhizal fungi themselves have different names based on their location inside the root, such as ecto-, endo-, arbuscular, ericoid, etc. Furthermore, fungal endosymbionts living in the roots and extending their extraradical hyphae into the outer rhizosphere are known as ectendosymbionts. Of plants: Arbuscular Mycorrhizal Fungi (AMF) Among plant microbial endosymbionts, arbuscular mycorrhizal fungi (AMF) are the most diverse group. With some exceptions, such as the Ericaceae family, almost all vascular plants harbor AMF endosymbionts, both endo- and ectomycorrhizal. AMF endosymbionts systematically colonize plant roots and help the plant host by supplying soil nutrients, taking organic carbon sources from the plant in return. Plant root exudates contain a diversity of secondary metabolites, especially flavonoids and strigolactones, which act as chemical signals and attract AMF. The arbuscular mycorrhizal fungus Gigaspora margarita not only lives as a plant endosymbiont but also harbors further endosymbionts of its own, intracytoplasmic bacterium-like organisms. Work with isolated pure cultures of AMF endosymbionts has shown that they have different effects on different plant hosts: introducing the AMF of one plant can reduce the net growth of another plant host, which may be related to AMF already present. Furthermore, numerous studies report AMF as promoting plant health and growth and as alleviating abiotic stresses such as salinity, drought, heat, poor nutrition and metal toxicity. Of plants: Endophytic fungi In addition to mycorrhizal endosymbionts, endophytic fungi are also attracting scientific interest, showing great potential not only in mutualistic relationships, in which both the host plant and the fungus benefit, but also in other domains, such as helping plants grow in environments heavily polluted with toxic metals. Fungal endophytes are a taxonomically diverse group of omnipresent fungi divided into different categories based on mode of transmission, biodiversity, in planta colonization and host plant type. These categories are clavicipitaceous and non-clavicipitaceous; the former systematically colonize temperate-season grasses, while the latter colonize higher plants and even roots, and can therefore be divided into further categories. Bacillus amyloliquefaciens is a seed-borne endophyte which produces gibberellins and promotes plant physiology. Bacillus amyloliquefaciens has been evaluated in a study of its growth-promoting potential, where it increased the height of transgenic dwarf rice plants. Similarly, Aureobasidium and Preussia species of endophytic fungi isolated from Boswellia sacra produce the hormone indole-3-acetic acid to promote plant health and development. Aphids are common insects found on most plants, and carnivorous ladybirds are specialized predators of aphids. These ladybirds are used in various programs for pest control. A study examined the effect of plant-endophyte symbiosis on the population and fitness of carnivorous ladybirds.
The plant endophytic fungus Neotyphodium lolii produces alkaloid mycotoxins in response to aphid invasions. Ladybirds feeding on aphids from the infected plants exhibited reduced fertility and abnormal reproductive performance. Adult ladybirds were not significantly affected in terms of their body symmetry and size, but the consistently strong negative effects of endophytes on the overall fitness of ladybirds suggest that the mycotoxins are transmitted along the food chain and affect the top predators. Of plants: Endophytic bacteria Endophytic bacteria belong to a diverse group of plant endosymbionts and are characterized by systematic colonization of internal plant tissues. The most common endophytic bacterial genera include Pseudomonas, Bacillus, Acinetobacter, Actinobacteria and Sphingomonas. Some endophytic bacterial genera additionally belong to the Enterobacteriaceae family (Pirttila and Frank, 2011). Endophytic bacteria mostly colonize leaf tissues from the plant roots, but can also enter the plant through the leaves via leaf stomata (Senthilkumar et al., 2011). Generally, endophytic bacteria are isolated from plant tissues by surface sterilization of the plant tissue in a sterile environment. Moreover, the isolation of endophytic bacteria according to their essential needs in niche occupation has been explored; on this basis the endophytic bacterial community can be divided into "passenger" and "true" endophytes. Passenger endophytic bacteria are those that eventually colonize the inner tissues of the plant through stochastic events, while true endophytes possess adaptive traits that allow them to live in strict association with plants. The association of in vitro-cultivated endophytic bacteria with plants is considered a more intimate relationship, in which the bacteria help the plant acclimatize to its conditions and promote its health and growth. Endophytic bacteria are considered essential plant endosymbionts because virtually all plants harbor them, and these endosymbionts play essential roles in host plant survival. This plant-endosymbiont relationship is important in terms of ecology, evolution and diversity. Moreover, endophytic bacteria such as Sphingomonas sp. and Serratia sp. isolated from arid-land plants regulate endogenous hormone content and promote growth in crop plants. Of plants: Archaea as plant endosymbionts Archaea are members of most microbiomes. While archaea are highly abundant in extreme environments, they are less abundant and diverse in association with eukaryotic hosts. Nevertheless, archaea are a substantial constituent of plant-associated ecosystems in the aboveground and belowground phytobiome, and play a role in the host plant's health, growth and survival under biotic and abiotic stresses. However, only a few studies have investigated the role of archaea in plant health and their potential symbioses in ecosystems. Generally, most plant endosymbiont studies focus on fungal or bacterial endosymbionts using metagenomic approaches. Archaea have been characterized not only in crop plants like rice and maize but also in many aquatic plant species. The abundance of archaea differs between tissues; for example, archaea are more abundant in the rhizosphere than in the phyllosphere and endosphere. This archaeal abundance is strongly associated with the plant species, the environment and the plant's developmental stage.
In a study on the detection of plant-genotype-specific archaeal and bacterial endophytes, archaeal sequences accounted for 35% of the overall sequences (obtained using amplicon sequencing and verified by real-time PCR). The archaeal sequences belonged to the phyla Thaumarchaeota, Crenarchaeota, and Euryarchaeota. Endosymbionts of fungi: Fungi harbor endohyphal bacteria; however, the effects of the bacteria on the fungi are not well studied. Many fungi that harbor these endohyphal bacteria in turn live within plants; these fungi are otherwise known as fungal endophytes. It is hypothesized that the fungus offers a safe haven for the bacteria, and that diverse bacteria colonize these refugia, creating a micro-ecosystem. These interactions are important because they may impact the way that fungi interact with the environment by modulating their phenotypes. The bacteria do this by altering the gene expression of the fungi. For example, Luteibacter sp. has been shown to naturally infect the ascomycetous endophyte Pestalotiopsis sp. isolated from Platycladus orientalis. The Luteibacter sp. influences auxin and enzyme production within its host, which, in turn, may influence the effect the fungus has on its plant host. Another example of a bacterium living in symbiosis with a fungus involves the fungus Mortierella. This soil-dwelling fungus lives in close association with a toxin-producing bacterium, Mycoavidus, which helps the fungus to defend against nematodes. This is a very new, but potentially very important, area of study within symbiosis research. Virus-host associations: The Human Genome Project found several thousand endogenous retroviruses, endogenous viral elements in the genome that closely resemble and can be derived from retroviruses, organized into 24 families.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Differential of the first kind** Differential of the first kind: In mathematics, differential of the first kind is a traditional term used in the theories of Riemann surfaces (more generally, complex manifolds) and algebraic curves (more generally, algebraic varieties), for everywhere-regular differential 1-forms. Given a complex manifold M, a differential of the first kind ω is therefore the same thing as a 1-form that is everywhere holomorphic; on an algebraic variety V that is non-singular it would be a global section of the coherent sheaf Ω^1 of Kähler differentials. In either case the definition has its origins in the theory of abelian integrals. Differential of the first kind: The dimension of the space of differentials of the first kind, by means of this identification, is the Hodge number h^{1,0}. The differentials of the first kind, when integrated along paths, give rise to integrals that generalise the elliptic integrals to all curves over the complex numbers. They include for example the hyperelliptic integrals of type $\int \frac{x^k \, dx}{\sqrt{Q(x)}}$ where Q is a square-free polynomial of any given degree > 4. The allowable power k has to be determined by analysis of the possible pole at the point at infinity on the corresponding hyperelliptic curve. When this is done, one finds that the condition is k ≤ g − 1, or in other words, k at most 1 for degree of Q 5 or 6, at most 2 for degree 7 or 8, and so on (as g = [(deg Q − 1)/2]). Differential of the first kind: Quite generally, as this example illustrates, for a compact Riemann surface or algebraic curve, the Hodge number is the genus g. For the case of algebraic surfaces, this is the quantity known classically as the irregularity q. It is also, in general, the dimension of the Albanese variety, which takes the place of the Jacobian variety. Differentials of the second and third kind: The traditional terminology also included differentials of the second kind and of the third kind. The idea behind this has been supported by modern theories of algebraic differential forms, both from the side of more Hodge theory, and through the use of morphisms to commutative algebraic groups. Differentials of the second and third kind: The Weierstrass zeta function was called an integral of the second kind in elliptic function theory; it is a logarithmic derivative of a theta function, and therefore has simple poles, with integer residues. The decomposition of a (meromorphic) elliptic function into pieces of 'three kinds' parallels the representation as (i) a constant, plus (ii) a linear combination of translates of the Weierstrass zeta function, plus (iii) a function with arbitrary poles but no residues at them. Differentials of the second and third kind: The same type of decomposition exists in general, mutatis mutandis, though the terminology is not completely consistent. In the algebraic group (generalized Jacobian) theory the three kinds are abelian varieties, algebraic tori, and affine spaces, and the decomposition is in terms of a composition series. On the other hand, a meromorphic abelian differential of the second kind has traditionally been one with residues at all poles being zero. One of the third kind is one where all poles are simple. There is a higher-dimensional analogue available, using the Poincaré residue.
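To make the hyperelliptic example concrete, the following is a short worked statement of a standard fact (stated here for illustration; it is not quoted from the article text above):

```latex
% Standard fact, under the usual conventions:
% for the hyperelliptic curve  y^2 = Q(x)  with Q square-free of degree 2g+1 or 2g+2,
% the differentials of the first kind admit the explicit basis
\[
  \omega_k \;=\; \frac{x^{k}\,dx}{\sqrt{Q(x)}} \;=\; \frac{x^{k}\,dx}{y},
  \qquad k = 0, 1, \dots, g-1 ,
\]
% so there are exactly g of them, matching the Hodge number h^{1,0} = g.
% For deg Q = 5 or 6 (g = 2) the allowed powers are k = 0, 1, as stated above.
```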
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Safety life cycle** Safety life cycle: The safety life cycle is the series of phases from initiation and specification of safety requirements, covering design and development of safety features in a safety-critical system, and ending in decommissioning of that system. This article uses software as the context, but the safety life cycle applies to other areas, such as the construction of buildings. In software development, a process is used (the software life cycle) and this process consists of a few phases, typically covering initiation, analysis, design, programming, testing and implementation. The focus is to build the software. Some software has safety concerns while other software does not. For example, a Leave Application System does not have safety requirements. But we are concerned about safety if software that is used to control the components in a plane fails. So for the latter, the question is how safety, being so important, should be managed within the software life cycle. What is the Safety Life Cycle?: The basic concept in building software safety, i.e. safety features in software, is that the safety characteristics and behaviour of the software and system must be specified and designed into the system. The problem for any systems designer lies in reducing the risk to an acceptable level and, of course, the risk tolerated will vary between applications. When a software application is to be used in a safety-related system, this must be borne in mind at all stages in the software life cycle. The process of safety specification and assurance throughout the development and operational phases is sometimes called the 'safety life cycle'. Phases in the Safety Life Cycle: The first stages of the life cycle involve assessing the potential system hazards and estimating the risk they pose; one such method is fault tree analysis. This is followed by a safety requirements specification, which is concerned with identifying safety-critical functions (functional requirements specification) and the safety integrity level for each of these functions. The specification may either describe how the software should behave to minimize the risk or might require that the hazard should never arise. A 'normal' process model is then followed, with particular attention paid to the validation (inspection, testing etc.) of the system. Part of that validation should be an explicit safety validation activity.
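As an illustration of the fault tree analysis mentioned above, here is a minimal sketch in Python. The gate structure, event names and probabilities are hypothetical placeholders chosen only to show how a top-event probability might be estimated from basic-event probabilities under an independence assumption; real safety analyses use dedicated tools and validated failure data.

```python
# Minimal, illustrative fault tree evaluation (hypothetical events and probabilities).
# Assumes the basic events are statistically independent.

def or_gate(*probs):
    """Probability that at least one input event occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs):
    """Probability that all input events occur."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Hypothetical basic events for a controller failure
p_sensor_fault = 1e-4
p_software_bug = 1e-5
p_primary_power_loss = 1e-3
p_backup_power_loss = 1e-2

# Power fails only if primary AND backup fail; the top event occurs if any branch fails
p_power_failure = and_gate(p_primary_power_loss, p_backup_power_loss)
p_top_event = or_gate(p_sensor_fault, p_software_bug, p_power_failure)

print(f"Estimated top-event probability: {p_top_event:.2e}")
```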
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Taribavirin** Taribavirin: Taribavirin (rINN; also known as viramidine, codenamed ICN 3142) is an antiviral drug in Phase III human trials, but not yet approved for pharmaceutical use. It is a prodrug of ribavirin, active against a number of DNA and RNA viruses. Taribavirin has better liver-targeting than ribavirin, and has a shorter life in the body due to less penetration and storage in red blood cells. It is expected eventually to be the drug of choice for viral hepatitis syndromes in which ribavirin is active. These include hepatitis C and perhaps also hepatitis B and yellow fever. Uses: Taribavirin is as active against influenza as ribavirin in animal models, with slightly less toxicity, so it may also eventually replace ribavirin as an anti-influenza agent. History: Taribavirin was first reported in 1973 by J. T. Witkowski et al., then working at ICN Pharmaceuticals, in an attempt to find a more active derivative of ribavirin. Taribavirin is being developed by Valeant Pharmaceuticals International. Valeant is testing the drug as a treatment for chronic hepatitis C. Pharmacology: Note on formulas: The carboxamidine group of this molecule is somewhat basic, and therefore this drug is also known and administered as the hydrochloride salt (with a corresponding HCl chemical formula and different ChemID / PubChem number). At physiologic pH, the positive charge on the molecule from partial protonation of the carboxamidine group contributes to the relative slowness with which the drug crosses cell membranes (such as in red blood cells) until it has been metabolized into ribavirin. In the liver, however, the transformation from carboxamidine to carboxamide happens on first-pass metabolism and contributes to the higher levels of ribavirin found in liver cells and bile when viramidine is administered.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fibroadenoma** Fibroadenoma: Fibroadenomas are benign breast tumours characterized by an admixture of stromal and epithelial tissue. Breasts are made of lobules (milk producing glands) and ducts (tubes that carry the milk to the nipple). These are surrounded by glandular, fibrous and fatty tissues. Fibroadenomas develop from the lobules. The glandular tissue and ducts grow over the lobule to form a solid lump. Fibroadenoma: Since both fibroadenomas and breast lumps as a sign of breast cancer can appear similar, it is recommended to perform ultrasound analyses and possibly tissue sampling with subsequent histopathologic analysis in order to make a proper diagnosis. Unlike typical lumps from breast cancer, fibroadenomas are easy to move, with clearly defined edges. Fibroadenomas are sometimes called breast mice or a breast mouse owing to their high mobility in the breast. Signs and symptoms: Fibroadenomas are benign tumours of the breast, most often present in women in their 20s and 30s. Clinically, fibroadenomas are usually solid breast lumps that are painless, firm or rubbery, mobile, and solitary, round with distinct, smooth borders. People who have a simple fibroadenoma likely do not have an increased risk of developing malignant (harmful) breast cancer compared to the general population. Complex fibroadenomas may increase the risk of breast cancer slightly. In the male breast, fibroepithelial tumors are very rare, and are mostly phyllodes tumors. Exceptionally rare case reports exist of fibroadenomas in the male breast; however, these cases may be associated with antiandrogen treatment. Cause: The cause of fibroadenoma is unknown (idiopathic). A connection between fibroadenomas and reproductive hormones has been suggested, which may explain why they present themselves during reproductive years, increase in size during pregnancy, and regress post-menopause. Higher intake of fruits and vegetables, higher number of live births, lower use of oral contraceptives and moderate exercise are associated with lower frequency of fibroadenomas. Cause: Pathology Cytology The diagnostic findings on needle biopsy consist of abundant stromal cells, which appear as bare bipolar nuclei, throughout the aspirate, and sheets of fairly uniform-size epithelial cells that are typically arranged in either an antler-like pattern or a honeycomb pattern. These epithelial sheets tend to show typical metachromatic blue on Diff-Quik staining. Foam cells and apocrine cells may also be seen, although these are less diagnostic features. Cause: Cellular fibroadenoma, also known as juvenile fibroadenoma, is a variant type of fibroadenoma with increased stromal cellularity. Macroscopic Approximately 90% of fibroadenomas are less than 3 cm in diameter. However, these tumors have the potential to grow, reaching a remarkable size, particularly in young individuals. The tumor is round or ovoid, elastic, and nodular, and has a smooth surface. The cut surface usually appears homogenous and firm, and is grey-white or tan in colour. The pericanalicular type (hard) has a whorled appearance with a complete capsule, while the intracanalicular type (soft) has an incomplete capsule. Microscopic Fibroadenoma of the breast is a benign tumor composed of a biphasic proliferation of both stromal and epithelial components.
This biphasic proliferation can be arranged in two growth patterns: pericanalicular (stromal proliferation around epithelial structures) and intracanalicular (stromal proliferation compressing the epithelial structures into clefts). These tumors characteristically display hypovascular stroma compared to malignant neoplasms. Furthermore, the epithelial proliferation appears in a single terminal ductal unit and describes duct-like spaces surrounded by a fibroblastic stroma. The basement membrane is intact. Molecular pathology: Up to 66% of fibroadenomas harbor mutations in the exon (exon 2) of the mediator complex subunit 12 (MED12) gene. In particular, these mutations are restricted to the stromal component. Diagnosis: A fibroadenoma is usually diagnosed through clinical examination, ultrasound or mammography, and often a biopsy sample of the lump. Suspicious findings on imaging may result in a person needing a biopsy in order to gain a definitive diagnosis. There are three types of biopsies: fine-needle aspiration, core-needle biopsy and surgical biopsy. The method of biopsy depends on the appearance, size and location of the breast mass. Treatment: Fibroadenomas can be expected to shrink naturally, so most are simply monitored. Monitoring fibroadenomas involves regular check-ups to make sure that the breast mass is not growing and is not potentially cancerous. Check-ups involve physical examinations performed every 3–6 months and optional diagnostic imaging performed every 6–12 months for 1–2 years. Generally, surgery is only recommended if the fibroadenoma gets larger or causes increased symptoms. They are removed with a small margin of normal breast tissue if the preoperative clinical investigations are suggestive of the necessity of this procedure. A small amount of normal tissue must be removed in case the lesion turns out to be a phyllodes tumour on microscopic examination. Because needle biopsy is often a reliable diagnostic investigation, some doctors may decide not to operate to remove the lesion, and instead opt for clinical follow-up to observe the lesion over time using clinical examination and mammography to determine the rate of growth, if any, of the lesion. A growth rate of less than sixteen percent per month in women under fifty years of age, and a growth rate of less than thirteen percent per month in women over fifty years of age, have been published as safe growth rates for continued non-operative treatment and clinical observation. Some fibroadenomas respond to treatment with ormeloxifene. Fibroadenomas have not been shown to recur following complete excision or transform into phyllodes tumours following partial or incomplete excision. Treatment: Non-invasive surgical interventions There are several non-invasive options for the treatment of fibroadenomas, including percutaneous radiofrequency ablation (RFA), cryoablation, and percutaneous microwave ablation. With the use of advanced medical imaging, these procedures do not require invasive surgery and have the potential for enhanced cosmetic results compared with conventional surgery. Treatment: Cryoablation The FDA approved cryoablation of a fibroadenoma as a safe, effective, and minimally-invasive alternative to open surgical removal in 2001. During cryoablation, ultrasound imaging is used to guide a probe into the mass of breast tissue. Extremely cold temperatures are then used to destroy the abnormal cells, and over time the cells are reabsorbed into the body.
The procedure can be performed as an outpatient surgery using local anesthesia, and leaves substantially less scarring than open surgical procedures and no breast tissue deformation. The American Society of Breast Surgeons recommends the following criteria to establish a patient as a candidate for cryoablation of a fibroadenoma: The lesion must be sonographically visible. Treatment: The diagnosis of a fibroadenoma must be confirmed histologically. The lesion should be less than 4 cm in diameter. Treatment: High-intensity focused ultrasound High-intensity focused ultrasound (HIFU) is a newer technique for the treatment of malignant and benign tumors of the breast and has shown promising results in the form of complete radiological removal of tumors. An ultrasound beam is focused on a target in the breast and leads to tissue death and protein degradation by raising the temperature in that area. Currently, the use of radiation is recommended in some cases, but HIFU in particular is not part of treatment guidelines. Further research into the usefulness of HIFU, specifically in fibroadenoma, is required before more widespread use of the technique in fibroadenoma. Epidemiology: Of all breast tissue samples taken, fibroadenomas comprise about 50%, and this rate rises to 75% for tissue samples from women under the age of 20 years. Fibroadenomas are more frequent among women in higher socioeconomic classes and darker-skinned people. Body mass index and the number of full-term pregnancies were found to have a negative correlation with the risk of fibroadenomas. There are no known genetic factors that influence the rate of fibroadenomas. The rate of occurrence of fibroadenomas in women has been reported in the literature to range from 7% to 13%.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Helium star** Helium star: A helium star is a class O or B star (blue), which has extraordinarily strong helium lines and weaker than normal hydrogen lines, indicating strong stellar winds and a mass loss of the outer envelope. Extreme helium stars (EHe) entirely lack hydrogen in their spectra. Pure helium stars lie on or near a helium main sequence, analogous to the main sequence formed by the more common hydrogen stars. Previously, a helium star was a synonym for a B-type star, but this usage is considered obsolete. A helium star is also a term for a hypothetical star that could occur if two helium white dwarfs with a combined mass of at least 0.5 solar masses merge and subsequently start nuclear fusion of helium, with a lifetime of a few hundred million years. This may only happen if the two stars of the binary share a common envelope phase. It is believed this is the origin of the extreme helium stars. Helium star: The helium star's great capability of transforming into other stellar objects has been observed over the years. The blue progenitor system of the type-Iax supernova 2012Z in the spiral galaxy NGC 1309 is similar to the progenitor of the Galactic helium nova V445 Puppis, suggesting that SN 2012Z was the explosion of a white dwarf accreting from a helium-star companion. It is observed to have caused a growing helium star that has the potential to transform into a red giant after losing its hydrogen envelope in the future. The helium main sequence is a line in the HR diagram where unevolved helium stars lie. It lies mostly parallel and to the left (i.e. at higher temperatures) of the better-known hydrogen main sequence, although at high masses and luminosities it bends to the right and even crosses the hydrogen main sequence. Therefore, pure helium stars have a maximum temperature, between about 100,000 K and 150,000 K depending on metallicity, because high luminosity causes dramatic inflation of the stellar envelope.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Grouper social club** Grouper social club: Grouper was an online, invite-only social club that used data gathered from Facebook profiles to organize group outings (called Groupers). Matches for the outings were gathered and analyzed first by a computer and then by a human to ensure strong matches. The excursions were planned in venues throughout 25 cities for six people. Groupers consisted of two groups of three friends and could consist of three males and three females, six males, six females, or any other possible combination. Michael Waxman founded the New York-based startup in 2011. The company was run by a staff of 25 people. Time Inc. listed Grouper in its 10 NYC startups to watch for 2013. Three years later, in October 2016, the company shut down. How it works: Grouper was an invite-only service that matched two individuals according to data found – with the permission of the user – on the user's Facebook profile, including age, career, education, etc. The company determined a match between two individuals using both algorithms and its member experience team. A time was then set for the "Grouper". The two parties were asked to each bring two friends. No names, photos, or information were disclosed before the actual meet. Upon arrival at the determined location, the group received a complimentary first round of drinks, including tax and tip, at a reserved table (the cost was included in Grouper's service fee). The company offered arrangements for both opposite- and same-sex Groupers. Communication with users: Grouper featured real-time customer relationship management (CRM). The service also granted users direct contact with the director of membership experience, who engaged users with personalized reminder texts and bits of advice for success on Groupers. The member experience team communicated with users throughout the Grouper. Users received a customized message from the member experience team on the morning after their Grouper, inquiring how the night out went. This feedback was analyzed and stored for future matching. Active cities and expansion: For more than a year after its initial launch, Grouper was only available in New York City. By June 2012, the service had grown to San Francisco and Washington D.C. By September 2012, Grouper had expanded its services to 10 additional cities: Atlanta, Austin, Brooklyn, Boston, Chicago, Los Angeles, Seattle, Miami, Philadelphia, and Dallas. That December the service became available to users in Toronto. By 2013, the company reported its services were officially available in 25 cities in the US and Toronto, including new additions Nashville, Denver, London, and others. Technological developments: In April 2013, the meet-up service released its Grouper iPhone app. The company reported that the app, which featured push notifications and alerts, allowed users to set up a Grouper in as little as an hour, avoiding the long questionnaires other services require their users to fill out. Partnerships and acquisitions: Y Combinator, a company that funds startups, was the primary backer of Grouper. In October 2012, Grouper announced its Hackathon program. The concept involved flying select designers and software developers to New York City for a week-long, expenses-paid trip.
The company invited the designers into their headquarters to work on new Grouper product development and brainstorm with the team. Grouper arranged the trip after establishing partnerships with Airbnb and Hipmunk. Grouper announced a partnership with Uber in January 2013. The alliance was made to encourage users to utilize Uber for transportation on Grouper dates.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kim K. Baldridge** Kim K. Baldridge: Kim K. Baldridge is an American theoretical and computational chemist who works to develop quantum mechanical methodologies and apply quantum chemical methods to problems in life sciences, materials science, and general studies. She is professor and vice dean in the School of Pharmaceutical Science and Technology of Tianjin University in China, where she also directs the High Performance Computing Center. Education and career: Baldridge is originally from Minot, North Dakota, and graduated from Minot State University in 1982. She earned her Ph.D. from North Dakota State University and was a postdoctoral researcher at Wesleyan University. After becoming a scientist at the San Diego Supercomputer Center, she became a visiting professor at the University of California, San Diego in 1995, and continued to work at the San Diego Supercomputer Center and hold an adjunct professorship at the university before becoming a professor of theoretical chemistry at the University of Zurich in Switzerland. She moved to Tianjin University in 2014, following her husband Jay S. Siegel, who became dean of Pharmaceutical Science and Technology at Tianjin in 2013. Research: Much of Baldridge's research involves finding better ways to use quantum mechanical methodologies to study complex molecules. She has published several articles that attempt to help theoretical chemists, some of which include Theoretical Study of Fluorine Atom and Fluorine Ion Attack on Methane and Silane and A Novel Approach to Superimposing Molecules. Baldridge has contributed to quantum mechanical computer programs such as GAMESS (US), QMView, and GEMSTONE. GAMESS stands for General Atomic and Molecular Electronic Structure System, and is an advanced chemistry program for calculations including generalized valence bond, the Hartree-Fock method, and density functional theory. Awards and recognition: Minot State University named Baldridge to their Academic Hall of Fame in 2013. Baldridge is a Fellow of the American Physical Society and a Fellow of the American Association for the Advancement of Science. She was given a 2019 Distinguished Women in Chemistry or Chemical Engineering Award by the International Union of Pure and Applied Chemistry (IUPAC), the only winner from Asia. The award was given for her work in developing and applying quantum chemistry programs in molecular science, for pioneering women-in-science symposia, and for championing chemical safety in China.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nyquist ISI criterion** Nyquist ISI criterion: In communications, the Nyquist ISI criterion describes the conditions which, when satisfied by a communication channel (including responses of transmit and receive filters), result in no intersymbol interference or ISI. It provides a method for constructing band-limited functions to overcome the effects of intersymbol interference. Nyquist ISI criterion: When consecutive symbols are transmitted over a channel by a linear modulation (such as ASK, QAM, etc.), the impulse response (or equivalently the frequency response) of the channel causes a transmitted symbol to be spread in the time domain. This causes intersymbol interference because the previously transmitted symbols affect the currently received symbol, thus reducing tolerance for noise. The Nyquist theorem relates this time-domain condition to an equivalent frequency-domain condition. Nyquist ISI criterion: The Nyquist criterion is closely related to the Nyquist–Shannon sampling theorem, with only a differing point of view. Nyquist criterion: If we denote the channel impulse response as $h(t)$, then the condition for an ISI-free response can be expressed as: $h(nT_s) = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$ for all integers $n$, where $T_s$ is the symbol period. The Nyquist theorem says that this is equivalent to: $\frac{1}{T_s}\sum_{k=-\infty}^{+\infty} H\!\left(f - \frac{k}{T_s}\right) = 1$ for all frequencies $f$, where $H(f)$ is the Fourier transform of $h(t)$. This is the Nyquist ISI criterion. Nyquist criterion: This criterion can be intuitively understood in the following way: frequency-shifted replicas of $H(f)$ must add up to a constant value. This condition is satisfied when the spectrum $H(f)$ has even symmetry, has bandwidth less than or equal to $2/T_s$, and its single sideband has odd symmetry about the cutoff frequency $\pm 1/(2T_s)$. In practice this criterion is applied to baseband filtering by regarding the symbol sequence as weighted impulses (Dirac delta functions). When the baseband filters in the communication system satisfy the Nyquist criterion, symbols can be transmitted over a channel with flat response within a limited frequency band, without ISI. Examples of such baseband filters are the raised-cosine filter, or the sinc filter as the ideal case. Derivation: To derive the criterion, we first express the received signal in terms of the transmitted symbols and the channel response. Let the function $h(t)$ be the channel impulse response and $x[n]$ the symbols to be sent, with a symbol period of $T_s$; the received signal $y(t)$ will be of the form (where noise has been ignored for simplicity): $y(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot h(t - nT_s)$. Sampling this signal at intervals of $T_s$, we can express $y(t)$ as a discrete-time equation: $y[k] = y(kT_s) = \sum_{n=-\infty}^{\infty} x[n] \cdot h[k-n]$. If we write the $h[0]$ term of the sum separately, we can express this as: $y[k] = x[k] \cdot h[0] + \sum_{n \neq k} x[n] \cdot h[k-n]$, and from this we can conclude that if a response $h[n]$ satisfies $h[n] = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$, only one transmitted symbol has an effect on the received $y[k]$ at the sampling instants, thus removing any ISI. This is the time-domain condition for an ISI-free channel. Now we find a frequency-domain equivalent for it. We start by expressing this condition in continuous time: $h(nT_s) = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$ for all integers $n$. We multiply such an $h(t)$ by a sum of Dirac delta functions (impulses) $\delta(t)$ separated by intervals $T_s$. This is equivalent to sampling the response as above, but using a continuous-time expression.
The right side of the condition can then be expressed as one impulse in the origin: $h(t) \cdot \sum_{k=-\infty}^{+\infty} \delta(t - kT_s) = \delta(t)$. Fourier transforming both members of this relationship we obtain: $H(f) * \frac{1}{T_s}\sum_{k=-\infty}^{+\infty} \delta\!\left(f - \frac{k}{T_s}\right) = 1$ and $\frac{1}{T_s}\sum_{k=-\infty}^{+\infty} H\!\left(f - \frac{k}{T_s}\right) = 1$. This is the Nyquist ISI criterion and, if a channel response satisfies it, then there is no ISI between the different samples.
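As an illustration (not part of the original article), the time-domain zero-ISI condition can be checked numerically for the ideal sinc pulse mentioned above, whose samples at nonzero multiples of the symbol period vanish. The symbol period and the range of indices below are arbitrary choices made only for the demonstration.

```python
# Minimal numerical check of the Nyquist ISI criterion for a sinc pulse
# h(t) = sinc(t / Ts), which satisfies h(0) = 1 and h(n*Ts) = 0 for n != 0.
import numpy as np

Ts = 1.0                      # symbol period (arbitrary units)
n = np.arange(-5, 6)          # integer symbol indices to test

# np.sinc(x) = sin(pi x) / (pi x), so sampling at t = n*Ts gives sinc(n)
h_at_symbol_instants = np.sinc(n * Ts / Ts)

for ni, hv in zip(n, h_at_symbol_instants):
    print(f"h({ni:+d}*Ts) = {hv:+.3e}")

# All samples should be ~0 except h(0) = 1, so consecutive symbols do not interfere
assert np.isclose(h_at_symbol_instants[n == 0], 1.0).all()
assert np.allclose(h_at_symbol_instants[n != 0], 0.0, atol=1e-12)
```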
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Korte's third law of apparent motion** Korte's third law of apparent motion: In psychophysics, Korte's third law of apparent motion is an observation relating the phenomenon of apparent motion to the distance and duration between two successively presented stimuli. Formulation: Korte's four laws were first proposed in 1915 by Adolf Korte. The third law, in particular, describes how an increase in the distance between two stimuli narrows the range of interstimulus intervals (ISI) which produce the apparent motion. It holds that the frequency at which the two stimulators are activated in alternation must decrease proportionally as the ISI increases, to ensure the quality of apparent motion. One identified violation of Korte's law occurs if the shortest path between seen arm positions is not anatomically possible. This was demonstrated by Maggie Shiffrar and Jennifer Freyd using a picture that showed a woman demonstrating two positions, which highlighted the problem with taking the shortest path between the alternating postures. The laws were composed of general statements (laws) describing beta movement in the sense of "optimal motion". These outlined several constraints for obtaining the percept of apparent motion between flashes: "(1) larger separations require higher intensities, (2) slower presentation rates require higher intensities, (3) larger separations require slower presentation rates, (4) longer flash durations require shorter intervals". A modern formulation of the law is that the greater the length of the path between two successively presented stimuli, the greater the stimulus onset asynchrony (SOA) must be for an observer to perceive the two stimuli as a single mobile object. Typically, the relationship between distance and minimal SOA is linear. Arguably, Korte's third law is counterintuitive. One might expect that successive stimuli are less likely to be perceived as a single object as both distance and interval increase, and therefore that a negative relationship should be observed instead. In fact, such a negative relationship can be observed as well as Korte's law; which relationship holds depends on speed. Korte's law also implies a constancy of velocity through apparent motion, and it is said that the data do not support this.
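A minimal sketch of the modern linear formulation described above: the smallest SOA at which apparent motion is perceived grows roughly linearly with the spatial separation of the stimuli. The intercept and slope in the code are hypothetical placeholders, not empirically fitted values.

```python
# Illustrative linear model of Korte's third law: minimal SOA vs. separation.
# The coefficients below are hypothetical; real values must be fitted to psychophysical data.

def minimal_soa_ms(separation_deg, intercept_ms=60.0, slope_ms_per_deg=15.0):
    """Smallest stimulus onset asynchrony (ms) at which apparent motion is seen,
    modeled as a linear function of the separation (degrees of visual angle)."""
    return intercept_ms + slope_ms_per_deg * separation_deg

for sep in (1.0, 2.0, 4.0, 8.0):
    print(f"separation {sep:>4.1f} deg -> minimal SOA ~ {minimal_soa_ms(sep):.0f} ms")
```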
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kerplunk experiment** Kerplunk experiment: The Kerplunk experiment was a famous stimulus-and-response experiment conducted on rats, demonstrating the ability to turn voluntary motor responses into a conditioned response. The purpose of the experiment was to show that maze learning could rely on kinaesthetic feedback rather than guidance through external stimuli. It was conducted in 1907 by John B. Watson and Harvey A. Carr and was named after the sound the rats made after running into the end of the maze. The study helped support the chain-of-responses hypothesis proposed by Watson. The study's findings would later give credibility to stimulus-and-response interpretations in which rewards work by strengthening the learned ability to show a habitual motor action in the presence of a particular stimulus. The experiment: Rats were trained to run in a straight, alley-like maze for a food reward which was located at the end of the alley. Watson found that once a rat was well trained, it performed almost automatically, on reflex. Upon learning the maze over time, the rats started to run faster through each length and turn. Through the stimulus of the maze, their behavior became a series of associated movements driven by kinaesthetic consequences rather than by stimuli from the outside world. This routine continued until the length of the path was changed, becoming either longer or shorter. If the conditioned rats were released into an alleyway or path that was shortened, they would run straight into the end wall, making a "kerplunk" sound. The first trial found that they would run at full speed, passing up the food that had been moved closer. Shortening the alleyway and moving the food closer was an early signal that was ignored by the rats. If the path was longer, the rats would run as usual until they reached their customary distance, the distance at which the food would normally be. They would then pause to sniff the area even though they had not reached the end of the alley, often ignoring food that was farther away.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Panasonic Lumix G 25mm F1.7 ASPH** Panasonic Lumix G 25mm F1.7 ASPH: The Panasonic Lumix G 25mm F1.7 ASPH is a fixed focal length interchangeable camera lens announced by Panasonic on September 2, 2015. It has a stepper-motor autofocus and electronic aperture control. The focus ring is not mechanically connected to the lens elements, which means that manual focus is also controlled through the autofocus motor. The focus ring has a variable transmission depending on how fast it is turned. Panasonic Lumix G 25mm F1.7 ASPH: It is a product in the Micro Four Thirds system. That means it is fully compatible with every Micro Four Thirds camera body, not just Panasonic, but Olympus, Xiaomi, Kodak and Blackmagic cameras as well. This also means that it is made for cameras with a 17.3 × 13 mm (Four Thirds) image sensor, which has a 2× crop compared to 35mm cameras. Therefore, this lens has an equivalent focal length of 50mm. Panasonic Lumix G 25mm F1.7 ASPH: This lens has an 'HD' badge on it, which means it is capable of high-quality video recording with silent and smooth autofocus. It has eight lens elements in seven groups. Two of them are aspherical, which are used to maintain image quality while using fewer elements. There is also one ultra-high refractive index (UHR) element. The lens has a seven-blade aperture diaphragm for better stopped-down background blur quality. The minimum focus distance of this lens is 25 cm (0.25 m; 0.82 ft) and the maximum magnification is 0.14× (0.28× in 35mm equivalent terms), so it is not a macro lens at all. The Panasonic 25mm F1.7 lens is available in two colors: black and silver.
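The 35mm-equivalent figures quoted above follow directly from the 2× crop factor of the Four Thirds sensor. A short sketch of that arithmetic, using only the values stated in the text:

```python
# Equivalence arithmetic for a Micro Four Thirds lens (2x crop factor).
crop_factor = 2.0          # Four Thirds sensor vs. full-frame 35mm
focal_length_mm = 25.0     # actual focal length of the lens
max_magnification = 0.14   # actual maximum magnification

equivalent_focal_length = focal_length_mm * crop_factor        # 50 mm field of view
equivalent_magnification = max_magnification * crop_factor     # 0.28x framing equivalent

print(f"35mm-equivalent focal length: {equivalent_focal_length:.0f} mm")
print(f"35mm-equivalent magnification: {equivalent_magnification:.2f}x")
```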
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Satellite galaxy** Satellite galaxy: A satellite galaxy is a smaller companion galaxy that travels on bound orbits within the gravitational potential of a more massive and luminous host galaxy (also known as the primary galaxy). Satellite galaxies and their constituents are bound to their host galaxy, in the same way that planets within our own solar system are gravitationally bound to the Sun. While most satellite galaxies are dwarf galaxies, satellite galaxies of large galaxy clusters can be much more massive. The Milky Way is orbited by about fifty satellite galaxies, the largest of which is the Large Magellanic Cloud. Satellite galaxy: Moreover, satellite galaxies are not the only astronomical objects that are gravitationally bound to larger host galaxies (see globular clusters). For this reason, astronomers have defined galaxies as gravitationally bound collections of stars that exhibit properties that cannot be explained by a combination of baryonic matter (i.e. ordinary matter) and Newton's laws of gravity. For example, measurements of the orbital speed of stars and gas within spiral galaxies result in a velocity curve that deviates significantly from the theoretical prediction. This observation has motivated various explanations such as the theory of dark matter and modifications to Newtonian dynamics. Therefore, despite also being satellites of host galaxies, globular clusters should not be mistaken for satellite galaxies. Satellite galaxies are not only more extended and diffuse compared to globular clusters, but are also enshrouded in massive dark matter halos that are thought to have been endowed to them during the formation process. Satellite galaxies generally lead tumultuous lives due to their chaotic interactions with both the larger host galaxy and other satellites. For example, the host galaxy is capable of disrupting the orbiting satellites via tidal and ram pressure stripping. These environmental effects can remove large amounts of cold gas from satellites (i.e. the fuel for star formation), and this can result in satellites becoming quiescent in the sense that they have ceased to form stars. Moreover, satellites can also collide with their host galaxy resulting in a minor merger (i.e. merger event between galaxies of significantly different masses). On the other hand, satellites can also merge with one another resulting in a major merger (i.e. merger event between galaxies of comparable masses). Galaxies are mostly composed of empty space, interstellar gas and dust, and therefore galaxy mergers do not necessarily involve collisions between objects from one galaxy and objects from the other; however, these events generally result in much more massive galaxies. Consequently, astronomers seek to constrain the rate at which both minor and major mergers occur to better understand the formation of gigantic structures of gravitationally bound conglomerations of galaxies such as galactic groups and clusters. History: Early 20th century Prior to the 20th century, the notion that galaxies existed beyond our Milky Way was not well established. In fact, the idea was so controversial at the time that it led to what is now heralded as the "Shapley-Curtis Great Debate", aptly named after the astronomers Harlow Shapley and Heber Doust Curtis, who debated the nature of "nebulae" and the size of the Milky Way at the National Academy of Sciences on April 26, 1920.
Shapley argued that the Milky Way was the entire universe (spanning over 100,000 light-years or 30 kiloparsecs across) and that all of the observed "nebulae" (currently known as galaxies) resided within this region. On the other hand, Curtis argued that the Milky Way was much smaller and that the observed nebulae were in fact galaxies similar to our own Milky Way. This debate was not settled until late 1923, when the astronomer Edwin Hubble measured the distance to M31 (currently known as the Andromeda galaxy) using Cepheid variable stars. By measuring the period of these stars, Hubble was able to estimate their intrinsic luminosity, and upon combining this with their measured apparent magnitude he estimated a distance of 300 kpc, which was an order of magnitude larger than Shapley's estimated size of the universe. This measurement verified that not only was the universe much larger than previously expected, but it also demonstrated that the observed nebulae were actually distant galaxies with a wide range of morphologies (see Hubble sequence). History: Modern times Despite Hubble's discovery that the universe was teeming with galaxies, a majority of the satellite galaxies of the Milky Way and the Local Group remained undetected until the advent of modern astronomical surveys such as the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES). In particular, the Milky Way is currently known to host 59 satellite galaxies (see satellite galaxies of the Milky Way); however, two of these satellites, known as the Large Magellanic Cloud and the Small Magellanic Cloud, have been observable in the Southern Hemisphere with the unaided eye since ancient times. Nevertheless, modern cosmological theories of galaxy formation and evolution predict a much larger number of satellite galaxies than what is observed (see missing satellites problem). However, more recent high-resolution simulations have demonstrated that the current number of observed satellites poses no threat to the prevalent theory of galaxy formation. History: Motivations to study satellite galaxies Spectroscopic, photometric and kinematic observations of satellite galaxies have yielded a wealth of information that has been used to study, among other things, the formation and evolution of galaxies, the environmental effects that enhance and diminish the rate of star formation within galaxies and the distribution of dark matter within the dark matter halo. As a result, satellite galaxies serve as a testing ground for predictions made by cosmological models. Classification of satellite galaxies: As mentioned above, satellite galaxies are generally categorized as dwarf galaxies and therefore follow a similar Hubble classification scheme as their host, with the minor addition of a lowercase "d" in front of the various standard types to designate the dwarf galaxy status. These types include dwarf irregular (dI), dwarf spheroidal (dSph), dwarf elliptical (dE) and dwarf spiral (dS). However, out of all of these types it is believed that dwarf spirals are not satellites, but rather dwarf galaxies that are only found in the field. Classification of satellite galaxies: Dwarf irregular satellite galaxies Dwarf irregular satellite galaxies are characterized by their chaotic and asymmetric appearance, low gas fractions, high star formation rate and low metallicity. Three of the closest dwarf irregular satellites of the Milky Way include the Small Magellanic Cloud, Canis Major Dwarf, and the newly discovered Antlia 2.
Dwarf elliptical satellite galaxies Dwarf elliptical satellite galaxies are characterized by their oval appearance on the sky, disordered motion of constituent stars, moderate to low metallicity, low gas fractions and old stellar population. Dwarf elliptical satellite galaxies in the Local Group include NGC 147, NGC 185, and NGC 205, which are satellites of our neighboring Andromeda galaxy. Classification of satellite galaxies: Dwarf spheroidal satellite galaxies Dwarf spheroidal satellite galaxies are characterized by their diffuse appearance, low surface brightness, high mass-to-light ratio (i.e. dark matter dominated), low metallicity, low gas fractions and old stellar population. Moreover, dwarf spheroidals make up the largest population of known satellite galaxies of the Milky Way. A few of these satellites include Hercules, Pisces II and Leo IV, which are named after the constellations in which they are found. Classification of satellite galaxies: Transitional types As a result of minor mergers and environmental effects, some dwarf galaxies are classified as intermediate or transitional type satellite galaxies. For example, Phoenix and LGS3 are classified as intermediate types that appear to be transitioning from dwarf irregulars to dwarf spheroidals. Furthermore, the Large Magellanic Cloud is considered to be in the process of transitioning from a dwarf spiral to a dwarf irregular. Formation of satellite galaxies: According to the standard model of cosmology (known as the ΛCDM model), the formation of satellite galaxies is intricately connected to the observed large-scale structure of the Universe. Specifically, the ΛCDM model is based on the premise that the observed large-scale structure is the result of a bottom-up hierarchical process that began after the recombination epoch, in which electrically neutral hydrogen atoms were formed as a result of free electrons and protons binding together. As the ratio of neutral hydrogen to free protons and electrons grew, so did fluctuations in the baryonic matter density. These fluctuations rapidly grew to the point that they became comparable to dark matter density fluctuations. Moreover, the smaller mass fluctuations grew to nonlinearity, became virialized (i.e. reached gravitational equilibrium), and were then hierarchically clustered within successively larger bound systems. The gas within these bound systems condensed and rapidly cooled into cold dark matter halos that steadily increased in size by coalescing together and accumulating additional gas via a process known as accretion. The largest bound objects formed from this process are known as superclusters, such as the Virgo Supercluster, that contain smaller clusters of galaxies that are themselves surrounded by even smaller dwarf galaxies. Furthermore, in this model dwarf galaxies are considered to be the fundamental building blocks that give rise to more massive galaxies, and the satellites that are observed around these galaxies are the dwarfs that have yet to be consumed by their host. Formation of satellite galaxies: Accumulation of mass in dark matter halos A crude yet useful method to determine how dark matter halos progressively gain mass through mergers of less massive halos is provided by the excursion set formalism, also known as the extended Press-Schechter formalism (EPS).
Among other things, the EPS formalism can be used to infer the fraction of mass $M_2$ that originated from collapsed objects of a specific mass at an earlier time $t_1 < t_2$ by applying the statistics of Markovian random walks to the trajectories of mass elements in $(S, \delta)$-space, where $S = \sigma^2(M)$ and $\delta = \frac{\rho(\mathbf{x}) - \bar{\rho}}{\bar{\rho}}$ represent the mass variance and overdensity, respectively. Formation of satellite galaxies: In particular the EPS formalism is founded on the ansatz that states "the fraction of trajectories with a first upcrossing of the barrier $\delta_S = \delta_{\mathrm{critical}}(t)$ at $S > S_1 = \sigma^2(M_1)$ is equal to the mass fraction at time $t$ that is incorporated in halos with masses $M < M_1$". Consequently, this ansatz ensures that each trajectory will upcross the barrier $\delta_S = \delta_{\mathrm{critical}}(t)$ given some arbitrarily large $S$, and as a result it guarantees that each mass element will ultimately become part of a halo. Furthermore, the fraction of mass $M_2$ that originated from collapsed objects of a specific mass at an earlier time $t_1 < t_2$ can be used to determine the average number of progenitors at time $t_1$ within the mass interval $(M_1, M_1 + \mathrm{d}M_1)$ that have merged to produce a halo of mass $M_2$ at time $t_2$. This is accomplished by considering a spherical region of mass $M_2$ with a corresponding mass variance $S_2 = \sigma^2(M_2)$ and linear overdensity $\delta_2 = \delta_c(t_2) = \frac{\delta_c}{D(t_2)}$, where $D(t_2)$ is the linear growth rate normalized to unity at time $t_2$ and $\delta_c$ is the critical overdensity at which the initial spherical region has collapsed to form a virialized object. Mathematically, the progenitor mass function is expressed in terms of $\nu_{12} = \frac{\delta_1 - \delta_2}{\sqrt{S_1 - S_2}}$ and the Press-Schechter multiplicity function $f_{PS}(\nu_{12}) = \sqrt{\frac{2}{\pi}}\,\nu_{12}\exp\!\left(-\frac{\nu_{12}^2}{2}\right)$, which describes the fraction of mass associated with halos in a range $\ln(\nu_{12})$. Various comparisons of the progenitor mass function with numerical simulations have concluded that good agreement between theory and simulation is obtained only when $\Delta t = t_2 - t_1$ is small; otherwise the mass fraction in high-mass progenitors is significantly underestimated, which can be attributed to crude assumptions such as assuming a perfectly spherical collapse model and using a linear density field as opposed to a non-linear density field to characterize collapsed structures. Nevertheless, the utility of the EPS formalism is that it provides a computationally friendly approach for determining properties of dark matter halos. Formation of satellite galaxies: Halo merger rate Another utility of the EPS formalism is that it can be used to determine the rate at which a halo of initial mass $M$ merges with a halo of mass between $M$ and $M + \Delta M$. This rate is expressed in terms of $S_1 = \sigma^2(M)$ and $S_2 = \sigma^2(M + \Delta M)$. In general the change in mass, $\Delta M$, is the sum of a multitude of minor mergers.
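As an aside (not part of the original article), the Press-Schechter multiplicity function defined above is straightforward to evaluate numerically. The sketch below uses made-up example values for the overdensities and mass variances purely for illustration.

```python
# Minimal numerical sketch of the Press-Schechter multiplicity function
# f_PS(nu) = sqrt(2/pi) * nu * exp(-nu^2 / 2), with nu_12 = (d1 - d2) / sqrt(S1 - S2).
import math

def nu_12(delta_1, delta_2, S_1, S_2):
    """Scaled overdensity difference between an earlier epoch (1) and a later epoch (2)."""
    return (delta_1 - delta_2) / math.sqrt(S_1 - S_2)

def f_ps(nu):
    """Press-Schechter multiplicity function."""
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu * nu)

# Example (hypothetical) values: delta_1 > delta_2 and S_1 > S_2 for a progenitor
nu = nu_12(delta_1=2.0, delta_2=1.0, S_1=3.0, S_2=1.5)
print(f"nu_12 = {nu:.3f}, f_PS(nu_12) = {f_ps(nu):.4f}")
```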
Nevertheless, given an infinitesimally small time interval dt, it is reasonable to consider the change in mass to be due to a single merger event in which M1 transitions to M2. Galactic cannibalism (minor mergers): Throughout their lifespan, satellite galaxies orbiting in the dark matter halo experience dynamical friction and consequently descend deeper into the gravitational potential of their host as a result of orbital decay. Throughout the course of this descent, stars in the outer region of the satellite are steadily stripped away due to tidal forces from the host galaxy. This process, which is an example of a minor merger, continues until the satellite is completely disrupted and consumed by the host galaxy. Evidence of this destructive process can be observed in stellar debris streams around distant galaxies. Galactic cannibalism (minor mergers): Orbital decay rate As satellites orbit their host and interact with each other, they progressively lose small amounts of kinetic energy and angular momentum due to dynamical friction. Consequently, the distance between the host and the satellite progressively decreases as the orbit decays. This process continues until the satellite ultimately merges with the host galaxy. Furthermore, if we assume that the host is a singular isothermal sphere (SIS) and the satellite is a SIS that is sharply truncated at the radius at which it begins to accelerate towards the host (known as the Jacobi radius), then the time tfric that it takes for dynamical friction to result in a minor merger can be approximated in terms of ri, the initial orbital radius at t = 0, σM, the velocity dispersion of the host galaxy, σs, the velocity dispersion of the satellite, and ln Λ, the Coulomb logarithm, defined as ln Λ = ln(bmax/max(rh, GM/vtyp²)), with bmax, rh and vtyp² respectively representing the maximum impact parameter, the half-mass radius and the typical relative velocity. Moreover, both the half-mass radius and the typical relative velocity can be rewritten in terms of the orbital radius and the velocity dispersions, such that rh = σs r/(2^(3/2) σM) and GM/vtyp² = σs³ r/(√2 σM³). Using the Faber-Jackson relation, the velocity dispersion of satellites and their host can be estimated individually from their observed luminosity. Therefore, using these relations it is possible to estimate the time that it takes for a satellite galaxy to be consumed by the host galaxy. Galactic cannibalism (minor mergers): Minor merger driven star formation In 1978, pioneering work involving the measurement of the colors of merger remnants by the astronomers Beatrice Tinsley and Richard Larson gave rise to the notion that mergers enhance star formation. Their observations showed that an anomalous blue color was associated with the merger remnants. Prior to this discovery, astronomers had already classified stars (see stellar classifications) and it was known that young, massive stars were bluer due to their light radiating at shorter wavelengths. Furthermore, it was also known that these stars live short lives due to their rapid consumption of fuel to remain in hydrostatic equilibrium.
Therefore, the observation that merger remnants were associated with large populations of young, massive stars suggested that mergers induced rapid star formation (see starburst galaxy). Since this discovery was made, various observations have verified that mergers do indeed induce vigorous star formation. Although major mergers are far more effective at driving star formation than minor mergers, minor mergers are significantly more common than major mergers, so the cumulative effect of minor mergers over cosmic time is postulated to also contribute heavily to bursts of star formation. Galactic cannibalism (minor mergers): Minor mergers and the origins of thick disk components Observations of edge-on galaxies suggest the universal presence of a thin disk, thick disk and halo component of galaxies. Despite the apparent ubiquity of these components, there is still ongoing research to determine whether the thick disk and thin disk are truly distinct components. Nevertheless, many theories have been proposed to explain the origin of the thick disk component, and among these theories is one that involves minor mergers. In particular, it is speculated that the preexisting thin disk component of a host galaxy is heated during a minor merger and consequently the thin disk expands to form a thicker disk component.
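The quantities introduced in the formation and orbital-decay discussions above lend themselves to a short numerical illustration. The sketch below, in Python with purely hypothetical input values, evaluates the Press-Schechter multiplicity function f_PS(ν12) and the Coulomb logarithm ln Λ exactly as they are defined above, using the SIS relations quoted for the half-mass radius and GM/vtyp²; it is a minimal sketch of those definitions, not a full EPS or dynamical-friction calculation.

```python
import math

def nu12(delta1, delta2, S1, S2):
    """nu_12 = (delta_1 - delta_2) / sqrt(S_1 - S_2), as used in the EPS progenitor statistics."""
    return (delta1 - delta2) / math.sqrt(S1 - S2)

def f_ps(nu):
    """Press-Schechter multiplicity function f_PS(nu) = sqrt(2/pi) * nu * exp(-nu^2 / 2)."""
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-nu ** 2 / 2.0)

def coulomb_log(b_max, r_h, gm_over_vtyp2):
    """ln(Lambda) = ln(b_max / max(r_h, G*M / v_typ^2)), per the orbital-decay discussion."""
    return math.log(b_max / max(r_h, gm_over_vtyp2))

# Illustrative numbers only (not observational values).
nu = nu12(delta1=1.9, delta2=1.686, S1=2.5, S2=1.0)
print(f"nu_12 = {nu:.3f},  f_PS = {f_ps(nu):.3f}")

# Truncated-SIS satellite: r_h = sigma_s * r / (2**1.5 * sigma_M), GM/v_typ^2 = sigma_s**3 * r / (sqrt(2) * sigma_M**3).
sigma_s, sigma_M, r = 30.0, 150.0, 50.0          # km/s, km/s, kpc (hypothetical)
r_h = sigma_s * r / (2.0 ** 1.5 * sigma_M)
gm_over_vtyp2 = sigma_s ** 3 * r / (math.sqrt(2.0) * sigma_M ** 3)
# Take the orbital radius as a stand-in for the maximum impact parameter b_max.
print(f"ln Lambda = {coulomb_log(b_max=r, r_h=r_h, gm_over_vtyp2=gm_over_vtyp2):.2f}")
```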
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Descent direction** Descent direction: In optimization, a descent direction is a vector p∈Rn that points towards a local minimum x∗ of an objective function f:Rn→R. Computing x∗ by an iterative method, such as line search, defines a descent direction pk∈Rn at the kth iterate to be any pk such that ⟨pk,∇f(xk)⟩<0, where ⟨,⟩ denotes the inner product. The motivation for such an approach is that small steps along pk guarantee that f is reduced, by Taylor's theorem. Descent direction: Using this definition, the negative of a non-zero gradient is always a descent direction, as ⟨−∇f(xk),∇f(xk)⟩=−⟨∇f(xk),∇f(xk)⟩<0. Numerous methods exist to compute descent directions, all with differing merits, such as gradient descent or the conjugate gradient method. More generally, if P is a positive definite matrix, then pk=−P∇f(xk) is a descent direction at xk. This generality is used in preconditioned gradient descent methods.
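The defining inequality ⟨pk, ∇f(xk)⟩ < 0 is easy to check numerically. The sketch below, assuming a small hypothetical quadratic objective, verifies that both the negative gradient and a preconditioned direction −P∇f(xk) (for a positive definite P) satisfy it; it illustrates the definition only and is not taken from any particular optimization library.

```python
import numpy as np

def is_descent_direction(p, grad):
    """A direction p is a descent direction at x_k iff <p, grad f(x_k)> < 0."""
    return float(np.dot(p, grad)) < 0.0

# Hypothetical quadratic objective f(x) = 0.5 * x^T A x - b^T x, with gradient A x - b.
A = np.array([[3.0, 0.5], [0.5, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])
x_k = np.array([2.0, 1.0])
grad = A @ x_k - b

# The negative gradient is always a descent direction (for a non-zero gradient).
print(is_descent_direction(-grad, grad))          # True

# More generally, p_k = -P grad f(x_k) is a descent direction for any positive definite P.
P = np.array([[1.0, 0.0], [0.0, 4.0]])            # a simple (diagonal) preconditioner
print(is_descent_direction(-P @ grad, grad))      # True
```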
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Excited state** Excited state: In quantum mechanics, an excited state of a system (such as an atom, molecule or nucleus) is any quantum state of the system that has a higher energy than the ground state (that is, more energy than the absolute minimum). Excitation refers to an increase in energy level above a chosen starting point, usually the ground state, but sometimes an already excited state. The temperature of a group of particles is indicative of the level of excitation (with the notable exception of systems that exhibit negative temperature). Excited state: The lifetime of a system in an excited state is usually short: spontaneous or induced emission of a quantum of energy (such as a photon or a phonon) usually occurs shortly after the system is promoted to the excited state, returning the system to a state with lower energy (a less excited state or the ground state). This return to a lower energy level is often loosely described as decay and is the inverse of excitation. Excited state: Long-lived excited states are often called metastable. Long-lived nuclear isomers and singlet oxygen are two examples of this. Atomic excitation: Atoms can be excited by heat, electricity, or light. The hydrogen atom provides a simple example of this concept. Atomic excitation: The ground state of the hydrogen atom has the atom's single electron in the lowest possible orbital (that is, the spherically symmetric "1s" wave function, which, so far, has been demonstrated to have the lowest possible quantum numbers). By giving the atom additional energy (for example, by absorption of a photon of an appropriate energy), the electron moves into an excited state (one with one or more quantum numbers greater than the minimum possible). If the photon has too much energy, the electron will cease to be bound to the atom, and the atom will become ionized. Atomic excitation: After excitation the atom may return to the ground state or a lower excited state, by emitting a photon with a characteristic energy. Emission of photons from atoms in various excited states leads to an electromagnetic spectrum showing a series of characteristic emission lines (including, in the case of the hydrogen atom, the Lyman, Balmer, Paschen and Brackett series). An atom in a high excited state is termed a Rydberg atom. A system of highly excited atoms can form a long-lived condensed excited state e.g. a condensed phase made completely of excited atoms: Rydberg matter. Perturbed gas excitation: A collection of molecules forming a gas can be considered in an excited state if one or more molecules are elevated to kinetic energy levels such that the resulting velocity distribution departs from the equilibrium Boltzmann distribution. This phenomenon has been studied in the case of a two-dimensional gas in some detail, analyzing the time taken to relax to equilibrium. Calculation of excited states: Excited states are often calculated using coupled cluster, Møller–Plesset perturbation theory, multi-configurational self-consistent field, configuration interaction, and time-dependent density functional theory. Excited-state absorption: The excitation of a system (an atom or molecule) from one excited state to a higher-energy excited state with the absorption of a photon is called excited-state absorption (ESA). Excited-state absorption is possible only when an electron has been already excited from the ground state to a lower excited state. 
The excited-state absorption is usually an undesired effect, but it can be useful in upconversion pumping. Excited-state absorption measurements are done using pump–probe techniques such as flash photolysis. However, it is not easy to measure them compared to ground-state absorption, and in some cases complete bleaching of the ground state is required to measure excited-state absorption. Reaction: A further consequence of excited-state formation may be reaction of the atom or molecule in its excited state, as in photochemistry.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Old Earth creationism** Old Earth creationism: Old Earth creationism (OEC) is an umbrella of theological views encompassing certain varieties of creationism which may include day-age creationism, gap creationism, progressive creationism, and sometimes theistic evolutionism. Old Earth creationism: Broadly speaking, OEC usually occupies a middle ground between young Earth creationism (YEC) and theistic evolution (TE). In contrast to YEC, it is typically more compatible with the scientific consensus on the issues of physics, chemistry, geology, and the age of the Earth. However, like YEC and in contrast with TE, some forms of it reject macroevolution, claiming it is biologically untenable and not supported by the fossil record, and the concept of universal descent from a last universal common ancestor. Old Earth creationism: For a long time Evangelical creationists generally subscribed to Old Earth Creationism, until 1961, when John C. Whitcomb and Henry M. Morris published the book The Genesis Flood, which caused the Young Earth creationist view to become prominent. History: Augustine interpreted the days of Genesis allegorically, a view that also influenced Gregory the Great, Bede and Isidore of Seville. Augustine was not alone in viewing the days of Genesis as allegorical; others include Didymus the Blind, possibly Basil the Great, Clement of Alexandria, Origen and Athanasius, who interpreted the days of the Genesis narrative allegorically. Cyprian argued that each day of Genesis consisted of 1000 years. Irenaeus and Justin Martyr suggested that the days of Genesis could have been long epochs of 1000 years, quoting Psalm 90:4 and perhaps 2 Peter. According to Dr. Hugh Ross, Thomas Aquinas also denied that the Genesis account is literal, with six 24-hour days. Thomas Chalmers popularized gap creationism, which is a form of Old Earth Creationism. Additionally, it was advocated by the Scofield Reference Bible, which caused the theory to survive longer. Probably the most famous day-age creationist was American politician, anti-evolution campaigner and Scopes Trial prosecutor William Jennings Bryan. Unlike many of his conservative followers, Bryan was not a strict biblical literalist, and had no objection to "evolution before man but for the fact that a concession as to the truth of evolution up to man furnishes our opponents with an argument which they are quick to use, namely, if evolution accounts for all the species up to man, does it not raise a presumption in behalf of evolution to include man?" He considered defining the days in Genesis 1 to be twenty-four hours to be a pro-evolution straw man argument to make attacking creationists easier, and admitted under questioning at the Scopes trial that the world was far older than six thousand years, and that the days of creation were probably longer than twenty-four hours each. American Baptist preacher and anti-evolution campaigner William Bell Riley, "The Grand Old Man of Fundamentalism", founder of the World Christian Fundamentals Association and of the Anti-Evolution League of America, was another prominent day-age creationist in the first half of the 20th century, who defended this position in a famous debate with friend and prominent young Earth creationist Harry Rimmer.
Types: Gap creationism Gap creationism is a form of old Earth creationism which posits the belief that the six-yom creation period, as described in the Book of Genesis, involved six literal 24-hour days, but that there was a gap of time between two distinct creations in the first and second verses of Genesis, which the theory states explains many scientific observations, including the age of the Earth. This view was popularized in 1909 by the Scofield Reference Bible. Types: Progressive creationism Progressive creationism is the religious belief that God created new forms of life gradually over a period of hundreds of millions of years. As a form of Old Earth creationism, it accepts mainstream geological and cosmological estimates for the age of the Earth, some tenets of biology such as microevolution as well as archaeology to make its case. In this view creation occurred in rapid bursts in which all "kinds" of plants and animals appear in stages lasting millions of years. The bursts are followed by periods of stasis or equilibrium to accommodate new arrivals. These bursts represent instances of God creating new types of organisms by divine intervention. As viewed from the archaeological record, progressive creationism holds that "species do not gradually appear by the steady transformation of its ancestors; [but] appear all at once and "fully formed." Thus the evidence for macroevolution is claimed to be false, but microevolution is accepted as a genetic parameter designed by the Creator into the fabric of genetics to allow for environmental adaptations and survival. Generally, it is viewed by proponents as a middle ground between literal creationism and evolution. Approaches to Genesis 1: Old Earth Christian creationists may approach the creation accounts of Genesis in a number of different ways. Approaches to Genesis 1: Framework interpretation The framework interpretation (or framework hypothesis) notes that there is a pattern or "framework" present in the Genesis account and that, because of this, the account may not have been intended as a strict chronological record of creation. Instead, the creative events may be presented in a topical order. This view is broad enough that proponents of other old earth views (such as many Day-Age creationists) have no problem with many of the key points put forward by the hypothesis, though they might believe that there is a certain degree of chronology present. Approaches to Genesis 1: Day-age creationism Day-age creationism is an effort to reconcile the literal Genesis account of creation with modern scientific theories on the age of the universe, the Earth, life, and humans. It holds that the six days referred to in the Genesis account of creation are not ordinary 24-hour days, but rather are much longer periods (of thousands or millions of years). The Genesis account is then interpreted as an account of the process of cosmic evolution, providing a broad base on which any number of theories and interpretations are built. Proponents of the day-age theory can be found among theistic evolutionists and progressive creationists. Approaches to Genesis 1: The day-age theory tries to reconcile these views by arguing that the creation "days" were not ordinary 24-hour days, but actually lasted for long periods of time—or as the theory's name implies: the "days" each lasted an age. 
Most advocates of old Earth creationism hold that the six days referred to in the creation account given in Genesis are not ordinary 24-hour days, as the Hebrew word for "day" (yom) can be interpreted in this context to mean a long period of time (thousands or millions of years) rather than a 24-hour day. According to this view, the sequence and duration of the creation "days" is representative or symbolic of the sequence and duration of events that scientists theorize to have happened, such that Genesis can be read as a summary of modern science, simplified for the benefit of pre-scientific humans. Approaches to Genesis 1: Cosmic time Gerald Schroeder puts forth a view which reconciles 24-hour creation days with an age of billions of years for the universe by noting, as creationist Phillip E. Johnson summarizes in his article "What Would Newton Do?": "the Bible speaks of time from the viewpoint of the universe as a whole, which Schroeder interprets to mean at the moment of 'quark confinement,' when stable matter formed from energy early in the first second of the big bang." Schroeder calculates that a period of six days under the conditions of quark confinement, when the universe was approximately a trillion times smaller and hotter than it is today is equal to fifteen billion years of earth time today. This is all due to space expansion after quark confinement. Thus Genesis and modern physics are reconciled. Schroeder, though, states in an earlier book, Genesis and the Big Bang, that the Earth and solar system is some "4.5 to 5 billion years" old and also states in a later book, The Science of God, that the Sun is 4.6 billion years old. The biblical flood: Some old Earth creationists reject flood geology, a position which leaves them open to accusations that they thereby reject the infallibility of scripture (which states that the Genesis flood covered the whole of the earth). In response, old Earth creationists cite verses in the Bible where the words "whole" and "all" clearly require a contextual interpretation. Old Earth creationists generally believe that the human race was localised around the Middle East at the time of the Genesis flood, a position which is in conflict with the Out of Africa theory. Criticism: Old Earth creationism has received criticism from some secular communities and proponents of theistic evolution for rejecting evolution, as well as criticism from young Earth creationists for not interpreting the six-yom period as six literal 24-hour days of the Genesis creation narrative and for believing in death and suffering before the fall.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SUPRENUM** SUPRENUM: SUPRENUM (German: SUPerREchner für NUMerische Anwendungen, English: super-computer for numerical applications) was a German research project to develop a parallel computer from 1985 through 1990. It was a major effort aimed at developing national expertise in massively parallel processing at both the hardware and the software level. Although the Suprenum-1 computer was the fastest massively parallel MIMD computer in the world during a period in 1992, the project was discontinued and is considered a commercial failure. History: Funded by the Federal Ministry for Research and Technology (BMFT), the SUPRENUM project began in 1985 and BMFT funding continued until 1990, when a fully configured 256-node prototype Suprenum-1 machine was available. The project's inception in 1985 was preceded by a definition phase lasting more than one year in which ideas were gathered, concepts were formed and project partners were selected. History: The project was two-tiered, and only the first tier was realized. In particular, the following was planned: Suprenum 1 subproject: production of a high-speed MIMD computer Suprenum 2 subproject: expanding the core applications and algorithmic service classes to include complex and dynamic grid structures; data-dependent adaptive procedures, irregular and highly dimensional grids, Monte Carlo methods based on grid structures, non-grid applications, etc., development of innovative language concepts which support automatic load distribution (particularly with dynamic grid structures) to the multiple-processor structure, investigation of alternative interconnecting structures (other topologies, variable interconnection networks) in particular with regard to dynamic grid structures and automatic load distribution strategies, new processor technologies (VLSI, GaAs and so on). The mandate accompanying the funding was to create a project that included both a research and a commercial side. To this end, the SUPRENUM Supercomputer GmbH was founded in Bonn. The SUPRENUM Supercomputer GmbH's charge was to manage the whole enterprise, to contribute to the software effort, to coordinate software developments, and to exploit and market the results of the project. The commercial goal required that companies with manufacturing expertise be involved. The research aspects required that various university and government research laboratories should participate. The final team consisted of about 15 groups from different institutions all over Germany, including several large companies as well as the small SUPRENUM Supercomputer GmbH. There were four (or, counting GMD's two sites separately, five) major research institutes: GMD (German: Gesellschaft für Mathematik und Datenverarbeitung, English: society for mathematics and informatics), with both its sites in Sankt Augustin and Berlin, KfA (German: KernForschungsAnstalt, Jülich, English: institute for nuclear research in Jülich), KfK (German: KernForschungszentrum Karlsruhe, English: centre for nuclear research in Karlsruhe), and DLR; five universities: Darmstadt, Bonn, Braunschweig, Düsseldorf and Erlangen-Nürnberg; two industrial users: Dornier and Kraftwerk Union; two companies: Krupp Atlas Elektronik GmbH and Stollmann GmbH; and Suprenum GmbH. The tasks were assigned as follows: In the applications software area: DLR, Dornier GmbH, the GMD, the Kernforschungsanlage Jülich GmbH (KFA), the Kernforschungsanlage Karlsruhe GmbH (KfK), Kraftwerk Union AG and the University of Düsseldorf.
History: In the language level area: GMD, the Technical University at Darmstadt and the University of Bonn. History: In the systems area: GMD, Krupp Atlas Elektronik GmbH, Stollmann GmbH, the Technical University at Brunswick and the University of Erlangen-Nuremberg. While the research group on parallel numerical methods in Sankt Augustin provided the know-how for the applications (solving partial differential equations), the German Society for Mathematics and Data Processing GMD FIRST (German: Forschungszentrum für Innovative Rechnersysteme und -technologie, English: Research centre for innovative computer systems and technologies) in Berlin provided the necessary know-how in hardware and operating system design. A total of 15 research groups in academic institutions across Germany were involved in the project. The involvement of the industry was limited to the production of hardware at Krupp Atlas Elektronik. History: Only five systems were shipped. SUPRENUM Supercomputer GmbH has been defunct since 12 July 2010. After the end of the SUPRENUM project, Pallas GmbH evolved out of the remains of SUPRENUM GmbH in 1991. In 2003, the company sold its high performance computing division to the Intel Corporation. In contrast to the then ubiquitous, conventional vector computers (e.g. NEC SX architecture, Cray Y-MP), SUPRENUM-1 was one of the first to pursue a massively parallel design. However, competitors like Thinking Machines Corporation were catching up fast. Architecture: The Suprenum-1 was designed as a massively parallel MIMD multi-computer system and it was based on a distributed hardware architecture. It was scalable up to 256 computing nodes, organized into clusters. Architecture: The nodes of a cluster were partitioned into five function units. Of a cluster's 20 nodes in total, 16 application nodes were available for the execution of application programs. One stand-by node served for fault-tolerance purposes. In addition to these application-oriented nodes, the disk node provided for disk I/O services and the diagnostic node provided for maintenance services. And finally, the inter-connection of different clusters, as well as the inter-connection to host machines, was made possible by the communication node, which served as a gateway between the cluster bus and the SUPRENUM bus. The first release consisted of 320 nodes (256 application nodes and 64 maintenance nodes). Architecture: The main components of each application node were a 32-bit microprocessor Motorola 68020 operating at a clock rate of 20 MHz, 8 MByte of main memory, protected by 2-bit error-detection and 1-bit error-correction logic, and four coprocessors: The paged memory management unit (PMMU) Motorola 68851 checked access rights and page violations when the node memory was being accessed by the CPU or at the beginning of DMA. Architecture: The floating-point unit (FPU) Motorola 68882 executed scalar floating-point arithmetic. Architecture: The vector floating-point unit (VFPU) consisted of the Weitek chip set WTL2264/2265 and 64 KByte of fast static memory (vector cache). Peak performance was 10 MFlops for single-operation double-precision floating point computations, and 20 MFlops in the case of chained operations. Peak performance was achieved even if one of the two operands was being read from main memory by DMA, provided a constant increment was used. Architecture: The communication unit (CU) was a microprogrammable coprocessor which took care of the data transfer between a node's main memory and other nodes in the system.
The CPU initiated the communication. The communication unit then handled the entire data transfer including bus request, transfer with protocol checks, and bus release. The functions of the communication unit were realized mainly by gate arrays and hybrid modules. The net performance of each application node was specified as 4 Mflops. As a consequence, a net performance of 1 Gflops was calculated for the full 256-node SUPRENUM configuration. Architecture: The 16 clusters were connected by a network of 200 Mbit/sec busses. The busses were arranged as a rectangular grid with 4 horizontal and 4 vertical busses (global busses). Each cluster consisted of 16 processors connected by a fast bus, along with I/O devices for communication to the global bus grid, to the disk and the host computers. There was a dedicated disk for each cluster. Individual nodes could deliver up to 20 Mflops (64-bit chained) or 10 Mflops (64-bit unchained) of computing power. The high bandwidth of the bus network made the Suprenum-1 an interesting machine for a wide range of applications, including those requiring long-range communication. No more than three communication steps were ever required between remote nodes. SUPRENUM supported a send/receive model of communication. The primary difference from systems such as the Intel iPSC was that SUPRENUM Fortran was an extension of standard Fortran, in which task control and communication were incorporated into the language rather than implemented through library calls as on the iPSC. Architecture: SUPRENUM also supported Fortran 90 array extensions which made use of the vector hardware. SUPRENUM software was characterized by the best support for scientific applications to be found among the various distributed memory MIMD vendors. The effort invested in the development of libraries of high-level grid and communication primitives greatly eased the effort of moving applications to the computer, and also provided substantial high-level portability to other systems, since the communication library could be implemented in terms of low level primitives on any distributed system. Besides the hardware development, Suprenum-1 software was developed on many levels: operating system, vectorizing compilers, message passing and applications. The operating system for Suprenum-1 was PEACE (Process Execution And Communication Environment), a new operating system developed specifically for the project. PEACE was designed from the start to support efficient low-latency message passing as well as multitasking. While PEACE appeared to be a satisfactory operating system, message latency was never as low as desired. Typical latency overheads were of the order of 1 millisecond. While asynchronous communication was a design goal for SUPRENUM, the developers were never able to overlap communication with computation on Suprenum-1 due to a mailbox conflict within PEACE. Architecture: As a major result, a rudimentary and "first-of-its-kind" Fortran compiler was developed. Based on Fortran 77, it already provided some features of the then upcoming Fortran 90 standard. It also used the PARMACS ("parallel macros") communication library. In contrast to the above-mentioned Fortran compiler, the PARMACS programming model is explicitly based on message passing. But again, funding for the project was stopped before the compiler had reached maturity. The work transformed into the SUPERB (SUprenum ParallelizER Bonn) project ("Vienna Fortran").
Performance: The table below provides a comparison of the Suprenum-1 with other MPP systems of its time: Review: Because of the high development cost of more than 160 million Deutsche Mark and the lack of success in marketing, the project has been increasingly evaluated critically and compared with other unsuccessful research (Breeder reactor, Transrapid). Therefore, the Federal Ministry for Research and Technology waived the funding of the planned second phase of evolving into a commercial project. This decision stymied the commercial success because it denied a successor system on which potential customers could have relied. Continuity is an essential prerequisite for software development or applied industrial use. Review: In hindsight, especially the inadequate involvement of industry is being criticized. However, as a research project itself, SUPRENUM was successful. The participating institutions had acquired a well-respected expertise in parallel computing, which resulted in a European project GENESIS. PEACE served as an operating system for the non-profit MANNA architecture. SUPRENUM also influenced the development of other parallel computers such as the Meiko CS-2 which was an outcome of the European GENESIS project. Review: The SUPRENUM project has spun off many successful enterprises, e.g. GENESIS, SUPERB, Pallas GmbH, Manna, PPPE and RAPS. Pallas, in fact, can be seen as a continuation of all of the software aspects of SUPRENUM, and as such shows that this part of SUPRENUM was commercially successful. The GMD FIRST project Manna is similarly a continuation of the operating system and some of the architecture aspects of SUPRENUM, again very successful, although this time in a research environment. Review: Also the Meiko CS-2 machine, originally developed within GENESIS, involved many elements of the Suprenum-2 design from SUPRENUM, and indeed there were serious plans at one point to merge Meiko and SUPRENUM. Unfortunately, this concept was ultimately rejected by the shareholders of SUPRENUM GmbH, who at that time also decided to withdraw from SUPRENUM. Finally the applications' side of SUPRENUM evolved into GENESIS, later PPPE and RAPS, so that again this aspect of SUPRENUM has shown itself to be of long-term viability. Review: Taking into account all of these achievements across a broad spectrum of computing technology, one can only conclude that SUPRENUM was highly successful, even while not achieving all of the goals originally established by the government.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Grubel–Lloyd index** Grubel–Lloyd index: The Grubel–Lloyd index measures intra-industry trade of a particular product. It was introduced by Herb Grubel and Peter Lloyd in 1971. GLi = [(Xi + Mi) − |Xi − Mi|] / (Xi + Mi) = 1 − |Xi − Mi| / (Xi + Mi), with 0 ≤ GLi ≤ 1, where Xi denotes the exports and Mi the imports of good i. Grubel–Lloyd index: If GLi = 1, there is a high level of intra-industry trade: the country in question exports as much of good i as it imports. Conversely, if GLi = 0, there is no intra-industry trade at all: the country in question only exports or only imports good i.
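Since the index is a simple closed-form expression, a short sketch suffices to illustrate it. The Python function below implements GLi = 1 − |Xi − Mi|/(Xi + Mi) for a single good; the trade values used are illustrative only, not real data.

```python
def grubel_lloyd(exports: float, imports: float) -> float:
    """Grubel-Lloyd index for one good: GL_i = 1 - |X_i - M_i| / (X_i + M_i)."""
    if exports + imports == 0:
        raise ValueError("index undefined when there is no trade in the good")
    return 1.0 - abs(exports - imports) / (exports + imports)

# Balanced two-way trade gives GL = 1; purely one-way trade gives GL = 0.
print(grubel_lloyd(100.0, 100.0))   # 1.0
print(grubel_lloyd(100.0, 0.0))     # 0.0
print(grubel_lloyd(80.0, 20.0))     # 0.4
```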
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SCAPER** SCAPER: SCAPER (S-phase CyclinA Associated Protein residing in the Endoplasmic Reticulum) is a gene located on the long arm of chromosome 15 (15q24.3). It was first identified in 2007. Gene: This gene lies on the Crick strand and has 30 exons. Protein: The gene encodes a 1399-amino acid protein with a predicted weight of 158 kiloDaltons. It has a C2H2-type zinc finger motif, a putative transmembrane domain, an ER retrieval signal at the C terminus, 4 coiled-coil domains, 6 potential RXL motifs and 6 consensus Cdk phosphorylation sites. Biochemistry: The encoded protein is found in the nucleus and endoplasmic reticulum. It is found in all tissues tested. It appears to have a role in the cell cycle. Clinical significance: Mutations in this gene have been associated with intellectual disability and retinitis pigmentosa.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mac OS nanokernel** Mac OS nanokernel: The Mac OS nanokernel is an operating system kernel serving as the basis of most PowerPC based system software versions 7 through 9 of the classic Mac OS, predating Mac OS X. Mac OS nanokernel: The initial revision of this software is a single tasking system which delegates most tasks to an emulator running the Motorola 68000 series (68K) version of the operating system. The second major revision supports multitasking, multiprocessing, and message passing, and would be more properly called a microkernel. Unlike the 68K-derived Mac OS kernel running within it, the PowerPC kernel exists in a protected memory space and executes device drivers in user mode. Mac OS nanokernel: The nanokernel is very different from the Copland OS microkernel, although they were created in succession with similar goals. System 7.1.2 – Mac OS 8.5.1: The original nanokernel, and the tightly integrated Mac 68k emulator, were written by emulation consultant Gary Davidian. Its main purpose is to allow the existing Motorola 68K version of the operating system to run on new hardware. As such, the normal state of the system is to be running 68K code. The operating system does little until activated by an interrupt, which is quickly mapped to its 68K equivalent within the virtual machine. System 7.1.2 – Mac OS 8.5.1: Other tasks may include switching back to PowerPC mode, if necessary, upon completion of the interrupt handler, and mapping the Macintosh virtual memory system to the PowerPC hardware. However, as the software is little documented, these might instead be handled by the emulator running in user mode. This nanokernel is stored on the Mac OS ROM chip integrated into Old World ROM computers, or inside the Mac OS ROM file on disk on the New World ROM computers, rather than being installed in the familiar sense. Interim development: Progress after 1994 demanded additional functionality. A forward-looking architecture was introduced for PCI card drivers in anticipation of the Copland microkernel called NuKernel, which supports memory protection. The Open Transport networking architecture introduced standardized PowerPC synchronization primitives. The DayStar Digital Genesis MP Macintosh clone requires kernel extensions to support multiprocessing. This evolution would later affect the overhaul to the nanokernel in Mac OS 8.6. Mac OS 8.6 and later: Mac OS 8.6's nanokernel was rewritten by René A. Vega to add Multiprocessing Services 2.0 support. PowerMacInfo, distributed in the Multiprocessing SDK, is an application that displays statistics about the nanokernel's operation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RNA silencing suppressor p19** RNA silencing suppressor p19: RNA silencing suppressor p19 (also known as Tombusvirus P19 core protein and 19 kDa symptom severity modulator) is a protein expressed from the ORF4 gene in the genome of tombusviruses. These viruses are positive-sense single-stranded RNA viruses that infect plant cells, in which RNA silencing forms a widespread and robust antiviral defense system. The p19 protein serves as a counter-defense strategy, specifically binding the 19- to 21-nucleotide double-stranded RNAs that function as small interfering RNA (siRNA) in the RNA silencing system. By sequestering siRNA, p19 suppresses RNA silencing and promotes viral proliferation. The p19 protein is considered a significant virulence factor and a component of an evolutionary arms race between plants and their pathogens. Structure: The p19 protein received its name from its size, being approximately 19 kilodaltons. It forms a functional homodimer. The crystal structures are available of p19 proteins from the tomato bushy stunt virus and Carnation Italian ringspot virus; the protein consists of a novel protein fold and exemplifies a previously unknown mechanism for binding RNA, using a binding surface formed by a beta sheet and flanked by alpha helices to interact with double-stranded RNAs of around 21 nucleotides in length in a non-sequence-specific manner. Function: The p19 protein binds to double-stranded RNAs that function as short interfering RNA (siRNA) and is specialized for the 21-nucleotide product of the enzyme DCL4 (a member of a family of plant enzymes with homology to Dicer). By binding to siRNA, p19 sequesters these species and prevents them from interacting with the RNA-induced silencing complex (RISC), a protein complex that mediates the antiviral RNA silencing mechanism in the cell. Function: The p19 protein is also capable of binding to microRNA molecules that are endogenous to the host cell, as well as the siRNAs that are ultimately derived from the virus's own genome. Notably, an exception to this pattern is p19's inefficiency in interacting with the microRNA miR-168, a regulatory non-coding RNA that represses expression of argonaute-1 (AGO1). The AGO1 protein is required for RNA silencing, thus selectively sparing its repressor from p19's general sequestration of miRNA has the effect of reducing cellular AGO1 levels and is an additional mechanism by which p19 inhibits silencing. The two mechanisms are independent of one another and can be selectively abrogated by mutations. Evolution: The gene encoding the p19 protein is an example of an overprinted gene, a genomic arrangement common in viruses in which multiple genes are encoded by the same portion of the genome read in alternate reading frames. The open reading frame ORF4, which encodes p19, is completely contained within the open reading frame of another gene, which is designated ORF3 and encodes the movement protein p22. Both genes, and their relative positions, are conserved within the tombusvirus family. P19 is thought to have originated de novo in this lineage.Sequestration of dsRNA is a common viral counter-defense strategy against RNA silencing, evolved in a form of evolutionary arms race between virus and host. The p19 protein is not unique in this role; in an example of convergent evolution, this strategy appears to have evolved at least three times in distinct viral lineages using proteins with distinct structures and physical means of binding RNA. 
History: The tomato bushy stunt virus, which is the type species of the tombusvirus family, is a long-standing model system for the study of plant viruses. The open reading frame encoding p19 was originally discovered in the late 1980s when the virus's genome was sequenced; it was subsequently demonstrated that the predicted protein was indeed expressed from the gene, although its role in promoting virulence and infectivity was initially underappreciated. Following the elucidation of its role as a suppressor of RNA silencing, p19 has also been used as a tool in molecular biology research on RNA silencing, RNA interference, and related processes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Central office multiplexing** Central office multiplexing: Central office multiplexing (or CO muxing) is telephone exchange (central office) equipment that derives a number of lower-speed channels from one high-bandwidth channel. This type of multiplexing is needed when the customer wants to terminate the DS1 or DS3 in the central office and wants to ‘pick up’ lower level services. For example, a customer may order a DS3, using it to carry a number of DS1 level services from different IXC carriers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Periorbital dark circles** Periorbital dark circles: Periorbital dark circles are dark blemishes around the eyes. There are many causes of this symptom, including heredity and bruising. Causes: Anatomical factors Bony structure and prominence of the orbicularis oculi muscle can contribute to infraorbital dark circles. Skin in the lower eyelid is very thin, which accentuates subdermal features. Causes: Allergies, asthma, and eczema Any condition that causes the eyes to itch can contribute to darker circles due to rubbing or scratching the skin around them. Hay fever sufferers in particular will notice under-eye "smudges" during the height of the allergy season. Atopy can lead to frequent rubbing of the eyes, leading to local inflammation and increased pigmentation. Also, dark circles from allergies are caused by superficial venous congestion in the capillaries under the eyes. Causes: Medications Any medications that cause blood vessels to dilate can cause circles under the eyes to darken. The skin under the eyes is very delicate, so any increased blood flow shows through the skin. Cortisol deficiency When cortisol is deficient, the pituitary compensates by producing excess adrenocorticotropic hormone (ACTH) and melanocyte-stimulating hormone (MSH), the latter resulting in dark circles under the eyes. Causes: Anemia The lack of nutrients in the diet, or the lack of a balanced diet, can contribute to the discoloration of the area under the eyes. It is believed that iron deficiency and vitamin B12 deficiency can cause dark circles as well. Iron deficiency is the most common type of anemia and this condition is a sign that not enough oxygen is getting to the body tissues. Causes: The skin can also become more pale during pregnancy and menstruation (due to lack of iron), allowing the underlying veins under the eyes to become more visible. Fatigue A lack of sleep and mental fatigue can cause paleness of the skin, allowing the blood underneath the skin to become more visible and appear bluer or darker. Causes: Age Dark circles are likely to become more noticeable and permanent with age. This is because as people get older, their skin loses collagen, becoming thinner and more translucent. As facial fat descends and fat volume decreases, the somewhat inflexible ligaments can result in orbital rim and facial hollowing. Photoaging has similar effects. Hemoglobin breakdown products such as hemosiderin and biliverdin can leak from the vasculature, contributing to pigmentation changes. Circles may also gradually begin to appear darker in one eye than the other as a result of some habitual facial expressions, such as an uneven smile. Causes: Sun exposure Sun exposure prompts the body to produce more melanin, the pigment that gives skin its color. Periorbital hyperpigmentation Periorbital hyperpigmentation is the official name for when there is more melanin produced around the eyes than is usual, giving them a darker color. Treatment: At one time, hydroquinone solution was often mixed in an oil-free moisturizer that acted like a skin bleach. However, the use of hydroquinone for skin whitening has been banned in European countries due to health concerns. In 2006, the United States Food and Drug Administration revoked its approval of hydroquinone for over-the-counter preparations, warning that it may cause cancer or have many other detrimental effects. The use of hydroquinone skin-whitening products can be toxic, harmful or lethal for humans. Modern treatments include topical creams that are marketed for the condition.
Various ingredients have been researched, developed and included in these creams. For example, chemical compounds called alpha hydroxy acids (AHAs) have recently been added as beneficial ingredients in creams for dark circles. Specialist treatments including laser and intense pulsed light skin surgery can also be used. A compounded cream of Pfaffia paniculata, Ptychopetalum olacoides and Lilium candidum has also been reported as an effective treatment. Low-level laser therapy, autologous fat transplantation and hyaluronic acid fillers are also alternative treatment options. In addition, many skin care ingredients can help in the form of eye creams. Caffeine is a potent vasoconstrictor that has been shown to improve the look of dark circles by constricting, or tightening, the dilated vessels under the eyes. Vitamin C can help brighten hyperpigmentation as well as thicken the dermal layer of skin, which conceals dark circles.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mynigma** Mynigma: Mynigma (also known as M) is an email client with built-in encryption. It is free for personal use. The name “Mynigma” derives from the ancient Greek “ainigma” (αἴνιγμα, “riddle”). Functionality: Mynigma's core feature is an encryption mechanism that activates automatically when both parties use the client. With its focus on usability and automation, Mynigma aims to make encryption available specifically to non-technical users. Platforms: A proof-of-concept app is currently available for Mac and iOS. The most recent version, as well as an Outlook plug-in, are in closed beta. Programs for other platforms like Android are also being developed. Awards: In 2015, Mynigma received the CeBIT Innovation Award for its unique approach to combining security with usability. The company also finished runner-up in the Gründerpreis der Berliner Sparkasse competition. Name changes: Due to possible confusion with the now-discontinued instant messenger MyEnigma, the program was renamed M in March 2015. Following the announcement of Facebook M in August 2015, the name was changed back to Mynigma. Privacy: The personal-use version of the program is peer-to-peer. As it does not connect to a central server, it collects no user or usage data. Security: Mynigma uses end-to-end encryption. The keys required for decryption are stored only on the users' devices. The encryption format is public and the program's source code is available under a GPL licence. It uses the algorithms RSA (4096 bit, OAEP padding), AES-128 (CBC with random IV) and SHA-512. Its crypto container is provably CCA secure and protects subject lines as well as the message body and attachments. Man-in-the-middle prevention: Like any trust-on-first-use system, Mynigma may be subject to a man-in-the-middle attack. This can be prevented by comparing a fingerprint (e.g. over the phone) or scanning a QR code. Press coverage: Mynigma has been featured in various national newspapers, including Tagesspiegel, FAZ and Die Welt. It appears in the sixth issue of The Hundert magazine.
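The article names the primitives involved (RSA-4096 with OAEP padding, AES-128 in CBC mode with a random IV, SHA-512). The sketch below is not Mynigma's actual wire format or source code; it is a minimal, hypothetical illustration of how primitives of that kind combine into a hybrid scheme, written with the third-party Python `cryptography` package.

```python
import os
from cryptography.hazmat.primitives import hashes, padding as sym_padding
from cryptography.hazmat.primitives.asymmetric import rsa, padding as asym_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Recipient key pair (RSA-4096, as named in the article).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = private_key.public_key()

def encrypt(plaintext: bytes):
    """Hybrid encryption: AES-128-CBC for the payload, RSA-OAEP (SHA-512) for the session key."""
    session_key, iv = os.urandom(16), os.urandom(16)           # AES-128 key and random IV
    padder = sym_padding.PKCS7(128).padder()                    # pad to the AES block size
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()
    wrapped_key = public_key.encrypt(
        session_key,
        asym_padding.OAEP(mgf=asym_padding.MGF1(hashes.SHA512()), algorithm=hashes.SHA512(), label=None),
    )
    return wrapped_key, iv, ciphertext

def decrypt(wrapped_key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    session_key = private_key.decrypt(
        wrapped_key,
        asym_padding.OAEP(mgf=asym_padding.MGF1(hashes.SHA512()), algorithm=hashes.SHA512(), label=None),
    )
    dec = Cipher(algorithms.AES(session_key), modes.CBC(iv)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    unpadder = sym_padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

print(decrypt(*encrypt(b"subject line and body are both protected")))
```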
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Theoretical Linguistics (journal)** Theoretical Linguistics (journal): Theoretical Linguistics is an international peer-reviewed journal of theoretical linguistics published by Mouton de Gruyter. Since 2001, Manfred Krifka (Humboldt University of Berlin) has been its editor. In 2020, the journal's impact factor was 1.929, as reported by Journal Citation Reports.The journal publishes four issues per year. Each issue contains one main article, and a number of shorter responses to that article.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glycoside hydrolase family 42** Glycoside hydrolase family 42: In molecular biology, glycoside hydrolase family 42 is a family of glycoside hydrolases. Glycoside hydrolase family 42: Glycoside hydrolases EC 3.2.1. are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.The glycosyl hydrolase 42 family CAZY GH_42 comprises beta-galactosidase enzymes (EC 3.2.1.23). These enzyme catalyse the hydrolysis of terminal, non-reducing terminal beta-D-galactoside residues. The middle domain of these three-domain enzymes is involved in trimerisation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Semantic translation** Semantic translation: Semantic translation is the process of using semantic information to aid in the translation of data in one representation or data model to another representation or data model. Semantic translation takes advantage of semantics that associate meaning with individual data elements in one dictionary to create an equivalent meaning in a second system. Semantic translation: An example of semantic translation is the conversion of XML data from one data model to a second data model using formal ontologies for each system such as the Web Ontology Language (OWL). This is frequently required by intelligent agents that wish to perform searches on remote computer systems that use different data models to store their data elements. The process of allowing a single user to search multiple systems with a single search request is also known as federated search. Semantic translation: Semantic translation should be differentiated from data mapping tools that do simple one-to-one translation of data from one system to another without actually associating meaning with each data element. Semantic translation requires that data elements in the source and destination systems have "semantic mappings" to a central registry or registries of data elements. The simplest mapping is of course where there is equivalence. Semantic translation: There are three types of Semantic equivalence: Class Equivalence - indicating that class or "concepts" are equivalent. For example: "Person" is the same as "Individual" Property Equivalence - indicating that two properties are equivalent. For example: "PersonGivenName" is the same as "FirstName" Instance Equivalence - indicating that two individual instances of objects are equivalent. For example: "Dan Smith" is the same person as "Daniel Smith"Semantic translation is very difficult if the terms in a particular data model do not have direct one-to-one mappings to data elements in a foreign data model. In that situation, an alternative approach must be used to find mappings from the original data to the foreign data elements. This problem can be alleviated by centralized metadata registries that use the ISO-11179 standards such as the National Information Exchange Model (NIEM).
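The class, property and instance equivalences described above can be made concrete with a small sketch. The example below is hypothetical (the element names and hard-coded mapping tables are invented for illustration and are not taken from any particular registry or from NIEM); it shows, in Python, how a record expressed in one data model could be translated into an equivalent record in another once semantic mappings are known.

```python
# Hypothetical property-equivalence mappings between a source and a target data model,
# in the spirit of the "PersonGivenName" ~ "FirstName" example above. Real systems would
# typically resolve these mappings against a metadata registry or OWL ontology rather
# than a hard-coded dictionary.
PROPERTY_EQUIVALENCE = {
    "PersonGivenName": "FirstName",
    "PersonSurName": "LastName",
    "PersonBirthDate": "DateOfBirth",
}

CLASS_EQUIVALENCE = {"Person": "Individual"}

def translate_record(record: dict, record_class: str):
    """Translate one source record into the target model using the equivalence maps."""
    target_class = CLASS_EQUIVALENCE.get(record_class, record_class)
    translated = {}
    for key, value in record.items():
        if key not in PROPERTY_EQUIVALENCE:
            raise KeyError(f"no semantic mapping registered for source element '{key}'")
        translated[PROPERTY_EQUIVALENCE[key]] = value
    return target_class, translated

print(translate_record({"PersonGivenName": "Dan", "PersonSurName": "Smith"}, "Person"))
# ('Individual', {'FirstName': 'Dan', 'LastName': 'Smith'})
```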
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Klein transformation** Klein transformation: In quantum field theory, the Klein transformation is a redefinition of the fields to amend the spin-statistics theorem. Bose–Einstein: Suppose φ and χ are fields such that, if x and y are spacelike-separated points and i and j represent the spinor/tensor indices, [φi(x), φj(y)] = [χi(x), χj(y)] = {φi(x), χj(y)} = 0. Bose–Einstein: Also suppose χ is invariant under the Z2 parity (nothing to do with spatial reflections!) mapping χ to −χ but leaving φ invariant. Obviously, free field theories always satisfy this property. Then, the Z2 parity of the number of χ particles is well defined and is conserved in time. Let's denote this parity by the operator Kχ, which maps χ-even states to themselves and χ-odd states into their negatives. Then Kχ is involutive, Hermitian and unitary. Bose–Einstein: Needless to say, the fields φ and χ above don't have the proper statistics relations for either a boson or a fermion, i.e. they are bosonic with respect to themselves but fermionic with respect to each other. However, looking at the statistical properties alone, the pair has exactly the same statistics as Bose–Einstein statistics. Here's why: Define two new fields φ' and χ' as follows: φ′=iKχφ and χ′=Kχχ. Bose–Einstein: This redefinition is invertible (because Kχ is). Now, the spacelike commutation relations become [φ′i(x), φ′j(y)] = [χ′i(x), χ′j(y)] = [φ′i(x), χ′j(y)] = 0. Fermi–Dirac: Now, let's work with the example where {ϕi(x),ϕj(y)}={χi(x),χj(y)}=[ϕi(x),χj(y)]=0 (spacelike-separated as usual). Assume once again we have a Z2 conserved parity operator Kχ acting upon χ alone. Let ϕ′=iKχϕ and χ′=Kχχ. Then {ϕ′i(x), ϕ′j(y)} = {χ′i(x), χ′j(y)} = {ϕ′i(x), χ′j(y)} = 0. More than two fields: If there are more than two fields, then one can keep applying the Klein transformation to each pair of fields with the "wrong" commutation/anticommutation relations until the desired result is obtained. This explains the equivalence between parastatistics and the more familiar Bose–Einstein/Fermi–Dirac statistics.
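The claim that the mixed relations turn into purely bosonic ones after the redefinition can be checked directly from the stated properties of Kχ. The derivation below is a sketch (spinor/tensor indices suppressed) using only the assumptions already given, namely that Kχ commutes with φ, anticommutes with χ, and is involutive; the remaining relations follow in the same way.

```latex
% Sketch: why the mixed anticommutator becomes a vanishing commutator after the Klein transformation.
% Assumptions: K_\chi \varphi(x) K_\chi = \varphi(x), \quad K_\chi \chi(y) K_\chi = -\chi(y), \quad K_\chi^2 = 1.
\begin{align}
\varphi'(x)\,\chi'(y) &= i K_\chi \varphi(x)\, K_\chi \chi(y)
  = i \bigl(K_\chi \varphi(x) K_\chi\bigr)\, \chi(y) = i\,\varphi(x)\chi(y), \\
\chi'(y)\,\varphi'(x) &= K_\chi \chi(y)\, i K_\chi \varphi(x)
  = i \bigl(K_\chi \chi(y) K_\chi\bigr)\, \varphi(x) = -\,i\,\chi(y)\varphi(x), \\
[\varphi'(x), \chi'(y)] &= i\bigl(\varphi(x)\chi(y) + \chi(y)\varphi(x)\bigr)
  = i\,\{\varphi(x), \chi(y)\} = 0 .
\end{align}
```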
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**S-Methylmethionine** S-Methylmethionine: S-Methylmethionine (SMM) is a derivative of methionine with the chemical formula (CH3)2S+CH2CH2CH(NH3+)CO2−. This cation is a naturally-occurring intermediate in many biosynthetic pathways owing to the sulfonium functional group. It is biosynthesized from L-methionine and S-adenosylmethionine by the enzyme methionine S-methyltransferase. S-methylmethionine is particularly abundant in plants, being more abundant than methionine.S-Methylmethionine is sometimes referred to as vitamin U, but it is not considered a true vitamin. The term was coined in 1950 by Garnett Cheney for uncharacterized anti-ulcerogenic factors in raw cabbage juice that may help speed healing of peptic ulcers. Biosynthesis and biochemical function: S-Methylmethionine arises via the methylation of methionine by S-adenosyl methionine (SAM). The coproduct is S-adenosyl homocysteine.The biological roles of S-methylmethionine are not well understood. Speculated roles include methionine storage, use as a methyl donor, regulation of SAM. A few plants use S-methylmethionine as a precursor to the osmolyte dimethylsulfoniopropionate (DMSP). Intermediates include dimethylsulfoniumpropylamine and dimethylsulfoniumpropionaldehyde. Beer flavor precursor in barley malt: S-Methylmethionine is found in barley and is further created during the malting process. SMM can be subsequently converted to dimethyl sulfide (DMS) during the malt kilning process, causing an undesirable flavor. Lightly kilned malts such as pilsner or lager malts retain much of their SMM content while higher kilned malt such as pale ale malt has substantially more of the SMM converted to DMS in the malt. Darker kilned malts such as Munich malt have virtually no SMM content since most has been converted to DMS. Other crystal malts and roasted malts have no SMM content and often no DMS content since the kilning also drives that compound out of the malt.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Simple 4-line** Simple 4-line: Simple 4-line rhymes are usually characterized by a simple rhyme scheme of ABCB repeated throughout the entire poem. Though simple in appearance, such verses can be very complex, and the pattern is widely used today in much poetry and songwriting. Many poets and authors use this pattern, including popular children's poets Bruce Larkin and Kenn Nesbitt.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plain Old C++ Object** Plain Old C++ Object: Like the term POJO (Plain Old Java Object) in the Java world, the term Plain Old C++ Object or its acronym POCO means a C++ artifact that is neither defined by nor coupled to the underlying C++ component framework that manipulates it. Plain Old C++ Object: Examples of such an artifact include, for instance, instances of C++ classes, K&R structs, unions, or even functions (as function pointers). This is in contrast to the component model of classic C++ component frameworks, such as OMG-CCM, the JTRS-SCA core framework (CF), and OpenSOA's SCA for C++. These classic component frameworks either dictate a proprietary component programming model (a superclass) or mandate that component implementations be tightly coupled to the underlying framework (calling into its runtime).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kamalesh Sirkar** Kamalesh Sirkar: Kamalesh K. Sirkar is a Distinguished Professor of Chemical Engineering at New Jersey Institute of Technology (NJIT) in Newark, New Jersey, USA. He is also the Foundation Professor of Membrane Separations and Director of the NJIT Center for Membrane Technologies. He is internationally recognized as an expert in membrane separation technologies. Education: Sirkar received his B.Tech. from the Indian Institute of Technology at Kharagpur and both his MS and PhD from the University of Illinois at Urbana-Champaign. Career: Sirkar was a Professor of Chemical Engineering at Stevens Institute of Technology and the Indian Institute of Technology at Kanpur before arriving at NJIT in 1992. His accomplishments: Sirkar is the holder of 25 U.S. patents. He has also authored 156 refereed articles and 18 book chapters, and is a co-editor of the widely used Membrane Handbook. He is the Editor of the Elsevier series Membrane Science and Technology and an Associate Editor of Separation Science and Technology. He has served (or is serving) on the editorial boards of the Journal of Membrane Science, Industrial and Engineering Chemistry Research and Separation Science and Technology. Honors and awards: Sirkar has received numerous honors and awards throughout his research life. Some of these include: Kirkpatrick Award (1991). Honorary Fellow of the Indian Institute of Chemical Engineers (2001). American Institute of Chemical Engineers' Institute Award for excellence in Industrial Gases Technology (2005). Thomas Alva Edison Patent Award in the Environmental Category of the Research and Development Council (2006). Fellow of the American Association for the Advancement of Science (AAAS) in 2008. Clarence Gerhold Award of the Separations Division of AIChE (2008). NJIT Excellence in Research Prize & Medal (2009). New Jersey Inventors Hall of Fame Innovators Award (2009).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GenGIS** GenGIS: GenGIS merges geographic, ecological and phylogenetic biodiversity data in a single interactive visualization and analysis environment. A key feature of GenGIS is the testing of geographic axes that can correspond to routes of migration or gradients that influence community similarity. Data can also be explored using graphical summaries of the data on a site-by-site basis, as 3D geophylogenies, or through custom visualizations developed using a plugin framework. Standard statistical tests such as linear regression and the Mantel test are provided, and the R statistical language can be accessed directly within GenGIS. Since its release, GenGIS has been used to investigate the phylogeography of viruses and bacteriophages, bacteria, and eukaryotes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**EMDEX** EMDEX: EMDEX (Essential Medicines Index) is the most commonly used reference source of drug and therapeutic information by healthcare professionals in Nigeria. It was first published in 1991 as Nigeria's Essential Drugs (NED) Guide. EMDEX drug information contents, arrangements, and therapeutic recommendations are supported by several references and clinical guidelines notably WHO Model Formulary, WHO ATC (Anatomical Therapeutic Chemical) Classification System, Nigeria's Essential Medicines List, and Standard Treatment Guidelines, etc. The information is regularly reviewed and updated by a select team of healthcare practitioners and academics. EMDEX: The central objective of EMDEX has been to promote the rational use of medicines through the provision of independent drug information, and the use of clinical guidelines and essential medicines list. It is the largest and most up-to-date source of information on drug products approved for use in Nigeria by NAFDAC (National Agency for Food & Drug Administration & Control). EMDEX: The use of EMDEX as a reference drug manual is endorsed by the Pharmacists Council of Nigeria, the Nursing & Midwifery Council of Nigeria, and major health institutions. It is used both within and outside Nigeria by physicians, dentists, pharmacists, nurse practitioners, and auxiliary health workers at all levels of healthcare delivery. These healthcare providers rely on EMDEX for accuracy and completeness of drug information namely indications, contra-indications, precautions or warnings, adverse effects, dosages, and drug use in special populations like children, elderly, pregnancy & lactation. EMDEX: EMDEX publications are also in the syllabus of various colleges & schools of medicine, pharmacy & nursing. EMDEX as Nigeria's National Drug Formulary: A national formulary is essentially a listing of available and affordable medicines that are relevant to the treatment of diseases in a particular country. It is usually a source of unbiased drug information and helps promote the rational use of safe, effective and good-quality medicines. EMDEX Publications: EMDEX vol. 1 (Drug Information for Healthcare Professionals) is published annually. Other EMDEX print publications include: EMDEX vol. 2 (Nurses' Reference) EMDEX Paediatric Drug Guide Mini EMDEX (Clinician's Pocket Reference) EMDEX RapidRx – Quarterly Evidence-Based Medication Therapy Management Newsletter EMDEX API offers a comprehensive database of NAFDAC-approved drug products in Nigeria.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Canon FL 300mm lens** Canon FL 300mm lens: Canon FL 300mm lens refers to two telephoto prime lenses made by Canon. The lenses have an FL type mount which fits the Canon FL line of cameras. First introduced in 1969, the FL lens replaced the R mount version, which in turn was superseded by its FD equivalent. Altogether, two variations (with two different aperture settings) were made in its short production life. The FL-F 300mm f/5.6 was the world's first lens to use synthetic fluorite crystals in its elements; such elements are commonly found today in Canon's L-series lenses. The f/2.8 model was used to take photographs of Henry Kissinger reading a confidential document at the Helsinki Accords. The images were so sharp that the text could be read clearly. Information: The f/5.6 FL-F was the first interchangeable lens to use calcium fluoride (CaF2) in its lens elements, correcting chromatic aberration to achieve extremely high contrast. It used two fluorite elements. Information: The main benefits of fluorite in a lens are its low index of refraction and its low dispersion, which is superior to that of optical glass. Although its behaviour is similar to that of optical glass for wavelengths in the range from red to green, it differs greatly for wavelengths in the range from green through blue, enabling a significant improvement in the imaging performance of super-telephoto lenses in terms of sharpness, contrast and color balance. The f/5.6 FL-F is primarily designed for shooting sports, wildlife, and candids. Helsinki Accords: At the Helsinki Accords in July 1975, held at Finlandia Hall, freelance photographer Franco Rossi was hired to cover the event. Twenty minutes into the speech, Henry Kissinger, the United States Secretary of State at the time, opened his briefcase and took out three folders. Rossi was above him. By the time Kissinger reached a confidential section titled "Top Secret Sensitive Exclusively Eyes Only Contains Codeword", Rossi was shooting on his Canon F-1 with its Canon FL-F 300mm f/2.8 S.S.C. Fluorite equipped with an FD Extender 2X mounted on a tripod. The text is visible to the point that it is clearly readable. It depicts a report on diplomatic relations between Paris and Hanoi that was based on information from a trusted CIA source. According to the CIA source, the French felt deceived by North Vietnam's assurances that it would not invade the South. Paris therefore refused to grant Hanoi new credits until the situation in the South was clarified and until Hanoi or Saigon made a preliminary acknowledgement of debts contracted by the Thieu government. This photograph was shown in La Domenica del Corriere of Italy and Nieuwe Revu of the Netherlands. Paris Match purchased the rights to the pictures but never published them, claiming it ran out of space. Time magazine erroneously referred to the lens as a 600 mm; a 300 mm lens combined with a 2x teleconverter gives the focal equivalent of a 600 mm lens. The Canon 600 mm lens available at the time was a non-fluorite model with an aperture of f/5.6.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Brown–Gibson model** Brown–Gibson model: The Brown–Gibson model is one of the many techniques for multi-attribute decision making. The method was developed in 1972 by P. Brown and D. Gibson. It is one of the few models that integrate both objective and subjective factors in decision making. Brown–Gibson model: The Brown–Gibson model can be mathematically represented as follows: Mi=Ci⋅[D⋅Oi+(1−D)Si] where Mi = measure for an alternative i; Ci = critical factor measure, which can be either 0 or 1 for an alternative i; Oi = objective factor measure, which lies between 0 and 1, with the objective factor measures of the different alternatives summing to 1; Si = subjective factor measure, which lies between 0 and 1, with the subjective factor measures of the different alternatives summing to 1; D = objective factor decision weight, between 0 and 1. One selects the alternative whose measure Mi is the highest.
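As a small illustrative sketch (not from the original text), the following C++ code evaluates the Brown–Gibson measure for a handful of hypothetical alternatives and picks the highest; the factor values and the weight D are invented for the example.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// One decision alternative with its Brown–Gibson inputs.
struct Alternative {
    const char* name;
    int    critical;    // Ci: 0 or 1
    double objective;   // Oi: objective factor measure (all Oi sum to 1)
    double subjective;  // Si: subjective factor measure (all Si sum to 1)
};

// Mi = Ci * (D * Oi + (1 - D) * Si)
double measure(const Alternative& a, double d) {
    return a.critical * (d * a.objective + (1.0 - d) * a.subjective);
}

int main() {
    const double d = 0.6;  // objective factor decision weight (hypothetical)
    std::vector<Alternative> alts = {
        {"Site A", 1, 0.50, 0.20},
        {"Site B", 1, 0.30, 0.45},
        {"Site C", 0, 0.20, 0.35},  // fails a critical factor, so Mi = 0
    };
    std::size_t best = 0;
    for (std::size_t i = 0; i < alts.size(); ++i) {
        double m = measure(alts[i], d);
        std::cout << alts[i].name << ": Mi = " << m << '\n';
        if (m > measure(alts[best], d)) best = i;
    }
    std::cout << "Selected: " << alts[best].name << '\n';
    return 0;
}
```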
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shin guard** Shin guard: A shin guard, or shin pad, is a piece of equipment worn on the front of an athlete's shin to protect it from injury. These are commonly used in sports including association football, baseball, ice hockey, field hockey, lacrosse, cricket and mountain bike trials. They are also used in combat sports and martial arts competitions including kickboxing, mixed martial arts, taekwondo, karate and professional wrestling. They are either required by the rules/laws of the sport or worn voluntarily by the participants as a protective measure. Materials: Modern-day shin guards are made of many differing synthetic materials, including, but not limited to: Fibreglass - Stiff, sturdy, and light weight. Foam rubber - Very light weight, but not as sturdy and solid as fibreglass. Polyurethane - Heavy and sturdy, which offers almost complete protection from most impacts. Plastic - Less protective than any of the other synthetic shin guards. Metal - Highly protective, but very heavy and uncomfortable. History: The shin guard was inspired by the concept of a greave. A greave is a piece of armour used to protect the shin. It is a Middle English term, derived from an Old French word, greve (pronounced gri'v), meaning shin or shin armour. The etymology of this word not only describes the use and purpose of shin guards, but also contributes to dating the technology. History: This technology dates back to ancient times, as early as the Greek and Roman republics. Back then, shin guards were viewed as purely protective measures for warriors in battle and were made of bronze or other hard, sturdy materials. The earliest known physical proof of the technology appeared when archaeologist Sir William Temple discovered a pair of bronze greaves with a Gorgon's head design in the relief on each knee capsule. After careful examination it was estimated that the greaves were made in Apulia, a region in Southern Italy, around 550/500 B.C. This area fell within the boundaries of the Roman Empire and is known today as the Salento Peninsula; it is more commonly known as the heel of Italy. This discovery is not considered the oldest known application of shin guards, but all other references lie in written or pictorial media. The oldest known reference to shin guards is a written verse in the Bible. 1 Samuel 17:6 describes Goliath, a Philistine champion from Gath, who wore a bronze helmet, coat of mail, and bronze leggings. The Book of Samuel is commonly accepted to have been written by the Prophets Samuel, Nathan, and Gad between 960 and 700 B.C. Later, more concrete examples of the shin guard concept resurfaced in the Middle Ages. All studies and evidence show greaves were improved to cover the entire lower leg, front and back, from the feet to the knees, and were mostly made of cloth, leather, or iron. As time progressed into the 19th century, a major shift in the application of shin guards occurred. The overall purpose of protecting the shin was maintained, but instead of being used for fighting, it became applied to sports. This paradigm shift dominates today's market use of shin guards, as they are used mostly in sports. Other applications do exist, though, for protecting the lower leg in other physical activities such as hiking, mixed martial arts, and kickboxing, but all these activities can also be considered sport rather than being necessary in battle. History: Cricket was the first sport to adopt the use of shin guards. 
The introduction of this equipment was not motivated by the need for protection, but rather as a strategic device to gain an advantage for the batsman. The batsman who wore the leg pads was able to cover the stumps with his protected legs and prevent the ball from hitting the stumps, with the ball instead striking the batsman. Thus, the protection provided by the leg pads gave the batsman the confidence to play without suffering pain or injury. This resulted in an offensive advantage; instead of hitting the wickets to get the batsman out, the bowler hit the batsman, giving him another chance to hit the ball. This was addressed in 1809 with a rule change called leg before wicket, where the umpire was allowed to deduce whether the ball would have hit the stumps had the batsman not been hit first. Leg pads became more popular as protective measures against the impact from the ball and are worn by the batsman, the wicket-keeper, and the fielders fielding in close to the batsman. History: Association football was the next major sport to see the introduction of the shin guard. Sam Weller Widdowson is credited with bringing shin guards to the sport in 1874. He played cricket for Nottinghamshire and football for Nottingham Forest, and he got the idea to protect himself based on his cricket experiences. Widdowson cut down a pair of cricket shin pads and strapped them to the outside of his stockings using straps of leather. Other players ridiculed him initially, but shin guards eventually caught on as players saw the practical use of protecting their shins. Today, there are two basic types of shin guards used in association football: slip-in shin guards and ankle shin guards. In baseball, one of the innovators of the modern shin guard, New York Giants catcher Roger Bresnahan, began wearing shin guards in 1907. Made of leather, the guards were fastened with straps and hooks. Batters began wearing shin guards at the plate in the late 1980s and early 1990s. After the application of shin guards in association football, they quickly spread to other sports and are now considered necessary for most contact sports.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vaccine resistance** Vaccine resistance: Vaccine resistance is the evolutionary adaptation of pathogens to infect and spread through vaccinated individuals, analogous to antimicrobial resistance. It concerns both human and animal vaccines. Although the emergence of a number of vaccine-resistant pathogens has been well documented, this phenomenon is nevertheless much rarer and less of a concern than antimicrobial resistance. Vaccine resistance: Vaccine resistance may be considered a special case of immune evasion, namely evasion of the immunity conferred by the vaccine. Since the immunity conferred by a vaccine may differ from that induced by infection with the pathogen, the immune evasion may be easier (in the case of an inefficient vaccine) or more difficult (as would be the case for a universal flu vaccine). We speak of vaccine resistance only if the immune evasion is the result of evolutionary adaptation of the pathogen (and not a feature the pathogen had before any evolutionary adaptation to the vaccine) and the adaptation is driven by the selective pressure induced by the vaccine (this would not be the case for immune evasion resulting from genetic drift, which would be present even without vaccinating the population). Some of the causes advanced for the less frequent emergence of resistance are that (1) vaccines are mostly used for prophylaxis, that is before infection occurs, and usually act to suppress the pathogen before the host becomes infectious; (2) most vaccines target multiple antigenic sites of the pathogen; and (3) different hosts may produce different immune responses to the same pathogen. For diseases that confer long-lasting immunity after exposure, typically childhood diseases, it was argued that a vaccine may provide the same immune response as natural infection, so it is expected that there should be no vaccine resistance. If vaccine resistance emerges, the vaccine may retain some level of protection against serious infection, possibly by modifying the immune response of the host away from immunopathology. The best known cases of vaccine resistance are for the following diseases. Animal diseases: Marek's disease, where more virulent strains actually emerged after vaccination because the vaccine did not protect against infection and transmission, only against serious forms of the disease; Yersinia ruckeri, because a single mutation was sufficient to generate vaccine resistance; and avian metapneumovirus. Human diseases: Streptococcus pneumoniae, because of recombination with another serotype not targeted by the vaccine; hepatitis B virus, because the vaccine targeted a single site formed by 9 amino acids; and Bordetella pertussis, because not all serotypes were targeted and later because acellular vaccines targeted only a few antigens. Other less documented cases are for avian influenza, avian reovirus, Corynebacterium diphtheriae, feline calicivirus, H. influenzae, infectious bursal disease virus, Neisseria meningitidis, Newcastle disease virus, and porcine circovirus type 2.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**My Friend Cayla** My Friend Cayla: My Friend Cayla was a line of 18-inch (46 cm) dolls which used speech recognition technology in conjunction with an Android or iOS mobile app to recognize the child's speech and have a conversation. The doll uses the internet to search for what the child said and then answers with what it has collected online. My Friend Cayla was created by Bob Delprincipe, inventor of Cindy Smart and Tekno the Robotic Puppy. The doll is banned in Germany as a surveillance device. Technology: Cayla functions by sending microphone inputs to an app on an iOS or Android device via Bluetooth. The app then parses the speech into text and uses keywords to search the Internet for a response. The app translates the text back into speech and sends it back to the doll, which answers after around a one-second delay. Cayla, operated by 3 AA batteries, also has a "personality", with a database containing details of her family, pets, her favorite food, pop star, and film. The creator of the doll, Bob Delprincipe, says "She's not a search engine, she's a seven-year-old girl. There are some things she just doesn't know." He argued that though there had been 'intelligent' toys before, there had never been an Internet-connected doll. Styles: Cayla is available in 3 styles: Blonde, Brunette and African American. The UK saw a limited release of a Princess Edition. Distribution: My Friend Cayla is distributed by Vivid in the UK. Genesis is the US distributor. Awards: The doll was named 2014 Innovative Toy of the Year by the London Toy Industry Association, and was a top 10 toy for all key European retailers that holiday season. The doll was later sold in the United States market in August 2015. In 2015 My Friend Cayla won Most Wanted Dolls of 2015 from TTPM (Toys, Tots, Pets & More). Controversy: Ken Munro of security firm Pen Test Partners claimed he hacked the doll, and demonstrated the hack on the BBC World News Tech Tent program. Tim Medin from Counter Hack also hacked the doll by simply using Bluetooth to use it as a remote speaker and microphone, which could be used to communicate with children. "Cayla was basically the subject of a tech prank," said Peter Magalhaes, general manager of Cayla manufacturer Genesis. In February 2017 the German Federal Network Agency notified parents that they were obliged to "destroy" any Cayla in their possession, as it constitutes a concealed espionage device violating the German Telecommunications Act. The agency also considers the Bluetooth device insecure, allowing connections to Cayla's speaker and microphone within a 10 m (33 ft) radius. The doll has also been criticised by the Norwegian Consumer Council for allowing the use of the data collected from the child's speech for targeted advertisements and other commercial purposes and its sharing with third parties, as well as for hidden advertisements through the doll's positive statements about certain products and services. In the United States, the Federal Trade Commission is investigating similar complaints over whether My Friend Cayla's upload of child speech constitutes an undue violation of privacy. The resulting controversy led to the doll's inclusion in the Museum of Failure in Sweden, where similar failed products and services which were either commercial failures or are controversial in their own right are on display. The Spy Museum Berlin also has a Cayla doll on display, the first toy in its collection of various espionage and surveillance devices. 
The doll was donated by a German mother who found the doll in her daughter's room after hearing about a government order to destroy the dolls following a ban on its sale and possession.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thallium(I) hydroxide** Thallium(I) hydroxide: Thallium(I) hydroxide, also called thallous hydroxide, TlOH, is a hydroxide of thallium, with thallium in oxidation state +1. Synthesis: Thallium(I) hydroxide is obtained from the decomposition of thallium(I) ethoxide in water. CH3CH2OTl + H2O → TlOH + CH3CH2OHThis can also be done by direct reaction of thallium with ethanol and oxygen gas. 4 Tl + 2 CH3CH2OH + O2 → 2 CH3CH2OTl + 2 TlOHAnother method is the reaction between thallium(I) sulfate and barium hydroxide. Tl2SO4 + Ba(OH)2 → 2 TlOH + BaSO4 Properties: Thallous hydroxide is a strong base; it dissociates to the thallous ion, Tl+, except in strongly basic conditions. Tl+ resembles an alkali metal ion, such as Li+ or K+.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Single-responsibility principle** Single-responsibility principle: The single-responsibility principle (SRP) is a computer programming principle that states that "A module should be responsible to one, and only one, actor." The term actor refers to a group (consisting of one or more stakeholders or users) that requires a change in the module. Single-responsibility principle: Robert C. Martin, the originator of the term, expresses the principle as, "A class should have only one reason to change". Because of confusion around the word "reason" he has also clarified saying that the "principle is about people." In some of his talks, he also argues that the principle is, in particular, about roles or actors. For example, while they might be the same person, the role of an accountant is different from a database administrator. Hence, each module should be responsible for each role. History: The term was introduced by Robert C. Martin in his article "The Principles of OOD" as part of his Principles of Object Oriented Design, made popular by his 2003 book Agile Software Development, Principles, Patterns, and Practices. Martin described it as being based on the principle of cohesion, as described by Tom DeMarco in his book Structured Analysis and System Specification, and Meilir Page-Jones in The Practical Guide to Structured Systems Design. In 2014 Martin published a blog post titled "The Single Responsibility Principle" with a goal to clarify what was meant by the phrase "reason for change."[1] Example: Martin defines a responsibility as a reason to change, and concludes that a class or module should have one, and only one, reason to be changed (e.g. rewritten). As an example, consider a module that compiles and prints a report. Imagine such a module can be changed for two reasons. First, the content of the report could change. Second, the format of the report could change. These two things change for different causes. The single-responsibility principle says that these two aspects of the problem are really two separate responsibilities, and should, therefore, be in separate classes or modules. It would be a bad design to couple two things that change for different reasons at different times. Example: The reason it is important to keep a class focused on a single concern is that it makes the class more robust. Continuing with the foregoing example, if there is a change to the report compilation process, there is a greater danger that the printing code will break if it is part of the same class.
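To make the report example concrete, here is a minimal C++ sketch; the class and member names are invented for illustration. The content of the report and its formatting/printing live in separate classes, so a change to either concern touches only one class.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Responsible only for compiling the report's content.
// It changes only if *what* goes into the report changes.
class ReportCompiler {
public:
    std::vector<std::string> compile() const {
        return {"Revenue: 120", "Expenses: 80", "Profit: 40"};
    }
};

// Responsible only for formatting and printing.
// It changes only if *how* the report is presented changes.
class ReportPrinter {
public:
    void print(const std::vector<std::string>& lines) const {
        std::cout << "=== Report ===\n";
        for (const auto& line : lines) std::cout << line << '\n';
    }
};

int main() {
    ReportCompiler compiler;
    ReportPrinter printer;
    printer.print(compiler.compile());
    return 0;
}
```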
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Math.NET Numerics** Math.NET Numerics: Math.NET Numerics is an open-source numerical library for .NET and Mono, written in C# and F#. It features functionality similar to BLAS and LAPACK. History: Math.NET Numerics started in 2009 by merging the code and teams of dnAnalytics with Math.NET Iridium. It is influenced by ALGLIB, JAMA and Boost, among others, and has accepted numerous code contributions. It is part of the Math.NET initiative to build and maintain open mathematical toolkits for the .NET platform since 2002. Math.NET is used by several open source libraries and research projects, like MyMediaLite, FermiSim and LightField Retrieval, and various theses and papers. Features: The software library provides facilities for: Probability distributions: discrete, continuous and multivariate. Pseudo-random number generation, including Mersenne Twister MT19937. Real and complex linear algebra types and solvers with support for sparse matrices and vectors. LU, QR, SVD, EVD, and Cholesky decompositions. Matrix IO classes that read and write matrices from/to Matlab and delimited files. Complex number arithmetic and trigonometry. “Special” routines including the Gamma, Beta, Erf, modified Bessel and Struve functions. Interpolation routines, including Barycentric, Floater-Hormann. Linear Regression/Curve Fitting routines. Numerical Quadrature/Integration. Root finding methods, including Brent, Robust Newton-Raphson and Broyden. Descriptive Statistics, Order Statistics, Histogram, and Pearson Correlation Coefficient. Markov chain Monte Carlo sampling. Basic financial statistics. Fourier and Hartley transforms (FFT). Overloaded mathematical operators to simplify complex expressions. Runs under Microsoft Windows and platforms that support Mono. Optional support for Intel Math Kernel Library (Microsoft Windows and Linux) Optional F# extensions for more idiomatic usage.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pseudo-functor** Pseudo-functor: In mathematics, a pseudofunctor F is a mapping between 2-categories, or from a category to a 2-category, that is just like a functor except that F(f∘g)=F(f)∘F(g) and F(1)=1 do not hold as exact equalities but only up to coherent isomorphisms. The Grothendieck construction associates to a pseudofunctor a fibered category.
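To make the phrase "up to coherent isomorphisms" concrete, the following LaTeX fragment spells out the usual data of a pseudofunctor between 2-categories; this is a standard presentation added for illustration and is not text from the article.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
A pseudofunctor $F\colon \mathcal{C}\to\mathcal{D}$ assigns to each object $x$
an object $F(x)$, to each 1-morphism $f\colon x\to y$ a 1-morphism
$F(f)\colon F(x)\to F(y)$, and to each 2-morphism a 2-morphism, together with
invertible coherence 2-cells
\[
  \phi_{g,f}\colon F(g)\circ F(f) \Rightarrow F(g\circ f),
  \qquad
  \phi_{x}\colon \mathrm{id}_{F(x)} \Rightarrow F(\mathrm{id}_{x}),
\]
which are required to satisfy associativity and unit coherence conditions: the
two ways of comparing $F(h)\circ F(g)\circ F(f)$ with $F(h\circ g\circ f)$
agree, and composing with $\phi_{x}$ reduces $\phi_{f,\mathrm{id}_x}$ and
$\phi_{\mathrm{id}_y,f}$ to the unit constraints of $\mathcal{D}$.
\end{document}
```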
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glutinol synthase** Glutinol synthase: Glutinol synthase (EC 5.4.99.49) is an enzyme with the systematic name (3S)-2,3-epoxy-2,3-dihydrosqualene mutase (cyclizing, glutinol-forming). This enzyme catalyses the following chemical reaction: (3S)-2,3-epoxy-2,3-dihydrosqualene ⇌ glutinol. The enzyme from Kalanchoe daigremontiana also gives traces of other triterpenoids.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Parakeratosis pustulosa** Parakeratosis pustulosa: Parakeratosis pustulosa is a cutaneous condition which is exclusively seen in children, usually involving one finger, most commonly the thumb or index finger, with the affected nail showing subungual hyperkeratosis and onycholysis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Runway visual range** Runway visual range: In aviation, the runway visual range (RVR) is the distance over which a pilot of an aircraft on the centreline of the runway can see the runway surface markings delineating the runway or identifying its centre line. RVR is normally expressed in meters or feet. RVR is used to determine the landing and takeoff conditions for aircraft pilots, as well as the type of operational visual aids used at the airport. Measurement: Originally RVR was measured by a person, either by viewing the runway lights from the top of a vehicle parked on the runway threshold, or by viewing special angled runway lights from a tower at one side of the runway. The number of lights visible could then be converted to a distance to give the RVR. This is known as the human observer method and can still be used as a fall-back. Measurement: Today most airports use instrumented runway visual range (IRVR), which is measured either by devices called scatterometers, integrated units that offer simplified installation and can be installed as a single unit at a critical location along the runway, or by transmissometers, which are installed at one side of a runway relatively close to its edge. Normally three transmissometers are provided, one at each end of the runway and one at the midpoint. In the US, Forward Scatter RVRs are replacing transmissometers at most airports. According to the US Federal Aviation Administration: "There are approximately 279 RVR systems in the NAS, of which 242 are forward scatter NG RVR Systems and 34 are older Transmissometer Systems." Data reliability: Because IRVR data are localized information, the values obtained are not necessarily a reliable guide to what a pilot can actually expect to see. This is easily demonstrated when obscuration such as fog is variable: different values can apply simultaneously at the same physical point. Measurement: For example, a 2000 m runway could have reported touchdown, midpoint and rollout IRVR values of 700 m, 400 m and 900 m. If the actual RVR at the touchdown point (300 m from the threshold) is 700 m as reported, then a pilot could expect to see the light 700 m away, at the runway midpoint. If the pilot taxies to the midpoint and looks back through the same air mass, he or she must also be able to see the light at the touchdown point, but since the midpoint RVR is reported as 400 m, a light 700 m away must be invisible. Similarly, looking forward he or she cannot see the light in the rollout area, but according to the rollout RVR of 900 m, the light there is visible and has already been so for the last 200 m. Usage: RVR is used as one of the main criteria for minima on instrument approaches, as in most cases a pilot must obtain visual reference of the runway to land an aircraft. The maximum RVR range is 2,000 metres or 6,000 feet, above which it is not significant and thus does not need to be reported. RVRs are provided in METARs and are transmitted by air traffic controllers to aircraft making approaches to allow pilots to assess whether it is prudent and legal to make an approach. Usage: RVR is also the main criterion used to determine the category of visual aids that are installed at an airport. The International Civil Aviation Organization (ICAO) stipulates in its Annex 14 that for RVR values above 550 m, CAT I lighting shall be installed; if RVR is between 300 m and 549 m, then CAT II lighting is required. CAT IIIa is installed for RVR values between 175 m and 300 m. 
CAT IIIb is required for RVR values between 50 m and 175 m while there is no RVR limitation for CAT IIIc visual aids.
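The category thresholds quoted above lend themselves to a simple lookup. The following C++ sketch maps a reported RVR value (in metres) to the visual-aid category using exactly the thresholds stated in this article; it is an illustration only, not an operational tool, and the function name and the handling of boundary values are the author's own choices.

```cpp
#include <iostream>
#include <string>

// Map a reported RVR (in metres) to the visual-aid category, using the
// thresholds quoted in the article (boundary handling is illustrative).
std::string visualAidCategory(double rvrMetres) {
    if (rvrMetres > 550.0)  return "CAT I";
    if (rvrMetres >= 300.0) return "CAT II";    // 300 m to 549 m
    if (rvrMetres >= 175.0) return "CAT IIIa";  // 175 m to 300 m
    if (rvrMetres >= 50.0)  return "CAT IIIb";  // 50 m to 175 m
    return "CAT IIIc";                          // no RVR limitation
}

int main() {
    for (double rvr : {900.0, 400.0, 200.0, 100.0, 25.0})
        std::cout << rvr << " m -> " << visualAidCategory(rvr) << '\n';
    return 0;
}
```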
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Toposcope** Toposcope: A toposcope, topograph, or orientation table is a kind of graphic display erected at viewing points on hills, mountains or other high places which indicates the direction, and usually the distance, to notable landscape features which can be seen from that point. They are often placed in public parks, country parks, the grounds of stately homes, at popular vantage points (especially accompanying or built into triangulation stations) or places of historical note, such as battlefields.Toposcopes usually show the points of the compass, or at least North. Toposcope: Smaller toposcopes usually consist of a circular plaque, or a plaque with a circle marked on it, mounted horizontally on a plinth. They will have radiating lines indicating the direction to various landmarks, together with the distance and often a pictorial representation of the landmark. They are frequently constructed of a metal such as bronze, cast or etched, set on top of a concrete or stone block, which provides weather- and vandal-resistance. Toposcope: Large toposcopes may be circular paved areas, with numerous plaques around the perimeter, each indicating a particular feature of the landscape.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vehicle armour** Vehicle armour: Military vehicles are commonly armoured (or armored; see spelling differences) to withstand the impact of shrapnel, bullets, shells, rockets, and missiles, protecting the personnel inside from enemy fire. Such vehicles include armoured fighting vehicles like tanks, aircraft, and ships. Civilian vehicles may also be armoured. These vehicles include cars used by officials (e.g., presidential limousines), reporters and others in conflict zones or where violent crime is common. Civilian armoured cars are also routinely used by security firms to carry money or valuables to reduce the risk of highway robbery or the hijacking of the cargo. Vehicle armour: Armour may also be used in vehicles to protect from threats other than a deliberate attack. Some spacecraft are equipped with specialised armour to protect them against impacts from micrometeoroids or fragments of space debris. Modern aircraft powered by jet engines usually have a form of armour fitted, in the shape of an aramid composite Kevlar bandage around the fan casing or debris containment walls built into the casing of their gas turbine engines, to prevent injuries or airframe damage should the fan, compressor, or turbine blades break free. The design and purpose of the vehicle determines the amount of armour plating carried, as the plating is often very heavy and excessive amounts of armour restrict mobility. To reduce this problem, some new materials (nanomaterials) and material compositions are being researched, including buckypaper and aluminium foam armour plates. Materials: Metals. Steel: Rolled homogeneous armour is strong, hard, and tough (does not shatter when struck with a fast, hard blow). Steel with these characteristics is produced by processing cast steel billets of appropriate size and then rolling them into plates of required thickness. Rolling and forging (hammering the steel when it is red hot) irons out the grain structure in the steel, removing imperfections which would reduce the strength of the steel. Rolling also elongates the grain structure in the steel to form long lines, which enable the stress the steel is placed under when loaded to flow throughout the metal, and not be concentrated in one area. Materials: Aluminium: Aluminium is used when light weight is a necessity. It is most commonly used on APCs and armoured cars. While certainly not the strongest metal, it is cheap, lightweight, and tough enough that it can serve as easy armour. Iron: Wrought iron was used on ironclad warships. Early European iron armour consisted of 10 to 12.5 cm of wrought iron backed by up to one meter of solid wood. It has since been supplanted by steel, which is significantly stronger. Materials: Titanium: Titanium has almost twice the density of aluminium, but can have a yield strength similar to high strength steels, giving it a high specific strength. It also has a high specific resilience and specific toughness. So, despite being more expensive, it finds an application in areas where weight is a concern, such as personal armour and military aviation. Some notable examples of its use include the USAF A-10 Thunderbolt II and the Soviet/Russian-built Sukhoi Su-25 ground-attack aircraft, utilising a bathtub-shaped titanium enclosure for the pilot, as well as the Soviet/Russian Mil Mi-24 attack helicopter. Materials: Uranium: Because of its high density, depleted uranium can also be used in tank armour, sandwiched between sheets of steel armour plate. 
For instance, some late-production M1A1HA and M1A2 Abrams tanks built after 1998 have DU reinforcement as part of the armour plating in the front of the hull and the front of the turret, and there is a program to upgrade the rest (see Chobham armour). Materials: Plastic Plastic metal was a type of vehicle armour originally developed for merchant ships by the British Admiralty in 1940. The original composition was described as 50% clean granite of half-inch size, 43% of limestone mineral, and 7% of bitumen. It was typically applied in a layer two inches thick and backed by half an inch of steel. Plastic armour was highly effective at stopping armour piercing bullets because the hard granite particles would deflect the bullet, which would then lodge between plastic armour and the steel backing plate. Plastic armour could be applied by pouring it into a cavity formed by the steel backing plate and a temporary wooden form. Some main battle tank (MBT) armour utilises polymers, for example polyurethane as used in the "BDD" applique armour applied to modernized T-62 and T-55. Glass Bulletproof glass is a colloquial term for glass that is particularly resistant to being penetrated when struck by bullets. The industry generally refers to it as bullet-resistant glass or transparent armour. Bullet-resistant glass is usually constructed using a strong but transparent material such as polycarbonate thermoplastic or by using layers of laminated glass. The desired result is a material with the appearance and light-transmitting behaviour of standard glass, which offers varying degrees of protection from small arms fire. Materials: The polycarbonate layer, usually consisting of products such as Armormax, Makroclear, Cyrolon, Lexan or Tuffak, is often sandwiched between layers of regular glass. The use of plastic in the laminate provides impact-resistance, such as physical assault with a hammer, an axe, etc. The plastic provides little in the way of bullet-resistance. The glass, which is much harder than plastic, flattens the bullet and thereby prevents penetration. This type of bullet-resistant glass is usually 70–75 mm (2.8–3.0 in) thick. Materials: Bullet-resistant glass constructed of laminated glass layers is built from glass sheets bonded together with polyvinyl butyral, polyurethane or ethylene-vinyl acetate. This type of bullet-resistant glass has been in regular use on combat vehicles since World War II; it is typically about 100–120 mm (3.9–4.7 in) thick and is usually extremely heavy. Newer materials are being developed. One such, aluminium oxynitride, is much lighter but at US$10–15 per square inch is much more costly. Materials: Ceramic Ceramic's precise mechanism for defeating HEAT was uncovered in the 1980s. High speed photography showed that the ceramic material shatters as the HEAT round penetrates, the highly energetic fragments destroying the geometry of the metal jet generated by the hollow charge, greatly diminishing the penetration. Ceramic layers can also be used as part of composite armour solutions. The high hardness of some ceramic materials serves as a disruptor that shatters and spreads the kinetic energy of projectiles. Materials: Composite Composite armour is armour consisting of layers of two or more materials with significantly different physical properties; steel and ceramics are the most common types of material in composite armour. 
Composite armour was initially developed in the 1940s, although it did not enter service until much later and the early examples are often ignored in the face of newer armour such as Chobham armour. Composite armour's effectiveness depends on its composition and may be effective against kinetic energy penetrators as well as shaped charge munitions; heavy metals are sometimes included specifically for protection from kinetic energy penetrators. Materials: Composite armour used on modern Western and Israeli main battle tanks largely consists of non-explosive reactive armour (NERA) elements, a type of reactive armour. These elements are often a laminate consisting of two hard plates (usually high hardness steel) with some low density interlayer material between them. Upon impact, the interlayer swells and moves the plates, disrupting HEAT 'jets' and possibly degrading kinetic energy projectiles. Behind these elements will be some backing element designed to stop the degraded jet or projectile element, which may be of high hardness steel, or some composite of steel and ceramic or possibly uranium. Materials: Soviet main battle tanks from the T-64 onward utilised composite armour which often consisted of some low density filler between relatively thick steel plates or castings, for example Combination K. For example, the T-64 turret front and cheeks were originally filled with aluminium, and then with ceramic balls and aluminium, whilst some models of the T-72 feature a glass filler called "Kvartz". The tank glacis was often a sandwich of steel and some low density filler, either textolite (a fibreglass reinforced polymer) or ceramic plates. Later T-80 and T-72 turrets contained NERA elements, similar to those discussed above. Ships: Belt armour is a layer of armour-plating outside the hull of warships, typically on battleships, battlecruisers, cruisers and some aircraft carriers. Typically, the belt covers from the deck down to some way below the waterline of the ship. If built within the hull, rather than forming the outer hull, it can be fitted at an inclined angle to improve the protection. Ships: When struck by a shell or torpedo, the belt armour is designed to prevent penetration, by either being too thick for the warhead to penetrate, or sloped to a degree that would deflect either projectile. Often, the main belt armour was supplemented with a torpedo bulkhead spaced several meters behind the main belt, designed to maintain the ship's watertight integrity even if the main belt were penetrated. Ships: The air-space between the belt and the hull also adds buoyancy. Several wartime vessels had belt armour that was thinner or shallower than was desirable, to speed production and conserve resources. Deck armour on aircraft carriers is usually at the flight deck level, but on some early carriers was at the hangar deck. (See armoured flight deck.) Aircraft: Armour plating is not common on aircraft, which generally rely on their speed and maneuverability to avoid attacks from enemy aircraft and ground fire, rather than trying to resist impacts. Additionally, any armour capable of stopping large-calibre anti-aircraft fire or missile fragments would result in an unacceptable weight penalty. So, only the vital parts of an aircraft, such as the ejection seat and engines, are usually armoured. This is one area where titanium is used extensively as armour plating. 
For example, in the American Fairchild Republic A-10 Thunderbolt II and the Soviet-built Sukhoi Su-25 ground attack aircraft, as well as the Mil Mi-24 Hind ground-attack helicopter, the pilot sits in a titanium enclosure known as the "bathtub" for its shape. In addition, the windscreens of larger aircraft are generally made of impact-resistant, laminated materials, even on civilian craft, to prevent damage from bird strikes or other debris. Armoured fighting vehicles: The most heavily armoured vehicles today are the main battle tanks, which are the spearhead of the ground forces, and are designed to withstand anti-tank guided missiles, kinetic energy penetrators, high-explosive anti-tank weapons, NBC threats and in some tanks even steep-trajectory shells. The Israeli Merkava tanks were designed in a way that each tank component functions as added back-up armour to protect the crew. Outer armour is modular and enables quickly replacing damaged parts. Armoured fighting vehicles: Layout For efficiency, the heaviest armour on an armoured fighting vehicle (AFV) is placed on its front. Tank tactics require the vehicle to always face the likely direction of enemy fire as much as possible, even in defence or withdrawal operations. Armoured fighting vehicles: Sloping and curving armour can both increase its protection. Given a fixed thickness of armour plate, a projectile striking at an angle must penetrate more armour than one impacting perpendicularly. An angled surface also increases the chance of deflecting a projectile. This can be seen on v-hull designs, which direct the force of an Improvised explosive device or landmine away from the crew compartment, increasing crew survivability. Armoured fighting vehicles: Spall liners Beginning during the Cold War, many AFVs have spall liners inside of the armour, designed to protect crew and equipment inside from fragmentation (spalling) released from the impact of enemy shells, especially high-explosive squash head warheads. Spall liners are made of aramids (Kevlar, Twaron), UHMWPE (Dyneema, Spectra Shield), or similar materials. Appliqué Appliqué armour, or add-on armour, consists of extra plates mounted onto the hull or turret of an AFV. The plates can be made of any material and are designed to be retrofitted to an AFV to withstand weapons that can penetrate the original armour of the vehicle. An advantage of appliqué armour is the possibility to tailor a vehicle's protection level to a specific threat scenario. Armoured fighting vehicles: Improvised Vehicle armour is sometimes improvised in the midst of an armed conflict by vehicle crews or individual units. In World War II, British, Canadian and Polish tank crews welded spare strips of tank track to the hulls of their Sherman tanks. U.S. tank crews often added sand bags in the hull and turrets on Sherman tanks, often in an elaborate cage made of girders. Some Sherman tanks were up-armoured in the field with glacis plates and other armour cut from knocked-out tanks to create Improvised Jumbos, named after the heavily armoured M4A3E2 assault tank. In the Vietnam War, U.S. "gun trucks" were armoured with sandbags and locally fabricated steel armour plate. More recently, U.S. troops in Iraq armoured Humvees and various military transport vehicles with scrap materials: this came to be known as "hillbilly armour" or "haji armour" by the Americans. 
Moreover, there was the Killdozer incident, with the modified bulldozer being armoured with steel and concrete composite, which proved to be highly resistant to small arms. Armoured fighting vehicles: Spaced Armour with two or more plates spaced a distance apart, called spaced armour, has been in use since the First World War, where it was used on the Schneider CA1 and Saint-Chamond tanks. Spaced armour can be advantageous in several situations. For example, it can reduce the effectiveness of kinetic energy penetrators because the interaction with each plate can cause the round to tumble, deflect, deform, or disintegrate. This effect can be enhanced when the armour is sloped. Spaced armour can also offer increased protection against HEAT projectiles. This occurs because the shaped charge warhead can detonate prematurely (at the first surface), so that the metal jet that is produced loses its coherence before reaching the main armour and impacting over a broader area. Sometimes the interior surfaces of these hollow cavities are sloped, presenting angles to the anticipated path of the shaped charge's jet in order to further dissipate its power. Taken to the extreme, relatively thin armour plates, metal mesh, or slatted plates, much lighter than fully protective armour, can be attached as side skirts or turret skirts to provide additional protection against such weapons. This can be seen in middle and late-World War II German tanks, as well as many modern AFVs. Taken as a whole, spaced armour can provide significantly increased protection while saving weight. Armoured fighting vehicles: The analogous Whipple shield uses the principle of spaced armour to protect spacecraft from the impacts of very fast micrometeoroids. The impact with the first wall melts or breaks up the incoming particle, causing fragments to be spread over a wider area when striking the subsequent walls. Armoured fighting vehicles: Sloped Sloped armour is armour that is mounted at a non-vertical and non-horizontal angle, typically on tanks and other armoured fighting vehicles. For a given normal to the surface of the armour, its plate thickness, increasing armour slope improves the armour's level of protection by increasing the thickness measured on a horizontal plane, while for a given area density of the armour the protection can be either increased or reduced by other sloping effects, depending on the armour materials used and the qualities of the projectile hitting it. The increased protection caused by increasing the slope while keeping the plate thickness constant, is due to a proportional increase of area density and thus mass, and thus offers no weight benefit. Therefore, the other possible effects of sloping, such as deflection, deforming and ricochet of a projectile, have been the reasons to apply sloped armour in armoured vehicles design. Another motive is the fact that sloping armour is a more efficient way of covering the necessary equipment since it encloses less volume with less material. The sharpest angles are usually seen on the frontal glacis plate, both as it is the hull side most likely to be hit and because there is more room to slope in the longitudinal direction of a vehicle. Armoured fighting vehicles: Reactive Explosive reactive armour, initially developed by German researcher Manfred Held while working in Israel, uses layers of high explosive sandwiched between steel plates. 
When a shaped-charge warhead hits, the explosive detonates and pushes the steel plates into the warhead, disrupting the flow of the charge's liquid metal penetrator (usually copper at around 500 degrees Celsius; it can be made to flow like water by sufficient pressure). Traditional "light" ERA is less effective against kinetic penetrators. "Heavy" reactive armour, however, offers better protection. The only example currently in widespread service is Russian Kontakt-5. Explosive reactive armour poses a threat to friendly troops near the vehicle. Armoured fighting vehicles: Non-explosive reactive armour is an advanced spaced armour which uses materials which change their geometry so as to increase protection under the stress of impact. Active protection systems use a sensor to detect an incoming projectile and explosively launch a counter-projectile into its path. Armoured fighting vehicles: Slat Slat armour is designed to protect against anti-tank rocket and missile attacks, where the warhead is a shaped charge. The slats are spaced so that the warhead is either partially deformed before detonating, or the fuzing mechanism is damaged, thereby preventing detonation entirely. As shaped charges rely on very specific structure to create a jet of hot metal, any disruption to this structure greatly reduces the effectiveness of the warhead. Slat armour can be defeated by tandem-charge designs such as the RPG-27 and RPG-29. Armoured fighting vehicles: Electric armour Electric armour is a recent development in the United Kingdom by the Defence Science and Technology Laboratory. A vehicle is fitted with two thin shells, separated by insulating material. The outer shell holds an enormous electric charge, while the inner shell is at ground. If an incoming HEAT jet penetrates the outer shell and forms a bridge between the shells, the electrical energy discharges through the jet, disrupting it. Trials have so far been extremely promising, and it is hoped that improved systems could protect against KE penetrators. The developers of the Future Rapid Effect System (FRES) series of armoured vehicles are considering this technology.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biohit** Biohit: Biohit Oyj is a Finnish company which develops, manufactures, and markets biotech and diagnostics products for use in research and health care. Summary: Biohit was established in 1988 in Finland by Professor Osmo Suovaniemi (M.D., Ph.D.), previously known as the founder of Labsystems. He stepped down from the CEO position in June 2011, but remains actively involved with the company through positions on several advisory boards. Biohit is a globally operating Finnish biotechnology company. Biohit is headquartered in Helsinki and has subsidiaries in Italy and the UK. Biohit's Series B share (BIOBV) has been quoted on NASDAQ OMX Helsinki (Small cap/Healthcare) since 1999. Semi Korpela was appointed CEO in 2011. Markets: Biohit's two businesses are acetaldehyde-eliminating products and diagnostics. More than 90% of Biohit's sales occur outside Finland. Markets: Diagnostic tests: The diagnostic product range of Biohit includes the GastroPanel examinations, which are used to aid diagnosis of Helicobacter pylori infection and atrophic gastritis from a blood sample. They are also ideal tools for identification of patients at increased risk of gastric cancer, peptic ulcer disease, gastroesophageal reflux disease (GERD) and deficiencies of vitamin B12, calcium and iron. In addition to this, Biohit offers quick tests for the detection of lactose intolerance, Helicobacter pylori infection and celiac disease, and for screening of colorectal cancer. Acetaldehyde binding products: In 2018 a new, nicotine-free smoking cessation product, Acetium Lozenge, was launched. Markets: Monoclonal antibodies: Biohit also develops and manufactures monoclonal antibodies for research use and for use as raw materials for the diagnostic industry. Instruments: Adopting a systems approach, Biohit also provides laboratory equipment, such as microplate instruments and automates, as well as liquid handling products to support Biohit ELISA tests. Distribution network: A network of 40 distributors, including 2 subsidiaries, in more than 40 countries sells Biohit products. In the U.S. the official distributor is Bio Testing Supplies, a division of Avrio Genetics. Research, development, and intellectual property: Biohit spends roughly 30% of net sales on basic research each year and pursues an aggressive patenting strategy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Michelle Haber** Michelle Haber: Michelle Haber (born 18 October 1956) is an Australian cancer researcher. Michelle Haber: Haber is an Australian scientist in the field of childhood cancer research. She serves as the Executive Director of Children's Cancer Institute and is a professor at the School of Women’s and Children’s Health, University of New South Wales. She is known for her discoveries in the area of chemotherapy resistance in neuroblastoma and for translating these discoveries into new therapeutics that are currently in clinical trials. Education: Haber attended Mount Scopus Memorial College in Melbourne and, when her family moved to Sydney, attended Moriah College, graduating in 1973. She completed a clinical psychology degree at University of New South Wales and was awarded a University Medal. She obtained her PhD from the School of Pathology at the University of New South Wales in 1984 - her thesis was entitled Structural analysis by BD-cellulose chromatography of mammalian DNA during repair, replication and degradation. She was awarded a Doctor of Science honoris causa by the University of New South Wales in 2008. Career: In 1982, during her PhD studies, Haber spent three months as Visiting Research Fellow at the Department of Molecular Virology in Hadassah Medical Centre, Hebrew University of Jerusalem. Her first postdoctoral position was as at Children’s Leukaemia and Cancer Research Unit, a precursor to Children's Cancer Institute then located at the Prince of Wales Children’s Hospital, Randwick. Having joined as a Staff Scientist in 1984, she was promoted to Senior Research Fellow in 1992, Principal Research Fellow in 1996, Director in 2000 and Executive Director in 2003. Haber also holds a conjoint appointment as Professor in the Faculty of Medicine at the University of New South Wales.Under her leadership Children's Cancer Institute, now located in the UNSW Lowy Cancer Research Centre, has tripled in size and grown from a little known group to become the largest children’s cancer research facility in the region. Research: Haber’s early studies were amongst the first characterizing the complex molecular mechanisms underlying therapy-related drug resistance. With her collaborators, she identified the relationship between high expression of multidrug transporter gene MRP1, and the malignant phenotype of neuroblastoma and poor clinical outcome. These studies provided the first definitive demonstration of clinical relevance of the MRP1 gene in solid tumours, resulting in a large international clinical study which confirmed the independent prognostic significance of MRP1 expression in neuroblastoma and established MRP1 inhibition as a potential new treatment for this disease.By high-throughput chemical screening of small molecule libraries, Haber and her colleagues have also developed novel MRP1 inhibitors and patented and licensed the compounds for the treatment of neuroblastoma and other MRP1-associated malignancies. This led to a $3.1M award from the Australian Cancer Research Foundation to establish a Drug Discovery Centre for Childhood Cancer in the UNSW Lowy Cancer Research Centre, which is currently developing a pipeline of potential new drugs for treating childhood and adult malignancies. 
Research: Haber and her collaborators have also identified the role of ATP-binding cassette transporter genes (ABC transporters) in neuroblastoma biology, demonstrating that their expression predicts for poor clinical outcome in neuroblastoma but, unexpectedly, this phenomenon was not due to the ABC proteins’ role in drug transport, but through an independent pathway that influences fundamental aspects of tumour biology. A further study on ovarian cancer and ABCA1 has extended the discovery to common adult cancers. Service to the scientific community: Haber is a long-term member of the International Neuroblastoma Risk Group Committee, which makes recommendations regarding standardised protocols and best practice for identifying/utilising prognostic indicators for neuroblastoma treatment risk assessment. From 2006 to 2014, Haber served on the steering committee of the Advances in Neuroblastoma Research Association (ANRA), the peak international body for neuroblastoma research, and was President of this organisation from 2010 to 2012. In 2011, Haber also played a key role in the establishment of the Kids Cancer Alliance and currently serves on this organization's executive management committee. Haber is convenor of the 2016 Advances in Neuroblastoma Research conference (Cairns, Australia), one of the largest specialist childhood cancer conferences internationally. Awards and honours: In 2007 Haber was appointed a Member of the Order of Australia (AM) for service to science in the field of childhood cancer, to scientific education, and to the community, and she was also named as one of Australia’s 25 ‘True Leaders’ by Financial Review’s Boss Magazine. In 2008, Haber was awarded a DSc (Honoris Causa) by the University of New South Wales for eminent service to the cancer research community. She has received numerous awards for research excellence, including the NSW Science & Engineering Award for Biomedical Sciences (2011), and in that same year was a New South Wales finalist for Australian of the Year. In 2012, Haber (with her long-time collaborators Norris and Marshall) received the Cancer Institute NSW Premier’s Award for Excellence in Translational Cancer Research and was also highlighted with a National Health and Medical Research Council (NHMRC) Ten of the Best Award. In 2013, she was showcased, again with Norris and Marshall, in an article in The Lancet. That year she was also a finalist in the Australian Museum Eureka Prize for Medical Research Translation, and in 2014 she received the NSW Premier's Award for Outstanding Cancer Researcher of the Year. Haber was elected a Fellow of the newly formed Australian Academy of Health and Medical Sciences in March 2015 and elected a Fellow of the Australian Academy of Science in 2022.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dowker–Thistlethwaite notation** Dowker–Thistlethwaite notation: In the mathematical field of knot theory, the Dowker–Thistlethwaite (DT) notation or code for a knot is a sequence of even integers. The notation is named after Clifford Hugh Dowker and Morwen Thistlethwaite, who refined a notation originally due to Peter Guthrie Tait. Definition: To generate the Dowker–Thistlethwaite notation, traverse the knot using an arbitrary starting point and direction. Label each of the n crossings with the numbers 1, ..., 2n in order of traversal (each crossing is visited and labelled twice), with the following modification: if the label is an even number and the strand followed crosses over at the crossing, then change the sign on the label to be negative. When finished, each crossing will be labelled with a pair of integers, one even and one odd. The Dowker–Thistlethwaite notation is the sequence of even integer labels associated with the labels 1, 3, ..., 2n − 1 in turn. Example: For example, a knot diagram may have crossings labelled with the pairs (1, 6) (3, −12) (5, 2) (7, 8) (9, −4) and (11, −10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6 −12 2 8 −4 −10. Uniqueness and counting: Dowker and Thistlethwaite have proved that the notation specifies prime knots uniquely, up to reflection. In the more general case, a knot can be recovered from a Dowker–Thistlethwaite sequence, but the recovered knot may differ from the original by either being a reflection or by having any connected sum component reflected in the line between its entry/exit points – the Dowker–Thistlethwaite notation is unchanged by these reflections. Knot tabulations typically consider only prime knots and disregard chirality, so this ambiguity does not affect the tabulation. Uniqueness and counting: The ménage problem, posed by Tait, concerns counting the number of different number sequences possible in this notation.
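Once a diagram's crossings have been labelled as in the Definition above, reading off the DT sequence is mechanical. A minimal sketch (Python), using the example pairs from this entry and assuming the over-crossing sign rule has already been applied during the traversal:

```python
# Given the (odd, signed even) label pairs produced by a traversal,
# the DT sequence lists the even partner of each odd label 1, 3, ..., 2n-1.
def dt_sequence(pairs):
    by_odd = {}
    for x, y in pairs:
        odd, even = (x, y) if x % 2 else (y, x)   # exactly one label in each pair is odd
        by_odd[odd] = even
    return [by_odd[odd] for odd in sorted(by_odd)]

# The example labelling from the text.
pairs = [(1, 6), (3, -12), (5, 2), (7, 8), (9, -4), (11, -10)]
print(dt_sequence(pairs))  # [6, -12, 2, 8, -4, -10]
```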
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CDR computerized assessment system** CDR computerized assessment system: The CDR computerized assessment system (CDR system) is a computerized battery of cognitive tests designed in the late 1970s by Professor Keith Wesnes at the University of Reading in Berkshire, England, for repeated testing in clinical trials. Task stimuli are presented on a laptop computer and participants respond via 'YES' and 'NO' buttons on a two-button response box, which records both the accuracy and reaction time. The CDR system is a computer based cognitive testing tool, developed to assess both enhancement and impairment of human cognitive performance. The CDR system's simplicity, sensitivity and specificity make it suitable for use in clinical trials with either healthy subjects or diseased patient populations. The CDR system software is loaded onto laptop computers for testing in medical clinics. An internet version of the CDR system is available using keyboard commands to measure responses. Ancillary equipment is used for specific cognitive tests such as a postural stability (sway) meter, a critical flicker fusion device or joysticks for CDR's tracking test. CDR computerized assessment system: The CDR system is a series of brief neuropsychological tests that assess major aspects of cognitive function known to be influenced by a wide variety of factors including trauma, fatigue, stress, nutrition, ageing, disease (both physical and mental), medicines and drugs. The standard battery of cognitive tests in the CDR system includes immediate/delayed word recall, word recognition, picture recognition, simple reaction time, digit vigilance, choice reaction time, numeric working memory, and spatial working memory. Individual tests can be added to or removed from the battery to target specific cognitive domains. Examples of tests that can be added include measurements of executive function, mood states, social cognition, motor function and postural stability. The standard battery of tests lasts 18 minutes. CDR computerized assessment system: The CDR system tasks have proven validity in measuring cognitive function in a variety of domains including attention, working memory, episodic secondary memory, executive function, and motor skill. In September 2009, Cognitive Drug Research was acquired by United BioSource Corporation. UBC division Bracket continues to offer the CDR System for use in clinical research.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quantum photoelectrochemistry** Quantum photoelectrochemistry: Quantum photoelectrochemistry is the investigation of the quantum mechanical nature of photoelectrochemistry, the subfield of study within physical chemistry concerned with the interaction of light with electrochemical systems, typically through the application of quantum chemical calculations. Quantum photoelectrochemistry provides an expansion of quantum electrochemistry to processes that also involve the interaction with light (photons). It therefore also includes essential elements of photochemistry. Key aspects of quantum photoelectrochemistry are calculations of optical excitations, photoinduced electron and energy transfer processes, excited state evolution, as well as interfacial charge separation and charge transport in nanoscale energy conversion systems. Quantum photoelectrochemistry: Quantum photoelectrochemistry in particular provides fundamental insight into basic light-harvesting and photoinduced electro-optical processes in several emerging solar energy conversion technologies for generation of both electricity (photovoltaics) and solar fuels. Examples of such applications where quantum photoelectrochemistry provides insight into fundamental processes include photoelectrochemical cells, semiconductor photochemistry, as well as light-driven electrocatalysis in general, and artificial photosynthesis in particular. Quantum photoelectrochemistry constitutes an active line of current research, with publications appearing in recent years that relate to different types of materials and processes, including light-harvesting complexes, light-harvesting polymers, and nanocrystalline semiconductor materials.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Open kinetic chain exercises** Open kinetic chain exercises: Open chain exercises (OKC) are exercises that are performed where the hand or foot is free to move. The opposite of OKC are closed kinetic chain exercises (CKC). Both are effective for strengthening and rehabilitation objectives. Closed-chain exercises tend to offer more "functional" athletic benefits because of their ability to recruit more muscle groups and require additional skeletal stabilization. Properties: Single-joint versions of these exercises are typically non-weight bearing, with the movement occurring at the hinge joints (elbow or knee). If there is any weight applied, it is often applied to the distal portion of the limb. Open chain exercises are postulated to be advantageous in rehabilitation settings because they can be easily manipulated to selectively target specific muscles, or specific heads of certain muscles, more effectively than their closed chain counterparts, at different phases of contraction. Open kinetic chain upper body exercises: Biceps curl Lying triceps extensions Bench press
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Personality test** Personality test: A personality test is a method of assessing human personality constructs. Most personality assessment instruments (despite being loosely referred to as "personality tests") are in fact introspective (i.e., subjective) self-report questionnaire (Q-data, in terms of LOTS data) measures or reports from life records (L-data) such as rating scales. Attempts to construct actual performance tests of personality have been very limited, even though Raymond Cattell with his colleague Frank Warburton compiled a list of over 2000 separate objective tests that could be used in constructing objective personality tests. One exception, however, was the Objective-Analytic Test Battery, a performance test designed to quantitatively measure 10 factor-analytically discerned personality trait dimensions. A major problem with both L-data and Q-data methods is that, because of item transparency, rating scales and self-report questionnaires are highly susceptible to motivational and response distortion, ranging all the way from lack of adequate self-insight (or biased perceptions of others) to downright dissimulation (faking good/faking bad) depending on the reason/motivation for the assessment being undertaken. The first personality assessment measures were developed in the 1920s and were intended to ease the process of personnel selection, particularly in the armed forces. Since these early efforts, a wide variety of personality scales and questionnaires have been developed, including the Minnesota Multiphasic Personality Inventory (MMPI), the Sixteen Personality Factor Questionnaire (16PF), and the Comrey Personality Scales (CPS), among many others. Although popular especially among personnel consultants, the Myers–Briggs Type Indicator (MBTI) has numerous psychometric deficiencies. More recently, a number of instruments based on the Five Factor Model of personality have been constructed, such as the Revised NEO Personality Inventory. However, the Big Five and related Five Factor Model have been challenged for accounting for less than two-thirds of the known trait variance in the normal personality sphere alone. Estimates of how much the personality assessment industry in the US is worth range anywhere from $2 billion to $4 billion a year (as of 2013). Personality assessment is used in a wide range of contexts, including individual and relationship counseling, clinical psychology, forensic psychology, school psychology, career counseling, employment testing, occupational health and safety and customer relationship management. History: The origins of personality assessment date back to the 18th and 19th centuries, when personality was assessed through phrenology, the measurement of bumps on the human skull, and physiognomy, which assessed personality based on a person's outer appearances. Sir Francis Galton took another approach to assessing personality late in the 19th century. Based on the lexical hypothesis, Galton estimated the number of adjectives that described personality in the English dictionary. Galton's list was eventually refined by Louis Leon Thurstone to 60 words that were commonly used for describing personality at the time. Through factor analyzing responses from 1300 participants, Thurstone was able to reduce this severely restricted pool of 60 adjectives to seven common factors.
This procedure of factor analyzing common adjectives was later utilized by Raymond Cattell (the 7th most highly cited psychologist of the 20th Century, based on the peer-reviewed journal literature), who subsequently utilized a data set of over 4000 affect terms from the English dictionary that eventually resulted in construction of the Sixteen Personality Factor Questionnaire (16PF), which also measured up to eight second-stratum personality factors. Of the many introspective (i.e., subjective) self-report instruments constructed to measure the putative Big Five personality dimensions, perhaps the most popular has been the Revised NEO Personality Inventory (NEO-PI-R). However, the psychometric properties of the NEO-PI-R (including its factor analytic/construct validity) have been severely criticized. Another early personality instrument was the Woodworth Personal Data Sheet, a self-report inventory developed for World War I and used for the psychiatric screening of new draftees. Overview: There are many different types of personality assessment measures. The self-report inventory involves administration of many items requiring respondents to introspectively assess their own personality characteristics. This is highly subjective, and because of item transparency, such Q-data measures are highly susceptible to motivational and response distortion. Respondents are required to indicate their level of agreement with each item using a Likert scale or, more accurately, a Likert-type scale. An item on a personality questionnaire, for example, might ask respondents to rate the degree to which they agree with the statement "I talk to a lot of different people at parties" on a scale from 1 ("strongly disagree") to 5 ("strongly agree"). Overview: Historically, the most widely used multidimensional personality instrument is the Minnesota Multiphasic Personality Inventory (MMPI), a psychopathology instrument originally designed to assess archaic psychiatric nosology. In addition to subjective/introspective self-report inventories, there are several other methods for assessing human personality, including observational measures, ratings of others, projective tests (e.g., the TAT and Ink Blots), and actual objective performance tests (T-data). Topics: Norms The meaning of personality test scores is difficult to interpret in a direct sense. For this reason, substantial effort is made by producers of personality tests to produce norms to provide a comparative basis for interpreting a respondent's test scores. Common formats for these norms include percentile ranks, z scores, sten scores, and other forms of standardized scores. Topics: Test development A substantial amount of research and thinking has gone into the topic of personality test development. Development of personality tests tends to be an iterative process whereby a test is progressively refined. Test development can proceed on theoretical or statistical grounds. There are three commonly used general strategies: Inductive, Deductive, and Empirical. Scales created today will often incorporate elements of all three methods. Topics: Deductive assessment construction begins by selecting a domain or construct to measure. The construct is thoroughly defined by experts and items are created which fully represent all the attributes of the construct definition. Test items are then selected or eliminated based upon which items will result in the strongest internal validity for the scale.
Measures created through deductive methodology are equally valid and take significantly less time to construct compared to inductive and empirical measures. The clearly defined and face valid questions that result from this process make them easy for the person taking the assessment to understand. Although subtle items can be created through the deductive process, these measures often are not as capable of detecting lying as other methods of personality assessment construction. Inductive assessment construction begins with the creation of a multitude of diverse items. The items created for an inductive measure are not intended to represent any theory or construct in particular. Once the items have been created they are administered to a large group of participants. This allows researchers to analyze natural relationships among the questions and label components of the scale based upon how the questions group together. Several statistical techniques can be used to determine the constructs assessed by the measure. Exploratory Factor Analysis and Confirmatory Factor Analysis are two of the most common data reduction techniques that allow researchers to create scales from responses on the initial items. The Five Factor Model of personality was developed using this method. Advanced statistical methods include the opportunity to discover previously unidentified or unexpected relationships between items or constructs. It also may allow for the development of subtle items that prevent test takers from knowing what is being measured and may represent the actual structure of a construct better than a pre-developed theory. Criticisms include a vulnerability to finding item relationships that do not apply to a broader population, difficulty identifying what may be measured in each component because of confusing item relationships, or constructs that were not fully addressed by the originally created questions. Empirically derived personality assessments require statistical techniques. One of the central goals of empirical personality assessment is to create a test that validly discriminates between two distinct dimensions of personality. Empirical tests can take a great deal of time to construct. In order to ensure that the test is measuring what it is purported to measure, psychologists first collect data through self- or observer reports, ideally from a large number of participants.
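As an illustration of the inductive, factor-analytic route sketched above, the following is a minimal Python example; the simulated Likert responses, the choice of five factors, and the use of scikit-learn's FactorAnalysis (with varimax rotation, available in recent scikit-learn versions) are assumptions made for demonstration, not a prescribed procedure:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated (purely illustrative) 1-5 Likert responses: 500 respondents x 20 items.
rng = np.random.default_rng(0)
item_responses = rng.integers(1, 6, size=(500, 20)).astype(float)

# Exploratory factor analysis: let the data suggest how items group together.
fa = FactorAnalysis(n_components=5, rotation="varimax")  # varimax needs scikit-learn >= 0.24
fa.fit(item_responses)

# Items that load most strongly on the same factor are candidates for one scale.
loadings = fa.components_.T  # shape (n_items, n_factors)
for i, row in enumerate(loadings, start=1):
    print(f"item {i}: strongest loading on factor {int(np.argmax(np.abs(row))) + 1}")
```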
Direct observation can help identify job applicants (e.g., work samples) who are likely to be successful or maternal attachment in young children (e.g., Mary Ainsworth's strange situation). The object of the method is to directly observe genuine behaviors in the target. A limitation of direct observation is that the target persons may change their behavior because they know that they are being observed. A second limitation is that some behavioral traits are more difficult to observe (e.g., sincerity) than others (e.g., sociability). A third limitation is that direct observation is more expensive and time-consuming than a number of other methods (e.g., self-report). Topics: Personality tests in the workplace Though personality tests date back to the early 20th century, it was not until 1988, when it became illegal in the United States for employers to use polygraphs, that they began to more broadly utilize personality tests. The idea behind these personality tests is that employers can reduce their turnover rates and prevent economic losses in the form of people prone to thievery, drug abuse, emotional disorders or violence in the workplace. There is a chance that an applicant may fake responses to personality test items in order to appear more attractive to the employing organization than the individual actually is. Personality tests are often part of management consulting services, as having a certification to conduct a particular test is a way for a consultant to offer an additional service and demonstrate their qualifications. The tests are used in narrowing down potential job applicants, as well as in deciding which employees are more suitable for promotion. The United States federal government is a notable customer of personality test services outside the private sector, with approximately 200 federal agencies, including the military, using personality assessment services. Despite evidence showing personality tests to be one of the least reliable metrics in assessing job applicants, they remain popular as a way to screen candidates. Topics: Test evaluation There are several criteria for evaluating a personality test. For a test to be successful, users need to be sure that (a) test results are replicable and (b) the test measures what its creators purport it to measure. Fundamentally, a personality test is expected to demonstrate reliability and validity. Reliability refers to the extent to which test scores, if a test were administered to a sample twice within a short period of time, would be similar in both administrations. Test validity refers to evidence that a test measures the construct (e.g., neuroticism) that it is supposed to measure. Topics: Analysis A respondent's responses are used to compute the analysis. Analysis of data is a long process. Two major theories are used here: classical test theory (CTT), used for the observed score; and item response theory (IRT), "a family of models for persons' responses to items". The two theories focus upon different 'levels' of responses and researchers are implored to use both in order to fully appreciate their results. Topics: Non-response Firstly, item non-response needs to be addressed. Non-response can either be unit, where a person gave no response for any of the n items, or item, i.e., individual question. Unit non-response is generally dealt with by exclusion. Item non-response should be handled by imputation – the method used can vary between test and questionnaire items.
Topics: Scoring The conventional method of scoring items is to assign '0' for an incorrect answer and '1' for a correct answer. When tests have more response options (e.g., multiple choice items), scores such as '0' for an incorrect answer, '1' for a partly correct answer and '2' for a correct answer can be assigned. Personality tests can also be scored using a dimensional (normative) or a typological (ipsative) approach. Dimensional approaches such as the Big 5 describe personality as a set of continuous dimensions on which individuals differ. From the item scores, an 'observed' score is computed. This is generally found by summing the un-weighted item scores. Criticism and controversy: Personality versus social factors In the 1960s and 1970s some psychologists dismissed the whole idea of personality, considering much behaviour to be context-specific. This idea was supported by the fact that personality often does not predict behaviour in specific contexts. However, more extensive research has shown that when behaviour is aggregated across contexts, personality can be a good predictor of behaviour. Almost all psychologists now acknowledge that both social and individual difference factors (i.e., personality) influence behaviour. The debate is currently more around the relative importance of each of these factors and how these factors interact. Criticism and controversy: Respondent faking One problem with self-report measures of personality is that respondents are often able to distort their responses. Several meta-analyses show that people are able to substantially change their scores on personality tests when such tests are taken under high-stakes conditions, such as part of a job selection procedure. Work in experimental settings has also shown that when student samples have been asked to deliberately fake on a personality test, they clearly demonstrated that they are capable of doing so. Criticism and controversy: Hogan, Barrett and Hogan (2007) analyzed data of 5,266 applicants who did a personality test based on the Big Five. At the first application the applicants were rejected. After six months the applicants reapplied and completed the same personality test. The answers on the personality tests were compared and there was no significant difference between the answers. Criticism and controversy: So in practice, most people do not significantly distort. Nevertheless, a researcher has to be prepared for such possibilities. Also, sometimes participants think that test results are more valid than they really are because they like the results that they get. People want to believe that the positive traits that the test results say they possess are in fact present in their personality. This leads to distorted results of people's sentiments on the validity of such tests. Criticism and controversy: Several strategies have been adopted for reducing respondent faking. One strategy involves providing a warning on the test that methods exist for detecting faking and that detection will result in negative consequences for the respondent (e.g., not being considered for the job). Forced choice item formats (ipsative testing) have been adopted which require respondents to choose between alternatives of equal social desirability. Social desirability and lie scales are often included which detect certain patterns of responses, although these are often confounded by true variability in social desirability.
Criticism and controversy: More recently, Item Response Theory approaches have been adopted with some success in identifying item response profiles that flag fakers. Other researchers are looking at the timing of responses on electronically administered tests to assess faking. While people can fake, in practice they seldom do so to any significant level. To successfully fake means knowing what the ideal answer would be. Even with something as simple as assertiveness, people who are unassertive and try to appear assertive often endorse the wrong items. This is because unassertive people confuse assertion with aggression, anger, oppositional behavior, etc. Criticism and controversy: Psychological research Research on the importance of personality and intelligence in education shows evidence that when others provide the personality rating, rather than providing a self-rating, the outcome is nearly four times more accurate for predicting grades. Criticism and controversy: Additional applications The MBTI questionnaire is a popular tool for people to use as part of self-examination or to find a shorthand to describe how they relate to others in society. It is well known from its widespread adoption in hiring practices, but popular among individuals for its focus exclusively on positive traits and "types" with memorable names. Some users of the questionnaire self-identify by their personality type on social media and dating profiles. Due to the publisher's strict copyright enforcement, many assessments come from free websites which provide modified tests based on the framework. Unscientific personality type quizzes are also a common form of entertainment. In particular, Buzzfeed became well known for publishing user-created quizzes, with personality-style tests often based on deciding which pop culture character or celebrity the user most resembles. Criticism and controversy: Dangers There is a privacy concern in forcing applicants to reveal private thoughts and feelings through their responses, which effectively becomes a condition of employment. Another danger is the illegal discrimination of certain groups under the guise of a personality test. In addition to the risks of personality test results being used outside of an appropriate context, they can give inaccurate results when conducted incorrectly. In particular, ipsative personality tests are often misused in recruitment and selection, where they are mistakenly treated as if they were normative measures. Effects of technological advancements on the field: New technological advancements are increasing the possible ways that data can be collected and analyzed, and broadening the types of data that can be used to reliably assess personality. Although qualitative assessments of job-applicants' social media have existed for nearly as long as social media itself, many scientific studies have successfully quantified patterns in social media usage into various metrics to assess personality quantitatively. Smart devices, such as smart phones and smart watches, are also now being used to collect data in new ways and in unprecedented quantities. Also, brain scan technology has dramatically improved and is now being developed to analyze the personalities of individuals with a high degree of accuracy. Aside from the advancing data collection methods, data processing methods are also improving rapidly. Strides in big data and pattern recognition in enormous databases (data mining) have allowed for better data analysis than ever before.
Also, this allows for the analysis of large amounts of data that were difficult or impossible to reliably interpret before (for example, from the internet). There are other areas of current work too, such as gamification of personality tests to make the tests more interesting and to lower the effects of psychological phenomena that skew personality assessment data. With new data collection methods come new ethical concerns, such as the analysis of a person's public data to make assessments about their personality, and the question of when consent is needed. Examples of personality tests: The first modern personality test was the Woodworth Personal Data Sheet, which was first used in 1919. It was designed to help the United States Army screen out recruits who might be susceptible to shell shock. The Rorschach inkblot test was introduced in 1921 as a way to determine personality by the interpretation of inkblots. The Thematic Apperception Test was commissioned by the Office of Strategic Services (O.S.S.) in the 1930s to identify personalities that might be susceptible to being turned by enemy intelligence. Examples of personality tests: The Minnesota Multiphasic Personality Inventory was published in 1942 as a way to aid in assessing psychopathology in a clinical setting. It can also be used to assess the Personality Psychopathology Five (PSY-5), which are similar to the Five Factor Model (FFM; or Big Five personality traits). These five scales on the MMPI-2 include aggressiveness, psychoticism, disconstraint, negative emotionality/neuroticism, and introversion/low positive emotionality. Examples of personality tests: The Myers–Briggs Type Indicator (MBTI) is a questionnaire designed to measure psychological preferences in how people perceive the world and make decisions. This 16-type indicator test is based on Carl Jung's Psychological Types, developed during World War II by Isabel Myers and Katharine Briggs. The 16-type indicator includes a combination of Extroversion-Introversion, Sensing-Intuition, Thinking-Feeling and Judging-Perceiving. The MBTI utilizes two opposing behavioral divisions on four scales that yield a "personality type". Examples of personality tests: The OAD Survey is an adjective word list designed to measure seven work related personality traits and job behaviors: Assertiveness-Compliance, Extroversion-Introversion, Patience-Impatience, Detail-Broad, High Versatility-Low Versatility, Low Emotional IQ-High Emotional IQ, Low Creativity-High Creativity. It was first published in 1990 with periodic norm revisions to assure scale validity, reliability, and non-bias. The Keirsey Temperament Sorter, developed by David Keirsey, is influenced by Isabel Myers' sixteen types and Ernst Kretschmer's four types. The True Colors Test, developed by Don Lowry in 1978, is based on the work of David Keirsey in his book, Please Understand Me, as well as the Myers-Briggs Type Indicator, and provides a model for understanding personality types using the colors blue, gold, orange and green to represent four basic personality temperaments. Examples of personality tests: The 16PF Questionnaire (16PF) was developed by Raymond Cattell and his colleagues in the 1940s and 1950s in a search to try to discover the basic traits of human personality using scientific methodology. The test was first published in 1949, and is now in its 5th edition, published in 1994. It is used in a wide variety of settings for individual and marital counseling, career counseling and employee development, in educational settings, and for basic research.
Examples of personality tests: The EQSQ Test, developed by Simon Baron-Cohen and Sally Wheelwright, centers on the empathizing-systemizing theory of the male versus the female brain types. The Personality and Preference Inventory (PAPI), originally designed by Dr Max Kostick, Professor of Industrial Psychology at Boston State College, in Massachusetts, USA, in the early 1960s, evaluates the behaviour and preferred work styles of individuals. The Strength Deployment Inventory, developed by Elias Porter in 1971, is based on his theory of Relationship Awareness. Porter was the first known psychometrician to use colors (Red, Green and Blue) as shortcuts to communicate the results of a personality test. The Newcastle Personality Assessor (NPA), created by Daniel Nettle, is a short questionnaire designed to quantify personality on five dimensions: Extraversion, Neuroticism, Conscientiousness, Agreeableness, and Openness. The DISC assessment is based on the research of William Moulton Marston and later work by John Grier, and identifies four personality types: Dominance; Influence; Steadiness and Conscientiousness. It is used widely in Fortune 500 companies, and for-profit and non-profit organizations. The Winslow Personality Profile measures 24 traits on a decile scale. It has been used in the National Football League, the National Basketball Association, the National Hockey League and every draft choice for Major League Baseball for the last 30 years and can be taken online for personal development. Other personality tests include the Forté Profile, Millon Clinical Multiaxial Inventory, Eysenck Personality Questionnaire, Swedish Universities Scales of Personality, Edwin E. Wagner's The Hand Test, and the Enneagram of Personality. The HEXACO Personality Inventory – Revised (HEXACO PI-R) is based on the HEXACO model of personality structure, which consists of six domains: the five domains of the Big Five model, as well as the domain of Honesty-Humility. The Personality Inventory for DSM-5 (PID-5) was developed in September 2012 by the DSM-5 Personality and Personality Disorders Workgroup with regard to a personality trait model proposed for DSM-5. The PID-5 includes 25 maladaptive personality traits as determined by Krueger, Derringer, Markon, Watson, and Skodol. The Process Communication Model (PCM), developed by Taibi Kahler with NASA funding, was used to assist with shuttle astronaut selection. It is now a non-clinical personality assessment, communication and management methodology that is applied to corporate management, interpersonal communications, education, and real-time analysis of call centre interactions, among other uses. Examples of personality tests: The Birkman Method (TBM) was developed by Roger W. Birkman in the late 1940s. The instrument consists of ten scales describing "occupational preferences" (Interests), 11 scales describing "effective behaviors" (Usual behavior) and 11 scales describing interpersonal and environmental expectations (Needs). A corresponding set of 11 scale values was derived to describe "less than effective behaviors" (Stress behavior). TBM was created empirically. The psychological model is most closely associated with the work of Kurt Lewin. Occupational profiling consists of 22 job families with over 200 associated job titles connected to O*Net. Examples of personality tests: The International Personality Item Pool (IPIP) is a public domain set of more than 2000 personality items which can be used to measure many personality variables, including the Five Factor Model.
The Guilford-Zimmerman Temperament Survey examined 10 factors that represented normal personality, and was used both in longitudinal studies and to examine the personality profiles of Italian pilots. Examples of personality tests: Personality tests of the five factor model: There are different measures of the Big Five personality traits. The NEO PI-R, or the Revised NEO Personality Inventory, is one of the most significant measures of the Five Factor Model (FFM). The measure was created by Costa and McCrae and contains 240 items in the form of sentences. Costa and McCrae divided each of the five domains into six facets each, 30 facets total, and changed the way the FFM is measured. Examples of personality tests: The Five-Factor Model Rating Form (FFMRF) was developed by Lynam and Widiger in 2001 as a shorter alternative to the NEO PI-R. The form consists of 30 facets, 6 facets for each of the Big Five factors. The Ten-Item Personality Inventory (TIPI) and the Five Item Personality Inventory (FIPI) are very abbreviated rating forms of the Big Five personality traits. The Five Factor Personality Inventory – Children (FFPI-C) was developed to measure personality traits in children based upon the Five Factor Model (FFM). Examples of personality tests: The Big Five Inventory (BFI), developed by John, Donahue, and Kentle, is a 44-item self-report questionnaire consisting of adjectives that assess the domains of the Five Factor Model (FFM). The 10-Item Big Five Inventory (BFI-10) is a simplified version of the well-established BFI, developed to provide a personality inventory under time constraints. The BFI-10 assesses the five dimensions of the BFI using only two items each to cut down on the length of the BFI. Examples of personality tests: The Semi-structured Interview for the Assessment of the Five-Factor Model (SIFFM) is the only semi-structured interview intended to measure a personality model or personality disorder. The interview assesses the five domains and 30 facets as presented by the NEO PI-R, and it additionally assesses both normal and abnormal extremities of each facet.
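As a concrete illustration of the dimensional (normative) scoring and norming described under Topics above, here is a minimal Python sketch; the item responses, the reverse-keyed item, and the norm mean and standard deviation are illustrative assumptions, not values from any published instrument:

```python
# Sum un-weighted Likert item scores into an observed score, then express it
# against norms as a z score and a sten score (stens have mean 5.5 and SD 2).
responses = {"item1": 4, "item2": 2, "item3": 5, "item4": 1, "item5": 4}  # 1-5 Likert
reverse_keyed = {"item4"}   # items worded in the opposite direction to the trait
scale_max = 5

observed = sum((scale_max + 1 - score) if item in reverse_keyed else score
               for item, score in responses.items())

norm_mean, norm_sd = 15.0, 3.5   # assumed norms from a reference sample
z = (observed - norm_mean) / norm_sd
sten = min(10, max(1, round(5.5 + 2 * z)))

print(observed, round(z, 2), sten)  # 20 1.43 8
```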
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**AGRICOLA** AGRICOLA: AGRICOLA (AGRICultural OnLine Access) is an online database created and maintained by the United States National Agricultural Library of the United States Department of Agriculture. The database serves as the catalog and index for the collections of the United States National Agricultural Library. It also provides public access to information on agriculture and allied fields. Scope: AGRICOLA indexes a wide variety of publications covering agriculture and its allied fields, including "animal and veterinary sciences, entomology, plant sciences, forestry, aquaculture and fisheries, farming and farming systems, agricultural economics, extension and education, food and human nutrition, and earth and environmental sciences." Materials are indexed using terms from the National Agricultural Library Glossary and Thesaurus. Scope: PubAg A related database, PubAg, was released in 2015 and is focused on the full-text publications from USDA scientists, as well as some of the journal literature. PubAg was designed for a broad range of users, including farmers, scientists, scholars, students, and the general public. The distinctions between AGRICOLA and PubAg include: "AGRICOLA serves as the public catalog of the National Agricultural Library. It contains records for all of the holdings of the library. It also contains citations to articles, much like PubAg. AGRICOLA also contains citations to many items that, while valuable and relevant to the agricultural sciences, are not peer-reviewed journal articles. Also, AGRICOLA has a different interface. So, while there is some overlap between the two resources, they are different in significant ways. There are no plans to eliminate AGRICOLA."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Twinjet** Twinjet: A twinjet or twin-engine jet is a jet aircraft powered by two engines. A twinjet is able to fly well enough to land with a single working engine, making it safer than a single-engine aircraft in the event of failure of an engine. Fuel efficiency of a twinjet is better than that of aircraft with more engines. These considerations have led to the widespread use of aircraft of all types with twin engines, including airliners, fixed-wing military aircraft, and others. Aircraft configurations: There are three common configurations of twinjet aircraft. The first, common on large aircraft such as airliners, has a podded engine usually mounted beneath, or occasionally above or within, each wing. The second has one engine mounted on each side of the rear fuselage, close to its empennage, used by many business jets. In the third configuration both engines are within the fuselage, side-by-side, used by most fighters since the 1960s. Later fighters using this configuration include the Su-27 'Flanker', the F-15 Eagle, and the F-22 Raptor. History: The first twinjet to fly was the German fighter prototype Heinkel He 280, flying in April 1941 with a pair of nacelled Heinkel HeS 8 axial-flow turbojets. History: The twinjet configuration was used for short-range narrow-bodied aircraft such as the McDonnell Douglas DC-9 and Boeing 737. The Airbus A300 was initially not successful when first produced as a short-range widebody, as airlines operating the A300 on short-haul routes had to reduce frequencies to try to fill the high-capacity aircraft, and lost passengers to airlines operating more frequent narrow-body flights. However, thanks to the introduction of ETOPS rules that allowed twin-engine jets to fly long-distance routes that were previously off-limits to them, Airbus was able to further develop the A300 as a medium- to long-range airliner, increasing sales; Boeing launched its own widebody twinjet, the Boeing 767, in response. History: In the 1980s the Boeing 727 was discontinued, as its central engine bay would require a prohibitively expensive redesign to accommodate quieter high-bypass turbofans, and it was soon supplanted by twinjets for the narrow-body market; Airbus with the A320, and Boeing with the 757 and updated "classic" variants of the 737. During that decade only McDonnell Douglas continued development of the trijet design with an update to the DC-10, the MD-11, which initially had a range advantage over its closest medium wide-body competitors which were twinjets, the in-production Boeing 767 and Airbus A300/A310. In contrast to McDonnell Douglas sticking with their existing trijet configuration, Airbus (which never produced a trijet aircraft) and Boeing worked on new widebody twinjet designs that would become the Airbus A330 and Boeing 777, respectively. The MD-11's long-range advantage was brief, as it was soon nullified by the Airbus A330-300 and the extended-range Boeing 767-300ER and Boeing 777-200ER. History: The Boeing 737 twinjet stands out as the most produced jet airliner. The Boeing 777X is the world's largest twinjet, and the 777-200LR variant has the world's second longest aircraft range (behind the Airbus A350-900 ULR). Other Boeing twinjets include the 767, 757 (out of production but still in commercial service) and 787. Competitor Airbus produces the A320 family, the A330, and the A350.
History: Some modern commercial airplanes still use four engines (quad-jets), like the Airbus A380 and Boeing 747-8, which are classified as very large aircraft (over 400 seats in mixed-class configurations). Four engines are still used on the largest cargo aircraft capable of transporting outsize cargo, including strategic airlifters. Efficiency: Twin-jets tend to be more fuel-efficient than trijet (three engine) and quad-jet (four engine) aircraft. As fuel efficiency in airliners is a high priority, many airlines have been increasingly retiring trijet and quad-jet designs in favor of twinjets in the twenty-first century. The trijet designs were phased out first, in particular due to the more complicated design and maintenance issues of the middle engine mounted on the stabilizer. Early twinjets were not permitted by ETOPS restrictions to fly long-haul trans-oceanic routes, as it was thought that they were unsafe in the event of failure of one engine, so quad-jets were used. Quad-jets also had higher carrying capacity than comparable earlier twinjets. However, later twinjets such as the Boeing 777, Boeing 787 and Airbus A350 have matched or surpassed older quad-jet designs such as the Boeing 747 and Airbus A340 in these aspects, and twinjets have been more successful in terms of sales than quad-jets. Efficiency: In 2012, Airbus studied a 470-seat twinjet competitor to the 747-8 with lower operating costs, expected between 2023 and 2030; the study was revived after Boeing launched the 777X in November 2013, although then-CEO Fabrice Brégier preferred to focus on product improvement rather than all-new concepts for 10 years. Efficiency: It would have a 10-abreast economy like the 777; its 565 m2 (6,081 sq ft) wing, slightly larger than the 747-8's, would have an 80 m (262 ft) span, as wide as the A380's, for an 892,900 lb (405 t) MTOW compared to 775,000 lb (352 t) for the 777X, with a composite structure for an operating empty weight of 467,400 lb (212 t), and an 8,150 nmi (15,090 km) range at Mach 0.85. ETOPS: When flying far from diversionary airports (so-called ETOPS/LROPS flights), the aircraft must be able to reach an alternate on the remaining engine within a specified time in case of one engine failure. When aircraft are certified according to ETOPS standards, thrust is not an issue, as one of the engines is more than powerful enough to keep the aircraft aloft (see below). Mostly, ETOPS certification involves maintenance and design requirements ensuring that a failure of one engine cannot make the other one fail as well. The engines and related systems need to be independent and (in essence) independently maintained. ETOPS/LROPS is often incorrectly thought to apply only to long overwater flights, but it applies to any flight more than a specified distance from an available diversion airport. Overwater flights near diversion airports need not be ETOPS/LROPS-compliant. ETOPS: Introduction to transoceanic flights Since the 1990s, airlines have increasingly turned from four-engine or three-engine airliners to twin-engine airliners to operate transatlantic and transpacific flight routes. On a nonstop flight from America to Asia or Europe, the long-range aircraft usually follows a great circle route. Hence, in case of an engine failure in a twinjet (like the Boeing 777), the twinjet could make an emergency landing at airfields in Canada, Alaska, eastern Russia, Greenland, Iceland, or the British Isles.
The Boeing 777 has also been approved by the Federal Aviation Administration for flights between North America and Hawaii, which is the world's longest regular airline route with no diversion airports along the way. Other advantages: On large passenger jets, the cost of the engines makes up a significant proportion of the plane's final cost. Each engine also requires separate service, paperwork, and certificates. Having two larger engines as opposed to three or four smaller engines will typically significantly reduce both the purchase and maintenance costs of a plane. Other advantages: Regulations governing the required thrust levels for transport aircraft are typically based upon the requirement that an aircraft be able to continue a takeoff if an engine fails after the takeoff decision speed is reached. Thus, with all engines operating, trijets must be able to produce at least 150% of the minimum thrust required to climb, and quad-jets 133%. Conversely, since a twinjet will lose half of its total thrust if an engine fails, twinjets are required to produce 200% of the minimum thrust required to climb when both engines are operating. Because of this, twinjets typically have higher thrust-to-weight ratios than aircraft with more engines, and are thus able to accelerate and climb faster.
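The engine-out requirement above reduces to simple arithmetic: if the minimum climb thrust must still be available after losing one of n engines, the all-engines-operating thrust must be n/(n − 1) times that minimum. A short sketch (Python) of that calculation:

```python
def required_thrust_fraction(n_engines: int) -> float:
    # The remaining (n - 1) engines must still deliver the minimum climb thrust,
    # so all n engines together must deliver n / (n - 1) times that minimum.
    return n_engines / (n_engines - 1)

for n, name in [(2, "twinjet"), (3, "trijet"), (4, "quad-jet")]:
    print(f"{name}: {required_thrust_fraction(n):.0%} of minimum climb thrust")
# twinjet: 200%, trijet: 150%, quad-jet: 133%
```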
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Generalized gamma distribution** Generalized gamma distribution: The generalized gamma distribution is a continuous probability distribution with two shape parameters (and a scale parameter). It is a generalization of the gamma distribution, which has one shape parameter (and a scale parameter). Since many distributions commonly used for parametric models in survival analysis (such as the exponential distribution, the Weibull distribution and the gamma distribution) are special cases of the generalized gamma, it is sometimes used to determine which parametric model is appropriate for a given set of data. Another example is the half-normal distribution. Characteristics: The generalized gamma distribution has two shape parameters, d > 0 and p > 0, and a scale parameter, a > 0. For non-negative x from a generalized gamma distribution, the probability density function is f(x; a, d, p) = (p / a^d) x^(d−1) e^(−(x/a)^p) / Γ(d/p), where Γ(⋅) denotes the gamma function. The cumulative distribution function is F(x; a, d, p) = γ(d/p, (x/a)^p) / Γ(d/p) = P(d/p, (x/a)^p), where γ(⋅) denotes the lower incomplete gamma function, and P(⋅,⋅) denotes the regularized lower incomplete gamma function. The quantile function can be found by noting that F(x; a, d, p) = G((x/a)^p), where G is the cumulative distribution function of the gamma distribution with parameters α = d/p and β = 1. The quantile function is then given by inverting F using known relations about inverses of composite functions, yielding F^(−1)(q; a, d, p) = a · [G^(−1)(q)]^(1/p), with G^(−1)(q) being the quantile function for a gamma distribution with α = d/p, β = 1. Related distributions: If d = p then the generalized gamma distribution becomes the Weibull distribution. If p = 1 the generalised gamma becomes the gamma distribution. If p = d = 1 then it becomes the exponential distribution. If p = 2 and d = 2m then it becomes the Nakagami distribution. Related distributions: If p = 2 and d = 1 then it becomes a half-normal distribution. Alternative parameterisations of this distribution are sometimes used; for example with the substitution α = d/p. In addition, a shift parameter can be added, so the domain of x starts at some value other than zero. If the restrictions on the signs of a, d and p are also lifted (but α = d/p remains positive), this gives a distribution called the Amoroso distribution, after the Italian mathematician and economist Luigi Amoroso who described it in 1925. Moments: If X has a generalized gamma distribution as above, then E(X^r) = a^r Γ((d + r)/p) / Γ(d/p). Properties: Denote GG(a, d, p) as the generalized gamma distribution with parameters a, d, p. Then, given c and α two positive real numbers, if f ∼ GG(a, d, p), then c·f ∼ GG(c·a, d, p) and f^α ∼ GG(a^α, d/α, p/α). Kullback-Leibler divergence: If f1 and f2 are the probability density functions of two generalized gamma distributions, with parameters (a1, d1, p1) and (a2, d2, p2) respectively, then their Kullback-Leibler divergence is given by D_KL(f1 ‖ f2) = ln[ p1 a2^(d2) Γ(d2/p2) / (p2 a1^(d1) Γ(d1/p1)) ] + [ ψ(d1/p1)/p1 + ln a1 ] (d1 − d2) + [ Γ((d1 + p2)/p1) / Γ(d1/p1) ] (a1/a2)^(p2) − d1/p1, where ψ(⋅) is the digamma function. Software implementation: In the R programming language, there are a few packages that include functions for fitting and generating generalized gamma distributions. The gamlss package in R allows for fitting and generating many different distribution families including the generalized gamma (family=GG). Other options in R, implemented in the package flexsurv, include the function dgengamma, with parameterization μ = ln a + (ln d − ln p)/p, σ = 1/√(p·d), Q = √(p/d), and in the package ggamma with parametrisation a = a, b = p, k = d/p. In the Python programming language, it is implemented in the SciPy package (as scipy.stats.gengamma), with parametrisation c = p, a = d/p, and scale a.
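A short Python check of the formulas above against the SciPy implementation just mentioned; the specific parameter values are arbitrary examples:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as Gamma

a, d, p = 2.0, 3.0, 1.5  # example GG(a, d, p) parameters

# Density from the formula: f(x; a, d, p) = (p / a^d) x^(d-1) exp(-(x/a)^p) / Gamma(d/p)
def gg_pdf(x):
    return (p / a**d) * x**(d - 1) * np.exp(-(x / a)**p) / Gamma(d / p)

# SciPy parametrisation described above: a_scipy = d/p, c = p, scale = a.
dist = stats.gengamma(a=d / p, c=p, scale=a)

x = np.linspace(0.1, 10.0, 50)
print(np.allclose(gg_pdf(x), dist.pdf(x)))                                # True

# Quantile via F^-1(q) = a * [G^-1(q)]^(1/p), with G the CDF of Gamma(d/p, 1).
q = 0.9
print(np.isclose(a * stats.gamma(a=d / p).ppf(q)**(1 / p), dist.ppf(q)))  # True

# r-th moment: E[X^r] = a^r * Gamma((d + r)/p) / Gamma(d/p).
r = 2
print(np.isclose(a**r * Gamma((d + r) / p) / Gamma(d / p), dist.moment(r)))  # True
```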
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tae Soo Do** Tae Soo Do: Tae Soo Do is a name that has been used over the years by both the Taekwondo and the Hwa Rang Do communities. In relation to Taekwondo, it was the name that some major schools in South Korea agreed to call their martial art systems due to reactions to controversies within the Taekwondo communities in the early 1960s. In relation to Hwa Rang Do, Tae Soo Do is the name of their introductory program to help students develop their fundamentals and help prepare them for their training in Hwa Rang Do. Modern day Tae Soo Do/Hwa Rang Do has no connection with Taekwondo and one should not be mistaken for the other. Previous Use in Relation to Taekwondo: In 1961, the name Taekwondo was temporarily dropped by members of the Taekwondo community due to controversies that arose between various schools and practitioners. In response to these controversies, several of the schools chose to change the name of their art to Tae Soo Do, and the Korea Tae Soo Do Association submitted its documentation to the Ministry of Education on September 22, 1961. A general in the Korean military and a prominent member of the Taekwondo community, Choi Hong Hi, was unhappy with this change, and in 1965 he succeeded in changing the name of the art back to Taekwondo with the reformation of the Korean Taekwondo Association. Previous Use in Relation to Taekwondo: Today, the name Tae Soo Do is no longer used by Taekwondo practitioners or schools. Very few people who practice Taekwondo today will recognize the name except for its modern use within the Hwa Rang Do community. In the west, the art was always referred to as "Tae Kwon Do" (or "Taekwondo", the western form of the name), because the controversies that happened in Korea during the early 1960s never made their way over to the United States, aside from among a very select few who chose to watch eastern culture and events closely in order to keep traditions alive and accurate. Modern Use in Relation to Hwa Rang Do: Today, the name Tae Soo Do (Way of the Warrior Spirit, Korean: 태수도; Hanja: 太手道) refers to a martial art program created in 1990 by the World Hwa Rang Do Association as a beginners' program to the martial art system Hwa Rang Do. The World Hwa Rang Do Association decided to use the name because functionally it expressed what they were working to achieve with the students who participate within the program, and since it had been rejected outright by General Choi and the Taekwondo community decades earlier, they didn't see an issue using it.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mura (Japanese term)** Mura (Japanese term): Mura (斑) is a Japanese word meaning "unevenness; irregularity; lack of uniformity; nonuniformity; inequality", and is a key concept in the Toyota Production System (TPS) as one of the three types of waste (muda, mura, muri). Waste in this context refers to the wasting of time or resources rather than wasteful by-products, and should not be confused with waste reduction. Toyota adopted these three Japanese words as part of their product improvement program, due to their familiarity in common usage. Mura (Japanese term): Mura, in terms of business/process improvement, is avoided through Just-In-Time systems, which are based on keeping little or no inventory. These systems supply the production process with the right part, at the right time, in the right amount, using first-in, first-out (FIFO) component flow. Just-In-Time systems create a “pull system” in which each sub-process withdraws its needs from the preceding sub-processes, and ultimately from an outside supplier. When a preceding process does not receive a request or withdrawal it does not make more parts. This type of system is designed to maximize productivity by minimizing storage overhead. Mura (Japanese term): For example: The assembly line “makes a request to,” or “pulls from” the Paint Shop, which pulls from Body Weld. The Body Weld shop pulls from Stamping. At the same time, requests are going out to suppliers for specific parts, for the vehicles that have been ordered by customers. Small buffers accommodate minor fluctuations, yet allow continuous flow. If parts or material defects are found in one process, the Just-in-Time approach requires that the problem be quickly identified and corrected. Implementation: Production leveling, also called heijunka, and frequent deliveries to the customer are key to identifying and eliminating Mura. The use of different types of Kanban to control inventory at different stages in the process is key to ensuring that "pull" is happening between sub-processes. Leveling production, even when different products are produced in the same system, will aid in scheduling work in a standard way that encourages lower costs. Implementation: It is also possible to smooth the workflow by having one operator work across several machines in a process rather than having different operators; in a sense merging several sub-processes under one operator. The fact that there is one operator will force a smoothness across the operations because the workpiece flows with the operator. There is no reason why the several operators cannot all work across these several machines following each other and carrying their workpiece with them. This multiple machine handling is called "multi-process handling" in the Toyota Production System. Another means of detecting and reducing Mura is increasing process standardization - ensuring that all workers understand and can handle each type of request that they come across along a clear, step-by-step protocol. Working to simplify the process as much as possible will also help to drive down the unevenness-generating complexities. Variability detection can also be aided by performance monitoring through histograms and statistical control charts. Limitations, critiques and improvements: Some processes have considerable lead time. Some processes have unusually high costs for waiting or downtime. When this is the case, it is often desirable to try to predict the upcoming demand from a sub-process before pull occurs or a card is generated.
The smoother the process, the more accurately this can be done from analysis of historical experience. Limitations, critiques and improvements: Some processes have asymmetric costs. In such situations, it may be better to err away from the higher-cost error. In this case, there appears to be waste and a higher average error, but the wasted effort and the errors are smaller ones and in aggregate lead to lower costs and more customer value. Limitations, critiques and improvements: For example, consider running a call center. It may be more effective to have low-cost call center operators wait for high-value clients rather than risk losing high-value clients by making them wait. Given the asymmetric cost of these errors - particularly if the processes are not smooth - it may be prudent to have what seems like a surplus of call center operators that appear to be "wasting" call center operator time, rather than commit the higher-cost error of losing the occasional high-value client.
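The pull mechanism described above can be illustrated with a small simulation. The sketch below is a minimal, hypothetical model in which each sub-process holds a small FIFO buffer and only replenishes (by pulling upstream) when a part is withdrawn downstream; the stage names, buffer sizes and part labels are assumptions made for the example, not part of the Toyota Production System itself.

```python
from collections import deque

class Stage:
    """One sub-process (e.g. Stamping, Body Weld, Paint) holding a small FIFO buffer."""
    def __init__(self, name, upstream=None, buffer_size=2):
        self.name = name
        self.upstream = upstream                  # preceding sub-process, or None for raw supply
        self.buffer = deque(maxlen=buffer_size)   # small buffer absorbs minor fluctuations

    def withdraw(self):
        """Downstream pull: hand over one part, then replenish by pulling upstream."""
        if not self.buffer:
            self.replenish()
        part = self.buffer.popleft() if self.buffer else None
        self.replenish()                          # only make/request parts because one was withdrawn
        return part

    def replenish(self):
        while len(self.buffer) < self.buffer.maxlen:
            upstream_part = self.upstream.withdraw() if self.upstream else "raw material"
            if upstream_part is None:
                break
            self.buffer.append(f"{self.name}({upstream_part})")

# Customer orders pull through Assembly <- Paint <- Body Weld <- Stamping.
stamping = Stage("stamp")
body_weld = Stage("weld", upstream=stamping)
paint = Stage("paint", upstream=body_weld)
assembly = Stage("assemble", upstream=paint)

for order in range(3):                            # each order triggers a chain of withdrawals
    print(order, assembly.withdraw())
```

Each withdrawal at the assembly end propagates a request upstream, so no stage produces anything that has not already been asked for, which is the essence of the "pull" described above.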
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lawrence L. Larmore** Lawrence L. Larmore: Lawrence L. Larmore is an American mathematician and theoretical computer scientist. Since 1994 he has been a professor of computer science at the University of Nevada, Las Vegas (UNLV). Larmore developed the package-merge algorithm for the length-limited Huffman coding problem, as well as an algorithm for optimizing paragraph breaking in linear time. He is perhaps best known for his work on competitive analysis of online algorithms, particularly for the k-server problem. His contributions, with his co-author Marek Chrobak, led to the application of T-theory to the server problem. Lawrence L. Larmore: Larmore earned a Ph.D. in Mathematics in the field of algebraic topology from Northwestern University in 1965. He later earned a second Ph.D., this time in Computer Science, in the field of theoretical computer science from the University of California, Irvine. He is a past member of the Institute for Advanced Study in Princeton, New Jersey, and a former Gastwissenschaftler (visiting scholar) at the University of Bonn.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Docket (court)** Docket (court): A docket in the United States is the official summary of proceedings in a court of law. In the United Kingdom in modern times it is an official document relating to the delivery of something, and the term has similar meanings elsewhere. In the late nineteenth century the term referred to a large folio book in which clerks recorded all filings and court proceedings for each case, although use has been documented since 1485. Historical usage: The term originated in England; it was recorded in the form "doggette" in 1485, and later also as doket, dogget(t), docquett, docquet, and docket. The derivation and original sense are obscure, although it has been suggested that it derives from the verb "to dock", in the sense of cutting short (e.g. the tail of a dog or horse); a long document summarised has been docked, or docket using the old spelling. It was long used in England for legal purposes (there was an official called the Clerk of the Dockets in the early nineteenth century), although it has been discontinued in modern English legal usage. Historical usage: Docket was described in The American and English Encyclopedia of Law as a court's summary, digest, or register. A usage note in this 1893 text warns that the terms docket and calendar are not synonymous. A 1910 law dictionary states that the terms trial docket and calendar are synonymous. United States: In the United States, court dockets are considered to be public records, and many public records databases and directories include references to court dockets. Rules of civil procedure often state that the court clerk shall record certain information "on the docket" when a specific event occurs. The Federal Courts use the PACER (Public Access to Court Electronic Records) system to house dockets and documents on all federal civil, criminal and bankruptcy cases, available to the public for a fee. The term is also sometimes used informally to refer to a court calendar, the schedule of the appearances, arguments and hearings scheduled for a court. It may also be used as a metonym to refer to a court's caseload as a whole. Thus, either sense may be intended (depending upon the context) in the frequent use of the phrase "crowded dockets" by legal journalists and commentators. United States: Supreme Court In its meaning as calendar, the docket of the United States Supreme Court is different both in its composition and significance. The justices of the Supreme Court have almost complete discretion over the cases they choose to hear. From the large number of cases which it receives, only 70 to 100 will be placed on the docket. The Solicitor General decides which cases to present on behalf of the federal government. United States: Court docket links Official Supreme Court: United States Supreme Court Calendar. Courts of Appeals: Public Access to Court Electronic Records (PACER) is a system for public access to court records, subject to payment.
United States: Federal Circuit First Circuit Court of Appeals, Court Calendar Second Circuit Court of Appeals, Court Calendar Third Circuit Court of Appeals PACER Fourth Circuit Court of Appeals PACER Fifth Circuit Court of Appeals, Appeals Calendar Sixth Circuit Court of Appeals, Oral Argument Calendar Seventh Circuit Court of Appeals Oral Arguments Seventh Circuit Court of Appeals Court Calendar Eighth Circuit Court of Appeals, Court Calendar Ninth Circuit Court of Appeals Court Calendar, and dockets via PACER Tenth Circuit Court of Appeals; Argument Calendar Eleventh Circuit Court of Appeals "Voluntary filing to begin on 1 January 2012"Federal District Courts IowaNorthern District of Iowa Has Decisions/Verdicts page. Links to PACER Northern District of Iowa Bankruptcy Southern District of Iowa Southern District of Iowa Bankruptcy, Calendars Unofficial Docket Alarm, Inc. provides a search engine and alerts for Federal Court dockets and bankruptcies, as well as providing programmatic access via an API. United States: Inforuptcy.com is a PACER alternative website that provides all queries including dockets for U.S. bankruptcy courts. LegalDockets.com is the oldest docket portal on the internet. In addition to linking to all PACER courts, it also links to all state court dockets.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ellen Hildreth** Ellen Hildreth: Ellen Catherine Hildreth is a professor of computer science at Wellesley College. Her fields are visual perception and computer vision. She co-invented the Marr-Hildreth algorithm along with David Marr. She completed all of her higher education at the Massachusetts Institute of Technology. She earned a Bachelor of Science in Mathematics in 1977, a Master of Science from the Department of Electrical Engineering and Computer Science (EECS) in 1980, and a Ph.D. from EECS in 1983. Her thesis, "The Measurement of Visual Motion", won an Honorable Mention from the Association for Computing Machinery. She is a Fellow of the Association for the Advancement of Artificial Intelligence and the Institute of Electrical and Electronics Engineers. Hildreth is married to Eric Grimson. The couple have two sons.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bismuthide** Bismuthide: The bismuthide ion is Bi3−. Bismuthides are compounds of bismuth with more electropositive elements. They are intermetallic compounds, containing partially metallic and partially ionic bonds. The majority of bismuthides adopt efficient packing arrangements and become densely packed structures, which is a characteristic of intermetallic compounds.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Limb infarction** Limb infarction: A limb infarction is an area of tissue death of an arm or leg. It may cause skeletal muscle infarction, avascular necrosis of bones, or necrosis of a part of or an entire limb. Signs and symptoms: Early symptoms of an arterial embolism in the arms or legs appear as soon as there is ischemia of the tissue, even before any frank infarction has begun. Such symptoms may include: A major presentation of diabetic skeletal muscle infarction is painful thigh or leg swelling. Signs and symptoms: Affected tissues The major tissues affected are nerves and muscles, where irreversible damage starts to occur after 4–6 hours of cessation of blood supply. Skeletal muscle, the major tissue affected, is still relatively resistant to infarction compared to the heart and brain because its ability to rely on anaerobic metabolism by glycogen stored in the cells may supply the muscle tissue long enough for any clot to dissolve, either by intervention or the body's own system for thrombus breakdown. In contrast, brain tissue (in cerebral infarction) does not store glycogen, and the heart (in myocardial infarction) is so specialized on aerobic metabolism that not enough energy can be liberated by lactate production to sustain its needs.Bone is more susceptible to ischemia, with hematopoietic cells usually dying within 2 hours, and other bone cells (osteocytes, osteoclasts, osteoblasts etc.) within 12–20 hours. On the other hand, it has better regenerative capacity once blood supply is reestablished, as the remaining dead inorganic osseous tissue forms a framework upon which immigrating cells can reestablish functional bone tissue in optimal conditions. Causes: Causes include: Thrombosis (approximately 40% of cases) Arterial embolism (approximately 40%) arteriosclerosis obliteransAnother cause of limb infarction is skeletal muscle infarction as a rare complication of long standing, poorly controlled diabetes mellitus. Diagnosis: In addition to evaluating the symptoms described above, angiography can distinguish between cases caused by arteriosclerosis obliterans (displaying abnormalities in other vessels and collateral circulations) from those caused by emboli.Magnetic resonance imaging (MRI) is the preferred test for diagnosing skeletal muscle infarction. Treatment: Oxygen consumption of skeletal muscle is approximately 50 times larger while contracting than in the resting state. Thus, resting the affected limb should delay onset of infarction substantially after arterial occlusion. Treatment: Low molecular weight heparin is used to reduce or at least prevent enlargement of a thrombus, and is also indicated before any surgery. In the legs, below the inguinal ligament, percutaneous aspiration thrombectomy is a rapid and effective way of removing thromboembolic occlusions. Balloon thrombectomy using a Fogarty catheter may also be used. In the arms, balloon thrombectomy is an effective treatment for thromboemboli as well. However, local thrombi from atherosclerotic plaque are harder to treat than embolized ones. If results are not satisfying, another angiography should be performed.Thrombolysis using analogs of tissue plasminogen activator (tPA) may be used as an alternative or complement to surgery. Where there is extensive vascular damage, bypass surgery of the vessels may be necessary to establish other ways to supply the affected parts. 
Treatment: Swelling of the limb may cause inhibited flow by increased pressure, and in the legs (but very rarely in the arms), this may indicate a fasciotomy, opening up all four leg compartments.Because of the high recurrence rates of thromboembolism, it is necessary to administer anticoagulant therapy as well. Aspirin and low molecular weight heparin should be administered, and possibly warfarin as well. Follow-up includes checking peripheral pulses and the arm-leg blood pressure gradient. Prognosis: With treatment, approximately 80% of patients are alive (approx. 95% after surgery) and approximately 70% of infarcted limbs remain vital after 6 months.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electrochromic device** Electrochromic device: An electrochromic device (ECD) controls optical properties such as optical transmission, absorption, reflectance and/or emittance in a continual but reversible manner on application of voltage (electrochromism). This property enables an ECD to be used for applications like smart glass, electrochromic mirrors, and electrochromic display devices. History: The history of electro-coloration goes back to 1704, when Diesbach discovered Prussian blue (hexacyanoferrate), which changes from transparent to blue upon oxidation of its iron. In the 1930s, Kobosew and Nekrassow first noted electrochemical coloration in bulk tungsten oxide. While working at Balzers in Liechtenstein, T. Kraus provided a detailed description of the electrochemical coloration in a thin film of tungsten trioxide (WO3) on 30 July 1953. In 1969, S. K. Deb demonstrated electrochromic coloration in WO3 thin films. Deb observed electrochromic coloration by applying an electric field on the order of 10⁴ V cm⁻¹ across a WO3 thin film. In fact, the real birth of EC technology is usually attributed to S. K. Deb's seminal paper of 1973, wherein he described the coloration mechanism in WO3. Electrochromism occurs due to the electrochemical redox reactions that take place in electrochromic materials. Various types of materials and structures can be used to construct electrochromic devices, depending on the specific applications. Device structure: Electrochromic (sometimes called electrochromatic) devices are one kind of electrochemical cell. The basic structure of an ECD consists of two EC layers separated by an electrolytic layer. The ECD is driven by an external voltage, applied through conducting electrodes on either side of the two EC layers. Electrochromic devices can be categorized into two types depending upon the kind of electrolyte used: laminated ECDs, in which a liquid gel is used, and solid-electrolyte EC devices, in which a solid inorganic or organic material is used. The basic structure of an electrochromic device embodies five superimposed layers on one substrate, or positioned between two substrates in a laminated configuration. In this structure there are three principally different kinds of layered materials in the ECD: The EC layer and ion-storage layer conduct ions and electrons and belong to the class of mixed conductors. The electrolyte is a pure ion conductor and separates the two EC layers. The transparent conductors are pure electron conductors. Optical absorption occurs when electrons move into the EC layers from the transparent conductors along with charge-balancing ions entering from the electrolyte. Device structure: Solid-state devices In solid-state electrochromic devices, a solid inorganic or organic material is used as the electrolyte. Ta2O5 and ZrO2 are the most extensively studied inorganic solid electrolytes. Laminated devices Laminated electrochromic devices contain a liquid gel which is used as the electrolyte. Mode of operation: Typically, ECDs are of two types depending on the mode of device operation, namely the transmission mode and the reflectance mode. In the transmission mode, the conducting electrodes are transparent and control the light intensity passing through them; this mode is used in smart-window applications.
In the reflectance mode, one of the transparent conducting electrodes (TCE) is replaced with a reflective surface like aluminum, gold or silver, which controls the reflected light intensity; this mode is useful in rear-view mirrors of cars and EC display devices. Applications: Smart windows Windows have both direct and indirect impacts on building energy consumption. Electrochromic windows, i.e. windows carrying electrochromic switchable glazes, also known as smart windows, are an energy-efficiency technology for buildings that controls the amount of sunlight passing through. Applications: The solar-optical properties of electrochromic coatings vary over a wide range in response to an applied electrical signal, which can be applied using laboratory techniques such as cyclic voltammetry (CV). Specifically, these smart windows are made of tungsten oxide (WO3). Tungsten oxide is known to be a standard material for electrochromic devices because of its wide optical window, ranging from 400–630 nm, and its prolonged cyclic stability on the order of thousands of cycles. To enhance the electrochromic performance of tungsten oxide coatings, electrochromic coatings are prepared by introducing a small amount of dopamine (DA) into a peroxotungstic acid (PTA) precursor sol to form tungsten complexes on the surface of nanoparticles. This processing method shows promising cyclic stability, lasting up to thirty-five thousand cycles, which is greater than that of regular WO3, since new ligand formation promotes plasmonic tuning in nanoparticle electrochemistry. They can also produce less glare than fritted glass. The efficiency of electrochromic windows depends on the intrinsic properties of the coating, the placement of the coating within a window system, and parameters related to the building they are used for. In addition, electrochromic coating efficiency depends directly on the growth kinetics of the thin-film layers: thinner or uneven coatings give a weaker optical signal, while thicker, more uniform films offer more control and a stronger optical signal. These windows usually contain layers that tint in response to increases in incoming sunlight and that protect from UV radiation. For example, the glass developed by Gesimat has a tungsten oxide layer, a polyvinyl butyral layer and a Prussian blue layer sandwiched by two dual layers of glass and glass coated with fluorine-doped tin oxide. The tungsten oxide and Prussian blue layers form the positive and negative ends of a battery using the incoming light energy. The polyvinyl butyral (PVB) forms the central layer and serves as a polymer electrolyte. This allows for the flow of ions which, in turn, generates a current. Applications: Mirrors Electrochromic mirrors use a combination of optoelectronic sensors and complex electronics that monitor both the ambient light and the intensity of the light shining on the mirror surface. As soon as glare strikes the surface, these mirrors automatically dim the reflections of flashing light from following vehicles at night so that a driver can see them without discomfort. These mirrors, however, only dim relative to the amount of light that shines on them.
Applications: Other displays Electrochromic displays can operate in one of two modes: a reflecting mode, in which light or other radiation striking the surface is redirected, or a transmitting mode, in which light passes through the substrate; the majority of displays operate in a reflective mode. Applications: Even though electrochromic devices are considered to be more "passive", since they do not emit light and need external illumination to function, electrochromic coatings on devices have been proposed for flat panel displays and visual-display units (VDUs). For example, an electrochromic coating was featured on an iPod in the early 2000s, and the Nanochromic screen surpassed that of the original iPod in terms of its fidelity in display quality and screen brightness. Electrochromics have been used for other display applications as well; however, the technology is still somewhat nascent and competes with liquid-crystal displays (LCDs) and their presence in the market. Electrochromic devices do have advantages over materials synthesized to produce LCD-based optoelectronics, such as consuming little to no power to produce an image and similarly little to maintain it, and there is no restriction on the size of such a device, since size depends only on manufacturing capability and the number of electrodes. But they are not regularly used because of their slow response times, τ, estimated from the diffusion relation l = (Dτ)^0.5, i.e. τ ≈ l²/D for an active layer of thickness l. For type I (solution-phase) electrochromic species, the diffusion coefficient is on the order of 10⁻⁷ cm²/s. In comparison, for type III electrochromic species, the diffusion coefficient is on the order of 10⁻¹² cm²/s, which leads to a longer response time, on the order of ten seconds, compared to almost a millisecond when using type I devices. Such electrochromic displays, to be used commercially, need to be optimized at the materials processing and synthesis level to compete with LCDs in advanced display technologies beyond the iPod. Other applications include dynamically tinting goggles and motorcycle helmet visors, and special paper for drawing on with a stylus.
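As a rough illustration of the diffusion-limited response time quoted above, the short sketch below evaluates τ ≈ l²/D for the two diffusion coefficients given in the text; the 100 nm layer thickness is an assumed value chosen only for the example, so the results are order-of-magnitude estimates rather than measured figures.

```python
# Order-of-magnitude estimate of the electrochromic switching time tau ~ l^2 / D,
# rearranged from l = (D*tau)**0.5. The 100 nm active-layer thickness is an
# assumption made for this example; the diffusion coefficients are the orders of
# magnitude quoted in the text for type I and type III electrochromic species.
thickness_cm = 100e-7                       # 100 nm expressed in cm

diffusion_cm2_per_s = {
    "type I  (solution phase)": 1e-7,
    "type III (solid film)":    1e-12,
}

for kind, D in diffusion_cm2_per_s.items():
    tau = thickness_cm ** 2 / D
    print(f"{kind}: tau ~ {tau:.1e} s")   # ~1e-3 s vs ~1e+2 s for this assumed thickness
```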
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lisbon Formation** Lisbon Formation: The Lisbon Formation is a geologic formation in the U.S. state of Georgia. It is predominantly sandstone deposited during the Paleogene Period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tree tunnel** Tree tunnel: A tree tunnel is a road, lane or track where the trees on each side form a more or less continuous canopy overhead, giving the effect of a tunnel. The effect may be achieved in a formal avenue lined with trees or in a more rural setting with randomly placed trees on each side of the route.The British artist David Hockney has painted tree tunnels as a theme, as especially illustrated at a 2012 solo exhibition of his work at the Royal Academy in London, England. The English landscape artist Nick Schlee has used a tree tunnel as subject matter.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FOSD metamodels** FOSD metamodels: Feature-oriented software development (FOSD) is a general paradigm for software generation, where a model of a product line is a tuple of 0-ary and 1-ary functions (program transformations). This page discusses a more abstract concept of models of product lines of product lines (PL**2) called metamodels, and product lines of product lines of product lines called meta-metamodels (PL**3), and further abstract concepts. Metamodels: A metamodel is a model whose instances are models. A GenVoca model of a product line is a tuple whose components are features (0-ary or 1-ary functions). An extension (a.k.a. delta or refinement) of a model is a "meta-feature", which is a tuple of deltas that can modify an existing product line by modifying existing features and adding new features. As a simple example, consider GenVoca model M that contains three features a-c: M = [a, b, c]. Suppose meta-model MM contains three meta-features AAA-CCC, each of which is a tuple with a single non-identity feature: MM = [AAA, BBB, CCC] = [[a,0,0], [0,b,0], [0,0,c]], where 0 is the null feature. Model M is constructed by adding the meta-features of MM, where + is the composition operation (see FOSD). Metamodels: M = AAA + BBB + CCC (expression) = [a,0,0] + [0,b,0] + [0,0,c] (substitution) = [a+0+0, 0+b+0, 0+0+c] (composition) = [a, b, c] (simplification), where 0 + x = x + 0 = x. MM models a product line of product lines (PL**2). That is, different MM expressions correspond to GenVoca models of different product lines.
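A minimal sketch of the idea, assuming a simple encoding in which features are functions, the null feature 0 is the identity, and + composes tuples component-wise; the feature names and the list-of-strings program representation are illustrative assumptions, not part of the GenVoca formalism itself.

```python
# Hypothetical sketch: features as 1-ary functions (program transformations) and the
# null feature 0 as the identity, so that 0 + x = x + 0 = x under composition (+).
identity = lambda program: program                     # the null feature "0"

def compose(*features):
    """Compose features right-to-left, mirroring the + operation on tuples of deltas."""
    def composed(program):
        for f in reversed(features):
            program = f(program)
        return program
    return composed

# Illustrative base features a, b, c of a GenVoca model M = [a, b, c].
a = lambda p: p + ["feature a"]
b = lambda p: p + ["feature b"]
c = lambda p: p + ["feature c"]

# Meta-features of MM, each a tuple with a single non-identity component.
AAA = (a, identity, identity)
BBB = (identity, b, identity)
CCC = (identity, identity, c)

# Adding the meta-features component-wise recovers M = [a, b, c].
M = tuple(compose(x, y, z) for x, y, z in zip(AAA, BBB, CCC))
print(M[0]([]), M[1]([]), M[2]([]))   # -> ['feature a'] ['feature b'] ['feature c']
```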
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Meditative poetry** Meditative poetry: Meditative poetry combines the religious practice of meditation with verse. Buddhist and Hindu writers have developed extensive theories and phase models for meditation (Bevis 1988, 73-88). Meditative poetry: In Christianity, meditation became a major devotional practice during the Middle Ages, closely associated with life in monasteries. Definitions vary, but there were various attempts to distinguish meditation from contemplation. While meditation focuses the mind on a text, preferably from the Bible, contemplation takes a concrete object, such as a candle, to concentrate the thoughts of the mind. Both contemplation and meditation had the same end, to seek unity with God. Meditative poetry: During the Protestant Reformation and Counter-Reformation, Jesuits like Ignatius of Loyola formalized the process of meditation as a channeling of memory, understanding, and will. His method of meditation fell into three main parts: A) prayer and composition of place; B) the examination of points (analysis); C) the colloquies (the dialogue with God as a climax) (Martz 1962, 27-32). Jesuits brought this practice to England (Daly 1978: 72). Calvinists and other Protestants adapted meditation to Bible studies. Meditative poetry: Puritan meditation emphasized self-examination, applying Bible verses to contemporary, everyday life. In 1628, Thomas Taylor wrote a Puritan handbook "Meditation from the Creatures", recommending the inclusion of images from the sensible world (metaphorical of God's glory). In colonial New England, Thomas Hooker defined meditation in "The Souls Preparation for Christ" (1632) as follows: "It is a settled exercise for two ends: first to make a further inquiry of the truth: and secondly, to make the heart affected therewith." In 1648, meditation was made a duty for Puritans, and in 1649/50 Richard Baxter's "The Saints' Everlasting Rest" became the standard Puritan text, at its core prescribing meditation. Like Taylor and Hooker, Baxter admitted the use of the senses; that is, he included contemplation with meditation, based on figural correspondences with the Bible. By including contemplation with meditation, the Puritans laid the foundation for a rich tradition of verse meditation in the USA from its colonial beginnings to the twenty-first century (Daly 1978, 74-76, 79-81; Martz 1962). Meditative poetry: Soon Puritan ministers like Edward Taylor began to write meditations in verse, based on lines from the Bible and on sense perceptions, both allegorical of the greater glory of God. Anne Bradstreet provided the first published meditations purely based on the senses, celebrating nature's beauties as the creation of God. Using the analogy of Nature as God's second book, poetic meditations gradually secularized, replacing the old allegoric technique with a more symbolic reading of nature and affirming the self-reliant individual (Pearce 1961, 42-57). Ralph Waldo Emerson's essay "Nature" (1836) freed the meditation from its theological underpinnings and its reliance on the Bible. He encouraged poets to view nature as a storehouse of symbols that they could use by relying merely on their imagination. Walt Whitman and Emily Dickinson took meditation in this direction and paved the way for Modernist and Postmodernist practices in poetry (Lawson 1994).
Meditative poetry: The method of the three main steps (the composition of place, examination of points, colloquies) had survived into the twentieth century in many poems, as had the devotional practice of verse meditation. Leading modernist poets like T. S. Eliot and Wallace Stevens began to fragmentize the process, blending thoughts and sense perceptions in a sort of spiritual diary (Parini 1993,12). Postmodernist poets like John Ashbery deconstruct the contemplative aspect, the reference of the poem to an object outside itself, dissolving narrative or episodic structures of the spiritual diary in an ironic and open association (Bevis 1988:280-90), and thereby turning the poem itself into the object the reader can use for contemplation or meditation. Meditative poetry has often been correlated to Relaxation Through Poetry, which is simply using poetry to relax or relieve stress whenever someone is in need. It can also be seen in group visualization sessions where a speaker tries to get the audience to forget all about their stress by the use of calm and relaxing poetry. Meditative poetry: Bibliography Bevis, William W. "Mind of Winter. Wallace Stevens, Meditation, and Literature." Pittsburgh: University of Pittsburgh Press, 1988. Daly, Robert. "God's Altar. The World and the Flesh in Puritan Poetry." Berkeley, Los Angeles, London: University of California Press, 1978. Martz, Louis. "The Poetry of Meditation. A Study of English Religious Literature of the Seventeenth Century." 1954. Rev. New Haven: Yale University Press, 1962. Larson, Laura Louise. "The tradition of meditative poetry in America" (January 1, 1994). ETD Collection for the University of Connecticut. Paper AAI9525676. http://digitalcommons.uconn.edu/dissertations/AAI9525676 Parini, Jay ed., "Columbia History of American Poetry." New York: Columbia University Press, 1993. Pearce, Roy Harvey. "The Continuity of American Poetry." Princeton: Princeton University Press, 1961.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sex on the beach** Sex on the beach: A sex on the beach is an alcoholic cocktail containing vodka, peach schnapps, orange juice and cranberry juice. It is an International Bartenders Association Official Cocktail. General types: There are two general types of the cocktail: The IBA official cocktail is made from vodka, peach schnapps, orange juice, and cranberry juice. The 2008 Mr. Boston Official Bartender's Guide (67th edition) provides an alternative recipe made from vodka, Chambord, Midori Melon Liqueur, pineapple juice, and cranberry juice.The drink is built over ice in a highball glass and garnished with an orange slice. Sometimes they are mixed in smaller amounts and served as a shooter. Variations: Some derivative variations have their own names: A "sex in the driveway" is a sex on the beach with orange juice and cranberry juice replaced with blue curaçao and Sprite. A "woo woo" is a sex on the beach without orange juice. The alcohol-free variation is sometimes referred to as "safe sex on the beach", "cuddles on the beach", or "virgin(s) on the beach".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dysphoric milk ejection reflex** Dysphoric milk ejection reflex: Dysphoric milk ejection reflex (D-MER) is a condition in which women who breastfeed develop negative emotions that begin just before the milk ejection reflex and last less than a few minutes. It is different from postpartum depression, breastfeeding aversion response (BAR), or a dislike of breastfeeding. It has been described anecdotally many times, yet one of the earliest case studies on the condition was only published in 2011, and not much research was done prior to that. Even in 2021 when the first review of published literature was done the authors noted that health care providers were still "barely [able to] recognize D-MER."The feelings described may also occur in women who are not currently, or never have been, breastfeeding. In these cases, stimulation of the nipples produces a similar, dysphoric feeling as described by women with a condition identified as D-MER. A link between local dopamine blockage and the precise location of AMPA-glutamate blockage in the nucleus accumbens, and the subsequent experience of stimuli as negative or positive has been researched but not confirmed as the cause of D-MER and related conditions. Signs and symptoms: The lactating woman develops a brief period of dysphoria that begins just prior to the milk ejection reflex and continues for not more than several minutes. It may recur with every milk release, any single release, or only with the initial milk release at each feeding. D-MER always presents as an emotional reaction but may also produce a hollow or churning feeling in the pit of the stomach, nausea, restlessness, and/or general unease. When experiencing D-MER, mothers may report any of a spectrum of different unpleasant emotions, ranging from depression to anxiety to anger. Each of these emotions can be felt at a different level of intensity. Diagnosis: D-MER does not appear to be a psychological response to breastfeeding. It is possible for women to have psychological responses to breastfeeding, but D-MER gives evidence of being a physiological reflex. D-MER is not postpartum depression or a postpartum mood disorder. A woman can have D-MER and PPD, but they are separate conditions and the common treatments for PPD do not treat D-MER. The majority of women with D-MER report no other mood disorders. D-MER is not the "breastfeeding aversion response (BAR)" that can happen to some when continuing to nurse while pregnant. Breastfeeding aversion response occurs upon nipple contact when nursing whereas D-MER is triggered by the let-down reflex, even if it is several minutes after latching. Management: There is no product that is medically approved to treat D-MER. It has been hypothesized that efforts to raise dopamine may help, and anecdotal evidence encourages a healthy diet limiting caffeine intake and adding supplements. Emotional support Awareness, understanding, and education appear to be important. Many people with D-MER rate their D-MER much worse prior to learning what is causing their feelings. Once a mother understands that she is not alone in her condition and realizes it is a physiological condition she seems to be much less likely to wean prematurely. History: The first documented reference to a hormonally based negative emotional reaction while breastfeeding was found online in a forum in June 2004. Prior to the launch of D-MER.org the phenomenon was unknown, unnamed, misunderstood and rarely mentioned or talked about. 
The term dysphoric milk ejection reflex (D-MER) came from Alia Macrina Heise who described it in 2007. It was chosen due to the emotional reaction (dysphoria) to milk let-down (milk ejection reflex). The "milk ejection reflex" is abbreviated among lactation professionals and referred to as the M-E-R. In 2008 a team of lactation consultants, headed up by Diane Wiessinger, worked together and consulted with other medical professionals to do a preliminary investigation to better understand D-MER. Case reports and case series have been published on the topic. A 2019 study reported a prevalence rate of 9.1%. An October 2021 review of literature published to that date reported: Due to poor public awareness of D-MER and the scarcity of evidence-based literature, many mothers may mistake D-MER for postpartum depression especially given its atypical symptomatic manifestations, and lactation practitioners and health care providers may also barely recognize D-MER. Another challenge in the management of D-MER is that mental health professionals may lack knowledge about lactation or training in lactation management. This makes it necessary to educate mothers because educated mothers are usually better at handling postpartum situations if they are prepared in advance.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Capsular-polysaccharide endo-1,3-alpha-galactosidase** Capsular-polysaccharide endo-1,3-alpha-galactosidase: Capsular-polysaccharide endo-1,3-α-galactosidase (EC 3.2.1.87, polysaccharide depolymerase, capsular polysaccharide galactohydrolase) is an enzyme with systematic name Aerobacter-capsular-polysaccharide galactohydrolase. It catalyses random hydrolysis of (1→3)-α-D-galactosidic linkages in Aerobacter aerogenes capsular polysaccharide. It hydrolyses the galactosyl-α-1,3-D-galactose linkages only in the complex substrate, bringing about depolymerization.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Baseball (card game)** Baseball (card game): Baseball (or in some early editions, "Batter-Up Baseball") is a card game simulating the sport of baseball, played with special cards and a diagram of a baseball diamond. The game was created by Ed-u-Cards Manufacturing Corporation, New York. The deck: The deck consists of 36 cards representing a variety of base hits, (mostly singles, but only one home run), balks, stolen bases, a hit-by-pitcher, balls, strikes, and a variety of outs. The deck: A typical deck from the late 1950s or early 1960s consists of: 10 balls 10 strikes 2 foul balls 2 fly outs 1 foul out 2 singles 2 doubles 1 triple 1 home run 1 balk 1 stolen base 1 hit-by-pitcherEarlier decks omitted the balk, stolen base, and hit-by-pitcher, in favor of an additional ball, an additional double play, and an additional single. The deck: In some editions from the 1960s, strikes and outs are color-coded orange, balls green, and all cards that advance a runner, blue, while in late-1950s editions, strikes and outs are green, balls blue, and cards advancing a runner, red. The cards are illustrated with line drawings of the action represented by the card; in the 1960s, a New York Mets edition included Mr. Met as the principal figure in the illustrations, and a Mets logo as the back design. The play: The game is playable by any arbitrary number of players (the box stating that it "can be played by 1 to 9 players"). The cards are not dealt; instead, whichever player is "at bat" turns over cards from a freshly shuffled deck until put out three times, following the actions named on the cards: Strikes are collected, with three strikes becoming an out (and clearing any collected balls). The play: Balls are collected, with four balls becoming a walk (and clearing any collected strikes). Outs are collected (with each out clearing strikes and balls) until the third out ends the player's turn. If the out is marked "double play at first," and a double play at first is possible, then it counts as a double play. Base hits, walks, and hit-by-pitcher are placed on the diamond diagram, with any cards already on the diagram advancing appropriately. A base hit clears the balls and strikes. Balks and stolen bases advance runners according to the instructions on the card, and actual baseball rules.An inning consists of each player getting a turn "at bat" for three outs; a game consists of nine innings. Scoring is as in an actual baseball game. Availability: This game had a limited print run, but decks of varying vintage can be found online. Or one could improvise a deck from the same "VisEd" cards traditionally used in the game of 1000 Blank White Cards.
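The turn structure just described lends itself to a short simulation. The sketch below is a simplified, hypothetical model of one player's half-inning using the late-1950s/early-1960s deck composition listed above; base-running, double plays, foul-ball handling and scoring are deliberately omitted, so it only illustrates how balls, strikes and outs accumulate and reset.

```python
import random

# Simplified sketch of one player's turn "at bat" (a half-inning) using the
# late-1950s / early-1960s deck composition listed above. Foul balls, balks,
# stolen bases, base-running and scoring are ignored in this illustration.
DECK = (["ball"] * 10 + ["strike"] * 10 + ["foul ball"] * 2 + ["fly out"] * 2 +
        ["foul out"] + ["single"] * 2 + ["double"] * 2 + ["triple"] + ["home run"] +
        ["balk"] + ["stolen base"] + ["hit by pitcher"])

def half_inning(seed=None):
    rng = random.Random(seed)
    deck = DECK[:]
    rng.shuffle(deck)
    balls = strikes = outs = 0
    events = []
    while outs < 3 and deck:
        card = deck.pop()
        if card == "strike":
            strikes += 1
            if strikes == 3:                       # three strikes become an out, clearing balls
                outs, balls, strikes = outs + 1, 0, 0
                events.append("strikeout")
        elif card == "ball":
            balls += 1
            if balls == 4:                         # four balls become a walk, clearing strikes
                balls, strikes = 0, 0
                events.append("walk")
        elif card in ("fly out", "foul out"):      # each out clears balls and strikes
            outs, balls, strikes = outs + 1, 0, 0
            events.append(card)
        elif card in ("single", "double", "triple", "home run", "hit by pitcher"):
            balls = strikes = 0                    # a base hit clears the balls and strikes
            events.append(card)
        # foul balls, balks and stolen bases are skipped in this simplified sketch
    return events

print(half_inning(seed=1))
```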
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stuart Pocock** Stuart Pocock: Stuart J. Pocock is a British medical statistician. He has been professor of medical statistics at the London School of Hygiene and Tropical Medicine since 1989. His research interests include statistical methods for the design, monitoring, analysis and reporting of randomized clinical trials. He also collaborates on major clinical trials, particularly in cardiovascular disease.In 2003, the Royal Statistical Society awarded him the Bradford Hill Medal "for his development of clinical trials methodology, including group sequential methods, his extensive applied work, notably in the epidemiology and treatment of heart disease, and his exposition of good practice nationally and internationally, especially through his book Clinical Trials: a Practical Approach and through his service on influential government committees." Books: Pocock, Stuart (1983). Clinical Trials: A Practical Approach. Wiley-Blackwell. ISBN 978-0-471-90155-6. Pitt, Bertram; Julian, Desmond Gareth; Pocock, Stuart J., eds. (1997). Clinical Trials in Cardiology. W. B. Saunders. ISBN 978-0-7020-2156-5.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2MASS J21392676+0220226** 2MASS J21392676+0220226: 2MASS J21392676+0220226 (or CFBDS J213926+02202) is a brown dwarf located 34 light-years (10 parsecs) from Earth in the constellation Aquarius. Its surface is thought to be host to a massive storm, resulting in large variability of its color. It is a member of the Carina-Near moving group. This brown dwarf was discovered in the Two Micron All-Sky Survey (2MASS). Once thought to be a binary object based on a 2010 study, it has since been shown to in fact be single.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**O-aminophenol oxidase** O-aminophenol oxidase: o-Aminophenol oxidase (EC 1.10.3.4, isophenoxazine synthase, o-aminophenol:O2 oxidoreductase, 2-aminophenol:O2 oxidoreductase, GriF) is an enzyme with systematic name 2-aminophenol:oxygen oxidoreductase. This enzyme catalyses the following chemical reaction:
2 o-aminophenol + O2 + acceptor ⇌ 2-aminophenoxazin-3-one + reduced acceptor + 2 H2O (overall reaction)
(1a) 2 2-aminophenol + O2 ⇌ 2 6-iminocyclohexa-2,4-dienone + 2 H2O
(1b) 2 6-iminocyclohexa-2,4-dienone + acceptor ⇌ 2-aminophenoxazin-3-one + reduced acceptor (spontaneous)
o-Aminophenol oxidase is a flavoprotein.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hexabenzocoronene** Hexabenzocoronene: Hexa-peri-hexabenzocoronene (HBC) is a polycyclic aromatic hydrocarbon with the molecular formula C42H18. It consists of a central coronene molecule, with an additional benzene ring fused between each adjacent pair of rings around the periphery. It is sometimes simply called hexabenzocoronene, however, there are other chemicals that share this less-specific name, such as hexa-cata-hexabenzocoronene. Hexa-peri-hexabenzocoronene has been imaged by atomic force microscopy (AFM) providing the first example of a molecule in which differences in bond order and bond lengths of the individual bonds can be distinguished by a measurement in direct space. Supramolecular structures: Various hexabenzocoronenes have been investigated in supramolecular electronics. They are known to self-assemble into a columnar phase. One derivative in particular forms carbon nanotubes with interesting electrical properties. The columnar phase in this compound further organises itself into sheets, which ultimately roll up like a carpet to form multi-walled nanotubes with an outer diameter of 20 nanometers and a wall thickness of 3 nm. In this geometry, the stacks of coronene disks are aligned with the length of the tube. The nanotubes have sufficient length to fit between two platinum nanogap electrodes produced by scanning probe nanofabrication and are 180 nm apart. The nanotubes as such are insulating, but, after one-electron oxidation with nitrosonium tetrafluoroborate (NOBF4), they conduct electricity. The structure containing three C-H groups on one benzene ring, so-called TRIO, was analyzed by infrared spectroscopy. Supramolecular structures: Synthesis Organic synthesis of a hexabenzocoronene starts with an Aldol condensation reaction of dibenzyl ketone with a benzil derivative to give a substituted cyclopentadienone. A Diels–Alder reaction with alkyne and subsequent expulsion of carbon monoxide gives a hexaphenylbenzene. The adjacent pairs of benzene rings undergo oxidative electrocyclic reactions and aromatization by oxidation by iron(III) chloride in nitromethane.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Geometric Exercises in Paper Folding** Geometric Exercises in Paper Folding: Geometric Exercises in Paper Folding is a book on the mathematics of paper folding. It was written by Indian mathematician T. Sundara Row, first published in India in 1893, and later republished in many other editions. Its topics include paper constructions for regular polygons, symmetry, and algebraic curves. According to historian of mathematics Michael Friedman, it became "one of the main engines of the popularization of folding as a mathematical activity". Publication history: Geometric Exercises in Paper Folding was first published by Addison & Co. in Madras in 1893. The book became known in Europe through a remark of Felix Klein in his book Vorträge über ausgewählte Fragen der Elementargeometrie (1895) and its translation Famous Problems Of Elementary Geometry (1897). Based on the success of Geometric Exercises in Paper Folding in Germany, the Open Court Press of Chicago published it in the US, with updates by Wooster Woodruff Beman and David Eugene Smith. Although Open Court listed four editions of the book, published in 1901, 1905, 1917, and 1941, the content did not change between these editions. The fourth edition was also published in London by La Salle, and both presses reprinted the fourth edition in 1958.The contributions of Beman and Smith to the Open Court editions have been described as "translation and adaptation", despite the fact that the original 1893 edition was already in English. Beman and Smith also replaced many footnotes by references to their own work, replaced some of the diagrams by photographs, and removed some remarks specific to India. In 1966, Dover Publications of New York published a reprint of the 1905 edition, and other publishers of out-of-copyright works have also printed editions of the book. Topics: Geometric Exercises in Paper Folding shows how to construct various geometric figures using paper-folding in place of the classical Greek Straightedge and compass constructions.The book begins by constructing regular polygons beyond the classical constructible polygons of 3, 4, or 5 sides, or of any power of two times these numbers, and the construction by Carl Friedrich Gauss of the heptadecagon, it also provides a paper-folding construction of the regular nonagon, not possible with compass and straightedge. The nonagon construction involves angle trisection, but Rao is vague about how this can be performed using folding; an exact and rigorous method for folding-based trisection would have to wait until the work in the 1930s of Margherita Piazzola Beloch. The construction of the square also includes a discussion of the Pythagorean theorem. The book uses high-order regular polygons to provide a geometric calculation of pi.A discussion of the symmetries of the plane includes congruence, similarity, and collineations of the projective plane; this part of the book also covers some of the major theorems of projective geometry including Desargues's theorem, Pascal's theorem, and Poncelet's closure theorem.Later chapters of the book show how to construct algebraic curves including the conic sections, the conchoid, the cubical parabola, the witch of Agnesi, the cissoid of Diocles, and the Cassini ovals. 
The book also provides a gnomon-based proof of Nicomachus's theorem that the sum of the first n cubes is the square of the sum of the first n integers, and material on other arithmetic series, geometric series, and harmonic series.There are 285 exercises, and many illustrations, both in the form of diagrams and (in the updated editions) photographs. Influences: Tandalam Sundara Row was born in 1853, the son of a college principal, and earned a bachelor's degree at the Kumbakonam College in 1874, with second-place honours in mathematics. He became a tax collector in Tiruchirappalli, retiring in 1913, and pursued mathematics as an amateur. As well as Geometric Exercises in Paper Folding, he also wrote a second book, Elementary Solid Geometry, published in three parts from 1906 to 1909.One of the sources of inspiration for Geometric Exercises in Paper Folding was Kindergarten Gift No. VIII: Paper-folding. This was one of the Froebel gifts, a set of kindergarten activities designed in the early 19th century by Friedrich Fröbel. The book was also influenced by an earlier Indian geometry textbook, First Lessons in Geometry, by Bhimanakunte Hanumantha Rao (1855–1922). First Lessons drew inspiration from Fröbel's gifts in setting exercises based on paper-folding, and from the book Elementary Geometry: Congruent Figures by Olaus Henrici in using a definition of geometric congruence based on matching shapes to each other and well-suited for folding-based geometry.In turn, Geometric Exercises in Paper Folding inspired other works of mathematics. A chapter in Mathematische Unterhaltungen und Spiele [Mathematical Recreations and Games] by Wilhelm Ahrens (1901) concerns folding and is based on Rao's book, inspiring the inclusion of this material in several other books on recreational mathematics. Other mathematical publications have studied the curves that can be generated by the folding processes used in Geometric Exercises in Paper Folding. In 1934, Margherita Piazzola Beloch began her research on axiomatizing the mathematics of paper-folding, a line of work that would eventually lead to the Huzita–Hatori axioms in the late 20th century. Beloch was explicitly inspired by Rao's book, titling her first work in this area "Alcune applicazioni del metodo del ripiegamento della carta di Sundara Row" ["Several applications of the method of folding a paper of Sundara Row"]. Audience and reception: The original intent of Geometric Exercises in Paper Folding was twofold: as an aid in geometry instruction, and as a work of recreational mathematics to inspire interest in geometry in a general audience. Edward Mann Langley, reviewing the 1901 edition, suggested that its content went well beyond what should be covered in a standard geometry course. And in their own textbook on geometry using paper-folding exercises, The First Book of Geometry (1905), Grace Chisholm Young and William Henry Young heavily criticized Geometric Exercises in Paper Folding, writing that it is "too difficult for a child, and too infantile for a grown person". However, reviewing the 1966 Dover edition, mathematics educator Pamela Liebeck called it "remarkably relevant" to the discovery learning techniques for geometry instruction of the time, and in 2016 computational origami expert Tetsuo Ida, introducing an attempt to formalize the mathematics of the book, wrote "After 123 years, the significance of the book remains."
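For reference, Nicomachus's theorem mentioned in the Topics section above can be stated compactly as:

```latex
\sum_{k=1}^{n} k^{3} \;=\; \Bigl(\sum_{k=1}^{n} k\Bigr)^{2} \;=\; \left(\frac{n(n+1)}{2}\right)^{2}
```

For example, with n = 3: 1 + 8 + 27 = 36 = (1 + 2 + 3)².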
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jungle computing** Jungle computing: Jungle computing is a form of high performance computing that distributes computational work across cluster, grid and cloud computing. The increasing complexity of the high performance computing environment has provided a range of choices besides traditional supercomputers and clusters. Scientists can now use grid and cloud infrastructures, in a variety of combinations along with traditional supercomputers - all connected via fast networks. The emergence of many-core technologies such as GPUs, as well as supercomputers on a chip, within these environments has added to the complexity. Thus, high-performance computing can now use multiple diverse platforms and systems simultaneously, giving rise to the term "computing jungle".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Solar (Spanish term)** Solar (Spanish term): In Spanish urban development a solar is a plot of land that meets minimum conditions to be built on and developed properly according to existing land use regulations. These conditions relate primarily to water supply and access to the electrical grid, disposal or purification of wastewater and road access. The specific characteristics required for such a plot to be considered a "solar" are set for each Spanish Autonomous Region based on these criteria. During the Spanish colonization of the Americas, the solar was one of the basic units into which cities were divided; solares were assigned when a new settlement was founded.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intradural pseudoaneurysm** Intradural pseudoaneurysm: Intradural pseudoaneurysm is a broad term to describe several subtypes of aneurysms that fundamentally are different from the more typical intracranial berry-type aneurysms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DSQI** DSQI: DSQI (design structure quality index) is an architectural design metric used to evaluate a computer program's design structure and the efficiency of its modules. The metric was developed by the United States Air Force Systems Command. The result of DSQI calculations is a number between 0 and 1. The closer to 1, the higher the quality. It is best used on a comparison basis, i.e., with previous successful projects.
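The entry does not spell out the formula, but indices of this kind are computed as weighted sums of normalized design indicators, yielding a value between 0 and 1 that can be compared across projects. The sketch below is a hypothetical illustration of that general form only; the indicator names, values and weights are placeholders, not the official Air Force Systems Command parameter set.

```python
# Hypothetical illustration only: a DSQI-style index as a weighted sum of normalized
# design indicators, giving a value between 0 and 1. The indicator names and weights
# below are placeholders, not the official DSQI definition.
def design_quality_index(indicators, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * indicators[name] for name in weights)

current_design = {   # each indicator already normalized to the range 0..1
    "program_structure": 0.9,
    "module_independence": 0.7,
    "database_compactness": 0.8,
}
weights = {"program_structure": 0.4, "module_independence": 0.4, "database_compactness": 0.2}

score = design_quality_index(current_design, weights)
print(round(score, 3))   # compare against the score of a previous successful project
```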
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Small nucleolar RNA SNORA1** Small nucleolar RNA SNORA1: In molecular biology, SNORA1 (also known as ACA1) is a member of the H/ACA class of small nucleolar RNA that guide the sites of modification of uridines to pseudouridines.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wheat Crunchies** Wheat Crunchies: Wheat Crunchies are a crisp wheat snack produced by the British snack producer KP Snacks Ltd. They come in several flavours including Spicy Tomato, Crispy Bacon and Cheddar & Onion. A regular multipack bag contains 20g and a normal retail pack contains 30g. History: Wheat Crunchies was acquired by Rowntree Mackintosh in 1982 with their purchase of a 90% stake in Riley's Potato Crisps. In February 1992, Sooner Snacks was bought from Borden Inc. by Dalgety plc, with the company being absorbed into Golden Wonder. In 1995, Golden Wonder underwent a management buyout costing £54.6 million. In 2000, Bridgepoint Capital acquired Golden Wonder for £156 million. It was subsequently sold to United Biscuits in 2006, following the take-over of Golden Wonder by Tayto, and merged into KP Snacks. In December 2012 KP Snacks was sold to Intersnack. History: Re-launch In June 2012, United Biscuits re-launched Wheat Crunchies with a new logo, 'improved taste', a new Cheddar & Onion flavour, and a bigger snack size. Health information: The packet promises that the product is free from any artificial colours or flavours and contains no MSG. An average 25g multipack packet contains: Energy: 516 kJ (123 kcal), Protein: 2.4 g, Carbohydrate: 14.4 g (of which sugars 0.7 g), Fat: 6.3 g (of which saturates 0.9 g), Fibre: 0.8 g, Sodium: 0.2 g.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spatial frequency** Spatial frequency: In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance. The SI unit of spatial frequency is cycles per meter (m⁻¹). In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (mm⁻¹) or equivalently line pairs per mm. Spatial frequency: In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber is defined as the reciprocal of wavelength λ and is commonly denoted by ξ or sometimes ν: ξ = 1/λ. Angular wavenumber k, expressed in radians per meter, is related to the ordinary wavenumber and the wavelength by k = 2πξ = 2π/λ. Visual perception: In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle. Sine-wave gratings also differ from one another in amplitude (the magnitude of difference in intensity between light and dark stripes), and angle. Visual perception: Spatial-frequency theory The spatial-frequency theory refers to the theory that the visual cortex operates on a code of spatial frequency, not on the code of straight edges and lines hypothesised by Hubel and Wiesel on the basis of early experiments on V1 neurons in the cat. In support of this theory is the experimental observation that visual cortex neurons respond even more robustly to sine-wave gratings that are placed at specific angles in their receptive fields than they do to edges or bars. Most neurons in the primary visual cortex respond best when a sine-wave grating of a particular frequency is presented at a particular angle in a particular location in the visual field. (However, as noted by Teller (1984), it is probably not wise to treat the highest firing rate of a particular neuron as having a special significance with respect to its role in the perception of a particular stimulus, given that the neural code is known to be linked to relative firing rates. For example, in color coding by the three cones in the human retina, there is no special significance to the cone that is firing most strongly – what matters is the relative rate of firing of all three simultaneously. Teller (1984) similarly noted that a strong firing rate in response to a particular stimulus should not be interpreted as indicating that the neuron is somehow specialized for that stimulus, since there is an unlimited equivalence class of stimuli capable of producing similar firing rates.) The spatial-frequency theory of vision is based on two physical principles: Any visual stimulus can be represented by plotting the intensity of the light along lines running through it. Visual perception: Any curve can be broken down into constituent sine waves by Fourier analysis. The theory (for which empirical support has yet to be developed) states that in each functional module of the visual cortex, Fourier analysis (or its piecewise form) is performed on the receptive field and the neurons in each module are thought to respond selectively to various orientations and frequencies of sine wave gratings.
When all of the visual cortex neurons that are influenced by a specific scene respond together, the perception of the scene is created by the summation of the various sine-wave gratings. (This procedure, however, does not address the problem of the organization of the products of the summation into figures, grounds, and so on. It effectively recovers the original (pre-Fourier analysis) distribution of photon intensity and wavelengths across the retinal projection, but does not add information to this original distribution. So the functional value of such a hypothesized procedure is unclear. Some other objections to the "Fourier theory" are discussed by Westheimer (2001).) One is generally not aware of the individual spatial frequency components since all of the elements are essentially blended together into one smooth representation. However, computer-based filtering procedures can be used to deconstruct an image into its individual spatial frequency components. Research on spatial frequency detection by visual neurons complements and extends previous research using straight edges rather than refuting it. Further research shows that different spatial frequencies convey different information about the appearance of a stimulus. High spatial frequencies represent abrupt spatial changes in the image, such as edges, and generally correspond to featural information and fine detail. M. Bar (2004) has proposed that low spatial frequencies represent global information about the shape, such as general orientation and proportions. Rapid and specialised perception of faces is known to rely more on low spatial frequency information. In the general population of adults, the threshold for spatial frequency discrimination is about 7%. It is often poorer in dyslexic individuals. Spatial frequency in MRI: When spatial frequency is used as a variable in a mathematical function, the function is said to be in k-space. Two-dimensional k-space has been introduced into MRI as a raw data storage space. The value of each data point in k-space is measured in the unit of 1/meter, i.e. the unit of spatial frequency. Spatial frequency in MRI: It is very common that the raw data in k-space shows features of periodic functions. The periodicity is not spatial frequency, but temporal frequency. An MRI raw data matrix is composed of a series of phase-variable spin-echo signals. Each spin-echo signal is a sinc function of time, which can be described by Spin-Echo = sin(ωr t)/(ωr t), where ωr = ω0 + γ̄ r G. Here γ̄ is the gyromagnetic ratio constant, and ω0 is the basic resonance frequency of the spin. Due to the presence of the gradient G, the spatial information r is encoded onto the frequency ω. The periodicity seen in the MRI raw data is just this frequency ωr, which is basically the temporal frequency in nature. Spatial frequency in MRI: In a rotating frame, ω0 = 0, and ωr simplifies to γ̄ r G. By letting k = γ̄ G t, the spin-echo signal is expressed in an alternative form: Spin-Echo = sin(r k)/(r k). Now the spin-echo signal is in k-space. It becomes a periodic function of k with r as the k-space frequency but not as the "spatial frequency", since "spatial frequency" is reserved for the name of the periodicity seen in the real space r. Spatial frequency in MRI: The k-space domain and the space domain form a Fourier pair. Two pieces of information are found in each domain, the spatial information and the spatial frequency information.
The spatial information, which is of primary interest to medical doctors, appears as periodic functions in the k-space domain and as the image itself in the space domain. The spatial frequency information, which may be of more interest to MRI engineers, is not easily seen in the space domain but appears directly as the data points in the k-space domain.
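The Fourier-pair relation between an image and its spatial frequency content, and the filtering procedure mentioned above, can be sketched numerically. The example below is illustrative only (it uses a synthetic image rather than MRI raw data): it transforms an image to k-space with a 2-D FFT and separates low from high spatial frequency components with a simple circular mask.

```python
# Illustrative sketch: decomposing an image into low and high spatial frequency
# components via the 2-D Fourier transform (synthetic data, not MRI raw data).
import numpy as np

# Synthetic "image": a smooth blob (low spatial frequencies) plus a sharp edge
# (high spatial frequencies).
n = 256
y, x = np.mgrid[0:n, 0:n]
image = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 40.0 ** 2))
image[:, n // 2:] += 0.5             # a step edge adds high-frequency content

kspace = np.fft.fftshift(np.fft.fft2(image))   # image -> k-space (spatial frequencies)

# Circular low-pass mask: keep spatial frequencies below a cutoff radius.
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
lowpass = (kx ** 2 + ky ** 2) <= 10 ** 2

low_sf = np.fft.ifft2(np.fft.ifftshift(kspace * lowpass)).real    # global shape
high_sf = np.fft.ifft2(np.fft.ifftshift(kspace * ~lowpass)).real  # edges, fine detail
# image ~= low_sf + high_sf, since the two masks partition k-space.
```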
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GF method** GF method: The GF method, sometimes referred to as the FG method, is a classical mechanical method introduced by Edgar Bright Wilson to obtain certain internal coordinates for a vibrating semi-rigid molecule, the so-called normal coordinates Qk. Normal coordinates decouple the classical vibrational motions of the molecule and thus give an easy route to obtaining vibrational amplitudes of the atoms as a function of time. In Wilson's GF method it is assumed that the molecular kinetic energy consists only of harmonic vibrations of the atoms, i.e., overall rotational and translational energy is ignored. Normal coordinates appear also in a quantum mechanical description of the vibrational motions of the molecule and of the Coriolis coupling between rotations and vibrations. GF method: It follows from application of the Eckart conditions that the matrix G⁻¹ gives the kinetic energy in terms of arbitrary linearized internal coordinates, while F represents the (harmonic) potential energy in terms of these coordinates. The GF method gives the linear transformation from general internal coordinates to the special set of normal coordinates. The GF method: A non-linear molecule consisting of N atoms has 3N − 6 internal degrees of freedom, because positioning the molecule in three-dimensional space requires three degrees of freedom, and the description of its orientation in space requires another three degrees of freedom. These six degrees of freedom must be subtracted from the 3N degrees of freedom of a system of N particles. The GF method: The interaction among the atoms in a molecule is described by a potential energy surface (PES), which is a function of 3N − 6 coordinates. The internal degrees of freedom s1, ..., s3N−6 describing the PES in an optimal way are often non-linear; they are, for instance, valence coordinates such as bending and torsion angles and bond stretches. It is possible to write the quantum mechanical kinetic energy operator for such curvilinear coordinates, but it is hard to formulate a general theory applicable to any molecule. This is why Wilson linearized the internal coordinates by assuming small displacements. The linearized version of the internal coordinate st is denoted by St. The GF method: The PES V can be Taylor expanded around its minimum in terms of the St. The third term (the Hessian of V) evaluated in the minimum is the force constant matrix F. In the harmonic approximation the Taylor series is terminated after this term. The second term, containing first derivatives, is zero because it is evaluated in the minimum of V. The first term can be included in the zero of energy. The GF method: The classical vibrational kinetic energy has the form 2T = ∑_{s,t=1}^{3N−6} gst(s) Ṡs Ṡt, where gst is an element of the metric tensor of the internal (curvilinear) coordinates. The dots indicate time derivatives. Mixed terms Ss Ṡt, generally present in curvilinear coordinates, do not appear here, because only linear coordinate transformations are used. Evaluation of the metric tensor g in the minimum s0 of V gives the positive definite and symmetric matrix G = g(s0)⁻¹. The GF method: One can solve the two matrix problems Lᵀ F L = Φ and Lᵀ G⁻¹ L = E simultaneously, since they are equivalent to the generalized eigenvalue problem G F L = L Φ, where Φ = diag(f1, …, f3N−6) and fi is equal to 4π²νi² (νi is the frequency of normal mode i); E is the unit matrix. The matrix L⁻¹ contains the normal coordinates Qk in its rows: Qk = ∑_{t=1}^{3N−6} (L⁻¹)kt St, for k = 1, …, 3N − 6.
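A minimal numerical sketch of this eigenvalue step is given below (not from the original text). It assumes NumPy and SciPy are available and uses made-up 2×2 G and F matrices for a hypothetical two-mode problem; in a real calculation G would be built from the molecular geometry and atomic masses, and F from the Hessian of the PES.

```python
# Minimal sketch: solving Wilson's GF equations numerically (illustrative matrices).
import numpy as np
from scipy.linalg import eigh

G = np.array([[1.10, 0.15],
              [0.15, 0.95]])        # inverse-kinetic-energy matrix, G = g(s0)^-1
F = np.array([[5.0, 0.4],
              [0.4, 3.2]])          # harmonic force constant matrix

# The generalized symmetric eigenproblem F L = G^-1 L Phi is equivalent to
# G F L = L Phi; eigh returns eigenvectors normalized so that L^T G^-1 L = E.
f, L = eigh(F, np.linalg.inv(G))    # f[i] = 4*pi^2*nu_i^2

nu = np.sqrt(f) / (2.0 * np.pi)     # harmonic frequencies of the normal modes
print("eigenvalues f_i:", f)
print("frequencies nu_i:", nu)

# Consistency checks from the text: L^T F L = Phi and L^T G^-1 L = E.
assert np.allclose(L.T @ F @ L, np.diag(f))
assert np.allclose(L.T @ np.linalg.inv(G) @ L, np.eye(2))
```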
The GF method: Because of the form of the generalized eigenvalue problem, the method is called the GF method, often with the name of its originator attached to it: Wilson's GF method. By matrix transposition on both sides of the equation, and using the fact that both G and F are symmetric matrices, as are diagonal matrices, one can recast this equation into a very similar one for FG. This is why the method is also referred to as Wilson's FG method. The GF method: We introduce the vectors s = col(S1, …, S3N−6) and Q = col(Q1, …, Q3N−6), which satisfy the relation s = LQ. Upon use of the results of the generalized eigenvalue equation, the energy E = T + V (in the harmonic approximation) of the molecule becomes 2E = ṡᵀ G⁻¹ ṡ + sᵀ F s = Q̇ᵀ (Lᵀ G⁻¹ L) Q̇ + Qᵀ (Lᵀ F L) Q = Q̇ᵀ Q̇ + Qᵀ Φ Q = ∑_{t=1}^{3N−6} (Q̇t² + ft Qt²). The Lagrangian L = T − V is L = (1/2) ∑_{t=1}^{3N−6} (Q̇t² − ft Qt²). The corresponding Lagrange equations are identical to the Newton equations Q̈t + ft Qt = 0 for a set of uncoupled harmonic oscillators. These ordinary second-order differential equations are easily solved, yielding Qt as a function of time; see the article on harmonic oscillators. Normal coordinates in terms of Cartesian displacement coordinates: Often the normal coordinates are expressed as linear combinations of Cartesian displacement coordinates. Let RA be the position vector of nucleus A and RA0 the corresponding equilibrium position. Then xA ≡ RA − RA0 is by definition the Cartesian displacement coordinate of nucleus A. Wilson's linearization of the internal curvilinear coordinates qt expresses the coordinate St in terms of the displacement coordinates: St = ∑_{A=1}^{N} ∑_{i=1}^{3} s^t_{Ai} xAi = ∑_{A=1}^{N} s^t_A · xA, for t = 1, …, 3N − 6, where s^t_A is known as a Wilson s-vector. If we put the s^t_{Ai} into a (3N − 6) × 3N matrix B, this equation becomes, in matrix language, s = Bx. The actual form of the matrix elements of B can be fairly complicated. Especially for a torsion angle, which involves 4 atoms, it requires tedious vector algebra to derive the corresponding values of the s^t_{Ai}. For more details on this method, known as the Wilson s-vector method, see the book by Wilson et al. or the article on molecular vibration. Now, s = LQ = L lᵀ q = B M^{−1/2} q, which can be inverted and put in summation language: Qk = ∑_{A=1}^{N} ∑_{i=1}^{3} D^k_{Ai} xAi, for k = 1, …, 3N − 6, with D ≡ L⁻¹B. Here D is a (3N − 6) × 3N matrix, which is given by (i) the linearization of the internal coordinates s (an algebraic process) and (ii) the solution of Wilson's GF equations (a numeric process). Matrices involved in the analysis: There are several related coordinate systems commonly used in the GF matrix analysis. These quantities are related by a variety of matrices. For clarity, we provide the coordinate systems and their interrelations here. The relevant coordinates are: x, the Cartesian displacement coordinates of the atoms; s, the (linearized) internal coordinates; q, the mass-weighted Cartesian coordinates; and Q, the normal coordinates. These coordinate systems are related to one another by: s = Bx, i.e. the matrix B transforms the Cartesian coordinates to (linearized) internal coordinates; x = M^{−1/2} q, i.e. the mass matrix M^{1/2} transforms Cartesian coordinates to mass-weighted Cartesian coordinates; q = lQ, i.e. the matrix l transforms the normal coordinates to mass-weighted Cartesian coordinates; and s = LQ, i.e. the matrix L transforms the normal coordinates to internal coordinates. Note the useful relationship L = B M^{−1/2} l. These matrices allow one to construct the G matrix quite simply as G = B M⁻¹ Bᵀ.
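As an illustration of the relation G = B M⁻¹ Bᵀ, the sketch below (not from the original text) builds G for the single bond stretch of a hypothetical diatomic molecule A–B lying along the z axis; being linear, a diatomic has 3N − 5 = 1 internal coordinate rather than 3N − 6, and the masses are made-up example values.

```python
# Illustrative sketch: building G = B M^-1 B^T for the bond stretch of a
# hypothetical diatomic A-B aligned with the z axis (made-up masses).
import numpy as np

m_A, m_B = 1.0, 19.0                              # example atomic masses (amu)
M_inv = np.diag(1.0 / np.repeat([m_A, m_B], 3))   # inverse mass matrix, one entry
                                                  # per Cartesian coordinate

# Wilson B matrix (1 x 3N): the s-vectors of a stretch point along the bond,
# s_A = -e_z on atom A and s_B = +e_z on atom B, so that s = B x is the change
# in bond length for small displacements.
B = np.array([[0.0, 0.0, -1.0, 0.0, 0.0, 1.0]])

G = B @ M_inv @ B.T
print(G)   # [[1/m_A + 1/m_B]], i.e. the inverse reduced mass of the stretch
```

Combined with a force constant F for the stretch, this one-element G matrix fed into the eigenvalue problem above reproduces the familiar harmonic-oscillator result f = F/μ for the squared angular frequency.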
Relation to Eckart conditions: From the invariance of the internal coordinates St under overall rotation and translation of the molecule, the same invariance follows for the linearized coordinates s^t_A. It can be shown that this implies that the following 6 conditions are satisfied by the internal coordinates: ∑_{A=1}^{N} s^t_A = 0 and ∑_{A=1}^{N} RA0 × s^t_A = 0, for t = 1, …, 3N − 6. These conditions follow from the Eckart conditions that hold for the displacement vectors: ∑_{A=1}^{N} mA xA = 0 and ∑_{A=1}^{N} mA RA0 × xA = 0. Further references: Califano, S. (1976). Vibrational States. New York-London: Wiley. ISBN 0-471-12996-8. Papoušek, D.; Aliev, M. R. (1982). Molecular Vibrational-Rotational Spectra. Elsevier. ISBN 0-444-99737-7. Wilson, E. B.; Decius, J. C.; Cross, P. C. (1995) [1955]. Molecular Vibrations. New York: Dover. ISBN 0-486-63941-X.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded