**Trāṭaka** Trāṭaka: Trāṭaka (Sanskrit: त्राटक "look, gaze") is a yogic purification (a shatkarma) and a tantric method of meditation that involves staring at a single point such as a small object, black dot or candle flame. Description: The practitioner may fix attention on a symbol or yantra, such as the Om symbol, a black dot, the image of some deity or guru, a flame, a mirror or any point, and stare at it. A candle should be three to four feet (1 metre plus) away, the flame level with the eyes. The practitioner relaxes but keeps the spine erect and remains wakeful and vigilant.
**Rose Marie Parr** Rose Marie Parr: Rose Marie Parr is the Chief Pharmaceutical Officer for Scotland. She is an honorary professor at both Scottish Schools of Pharmacy. Early life: Parr studied at the University of Strathclyde, graduating with a BSc (Hons) in pharmacy and then an MSc. She then completed a Doctorate in Education at the University of Glasgow. Career: She gained her registration in 1982 and began working as a hospital pharmacist with Lanarkshire Health Board. In 1993, Parr became Director of Pharmacy at the Scottish Centre for Pharmacy Postgraduate Education (SCPPE), which would later become the Scottish Centre for Post Qualification Education. Parr became the Director of Pharmacy of NHS Education for Scotland (NES) in 2002, when several healthcare education organisations joined to form a single national body. In 2004, Parr was appointed as an honorary reader at the Robert Gordon University in Aberdeen. She is a visiting professor at the University of Strathclyde in Glasgow. In 2007, Parr was elected the first Chair of the Scottish Pharmacy Board of the Royal Pharmaceutical Society of Great Britain (RPSGB). In April 2015, Parr was appointed Chief Pharmaceutical Officer for Scotland following the retirement of Professor Bill Scott from the post in March 2015.
**North American Nanohertz Observatory for Gravitational Waves** North American Nanohertz Observatory for Gravitational Waves: The North American Nanohertz Observatory for Gravitational Waves (NANOGrav) is a consortium of astronomers who share a common goal of detecting gravitational waves via regular observations of an ensemble of millisecond pulsars using the Green Bank Telescope, Arecibo Observatory, and the Very Large Array. This project is being carried out in collaboration with international partners in the Parkes Pulsar Timing Array in Australia, the European Pulsar Timing Array, and the Indian Pulsar Timing Array as part of the International Pulsar Timing Array. Gravitational wave detection using pulsar timing: Gravitational waves are an important prediction from Einstein's general theory of relativity and result from the bulk motion of matter, fluctuations during the early universe and the dynamics of space-time itself. Pulsars are rapidly rotating, highly magnetized neutron stars formed during the supernova explosions of massive stars. They act as highly accurate clocks with a wealth of physical applications ranging from celestial mechanics, neutron star seismology, tests of strong-field gravity and Galactic astronomy. Gravitational wave detection using pulsar timing: The idea to use pulsars as gravitational wave detectors was originally proposed by Sazhin and Detweiler in the late 1970s. The idea is to treat the solar system barycenter and a distant pulsar as opposite ends of an imaginary arm in space. The pulsar acts as the reference clock at one end of the arm sending out regular signals which are monitored by an observer on the Earth. The effect of a passing gravitational wave would be to perturb the local space-time metric and cause a change in the observed rotational frequency of the pulsar. Gravitational wave detection using pulsar timing: Hellings and Downs extended this idea in 1983 to an array of pulsars and found that a stochastic background of gravitational waves would produce a correlated signal for different angular separations on the sky, now known as the Hellings–Downs curve. This work was limited in sensitivity by the precision and stability of the pulsar clocks in the array. Following the discovery of the first millisecond pulsar in 1982, Foster and Donald C. Backer were among the first astronomers to seriously improve the sensitivity to gravitational waves by applying the Hellings-Downs analysis to an array of highly stable millisecond pulsars. Gravitational wave detection using pulsar timing: The advent of state-of-the-art digital data acquisition systems, new radio telescopes and receiver systems and the discoveries of many new pulsars advanced the sensitivity of the pulsar timing array to gravitational waves. The 2010 paper by Hobbs et al. summarizes the early state of the international effort. The 2013 Demorest et al. paper describes the five-year data release, analysis, and first NANOGrav limit on the stochastic gravitational wave background. It was followed by the nine-year and 11-year data releases in 2015 and 2018, respectively. Each further limited the gravitational wave background and, in the second case, techniques to precisely determine the barycenter of the solar system were refined. 
Gravitational wave detection using pulsar timing: In 2020, the collaboration presented the first evidence of a gravitational wave background within the 12.5-year data release, in the form of a noise process consistent with expectations; however, it could not be definitively attributed to gravitational waves. In the 2020 Decadal Survey of Astronomy and Astrophysics, the National Academies of Sciences named NANOGrav as one of eight mid-scale astrophysics projects recommended as high priorities for funding in the next decade. Gravitational wave detection using pulsar timing: In June 2023, NANOGrav published further evidence for a stochastic gravitational wave background using the 15-year data release. In particular, it provides a measurement of the Hellings–Downs curve, the distinctive signature of a gravitational-wave origin for the observations. Funding sources: The NSF first funded researchers within NANOGrav as part of the Partnerships for International Research and Education (PIRE) program from 2010 to 2015, then through the Physics Frontiers Center (PFC) program from 2015 to 2021, and through a second PFC grant starting in 2021. NANOGrav as an NSF PFC has been supported by the NSF Divisions of Physics and Astronomical Sciences and the Windows on the Universe program. The NSF has also contributed to supporting the International Pulsar Timing Array through the AccelNet program. NANOGrav has additionally been supported by The Gordon and Betty Moore Foundation, the Natural Sciences and Engineering Research Council of Canada, and the Canadian Institute for Advanced Research. Funding sources: The research activities of NANOGrav have also been supported by single-investigator grants awarded through the Natural Sciences and Engineering Research Council (NSERC) in Canada, the National Science Foundation (NSF) and the Research Corporation for Scientific Advancement in the USA.
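The Hellings–Downs correlation has a simple closed form. As an illustrative sketch (using one common normalization in which the correlation is 1/2 at zero separation and omitting the pulsar autocorrelation term; this is not drawn from NANOGrav's own analysis code), the curve can be evaluated as follows:

```python
import math

def hellings_downs(zeta):
    """Hellings-Downs correlation for two pulsars separated by angle zeta (radians).

    One common normalization: Gamma(zeta) = (3/2)*x*ln(x) - x/4 + 1/2,
    where x = (1 - cos(zeta)) / 2, so Gamma -> 1/2 as zeta -> 0.
    The extra 'pulsar term' for the autocorrelation of a single pulsar is omitted.
    """
    x = (1.0 - math.cos(zeta)) / 2.0
    if x <= 0.0:              # coincident directions: x*ln(x) -> 0
        return 0.5
    return 1.5 * x * math.log(x) - x / 4.0 + 0.5

if __name__ == "__main__":
    # The quadrupolar shape (positive at small separations, dipping negative
    # at intermediate separations, recovering toward 180 degrees) is the
    # signature searched for in pulsar-pair correlations.
    for deg in (0, 30, 60, 90, 120, 150, 180):
        print(f"{deg:3d} deg -> {hellings_downs(math.radians(deg)):+.3f}")
```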
**Rosenbrock function** Rosenbrock function: In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms. It is also known as Rosenbrock's valley or Rosenbrock's banana function. The global minimum is inside a long, narrow, parabolic-shaped flat valley. To find the valley is trivial. To converge to the global minimum, however, is difficult. The function is defined by f(x, y) = (a − x)² + b(y − x²)². It has a global minimum at (x, y) = (a, a²), where f(x, y) = 0. Usually these parameters are set such that a = 1 and b = 100. Only in the trivial case where a = 0 is the function symmetric and the minimum at the origin. Multidimensional generalizations: Two variants are commonly encountered. One is the sum of N/2 uncoupled 2D Rosenbrock problems, and is defined only for even N: f(x) = Σ_{i=1..N/2} [100(x_{2i−1}² − x_{2i})² + (x_{2i−1} − 1)²]. This variant has predictably simple solutions. A second, more involved variant is f(x) = Σ_{i=1..N−1} [100(x_{i+1} − x_i²)² + (1 − x_i)²], where x = (x_1, …, x_N) ∈ R^N. Multidimensional generalizations: This variant has exactly one minimum for N = 3 (at (1, 1, 1)) and exactly two minima for 4 ≤ N ≤ 7 — the global minimum at (1, 1, ..., 1) and a local minimum near x̂ = (−1, 1, …, 1). This result is obtained by setting the gradient of the function equal to zero and noticing that the resulting equation is a rational function of x. For small N the polynomials can be determined exactly and Sturm's theorem can be used to determine the number of real roots, while the roots can be bounded in the region of |x_i| < 2.4. For larger N this method breaks down due to the size of the coefficients involved. Stationary points: Many of the stationary points of the function exhibit a regular pattern when plotted. This structure can be exploited to locate them. Optimization examples: The Rosenbrock function can be efficiently optimized by adapting an appropriate coordinate system without using any gradient information and without building local approximation models (in contrast to many derivative-free optimizers). The following figure illustrates an example of 2-dimensional Rosenbrock function optimization by adaptive coordinate descent from starting point x₀ = (−3, −4). A solution with function value 10⁻¹⁰ can be found after 325 function evaluations. Optimization examples: Using the Nelder–Mead method from starting point x₀ = (−1, 1) with a regular initial simplex, a minimum with function value 1.36·10⁻¹⁰ is found after 185 function evaluations. The figure below visualizes the evolution of the algorithm.
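For readers who want to experiment, a minimal sketch in Python is given below; it assumes SciPy is available, defines the 2-D function and the coupled multidimensional variant from the formulas above, and repeats a Nelder–Mead run from an illustrative starting point (the evaluation counts and final values will differ from the figures quoted above).

```python
import numpy as np
from scipy.optimize import minimize  # assumes SciPy is installed

def rosenbrock_2d(p, a=1.0, b=100.0):
    """f(x, y) = (a - x)^2 + b*(y - x^2)^2; global minimum at (a, a^2)."""
    x, y = p
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

def rosenbrock_nd(x):
    """Coupled variant: sum_i [100*(x[i+1] - x[i]^2)^2 + (1 - x[i])^2]."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

if __name__ == "__main__":
    result = minimize(rosenbrock_2d, x0=[-1.0, 1.0], method="Nelder-Mead")
    print(result.x, result.fun, result.nfev)  # converges toward (1, 1) with f ~ 0
```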
**Type XXVII collagen** Type XXVII collagen: Type XXVII collagen is the protein predicted to be encoded by COL27A1. It was first described by Dr. James M. Pace and his colleagues at the University of Washington. It is related to the fibrillar collagens: type II, type XI, and type XXIV. Current research suggests that it is made by cartilage during skeletal development.
**Traumatin** Traumatin: Traumatin is a plant hormone produced in response to wounding. Traumatin is a precursor to the related hormone traumatic acid.
**Pleasure ground** Pleasure ground: In English gardening history, the pleasure ground or pleasure garden comprised the parts of a large garden designed for the use of the owners, as opposed to the kitchen garden and the wider park. It normally included flower gardens, typically directly outside the house, and areas of lawn, used for playing games (bowling grounds were very common, later croquet lawns), and perhaps "groves" or a wilderness for walking around. Smaller gardens were often entirely arranged as pleasure grounds, as are modern public parks. Pleasure ground: The concept survived a number of major shifts in the style of English gardens, from the Renaissance, through Baroque formal gardens, to the English landscape garden style. The pleasure grounds of English country house gardens have typically been remade a number of times, and awareness has recently returned that even the designs of the famous 18th-century landscapists such as Capability Brown originally included large areas of pleasure gardens, which, unlike the landscaped parks, have rarely survived without major changes. History: The type of garden known as the pleasure ground, in the shape of an ornamented area of lawn right next to the house, was already known in England during the Renaissance, and continued to be an essential part of the garden. Encouraged by the landscape architect Humphry Repton, this division of the grounds of a country house spread to Germany around 1800 and was employed inter alia by Prince Pückler-Muskau and Peter Joseph Lenné, who made use of it in their designs at Muskau, Glienicke and Babelsberg. The first pleasure ground in Prussia is probably that laid out at Glienicke Palace by Lenné in 1816. History: Jane Austen makes use of the pleasure grounds in her 1814 novel Mansfield Park when describing a visit by the young people to Sotherton Court, where the owner, James Rushworth, plans to hire Repton to make further improvements. A German description: The German landscape gardener Hermann, Prince of Pückler-Muskau, explained the meaning of this term in his 1834 publication Andeutungen über Landschaftsgärtnerei ("Ideas on Landscape Gardening") as follows: "The word pleasure ground is difficult enough to render in German and I have therefore felt it better to retain the English expression. This means a piece of land adjacent to a house, which is fenced in and ornamented, of much greater extent than gardens, and something of an intermediate thing, a connecting element between the park and the actual gardens." And further: "[...] if the park is an idealised, condensed piece of the natural world, so the pleasure garden is an extended residence [...] in this way [...] the suite of rooms is continued on a larger scale in the open air [...]" Pückler-Muskau's description refers to one of the three elements of the English landscape garden that are, from the outer perimeter of the estate to its main building, the park, the pleasure ground and the flower gardens. Usually there was also a flower-bedecked terrace on the house itself so that the transition from the open countryside to the house was in several stages. Form: The pleasure ground was an ornately designed garden area. It consisted of an ornamental lawn at several levels immediately next to the house. This lawn required a lot of maintenance, because the aim was to make the lawn appear like a "velvet carpet".
The ornamentation included native and exotic plants that were laid out as flower carpets in various, mostly geometric, shapes and, according to Repton's advice, placed tastefully in the lawn, with round or oval flower baskets hanging mostly near the paths, as well as special individual shrubs and trees, statues, water features, small ponds or garden buildings. A fence separating the pleasure ground from the rest of the park area was intended, on the one hand, to make visible the separation between the idealized nature of the English landscape garden and the artistic design of the ornamental garden. On the other hand, the enclosure was made for pragmatic reasons, in order to keep grazing cattle or wild animals away from the ornamental garden. Around the outside of the pleasure ground, and sometimes partly through it, a winding system of paths – belt walks – led through an area formed by gentle hillocks with groups of shrubs and trees to various viewing points. These could be experienced at places along the walks and offered views of buildings and the surrounding landscape, which was set out as a backdrop.
**Duffing map** Duffing map: The Duffing map (also called the Holmes map) is a discrete-time dynamical system. It is an example of a dynamical system that exhibits chaotic behavior. The Duffing map takes a point (x_n, y_n) in the plane and maps it to a new point given by x_{n+1} = y_n and y_{n+1} = −b·x_n + a·y_n − y_n³. The map depends on the two constants a and b. These are usually set to a = 2.75 and b = 0.2 to produce chaotic behaviour. It is a discrete version of the Duffing equation.
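As a minimal sketch of iterating the map with the chaotic parameter choice quoted above (the initial condition and the number of steps are arbitrary illustrative choices, not taken from the text):

```python
def duffing_step(x, y, a=2.75, b=0.2):
    """One iteration of the Duffing map: (x, y) -> (y, -b*x + a*y - y**3)."""
    return y, -b * x + a * y - y ** 3

def orbit(x0, y0, n, a=2.75, b=0.2):
    """Return the first n points of the orbit starting at (x0, y0)."""
    points = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n - 1):
        x, y = duffing_step(x, y, a, b)
        points.append((x, y))
    return points

if __name__ == "__main__":
    for x, y in orbit(0.5, 0.5, 10):
        print(f"{x:+.6f} {y:+.6f}")
```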
**Intrathecal administration** Intrathecal administration: Intrathecal administration is a route of administration for drugs via an injection into the spinal canal, or into the subarachnoid space, so that the drug reaches the cerebrospinal fluid (CSF); it is useful in spinal anesthesia, chemotherapy, or pain management applications. This route is also used to introduce drugs that fight certain infections, particularly post-neurosurgical. The drug needs to be given this way to avoid being stopped by the blood–brain barrier. The same drug given orally must enter the bloodstream and may not be able to pass out of it and into the brain. Drugs given by the intrathecal route often have to be compounded specially by a pharmacist or technician because they cannot contain any preservative or other potentially harmful inactive ingredients that are sometimes found in standard injectable drug preparations. Intrathecal administration: The route of administration is sometimes simply referred to as "intrathecal"; however, the term is also an adjective that refers to something occurring in or introduced into the anatomic space or potential space inside a sheath, most commonly the arachnoid membrane of the brain or spinal cord (under which is the subarachnoid space). For example, intrathecal immunoglobulin production is production of antibodies in the spinal cord. The abbreviation "IT" is best not used; instead, "intrathecal" is spelled out to avoid medical mistakes. Intrathecal administration of analgesic agents: Intrathecal analgesia (an opioid with a local anesthetic) is very popular for a single 24-hour dose of analgesia. Caution is needed because of late-onset hypoventilation due to intrathecal opioids. Severe pruritus and urinary retention may limit the use of intrathecal morphine. Pethidine has the unusual property of being both a local anaesthetic and an opioid analgesic, which occasionally permits its use as the sole intrathecal anaesthetic agent. An intrathecal catheter and pump can be used to deliver a local anaesthetic and sometimes also an opioid and/or clonidine. Intrathecal administration of analgesic agents: The analgesic ziconotide is administered through an intrathecal pump system. Intrathecal administration of antifungal agents: For CNS infections, amphotericin B is administered intrathecally. Intrathecal chemotherapy: Currently, only four agents are licensed for intrathecal chemotherapy. They are methotrexate, cytarabine (Ara-C), hydrocortisone, and, rarely, thiotepa. Accidental administration of any vinca alkaloids—especially vincristine but also vinblastine, vinorelbine, or others—via the intrathecal route is nearly always fatal. Intrathecal baclofen: Often reserved for spastic cerebral palsy, intrathecally administered baclofen is delivered through an intrathecal pump implanted just below the skin of the abdomen (or behind the chest wall, depending on the surgeon implanting the device and on patient preference), with a tube (called the 'catheter') connected directly to the base of the spine, where it bathes the spinal cord using a dose about one thousand times smaller than that required by orally administered baclofen. Intrathecal baclofen also carries none of the side effects, such as sleepiness, that typically occur with oral baclofen. However, intrathecal baclofen pumps carry serious clinical risks, such as infection or a possibly fatal sudden malfunction, that oral baclofen does not.
Intrathecal baclofen: A tremendous amount of care is taken to ensure the optimal location of the pump and catheter, based upon medical considerations and patient requirements.
**Education 3.0** Education 3.0: Education 3.0 is an umbrella term used by educational theorists to describe a variety of ways to integrate technology into learning. According to Jeff Borden, Education 3.0 entails a confluence of neuroscience, cognitive psychology, and education technology, using web-based digital and mobile technology, including apps, hardware and software, and "anything else with an e in front of it." Instead of viewing digital technology as a competitor to current teaching models, Education 3.0 means actively embracing new technologies to see how they can help students learn efficiently. Writer Michael Horn describes it as moving "beyond mass education to mass-customized education through blended learning," using the flexibility of technology to help students of varying backgrounds and skills. The term has been included in the term Entrepreneurship Education 3.0 which denotes a broadening of entrepreneurship education with interdisciplinary appeal for non-business majors, according to a report in Technically Philly magazine.With Education 3.0, classes move away from traditional lectures and instead focus on interactive learning, with question and answer sessions, reviews and quizzes, discussions, labs, and other project-based learning. It usually involves customization and personalization, such that educational content is tailored to meet the needs of specific students. It can mean reversing the traditional classroom learning, in which lectures happen in class and homework is done out of class, into flipped classrooms, such that new content is delivered online while students work on assignments together in class. Education 3.0: Lectures move online—which handles students' need for personalization—and, as one of Lee's presentations states, "What in a class? Anything but lecturing!" Class time moves away from PowerPoint, blackboards, and whiteboards and is instead devoted to interactive and applied learning—questions and answers, review and summary, quizzes, interactive problem solving, discussion, project-based learning, and labs. Education 3.0: The term has been used by educational theorists in South Korea and in Latin America. According to a report in Forbes magazine, schools such as the Korea Advanced Institute of Science and Technology or KAIST are actively exploring Education 3.0. In Latin America, Educación 3.0 is being explored as a way to make education affordable to impoverished people throughout the region, and to help ameliorate poverty.
**Bakelite** Bakelite: Bakelite ( BAY-kə-lyte), formally Polyoxybenzylmethyleneglycolanhydride, is a thermosetting phenol formaldehyde resin, formed from a condensation reaction of phenol with formaldehyde. The first plastic made from synthetic components, it was developed by the Belgian chemist and inventor Leo Baekeland in Yonkers, New York in 1907, and patented on December 7, 1909 (U.S. Patent 942699A). Bakelite: Because of its electrical nonconductivity and heat-resistant properties, it became a great commercial success. It was used in electrical insulators, radio and telephone casings, and such diverse products as kitchenware, jewelry, pipe stems, children's toys, and firearms. The "retro" appeal of old Bakelite products has made them collectible.The creation of a synthetic plastic was revolutionary for the chemical industry, which at the time made most of its income from cloth dyes and explosives. Bakelite's commercial success inspired the industry to develop other synthetic plastics. As the world's first commercial synthetic plastic, Bakelite was named a National Historic Chemical Landmark by the American Chemical Society. History: Baekeland was already wealthy due to his invention of Velox photographic paper when he began to investigate the reactions of phenol and formaldehyde in his home laboratory. Chemists had begun to recognize that many natural resins and fibers were polymers. Baekeland's initial intent was to find a replacement for shellac, a material in limited supply because it was made naturally from the secretion of lac insects (specifically Kerria lacca). He produced a soluble phenol-formaldehyde shellac called "Novolak", but it was not a market success, even though it is still used to this day (e.g., as a photoresist). History: He then began experimenting on strengthening wood by impregnating it with a synthetic resin rather than coating it. By controlling the pressure and temperature applied to phenol and formaldehyde, he produced a hard moldable material that he named Bakelite, after himself. It was the first synthetic thermosetting plastic produced, and Baekeland speculated on "the thousand and one ... articles" it could be used to make.: 58–59  He considered the possibilities of using a wide variety of filling materials, including cotton, powdered bronze, and slate dust, but was most successful with wood and asbestos fibers, though asbestos was gradually abandoned by all manufacturers due to stricter environmental laws.: 9 Baekeland filed a substantial number of related patents. Bakelite, his "method of making insoluble products of phenol and formaldehyde," was filed on July 13, 1907 and granted on December 7, 1909. He also filed for patent protection in other countries, including Belgium, Canada, Denmark, Hungary, Japan, Mexico, Russia and Spain. He announced his invention at a meeting of the American Chemical Society on February 5, 1909. History: Baekeland started semi-commercial production of his new material in his home laboratory, marketing it as a material for electrical insulators. In the summer of 1909 he licensed the continental European rights to Rütger AG. The subsidiary formed at that time, Bakelite AG, was the first to produce Bakelite on an industrial scale. History: By 1910, Baekeland was producing enough material in the US to justify expansion. He formed the General Bakelite Company of Perth Amboy, NJ as a U.S. 
company to manufacture and market his new industrial material, and made overseas connections to produce it in other countries.The Bakelite Company produced "transparent" cast resin (which did not include filler) for a small market during the 1910s and 1920s.: 172–174  Blocks or rods of cast resin, also known as "artificial amber", were machined and carved to create items such as pipe stems, cigarette holders and jewelry. However, the demand for molded plastics led the company to concentrate on molding rather than cast solid resins.: 172–174 The Bakelite Corporation was formed in 1922 after patent litigation favorable to Baekeland, from a merger of three companies: Baekeland's General Bakelite Company; the Condensite Company, founded by J. W. Aylesworth; and the Redmanol Chemical Products Company, founded by Lawrence V. Redman. Under director of advertising and public relations Allan Brown, who came to Bakelite from Condensite, Bakelite was aggressively marketed as "the material of a thousand uses".: 58–59  A filing for a trademark featuring the letter B above the mathematical symbol for infinity was made August 25, 1925, and claimed the mark was in use as of December 1, 1924. A wide variety of uses were listed in their trademark applications. History: The first issue of Plastics magazine, October 1925, featured Bakelite on its cover, and included the article "Bakelite – What It Is" by Allan Brown. The range of colors available included "black, brown, red, yellow, green, gray, blue, and blends of two or more of these". The article emphasized that Bakelite came in various forms. "Bakelite is manufactured in several forms to suit varying requirements. In all these forms the fundamental basis is the initial Bakelite resin. This variety includes clear material, for jewelry, smokers' articles, etc.; cement, using in sealing electric light bulbs in metal bases; varnishes, for impregnating electric coils, etc.; lacquers, for protecting the surface of hardware; enamels, for giving resistive coating to industrial equipment; Laminated Bakelite, used for silent gears and insulation; and molding material, from which are formed innumerable articles of utility and beauty. The molding material is prepared ordinarily by the impregnation of cellulose substances with the initial 'uncured' resin.": 17  In a 1925 report, the United States Tariff Commission hailed the commercial manufacture of synthetic phenolic resin as "distinctly an American achievement", and noted that "the publication of figures, however, would be a virtual disclosure of the production of an individual company".In England, Bakelite Limited, a merger of three British phenol formaldehyde resin suppliers (Damard Lacquer Company Limited of Birmingham, Mouldensite Limited of Darley Dale and Redmanol Chemical Products Company of London), was formed in 1926. A new Bakelite factory opened in Tyseley, Birmingham, around 1928. It was the "heart of Bakelite production in the UK" until it closed in 1987.A new factory opened in Bound Brook, New Jersey, in 1931.: 75 In 1939, the companies were acquired by Union Carbide and Carbon Corporation. History: In 2005 German Bakelite manufacturer Bakelite AG was acquired by Borden Chemical of Columbus, OH, now Hexion Inc.[1] In addition to the original Bakelite material, these companies eventually made a wide range of other products, many of which were marketed under the brand name "Bakelite plastics". 
These included other types of cast phenolic resins similar to Catalin, and urea-formaldehyde resins, which could be made in brighter colors than polyoxy­benzyl­methyleneglycol­anhydride.Once Baekeland's heat and pressure patents expired in 1927, Bakelite Corporation faced serious competition from other companies. Because molded Bakelite incorporated fillers to give it strength, it tended to be made in concealing dark colors. In 1927, beads, bangles and earrings were produced by the Catalin company, through a different process which enabled them to introduce 15 new colors. Translucent jewelry, poker chips and other items made of phenolic resins were introduced in the 1930s or 1940s by the Catalin company under the Prystal name. The creation of marbled phenolic resins may also be attributable to the Catalin company. Synthesis: Making Bakelite is a multi-stage process. It begins with heating of phenol and formaldehyde in the presence of a catalyst such as hydrochloric acid, zinc chloride, or the base ammonia. This creates a liquid condensation product, referred to as Bakelite A, which is soluble in alcohol, acetone, or additional phenol. Heated further, the product becomes partially soluble and can still be softened by heat. Sustained heating results in an "insoluble hard gum". However, the high temperatures required to create this tends to cause violent foaming of the mixture when done at standard atmospheric pressure, which results in the cooled material being porous and breakable. Baekeland's innovative step was to put his "last condensation product" into an egg-shaped "Bakelizer". By heating it under pressure, at about 150 °C (300 °F), Baekeland was able to suppress the foaming that would otherwise occur. The resulting substance is extremely hard and both infusible and insoluble.: 67  : 38–39 Compression molding Molded Bakelite forms in a condensation reaction of phenol and formaldehyde, with wood flour or asbestos fiber as a filler, under high pressure and heat in a time frame of a few minutes of curing. The result is a hard plastic material. Asbestos was gradually abandoned as filler because many countries banned the production of asbestos.: 9 Bakelite's molding process had a number of advantages. Bakelite resin could be provided either as powder, or as preformed partially cured slugs, increasing the speed of the casting. Thermosetting resins such as Bakelite required heat and pressure during the molding cycle, but could be removed from the molding process without being cooled, again making the molding process faster. Also, because of the smooth polished surface that resulted, Bakelite objects required less finishing. Millions of parts could be duplicated quickly and relatively cheaply.: 42–43 Phenolic sheet Another market for Bakelite resin was the creation of phenolic sheet materials. Phenolic sheet is a hard, dense material made by applying heat and pressure to layers of paper or glass cloth impregnated with synthetic resin.: 53  Paper, cotton fabrics, synthetic fabrics, glass fabrics and unwoven fabrics are all possible materials used in lamination. When heat and pressure are applied, polymerization transforms the layers into thermosetting industrial laminated plastic.Bakelite phenolic sheet is produced in many commercial grades and with various additives to meet diverse mechanical, electrical and thermal requirements. Some common types include: Paper reinforced NEMA XX per MIL-I-24768 PBG. 
Normal electrical applications, moderate mechanical strength, continuous operating temperature of 250 °F (120 °C). Synthesis: Canvas reinforced NEMA C per MIL-I-24768 TYPE FBM NEMA CE per MIL-I-24768 TYPE FBG. Good mechanical and impact strength with continuous operating temperature of 250 °F (120 °C). Linen reinforced NEMA L per MIL-I-24768 TYPE FBI NEMA LE per MIL-I-24768 TYPE FEI. Good mechanical and electrical strength. Recommended for intricate high strength parts. Continuous operating temperature 250 °F (120 °C). Nylon reinforced NEMA N-1 per MIL-I-24768 TYPE NPG. Superior electrical properties under humid conditions, fungus resistant, continuous operating temperature of 160 °F (70 °C). Properties: Bakelite has a number of important properties. It can be molded very quickly, decreasing production time. Moldings are smooth, retain their shape and are resistant to heat, scratches, and destructive solvents. It is also resistant to electricity, and prized for its low conductivity. It is not flexible.: 44–45 Phenolic resin products may swell slightly under conditions of extreme humidity or perpetual dampness. When rubbed or burnt, Bakelite has a distinctive, acrid, sickly-sweet or fishy odor. Applications and uses: The characteristics of Bakelite made it particularly suitable as a molding compound, an adhesive or binding agent, a varnish, and a protective coating. Bakelite was particularly suitable for the emerging electrical and automobile industries because of its extraordinarily high resistance to electricity, heat, and chemical action.: 44–45 The earliest commercial use of Bakelite in the electrical industry was the molding of tiny insulating bushings, made in 1908 for the Weston Electrical Instrument Corporation by Richard W. Seabury of the Boonton Rubber Company.: 43  Bakelite was soon used for non-conducting parts of telephones, radios and other electrical devices, including bases and sockets for light bulbs and electron tubes (vacuum tubes), supports for any type of electrical components, automobile distributor caps and other insulators. By 1912, it was being used to make billiard balls, since its elasticity and the sound it made were similar to ivory.During World War I, Bakelite was used widely, particularly in electrical systems. Important projects included the Liberty airplane engine, the wireless telephone and radio phone, and the use of micarta-bakelite propellers in the NBS-1 bomber and the DH-4B aeroplane.Bakelite's availability and ease and speed of molding helped to lower the costs and increase product availability so that telephones and radios became common household consumer goods.: 116–117  It was also very important to the developing automobile industry. It was soon found in myriad other consumer products ranging from pipe stems and buttons to saxophone mouthpieces, cameras, early machine guns, and appliance casings. Bakelite was also very commonly used in making molded grip panels on handguns, and as furniture for submachine guns and machineguns, and the classic Bakelite magazines for Kalashnikov rifles, as well as numerous knife handles and "scales" through the first half of the 20th century.Beginning in the 1920s, it became a popular material for jewelry. Designer Coco Chanel included Bakelite bracelets in her costume jewelry collections. : 27–29  Designers such as Elsa Schiaparelli used it for jewelry and also for specially designed dress buttons. Later, Diana Vreeland, editor of Vogue, was enthusiastic about Bakelite. 
Bakelite was also used to make presentation boxes for Breitling watches. Applications and uses: By 1930, designer Paul T. Frankl considered Bakelite a "Materia Nova", "expressive of our own age".: 107  By the 1930s, Bakelite was used for game pieces like chessmen, poker chips, dominoes and mahjong sets. Kitchenware made with Bakelite, including canisters and tableware, was promoted for its resistance to heat and to chipping. In the mid-1930s, Northland marketed a line of skis with a black "Ebonite" base, a coating of Bakelite. By 1935, it was used in solid-body electric guitars. Performers such as Jerry Byrd loved the tone of Bakelite guitars but found them difficult to keep in tune. Charles Plimpton patented BAYKO in 1933 and rushed out his first construction sets for Christmas 1934. He called the toy Bayko Light Constructional Sets, the words "Bayko Light" being a pun on the word "Bakelite." During World War II, Bakelite was used in a variety of wartime equipment including pilots' goggles and field telephones. It was also used for patriotic wartime jewelry. In 1943, the thermosetting phenolic resin was even considered for the manufacture of coins, due to a shortage of traditional material. Bakelite and other non-metal materials were tested for usage for the one cent coin in the US before the Mint settled on zinc-coated steel. During World War II, Bakelite buttons were part of British uniforms. These included brown buttons for the Army and black buttons for the RAF. In 1947, Dutch art forger Han van Meegeren was convicted of forgery, after chemist and curator Paul B. Coremans proved that a purported Vermeer contained Bakelite, which van Meegeren had used as a paint hardener. Bakelite was sometimes used in the pistol grip, hand guard, and butt stock of firearms. The AKM and some early AK-74 rifles are frequently mistakenly identified as using Bakelite, but most were made with AG-4S. By the late 1940s, newer materials were superseding Bakelite in many areas. Phenolics are less frequently used in general consumer products today due to their cost and complexity of production and their brittle nature. They still appear in some applications where their specific properties are required, such as small precision-shaped components, molded disc brake cylinders, saucepan handles, electrical plugs, switches and parts for electrical irons, printed circuit boards, as well as in the area of inexpensive board and tabletop games produced in China, Hong Kong and India. Items such as billiard balls, dominoes and pieces for board games such as chess, checkers, and backgammon are constructed of Bakelite for its look, durability, fine polish, weight, and sound. Common dice are sometimes made of Bakelite for weight and sound, but the majority are made of a thermoplastic polymer such as acrylonitrile butadiene styrene (ABS). Applications and uses: Bakelite continues to be used for wire insulation, brake pads and related automotive components, and industrial electrical-related applications. Bakelite stock is still manufactured and produced in sheet, rod and tube form for industrial applications in the electronics, power generation and aerospace industries, and under a variety of commercial brand names. Phenolic resins have been commonly used in ablative heat shields. Soviet heatshields for ICBM warheads and spacecraft reentry consisted of asbestos textolite, impregnated with Bakelite. Bakelite is also used in the mounting of metal samples in metallography.
Collectible status: Bakelite items, particularly jewelry and radios, have become popular collectibles. The term Bakelite is sometimes used in the resale market to indicate various types of early plastics, including Catalin and Faturan, which may be brightly colored, as well as items made of Bakelite material. Patents: The United States Patent and Trademark Office granted Baekeland a patent for a "Method of making insoluble products of phenol and formaldehyde" on December 7, 1909. Producing hard, compact, insoluble and infusible condensation products of phenols and formaldehyde marked the beginning of the modern plastics industry. Similar plastics: Catalin is also a phenolic resin, similar to Bakelite, but contained different mineral fillers that allowed the production of light colors. Condensites are similar thermoset materials having much the same properties, characteristics, and uses. Crystalate is an early plastic. Faturan is phenolic resin, also similar to Bakelite, that turns red over time, regardless of its original color. Galalith is an early plastic derived from milk products. Micarta is an early composite insulating plate that used Bakelite as a binding agent. It was developed in 1910 by Westinghouse Elec. & Mfg Co. Novotext is a brand name for cotton textile-phenolic resin.
**Neutral red** Neutral red: Neutral red (toluylene red, Basic Red 5, or C.I. 50040) is a eurhodin dye used for staining in histology. It stains lysosomes red. It is used as a general stain in histology, as a counterstain in combination with other dyes, and for many staining methods. Together with Janus Green B, it is used to stain embryonal tissues and supravital staining of blood. Can be used for staining Golgi apparatus in cells and Nissl granules in neurons. Neutral red: In microbiology, it is used in the MacConkey agar to differentiate bacteria for lactose fermentation. Neutral red: Neutral red can be used as a vital stain. The Neutral Red Cytotoxicity Assay was first developed by Ellen Borenfreund in 1984. In the Neutral Red Assay live cells incorporate neutral red into their lysosomes. As cells begin to die, their ability to incorporate neutral red diminishes. Thus, loss of neutral red uptake corresponds to loss of cell viability. The neutral red is also used to stain cell cultures for plate titration of viruses. Neutral red: Neutral red is added to some growth media for bacterial and cell cultures. It usually is available as a chloride salt. Neutral red acts as a pH indicator, changing from red to yellow between pH 6.8 and 8.0. Other references: Borenfreund, Ellen; Puerner, James A. (1985). "Toxicity determined in vitro by morphological alterations and neutral red absorption". Toxicology Letters. 24 (2–3): 119–124. doi:10.1016/0378-4274(85)90046-3. PMID 3983963. Borenfreund, E.; Babich, H.; Martin-Alguacil, N. (1988). "Comparisons of two in vitro cytotoxicity assays—The neutral red (NR) and tetrazolium MTT tests". Toxicology in Vitro. 2 (1): 1–6. doi:10.1016/0887-2333(88)90030-6. PMID 20702351.
**Napolitains** Napolitains: Neapolitans (also Napolitains or Naps) are individually wrapped square or rectangular pieces of chocolate in assorted flavours. They are often served by hotels and coffee shops (often with a cup of coffee) and when used for promotional purposes may feature packaging with personalised branding. Neapolitans are about 3 centimeters (1.2 in) by 2 centimeters (0.79 in) in size, weigh about 5 grams (0.18 oz), and are individually wrapped. They may be of any type of chocolate. Terry's of York, England, first mass-produced neapolitans in 1899. They have since been produced in many flavours by many confectionery companies. Napolitains: The name "neapolitan" originates from a gift that was made by Louis XVIII in 1819 to Marie-Caroline of Bourbon, a princess from Naples. Each rectangle of chocolate was wrapped individually and featured a view of Naples.
**EPH receptor A4** EPH receptor A4: EPH receptor A4 (ephrin type-A receptor 4) is a protein that in humans is encoded by the EPHA4 gene. This gene belongs to the ephrin receptor subfamily of the protein-tyrosine kinase family. EPH and EPH-related receptors have been implicated in mediating developmental events, particularly in the nervous system. Receptors in the EPH subfamily typically have a single kinase domain and an extracellular region containing a Cys-rich domain and 2 fibronectin type III repeats. The ephrin receptors are divided into 2 groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands. In 2012, a publication in Nature Medicine revealed a connection between EPHA4 and the neurodegenerative disease amyotrophic lateral sclerosis (ALS), in which a defective copy of the gene allows ALS patients to live considerably longer than patients with an intact gene. This opens up possibilities for developing treatments for this currently untreatable disease.
**Paradeigma** Paradeigma: Paradeigma (Greek: παραδειγμα) is a Greek term for a pattern, example or sample; the plural reads Paradeigmata. Its closest translation is "an isolated example by which a general rule illustrated". Limited to rhetoric, a paradeigma is used to compare the situation of the audience to a similar past event, like a parable (Greek: παραβολή). It offers counsel on how the audience should act. In the Greek tradition many paradeigmata are mythological examples, often in reference to a popular legend or well-known character in a similar position to the audience. In literature: Aristotle was a prominent ancient rhetorician who explicitly discussed the use of paradeigmata. In literature: Homer's The Iliad (24.601–619) – Achilles is trying to encourage Priam to eat rather than continue to weep for his dead son Hector. He brings up Niobe, a woman that had lost twelve children but still found the strength to eat. He is trying to counsel Priam to do what he should by using Niobe as a paradeigma, an example to guide behaviour. In literature: Jesus' parables in the New Testament of the Bible – In Luke 7:41–47 Jesus uses the following paradeigmata to explain how much a man loves in response to how much he is forgiven. (Jesus is alluding to the magnitude of his coming sacrifice on the cross for all of mankind’s sin.) 41 "Two men owed money to a certain moneylender. One owed him five hundred denarii, and the other fifty. Neither of them had the money to pay him 42 back, so he cancelled the debts of both. Now which of them will love him more?" Simon replied, "I suppose the one who had the bigger debt cancelled." 43 "You have judged correctly," Jesus said. Then he turned toward the woman and said to Simon, "Do you see this 44 woman? I came into your house. You did not give me any water for my feet, but she wet my feet with her tears and wiped them with her hair. You did not give 45 me a kiss, but this woman, from the time I entered, has not stopped kissing my feet. You did not put oil on my head, but she has poured perfume on my feet. In literature: 46 Therefore, I tell you, her many sins have been forgiven—for she loved much. 47 But he who has been forgiven little, loves little."
**PsycLIT** PsycLIT: PsycLIT was a CD-ROM version of Psychological Abstracts. It was merged into the PsycINFO online database in 2000. PsycLIT contained citations and abstracts to journal articles, and summaries of English-language chapters and books in psychology, as well as behavioral information from sociology, linguistics, medicine, law, psychiatry, and anthropology. It was one of a number of databases indexing psychological research papers and journals. Others included PsycINFO, Psychological Abstracts, Ulrich International Periodical Directory, PUBLIST (The Internet Directory Publications), ISSN International, PSICODOC, the ISOC database PSEDISOC, CSIC-RISO, CIRBIC-REVISTAS, COMPLUDOC Social Sciences Citation Index and the Institute for Scientific Information (Thomson-ISI).
**EEF1B2P1** EEF1B2P1: Eukaryotic translation elongation factor 1 beta 2 pseudogene 1 (eEF1B1) is a protein that in humans is encoded by the EEF1B2P1 gene.
**Arbitrary-precision arithmetic** Arbitrary-precision arithmetic: In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision. Arbitrary-precision arithmetic: Several modern programming languages have built-in support for bignums, and others have libraries available for arbitrary-precision integer and floating-point math. Rather than storing values as a fixed number of bits related to the size of the processor register, these implementations typically use variable-length arrays of digits. Arbitrary precision is used in applications where the speed of arithmetic is not a limiting factor, or where precise results with very large numbers are required. It should not be confused with the symbolic computation provided by many computer algebra systems, which represent numbers by expressions such as π·sin(2), and can thus represent any computable number with infinite precision. Applications: A common application is public-key cryptography, whose algorithms commonly employ arithmetic with integers having hundreds of digits. Another is in situations where artificial limits and overflows would be inappropriate. It is also useful for checking the results of fixed-precision calculations, and for determining optimal or near-optimal values for coefficients needed in formulae, for example the √(1/3) that appears in Gaussian integration. Arbitrary-precision arithmetic is also used to compute fundamental mathematical constants such as π to millions or more digits and to analyze the properties of the digit strings or more generally to investigate the precise behaviour of functions such as the Riemann zeta function where certain questions are difficult to explore via analytical methods. Another example is in rendering fractal images with an extremely high magnification, such as those found in the Mandelbrot set. Applications: Arbitrary-precision arithmetic can also be used to avoid overflow, which is an inherent limitation of fixed-precision arithmetic. Similar to a 5-digit odometer's display which changes from 99999 to 00000, a fixed-precision integer may exhibit wraparound if numbers grow too large to represent at the fixed level of precision. Some processors can instead deal with overflow by saturation, which means that if a result would be unrepresentable, it is replaced with the nearest representable value. (With 16-bit unsigned saturation, adding any positive amount to 65535 would yield 65535.) Some processors can generate an exception if an arithmetic result exceeds the available precision. Where necessary, the exception can be caught and recovered from—for instance, the operation could be restarted in software using arbitrary-precision arithmetic. Applications: In many cases, the task or the programmer can guarantee that the integer values in a specific application will not grow large enough to cause an overflow. Such guarantees may be based on pragmatic limits: a school attendance program may have a task limit of 4,000 students. A programmer may design the computation so that intermediate results stay within specified precision boundaries.
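As a small illustration of the wraparound and saturation behaviours described above, the sketch below uses Python, whose built-in integers are already arbitrary precision; the 16-bit mask merely emulates a fixed-width register.

```python
MASK16 = 0xFFFF  # emulate a 16-bit unsigned register

def add_u16_wrap(a, b):
    """Fixed-precision addition with wraparound (like the odometer example)."""
    return (a + b) & MASK16

def add_u16_saturate(a, b):
    """Fixed-precision addition with saturation: clamp at the maximum value."""
    return min(a + b, MASK16)

if __name__ == "__main__":
    print(add_u16_wrap(65535, 1))      # 0      -- wraps around
    print(add_u16_saturate(65535, 1))  # 65535  -- sticks at the maximum
    print(65535 + 1)                   # 65536  -- Python's bignum simply grows
```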
Applications: Some programming languages such as Lisp, Python, Perl, Haskell, Ruby and Raku use, or have an option to use, arbitrary-precision numbers for all integer arithmetic. Although this reduces performance, it eliminates the possibility of incorrect results (or exceptions) due to simple overflow. It also makes it possible to guarantee that arithmetic results will be the same on all machines, regardless of any particular machine's word size. The exclusive use of arbitrary-precision numbers in a programming language also simplifies the language, because a number is a number and there is no need for multiple types to represent different levels of precision. Implementation issues: Arbitrary-precision arithmetic is considerably slower than arithmetic using numbers that fit entirely within processor registers, since the latter are usually implemented in hardware arithmetic whereas the former must be implemented in software. Even if the computer lacks hardware for certain operations (such as integer division, or all floating-point operations) and software is provided instead, it will use number sizes closely related to the available hardware registers: one or two words only and definitely not N words. There are exceptions, as certain variable word length machines of the 1950s and 1960s, notably the IBM 1620, IBM 1401 and the Honeywell Liberator series, could manipulate numbers bound only by available storage, with an extra bit that delimited the value. Implementation issues: Numbers can be stored in a fixed-point format, or in a floating-point format as a significand multiplied by an arbitrary exponent. However, since division almost immediately introduces infinitely repeating sequences of digits (such as 4/7 in decimal, or 1/10 in binary), should this possibility arise then either the representation would be truncated at some satisfactory size or else rational numbers would be used: a large integer for the numerator and for the denominator. But even with the greatest common divisor divided out, arithmetic with rational numbers can become unwieldy very quickly: 1/99 − 1/100 = 1/9900, and if 1/101 is then added, the result is 10001/999900. Implementation issues: The size of arbitrary-precision numbers is limited in practice by the total storage available, and computation time. Numerous algorithms have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision. In particular, supposing that N digits are employed, algorithms have been designed to minimize the asymptotic complexity for large N. The simplest algorithms are for addition and subtraction, where one simply adds or subtracts the digits in sequence, carrying as necessary, which yields an O(N) algorithm (see big O notation). Comparison is also very simple. Compare the high-order digits (or machine words) until a difference is found. Comparing the rest of the digits/words is not necessary. The worst case is Θ (N), but usually it will go much faster. Implementation issues: For multiplication, the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require Θ (N2) operations, but multiplication algorithms that achieve O(N log(N) log(log(N))) complexity have been devised, such as the Schönhage–Strassen algorithm, based on fast Fourier transforms, and there are also algorithms with slightly worse complexity but with sometimes superior real-world performance for smaller N. The Karatsuba multiplication is such an algorithm. 
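Before moving on to division, the add-with-carry loop described above can be sketched directly on little-endian digit arrays; this is a minimal illustration of the O(N) method, not how production libraries lay out their digits.

```python
def add_digits(a, b, base=10):
    """Add two magnitudes stored as little-endian digit lists in the given base."""
    result = []
    carry = 0
    for i in range(max(len(a), len(b))):
        d = carry
        if i < len(a):
            d += a[i]
        if i < len(b):
            d += b[i]
        result.append(d % base)   # keep the low-order digit
        carry = d // base         # propagate the carry to the next position
    if carry:
        result.append(carry)
    return result

if __name__ == "__main__":
    # 958 + 47 = 1005, stored least-significant digit first
    print(add_digits([8, 5, 9], [7, 4]))  # [5, 0, 0, 1]
```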
Implementation issues: For division, see division algorithm. For a list of algorithms along with complexity estimates, see computational complexity of mathematical operations. For examples in x86 assembly, see external links. Pre-set precision: In some languages such as REXX, the precision of all calculations must be set before doing a calculation. Other languages, such as Python and Ruby, extend the precision automatically to prevent overflow. Example: The calculation of factorials can easily produce very large numbers. This is not a problem for their usage in many formulas (such as Taylor series) because they appear along with other terms, so that—given careful attention to the order of evaluation—intermediate calculation values are not troublesome. If approximate values of factorial numbers are desired, Stirling's approximation gives good results using floating-point arithmetic. The largest representable value for a fixed-size integer variable may be exceeded even for relatively small arguments as shown in the table below. Even floating-point numbers are soon outranged, so it may help to recast the calculations in terms of the logarithm of the number. Example: But if exact values for large factorials are desired, then special software is required, as in the pseudocode that follows, which implements the classic algorithm to calculate 1, 1×2, 1×2×3, 1×2×3×4, etc., the successive factorial numbers.
constants:
    Limit = 1000          % Sufficient digits.
    Base = 10             % The base of the simulated arithmetic.
    FactorialLimit = 365  % Target number to solve, 365!
    tdigit: Array[0:9] of character = ["0","1","2","3","4","5","6","7","8","9"]
variables:
    digit: Array[1:Limit] of 0..9         % The big number.
    carry, d: Integer                     % Assistants during multiplication.
    last: Integer                         % Index into the big number's digits.
    text: Array[1:Limit] of character     % Scratchpad for the output.

digit[*] := 0                             % Clear the whole array.
last := 1                                 % The big number starts as a single-digit,
digit[1] := 1                             % its only digit is 1.
for n := 1 to FactorialLimit:             % Step through producing 1!, 2!, 3!, 4!, etc.
    carry := 0                            % Start a multiply by n.
    for i := 1 to last:                   % Step along every digit.
        d := digit[i] * n + carry         % Multiply a single digit.
        digit[i] := d mod Base            % Keep the low-order digit of the result.
        carry := d div Base               % Carry over to the next digit.
    while carry > 0:                      % Store the remaining carry in the big number.
        if last >= Limit: error("overflow")
        last := last + 1                  % One more digit.
        digit[last] := carry mod Base
        carry := carry div Base           % Strip the last digit off the carry.
    text[*] := " "                        % Now prepare the output.
    for i := 1 to last:                   % Translate from binary to text.
        text[Limit - i + 1] := tdigit[digit[i]]  % Reversing the order.
    print text[Limit - last + 1:Limit], " = ", n, "!"
Example: With the example in view, a number of details can be discussed. The most important is the choice of the representation of the big number. In this case, only integer values are required for digits, so an array of fixed-width integers is adequate. It is convenient to have successive elements of the array represent higher powers of the base. Example: The second most important decision is in the choice of the base of arithmetic, here ten. There are many considerations. The scratchpad variable d must be able to hold the result of a single-digit multiply plus the carry from the prior digit's multiply. In base ten, a sixteen-bit integer is certainly adequate as it allows up to 32767. However, this example cheats, in that the value of n is not itself limited to a single digit.
This has the consequence that the method will fail for n > 3200 or so. In a more general implementation, n would also use a multi-digit representation. A second consequence of the shortcut is that after the multi-digit multiply has been completed, the last value of carry may need to be carried into multiple higher-order digits, not just one. Example: There is also the issue of printing the result in base ten, for human consideration. Because the base is already ten, the result could be shown simply by printing the successive digits of array digit, but they would appear with the highest-order digit last (so that 123 would appear as "321"). The whole array could be printed in reverse order, but that would present the number with leading zeroes ("00000...000123") which may not be appreciated, so this implementation builds the representation in a space-padded text variable and then prints that. The first few results (with spacing every fifth digit and annotation added here) are: This implementation could make more effective use of the computer's built-in arithmetic. A simple escalation would be to use base 100 (with corresponding changes to the translation process for output), or, with sufficiently wide computer variables (such as 32-bit integers) we could use larger bases, such as 10,000. Working in a power-of-2 base closer to the computer's built-in integer operations offers advantages, although conversion to a decimal base for output becomes more difficult. On typical modern computers, additions and multiplications take constant time independent of the values of the operands (so long as the operands fit in single machine words), so there are large gains in packing as much of a bignumber as possible into each element of the digit array. The computer may also offer facilities for splitting a product into a digit and carry without requiring the two operations of mod and div as in the example, and nearly all arithmetic units provide a carry flag which can be exploited in multiple-precision addition and subtraction. This sort of detail is the grist of machine-code programmers, and a suitable assembly-language bignumber routine can run faster than the result of the compilation of a high-level language, which does not provide direct access to such facilities but instead maps the high-level statements to its model of the target machine using an optimizing compiler. Example: For a single-digit multiply the working variables must be able to hold the value (base−1)² + carry, where the maximum value of the carry is (base−1). Similarly, the variables used to index the digit array are themselves limited in width. A simple way to extend the indices would be to deal with the bignumber's digits in blocks of some convenient size so that the addressing would be via (block i, digit j) where i and j would be small integers, or, one could escalate to employing bignumber techniques for the indexing variables. Ultimately, machine storage capacity and execution time impose limits on the problem size. History: IBM's first business computer, the IBM 702 (a vacuum-tube machine) of the mid-1950s, implemented integer arithmetic entirely in hardware on digit strings of any length from 1 to 511 digits. The earliest widespread software implementation of arbitrary-precision arithmetic was probably that in Maclisp. Later, around 1980, the operating systems VAX/VMS and VM/CMS offered bignum facilities as a collection of string functions in the one case and in the languages EXEC 2 and REXX in the other.
History: An early widespread implementation was available via the IBM 1620 of 1959–1970. The 1620 was a decimal-digit machine which used discrete transistors, yet it had hardware (that used lookup tables) to perform integer arithmetic on digit strings of a length that could be from two to whatever memory was available. For floating-point arithmetic, the mantissa was restricted to a hundred digits or fewer, and the exponent was restricted to two digits only. The largest memory supplied offered 60,000 digits; however, Fortran compilers for the 1620 settled on fixed sizes such as 10, though the size could be specified on a control card if the default was not satisfactory. Software libraries: Arbitrary-precision arithmetic in most computer software is implemented by calling an external library that provides data types and subroutines to store numbers with the requested precision and to perform computations. Software libraries: Different libraries have different ways of representing arbitrary-precision numbers: some libraries work only with integers, while others store floating-point numbers in a variety of bases (decimal or binary powers). Rather than representing a number as a single value, some store numbers as a numerator/denominator pair (rationals) and some can fully represent computable numbers, though only up to some storage limit. Fundamentally, Turing machines cannot represent all real numbers, as the cardinality of R exceeds the cardinality of Z.
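As a rough illustration of the factorial example above and of the suggested escalation to a larger base, the following Python sketch redoes the classic loop with base-10,000 "limbs" (four decimal digits per array element). It is only a sketch: the names are illustrative, and Python's own arbitrary-precision int (used here just for a cross-check) would normally make such hand-rolled code unnecessary.

```python
import math

# Same schoolbook algorithm as the pseudocode, but with base-10,000 "limbs":
# each list element packs four decimal digits, so far fewer inner-loop steps
# are needed than with one decimal digit per element.
BASE = 10_000
TARGET = 365                      # compute 365!, as in the example above

limbs = [1]                       # least-significant limb first
for n in range(1, TARGET + 1):
    carry = 0
    for i in range(len(limbs)):   # multiply every limb by n
        d = limbs[i] * n + carry
        limbs[i] = d % BASE       # keep the low-order limb of the product
        carry = d // BASE         # carry the rest to the next limb
    while carry > 0:              # append any remaining carry limbs
        limbs.append(carry % BASE)
        carry //= BASE

# Most-significant limb first; inner limbs are zero-padded to four digits.
text = str(limbs[-1]) + "".join(f"{limb:04d}" for limb in reversed(limbs[:-1]))
print(f"{text} = {TARGET}!")
assert int(text) == math.factorial(TARGET)   # cross-check against built-in bignums
```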
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flood forecasting** Flood forecasting: Flood forecasting is the process of predicting the occurrence, magnitude, timing, and duration of floods in a specific area, often by analysing various hydrological, meteorological, and environmental factors. The primary goal of flood forecasting is to deliver timely and accurate information to decision-makers, empowering them to take appropriate actions to mitigate the potential consequences of flooding on human lives, property, and the environment. By accounting for the various dimensions of a flood event, such as occurrence, magnitude, duration, and spatial extent, flood forecasting models can offer a more holistic and detailed representation of the impending risks and facilitate more effective response strategies. Flood forecasting is a multifaceted discipline that aims to predict various aspects of flood events, including their occurrence, magnitude, timing, duration, and spatial extent. However, the scope and definition of flood forecasting can differ across scientific publications and methodologies. In some cases, flood forecasting is focused on estimating the moment when a specific threshold in a river system is exceeded, while in other cases, it involves predicting the flood extent and employing hydrodynamic information from models. When flood forecasting is limited to estimating the moment a threshold is exceeded, researchers often concentrate on predicting water levels or river discharge in a particular location. This approach provides valuable information about the potential onset of a flood event, enabling decision-makers to initiate preventive measures and minimize potential damages. In this context, flood forecasting models are designed to predict when the water level or discharge will surpass a predefined threshold, usually based on historical data and established risk levels. Flood forecasting: On the other hand, more comprehensive flood forecasting methods involve predicting the flood extent by utilizing hydrodynamic information from models. These approaches not only consider the exceedance of a threshold but also aim to estimate the spatial distribution, timing and extent of the flooding. Hydrodynamic models, such as the Hydrologic Engineering Center's River Analysis System (HEC-RAS) or the MIKE suite of models, simulate water flow and its interaction with the surrounding environment, providing detailed predictions of flood extent, depth, and velocity. Flood forecasting: Incorporating hydrodynamic information into flood forecasting models allows for a more complete understanding of the potential impacts of flood events, accounting for factors such as the inundation of infrastructure, agricultural lands, and residential areas. By considering the spatial distribution of flooding, these models enable more effective flood management and response strategies, ensuring that resources are allocated appropriately and that vulnerable populations are adequately protected. Flood forecasting can be done using various methodologies, which can be broadly categorized into physically-based models, data-driven models, or a combination of both. The choice of the most suitable approach depends on factors such as data availability, catchment characteristics, and desired prediction accuracy. Here is an overview of each approach: Physically-based models simulate the underlying physical processes involved in flood generation and propagation, such as precipitation, infiltration, runoff, and routing. 
These models are typically more stable and reliable due to their inherent representation of the physics, making them less susceptible to forecast errors in comparison to data-driven models, especially in the absence of inputs like rainfall. However, physically-based models are state-dependent and require accurate initial conditions for optimal performance. During the so-called "warming period" of the model, the performance might be lower due to the reliance on initial conditions. Data-driven models focus on discovering patterns and relationships within historical data without explicitly representing the physical processes. They can learn complex, non-linear relationships and adapt to changing conditions, making them useful in situations where data is abundant and accurate representation of physical processes is challenging. Examples of data-driven models include regression techniques, Artificial Neural Networks (ANN), Support Vector Machines (SVM), and tree-based algorithms like Random Forest or XGBoost. Hybrid models combine the strengths of physically-based and data-driven models to enhance flood forecasting accuracy and reliability. Hybrid models can utilize the physical understanding from physically-based models while benefiting from the adaptive learning capabilities of data-driven models. An example of a hybrid model is coupling a hydrological model with a machine learning algorithm to improve flood prediction accuracy. Flood forecasting can be mathematically represented as F(t) = f(P_t, X_t, H_t, C_t), where F(t) is the flood forecast at time t, P_t represents the precipitation input at time t, X_t denotes a vector of proxy variables (e.g., soil moisture, land use, topography) at time t, H_t is the historical data up to time t, C_t represents the initial conditions and catchment characteristics, and f is the flood forecasting model, which can be a physically-based model or a data-driven model depending on the approach chosen. In many operational systems forecasted precipitation is fed into rainfall-runoff and streamflow routing models to forecast flow rates and water levels for periods ranging from a few hours to days ahead, depending on the size of the watershed or river basin. Flood forecasting can also make use of forecasts of precipitation in an attempt to extend the lead-time available. Flood forecasting is an important component of flood warning, where the distinction between the two is that the outcome of flood forecasting is a set of forecast time-profiles of channel flows or river levels at various locations, while "flood warning" is the task of making use of these forecasts to make decisions about issuing warnings of floods. Flood forecasting: Real-time flood forecasting at regional scale can be done within seconds by using artificial neural networks. Effective real-time flood forecasting models could be useful for early warning and disaster prevention.
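As a purely illustrative sketch of the data-driven form of f in F(t) = f(P_t, X_t, H_t, C_t), the snippet below trains a tree-based regressor (one of the model families named above) on a hypothetical tabular record of precipitation, a soil-moisture proxy, and lagged discharge. The file name, column names, and threshold are assumptions for the sketch, not part of any specific operational system.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical catchment record with time, precipitation, a soil-moisture proxy,
# and observed discharge; any tabular hydrological dataset with these roles would do.
df = pd.read_csv("catchment_record.csv", parse_dates=["time"])
df["discharge_lag1"] = df["discharge"].shift(1)   # H_t: recent observed history
df["discharge_lag2"] = df["discharge"].shift(2)
df = df.dropna()

features = ["precip", "soil_moisture", "discharge_lag1", "discharge_lag2"]
X, y = df[features], df["discharge"]

split = int(len(df) * 0.8)                        # train on the earlier period only
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])

forecast = model.predict(X.iloc[split:])
flood_threshold = 250.0                           # hypothetical threshold (m^3/s)
exceed = forecast > flood_threshold               # threshold-exceedance style warning
print(f"{exceed.sum()} of {len(exceed)} forecast steps exceed the threshold")
```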
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radical 85** Radical 85: Radical 85 or radical water (水部) meaning "water" is a Kangxi radical; one of 35 of the 214 that are composed of 4 strokes. Its left-hand form, 氵, is closely related to Radical 15, 冫 bīng (also known as 两点水 liǎngdiǎnshuǐ), meaning "ice", from which it differs by the addition of just one stroke. In the Kangxi Dictionary, there are 1,595 characters (out of 40,000) to be found under this radical. 水 is also the 77th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China, with 氵 and 氺 being its associated indexing component. In the Chinese wuxing ("Five Phases"), 水 represents the element Water. In Taoist cosmology, 水 (Water) is the nature component of the bagua diagram 坎 kǎn.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**All-pass filter** All-pass filter: An all-pass filter is a signal processing filter that passes all frequencies equally in gain, but changes the phase relationship among various frequencies. Most types of filter reduce the amplitude (i.e. the magnitude) of the signal applied to them for some values of frequency, whereas the all-pass filter allows all frequencies through without changes in level. Common applications: A common application in electronic music production is in the design of an effects unit known as a "phaser", where a number of all-pass filters are connected in sequence and the output mixed with the raw signal. It does this by varying its phase shift as a function of frequency. Generally, the filter is described by the frequency at which the phase shift crosses 90° (i.e., when the input and output signals go into quadrature – when there is a quarter wavelength of delay between them). They are generally used to compensate for other undesired phase shifts that arise in the system, or for mixing with an unshifted version of the original to implement a notch comb filter. They may also be used to convert a mixed phase filter into a minimum phase filter with an equivalent magnitude response or an unstable filter into a stable filter with an equivalent magnitude response. Active analog implementation: Implementation using low-pass filter. The operational amplifier circuit shown in the adjacent figure implements a single-pole active all-pass filter that features a low-pass filter at the non-inverting input of the opamp. The filter's transfer function is given by: H(s) = −(s − 1/RC) / (s + 1/RC) = (1 − sRC) / (1 + sRC), which has one pole at −1/RC and one zero at 1/RC (i.e., they are reflections of each other across the imaginary axis of the complex plane). The magnitude and phase of H(iω) for some angular frequency ω are 1 and −2 arctan(ωRC), respectively. Active analog implementation: The filter has unity-gain magnitude for all ω. The filter introduces a different delay at each frequency and reaches input-to-output quadrature at ω = 1/RC (i.e., phase shift is 90°). This implementation uses a low-pass filter at the non-inverting input to generate the phase shift and negative feedback. At high frequencies, the capacitor is a short circuit, creating an inverting amplifier (i.e., 180° phase shift) with unity gain. At low frequencies and DC, the capacitor is an open circuit, creating a unity-gain voltage buffer (i.e., no phase shift). Active analog implementation: At the corner frequency ω = 1/RC of the low-pass filter (i.e., when input frequency is 1/(2πRC)), the circuit introduces a 90° shift (i.e., output is in quadrature with input; the output appears to be delayed by a quarter period from the input). In fact, the phase shift of the all-pass filter is double the phase shift of the low-pass filter at its non-inverting input. Active analog implementation: Interpretation as a Padé approximation to a pure delay. The Laplace transform of a pure delay is given by e^(−sT), where T is the delay (in seconds) and s ∈ C is complex frequency. This can be approximated using a Padé approximant, as follows: e^(−sT) = e^(−sT/2) / e^(sT/2) ≈ (1 − sT/2) / (1 + sT/2), where the last step was achieved via a first-order Taylor series expansion of the numerator and denominator. By setting RC = T/2 we recover H(s) from above. Active analog implementation: Implementation using high-pass filter. The operational amplifier circuit shown in the adjacent figure implements a single-pole active all-pass filter that features a high-pass filter at the non-inverting input of the opamp.
The filter's transfer function is given by: H(s) = (s − 1/RC) / (s + 1/RC), which has one pole at −1/RC and one zero at 1/RC (i.e., they are reflections of each other across the imaginary axis of the complex plane). The magnitude and phase of H(iω) for some angular frequency ω are 1 and 180° − 2 arctan(ωRC), respectively. Active analog implementation: The filter has unity-gain magnitude for all ω. The filter introduces a different delay at each frequency and reaches input-to-output quadrature at ω = 1/RC (i.e., phase lead is 90°). This implementation uses a high-pass filter at the non-inverting input to generate the phase shift and negative feedback. At high frequencies, the capacitor is a short circuit, thereby creating a unity-gain voltage buffer (i.e., no phase lead). At low frequencies and DC, the capacitor is an open circuit and the circuit is an inverting amplifier (i.e., 180° phase lead) with unity gain. Active analog implementation: At the corner frequency ω = 1/RC of the high-pass filter (i.e., when input frequency is 1/(2πRC)), the circuit introduces a 90° phase lead (i.e., output is in quadrature with input; the output appears to be advanced by a quarter period from the input). In fact, the phase shift of the all-pass filter is double the phase shift of the high-pass filter at its non-inverting input. Active analog implementation: Voltage-controlled implementation. The resistor can be replaced with a FET in its ohmic mode to implement a voltage-controlled phase shifter; the voltage on the gate adjusts the phase shift. In electronic music, a phaser typically consists of two, four or six of these phase-shifting sections connected in tandem and summed with the original. A low-frequency oscillator (LFO) ramps the control voltage to produce the characteristic swooshing sound. Passive analog implementation: The benefit of implementing all-pass filters with active components like operational amplifiers is that they do not require inductors, which are bulky and costly in integrated circuit designs. In other applications where inductors are readily available, all-pass filters can be implemented entirely without active components. There are a number of circuit topologies that can be used for this. The following are the most commonly used circuits. Passive analog implementation: Lattice filter. The lattice phase equaliser, or filter, is a filter composed of lattice, or X-sections. With single element branches it can produce a phase shift up to 180°, and with resonant branches it can produce phase shifts up to 360°. The filter is an example of a constant-resistance network (i.e., its image impedance is constant over all frequencies). Passive analog implementation: T-section filter. The phase equaliser based on T topology is the unbalanced equivalent of the lattice filter and has the same phase response. While the circuit diagram may look like a low-pass filter, it is different in that the two inductor branches are mutually coupled. This results in transformer action between the two inductors and an all-pass response even at high frequency. Passive analog implementation: Bridged T-section filter. The bridged T topology is used for delay equalisation, particularly the differential delay between two landlines being used for stereophonic sound broadcasts. This application requires that the filter has a linear phase response with frequency (i.e., constant group delay) over a wide bandwidth and is the reason for choosing this topology.
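Returning to the active single-pole stage described above, a short numerical check can be written directly from its transfer function. The component values here are arbitrary examples; the sketch simply confirms unity gain at all frequencies and a 90° phase shift at ω = 1/RC.

```python
import numpy as np

R, C = 10e3, 10e-9                     # arbitrary example component values
w = np.logspace(2, 6, 5)               # a few angular frequencies (rad/s)

# Single-pole all-pass built around a low-pass at the non-inverting input:
# H(s) = (1 - sRC) / (1 + sRC), evaluated on the imaginary axis s = j*w.
H = (1 - 1j * w * R * C) / (1 + 1j * w * R * C)
print(np.abs(H))                       # unity gain at every frequency
print(np.degrees(np.angle(H)))         # phase falls from ~0 toward -180 degrees

w0 = 1 / (R * C)                       # corner frequency of the internal low-pass
H0 = (1 - 1j * w0 * R * C) / (1 + 1j * w0 * R * C)
print(np.degrees(np.angle(H0)))        # -90: output in quadrature with the input
```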
Digital implementation: A Z-transform implementation of an all-pass filter with a complex pole at z0 is H(z) = (z^(−1) − z0*) / (1 − z0 z^(−1)), which has a zero at 1/z0*, where the asterisk denotes the complex conjugate. The pole and zero sit at the same angle but have reciprocal magnitudes (i.e., they are reflections of each other across the boundary of the complex unit circle). The placement of this pole-zero pair for a given z0 can be rotated in the complex plane by any angle and retain its all-pass magnitude characteristic. Complex pole-zero pairs in all-pass filters help control the frequency where phase shifts occur. Digital implementation: To create an all-pass implementation with real coefficients, the complex all-pass filter can be cascaded with an all-pass that substitutes z0* for z0, leading to the Z-transform implementation H(z) = [(z^(−1) − z0*) / (1 − z0 z^(−1))] × [(z^(−1) − z0) / (1 − z0* z^(−1))] = (z^(−2) − 2ℜ(z0) z^(−1) + |z0|²) / (1 − 2ℜ(z0) z^(−1) + |z0|² z^(−2)), which is equivalent to the difference equation y[k] − 2ℜ(z0) y[k−1] + |z0|² y[k−2] = x[k−2] − 2ℜ(z0) x[k−1] + |z0|² x[k], where y[k] is the output and x[k] is the input at discrete time step k. Filters such as the above can be cascaded with unstable or mixed-phase filters to create a stable or minimum-phase filter without changing the magnitude response of the system. For example, by proper choice of z0, a pole of an unstable system that is outside of the unit circle can be canceled and reflected inside the unit circle.
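The real-coefficient difference equation above translates directly into code. The sketch below, with an arbitrarily chosen example z0 inside the unit circle, applies it sample by sample and numerically confirms the unit-magnitude response on the unit circle.

```python
import numpy as np

z0 = 0.7 * np.exp(1j * np.pi / 4)       # arbitrary example pole inside the unit circle
a1, a2 = -2 * z0.real, abs(z0) ** 2     # the shared real coefficients

def allpass(x):
    """y[k] + a1*y[k-1] + a2*y[k-2] = a2*x[k] + a1*x[k-1] + x[k-2], applied causally."""
    y = np.zeros(len(x))
    for k in range(len(x)):
        xk1 = x[k - 1] if k >= 1 else 0.0
        xk2 = x[k - 2] if k >= 2 else 0.0
        yk1 = y[k - 1] if k >= 1 else 0.0
        yk2 = y[k - 2] if k >= 2 else 0.0
        y[k] = a2 * x[k] + a1 * xk1 + xk2 - a1 * yk1 - a2 * yk2
    return y

# Impulse response decays because |z0| < 1 (the filter is stable).
h = allpass(np.r_[1.0, np.zeros(63)])
print(h[:4])

# Unit-magnitude check: evaluate H(z) on the unit circle z = exp(j*w).
w = np.linspace(0.1, np.pi - 0.1, 5)
z = np.exp(1j * w)
H = (z**-2 + a1 * z**-1 + a2) / (1 + a1 * z**-1 + a2 * z**-2)
print(np.abs(H))                        # ~1.0 at every frequency
```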
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Color vision** Color vision: Color vision, a feature of visual perception, is an ability to perceive differences between light composed of different frequencies independently of light intensity. Color perception is a part of the larger visual system and is mediated by a complex process between neurons that begins with differential stimulation of different types of photoreceptors by light entering the eye. Those photoreceptors then emit outputs that are propagated through many layers of neurons and then ultimately to the brain. Color vision is found in many animals and is mediated by similar underlying mechanisms with common types of biological molecules and a complex history of evolution in different animal taxa. In primates, color vision may have evolved under selective pressure for a variety of visual tasks including the foraging for nutritious young leaves, ripe fruit, and flowers, as well as detecting predator camouflage and emotional states in other primates. Wavelength: Isaac Newton discovered that white light after being split into its component colors when passed through a dispersive prism could be recombined to make white light by passing them through a different prism. The visible light spectrum ranges from about 380 to 740 nanometers. Spectral colors (colors that are produced by a narrow band of wavelengths) such as red, orange, yellow, green, cyan, blue, and violet can be found in this range. These spectral colors do not refer to a single wavelength, but rather to a set of wavelengths: red, 625–740 nm; orange, 590–625 nm; yellow, 565–590 nm; green, 500–565 nm; cyan, 485–500 nm; blue, 450–485 nm; violet, 380–450 nm. Wavelength: Wavelengths longer or shorter than this range are called infrared or ultraviolet, respectively. Humans cannot generally see these wavelengths, but other animals may. Wavelength: Hue detection Sufficient differences in wavelength cause a difference in the perceived hue; the just-noticeable difference in wavelength varies from about 1 nm in the blue-green and yellow wavelengths to 10 nm and more in the longer red and shorter blue wavelengths. Although the human eye can distinguish up to a few hundred hues, when those pure spectral colors are mixed together or diluted with white light, the number of distinguishable chromaticities can be much higher. Wavelength: In very low light levels, vision is scotopic: light is detected by rod cells of the retina. Rods are maximally sensitive to wavelengths near 500 nm and play little, if any, role in color vision. In brighter light, such as daylight, vision is photopic: light is detected by cone cells which are responsible for color vision. Cones are sensitive to a range of wavelengths, but are most sensitive to wavelengths near 555 nm. Between these regions, mesopic vision comes into play and both rods and cones provide signals to the retinal ganglion cells. The shift in color perception from dim light to daylight gives rise to differences known as the Purkinje effect. Wavelength: The perception of "white" is formed by the entire spectrum of visible light, or by mixing colors of just a few wavelengths in animals with few types of color receptors. In humans, white light can be perceived by combining wavelengths such as red, green, and blue, or just a pair of complementary colors such as blue and yellow. Non-spectral colors There are a variety of colors in addition to spectral colors and their hues. 
These include grayscale colors, shades of colors obtained by mixing grayscale colors with spectral colors, violet-red colors, impossible colors, and metallic colors. Grayscale colors include white, gray, and black. Rods contain rhodopsin, which reacts to light intensity, providing grayscale coloring. Shades include colors such as pink or brown. Pink is obtained from mixing red and white. Brown may be obtained from mixing orange with gray or black. Navy is obtained from mixing blue and black. Violet-red colors include hues and shades of magenta. The light spectrum is a line with violet at one end and red at the other, and yet we see hues of purple that connect those two colors. Impossible colors are a combination of cone responses that cannot be naturally produced. For example, medium cones cannot be activated completely on their own; if they were, we would see a 'hyper-green' color. Dimensionality: Color vision is categorized foremost according to the dimensionality of the color gamut, which is defined by the number of primaries required to represent the color vision. This is generally equal to the number of photopsins expressed: a correlation that holds for vertebrates but not invertebrates. The common vertebrate ancestor possessed four photopsins (expressed in cones) plus rhodopsin (expressed in rods), so was tetrachromatic. However, many vertebrate lineages have lost one or many photopsin genes, leading to lower-dimension color vision. The dimensions of color vision range from 1-dimensional and up:
Monochromacy - 1D color vision - lack of any color perception
Dichromacy - 2D color vision - dimensionality of most mammals and a quarter of color blind humans
Trichromacy - 3D color vision - dimensionality of most humans
Tetrachromacy - 4D color vision - dimensionality of most birds, reptiles and fish
Pentachromacy and higher - 5D+ color vision - rare in vertebrates
Physiology of color perception: Perception of color begins with specialized retinal cells known as cone cells. Cone cells contain different forms of opsin – a pigment protein – that have different spectral sensitivities. Humans contain three types, resulting in trichromatic color vision.
Physiology of color perception: The peak response of human cone cells varies, even among individuals with so-called normal color vision; in some non-human species this polymorphic variation is even greater, and it may well be adaptive. Physiology of color perception: Theories Two complementary theories of color vision are the trichromatic theory and the opponent process theory. The trichromatic theory, or Young–Helmholtz theory, proposed in the 19th century by Thomas Young and Hermann von Helmholtz, posits three types of cones preferentially sensitive to blue, green, and red, respectively. Others have suggested that the trichromatic theory is not specifically a theory of color vision but a theory of receptors for all vision, including color but not specific or limited to it. Equally, it has been suggested that the relationship between the phenomenal opponency described by Hering and the physiological opponent processes are not straightforward (see below), making of physiological opponency a mechanism that is relevant to the whole of vision, and not just to color vision alone. Ewald Hering proposed the opponent process theory in 1872. It states that the visual system interprets color in an antagonistic way: red vs. green, blue vs. yellow, black vs. white. Both theories are generally accepted as valid, describing different stages in visual physiology, visualized in the adjacent diagram.: 168 Green–magenta and blue—yellow are scales with mutually exclusive boundaries. In the same way that there cannot exist a "slightly negative" positive number, a single eye cannot perceive a bluish-yellow or a reddish-green. Although these two theories are both currently widely accepted theories, past and more recent work has led to criticism of the opponent process theory, stemming from a number of what are presented as discrepancies in the standard opponent process theory. For example, the phenomenon of an after-image of complementary color can be induced by fatiguing the cells responsible for color perception, by staring at a vibrant color for a length of time, and then looking at a white surface. This phenomenon of complementary colors demonstrates cyan, rather than green, to be the complement of red and magenta, rather than red, to be the complement of green, as well as demonstrating, as a consequence, that the reddish-green color proposed to be impossible by opponent process theory is, in fact, the color yellow. Although this phenomenon is more readily explained by the trichromatic theory, explanations for the discrepancy may include alterations to the opponent process theory, such as redefining the opponent colors as red vs. cyan, to reflect this effect. Despite such criticisms, both theories remain in use. Physiology of color perception: A recent demonstration, using the Color Mondrian, has shown that, just as the color of a surface that is part of a complex 'natural' scene is independent of the wavelength-energy composition of the light reflected from it alone but depends upon the composition of the light reflected from its surrounds as well, so the after image produced by looking at a given part of a complex scene is also independent of the wavelength energy-composition of the light reflected from it alone. Thus, while the color of the after-image produced by looking at a green surface that is reflecting more "green" (middle-wave) than "red" (long-wave) light is magenta, so is the after image of the same surface when it reflects more "red" than "green" light (when it is still perceived as green). 
This would seem to rule out an explanation of color opponency based on retinal cone adaptation. Physiology of color perception: Cone cells in the human eye A range of wavelengths of light stimulates each of these receptor types to varying degrees. The brain combines the information from each type of receptor to give rise to different perceptions of different wavelengths of light. Physiology of color perception: Cones and rods are not evenly distributed in the human eye. Cones have a high density at the fovea and a low density in the rest of the retina. Thus color information is mostly taken in at the fovea. Humans have poor color perception in their peripheral vision, and much of the color we see in our periphery may be filled in by what our brains expect to be there on the basis of context and memories. However, our accuracy of color perception in the periphery increases with the size of stimulus.The opsins (photopigments) present in the L and M cones are encoded on the X chromosome; defective encoding of these leads to the two most common forms of color blindness. The OPN1LW gene, which encodes the opsin present in the L cones, is highly polymorphic; one study found 85 variants in a sample of 236 men. A small percentage of women may have an extra type of color receptor because they have different alleles for the gene for the L opsin on each X chromosome. X chromosome inactivation means that while only one opsin is expressed in each cone cell, both types may occur overall, and some women may therefore show a degree of tetrachromatic color vision. Variations in OPN1MW, which encodes the opsin expressed in M cones, appear to be rare, and the observed variants have no effect on spectral sensitivity. Physiology of color perception: Color in the primate brain Color processing begins at a very early level in the visual system (even within the retina) through initial color opponent mechanisms. Both Helmholtz's trichromatic theory and Hering's opponent-process theory are therefore correct, but trichromacy arises at the level of the receptors, and opponent processes arise at the level of retinal ganglion cells and beyond. In Hering's theory, opponent mechanisms refer to the opposing color effect of red-green, blue-yellow, and light-dark. However, in the visual system, it is the activity of the different receptor types that are opposed. Some midget retinal ganglion cells oppose L and M cone activity, which corresponds loosely to red–green opponency, but actually runs along an axis from blue-green to magenta. Small bistratified retinal ganglion cells oppose input from the S cones to input from the L and M cones. This is often thought to correspond to blue–yellow opponency but actually runs along a color axis from yellow-green to violet. Physiology of color perception: Visual information is then sent to the brain from retinal ganglion cells via the optic nerve to the optic chiasma: a point where the two optic nerves meet and information from the temporal (contralateral) visual field crosses to the other side of the brain. After the optic chiasma, the visual tracts are referred to as the optic tracts, which enter the thalamus to synapse at the lateral geniculate nucleus (LGN). Physiology of color perception: The lateral geniculate nucleus is divided into laminae (zones), of which there are three types: the M-laminae, consisting primarily of M-cells, the P-laminae, consisting primarily of P-cells, and the koniocellular laminae. 
M- and P-cells receive relatively balanced input from both L- and M-cones throughout most of the retina, although this seems to not be the case at the fovea, with midget cells synapsing in the P-laminae. The koniocellular laminae receives axons from the small bistratified ganglion cells.After synapsing at the LGN, the visual tract continues on back to the primary visual cortex (V1) located at the back of the brain within the occipital lobe. Within V1 there is a distinct band (striation). This is also referred to as "striate cortex", with other cortical visual regions referred to collectively as "extrastriate cortex". It is at this stage that color processing becomes much more complicated. Physiology of color perception: In V1 the simple three-color segregation begins to break down. Many cells in V1 respond to some parts of the spectrum better than others, but this "color tuning" is often different depending on the adaptation state of the visual system. A given cell that might respond best to long-wavelength light if the light is relatively bright might then become responsive to all wavelengths if the stimulus is relatively dim. Because the color tuning of these cells is not stable, some believe that a different, relatively small, population of neurons in V1 is responsible for color vision. These specialized "color cells" often have receptive fields that can compute local cone ratios. Such "double-opponent" cells were initially described in the goldfish retina by Nigel Daw; their existence in primates was suggested by David H. Hubel and Torsten Wiesel, first demonstrated by C.R. Michael and subsequently confirmed by Bevil Conway. As Margaret Livingstone and David Hubel showed, double opponent cells are clustered within localized regions of V1 called blobs, and are thought to come in two flavors, red–green and blue-yellow. Red-green cells compare the relative amounts of red-green in one part of a scene with the amount of red-green in an adjacent part of the scene, responding best to local color contrast (red next to green). Modeling studies have shown that double-opponent cells are ideal candidates for the neural machinery of color constancy explained by Edwin H. Land in his retinex theory. Physiology of color perception: From the V1 blobs, color information is sent to cells in the second visual area, V2. The cells in V2 that are most strongly color tuned are clustered in the "thin stripes" that, like the blobs in V1, stain for the enzyme cytochrome oxidase (separating the thin stripes are interstripes and thick stripes, which seem to be concerned with other visual information like motion and high-resolution form). Neurons in V2 then synapse onto cells in the extended V4. This area includes not only V4, but two other areas in the posterior inferior temporal cortex, anterior to area V3, the dorsal posterior inferior temporal cortex, and posterior TEO. Area V4 was initially suggested by Semir Zeki to be exclusively dedicated to color, and he later showed that V4 can be subdivided into subregions with very high concentrations of color cells separated from each other by zones with lower concentration of such cells though even the latter cells respond better to some wavelengths than to others, a finding confirmed by subsequent studies. 
The presence in V4 of orientation-selective cells led to the view that V4 is involved in processing both color and form associated with color but it is worth noting that the orientation selective cells within V4 are more broadly tuned than their counterparts in V1, V2 and V3. Color processing in the extended V4 occurs in millimeter-sized color modules called globs. This is the part of the brain in which color is first processed into the full range of hues found in color space.Anatomical studies have shown that neurons in extended V4 provide input to the inferior temporal lobe. "IT" cortex is thought to integrate color information with shape and form, although it has been difficult to define the appropriate criteria for this claim. Despite this murkiness, it has been useful to characterize this pathway (V1 > V2 > V4 > IT) as the ventral stream or the "what pathway", distinguished from the dorsal stream ("where pathway") that is thought to analyze motion, among other features. Subjectivity of color perception: Color is a feature of visual perception by an observer. There is a complex relationship between the wavelengths of light in the visual spectrum and human experiences of color. Although most people are assumed to have the same mapping, the philosopher John Locke recognized that alternatives are possible, and described one such hypothetical case with the "inverted spectrum" thought experiment. For example, someone with an inverted spectrum might experience green while seeing 'red' (700 nm) light, and experience red while seeing 'green' (530 nm) light. This inversion has never been demonstrated in experiment, though. Subjectivity of color perception: Synesthesia (or ideasthesia) provides some atypical but illuminating examples of subjective color experience triggered by input that is not even light, such as sounds or shapes. The possibility of a clean dissociation between color experience from properties of the world reveals that color is a subjective psychological phenomenon. Subjectivity of color perception: The Himba people have been found to categorize colors differently from most Westerners and are able to easily distinguish close shades of green, barely discernible for most people. The Himba have created a very different color scheme which divides the spectrum to dark shades (zuzu in Himba), very light (vapa), vivid blue and green (buru) and dry colors as an adaptation to their specific way of life. Subjectivity of color perception: The perception of color depends heavily on the context in which the perceived object is presented.Psychophysical experiments have shown that color is perceived before the orientation of lines and directional motion by as much as 40ms and 80 ms respectively, thus leading to a perceptual asynchrony that is demonstrable with brief presentation times. Subjectivity of color perception: Chromatic adaptation In color vision, chromatic adaptation refers to color constancy; the ability of the visual system to preserve the appearance of an object under a wide range of light sources. For example, a white page under blue, pink, or purple light will reflect mostly blue, pink, or purple light to the eye, respectively; the brain, however, compensates for the effect of lighting (based on the color shift of surrounding objects) and is more likely to interpret the page as white under all three conditions, a phenomenon known as color constancy. 
Subjectivity of color perception: In color science, chromatic adaptation is the estimation of the representation of an object under a different light source from the one in which it was recorded. A common application is to find a chromatic adaptation transform (CAT) that will make the recording of a neutral object appear neutral (color balance), while keeping other colors also looking realistic. For example, chromatic adaptation transforms are used when converting images between ICC profiles with different white points. Adobe Photoshop, for example, uses the Bradford CAT. Color vision in nonhumans: Many species can see light with frequencies outside the human "visible spectrum". Bees and many other insects can detect ultraviolet light, which helps them to find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to ultraviolet "colors" and patterns rather than how colorful they appear to humans. Birds, too, can see into the ultraviolet (300–400 nm), and some have sex-dependent markings on their plumage that are visible only in the ultraviolet range. Many animals that can see into the ultraviolet range, however, cannot see red light or any other reddish wavelengths. For example, bees' visible spectrum ends at about 590 nm, just before the orange wavelengths start. Birds, however, can see some red wavelengths, although not as far into the light spectrum as humans. It is a myth that the common goldfish is the only animal that can see both infrared and ultraviolet light; their color vision extends into the ultraviolet but not the infrared.The basis for this variation is the number of cone types that differ between species. Mammals, in general, have a color vision of a limited type, and usually have red-green color blindness, with only two types of cones. Humans, some primates, and some marsupials see an extended range of colors, but only by comparison with other mammals. Most non-mammalian vertebrate species distinguish different colors at least as well as humans, and many species of birds, fish, reptiles, and amphibians, and some invertebrates, have more than three cone types and probably superior color vision to humans. Color vision in nonhumans: In most Catarrhini (Old World monkeys and apes—primates closely related to humans), there are three types of color receptors (known as cone cells), resulting in trichromatic color vision. These primates, like humans, are known as trichromats. Many other primates (including New World monkeys) and other mammals are dichromats, which is the general color vision state for mammals that are active during the day (i.e., felines, canines, ungulates). Nocturnal mammals may have little or no color vision. Trichromat non-primate mammals are rare.: 174–175 Many invertebrates have color vision. Honeybees and bumblebees have trichromatic color vision which is insensitive to red but sensitive to ultraviolet. Osmia rufa, for example, possess a trichromatic color system, which they use in foraging for pollen from flowers. In view of the importance of color vision to bees one might expect these receptor sensitivities to reflect their specific visual ecology; for example the types of flowers that they visit. However, the main groups of hymenopteran insects excluding ants (i.e., bees, wasps and sawflies) mostly have three types of photoreceptor, with spectral sensitivities similar to the honeybee's. Papilio butterflies possess six types of photoreceptors and may have pentachromatic vision. 
The most complex color vision system in the animal kingdom has been found in stomatopods (such as the mantis shrimp) having between 12 and 16 spectral receptor types thought to work as multiple dichromatic units.Vertebrate animals such as tropical fish and birds sometimes have more complex color vision systems than humans; thus the many subtle colors they exhibit generally serve as direct signals for other fish or birds, and not to signal mammals. In bird vision, tetrachromacy is achieved through up to four cone types, depending on species. Each single cone contains one of the four main types of vertebrate cone photopigment (LWS/ MWS, RH2, SWS2 and SWS1) and has a colored oil droplet in its inner segment. Brightly colored oil droplets inside the cones shift or narrow the spectral sensitivity of the cell. Pigeons may be pentachromats.Reptiles and amphibians also have four cone types (occasionally five), and probably see at least the same number of colors that humans do, or perhaps more. In addition, some nocturnal geckos and frogs have the capability of seeing color in dim light. At least some color-guided behaviors in amphibians have also been shown to be wholly innate, developing even in visually deprived animals.In the evolution of mammals, segments of color vision were lost, then for a few species of primates, regained by gene duplication. Eutherian mammals other than primates (for example, dogs, mammalian farm animals) generally have less-effective two-receptor (dichromatic) color perception systems, which distinguish blue, green, and yellow—but cannot distinguish oranges and reds. There is some evidence that a few mammals, such as cats, have redeveloped the ability to distinguish longer wavelength colors, in at least a limited way, via one-amino-acid mutations in opsin genes. The adaptation to see reds is particularly important for primate mammals, since it leads to the identification of fruits, and also newly sprouting reddish leaves, which are particularly nutritious. Color vision in nonhumans: However, even among primates, full color vision differs between New World and Old World monkeys. Old World primates, including monkeys and all apes, have vision similar to humans. New World monkeys may or may not have color sensitivity at this level: in most species, males are dichromats, and about 60% of females are trichromats, but the owl monkeys are cone monochromats, and both sexes of howler monkeys are trichromats. Visual sensitivity differences between males and females in a single species is due to the gene for yellow-green sensitive opsin protein (which confers ability to differentiate red from green) residing on the X sex chromosome. Color vision in nonhumans: Several marsupials, such as the fat-tailed dunnart (Sminthopsis crassicaudata), have trichromatic color vision.Marine mammals, adapted for low-light vision, have only a single cone type and are thus monochromats. Evolution: Color perception mechanisms are highly dependent on evolutionary factors, of which the most prominent is thought to be satisfactory recognition of food sources. In herbivorous primates, color perception is essential for finding proper (immature) leaves. In hummingbirds, particular flower types are often recognized by color as well. On the other hand, nocturnal mammals have less-developed color vision since adequate light is needed for cones to function properly. There is evidence that ultraviolet light plays a part in color perception in many branches of the animal kingdom, especially insects. 
In general, the optical spectrum encompasses the most common electronic transitions in the matter and is therefore the most useful for collecting information about the environment. Evolution: The evolution of trichromatic color vision in primates occurred as the ancestors of modern monkeys, apes, and humans switched to diurnal (daytime) activity and began consuming fruits and leaves from flowering plants. Color vision, with UV discrimination, is also present in a number of arthropods—the only terrestrial animals besides the vertebrates to possess this trait.Some animals can distinguish colors in the ultraviolet spectrum. The UV spectrum falls outside the human visible range, except for some cataract surgery patients. Birds, turtles, lizards, many fish and some rodents have UV receptors in their retinas. These animals can see the UV patterns found on flowers and other wildlife that are otherwise invisible to the human eye. Evolution: Ultraviolet vision is an especially important adaptation in birds. It allows birds to spot small prey from a distance, navigate, avoid predators, and forage while flying at high speeds. Birds also utilize their broad spectrum vision to recognize other birds, and in sexual selection. Mathematics of color perception: A "physical color" is a combination of pure spectral colors (in the visible range). In principle there exist infinitely many distinct spectral colors, and so the set of all physical colors may be thought of as an infinite-dimensional vector space (a Hilbert space). This space is typically notated Hcolor. More technically, the space of physical colors may be considered to be the topological cone over the simplex whose vertices are the spectral colors, with white at the centroid of the simplex, black at the apex of the cone, and the monochromatic color associated with any given vertex somewhere along the line from that vertex to the apex depending on its brightness. Mathematics of color perception: An element C of Hcolor is a function from the range of visible wavelengths—considered as an interval of real numbers [Wmin,Wmax]—to the real numbers, assigning to each wavelength w in [Wmin,Wmax] its intensity C(w). A humanly perceived color may be modeled as three numbers: the extents to which each of the 3 types of cones is stimulated. Thus a humanly perceived color may be thought of as a point in 3-dimensional Euclidean space. We call this space R3color. Since each wavelength w stimulates each of the 3 types of cone cells to a known extent, these extents may be represented by 3 functions s(w), m(w), l(w) corresponding to the response of the S, M, and L cone cells, respectively. Mathematics of color perception: Finally, since a beam of light can be composed of many different wavelengths, to determine the extent to which a physical color C in Hcolor stimulates each cone cell, we must calculate the integral (with respect to w), over the interval [Wmin,Wmax], of C(w)·s(w), of C(w)·m(w), and of C(w)·l(w). The triple of resulting numbers associates with each physical color C (which is an element in Hcolor) a particular perceived color (which is a single point in R3color). This association is easily seen to be linear. It may also easily be seen that many different elements in the "physical" space Hcolor can all result in the same single perceived color in R3color, so a perceived color is not unique to one physical color. 
Mathematics of color perception: Thus human color perception is determined by a specific, non-unique linear mapping from the infinite-dimensional Hilbert space Hcolor to the 3-dimensional Euclidean space R3color. Mathematics of color perception: Technically, the image of the (mathematical) cone over the simplex whose vertices are the spectral colors, by this linear mapping, is also a (mathematical) cone in R3color. Moving directly away from the vertex of this cone represents maintaining the same chromaticity while increasing its intensity. Taking a cross-section of this cone yields a 2D chromaticity space. Both the 3D cone and its projection or cross-section are convex sets; that is, any mixture of spectral colors is also a color. Mathematics of color perception: In practice, it would be quite difficult to physiologically measure an individual's three cone responses to various physical color stimuli. Instead, a psychophysical approach is taken. Three specific benchmark test lights are typically used; let us call them S, M, and L. To calibrate human perceptual space, scientists allowed human subjects to try to match any physical color by turning dials to create specific combinations of intensities (IS, IM, IL) for the S, M, and L lights, resp., until a match was found. This needed only to be done for physical colors that are spectral, since a linear combination of spectral colors will be matched by the same linear combination of their (IS, IM, IL) matches. Note that in practice, often at least one of S, M, L would have to be added with some intensity to the physical test color, and that combination matched by a linear combination of the remaining 2 lights. Across different individuals (without color blindness), the matchings turned out to be nearly identical. Mathematics of color perception: By considering all the resulting combinations of intensities (IS, IM, IL) as a subset of 3-space, a model for human perceptual color space is formed. (Note that when one of S, M, L had to be added to the test color, its intensity was counted as negative.) Again, this turns out to be a (mathematical) cone, not a quadric, but rather all rays through the origin in 3-space passing through a certain convex set. Again, this cone has the property that moving directly away from the origin corresponds to increasing the intensity of the S, M, L lights proportionately. Again, a cross-section of this cone is a planar shape that is (by definition) the space of "chromaticities" (informally: distinct colors); one particular such cross-section, corresponding to constant X+Y+Z of the CIE 1931 color space, gives the CIE chromaticity diagram. Mathematics of color perception: This system implies that for any hue or non-spectral color not on the boundary of the chromaticity diagram, there are infinitely many distinct physical spectra that are all perceived as that hue or color. So, in general, there is no such thing as the combination of spectral colors that we perceive as (say) a specific version of tan; instead, there are infinitely many possibilities that produce that exact color. The boundary colors that are pure spectral colors can be perceived only in response to light that is purely at the associated wavelength, while the boundary colors on the "line of purples" can each only be generated by a specific ratio of the pure violet and the pure red at the ends of the visible spectral colors. 
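The linear mapping from spectra to cone responses described above can be sketched numerically. The cone sensitivity curves below are crude Gaussian stand-ins (real work would use measured cone fundamentals), the integrals are approximated by simple sums, and the example also shows how two noticeably different spectra can produce nearly the same (S, M, L) triple.

```python
import numpy as np

w = np.linspace(380, 740, 361)            # wavelength grid over the visible range (nm)
dw = w[1] - w[0]

# Crude Gaussian stand-ins for the cone sensitivity curves s(w), m(w), l(w).
def bell(peak, width):
    return np.exp(-0.5 * ((w - peak) / width) ** 2)

s, m, l = bell(445, 25), bell(540, 35), bell(565, 40)

def cone_response(C):
    """Project a spectrum C(w) onto (S, M, L) by integrating C*s, C*m and C*l."""
    return np.array([np.sum(C * s), np.sum(C * m), np.sum(C * l)]) * dw

flat = np.ones_like(w)                                 # an equal-energy spectrum
ripple = 0.3 * np.sin(2 * np.pi * (w - 380) / 12.0)    # a rapid spectral ripple

print(cone_response(flat))
print(cone_response(flat + ripple))       # almost the same triple: the two spectra
                                          # differ physically but are near-metamers
```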
Mathematics of color perception: The CIE chromaticity diagram is horseshoe-shaped, with its curved edge corresponding to all spectral colors (the spectral locus), and the remaining straight edge corresponding to the most saturated purples, mixtures of red and violet.
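A small sketch of the cross-section idea: dividing a tristimulus triple by its sum projects it onto the constant-sum plane, so scaling the intensity of a light (moving along a ray through the origin) leaves its chromaticity coordinates unchanged. The tristimulus values here are arbitrary examples.

```python
import numpy as np

def chromaticity(XYZ):
    """Project a tristimulus triple onto the constant X+Y+Z cross-section."""
    X, Y, Z = XYZ
    total = X + Y + Z
    return X / total, Y / total            # the usual (x, y) chromaticity coordinates

color = np.array([20.0, 30.0, 15.0])       # arbitrary example tristimulus values
print(chromaticity(color))                 # chromaticity of the light
print(chromaticity(3 * color))             # identical: scaling intensity moves the
                                           # point along a ray through the origin
```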
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Human communication** Human communication: Human communication, or anthroposemiotics, is a field of study dedicated to understanding how humans communicate. Humans' ability to communicate with one another would not be possible without an understanding of what we are referencing or thinking about. Because humans are unable to fully understand one another's perspective, there needs to be a creation of commonality through a shared mindset or viewpoint. The field of communication is very diverse, as there are multiple layers of what communication is and how we use its different features as human beings. Human communication: Humans have communicatory abilities other animals do not, for example, humans are able to communicate about time and place as though they are solid objects. Humans communicate to request help, inform others, and share attitudes for bonding. Communication is a joint activity largely dependent on the ability to maintain common attention. We share relevant background knowledge and joint experience in order to communicate content and coherence in exchanges.The evolution of human communication took place over a long period of time. Humans evolved from simple hand gestures to the use of spoken language. Most face-to-face communication requires visually reading and following along with the other person, offering gestures in reply, and maintaining eye contact throughout the interaction. Category: The current study of human communication can be branched off into two major categories; rhetorical and relational. The focus of rhetorical communication is primarily on the study of influence; the art of rhetorical communication is based on the idea of persuasion. The relational approach examines communication from a transactional perspective; two or more people interact to reach an agreed perspective.In its early stages, rhetoric was developed to help ordinary people prove their claims in court; this shows how persuasion is key in this form of communication. Aristotle stated that effective rhetoric is based on argumentation. As explained in the text, rhetoric involves a dominant party and a submissive party or a party that succumbs to that of the most dominant party. While the rhetorical approach stems from Western societies, the relational approach stems from Eastern societies. Eastern societies hold higher standards for cooperation, which makes sense as to why they would sway more toward a relational approach for that matter. "Maintaining valued relationships is generally seen as more important than exerting influence and control over others". "The study of human communication today is more diversified than ever before in its history".Classification of human communication can be found in the workplace, especially for group work. Co-workers need to argue with each other to gain the best solutions for their projects, while they also need to nurture their relationships to maintain their collaboration. For example, in their group work, they may use the communication tactic of "saving face". Category: Spoken language involves speech, mostly human quality to acquire. For example, chimpanzees are humans' closest relatives, but they are unable to produce speech. Chimpanzees are the closest living species to humans. Chimpanzees are closer to humans, in genetic and evolutionary terms, than they are to gorillas or other apes. 
The fact that a chimpanzee will not acquire speech, even when raised in a human home with all the environmental input of a normal human child, is one of the central puzzles we face when contemplating the biology of our species. In repeated experiments, starting in the 1910s, chimpanzees raised in close contact with humans have universally failed to speak, or even to try to speak, despite their rapid progress in many other intellectual and motor domains. Each normal human is born with a capacity to rapidly and unerringly acquire their mother tongue, with little explicit teaching or coaching. In contrast, no nonhuman primate has spontaneously produced even a word of the local language. Definition: Human communication can be defined as any Shared Symbolic Interaction. Shared, because each communication process also requires a system of signification (the Code) as its necessary condition, and if the encoding is not known to all those who are involved in the communication process, there is no understanding and therefore fails the same notification. Symbolic, because there is a need for a signifier or sign, which allows the transmission of the message. Interaction, since it involves two or more people, resulting in a further increase of knowledge on the part of all those who interact. Types: Human communication can be subdivided into a variety of types: Intrapersonal communication (communication with oneself): This very basic form of information, is the standard and foundation, of all things communication. This communication with ourselves showcases the process in which we think on our previous and ongoing actions, as well as what we choose to understand from other types of communications and events. Our intrapersonal communication, may be shown and expressed to others by our reactions to certain outcomes, through simple acts of gestures and expressions. Types: Interpersonal communication (communication between two or more people) - Communication relies heavily on understanding the processes and situations that you are in, in order to communicate effectively. It is more than simple behaviors and strategies, on how and what it means to communicate with another person. Interpersonal communication reflects the personality and characteristics, of a person, seen through the type of dialect, form, and content, a person chooses to communicate with. As simple as this is, interpersonal communication can only be correctly done if both persons involved in the communication, understand what it is to be human beings, and share similar qualities of what it means to be humans. It involves acts of trust and openness, as well as a sense of respect and care towards what the other person is talking about.Nonverbal communication: The messages we send to each other, in ways that cover the act of word-by-mouth. These actions may be done through the use of our facial features and expressions, arms and hands, the tone of our voice, or even our very appearance can display a certain type of message. Types: Speech: Allowing words to make for an understanding as to what people are feeling and expressing. It allows a person to get a direct thought out to another by using their voice to create words that then turn into a sentence, which in turn then turns into a conversation to get a message across. "What is spoken or expressed, as in conversation; uttered or written words: seditious speech. A talk or public address, or a written copy of this: The senator gave a speech. 
The language or dialect of a nation or region: American speech. One's manner or style of speaking: the mayor's mumbling speech. The study of oral communication, speech sounds, and vocal physiology". Types: Conversation: Allows however many people to say words back and forth to each other that will equal into a meaningful rhythm called conversation. It defines ideas between people, teams, or groups. To have a conversation requires at least two people, making it possible to share the values and interests of each person. Conversation makes it possible to get messages across to other people, whether that be an important message or just a simple message. "Strong conversation skills will virtually guarantee that you will be better understood by most people" Visual communication: The type of communication where it involves using your eyes that allow you to read signs, charts, graphs, and pictures that have words or phrases and or pictures showing and describing what needs to be portrayed to get information across. Using visual communication allows for people to live daily lives without constantly needing to speak. A simple example is driving in a car and seeing a red sign that says "stop" on it; as a driver, you are using visual communication to read the sign understand what is being said and stop your car to not get into an accident. "If carried out properly, visual communication has various benefits. In the information era and fast-paced society in which time is limited, visual communication help to communicate ideas faster and better. Generally speaking, it offers these benefits: instant conveyance, ease of understanding, cross-cultural communication and generation of enjoyment". Types: Writing: What I am forming together right now is called writing where it revolves around putting words together to create a sentence that flows into a sentence of meaning. Words are letters that are put together to transform a word that allows the person to understand and follow along with what is being portrayed. Writing requires us to use our hands and paper to form words and letters to create the flow of a message or conversation. Writing can also be done in the form of typing which is what you are seeing here, forming words together on a computer. "Writing" is the process of using symbols (letters of the alphabet, punctuation, and spaces) to communicate thoughts and ideas in a readable form". Types: Mail: This is in the form of postage which is in a letter or package. When someone uses the post office service requiring them to send a letter that they wrote with pencil and paper or they are using the postage service to send an object to someone out of state. Makes for an easier process to send a loved one messages or objects that do not live next to you or within a 20 min drive distance. "Material (such as letters and packages) sent or carried in a postal system". For an example a loved one is in the military and is out of state, to let them know what is going on in your life and to also ask how they are doing you send them a letter via the postal service to get that message to them at their location. Workers at the postal service get the letters and packages across states and countries. Types: Mass media: "The means of communication reaching a large number of people such as the population of a nation through certain channels like film, radio, books, music, or television in that the consumer participation stays passive with comparison to interactive network platforms". 
The television allows for getting messages to a lot of people in different locations in a matter of minutes making it for the fastest communication skill. Types: Telecommunication: A style of communication that allows humans to understand conversation, speech, and or visual communication through technology. Whether you are listening to the radio, using your eyes to watch television, or reading words in an email that is Telecommunication. This type of communication allows for a faster and more efficient process for a message to get across to another one from anywhere you are. Location is not a problem for this type of communication. "The transmission media in telecommunication have evolved through numerous stages of technology, from beacons and other visual signals (such as smoke signals, semaphore telegraphs, signal flags, and optical heliographs), to electrical cable and electromagnetic radiation, including light. Such transmission paths are often divided into communication channels, which afford the advantages of multiplexing multiple concurrent communication sessions. Telecommunication is often used in its plural form". Types: Organizational communication (communication within organizations): Defined by structure and planning, making words, phrases, and images flow into direction and meaning. "The construct of organizational communication structure is defined by its 5 main dimensions: relationships, entities, contexts, configuration, and temporal stability". Making it easier to work into groups of different culture and thoughts. Types: Mass communication: This type of communication involves the process of communicating with known and unknown audiences, through the use of technology or other mediums. There is hardly ever an opportunity for the audience to respond directly to those who sent the message, there is a divide/separation between the sender and receiver. There are typically four players in the process of mass communication, these players are those who send the message, the message itself, the medium in which the message is sent, and those who receive the message. These four components come together to be the communication we see and are a part of the most, as the media helps in distributing these messages to the world every day. Types: Group dynamics (communication within groups): Allows ideas to be created within a group of people, allowing many minds to think together to form and create meaning. "The interactions that influence the attitudes and behavior of people when they are grouped with others through either choice or accidental circumstances". Types: Cross-cultural communication (communication across cultures): This allows different people from different locations, gender, and culture, in a group to feed off of each other's ideas to form something much bigger and better. "Culture is a way of thinking and living whereby one picks up a set of attitudes, values, norms, and beliefs that are taught and reinforced by other members in the group".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Xiaomi Mi A2** Xiaomi Mi A2: The Xiaomi Mi A2 (also known as Xiaomi Mi 6X) is a mid-range smartphone co-developed by Xiaomi and Google as part of the Android One program. Specifications: Hardware The phone features a 5.99-inch Full HD+ IPS LCD display with 1080 x 2160 pixels and 403 ppi pixel density, a metal unibody and Corning Gorilla Glass 5 protection. It is powered by the Qualcomm Snapdragon 660 SoC with an Adreno 512 GPU, and has a USB 2.0, Type-C 1.0 reversible connector. It has a dual rear camera setup with a 12 MP Sony IMX486 primary camera (1.25 μm pixel size and f/1.75 aperture) and a 20 MP Sony IMX376 secondary camera (2.0 μm pixel size and f/1.75 aperture). The front camera is a 20 MP Sony IMX376 sensor with 2.0 μm pixel size and f/2.2 aperture. It has a 3010 mAh battery which supports Qualcomm Quick Charge 3.0 (4.0 for India). This is the first mid-range Xiaomi smartphone to omit the 3.5 mm audio jack and microSD slot. Specifications: Software The Xiaomi Mi A2 is part of the Android One program, in which software updates are provided directly by Google. The Mi 6X variant runs on Xiaomi's MIUI. It is preinstalled with Android 8.1.0 "Oreo" out of the box, and can be upgraded to Android 10. Unofficially, the operating system can be replaced with Ubuntu Touch. As part of the Android One program, the Mi A2 provides a stock Android experience and a UI very close to that of the Google Pixel. Specifications: Release The Xiaomi Mi A2 is a re-branded Xiaomi Mi 6X phone. The Xiaomi Mi 6X was first released in April 2018, while the A2 was released in July 2018. A stripped-down version of the phone, known as the Xiaomi Mi A2 Lite, was released in the same month. The Mi A2 Lite, however, did include a 3.5 mm audio jack, a microSD slot and a higher battery capacity (4000 mAh). Unlike the regular Mi A2, the Mi A2 Lite is based on Xiaomi's budget Redmi series. A variant of this phone is known as the Redmi 6 Pro, which shares the same hardware specification, but runs on the MIUI user interface instead of Android One.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IBM cloud computing** IBM cloud computing: IBM Cloud (formerly known as Bluemix) is a set of cloud computing services for business offered by the information technology company IBM. Services: As of 2021, IBM Cloud contains more than 170 services including compute, storage, networking, database, analytics, machine learning, and developer tools. History: SoftLayer SoftLayer Technologies, Inc. (now IBM Cloud) was a dedicated server, managed hosting, and cloud computing provider, founded in 2005 and acquired by IBM in 2013. SoftLayer initially specialized in hosting workloads for gaming companies and startups, but shifted focus to enterprise workloads after its acquisition.SoftLayer had bare-metal compute offerings before other large cloud providers such as Amazon Web Services.SoftLayer has hosted workloads for companies such as The Hartford, WhatsApp, Whirlpool, Daimler, and Macy's. History: Timeline Year 2005: SoftLayer was established in 2005 by Lance Crosby and several of his ex-coworkers. Year 2010 - August: GI Partners acquired a majority equity stake in SoftLayer in August 2010. Year 2010 - November: In November of that year it merged the company with The Planet Internet Services, SoftLayer's biggest competitor, and consolidated the customer base under the SoftLayer brand. Year 2011 - Q1: In Q1 2011, the company reported hosting more than 81,000 servers for more than 26,000 customers in locations throughout the United States. Year 2011 - July: In July 2011, the company announced plans for international expansion to Amsterdam and Singapore to add to the existing network of North American-based data centers in Dallas (Texas), San Jose (California), Seattle (Washington), Santiago de Querétaro (Mexico), Houston (Texas) and Washington, D.C. Most of these data centers were leased via Digital Realty. Year 2013 June 4: On June 4, 2013, IBM announced its acquisition of SoftLayer under undisclosed financial terms, in a deal that according to Reuters could have fetched more than $2 billion, to form an IBM Cloud Services Division. At the time of acquisition, SoftLayer was described as the biggest privately held cloud infrastructure provider (IaaS) in the world. Year 2015 - May: As of May 2015, the company has 23 data centers in 11 different countries. Year 2018: By 2018, SoftLayer was renamed to IBM Cloud. History: Initial launch of Bluemix (2013-2016) In June 2013, IBM acquired SoftLayer, a public cloud platform, to serve as the foundation for its IaaS offering. Bluemix was announced for public beta in February 2014 after having been developed since early 2013. Bluemix was based on the open source Cloud Foundry project and ran on SoftLayer infrastructure. IBM announced the general availability of the Bluemix Platform-as-a-Service (PaaS) offering in July 2014.By April 2015, Bluemix included a suite of over 100 cloud-based development tools "including social, mobile, security, analytics, database, and IoT (internet of things). Bluemix had grown to 83,000 users in India with growth of approximately 10,000 users each month.A year after announcement, Bluemix had made little headway in the cloud-computing platform space relative to its competition, and remained substantially behind market leaders Microsoft Azure and Amazon AWS. By August 2016, little had changed in market acceptance of the Bluemix offering. 
In February 2016, IBM Bluemix added IBM's Function as a Service (FaaS), or serverless computing, offering, built using open source from the Apache OpenWhisk incubator project, which IBM is largely credited with seeding. This system, equivalent to Amazon Lambda, Microsoft Azure Functions, Oracle Cloud Fn or Google Cloud Functions, allows a specific function to be called in response to an event without requiring any resource management from the developer. History: Re-brand to IBM Cloud (2017–Present) In May 2017 IBM released Kubernetes support as the IBM Bluemix Container Service, later renamed to the IBM Cloud Kubernetes Service (IKS). IKS was built using the open source Kubernetes project. This system, equivalent to Amazon Web Services EKS, Microsoft Azure AKS, or Google Cloud GKE, aims to provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. In October 2017, IBM announced that it would consolidate its offerings under the IBM Cloud brand, thus retiring the Bluemix and SoftLayer brands. In March 2018, IBM launched an industry-first managed Kubernetes service on bare metal. In August 2019, three weeks after the close of the Red Hat acquisition, IBM launched a managed Red Hat OpenShift on IBM Cloud. In November 2019, IBM announced that it had designed the world's first financial services-ready public cloud and that Bank of America was its first committed collaborator and anchor customer, joined shortly thereafter in 2020 by BNP Paribas as its first European anchor client. IBM announced in April 2021 the general availability of IBM Cloud for Financial Services, including support for Red Hat OpenShift and other cloud-native technologies. In July 2021, it was announced that SAP was onboarding two of its finance and data management solutions to IBM Cloud for Financial Services. In September 2021, CaixaBank announced it would boost its digital capabilities with IBM Cloud for Financial Services and onboard to the new IBM Cloud Multizone Region in Spain. History: Customer base In 2019, IBM partnered with the United States Tennis Association (USTA) to provide new AI-powered tools for the US Open. In May 2020, IBM announced agreements with six European companies, including Osram and Crédit Mutuel, that use IBM Cloud to access advanced technologies such as AI, blockchain and analytics. Reviews: IBM Cloud continued to be considered a leader in bare metal in 2020, and distinguished itself by providing over 11 million possible custom configurations with the latest Power, Intel, and AMD CPUs and Nvidia GPUs. Environmental impact: In 2021, IBM announced it would achieve net zero greenhouse gas emissions by 2030.
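To make the Function-as-a-Service model described above concrete, here is a minimal sketch of an Apache OpenWhisk-style Python action (illustrative only; the greeting logic and the "name" parameter are invented for the example). The platform invokes the function in response to an event, and the developer manages no servers.

```python
# Minimal OpenWhisk-style action sketch: the platform calls main() with a dict of
# parameters when the triggering event fires and expects a dict back.
def main(params):
    name = params.get("name", "world")   # "name" is a hypothetical input parameter
    return {"greeting": f"Hello, {name}!"}
```

Such an action would typically be uploaded with the OpenWhisk or IBM Cloud CLI (for example, a command along the lines of `ibmcloud fn action create hello hello.py`) and bound to a trigger, after which the platform handles scheduling and scaling.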
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Traveler IQ Challenge** Traveler IQ Challenge: Free geography gameTraveler IQ Challenge was created by Canadian developer Travelpod. Development: The Wall Street Journal explained that the game was "created as a marketing gimmick in June by TravelPod, a travel Web site owned by Expedia". It noted that Traveler IQ Challenge fit into the growing category of casual games and contextually came at a time when there was a "renewed interest in geography, stimulated by new technologies like GPS satellite-based navigation devices and Google Earth".Luc Levesque, a Canadian programmer, traveler, and founder of TravelPod, was inspired by a game he played on long train trips where he "would randomly name a country and one of his travel companions would attempt to name another country or capital city that starts with the third letter of the previous country's name". After Facebook opened up its site so independent developers could create games for the social networking site, "Two programmers created the game for TravelPod in just under three weeks".In 2007, Traveler IQ had "more than four million people a month who play it on sites across the Internet, including Facebook's popular social network". As a result of the game, Travelpod saw "huge increases in registrations and traffic". By 2013, the game had "netted 7800 links from almost 1000 root domains". Gameplay: The Wall Street Journal described the gameplay: "Traveler IQ starts out asking users to locate some of the better known cities and attractions in the world, like London, giving users a limit of about 10 seconds to pinpoint them on a map. The locations quickly get harder with cities like Ashkabat, Turkmenistan. The game tells users how close, in kilometers, they got to the actual locations and scores them accordingly, with more points awarded for shorter distances". Critical reception: Geographer at the University of Kansas, Jerome Dobson, despite not having played the game, said "new technological applications like Traveler IQ are helping to revive geography after a decades-long decline in the teaching of the subject in U.S. schools". TeachersFirst said "This challenging geography website is sure to excite your students as they click their way throughout the world", and noted its classroom potential. Facebook Applications gave the game 3.5/5, "because it's fun, challenging, and educative as well".
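The scoring described above is distance-based. As an illustration only (this is not TravelPod's actual implementation), the sketch below measures how far a guess lands from the true location with the haversine great-circle formula and converts the distance into points; the point scale and distance cutoff are invented for the example.

```python
# Illustrative distance-based scoring: haversine great-circle distance in km,
# then a simple linear points rule (the 1000-point scale and 5000 km cutoff are
# made up for this sketch).
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

def score(guess, target, max_points=1000, max_km=5000):
    d = haversine_km(*guess, *target)
    return d, max(0, round(max_points * (1 - d / max_km)))

# Example: the player clicks on London when asked for Ashgabat, Turkmenistan
km_off, points = score((51.51, -0.13), (37.95, 58.38))
print(f"{km_off:.0f} km off -> {points} points")
```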
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Leukotriene** Leukotriene: Leukotrienes are a family of eicosanoid inflammatory mediators produced in leukocytes by the oxidation of arachidonic acid (AA) and the essential fatty acid eicosapentaenoic acid (EPA) by the enzyme arachidonate 5-lipoxygenase.Leukotrienes use lipid signaling to convey information to either the cell producing them (autocrine signaling) or neighboring cells (paracrine signaling) in order to regulate immune responses. The production of leukotrienes is usually accompanied by the production of histamine and prostaglandins, which also act as inflammatory mediators.One of their roles (specifically, leukotriene D4) is to trigger contractions in the smooth muscles lining the bronchioles; their overproduction is a major cause of inflammation in asthma and allergic rhinitis. Leukotriene antagonists are used to treat these disorders by inhibiting the production or activity of leukotrienes. History and name: The name leukotriene, introduced by Swedish biochemist Bengt Samuelsson in 1979, comes from the words leukocyte and triene (indicating the compound's three conjugated double bonds). What would be later named leukotriene C, "slow reaction smooth muscle-stimulating substance" (SRS) was originally described between 1938 and 1940 by Feldberg and Kellaway. The researchers isolated SRS from lung tissue after a prolonged period following exposure to snake venom and histamine. Types: Cysteinyl leukotrienes LTC4, LTD4, LTE4 and LTF4 are often called cysteinyl leukotrienes due to the presence of the amino acid cysteine in their structure. The cysteinyl leukotrienes make up the slow-reacting substance of anaphylaxis (SRS-A). LTF4, like LTD4, is a metabolite of LTC4, but, unlike LTD4, which lacks the glutamic residue of glutathione, LTF4 lacks the glycine residue of glutathione. Types: LTB4 LTB4 is synthesized in vivo from LTA4 by the enzyme LTA4 hydrolase. Its primary function is to recruit neutrophils to areas of tissue damage, though it also helps promote the production of inflammatory cytokines by various immune cells. Drugs that block the actions of LTB4 have shown some efficacy in slowing the progression of neutrophil-mediated diseases. LTG4 There has also been postulated the existence of LTG4, a metabolite of LTE4 in which the cysteinyl moiety has been oxidized to an alpha-keto-acid (i.e.—the cysteine has been replaced by a pyruvate). Very little is known about this putative leukotriene. Types: LTB5 Leukotrienes originating from the omega-3 class eicosapentanoic acid (EPA) have diminished inflammatory effects. In human subjects whose diets have been supplemented with eicosapentaenoic acid, leukotrine B5, along with leukotrine B4, is produced by neutrophils. LTB5 induces aggregation of rat neutrophils, chemokinesis of human polymorphonuclear neutrophils (PMN), lysosomal enzyme release from human PMN and potentiation of bradykinin-induced plasma exudation, although compared to LTB4, it has at least 30 times less potency. Biochemistry: Synthesis Leukotrienes are synthesized in the cell from arachidonic acid by arachidonate 5-lipoxygenase. The catalytic mechanism involves the insertion of an oxygen moiety at a specific position in the arachidonic acid backbone.The lipoxygenase pathway is active in leukocytes and other immunocompetent cells, including mast cells, eosinophils, neutrophils, monocytes, and basophils. 
When such cells are activated, arachidonic acid is liberated from cell membrane phospholipids by phospholipase A2, and donated by the 5-lipoxygenase-activating protein (FLAP) to 5-lipoxygenase.5-Lipoxygenase (5-LO) uses FLAP to convert arachidonic acid into 5-hydroperoxyeicosatetraenoic acid (5-HPETE), which spontaneously reduces to 5-hydroxyeicosatetraenoic acid (5-HETE). The enzyme 5-LO acts again on 5-HETE to convert it into leukotriene A4 (LTA4), an unstable epoxide. 5-HETE can be further metabolized to 5-oxo-ETE and 5-oxo-15-hydroxy-ETE, all of which have pro-inflammatory actions similar but not identical to those of LTB4 and mediated not by LTB4 receptors but rather by the OXE receptor (see 5-Hydroxyicosatetraenoic acid and 5-oxo-eicosatetraenoic acid).In cells equipped with LTA hydrolase, such as neutrophils and monocytes, LTA4 is converted to the dihydroxy acid leukotriene LTB4, which is a powerful chemoattractant for neutrophils acting at BLT1 and BLT2 receptors on the plasma membrane of these cells.In cells that express LTC4 synthase, such as mast cells and eosinophils, LTA4 is conjugated with the tripeptide glutathione to form the first of the cysteinyl-leukotrienes, LTC4. Outside the cell, LTC4 can be converted by ubiquitous enzymes to form successively LTD4 and LTE4, which retain biological activity.The cysteinyl-leukotrienes act at their cell-surface receptors CysLT1 and CysLT2 on target cells to contract bronchial and vascular smooth muscle, to increase permeability of small blood vessels, to enhance secretion of mucus in the airway and gut, and to recruit leukocytes to sites of inflammation.Both LTB4 and the cysteinyl-leukotrienes (LTC4, LTD4, LTE4) are partly degraded in local tissues, and ultimately become inactive metabolites in the liver. Biochemistry: Function Leukotrienes act principally on a subfamily of G protein-coupled receptors. They may also act upon peroxisome proliferator-activated receptors. Leukotrienes are involved in asthmatic and allergic reactions and act to sustain inflammatory reactions. Several leukotriene receptor antagonists such as montelukast and zafirlukast are used to treat asthma. Recent research points to a role of 5-lipoxygenase in cardiovascular and neuropsychiatric illnesses.Leukotrienes are very important agents in the inflammatory response. Some such as LTB4 have a chemotactic effect on migrating neutrophils, and as such help to bring the necessary cells to the tissue. Leukotrienes also have a powerful effect in bronchoconstriction and increase vascular permeability. Leukotrienes in asthma: Leukotrienes contribute to the pathophysiology of asthma, especially in patients with aspirin-exacerbated respiratory disease (AERD), and cause or potentiate the following symptoms: Airflow obstruction Increased secretion of mucus Mucosal accumulation Bronchoconstriction Infiltration of inflammatory cells in the airway wall Role of cysteinyl leukotrienes Cysteinyl leukotriene receptors CYSLTR1 and CYSLTR2 are present on mast cells, eosinophil, and endothelial cells. During cysteinyl leukotriene interaction, they can stimulate proinflammatory activities such as endothelial cell adherence and chemokine production by mast cells. As well as mediating inflammation, they induce asthma and other inflammatory disorders, thereby reducing the airflow to the alveoli. The levels of cysteinyl leukotrienes, along with 8-isoprostane, have been reported to be increased in the EBC of patients with asthma, correlating with disease severity. 
Cysteinyl leukotrienes may also play a role in adverse drug reactions in general and in contrast medium induced adverse reactions in particular.In excess, the cysteinyl leukotrienes can induce anaphylactic shock. Leukotrienes in dementia: Leukotrienes are found to play an important role in the later stages of Alzheimer's disease and related dementias in studies with animals. In tau transgenic mice, which develop tau pathology, "zileuton, a drug that inhibits leukotriene formation by blocking the 5-lipoxygenase enzyme" was found to reverse memory loss.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Information-Technology Engineers Examination** Information-Technology Engineers Examination: The Information-Technology Engineers Examination (Japanese: 情報処理技術者試験, Hepburn: jōhō shori gijutsusha shiken, or ITEE) is a group of information technology examinations administered by the Information Technology Promotion Agency, Japan (IPA). The ITEE was introduced in 1969 by Japan's Ministry of International Trade and Industry (MITI), and it has since changed hands twice, first to the Japan Information Processing Development Corporation (JIPDEC) in 1984, and then to the IPA in 2004. At first there were two examination categories, one for lower-level programmers and one for upper-level programmers, and over the years the number of categories increased to twelve as of 2016. Information-Technology Engineers Examination: The examinations are carried out during the course of one day; candidates sit a morning test and an afternoon test. The morning test assesses the breadth of the candidate's subject-matter knowledge, and the afternoon test assesses the candidate's ability to apply that knowledge. The examinations have a low pass rate: between 1969 and 2010 15.4 million people took them, but only 1.7 million were successful (an average success rate of 11 percent). Information-Technology Engineers Examination: The questions are developed by a committee of experts, and are continually updated to reflect changes in the computer industry. The examination categories are also subject to change based upon industry trends. The ITEE examinations are recognized as qualifications in several Asian countries, including India, Singapore, South Korea, China, the Philippines, Thailand, Vietnam, Myanmar, Taiwan, and Bangladesh. History: The Information Technology Engineers Examination was founded in 1969 as a national examination by Japan's Ministry of International Trade and Industry (MITI). At first, two categories of examination were offered: Class I Information Technology Engineer, aimed at upper-level programmers, and Class II Information Technology Engineer, aimed at lower level programmers. These two categories were followed in 1971 by the Special Information Technology Engineer Examination.In 1984, MITI (then known as the Ministry of Economy, Trade and Industry, or METI) handed over the administration of the examinations to Japan Information Processing Development Corporation (JIPDEC). JIPDEC received most of its funding from METI, and while the two organizations were technically independent, they shared close ties with each other. JIPDEC founded the Japan Information Technology Engineers Examination Center (JITEC) to oversee the actual running of the examinations.The 1980s saw the introduction of two new examination categories: the Information Technology Systems Audit Engineer Examination in 1986, and the Online Information Technology Engineer Examination in 1988. The former was aimed at systems auditors, and the latter at network engineers.The examination categories underwent a major upheaval in 1994. The Special Information Technology Engineer Examination was expanded into four separate examinations: the Applications Engineer Examination, the Systems Analyst Examination, the Project Manager Examination, and the Systems Administration Engineer Examination. 
The Online Information Technology Engineer Examination became the Network Specialist Examination, the Information Technology Systems Auditor Examination became the Systems Auditor Examination, and three new categories of examination were introduced: the Production Engineer Examination, the Database Specialist Examination, and the Basic Systems Administrator Examination. These were followed by a further two new categories in 1996: the Advanced Systems Administrator Examination and the Applied Microcontroller Systems Engineer Examination. There was another major change to the categories in 2001. The Class I Information Technology Engineer Examination became the Fundamental Information Technology Engineer Examination, and the Class II Information Technology Engineer Examination became the Applied Information Technology Engineer Examination. The Production Engineer Examination was discontinued, and the Information Security Administrator Examination was introduced. In 2004, the administration of the examinations changed hands from JIPDEC to the Information-Technology Promotion Agency (IPA). This was followed in 2006 by the introduction of a new examination category, the Technical Engineer (Information Security) Examination. 2009 saw the introduction of a new test, the IT Passport Examination, while other examination categories were consolidated. The Systems Analyst Examination and the Advanced Systems Administrator Examination were merged to form the IT Strategist Examination, and the Technical Engineer (Information Security) Examination and the Information Security Administrator Examination were merged to form the Information Security Specialist Examination. Format: The examinations are all carried out in one day, with a morning test and an afternoon test. The morning test is multiple choice, and aims to test the candidate's breadth of knowledge of the material being examined. The afternoon test assesses the candidate's ability to apply that knowledge through a series of case studies and essay questions. The afternoon test also aims to test the candidate's past experience. After the examinations are over, candidates are allowed to take their question papers home with them, and the answers to some of the questions are made available online. Candidates who pass the examinations receive certificates from METI. These certificates show the date that they were awarded, but they have no expiration date. Between 1969 and 2010, 15.4 million people took one of the ITEE examinations, and only 1.7 million people passed, giving an average success rate of 11 percent. Categories: As of July 2016 there are 13 examination categories, divided into four levels. Administration: JITEC bases the scope and the difficulty of the exams on the advice of a committee of experts from the computer industry and from academia. This committee investigates the skills currently used by engineers in the relevant examination category, and uses its findings as the basis for its recommendations. In this way, the questions in the exams are kept up to date with new and evolving technologies. The knowledge areas that are tested are taken from software engineering, information systems and computer science. The examination categories are also constantly reviewed to ensure that they are relevant to current trends in information technology while remaining consistent with previous exams. The examination questions themselves are also developed by a committee consisting of around 400 experts.
Subcommittees are put in charge of question development, of checking, and of question selection, and are given independent authority to create questions. New questions are made for each exam, but some questions intended to test the breadth of candidates' knowledge may be altered and reused.In addition to its national examination status in Japan, the ITEE is also recognized as a professional credential in several Asian countries, including India, Singapore, South Korea, China, the Philippines, Thailand, Vietnam, Myanmar, Taiwan, and Bangladesh.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Diamine oxidase** Diamine oxidase: Diamine oxidase (DAO), also known as "amine oxidase, copper-containing, 1" (AOC1) and formerly called histaminase, is an enzyme (EC 1.4.3.22) involved in the metabolism, oxidation, and inactivation of histamine and other polyamines such as putrescine or spermidine in animals. It belongs to the amine oxidase (copper-containing) (AOC) family of amine oxidase enzymes. In humans, DAO is encoded by the AOC1 gene. Diamine oxidase: The highest levels of DAO expression are observed in the digestive tract and the placenta. In humans, a certain subtype of placental cells, namely the extravillous trophoblasts, express the enzyme and secrete it into the blood stream of a pregnant woman. Lowered diamine oxidase values in maternal blood in early pregnancy may be an indication of trophoblast-related pregnancy disorders such as early-onset preeclampsia. Normally the enzyme is absent or only scarcely present in the human blood circulation, but its level increases greatly in pregnant women, suggesting a protective mechanism against the adverse effects of histamine. It is also secreted by eosinophils. A shortage of diamine oxidase in the human body may manifest as an allergy-like reaction or histamine intolerance. Supplementation: For people with a histamine intolerance, the benefits of supplementation are inconclusive due to limited research.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Brpf1** Brpf1: Peregrin, also known as bromodomain and PHD finger-containing protein 1, is a protein that in humans is encoded by the BRPF1 gene located on 3p26-p25. Peregrin is a multivalent chromatin regulator that recognizes different epigenetic marks and activates three histone acetyltransferases (Moz, Morf and Hbo1). BRPF1 contains two PHD fingers, one bromodomain and one chromo/Tudor-related Pro-Trp-Trp-Pro (PWWP) domain. Function: Embryo development The Brpf1 gene is highly conserved and has a critical role in different developmental processes. Zebrafish BRPF1, which is coordinated by its particular set of PWWP domains, mediates Moz-dependent histone acetylation and maintains Hox gene expression throughout vertebrate development, and hence determines proper pharyngeal segmental identities. Furthermore, Brpf1 may have a significant role in maintaining not only the anterior-posterior axis of the craniofacial skeleton, but also the dorsal-ventral axis of the caudal skeleton. Recent studies have shown that ablation of the mouse Brpf1 gene causes embryonic lethality at embryonic day 9.5. Specifically, Brpf1 regulates placental vascular formation, neural tube closure, primitive hematopoiesis and embryonic fibroblast proliferation. In the central nervous system, Brpf1 is highly expressed and is essential for the development of several important structures, including the neocortex and the dentate gyrus in the hippocampus. Brpf1 is dynamically expressed during forebrain development, especially during hippocampal neurogenesis. Brpf1 shares phenotypes with the transcription factors Sox2, Tlx and Tbr2 in dentate gyrus development and has a potential link to neural stem cells and progenitors. Beyond the forebrain, Brpf1 is also required for the proper patterning of the craniofacial cartilage, which is derived from neural crest cells that migrate from the hindbrain. Function: Cancer development Recently, Brpf1 was reported to play a tumor-suppressor or oncogenic role in several malignant tumors, including leukemia, medulloblastoma and endometrial stromal sarcoma. Brpf1 was considered a tumor suppressor gene because mutations in cancer cells appear to diminish the function of Brpf1. However, an oncogenic role of Brpf1 is also possible. For example, Brpf1 can form a stable complex with Moz-Tif2, which could lead to the development of human acute myeloid leukemia (AML). Another Brpf1-related complex, Brpf1–Ing5–Eaf6, also plays a direct role in cancer.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rendezvous protocol** Rendezvous protocol: A rendezvous protocol is a computer network protocol that enables resources or P2P network peers to find each other. A rendezvous protocol uses a handshaking model, unlike an eager protocol which directly copies the data. In a rendezvous protocol the data is sent when the destination says it is ready, but in an eager protocol the data is sent assuming the destination can store the data.Examples of rendezvous protocols include JXTA, SIP, Freenet Project, I2P, and such protocols generally involve hole punching. Rendezvous protocol: Because of firewall network address translation (NAT) issues, rendezvous protocols generally require that there be at least one unblocked and un-NATed server that lets the peers locate each other and initiate concurrent packets at each other.
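As a purely illustrative sketch (a toy model, not any real protocol's API), the difference between the two send styles described above can be phrased as follows: the eager sender pushes data and assumes the destination can buffer it, while the rendezvous sender first asks whether the destination is ready.

```python
# Toy contrast between eager and rendezvous sends; the Receiver class and its
# buffer accounting are invented for this sketch.
class Receiver:
    def __init__(self, buffer_bytes):
        self.buffer_bytes = buffer_bytes
        self.received = []

    def ready_for(self, size):          # rendezvous handshake: "are you ready?"
        return self.buffer_bytes >= size

    def accept(self, data):
        self.buffer_bytes -= len(data)
        self.received.append(data)

def eager_send(rx, data):
    # Data is shipped immediately, assuming the destination can store it.
    if not rx.ready_for(len(data)):
        raise BufferError("eager send overran the receiver's buffer")
    rx.accept(data)

def rendezvous_send(rx, data):
    # Data is shipped only after the destination says it is ready.
    if rx.ready_for(len(data)):
        rx.accept(data)
        return True
    return False                        # a real sender would block or retry later

rx = Receiver(buffer_bytes=8)
rendezvous_send(rx, b"hello")           # succeeds: the receiver reported readiness
```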
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flow focusing** Flow focusing: Flow focusing in fluid dynamics is a technology whose aim is the production of drops or bubbles by straightforward hydrodynamic means. The output is a dispersed liquid or gas, frequently in the form of a fine aerosol or an emulsion. No other driving force is required, apart from traditional pumping, a key difference with other comparable technologies, such as electrospray (where an electric field is needed). Both flow focusing and electrospray working in their most extensively used regime produce high quality sprays composed by homogeneous and well-controlled-size droplets. Flow focusing was invented by Prof. Alfonso M. Gañan-Calvo (who now teaches at ETSI in Seville) in 1994, patented in 1996, and published for the first time in 1998. Mechanism: The basic principle consists of a continuous phase fluid (focusing or sheath fluid) flanking or surrounding the dispersed phase (focused or core fluid), so as to give rise to droplet or bubble break-off in the vicinity of an orifice through which both fluids are extruded. The principle may be extended to two or more coaxial fluids; gases and liquids may be combined; and, depending on the geometry of the feed tube and orifices, the flow pattern may be cylindrical or planar. Both cylindrical and planar flow focusing have led to a variety of developments (see also the works of Peter Walzal). Mechanism: A flow focusing device consists of a pressure chamber pressurized with a continuous focusing fluid supply. Inside, one or more focused fluids are injected through a capillary feed tube whose extremity opens up in front of a small orifice, linking the pressure chamber with the exterior ambient. The focusing fluid stream moulds the fluid meniscus into a cusp giving rise to a steady micro or nano-jet exiting the chamber through the orifice; the jet size is much smaller than the exit orifice, thus precluding any contact (which may lead to unwanted deposition or reaction). Capillary instability breaks up the steady jet into homogeneous droplets or bubbles. Mechanism: The feed tube may be composed of two or more concentric needles and different immiscible liquids or gases to be injected, leading to compound drops. On being suitably cured, such drops may lead to multilayer microcapsules with multiple shells of controllable thickness. Flow focusing ensures an extremely fast as well as controlled production of up to millions of droplets per second as the jet breaks up. Mechanism: The role of the tangential viscous stress is essential in establishing a steady meniscus shape in flow focusing, as illustrated in the case of a simple liquid jet surrounded by a gas. In the absence of a sufficiently strong tangential stress, a round-apex meniscus is obtained. Both the inner liquid and the external gas flows would exhibit stagnation regions around the round apex. The surface tension stress σ/D would be simply balanced by an appropriate pressure jump across the interface. If one slowly pushes a liquid flow rate Q, the system would spit intermittently the excess of liquid to recover the round-apex equilibrium shape. However, when the tangential stress is sufficiently vigorous compared to σ /D, the surface can be deformed into a steady tapering shape, which allows the continuous and smooth acceleration of the liquid under the combined actions of the pressure drop ΔP and the tangential viscous stress τs on the liquid surface. 
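The stress balance in the preceding paragraph can be summarized compactly. The following is only a restatement of the scaling argument above in symbols (with D the local meniscus diameter), not a quantitative model:

```latex
% Static, round-apex meniscus: the capillary stress is balanced by a pressure jump alone
\Delta p \;\sim\; \frac{\sigma}{D}
% Steady tapering meniscus (flow focusing): the tangential viscous stress on the surface
% must be comparable to, or exceed, the capillary stress
\tau_s \;\gtrsim\; \frac{\sigma}{D}
```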
Applications: Flow focusing may be applied in the food, medicine, pharmaceutical, cosmetic, photographic and environmental industry, among other potential uses. The production of compound particles is an important field: drug encapsulation, dye-labeled particles and multiple-core particles can be cited. Other applications include flow cytometry and microfluidic circuits. Contrast agent such as droplets and Microbubbles can be produced in flow focusing microfluidics device.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mapathon** Mapathon: A mapathon (sometimes written map-a-thon) is a coordinated mapping event and a kind of editathon. The public is invited to make online map improvements in their local area to improve coverage and to help disaster risk assessment and energy management. Mapathons use an online site for storing map data, such as OpenStreetMap. Google Maps was also an option until 2017. A mapathon may be organized by a relevant organization, a non-profit organisation or a local government. Mapathon: Mapathons are often held indoors ("armchair mapping") in a room with strong Wi-Fi for simultaneous access, assisted by satellite imagery. Mapathons can also be an outdoor activity with online simultaneous map editing assisted by global positioning system trackers on mobile devices. History: In 2009, in Atlanta, the capital of the US state of Georgia, about 200 volunteers walked around the city with GPS-enabled devices and expanded OpenStreetMap. Google Mapathon was an annual event organized by Google that invited the public to make improvements to Google Maps through the Google Map Maker. Google Map Maker was officially shut down on March 31, 2017. In February and March 2013, in India, volunteers mapped local areas on Google Maps. The prizes included Samsung Galaxy Note tablets. Some locals, including the competition winner Vishal Saini, mapped sensitive military installations in Pathankot. In March 2013, a right-wing Bharatiya Janata Party member of parliament, Tarun Vijay, told authorities that mapping the area was against India's national map policy. Delhi police investigated the incident. Survey of India, an Indian government mapping agency, contacted Google. Google responded by denying the claim and asserting that the mapping was legal. In January 2016, following an attack on the military structures in Pathankot, the Delhi High Court ordered Google to appear in court in February, but did not make any rulings restricting Google from continuing to host the map data online. In May 2015, after a magnitude-7.9 earthquake in Nepal, online volunteers expanded the map of Nepal for two weeks. About 4,000 mapping volunteers made 91,951 edits, mapping 29,798 segments of road and 243,500 buildings, and also expanded maps of Botswana and the Philippines. The project was supported by MapGive and by the Humanitarian OpenStreetMap Team. In May 2015, at a White House event celebrating citizen cartography, about 80 volunteers edited and added more than 400 roads and 1,000 buildings in OpenStreetMap. The volunteers added power outage information for 152 utilities, and mapped US parks. In February 2016, in a hotel in central Paris, France, about 60 volunteers helped the Missing Maps humanitarian project to preemptively map vulnerable parts of the world on OpenStreetMap. They used satellite imagery and field data to add 4,000 buildings and nearly 170 kilometers (105 miles) of road in Uganda. Another twelve mapathons were scheduled to take place in the US and Europe. In the same month, February 2016, Missing Maps also organized a mapathon in Grenoble to map Tsangano District, Tete, Mozambique, to help respond to a local conflict between the country's main party and the opposition.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Compound (fortification)** Compound (fortification): In military science, a compound is a type of fortification made up of walls or fences surrounding several buildings in the center of a large piece of land. The walls can either serve the purpose of being tall, thick, and impenetrable, in which case they would be made of wood, stone, or some other like substance; or dangerous to attempt to scale, in which case they could be made of barbed wire or electrified. Compounds can be designed to double as living spaces and military structures in the middle of hostile territory or as a military area within a country's territory; they are also used by those who want to protect against threats to themselves or their property. Compound (fortification): A number of survivalists own fortified compound-like structures as a means of protection in case civilization breaks down or their government becomes abusive. The term compound is also used to refer to an unfortified enclosure, especially in Africa and Asia. See compound (enclosure). Specific group usage: Insurgent, militant, and terrorist groups alike have been known to maintain their own compounds which commonly feature training camps. The CSA, Al-Qaeda, various Ku Klux Klan factions, The Taliban, Aggressive Christianity Missionary Training Corps and ISIS are some of the organizations who use such.Large outlaw motorcycle clubs will typically use a compound for a particular chapter's headquarters. In motorcycle subculture, these are commonly referred to as "clubhouses".Many new religious movement organizations and cults have used compounds to facilitate their own privatized practices or activities. Additionally, they can serve as a residency for followers of that particular commune. Mount Carmel Center, Tama-Re, YFZ Ranch, and Gold Base are some of the most notable of these.Due to their offensive imagery and paraphernalia, large neo-Nazi and white supremacist groups will operate secluded compounds that can serve as a headquarters for a certain organization. In this particular movement, they are also used to hold gatherings and serve as venues for Nazi punk, Rock Against Communism and White power music concerts. The most famous of these would be the now-defunct Aryan Nations compound in Hayden Lake, Idaho. Other examples: Murphy Ranch - a now-defunct compound that belonged to the Silver Legion of America; designed for the group's Nazi activities in the United States during WWII. The Knights of the KKK currently operate a compound within Zinc, Arkansas. The headquarters of Imperial Klans of America consists of a heavily guarded 28-acre compound in Dawson Springs, Kentucky. Elohim City Bab al-Azizia Camp Nordland Osama bin Laden's compound in Abbottabad Camp Siegfried
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phonogram (linguistics)** Phonogram (linguistics): A phonogram is a grapheme (written character) which represents a phoneme (speech sound) or combination of phonemes, such as the letters of the Latin alphabet or Korean letter Hangul. For example, "igh" is an English-language phonogram that represents the sound in "high". Whereas the word phonemes refers to the sounds, the word phonogram refers to the letter(s) that represent that sound. Phonograms contrast with logograms, which represent words and morphemes (meaningful units of language), and determinatives, silent characters used to mark semantic categories.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Derek Corneil** Derek Corneil: Derek Gordon Corneil is a Canadian mathematician and computer scientist, a professor emeritus of computer science at the University of Toronto, and an expert in graph algorithms and graph theory. Life: When he was leaving high school, Corneil was told by his English teacher that doing a degree in mathematics and physics was a bad idea, and that the best he could hope for was to go to a technical college. His interest in computer science began when, as an undergraduate student at Queens College, he heard that a computer was purchased by the London Life insurance company in London, Ontario, where his father worked. As a freshman, he took a summer job operating the UNIVAC Mark II at the company. One of his main responsibilities was to operate a printer. An opportunity for a programming job with the company sponsoring his college scholarship appeared soon after. It was a chance that Corneil jumped at after being denied a similar position at London Life. There was an initial mix-up at his job as his overseer thought that he knew how to program the UNIVAC Mark II, and so he would easily transition to doing the same for the company's newly acquired IBM 1401 machine. However, Corneil did not have the assumed programming background. Thus, in the two-week window that Corneil had been given to learn how to grasp programming the IBM 1401, he learned how to write code from scratch by relying heavily on the instruction manual. This experience pushed him further on his way as did a number of projects he worked on in that position later on.Corneil went on to earn a bachelor's degree in mathematics and physics from Queen's University in 1964. Initially he had planned to do his graduate studies before becoming a high school teacher, but his acceptance into the brand new graduate program in computer science at the University of Toronto changed that. At the University of Toronto, Corneil earned a master's degree and then in 1968 a doctorate in computer science under the supervision of Calvin Gotlieb. (His post-doctoral supervisor was Jaap Seidel.) It was during this time that Corneil became interested in graph theory. He and Gotlieb eventually became good friends. After postdoctoral studies at the Eindhoven University of Technology, Corneil returned to Toronto as a faculty member in 1970. Before his retirement in 2010, Corneil held many positions at the University of Toronto, including Department Chair of the Computer Science department (July 1985 to June 1990), Director of Research Initiatives of the Faculty of Arts and Science (July 1991 to March 1998), and Acting Vice President of Research and International Relations (September to December 1993). During his time as a professor, he was also a visiting professor at universities such as the University of British Columbia, Simon Fraser University, the Université de Grenoble and the Université de Montpellier. Work: Corneil did his research in algorithmic graph theory and graph theory in general. He has overseen 49 theses and published over 100 papers on his own or with co-authors. 
These papers include: A proof that recognizing graphs of small treewidth is NP-complete,The discovery of the cotree representation for cographs and of fast recognition algorithms for cographs,Generating algorithms for graph isomorphism.Algorithmic and structural properties of complement reducible graphs.Properties of asteroidal triple-free graphs.An algorithm to solve the problem of determining whether a graph is a partial graph of a k-tree.Results addressing graph theoretic, algorithmic, and complexity issues with regard to tree spanners.An explanation of the relationship between tree width and clique-width.Determining the diameter of restricted graph families.Outlining the structure of trapezoid graphs.As a professor emeritus, Corneil still does research and is also an editor of several publications such as Ars Combinatoria and SIAM Monographs on Discrete Mathematics and Applications. Awards: He was inducted as a Fields Institute Fellow in 2004.
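One of the results listed above, the cotree representation of cographs, is easy to illustrate. The following sketch (purely illustrative; it is not Corneil's recognition algorithm) expands a cotree, whose leaves are vertices and whose internal nodes are labelled as disjoint union or join, into the cograph it represents.

```python
# Illustrative cotree expansion: leaves are vertices; an internal node combines its
# children either by disjoint union or by join (union plus all cross edges).
def expand(cotree):
    if isinstance(cotree, str):              # leaf: a single vertex, no edges
        return {cotree}, set()
    op, children = cotree                    # ("union" | "join", [subtrees])
    parts, verts, edges = [], set(), set()
    for child in children:
        v, e = expand(child)
        parts.append(v)
        verts |= v
        edges |= e
    if op == "join":                         # connect every pair of distinct parts
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                edges |= {frozenset((u, w)) for u in parts[i] for w in parts[j]}
    return verts, edges

# Example: join of two 2-vertex unions gives the complete bipartite graph K_{2,2}
vertices, edge_set = expand(("join", [("union", ["a", "b"]), ("union", ["c", "d"])]))
print(sorted(vertices))                                  # ['a', 'b', 'c', 'd']
print(sorted(tuple(sorted(e)) for e in edge_set))        # ac, ad, bc, bd
```

Graphs expressible this way are exactly the P4-free graphs, i.e. the complement reducible graphs mentioned above; the path on four vertices admits no such cotree.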
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quantum nonlocality** Quantum nonlocality: In theoretical physics, quantum nonlocality refers to the phenomenon by which the measurement statistics of a multipartite quantum system do not admit an interpretation in terms of a local realistic theory. Quantum nonlocality has been experimentally verified under different physical assumptions. Any physical theory that aims at superseding or replacing quantum theory should account for such experiments and therefore cannot fulfill local realism; quantum nonlocality is a property of the universe that is independent of our description of nature. Quantum nonlocality: Quantum nonlocality does not allow for faster-than-light communication or action-at-a-distance, and hence is compatible with special relativity and its universal speed limit for objects. Thus, quantum theory is local in the strict sense defined by special relativity and, as such, the term "quantum nonlocality" is sometimes considered a misnomer. Still, it prompts many of the foundational discussions concerning quantum theory. History: Einstein, Podolsky and Rosen In 1935, Einstein, Podolsky and Rosen published a thought experiment with which they hoped to expose the incompleteness of the Copenhagen interpretation of quantum mechanics in relation to the violation of local causality at the microscopic scale that it described. Afterwards, Einstein presented a variant of these ideas in a letter to Erwin Schrödinger, which is the version that is presented here. The state and notation used here are more modern, and akin to David Bohm's take on EPR. The quantum state of the two particles prior to measurement can be written as the singlet state |ψ⟩AB = 1/√2 (|0⟩A|1⟩B − |1⟩A|0⟩B) = 1/√2 (|−⟩A|+⟩B − |+⟩A|−⟩B), where |±⟩ = 1/√2 (|0⟩ ± |1⟩). Here, subscripts “A” and “B” distinguish the two particles, though it is more convenient and usual to refer to these particles as being in the possession of two experimentalists called Alice and Bob. The rules of quantum theory give predictions for the outcomes of measurements performed by the experimentalists. Alice, for example, will measure her particle to be spin-up in an average of fifty percent of measurements. However, according to the Copenhagen interpretation, Alice's measurement causes the state of the two particles to collapse, so that if Alice performs a measurement of spin in the z-direction, that is with respect to the basis {|0⟩A,|1⟩A}, then Bob's system will be left in one of the states {|0⟩B,|1⟩B}. Likewise, if Alice performs a measurement of spin in the x-direction, that is, with respect to the basis {|+⟩A,|−⟩A}, then Bob's system will be left in one of the states {|+⟩B,|−⟩B}. Schrödinger referred to this phenomenon as "steering". This steering occurs in such a way that no signal can be sent by performing such a state update; quantum nonlocality cannot be used to send messages instantaneously and is therefore not in direct conflict with causality concerns in special relativity. In the Copenhagen view of this experiment, Alice's measurement—and particularly her measurement choice—has a direct effect on Bob's state. However, under the assumption of locality, actions on Alice's system do not affect the "true", or "ontic" state of Bob's system. We see that the ontic state of Bob's system must be compatible with one of the quantum states |↑⟩B or |↓⟩B, since Alice can make a measurement that concludes with one of those states being the quantum description of his system.
At the same time, it must also be compatible with one of the quantum states |←⟩B or |→⟩B for the same reason. Therefore, the ontic state of Bob's system must be compatible with at least two quantum states; the quantum state is therefore not a complete descriptor of his system. Einstein, Podolsky and Rosen saw this as evidence of the incompleteness of the Copenhagen interpretation of quantum theory, since the wavefunction is explicitly not a complete description of a quantum system under this assumption of locality. Their paper concludes: While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible. Although various authors (most notably Niels Bohr) criticised the ambiguous terminology of the EPR paper, the thought experiment nevertheless generated a great deal of interest. Their notion of a "complete description" was later formalised by the suggestion of hidden variables that determine the statistics of measurement results, but to which an observer does not have access. Bohmian mechanics provides such a completion of quantum mechanics, with the introduction of hidden variables; however the theory is explicitly nonlocal. The interpretation therefore does not give an answer to Einstein's question, which was whether or not a complete description of quantum mechanics could be given in terms of local hidden variables in keeping with the "Principle of Local Action". History: Bell inequality In 1964 John Bell answered Einstein's question by showing that such local hidden variables can never reproduce the full range of statistical outcomes predicted by quantum theory. Bell showed that a local hidden variable hypothesis leads to restrictions on the strength of correlations of measurement results. If the Bell inequalities are violated experimentally as predicted by quantum mechanics, then reality cannot be described by local hidden variables and the mystery of quantum nonlocal causation remains. According to Bell: This [grossly nonlocal structure] is characteristic ... of any such theory which reproduces exactly the quantum mechanical predictions. History: Clauser, Horne, Shimony and Holt (CHSH) reformulated these inequalities in a manner that was more conducive to experimental testing (see CHSH inequality).In the scenario proposed by Bell (a Bell scenario), two experimentalists, Alice and Bob, conduct experiments in separate labs. At each run, Alice (Bob) conducts an experiment x (y) in her (his) lab, obtaining outcome a (b) . If Alice and Bob repeat their experiments several times, then they can estimate the probabilities P(a,b|x,y) , namely, the probability that Alice and Bob respectively observe the results a,b when they respectively conduct the experiments x,y. In the following, each such set of probabilities {P(a,b|x,y):a,b,x,y} will be denoted by just P(a,b|x,y) . In the quantum nonlocality slang, P(a,b|x,y) is termed a box.Bell formalized the idea of a hidden variable by introducing the parameter λ to locally characterize measurement results on each system: "It is a matter of indifference ... whether λ denotes a single variable or a set ... and whether the variables are discrete or continuous". However, it is equivalent (and more intuitive) to think of λ as a local "strategy" or "message" that occurs with some probability ρ(λ) when Alice and Bob reboot their experimental setup. 
EPR's criteria of local separability then stipulates that each local strategy defines the distributions of independent outcomes if Alice conducts experiment x and Bob conducts experiment y: P(a,b|x,y,λA,λB) = PA(a|x,λA) PB(b|y,λB). Here PA(a|x,λA) (PB(b|y,λB)) denotes the probability that Alice (Bob) obtains the result a (b) when she (he) conducts experiment x (y) and the local variable describing her (his) experiment has value λA (λB). History: Suppose that λA,λB can take values from some set Λ. If each pair of values λA,λB∈Λ has an associated probability ρ(λA,λB) of being selected (shared randomness is allowed, i.e., λA,λB can be correlated), then one can average over this distribution to obtain a formula for the joint probability of each measurement result: P(a,b|x,y) = ∑λA,λB∈Λ ρ(λA,λB) PA(a|x,λA) PB(b|y,λB), with the sum replaced by an integral when Λ is continuous. A box admitting such a decomposition is called a Bell local or a classical box. Fixing the number of possible values which a,b,x,y can each take, one can represent each box P(a,b|x,y) as a finite vector with entries (P(a,b|x,y))a,b,x,y. In that representation, the set of all classical boxes forms a convex polytope. History: In the Bell scenario studied by CHSH, where a,b,x,y can take values in {0,1}, any Bell local box P(a,b|x,y) must satisfy the CHSH inequality |SCHSH| ≤ 2, with SCHSH = E(0,0) + E(0,1) + E(1,0) − E(1,1), where E(x,y) = P(a=b|x,y) − P(a≠b|x,y) is the correlator of the outcomes for the setting pair (x,y). The above considerations apply to model a quantum experiment. Consider two parties conducting local polarization measurements on a bipartite photonic state. The measurement result for the polarization of a photon can take one of two values (informally, whether the photon is polarized in that direction, or in the orthogonal direction). If each party is allowed to choose between just two different polarization directions, the experiment fits within the CHSH scenario. As noted by CHSH, there exist a quantum state and polarization directions which generate a box P(a,b|x,y) with SCHSH equal to 2√2 ≈ 2.828. This demonstrates an explicit way in which a theory with ontological states that are local, with local measurements and only local actions cannot match the probabilistic predictions of quantum theory, disproving Einstein's hypothesis. Experimentalists such as Alain Aspect have verified the quantum violation of the CHSH inequality as well as other formulations of Bell's inequality, to invalidate the local hidden variables hypothesis and confirm that reality is indeed nonlocal in the EPR sense. Possibilistic nonlocality: The demonstration of nonlocality due to Bell is probabilistic in the sense that it shows that the precise probabilities predicted by quantum mechanics for some entangled scenarios cannot be met by a local theory. (For short, here and henceforth "local theory" means "local hidden variables theory".) However, quantum mechanics permits an even stronger violation of local theories: a possibilistic one, in which local theories cannot even agree with quantum mechanics on which events are possible or impossible in an entangled scenario. The first proof of this kind was due to Daniel Greenberger, Michael Horne, and Anton Zeilinger in 1993. The state involved is often called the GHZ state. In 1993, Lucien Hardy demonstrated a logical proof of quantum nonlocality that, like the GHZ proof, is a possibilistic proof. It starts with the observation that the state |ψ⟩ defined below can be written in a few suggestive ways: |ψ⟩ = 1/√3 (|00⟩ + |01⟩ + |10⟩) = 1/√3 (√2 |0⟩|+⟩ + |1⟩|0⟩) = 1/√3 (√2 |+⟩|0⟩ + |0⟩|1⟩), where, as above, |±⟩ = 1/√2 (|0⟩ ± |1⟩). The experiment consists of this entangled state being shared between two experimenters, each of whom has the ability to measure either with respect to the basis {|0⟩,|1⟩} or {|+⟩,|−⟩}.
We see that if they each measure with respect to {|0⟩,|1⟩}, then they never see the outcome |11⟩. If one measures with respect to {|0⟩,|1⟩} and the other {|+⟩,|−⟩}, they never see the outcomes |−0⟩, |0−⟩. Possibilistic nonlocality: However, sometimes they see the outcome |−−⟩ when measuring with respect to {|+⟩,|−⟩}, since ⟨−−|ψ⟩ ≠ 0. Possibilistic nonlocality: This leads to the paradox: having the outcome |−−⟩ we conclude that if one of the experimenters had measured with respect to the {|0⟩,|1⟩} basis instead, the outcome must have been |−1⟩ or |1−⟩, since |−0⟩ and |0−⟩ are impossible. But then, if they had both measured with respect to the {|0⟩,|1⟩} basis, by locality the result must have been |11⟩, which is also impossible. Nonlocal hidden variable models with a finite propagation speed: The work of Bancal et al. generalizes Bell's result by proving that correlations achievable in quantum theory are also incompatible with a large class of superluminal hidden variable models. In this framework, faster-than-light signaling is precluded. However, the choice of settings of one party can influence hidden variables at another party's distant location, if there is enough time for a superluminal influence (of finite, but otherwise unknown speed) to propagate from one point to the other. In this scenario, any bipartite experiment revealing Bell nonlocality can just provide lower bounds on the hidden influence's propagation speed. Quantum experiments with three or more parties can, nonetheless, disprove all such non-local hidden variable models. Analogs of Bell’s theorem in more complicated causal structures: The random variables measured in a general experiment can depend on each other in complicated ways. In the field of causal inference, such dependencies are represented via Bayesian networks: directed acyclic graphs where each node represents a variable and an edge from a variable to another signifies that the former influences the latter and not otherwise. Analogs of Bell’s theorem in more complicated causal structures: In a standard bipartite Bell experiment, Alice's (Bob's) setting x (y), together with her (his) local variable λA (λB), influences her (his) local outcome a (b). Bell's theorem can thus be interpreted as a separation between the quantum and classical predictions in a type of causal structure with just one hidden node (λA,λB). Similar separations have been established in other types of causal structures. The characterization of the boundaries for classical correlations in such extended Bell scenarios is challenging, but there exist complete practical computational methods to achieve it. Entanglement and nonlocality: Quantum nonlocality is sometimes understood as being equivalent to entanglement. However, this is not the case. Quantum entanglement can be defined only within the formalism of quantum mechanics, i.e., it is a model-dependent property. In contrast, nonlocality refers to the impossibility of a description of observed statistics in terms of a local hidden variable model, so it is independent of the physical model used to describe the experiment. Entanglement and nonlocality: It is true that for any pure entangled state there exists a choice of measurements that produce Bell nonlocal correlations, but the situation is more complex for mixed states.
While any Bell nonlocal state must be entangled, there exist (mixed) entangled states which do not produce Bell nonlocal correlations (although, operating on several copies of some of such states, or carrying out local post-selections, it is possible to witness nonlocal effects). Moreover, while there are catalysts for entanglement, there are none for nonlocality. Finally, reasonably simple examples of Bell inequalities have been found for which the quantum state giving the largest violation is never a maximally entangled state, showing that entanglement is, in some sense, not even proportional to nonlocality. Quantum correlations: As shown, the statistics achievable by two or more parties conducting experiments in a classical system are constrained in a non-trivial way. Analogously, the statistics achievable by separate observers in a quantum theory also happen to be restricted. The first derivation of a non-trivial statistical limit on the set of quantum correlations, due to B. Tsirelson, is known as Tsirelson's bound. Quantum correlations: Consider the CHSH Bell scenario detailed before, but this time assume that, in their experiments, Alice and Bob are preparing and measuring quantum systems. In that case, the CHSH parameter can be shown to be bounded by −2√2 ≤ SCHSH ≤ 2√2. Quantum correlations: The sets of quantum correlations and Tsirelson’s problem Mathematically, a box P(a,b|x,y) admits a quantum realization if and only if there exists a pair of Hilbert spaces HA, HB, a normalized vector |ψ⟩∈HA⊗HB and projection operators Eax: HA→HA, Fby: HB→HB such that: (1) for all x,y, the sets {Eax}a, {Fby}b represent complete measurements, namely ∑a Eax = IA and ∑b Fby = IB; and (2) P(a,b|x,y) = ⟨ψ|Eax ⊗ Fby|ψ⟩ for all a,b,x,y. In the following, the set of such boxes will be called Q. Contrary to the classical set of correlations, when viewed in probability space, Q is not a polytope. On the contrary, it contains both straight and curved boundaries. In addition, Q is not closed: this means that there exist boxes P(a,b|x,y) which can be arbitrarily well approximated by quantum systems but are themselves not quantum. Quantum correlations: In the above definition, the space-like separation of the two parties conducting the Bell experiment was modeled by imposing that their associated operator algebras act on different factors HA, HB of the overall Hilbert space H = HA⊗HB describing the experiment. Alternatively, one could model space-like separation by imposing that these two algebras commute. This leads to a different definition: P(a,b|x,y) admits a field quantum realization if and only if there exists a Hilbert space H, a normalized vector |ψ⟩∈H and projection operators Eax: H→H, Fby: H→H such that: (1) for all x,y, the sets {Eax}a, {Fby}b represent complete measurements, namely ∑a Eax = I and ∑b Fby = I; (2) P(a,b|x,y) = ⟨ψ|Eax Fby|ψ⟩ for all a,b,x,y; and (3) [Eax, Fby] = 0 for all a,b,x,y. Call Qc the set of all such correlations P(a,b|x,y). How does this new set relate to the more conventional Q defined above? It can be proven that Qc is closed. Moreover, Q¯⊆Qc, where Q¯ denotes the closure of Q. Tsirelson's problem consists in deciding whether the inclusion relation Q¯⊆Qc is strict, i.e., whether or not Q¯=Qc.
This problem only appears in infinite dimensions: when the Hilbert space H in the definition of Qc is constrained to be finite-dimensional, the closure of the corresponding set equals Q¯. In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed a result in quantum complexity theory that would imply that Q¯≠Qc, thus solving Tsirelson's problem. Tsirelson's problem can be shown equivalent to Connes embedding problem, a famous conjecture in the theory of operator algebras. Quantum correlations: Characterization of quantum correlations Since the dimensions of HA and HB are, in principle, unbounded, determining whether a given box P(a,b|x,y) admits a quantum realization is a complicated problem. In fact, the dual problem of establishing whether a quantum box can have a perfect score at a non-local game is known to be undecidable. Moreover, the problem of deciding whether P(a,b|x,y) can be approximated by a quantum system with precision 1/poly(|X||Y|) is NP-hard. Characterizing quantum boxes is equivalent to characterizing the cone of completely positive semidefinite matrices under a set of linear constraints. For small fixed dimensions dA, dB, one can explore, using variational methods, whether P(a,b|x,y) can be realized in a bipartite quantum system HA⊗HB, with dim(HA) = dA, dim(HB) = dB. That method, however, can just be used to prove the realizability of P(a,b|x,y), and not its unrealizability with quantum systems. Quantum correlations: To prove unrealizability, the best-known method is the Navascués–Pironio–Acín (NPA) hierarchy. This is an infinite decreasing sequence of sets of correlations Q1⊃Q2⊃Q3⊃... Quantum correlations: with the properties: (1) if P(a,b|x,y)∈Qc, then P(a,b|x,y)∈Qk for all k; (2) if P(a,b|x,y)∉Qc, then there exists k such that P(a,b|x,y)∉Qk; and (3) for any k, deciding whether P(a,b|x,y)∈Qk can be cast as a semidefinite program. The NPA hierarchy thus provides a computational characterization, not of Q, but of Qc. If Tsirelson's problem is solved in the affirmative, namely, Q¯=Qc, then the above two methods would provide a practical characterization of Q¯. If, on the contrary, Q¯≠Qc, then a new method to detect the non-realizability of the correlations in Qc−Q¯ is needed. Quantum correlations: The physics of supra-quantum correlations The works listed above describe what the quantum set of correlations looks like, but they do not explain why. Are quantum correlations unavoidable, even in post-quantum physical theories, or on the contrary, could there exist correlations outside Q¯ which nonetheless do not lead to any unphysical operational behavior? In their seminal 1994 paper, Popescu and Rohrlich explore whether quantum correlations can be explained by appealing to relativistic causality alone. Namely, whether any hypothetical box P(a,b|x,y)∉Q¯ would allow building a device capable of transmitting information faster than the speed of light. At the level of correlations between two parties, Einstein's causality translates into the requirement that Alice's measurement choice should not affect Bob's statistics, and vice versa. Otherwise, Alice (Bob) could signal Bob (Alice) instantaneously by choosing her (his) measurement setting x (y) appropriately. Mathematically, Popescu and Rohrlich's no-signalling conditions are: ∑a P(a,b|x,y) = ∑a P(a,b|x′,y) for all b, y, x, x′, and ∑b P(a,b|x,y) = ∑b P(a,b|x,y′) for all a, x, y, y′. Like the set of classical boxes, when represented in probability space, the set of no-signalling boxes forms a polytope.
Popescu and Rohrlich identified a box P(a,b|x,y) that, while complying with the no-signalling conditions, violates Tsirelson's bound, and is thus unrealizable in quantum physics. Dubbed the PR-box, it can be written as: P(a,b|x,y) = 1/2 if a⊕b = xy, and P(a,b|x,y) = 0 otherwise. Here a,b,x,y take values in {0,1}, and a⊕b denotes the sum modulo two. It can be verified that the CHSH value of this box is 4 (as opposed to the Tsirelson bound of 2√2 ≈ 2.828). This box had been identified earlier, by Rastall and Khalfin and Tsirelson. In view of this mismatch, Popescu and Rohrlich pose the problem of identifying a physical principle, stronger than the no-signalling conditions, that allows deriving the set of quantum correlations. Several proposals followed: Non-trivial communication complexity (NTCC). This principle stipulates that nonlocal correlations should not be so strong as to allow two parties to solve all 1-way communication problems with some probability p>1/2 using just one bit of communication. It can be proven that any box violating Tsirelson's bound by more than 0.4377 is incompatible with NTCC. Quantum correlations: No Advantage for Nonlocal Computation (NANLC). The following scenario is considered: given a function f: {0,1}^n → {0,1}, two parties are distributed strings of n bits x,y and asked to output bits a,b so that a⊕b is a good guess for f(x⊕y). The principle of NANLC states that non-local boxes should not give the two parties any advantage in playing this game. It is proven that any box violating Tsirelson's bound would provide such an advantage. Quantum correlations: Information Causality (IC). The starting point is a bipartite communication scenario where one of the parties (Alice) is handed a random string x of n bits. The second party, Bob, receives a random number k∈{1,...,n}. Their goal is to transmit to Bob the bit xk, for which purpose Alice is allowed to send Bob s bits. The principle of IC states that the sum over k of the mutual information between Alice's bit and Bob's guess cannot exceed the number s of bits transmitted by Alice. It is shown that any box violating Tsirelson's bound would allow two parties to violate IC. Quantum correlations: Macroscopic Locality (ML). In the considered setup, two separate parties conduct extensive low-resolution measurements over a large number of independently prepared pairs of correlated particles. ML states that any such “macroscopic” experiment must admit a local hidden variable model. It is proven that any microscopic experiment capable of violating Tsirelson's bound would also violate standard Bell nonlocality when brought to the macroscopic scale. Besides Tsirelson's bound, the principle of ML fully recovers the set of all two-point quantum correlators. Quantum correlations: Local Orthogonality (LO). This principle applies to multipartite Bell scenarios, where n parties respectively conduct experiments x1,...,xn in their local labs. They respectively obtain the outcomes a1,...,an. The pair of vectors (a¯|x¯) is called an event. Two events (a¯|x¯), (a¯′|x¯′) are said to be locally orthogonal if there exists k such that xk=xk′ and ak≠ak′. The principle of LO states that, for any multipartite box, the sum of the probabilities of any set of pair-wise locally orthogonal events cannot exceed 1. It is proven that any bipartite box violating Tsirelson's bound by an amount of 0.052 violates LO. All these principles can be experimentally falsified under the assumption that we can decide if two or more events are space-like separated.
This sets this research program apart from the axiomatic reconstruction of quantum mechanics via Generalized Probabilistic Theories. Quantum correlations: The works above rely on the implicit assumption that any physical set of correlations must be closed under wirings. This means that any effective box built by combining the inputs and outputs of a number of boxes within the considered set must also belong to the set. Closure under wirings does not seem to enforce any limit on the maximum value of CHSH. However, it is not a void principle: on the contrary, it has been shown that many simple, intuitive families of sets of correlations in probability space happen to violate it. Quantum correlations: Originally, it was unknown whether any of these principles (or a subset thereof) was strong enough to derive all the constraints defining Q¯. This state of affairs continued for some years until the construction of the almost quantum set Q~. Q~ is a set of correlations that is closed under wirings and can be characterized via semidefinite programming. It contains all correlations in Qc⊃Q¯, but also some non-quantum boxes P(a,b|x,y)∉Qc. Remarkably, all boxes within the almost quantum set are shown to be compatible with the principles of NTCC, NANLC, ML and LO. There is also numerical evidence that almost-quantum boxes also comply with IC. It seems, therefore, that, even when the above principles are taken together, they do not suffice to single out the quantum set in the simplest Bell scenario of two parties, two inputs and two outputs. Device independent protocols: Nonlocality can be exploited to conduct quantum information tasks which do not rely on knowledge of the inner workings of the preparation and measurement apparatuses involved in the experiment. The security or reliability of any such protocol just depends on the strength of the experimentally measured correlations P(a,b|x,y). These protocols are termed device-independent. Device independent protocols: Device-independent quantum key distribution The first device-independent protocol proposed was device-independent quantum key distribution (QKD). In this primitive, two distant parties, Alice and Bob, are distributed an entangled quantum state, which they probe, thus obtaining the statistics P(a,b|x,y). Based on how non-local the box P(a,b|x,y) happens to be, Alice and Bob estimate how much knowledge an external quantum adversary Eve (the eavesdropper) could possess on the value of Alice and Bob's outputs. This estimation allows them to devise a reconciliation protocol at the end of which Alice and Bob share a perfectly correlated one-time pad of which Eve has no information whatsoever. The one-time pad can then be used to transmit a secret message through a public channel. Although the first security analyses on device-independent QKD relied on Eve carrying out a specific family of attacks, all such protocols have been recently proven unconditionally secure. Device independent protocols: Device-independent randomness certification, expansion and amplification Nonlocality can be used to certify that the outcomes of one of the parties in a Bell experiment are partially unknown to an external adversary. By feeding a partially random seed to several non-local boxes and processing the outputs, one can end up with a longer (potentially unbounded) string of comparable randomness or with a shorter but more random string.
This last primitive can be proven impossible in a classical setting.Device-independent (DI) randomness certification, expansion, and amplification are techniques used to generate high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate random numbers. These techniques have critical applications in cryptography, where high-quality random numbers are essential for ensuring the security of cryptographic protocols. Device independent protocols: Randomness certification is the process of verifying that the output of a random number generator is truly random and has not been tampered with by an adversary. DI randomness certification does this verification without making assumptions about the underlying devices that generate random numbers. Instead, randomness is certified by observing correlations between the outputs of different devices that are generated using the same physical process. Recent research has demonstrated the feasibility of DI randomness certification using entangled quantum systems, such as photons or electrons. Randomness expansion is taking a small amount of initial random seed and expanding it into a much larger sequence of random numbers. In DI randomness expansion, the expansion is done using measurements of quantum systems that are prepared in a highly entangled state. The security of the expansion is guaranteed by the laws of quantum mechanics, which make it impossible for an adversary to predict the expansion output. Recent research has shown that DI randomness expansion can be achieved using entangled photon pairs and measurement devices that violate a Bell inequality. Device independent protocols: Randomness amplification is the process of taking a small amount of initial random seed and increasing its randomness by using a cryptographic algorithm. In DI randomness amplification, this process is done using entanglement properties and quantum mechanics. The security of the amplification is guaranteed by the fact that any attempt by an adversary to manipulate the algorithm's output will inevitably introduce errors that can be detected and corrected. Recent research has demonstrated the feasibility of DI randomness amplification using quantum entanglement and the violation of a Bell inequality.DI randomness certification, expansion, and amplification are powerful techniques for generating high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate random numbers. These techniques have critical applications in cryptography and are likely to become increasingly crucial as quantum computing technology advances. In addition, a milder approach called semi-DI exists where random numbers can be generated with some assumptions on the working principle of the devices, environment, dimension, energy, etc., in which it benefits from ease-of-implementation and high generation rate. Device independent protocols: Self-testing Sometimes, the box P(a,b|x,y) shared by Alice and Bob is such that it only admits a unique quantum realization. This means that there exist measurement operators Eax,Fby and a quantum state |ψ⟩ giving rise to P(a,b|x,y) such that any other physical realization E~ax,F~by,|ψ~⟩ of P(a,b|x,y) is connected to Eax,Fby,|ψ⟩ via local unitary transformations. This phenomenon, that can be interpreted as an instance of device-independent quantum tomography, was first pointed out by Tsirelson and named self-testing by Mayers and Yao. 
Self-testing is known to be robust against systematic noise, i.e., if the experimentally measured statistics are close enough to P(a,b|x,y) , one can still determine the underlying state and measurement operators up to error bars. Device independent protocols: Dimension witnesses The degree of non-locality of a quantum box P(a,b|x,y) can also provide lower bounds on the Hilbert space dimension of the local systems accessible to Alice and Bob. This problem is equivalent to deciding the existence of a matrix with low completely positive semidefinite rank. Finding lower bounds on the Hilbert space dimension based on statistics happens to be a hard task, and current general methods only provide very low estimates. However, a Bell scenario with five inputs and three outputs suffices to provide arbitrarily high lower bounds on the underlying Hilbert space dimension. Quantum communication protocols which assume a knowledge of the local dimension of Alice and Bob's systems, but otherwise do not make claims on the mathematical description of the preparation and measuring devices involved are termed semi-device independent protocols. Currently, there exist semi-device independent protocols for quantum key distribution and randomness expansion.
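To make the CHSH discussion earlier in this article concrete, the following is a minimal numerical sketch, not a standard library routine: it assumes the singlet state and one particular optimal choice of measurement angles (those specific angles are an illustrative assumption), evaluates the CHSH parameter, and also checks the algebraic maximum of 4 reached by the PR-box defined above.

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state (1/sqrt(2))(|01> - |10>).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin_obs(theta):
    """±1-valued spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def correlator(theta_a, theta_b):
    """E(x, y) = <psi| A ⊗ B |psi> for the chosen measurement angles."""
    ab = np.kron(spin_obs(theta_a), spin_obs(theta_b))
    return np.real(psi.conj() @ ab @ psi)

# Measurement angles (one illustrative optimal choice, not unique):
alice = [0.0, np.pi / 2]
bob = [np.pi / 4, -np.pi / 4]

s_chsh = (correlator(alice[0], bob[0]) + correlator(alice[0], bob[1])
          + correlator(alice[1], bob[0]) - correlator(alice[1], bob[1]))
print(abs(s_chsh))   # ~2.828..., i.e. 2*sqrt(2), violating the local bound of 2

# For comparison, the PR-box reaches the algebraic maximum of 4:
def pr_box(a, b, x, y):
    """P(a, b | x, y) = 1/2 if a XOR b == x AND y, else 0."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def box_correlator(box, x, y):
    return sum((-1) ** (a ^ b) * box(a, b, x, y) for a in (0, 1) for b in (0, 1))

s_pr = (box_correlator(pr_box, 0, 0) + box_correlator(pr_box, 0, 1)
        + box_correlator(pr_box, 1, 0) - box_correlator(pr_box, 1, 1))
print(s_pr)  # 4.0
```

Any Bell local box, by contrast, satisfies |SCHSH| ≤ 2, which is exactly what the printed value of about 2.83 violates.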
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Schmidt–Pechan prism** Schmidt–Pechan prism: A Schmidt–Pechan prism is a type of optical prism used to rotate an image by 180°. These prisms are commonly used in binoculars as an image erecting system. The Schmidt–Pechan prism makes use of a roof prism section (from the German: "Dachkante", lit. roof edge). Binocular designs using Schmidt–Pechan prisms can be constructed more compactly than ones using Porro prisms or Uppendahl and Abbe–Koenig roof prisms. Schmidt–Pechan prism: A Schmidt–Pechan prism is sometimes called a Pechan prism pair. Method of operation: The Schmidt-Pechan is based on the Pechan prism design: both are a composite of two prisms, separated by an air gap. Because of the air gap there are four glass/air transition surfaces. The Pechan design will invert or revert (flip) the image, depending on the orientation of the prism, but not both at the same time. For the Schmidt-Pechan design, the upper prism from the Pechan design is replaced with a Schmidt roof prism, so the Schmidt–Pechan prism can both invert and revert the image and so act as an image rotator. The lower prism is known as a half-pentaprism or Bauernfeind prism. The image's handedness is not changed by the Schmidt-Pechan. Method of operation: The design of the two prisms is such that the entrance beam and exit beam are coaxial, i.e. the Schmidt–Pechan prism does not deviate the beam if it is centered on the optical axis. The "roof" section of the upper prism flips (reverts) the image laterally with two total internal reflections in the horizontal plane from the roof surface: once on each side of the roof. This latter pair of reflections can be considered as one reflection in the vertical plane. Both inversion and reversion together cause a 180° rotation of the image, but in doing so deviate the path by 45°. The lower prism corrects for this by interfacing the beam at 45° with the upper prism. The lower prism uses one total internal reflection, followed by a second reflection on the bottom surface to direct the beam into the second Schmidt prism. This second reflection in the lower prism happens at less than the critical angle; therefore, the Schmidt–Pechan prism requires a reflective coating for this surface to be usable in practice. This is unlike other roof prisms, such as the Abbe–Koenig prism, which use total internal reflection on all reflective surfaces. Method of operation: The net effect of the six reflections (two of which are on the roof planes) is to flip the image both vertically and horizontally. Problems with the Schmidt–Pechan prism: The Schmidt–Pechan roof prism is, from a purely technical point of view, a rather complicated roof prism design. Light entering the Schmidt–Pechan design is reflected more times, and less efficiently, than in the Abbe–Koenig prism design. Problems with the Schmidt–Pechan prism: Glass–air transitions All of the entry and exit surfaces must be optically coated to minimize losses, though the type of coating has to be carefully chosen as the same faces of the prism act both as entry faces (which call for a good anti-reflection coating) and as internally reflective faces (which require a coating that maximizes reflection). A paper, "Progress in Binocular Design", by Konrad Seil at Swarovski Optik shows that single-layer anti-reflective coatings on these surfaces maximized image contrast. Problems with the Schmidt–Pechan prism: Reflection losses As the incidence angle on the lower surface of the lower prism is less than the critical angle, total internal reflection does not occur.
To mitigate this problem, a mirror coating is used on this surface. Typically an aluminum mirror coating (reflectivity of 87% to 93%) or silver mirror coating (reflectivity of 95% to 98%) is used. The transmission of the prism can be further improved by using a dielectric coating rather than a metallic mirror coating. This causes the prism surfaces to act as a dielectric mirror. A well-designed dielectric coating can provide a reflectivity of more than 99% across the visible light spectrum. This reflectivity is much improved compared to either an aluminum or a silver mirror coating, and the performance of the Schmidt–Pechan prism then approaches that of the Porro prism or the Abbe–Koenig prism. The necessary mirror coating not only adds a manufacturing step, but also makes the Schmidt–Pechan roof prism lossier than image erectors using Porro or Abbe–Koenig prisms, which rely only on total internal reflection. A dielectric mirror coating is comparable in reflection efficiency, but makes the Schmidt-Pechan more expensive. Problems with the Schmidt–Pechan prism: Phase correction The Schmidt-Pechan furthermore shares the phase-correction problems of other roof prisms. Schmidt-Pechan and other roof prism binoculars benefit from phase-correction coatings to minimize these problems and substantially improve resolution and contrast. Commercial market share in binoculars: Despite these technical complications, the Schmidt–Pechan design results in lighter, more compact and cheaper roof prism binoculars. By the early 2020s, Schmidt–Pechan prism binoculars had become the dominant optical design by commercial market share compared to other prism type designs.
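To see roughly how the coating choice feeds into overall light throughput, here is a back-of-the-envelope sketch. Only the aluminum, silver and dielectric reflectivity ranges come from the text above; the per-surface anti-reflection transmission, and the simplification to five lossless total internal reflections plus one mirror-coated reflection, are illustrative assumptions rather than measured data.

```python
def schmidt_pechan_throughput(ar_transmission, mirror_reflectivity):
    """Very rough throughput estimate for a Schmidt-Pechan erecting system.

    Assumes four glass/air surfaces (two prisms with an air gap), each with the
    given anti-reflection-coated transmission, five lossless total internal
    reflections, and one mirror-coated reflection (the sub-critical-angle
    reflection on the lower prism). Absorption in the glass is ignored.
    """
    glass_air = ar_transmission ** 4       # entry/exit surfaces of both prisms
    tir = 1.0 ** 5                         # total internal reflections are lossless
    return glass_air * tir * mirror_reflectivity

ar = 0.995  # assumed 99.5% transmission per AR-coated surface (illustrative)
for name, r in [("aluminium ~0.90 ", 0.90),
                ("silver    ~0.965", 0.965),
                ("dielectric ~0.995", 0.995)]:
    print(f"{name}: ~{schmidt_pechan_throughput(ar, r):.1%} throughput")
# aluminium gives roughly 88%, silver roughly 95%, dielectric roughly 98%
```

Under these assumptions the single mirrored surface accounts for most of the difference between coating types, which is why dielectric coatings bring the Schmidt–Pechan close to all-TIR erecting systems.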
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**G/M/1 queue** G/M/1 queue: In queueing theory, a discipline within the mathematical theory of probability, the G/M/1 queue represents the queue length in a system where interarrival times have a general (meaning arbitrary) distribution and service times for each job have an exponential distribution. The system is described in Kendall's notation where the G denotes a general distribution, M the exponential distribution for service times and the 1 that the model has a single server. G/M/1 queue: The arrivals of a G/M/1 queue are given by a renewal process. It is an extension of an M/M/1 queue, where this renewal process must specifically be a Poisson process (so that interarrival times have exponential distribution). Models of this type can be solved by considering one of two M/G/1 queue dual systems, one proposed by Ramaswami and one by Bright. Queue size at arrival times: Let (Xt, t≥0) be a G/M(μ)/1 queue with arrival times (An, n∈N) that have interarrival distribution A. Define the size of the queue immediately before the nth arrival by the process Un = X(An−). This is a discrete-time Markov chain with stochastic matrix

P = \begin{pmatrix} 1-a_0 & a_0 & 0 & 0 & 0 & \cdots \\ 1-(a_0+a_1) & a_1 & a_0 & 0 & 0 & \cdots \\ 1-(a_0+a_1+a_2) & a_2 & a_1 & a_0 & 0 & \cdots \\ 1-(a_0+a_1+a_2+a_3) & a_3 & a_2 & a_1 & a_0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}

where a_v = \int_0^\infty \frac{(\mu t)^v e^{-\mu t}}{v!} \, \mathrm{d}A(t) is the probability that exactly v customers are served during one interarrival time. The Markov chain Un has a stationary distribution if and only if the traffic intensity ρ = (μ E(A))^{-1} is less than 1, in which case the unique such distribution is the geometric distribution with probability η of failure, where η is the smallest root of the equation η = \int_0^\infty e^{-\mu(1-\eta)t} \, \mathrm{d}A(t). In this case, under the assumption that the queue is first-in first-out (FIFO), a customer's waiting time W is distributed by P(W ≤ x) = 1 − η e^{−μ(1−η)x} for x ≥ 0. Busy period: The busy period can be computed by using a duality between the G/M/1 model and M/G/1 queue generated by the Christmas tree transformation. Response time: The response time is the amount of time a job spends in the system from the instant of arrival to the time it leaves the system. A consistent and asymptotically normal estimator for the mean response time can be computed as the fixed point of an empirical Laplace transform.
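As a concrete illustration of the formulas above, the sketch below computes η for a D/M/1 special case (deterministic interarrival times, an illustrative choice) by fixed-point iteration on η = ∫ e^{−μ(1−η)t} dA(t), and then evaluates the FIFO waiting-time distribution. The helper names and numbers are made up for illustration.

```python
import math

def eta_gm1(laplace_interarrival, mu, tol=1e-12):
    """Smallest root of eta = E[exp(-mu*(1-eta)*A)] by fixed-point iteration.

    `laplace_interarrival(s)` must return E[exp(-s*A)] for the interarrival
    time A, so the right-hand side is laplace_interarrival(mu*(1 - eta)).
    Iterating upward from 0 converges to the smallest root when rho < 1.
    """
    eta = 0.0
    while True:
        nxt = laplace_interarrival(mu * (1.0 - eta))
        if abs(nxt - eta) < tol:
            return nxt
        eta = nxt

# Example: D/M/1 queue with deterministic interarrival time 1/lam (illustrative).
lam, mu = 0.8, 1.0                     # arrival rate, service rate; rho = 0.8
lst = lambda s: math.exp(-s / lam)     # Laplace-Stieltjes transform of A = 1/lam
eta = eta_gm1(lst, mu)
print(f"eta = {eta:.4f}")

# FIFO waiting time: P(W <= x) = 1 - eta * exp(-mu*(1-eta)*x)
x = 2.0
print(f"P(W <= {x}) = {1 - eta * math.exp(-mu * (1 - eta) * x):.4f}")
print(f"mean wait   = {eta / (mu * (1 - eta)):.4f}")
```

The mean wait follows from the waiting time being 0 with probability 1−η and exponential with rate μ(1−η) otherwise, consistent with the geometric stationary distribution stated above.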
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Messaging spam** Messaging spam: Messaging spam, sometimes called SPIM, is a type of spam targeting users of instant messaging (IM) services, SMS, or private messages within websites. Instant messaging applications: Instant messaging systems, such as Telegram, WhatsApp, Twitter Direct Messaging, Kik, Skype and Snapchat are all targets for spammers. Many IM services are publicly linked to social media platforms, which may include information on the user such as age, sex, location and interests. Advertisers and scammers can gather this information, sign on to the service, and send unsolicited messages which could contain scam links, pornographic material, malware or ransomware. With most services, users can report and block spam accounts, or set privacy settings so that only contacts can contact them. Countermeasures: Many users choose to receive IMs only from people already on their contact list. In corporate settings, spam over IM is blocked by IM spam blockers like those from Actiance, ScanSafe, and Symantec. IM providers like Kik have a "report user" button, which sends a chatlog to the IM administrators who can then take action. Pornographic IM spambots: Spam-bots often sign on to popular messaging services like Kik or Skype to spread pornographic images. Often, if the user responds, they receive a URL inviting them to a private livestream that will ask them to enter credit card details for "age verification". These bots target random usernames; this often results in minors receiving unsolicited pornographic images. On Windows NT-based systems: In 2002, a number of spammers began abusing the Windows Messenger service, a function of Windows designed to allow administrators to send alerts to users' workstations (not to be confused with Windows Messenger or Windows Live Messenger, a free instant messaging application) in Microsoft's Windows NT-based operating systems. Messenger Service spam appears as normal dialog boxes containing the spammer's message. These messages are easily blocked by firewalls configured to block packets to the NetBIOS ports 135-139 and 445 as well as unsolicited UDP packets to ports above 1024. Additionally, Windows XP Service Pack 2 disables the Messenger Service by default. On Windows NT-based systems: Messenger Service spammers frequently send messages to vulnerable Windows machines with a URL. The message promises to rid the user of spam messages sent via the Messenger Service. The URL leads to a website where, for a fee, users are told how to disable the Messenger service. Though the Messenger service is easily disabled for free by the user, this works because it creates a perceived need and then offers an immediate solution. In opinion-based recommender systems: In an opinion-based recommender system, an important concern is how to evaluate the user-generated reviews on the items. One of the purposes of this evaluation is to identify malicious or spam reviews. Poorly written reviews are of little help to the recommender system. However, even a well-written review can still be harmful to the recommender system if it is biased, amounting to an advertisement for, or slander of, a target item. In opinion-based recommender systems: Current approaches to spam detection include analyzing the review text and identifying spam reviewers by their reviews and activities. For the first kind, machine learning applications on review text have been developed.
For the second kind, researchers use network motif analysis technique to identify spam reviewers by their recurring reviewing activity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Imipenem/cilastatin/relebactam** Imipenem/cilastatin/relebactam: Imipenem/cilastatin/relebactam, sold under the brand name Recarbrio, is a fixed-dose combination medication used as an antibiotic. In 2019, it was approved for use in the United States for the treatment of complicated urinary tract and complicated intra-abdominal infections. It is administered via intravenous injection.The most common adverse reactions include nausea, diarrhea, headache, fever and increased liver enzymes.The most common adverse reactions observed in people treated for hospital-acquired bacterial pneumonia and ventilator-associated bacterial pneumonia (HABP/VABP) include increased aspartate/alanine aminotransferases (increased liver enzymes), anemia, diarrhea, hypokalemia (low potassium), and hyponatremia (low sodium). Antimicrobial activity: Imipenem/cilastatin/relebactam has improved activity against P. aeruginosa with decreased porins expression and/or overproducing β-lactamases of the category "AmpC", thanks to relebactam AmpC inhibition. Imipenem/cilastatin/relebactam maintains a limited activity against blaOXA-48-expressing carbapenem-resistant Enterobacterales, and has no activity against metallo-β-lactamase-producing isolates. Relebactam has no activity against OXA class D β-lactamases of A. baumannii. For susceptibility testing purposes, the concentration of relebactam is fixed at 4 mg/L. The European Committee on Antimicrobial Susceptibility Testing (EUCAST) provided a susceptibility clinical breakpoint of ≤2 mg/L for Enterobacterales, P. aeruginosa, and Acinetobacter spp., while The Clinical & Laboratory Standards Institute (CLSI) provided a susceptibility clinical breakpoint of ≤1 mg/L for Enterobacterales and ≤2 mg/L for P. aeruginosa. Medical uses: In the United States imipenem/cilastatin/relebactam is indicated for the treatment of people with complicated urinary tract infections and complicated intra-abdominal infections who have limited or no alternative treatment options. It is also indicated to treat HABP/VABP in adults 18 years of age and older.In the European Union it is indicated for the treatment of infections due to aerobic Gram-negative organisms in adults with limited treatment options. History: The application for imipenem/cilastatin/relebactam was granted Qualified Infectious Disease Product (QIDP), fast track, and priority review designations by the U.S. Food and Drug Administration (FDA). The FDA granted the approval of Recarbrio to Merck & Co., Inc.The determination of efficacy of imipenem/cilastatin/relebactam was supported in part by the findings of the efficacy and safety of imipenem-cilastatin for the treatment of complicated urinary tract infections (cUTI) and complicated intra-abdominal infections (cIAI). The contribution of relebactam to imipenem/cilastatin/relebactam was assessed based on data from in vitro studies and animal models of infection. The safety of imipenem/cilastatin/relebactam, administered via injection, was studied in two trials (Trial 1/NCT01505634, Trial 2/NCT01506271), one each for cUTI and cIAI. The cUTI trial included 298 adult participants with 99 treated with the proposed dose of imipenem/cilastatin/relebactam. The cIAI trial included 347 participants with 117 treated with the proposed dose of imipenem/cilastatin/relebactam.Trial 1 enrolled adult participants hospitalized with cUTI. Trial 2 enrolled adult participants hospitalized with cIAI that required surgery or drainage. 
In both trials, participants were assigned to either imipenem/cilastatin with varying doses of relebactam or imipenem/cilastatin with placebo intravenously, every 6 hours for 4 to 14 days. Neither the participants nor the investigators knew which treatment was being given until after the trial was completed. The trials were conducted in Europe, South America, the United States, Asia Pacific, Africa, and Mexico.It was approved for use in the European Union in February 2020.In June 2020, imipenem/cilastatin/relebactam was approved for the indication to treat hospital-acquired bacterial pneumonia and ventilator-associated bacterial pneumonia (HABP/VABP) in adults 18 years of age and older.The safety and efficacy of imipenem/cilastatin/relebactam for the treatment of HABP/VABP were evaluated in a randomized, controlled clinical trial of 535 hospitalized adults with HABP/VABP due to Gram-negative bacteria (a type of bacteria) in which 266 participants were treated with imipenem/cilastatin/relebactam and 269 participants were treated with piperacillin-tazobactam, another antibacterial drug. Overall, 16% of participants who received imipenem/cilastatin/relebactam and 21% of participants who received piperacillin-tazobactam died through day 28 of the study.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thirst** Thirst: Thirst is the craving for potable fluids, resulting in the basic instinct of animals to drink. It is an essential mechanism involved in fluid balance. It arises from a lack of fluids or an increase in the concentration of certain osmolites, such as sodium. If the water volume of the body falls below a certain threshold or the osmolite concentration becomes too high, structures in the brain detect changes in blood constituents and signal thirst.Continuous dehydration can cause acute and chronic diseases, but is most often associated with renal and neurological disorders. Excessive thirst, called polydipsia, along with excessive urination, known as polyuria, may be an indication of diabetes mellitus or diabetes insipidus. Thirst: There are receptors and other systems in the body that detect a decreased volume or an increased osmolite concentration. Some sources distinguish "extracellular thirst" from "intracellular thirst", where extracellular thirst is thirst generated by decreased volume and intracellular thirst is thirst generated by increased osmolite concentration. Detection: It is vital for organisms to be able to maintain their fluid levels in very narrow ranges. The goal is to keep the interstitial fluid, the fluid outside the cell, at the same concentration as the intracellular fluid, the fluid inside the cell. This condition is called isotonic and occurs when the same levels of solutes are present on either side of the cell membrane so that the net water movement is zero. If the interstitial fluid has a higher concentration of solutes (or a lower concentration of water) than the intracellular fluid, it will pull water out of the cell. This condition is called hypertonic and if enough water leaves the cell, it will not be able to perform essential chemical functions. The animal will then become thirsty in response to the demand for water in the cell. After the animal drinks water, the interstitial fluid becomes less concentrated of solutes (more concentrated of water) than the intracellular fluid and the cell will fill with water as it tries to equalize the concentrations. This condition is called hypotonic and can be dangerous because it can cause the cell to swell and rupture. One set of receptors responsible for thirst detects the concentration of interstitial fluid. The other set of receptors detects blood volume. Detection: Decreased volume This is one of two types of thirst and is defined as thirst caused by loss of blood volume (hypovolemia) without depleting the intracellular fluid. This can be caused by blood loss, vomiting, and diarrhea. This loss of volume is problematic because if the total blood volume falls too low the heart cannot circulate blood effectively and the eventual result is hypovolemic shock. The vascular system responds by constricting blood vessels thereby creating a smaller volume for the blood to fill. This mechanical solution, however, has definite limits and usually must be supplemented with increased volume. The loss of blood volume is detected by cells in the kidneys and triggers thirst for both water and salt via the renin-angiotensin system. Detection: Renin-angiotensin system Hypovolemia leads to activation of the renin angiotensin system (RAS) and is detected by cells in the kidney. When these cells detect decreased blood flow due to the low volume they secrete an enzyme called renin. Renin then enters the blood where it catalyzes a protein called angiotensinogen to angiotensin I. 
Angiotensin I is then almost immediately converted by an enzyme already present in the blood to the active form of the protein, angiotensin II. Angiotensin II then travels in the blood until it reaches the posterior pituitary gland and the adrenal cortex, where it causes a cascade effect of hormones that cause the kidneys to retain water and sodium, increasing blood pressure. It is also responsible for the initiation of drinking behavior and salt appetite via the subfornical organ. Detection: Others Arterial baroreceptors sense a decreased arterial pressure, and signal to the central nervous system in the area postrema and nucleus tractus solitarii. Cardiopulmonary receptors sense a decreased blood volume, and signal to area postrema and nucleus tractus solitarii. Detection: Cellular dehydration and osmoreceptor stimulation Osmometric thirst occurs when the solute concentration of the interstitial fluid increases. This increase draws water out of the cells, and they shrink in volume. The solute concentration of the interstitial fluid increases by high intake of sodium in diet or by the drop in volume of extracellular fluids (such as blood plasma and cerebrospinal fluid) due to loss of water through perspiration, respiration, urination and defecation. The increase in interstitial fluid solute concentration causes water to migrate from the cells of the body, through their membranes, to the extracellular compartment, by osmosis, thus causing cellular dehydration.Clusters of cells (osmoreceptors) in the organum vasculosum of the lamina terminalis (OVLT) and subfornical organ (SFO), which lie outside of the blood brain barrier can detect the concentration of blood plasma and the presence of angiotensin II in the blood. They can then activate the median preoptic nucleus which initiates water seeking and ingestive behavior. Destruction of this part of the hypothalamus in humans and other animals results in partial or total loss of desire to drink even with extremely high salt concentration in the extracellular fluids. In addition, there are visceral osmoreceptors which project to the area postrema and nucleus tractus solitarii in the brain. Detection: Salt craving Because sodium is also lost from the plasma in hypovolemia, the body's need for salt proportionately increases in addition to thirst in such cases. This is also a result of the renin-angiotensin system activation. Elderly In adults over the age of 50 years, the body's thirst sensation reduces and continues diminishing with age, putting this population at increased risk of dehydration. Several studies have demonstrated that elderly persons have lower total water intakes than younger adults, and that women are particularly at risk of too low an intake. Detection: In 2009, the European Food Safety Authority (EFSA) included water as a macronutrient in its dietary reference values for the first time. Recommended intake volumes in the elderly are the same as for younger adults (2.0 L/day for females and 2.5 L/day for males) as despite lower energy consumption, the water requirement of this group is increased due to a reduction in renal concentrating capacity. 
Thirst quenching: According to preliminary research, quenching of thirst – the homeostatic mechanism to stop drinking – occurs via two neural phases: a "preabsorptive" phase which signals quenched thirst many minutes before fluid is absorbed from the stomach and distributed to the body via the circulation, and a "postabsorptive" phase which is regulated by brain structures sensing to terminate fluid ingestion. The preabsorptive phase relies on sensory inputs in the mouth, pharynx, esophagus, and upper gastrointestinal tract to anticipate the amount of fluid needed, providing rapid signals to the brain to terminate drinking when the assessed amount has been consumed. The postabsorptive phase occurs via blood monitoring for osmolality, fluid volume, and sodium balance, which are collectively sensed in brain circumventricular organs linked via neural networks to terminate thirst when fluid balance is established.Thirst quenching varies among animal species, with dogs, camels, sheep, goats, and deer replacing fluid deficits quickly when water is available, whereas humans and horses may need hours to restore fluid balance. Neurophysiology: The areas of the brain that contribute to the sense of thirst are mainly located in the midbrain and the hindbrain. Specifically, the hypothalamus appears to play a key role in the regulation of thirst. Neurophysiology: The area postrema and nucleus tractus solitarii signal to the subfornical organ and to the lateral parabrachial nucleus. The latter signaling relies on the neurotransmitter serotonin. The signal from the lateral parabrachial nucleus is relayed to the median preoptic nucleus.The median preoptic nucleus and the subfornical organ receive signals of decreased volume and increased osmolite concentration. Finally, the signals are received in cortex areas of the forebrain where thirst arises. The subfornical organ and the organum vasculosum of the lamina terminalis contribute to regulating the overall bodily fluid balance by signalling to the hypothalamus to form vasopressin, which is later released by the pituitary gland.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital Addressable Lighting Interface** Digital Addressable Lighting Interface: Digital Addressable Lighting Interface (DALI) is a trademark for network-based products that control lighting. The underlying technology was established by a consortium of lighting equipment manufacturers as a successor to 1–10 V/0–10 V lighting control systems, and as an open standard alternative to several proprietary protocols. The DALI, DALI-2 and D4i trademarks are owned by the lighting industry alliance, DiiA (Digital Illumination Interface Alliance). Digital Addressable Lighting Interface: DALI is specified by a series of technical standards in IEC 62386. Standards conformance ensures that equipment from different manufacturers will interoperate. The DALI trademark is allowed on devices that comply with the DiiA testing and certification requirements, and are listed as either registered (DALI version-1) or certified (DALI-2) on the DiiA website. D4i certification - an extension of DALI-2 - was added by DiiA in November 2019. Digital Addressable Lighting Interface: Members of the AG DALI were allowed to use the DALI trademark until the DALI working party was dissolved on 30 March 2017, when trademark use was transferred to DiiA members. Since 9 June 2017, the Digital Illumination Interface Alliance (DiiA) has certified DALI products. DiiA is a Partner Program of IEEE-ISTO. Technical overview: A DALI network consists of at least one application controller and bus power supply (which may be built into any of the products) as well as input devices (e.g., sensors and push-buttons) and control gear (e.g., electrical ballasts, LED drivers and dimmers) with DALI interfaces. Application controllers can control, configure or query each device by means of a bi-directional data exchange. Unlike DMX, multiple controllers can co-exist on the bus. The DALI protocol permits addressing devices individually, in groups or via broadcast. Scenes can be stored in the devices, for recall on an individual, group or broadcast basis. Groups and scenes are used to ensure simultaneous execution of level changes, since each packet requires about 25 ms - meaning about 1.5 seconds if all 64 addresses were to change level individually. Technical overview: Each device is assigned a unique short address between 0 and 63, making up to 64 devices possible in a basic system. Address assignment is performed over the bus using a "commissioning" protocol built into the DALI controller, usually after all hardware is installed, or successively as devices are added. A device address commonly corresponds to an LED driver, with one or many LEDs sharing the same level. A DT6 driver is used for single color temperature applications, while a DT8 driver is used for CCT color tuning or RGBWW multi-color applications - for example, a strip where all the "pixels" share the same color. Data is transferred between devices by means of an asynchronous, half-duplex, serial protocol over a two-wire bus with a fixed data transfer rate of 1200 bit/s. Collision detection is used to allow multiple transmitters on the bus. Technical overview: A single pair of wires comprises the bus used for communication on a DALI network. The network can be arranged in bus or star topology, or a combination of these. Each device on a DALI network can be addressed individually, unlike DSI and 0–10V devices. Consequently, DALI networks typically use fewer wires than DSI or 0–10V systems. Technical overview: The bus is used for both signal and bus power.
A power supply provides a current-limited source of up to 250 mA at typically 16 V DC; each device may draw up to 2 mA unless bus-powered. While many devices are mains-powered (line-powered), low-power devices such as motion detectors may be powered directly from the DALI bus. Each device has a bridge rectifier on its input so it is polarity-insensitive. The bus is a wired-AND configuration where signals are sent by briefly shorting the bus to a low voltage level. (The power supply is required to tolerate this, limiting the current to 250 mA.) Although the DALI control cable operates at ELV potential, it is not classified as SELV (Safety Extra Low Voltage) and must be treated as if it has only basic insulation from mains. This has the disadvantage that the network cable is required to be mains-rated, but has the advantage that it may be run next to mains cables or within a multi-core cable which includes mains power. Also, mains-powered devices (e.g., LED drivers) need only provide functional insulation between the mains and the DALI control wires. Technical overview: The network cable is required to provide a maximum drop of 2 volts along the cable. At 250 mA of supply current, that requires a resistance of ≤ 4 Ω per wire. The wire size needed to achieve this depends on the length of the bus, up to a recommended maximum of 2.5 mm² at 300 m when using the maximum rating of the bus power supply. Technical overview: The speed is kept low so no termination resistors are required, and data is transmitted using relatively high voltages (0±4.5 V for low and 16±6.5 V for high), enabling reliable communications in the presence of significant electrical noise. (This also allows plenty of headroom for a bridge rectifier in each slave.) Each bit is sent using Manchester encoding (a "1" bit is low for the first half of the bit time, and high for the second, while "0" is the reverse), so that power is present for half of each bit. When the bus is idle, the voltage level is continuously high (which is not the same as a data bit). Frames begin with a "1" start bit, then 8 to 32 data bits in msbit-first order (standard RS-232 is lsbit-first), followed by a minimum of 2.45 ms of idle. Device addressing: A DALI device, such as an LED driver, can be controlled individually via its short address. Additionally, each DALI device may be a member of up to 16 groups, and may store up to 16 scenes. All devices of a group respond to the commands addressed to the group. For example, a room with 4 ballasts can be changed from off to on in three common ways: Single device Using the Short Address, e.g. sending the following DALI messages: DALI Short Address 1 go to 100% DALI Short Address 2 go to 100% DALI Short Address 3 go to 100% DALI Short Address 4 go to 100%. This method has the advantage of not requiring programming of group and scene information for each ballast. The fade time of the transition can be chosen on the fly. Device addressing: If a large number of devices need to change at once, note that only 40 commands per second are possible - therefore, 64 individual addresses would require 1.5 seconds. For example, turning all lighting fixtures off may result in a visible delay between the first and last ballasts switching off. This issue is normally not a problem in rooms with a smaller number of ballasts. Groups and Scenes solve that.
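The timing trade-off between individual and group addressing can be made concrete with a small calculation. The sketch below is illustrative only (it is not a DALI library); it uses the roughly 40 commands-per-second bus rate quoted above, and the function names are invented for the example.

```python
# Illustrative sketch: time to update N ballasts with individual short-address
# commands versus a single group/broadcast command, at ~40 frames per second
# (one 16-bit forward frame every ~25 ms) on a 1200 bit/s DALI bus.

COMMANDS_PER_SECOND = 40

def individual_update_time(num_devices: int) -> float:
    """Worst-case time when each device gets its own 'go to level' frame."""
    return num_devices / COMMANDS_PER_SECOND

def group_update_time() -> float:
    """A group or broadcast command reaches every member with one frame."""
    return 1 / COMMANDS_PER_SECOND

if __name__ == "__main__":
    print(f"64 individual commands: ~{individual_update_time(64):.1f} s")   # ~1.6 s
    print(f"one group command:      ~{group_update_time() * 1000:.0f} ms")  # ~25 ms
```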
Device addressing: Device groups Using the DALI Group previously assigned to ballasts in the room, if Short Address 1, 2, 3 and 4 are members of Group 1, e.g.: DALI Group address 1 go to 100%. This method has the advantage of being immune to synchronization effects as described above. This method has the disadvantage of requiring each ballast to be programmed once, by a DALI master, with the required group numbers and scene information. The fade time can still be configured on the fly, if required. Scenes: Devices store 16 programmable output levels as "scenes". Individual, Group or ALL devices can respond to a global Scene recall command to change to their previously configured levels, e.g. dim lights over the audience and bright lights over the stage. (A programmed brightness level of 255 causes a device to not respond to a given scene - hence to be excluded from scene recall commands.) System Fail brightness: A "system failure" level can be triggered by a loss of power (sustained low level) on the DALI bus, to provide a safe fallback if control is lost; a level of 255 excludes the device from this feature. Broadcast Using the DALI Broadcast command, all control gear will change to that level, e.g.: DALI Broadcast go to 50% Brightness control: DALI lighting levels are specified by an 8-bit value, with 0 representing off, 1 meaning 0.1% of full brightness, 254 meaning full brightness, and other values logarithmically interpolated, giving a 2.77% increase per step. I.e., a (non-zero) control byte x denotes a power level of 10^(3(x−254)/253) of full output. (A value of 255 is reserved for freezing the current lighting level without changing it.) This is designed to match human eye sensitivity so that perceived brightness steps are uniform, and to ensure corresponding brightness levels in units from different manufacturers. Commands for control gear: Forward frames sent to control gear are 16 bits long, comprising an address byte followed by an opcode byte. The address byte specifies a target device or a special command addressed to all devices. Except for special commands, when addressing a device, the 7 most significant bits give the device address. The least significant bit of the address byte specifies the interpretation of the opcode byte, with "0" meaning that the opcode is a light level (ARC), and "1" meaning that the opcode is a command. Commands for control gear: Multi-packet commands are used for more complex tasks - like setting RGB colors. These commands use three "data transfer registers" (DTR, DTR1, DTR2) which can be read and written or used as a parameter by subsequent commands. For example, copy the current ARC level to DTR, then save DTR as a scene. Evidently, the DTR value can be different in different devices. Commands for control gear: Address byte format: 0AAA AAAS: Target device 0 ≤ A < 64. 100A AAAS: Target group 0 ≤ A < 16. Each control gear may be a member of any or all groups. 1111 110S: Broadcast unaddressed 1111 111S: Broadcast 1010 0000 to 1100 1011: Special commands 1100 1100 to 1111 1011: Reserved. Common control gear commands: Commands for control devices: The DALI-2 standard added standardisation of control devices. Control devices can include input devices such as daylight sensors, passive infrared room occupancy sensors, and manual lighting controls, or they can be application controllers that are the "brains" of the system - using information to make decisions and control the lights and other devices.
Control devices can also combine the functionality of an application controller and an input device. Control devices use 24-bit forward frames, which are ignored by control gear, so up to 64 control devices may share the bus with up to 64 control gear. D4i: DiiA published several new specifications in 2018 and 2019, extending DALI-2 functionality with power and data, especially for intra-luminaire DALI systems. Applications include indoor and outdoor luminaires, and small DALI systems. The D4i trademark is used on certified products to indicate that these new features are included in the products. Colour control (DT8): IEC 62386-209 describes colour control gear. This describes several colour types - methods of controlling colour. The most popular of these is Tc (tunable white), and was added to DALI-2 certification in January 2020. Emergency lighting: IEC 62386-202 describes self-contained emergency lighting. Features include automated triggering of function tests and duration tests, and recording of results. These devices are currently included in DALI version-1 registration, with tests for DALI-2 certification in development. Such DALI version-1 products can be mixed with DALI-2 products in the same system, with no problems expected. Wireless: IEC 62386-104 describes several wireless and wired transport alternatives to the conventional wired DALI bus system. DiiA is working with other industry associations to enable certification of DALI-2 products that operate over certain underlying wireless carriers. It is also possible to combine DALI with wireless communication via application gateways that translate between DALI and the wireless protocol of choice. While such gateways are not standardized, DiiA is working with other industry associations to develop the necessary specifications and tests to achieve this.
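The brightness mapping and the 16-bit forward-frame layout described in the Brightness control and Commands for control gear sections above can be sketched in a few lines. The snippet below is a minimal illustration, not a complete DALI encoder: it ignores Manchester encoding and bus timing, and the function names are invented for the example.

```python
import math

def arc_level_to_fraction(level: int) -> float:
    """Map a DALI arc power byte (1..254) to a fraction of full output using
    the logarithmic curve 10^(3(x-254)/253); 0 means off, 255 is reserved."""
    if level == 0:
        return 0.0
    if not 1 <= level <= 254:
        raise ValueError("arc level must be 0..254 (255 is reserved)")
    return 10 ** (3 * (level - 254) / 253)

def fraction_to_arc_level(fraction: float) -> int:
    """Inverse mapping: nearest arc byte for a desired output fraction."""
    if fraction <= 0:
        return 0
    level = 254 + 253 * math.log10(fraction) / 3
    return max(1, min(254, round(level)))

def direct_arc_frame(short_address: int, level: int) -> bytes:
    """Build the two bytes of a forward frame that sets one device's level:
    address byte 0AAAAAA0 (S bit = 0 selects 'opcode is an arc level'),
    followed by the arc level byte."""
    if not 0 <= short_address <= 63:
        raise ValueError("short address must be 0..63")
    address_byte = short_address << 1   # S bit left at 0 -> direct arc power
    return bytes([address_byte, level])

if __name__ == "__main__":
    lvl = fraction_to_arc_level(0.5)          # ~50 % output
    print(lvl, arc_level_to_fraction(lvl))    # e.g. 229 -> ~0.50
    print(direct_arc_frame(3, lvl).hex())     # frame for short address 3
```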
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Speaker diarisation** Speaker diarisation: Speaker diarisation (or diarization) is the process of partitioning an audio stream containing human speech into homogeneous segments according to the identity of each speaker. It can enhance the readability of an automatic speech transcription by structuring the audio stream into speaker turns and, when used together with speaker recognition systems, by providing the speaker’s true identity. It is used to answer the question "who spoke when?" Speaker diarisation is a combination of speaker segmentation and speaker clustering. The first aims at finding speaker change points in an audio stream. The second aims at grouping together speech segments on the basis of speaker characteristics. Speaker diarisation: With the increasing number of broadcasts, meeting recordings and voice mail collected every year, speaker diarisation has received much attention by the speech community, as is manifested by the specific evaluations devoted to it under the auspices of the National Institute of Standards and Technology for telephone speech, broadcast news and meetings. Main types of diarisation systems: In speaker diarisation, one of the most popular methods is to use a Gaussian mixture model to model each of the speakers, and assign the corresponding frames for each speaker with the help of a Hidden Markov Model. There are two main kinds of clustering scenario. The first one is by far the most popular and is called Bottom-Up. The algorithm starts in splitting the full audio content in a succession of clusters and progressively tries to merge the redundant clusters in order to reach a situation where each cluster corresponds to a real speaker. The second clustering strategy is called top-down and starts with one single cluster for all the audio data and tries to split it iteratively until reaching a number of clusters equal to the number of speakers. Main types of diarisation systems: A 2010 review can be found at [1]. More recently, speaker diarisation is performed via neural networks leveraging large-scale GPU computing and methodological developments in deep learning. Open source speaker diarisation software: There are some open source initiatives for speaker diarisation (in alphabetical order): ALIZE Speaker Diarization (last repository update: July 2016; last release: February 2013, version: 3.0): ALIZE Diarization System, developed at the University Of Avignon, a release 2.0 is available [2]. Audioseg (last repository update: May 2014; last release: January 2010, version: 1.2): AudioSeg is a toolkit dedicated to audio segmentation and classification of audio streams. [3]. pyannote.audio (last repository update: August 2022, last release: July 2022, version: 2.0): pyannote.audio is an open-source toolkit written in Python for speaker diarization. [4]. Open source speaker diarisation software: pyAudioAnalysis (last repository update: August 2018): Python Audio Analysis Library: Feature Extraction, Classification, Segmentation and Applications [5] SHoUT (last update: December 2010; version: 0.3): SHoUT is a software package developed at the University of Twente to aid speech recognition research. SHoUT is a Dutch acronym for Speech Recognition Research at the University of Twente. [6] LIUM SpkDiarization (last release: September 2013, version: 8.4.1): LIUM_SpkDiarization tool [7].
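To make the bottom-up clustering idea concrete, the sketch below is a minimal, illustrative example and is not taken from any of the toolkits listed above. It assumes that speaker segmentation has already been done and that each segment has been reduced to a fixed-length speaker embedding (here replaced by random placeholders); segments are then merged agglomeratively until the remaining clusters are treated as individual speakers.

```python
# Minimal bottom-up (agglomerative) clustering sketch over segment embeddings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
num_segments, dim = 20, 128
embeddings = rng.normal(size=(num_segments, dim))   # placeholder embeddings

# Merge segments bottom-up using average-link clustering on cosine distance.
Z = linkage(embeddings, method="average", metric="cosine")

# Cut the dendrogram at a distance threshold: segments closer than the
# threshold are assigned to the same speaker. The threshold is a tuning knob.
labels = fcluster(Z, t=0.7, criterion="distance")

for segment_index, speaker in enumerate(labels):
    print(f"segment {segment_index:2d} -> speaker {speaker}")
```

In a real system the embeddings would come from a speaker-embedding model applied after speaker change-point detection, and the number of clusters (or the threshold) would be estimated rather than fixed.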
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DPI-287** DPI-287: DPI-287 is an opioid drug that is used in scientific research. It is a highly selective agonist for the δ-opioid receptor, which produces less convulsions than most drugs from this family. It has antidepressant-like effects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Middle Arm Point Formation** Middle Arm Point Formation: The Middle Arm Point Formation is a Tremadocian formation cropping out in Western Newfoundland, containing arthropod embryos preserved in the Orsten fashion.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Game show** Game show: A game show is a genre of broadcast viewing entertainment (radio, television, internet, stage or other) where contestants compete for a reward. These programs can either be participatory or demonstrative and are typically directed by a host, sharing the rules of the program as well as commentating and narrating where necessary. The history of game shows dates back to the invention of television as a medium. On most game shows, contestants either have to answer questions or solve puzzles, typically to win either money or prizes. Game shows often reward players with prizes such as cash, trips and goods and services provided by the show's sponsor. History: 1930s–1950s Game shows began to appear on radio and television in the late 1930s. The first television game show, Spelling Bee, as well as the first radio game show, Information Please, were both broadcast in 1938; the first major success in the game show genre was Dr. I.Q., a radio quiz show that began in 1939. Truth or Consequences was the first game show to air on commercially licensed television; the CBS Television Quiz followed shortly thereafter as the first to be regularly scheduled. The first episode of each aired in 1941 as an experimental broadcast. Over the course of the 1950s, as television began to pervade the popular culture, game shows quickly became a fixture. Daytime game shows would be played for lower stakes to target stay-at-home housewives. Higher-stakes programs would air in prime time. (One particular exception in this era was You Bet Your Life, ostensibly a game show, but the game show concept was largely a framework for a talk show moderated by its host, Groucho Marx.) During the late 1950s, high-stakes games such as Twenty-One and The $64,000 Question began a rapid rise in popularity. However, the rise of quiz shows proved to be short-lived. In 1959, many of the higher stakes game shows were exposed as being either biased or outright scripted in the 1950s quiz show scandals and ratings declines led to most of the primetime games being canceled. History: An early variant of the game show, the panel show, survived the quiz show scandals. On shows like What's My Line?, I've Got A Secret, and To Tell the Truth, panels of celebrities would interview a guest in an effort to determine some fact about them; in others, celebrities would answer questions. Panel games had success in primetime until the late 1960s, when they were collectively dropped from television because of their perceived low budget nature. Panel games made a comeback in American daytime television (where the lower budgets were tolerated) in the 1970s through comedy-driven shows such as Match Game and Hollywood Squares. In the UK, commercial demographic pressures were not as prominent, and restrictions on game shows made in the wake of the scandals limited the style of games that could be played and the amount of money that could be awarded. Panel shows there were kept in primetime and have continued to thrive; they have transformed into showcases for the nation's top stand-up comedians on shows such as Have I Got News for You, Would I Lie to You?, Mock the Week, QI, and 8 Out of 10 Cats, all of which put a heavy emphasis on comedy, leaving the points as mere formalities. The focus on quick-witted comedians has resulted in strong ratings, which, combined with low costs of production, have only spurred growth in the UK panel show phenomenon. 
History: 1950s–1970s Game shows remained a fixture of US daytime television through the 1960s after the quiz show scandals. Lower-stakes games made a slight comeback in daytime in the early 1960s; examples include Jeopardy! which began in 1964 and the original version of The Match Game first aired in 1962. Let's Make a Deal began in 1963 and the 1960s also marked the debut of Hollywood Squares, Password, The Dating Game, and The Newlywed Game. History: Though CBS gave up on daytime game shows in 1968, the other networks did not follow suit. Color television was introduced to the game show genre in the late 1960s on all three networks. The 1970s saw a renaissance of the game show as new games and massive upgrades to existing games made debuts on the major networks. The New Price Is Right, an update of the 1950s-era game show The Price Is Right, debuted in 1972 and marked CBS's return to the game show format in its rural purge. The Match Game became "Big Money" Match Game 73, which proved popular enough to prompt a spin-off, Family Feud, on ABC in 1976. The $10,000 Pyramid and its numerous higher-stakes derivatives also debuted in 1973, while the 1970s also saw the return of formerly disgraced producer and game show host Jack Barry, who debuted The Joker's Wild and a clean version of the previously rigged Tic-Tac-Dough in the 1970s. Wheel of Fortune debuted on NBC in 1975. The Prime Time Access Rule, which took effect in 1971, barred networks from broadcasting in the 7–8 p.m. time slot immediately preceding prime time, opening up time slots for syndicated programming. Most of the syndicated programs were "nighttime" adaptations of network daytime game shows. These game shows originally aired once a week, but by the late 1970s and early 1980s most of the games had transitioned to five days a week. History: 1980s–1990s Game shows were the lowest priority of television networks and were rotated out every thirteen weeks if unsuccessful. Most tapes were wiped until the early 1980s. Over the course of the 1980s and early 1990s, as fewer new hits (e.g. Press Your Luck, Sale of the Century, and Card Sharks) were produced, game shows lost their permanent place in the daytime lineup. ABC transitioned out of the daytime game show format in the mid-1980s (briefly returning to the format for one season in 1990 with a Match Game revival). NBC's game block also lasted until 1991, but the network attempted to bring them back in 1993 before cancelling its game show block again in 1994. CBS phased out most of its game shows, except for The Price Is Right, by 1993. To the benefit of the genre, the moves of Wheel of Fortune and a modernized revival of Jeopardy! to syndication in 1983 and 1984, respectively, was and remains highly successful; the two are, to this day, fixtures in the prime time "access period". History: Cable television also allowed for the debut of game shows such as Supermarket Sweep and Debt (Lifetime), Trivial Pursuit and Family Challenge (Family Channel), and Double Dare (Nickelodeon). It also opened up a previously underdeveloped market for game show reruns. General interest networks such as CBN Cable Network (forerunner to Freeform) and USA Network had popular blocks for game show reruns from the mid-1980s to the mid-'90s before that niche market was overtaken by Game Show Network in 1994. 
History: In the United Kingdom, game shows have had a more steady and permanent place in the television lineup and never lost popularity in the 1990s as they did in the United States, due in part to the fact that game shows were highly regulated by the Independent Broadcasting Authority in the 1980s and that those restrictions were lifted in the 1990s, allowing for higher-stakes games to be played. History: After the popularity of game shows hit a nadir in the mid-1990s United States (at which point The Price Is Right was the only game show still on daytime network television and numerous game shows designed for cable television were canceled), the British game show Who Wants to Be a Millionaire? began distribution around the globe. Upon the show's American debut in 1999, it was a hit and became a regular part of ABC's primetime lineup until 2002; that show would eventually air in syndication for seventeen years afterward. Several shorter-lived high-stakes games were attempted around the time of the millennium, both in the United States and the United Kingdom, such as Winning Lines, The Chair, Greed, Paranoia, and Shafted, leading to some dubbing this period as "The Million-Dollar Game Show Craze". The boom quickly went bust, as by July 2000, almost all of the imitator million-dollar shows were canceled (one of those exceptions was Winning Lines, which continued to air in the United Kingdom until 2004 even though it was canceled in the United States in early 2000); these higher stakes contests nevertheless opened the door to reality television contests such as Survivor and Big Brother, in which contestants win large sums of money for outlasting their peers in a given environment. Several game shows returned to daytime in syndication during this time as well, such as Family Feud, Hollywood Squares, and Millionaire. History: 2000s–present Wheel of Fortune, Jeopardy! and Family Feud have continued in syndication. To keep pace with the prime-time quiz shows, Jeopardy! doubled its question values in 2001 and lifted its winnings limit in 2003, which one year later allowed Ken Jennings to become the show's first multi-million dollar winner; it has also increased the stakes of its tournaments and put a larger focus on contestants with strong personalities. The show has since produced four more millionaires: tournament winner Brad Rutter and recent champions James Holzhauer, Matt Amodio, and Amy Schneider. Family Feud revived in popularity with a change in tone under host Steve Harvey to include more ribaldry. History: In 2009, actress and comedienne Kim Coles became the first black woman to host a prime time game show, Pay It Off. History: The rise of digital television in the United States opened up a large market for rerun programs. Buzzr was established by Fremantle, owners of numerous classic U.S. game shows, as a broadcast outlet for its archived holdings in June 2015. There was also a rise of live game shows at festivals and public venues, where the general audience could participate in the show, such as the science-inspired Geek Out Game Show or the Yuck Show. History: Since the early 2000s, several game shows were conducted in a tournament format; examples included History IQ, Grand Slam, PokerFace (which never aired in North America), Duel, The Million Second Quiz, 500 Questions, The American Bible Challenge, and Mental Samurai. Most game shows conducted in this manner only lasted for one season. 
History: A boom in prime time revivals of classic daytime game shows began to emerge in the mid-2010s. In 2016, ABC packaged the existing Celebrity Family Feud, which had returned in 2015, with new versions of To Tell the Truth, The $100,000 Pyramid, and Match Game in 2016; new versions of Press Your Luck and Card Sharks would follow in 2019. TBS launched a cannabis-themed revival of The Joker's Wild, hosted by Snoop Dogg, in October 2017. This is in addition to a number of original game concepts that appeared near the same time, including Awake, Deal or No Deal (which originally aired in 2005), Child Support, Hollywood Game Night, 1 vs. 100, Minute to Win It (which originally aired in 2010), The Wall, and a string of music-themed games such as Don't Forget the Lyrics!, The Singing Bee, and Beat Shazam. International issues: The popularity of game shows in the United States was closely paralleled around the world. Reg Grundy Organisation, for instance, would buy the international rights for American game shows and reproduce them in other countries, especially in Grundy's native Australia. Dutch producer Endemol (later purchased by American companies Disney and Apollo Global Management, then resold to French company Banijay) has created and released numerous game shows and reality television formats popular around the world. Most game show formats that are popular in one country are franchised to others. International issues: Game shows have had an inconsistent place in television in Canada, with most homegrown game shows there being made for the French-speaking Quebec market and the majority of English-language game shows in the country being rebroadcast from, or made with the express intent of export to, the United States. There have been exceptions to this (see, for instance, the long-running Definition). Unlike reality television franchises, international game show franchises generally only see Canadian adaptations in a series of specials, based heavily on the American versions but usually with a Canadian host to allow for Canadian content credits (one of those exceptions was Le Banquier, a Quebec French-language version of Deal or No Deal which aired on TVA from 2008 to 2015). The smaller markets and lower revenue opportunities for Canadian shows in general also affect game shows there, with Canadian games (especially Quebecois ones) often having very low budgets for prizes, unless the series is made for export. Canadian contestants are generally allowed to participate on American game shows, and there have been at least three Canadian game show hosts – Howie Mandel, Monty Hall and Alex Trebek – who have gone on to long careers hosting American series, while Jim Perry, an American host, was prominent as a host of Canadian shows. International issues: American game shows have a tendency to hire stronger contestants than their British or Australian counterparts. Many of the most successful game show contestants in America would likely never be cast in a British or Australian game show for fear of having them dominate the game, according to Mark Labbett, who appeared in all three countries on the game show The Chase. International issues: Japanese game show The Japanese game show is a distinct format, borrowing heavily from variety formats, physical stunts and athletic competitions. The Japanese style has been adapted overseas (and at one point was parodied with an American reality competition, I Survived a Japanese Game Show, which used a fake Japanese game show as its central conceit). 
Prizes: Many of the prizes awarded on game shows are provided through product placement, but in some cases they are provided by private organizations or purchased at either the full price or at a discount by the show. There is widespread use of "promotional consideration", in which a game show receives a subsidy from an advertiser in return for awarding that manufacturer's product as a prize or consolation prize. Some products supplied by manufacturers may not be intended to be awarded and are instead just used as part of the gameplay, such as the low-priced items used in several The Price Is Right pricing games. In that show, however, the smaller items (sometimes worth only a few dollars) are awarded as well when their prices are correctly guessed, even when a contestant loses the major prize they were playing for. Prizes: For high-stakes games, a network may purchase prize indemnity insurance to avoid paying the cost of a rare but expensive prize out of pocket. If the prize is won too often, the insurance company may refuse to insure a show; this was a factor in the discontinuation of The Price Is Right $1,000,000 Spectacular series of prime-time specials. In April 2008, three of the contestants on The Price Is Right $1,000,000 Spectacular won the top prize in a five-episode span after fifteen episodes without a winner, due in large part to a change in the rules. The insurance companies had made it extremely difficult to get further insurance for the remaining episodes. A network or syndicator may also opt to distribute large cash prizes in the form of an annuity, spreading the cost of the prize out over several years or decades. Prizes: From about 1960 through the rest of the 20th century, American networks placed restrictions on the amount of money that could be given away on a game show, in an effort to avoid a repeat of the scandals of the 1950s. This usually took the form of an earnings cap that forced a player to retire once they had won a certain amount of money, or a limit on how many episodes, usually five, on which a player could appear on a show. The introduction of syndicated games, particularly in the 1980s, eventually allowed for more valuable prizes and extended runs on a particular show. British television was under even stricter regulations on prizes until the 1990s, seriously restricting the value of prizes that could be given and disallowing games of chance to have an influence on the results of the game. (Thus, the British version of The Price Is Right at first did not include the American version's "Showcase Showdown", in which contestants spun a large wheel to determine who would advance to the Showcase bonus round.) In Canada, prizes were limited not by bureaucracy but by necessity, as the much smaller population limited the audience of shows marketed toward that country. The lifting of these restrictions in the 1990s was a major factor in the explosion of high-stakes game shows in the later part of that decade in both the U.S. and Britain and, subsequently, around the world. Bonus round: A bonus round (also known as a bonus game or an end game) usually follows a main game as a bonus to the winner of that game. In the bonus round, the stakes are higher and the game is considered to be tougher. The game play of a bonus round usually varies from the standard game play of the front game, and there are often borrowed or related elements of the main game in the bonus round to ensure the entire show has a unified premise.
Though some end games are referred to as "bonus rounds", many are not specifically referred to as such in games but fit the same general role. Bonus round: There is no one formula for the format of a bonus round. There are differences in almost every bonus round, though there are many recurring elements from show to show. The bonus round is often played for the show's top prize. It is almost always played without an opponent; two notable exceptions to this are Jeopardy! and the current version of The Price Is Right. On Jeopardy!, the final round involves all remaining contestants with a positive score wagering strategically to win the game and be invited back the next day; Jeopardy! attempted to replace this round with a traditional solo bonus round in 1978, but this version was not a success and the round was replaced by the original Final Jeopardy! when the show returned in 1984. The Price Is Right uses a knockout tournament format, in which the six contestants to make it onstage are narrowed to two in a "Showcase Showdown;" these two winners then move on to the final Showcase round to determine the day's winner. Bonus round: Until the 1960s, most game shows did not offer a bonus round. In traditional two-player formats, the winner – if a game show's rules provided for this – became the champion and simply played a new challenger either on the next show or after the commercial break. One of the earliest forms of bonus rounds was the Jackpot Round of the original series Beat the Clock. After two rounds of performing stunts, the wife of the contestant couple would perform at a jackpot board for a prize. The contestant was shown a famous quotation or common phrase, and the words were scrambled. To win the announced bonus, the contestant had to unscramble the words within 20 seconds. The contestant received a consolation gift worth over $200 if she was unsuccessful. Bonus round: Another early bonus round ended each episode of You Bet Your Life with the team who won the most money answering one final question for a jackpot which started at $1,000 and increased $500 each week until won. Bonus round: Another early example was the Lightning Round on the word game Password, starting in 1961. The contestant who won the front game played a quick-fire series of passwords within 60 seconds, netting $50 per correctly guessed word, for a maximum bonus prize of $250. The bonus round came about after game show producer Mark Goodson was first presented with Password and contended that it was not enough to merely guess passwords during the show. "We needed something more, and that's how the Lightning Round was invented," said Howard Felsher, who produced Password and Family Feud. "From that point on every game show had to have an end round. You'd bring a show to a network and they'd say, 'What's the endgame?' as if they had thought of it themselves." The end game of Match Game, hosted for most of its run by Gene Rayburn, served as the impetus for a completely new game show. The first part of Match Game's "Super-Match" bonus round, called the "Audience Match", asked contestants to guess how a studio audience responded to a question. In 1975, with then regular panelist Richard Dawson becoming restless and progressively less cooperative, Goodson decided that this line of questioning would make a good game show of its own, and the concept eventually became Family Feud, with Dawson hired as its inaugural host.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Endocannabinoid transporter** Endocannabinoid transporter: The endocannabinoid transporters (eCBTs) are transport proteins for the endocannabinoids. Most neurotransmitters are water-soluble and require transmembrane proteins to transport them across the cell membrane. The endocannabinoids (anandamide, AEA, and 2-arachidonoylglycerol, 2-AG), on the other hand, are non-charged lipids that readily cross lipid membranes. However, since the endocannabinoids are water immiscible, protein transporters have been described that act as carriers to solubilize and transport the endocannabinoids through the aqueous cytoplasm. These include the heat shock proteins (Hsp70s) and fatty acid-binding proteins (FABPs) for anandamide. FABPs such as FABP1, FABP3, FABP5, and FABP7 have been shown to bind endocannabinoids. FABP inhibitors attenuate the breakdown of anandamide by the enzyme fatty acid amide hydrolase (FAAH) in cell culture. One of these inhibitors (SB-FI-26), isolated from a virtual library of a million compounds, belongs to a class of compounds (named the "truxilloids") that act as anti-nociceptive agents with mild anti-inflammatory activity in mice. These truxillic acids and their derivatives have been known to have anti-inflammatory and anti-nociceptive effects in mice and are active components of a Chinese herbal medicine ((−)-incarvillateine, from Incarvillea sinensis) used to treat rheumatism and pain in humans. The blockade of anandamide transport may, at least in part, be the mechanism through which these compounds exert their anti-nociceptive effects. Endocannabinoid transporter: Studies have found the involvement of cholesterol in membrane uptake and transport of anandamide. Cholesterol stimulates both the insertion of anandamide into synthetic lipid monolayers and bilayers and its transport across bilayer membranes, suggesting that, besides putative anandamide protein transporters, cholesterol could be an important component of the anandamide transport machinery, consistent with cholesterol-dependent modulation of CB1 cannabinoid receptors in nerve cells. The catalytic efficiency (i.e., the ratio between maximal velocity and the Michaelis–Menten constant) of the AEA membrane transporter (AMT) is almost doubled compared with control cells, demonstrating that, among the proteins of the "endocannabinoid system," only CB1 and AMT critically depend on membrane cholesterol content, an observation that may have important implications for the role of CB1 in protecting nerve cells against (endo)cannabinoid-induced apoptosis. This may be one reason why the use of drugs to lower cholesterol is tied to a higher depression risk, and why cholesterol levels have been correlated with increased death rates from suicide and other violent causes. Activation of CB1 enhances AMT activity through increased nitric oxide synthase (NOS) activity and a subsequent increase of NO production, whereas AMT activity is instead reduced by activation of the CB2 cannabinoid receptor, which inhibits NOS and NO release, also suggesting that the distribution of these receptors may drive AEA directional transport through the blood–brain barrier and other endothelial cells. As reviewed in 2016, "Many of the AMT (EMT) proposals have fallen by the wayside." To date, a transmembrane protein transporter has not been identified.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sarah Jamie Lewis** Sarah Jamie Lewis: Sarah Jamie Lewis is an anonymity and privacy researcher with a special interest in the privacy protocols (or lack thereof) of sex toys. She has been cited in academic research regarding the security and ethics considerations associated with this technology. Sarah Jamie Lewis: Lewis has shared concerns about the lack of legal framework related to the field of onion dildonics, stating that "We are currently sprinting into this world of connected sex toys and connected sex tech without regards to what consent, privacy, or security means in that context..." and recommending "100% encrypted peer-to-peer cyber sex over tor hidden services." More generally, due to the litigious environment in which computer security researchers operate, she has opted to build bespoke secure systems rather than fix broken systems.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biotic stress** Biotic stress: Biotic stress is stress that occurs as a result of damage done to an organism by other living organisms, such as bacteria, viruses, fungi, parasites, beneficial and harmful insects, weeds, and cultivated or native plants. It is different from abiotic stress, which is the negative impact of non-living factors on organisms, such as temperature, sunlight, wind, salinity, flooding and drought. The types of biotic stresses imposed on an organism depend on the climate where it lives as well as the species' ability to resist particular stresses. Biotic stress remains a broadly defined term and those who study it face many challenges, such as the greater difficulty in controlling biotic stresses in an experimental context compared to abiotic stress. Biotic stress: The damage caused by these various living and nonliving agents can appear very similar. Even with close observation, accurate diagnosis can be difficult. For example, browning of leaves on an oak tree caused by drought stress may appear similar to leaf browning caused by oak wilt, a serious vascular disease caused by a fungus, or the browning caused by anthracnose, a fairly minor leaf disease. Agriculture: Biotic stressors are a major focus of agricultural research, due to the vast economic losses caused to cash crops. The relationship between biotic stress and plant yield affects economic decisions as well as practical development. The impact of biotic injury on crop yield impacts population dynamics, plant-stressor coevolution, and ecosystem nutrient cycling. Biotic stress also impacts horticultural plant health and the ecology of natural habitats, and it can cause dramatic changes in the host. Plants are exposed to many stress factors, such as drought, high salinity or pathogens, which reduce the yield of the cultivated plants or affect the quality of the harvested products. Although there are many kinds of biotic stress, the majority of plant diseases are caused by fungi. Arabidopsis thaliana is often used as a model plant to study the responses of plants to different sources of stress. Agriculture: In history Biotic stresses have had huge repercussions for humanity; an example of this is the potato blight, an oomycete which caused widespread famine in England, Ireland and Belgium in the 1840s. Another example is grape phylloxera coming from North America in the 19th century, which led to the Great French Wine Blight. Agriculture: Today Losses to pests and disease in crop plants continue to pose a significant threat to agriculture and food security. During the latter half of the 20th century, agriculture became increasingly reliant on synthetic chemical pesticides to provide control of pests and diseases, especially within the intensive farming systems common in the developed world. However, in the 21st century, this reliance on chemical control is becoming unsustainable. Pesticides tend to have a limited lifespan due to the emergence of resistance in the target pests, and are increasingly recognised in many cases to have negative impacts on biodiversity, and on the health of agricultural workers and even consumers. Agriculture: Tomorrow Due to the implications of climate change, it is suspected that plants will have increased susceptibility to pathogens. Additionally, the elevated threat of abiotic stresses (e.g. drought and heat) is likely to contribute to plant pathogen susceptibility.
Effect on plant growth: Photosynthesis Many biotic stresses affect photosynthesis, as chewing insects reduce leaf area and virus infections reduce the rate of photosynthesis per leaf area. Vascular-wilt fungi compromise water transport and photosynthesis by inducing stomatal closure. Response to stress: Plants have co-evolved with their parasites for several hundred million years. This co-evolutionary process has resulted in the selection of a wide range of plant defences against microbial pathogens and herbivorous pests which act to minimise the frequency and impact of attack. These defences include both physical and chemical adaptations, which may either be expressed constitutively, or in many cases, are activated only in response to attack. For example, utilization of high metal ion concentrations derived from the soil allows plants to reduce the harmful effects of biotic stressors (pathogens, herbivores etc.), while preventing severe metal toxicity by safeguarding metal ion distribution throughout the plant with protective physiological pathways. Such induced resistance provides a mechanism whereby the costs of defence are avoided until defence is beneficial to the plant. At the same time, successful pests and pathogens have evolved mechanisms to overcome both constitutive and induced resistance in their particular host species. In order to fully understand and manipulate plant biotic stress resistance, we require a detailed knowledge of these interactions at a wide range of scales, from the molecular to the community level. Response to stress: Inducible defense responses to insect herbivores In order for a plant to defend itself against biotic stress, it must be able to differentiate between an abiotic and a biotic stress. A plant's response to herbivores starts with the recognition of certain chemicals that are abundant in the saliva of the herbivores. These compounds that trigger a response in plants are known as elicitors or herbivore-associated molecular patterns (HAMPs). These HAMPs trigger signalling pathways throughout the plant, initiating its defence mechanism and allowing the plant to minimise damage to other regions. Phloem feeders, like aphids, do not cause a great deal of mechanical damage to plants, but they are still regarded as pests and can seriously harm crop yields. Plants have developed a defence mechanism using the salicylic acid pathway, which is also used in infection stress, when defending themselves against phloem feeders. Plants can also mount a more direct attack on an insect's digestive system. The plants do this using proteinase inhibitors. These proteinase inhibitors prevent protein digestion: once in the digestive system of an insect, they bind tightly and specifically to the active site of protein-hydrolysing enzymes such as trypsin and chymotrypsin. This mechanism is most likely to have evolved in plants when dealing with insect attack. Response to stress: Plants detect elicitors in the insect's saliva. Once detected, a signal transduction network is activated. The presence of an elicitor causes an influx of Ca2+ ions to be released into the cytosol. This increase in cytosolic Ca2+ concentration activates target proteins such as calmodulin and other binding proteins.
Downstream targets, such as phosphorylation and transcriptional activation of stimulus-specific responses, are turned on by Ca2+-dependent protein kinases. In Arabidopsis, overexpression of the IQD1 calmodulin-binding transcriptional regulator leads to inhibition of herbivore activity. The role of calcium ions in this signal transduction network is therefore important. Response to stress: Calcium ions also play a large role in activating a plant's defensive response. When fatty acid amides are present in insect saliva, the mitogen-activated protein kinases (MAPKs) are activated. These kinases, when activated, play a role in the jasmonic acid pathway. The jasmonic acid pathway is also referred to as the octadecanoid pathway. This pathway is vital for the activation of defence genes in plants. The production of jasmonic acid, a phytohormone, is a result of the pathway. In an experiment using virus-induced gene silencing of two calcium-dependent protein kinases (CDPKs) in a wild tobacco (Nicotiana attenuata), it was discovered that the longer herbivory continued, the higher the accumulation of jasmonic acid in wild-type plants; in silenced plants, the production of more defence metabolites was observed, as well as a decrease in the growth rate of the herbivore used, the tobacco hornworm (Manduca sexta). This example demonstrates the importance of MAP kinases in plant defence regulation. Response to stress: Inducible defense responses to pathogens Plants are capable of detecting invaders through the recognition of non-self signals despite the lack of a circulatory or immune system like those found in animals. Often a plant's first line of defense against microbes occurs at the plant cell surface and involves the detection of microorganism-associated molecular patterns (MAMPs). MAMPs include nucleic acids common to viruses and endotoxins on bacterial cell membranes which can be detected by specialized pattern-recognition receptors. Another method of detection involves the use of plant immune receptors to detect effector molecules released into plant cells by pathogens. Detection of these signals in infected cells leads to an activation of effector-triggered immunity (ETI), a type of innate immune response. Both pattern-triggered immunity (PTI) and effector-triggered immunity (ETI) result from the upregulation of multiple defense mechanisms including defensive chemical signaling compounds. An increase in the production of salicylic acid (SA) has been shown to be induced by pathogenic infection. The increase in SA results in the production of pathogenesis-related (PR) genes which ultimately increase plant resistance to biotrophic and hemibiotrophic pathogens. Increases in jasmonic acid (JA) synthesis near the sites of pathogen infection have also been described. This physiological response to increase JA production has been implicated in the ubiquitination of jasmonate ZIM-domain (JAZ) proteins, which inhibit JA signaling, leading to their degradation and a subsequent increase in JA-activated defense genes. Studies regarding the upregulation of defensive chemicals have confirmed the role of SA and JA in pathogen defense. In studies utilizing Arabidopsis mutants carrying the bacterial NahG gene, which inhibits the production and accumulation of SA, the mutants were shown to be more susceptible to pathogens than the wild-type plants. This was thought to result from the inability to produce critical defensive mechanisms including increased PR gene expression.
Other studies conducted by injecting tobacco plants and Arabidopsis with salicylic acid resulted in higher resistance to infection by the alfalfa and tobacco mosaic viruses, indicating a role for SA biosynthesis in reducing viral replication. Additionally, studies performed using Arabidopsis with mutated jasmonic acid biosynthesis pathways have shown JA mutants to be at an increased risk of infection by soil pathogens. Along with SA and JA, other defensive chemicals have been implicated in plant viral pathogen defenses, including abscisic acid (ABA), gibberellic acid (GA), auxin, and peptide hormones. The use of hormones and innate immunity presents parallels between animal and plant defenses, though pattern-triggered immunity is thought to have arisen independently in each. Response to stress: Cross tolerance with abiotic stress Evidence shows that a plant undergoing multiple stresses, both abiotic and biotic (usually pathogen or herbivore attack), can show a positive effect on plant performance, with reduced susceptibility to biotic stress compared to how it responds to the individual stresses. The interaction leads to crosstalk between the respective hormone signalling pathways, which either induce or antagonize one another, restructuring the genetic machinery to increase tolerance and defense reactions. Response to stress: Reactive oxygen species (ROS) are key signalling molecules produced in response to biotic and abiotic stress cross tolerance. ROS are produced in response to biotic stresses during the oxidative burst. Dual stress imposed by ozone (O3) and pathogens affects crop tolerance and leads to altered host-pathogen interactions (Fuhrer, 2003). Alteration in the pathogenic potential of pests due to O3 exposure is of ecological and economic importance. Tolerance to both biotic and abiotic stresses has been achieved. In maize, breeding programmes have led to plants which are tolerant to drought and have additional resistance to the parasitic weed Striga hermonthica. Remote sensing: The Agricultural Research Service (ARS) and various government agencies and private institutions have provided a great deal of fundamental information relating spectral reflectance and thermal emittance properties of soils and crops to their agronomic and biophysical characteristics. This knowledge has facilitated the development and use of various remote sensing methods for non-destructive monitoring of plant growth and development and for the detection of many environmental stresses that limit plant productivity. Coupled with rapid advances in computing and position-locating technologies, remote sensing from ground-, air-, and space-based platforms is now capable of providing detailed spatial and temporal information on plant responses to their local environment that is needed for site-specific agricultural management approaches. This is very important today because increasing pressure on global food productivity due to population growth has created a demand for stress-tolerant crop varieties that has never been greater.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Circle of forces** Circle of forces: The circle of forces, traction circle, friction circle, or friction ellipse is a useful way to think about the dynamic interaction between a vehicle's tire and the road surface. The diagram below shows the tire from above, so that the road surface lies in the xy-plane. The vehicle to which the tire is attached is moving in the positive y direction. Circle of forces: In this example, the vehicle would be cornering to the right (i.e. the positive x direction points to the center of the corner). Note that the plane of rotation of the tire is at an angle to the actual direction that the tire is moving (the positive y direction). Put differently, rather than being allowed to simply "roll" in the direction that it is "pointing" (in this case, rightwards from the positive y direction), the tire instead must "slip" in a different direction from that which it is pointing in order to maintain its "forward" motion in the positive y direction. This difference between the direction the tire "points" (its plane of rotation) and the tire's actual direction of travel is the slip angle. Circle of forces: A tire can generate horizontal force where it meets the road surface by the mechanism of slip. That force is represented in the diagram by the vector F. Note that in this example F is perpendicular to the plane of the tire. That is because the tire is rolling freely, with no torque applied to it by the vehicle's brakes or drive train. However, that is not always the case. Circle of forces: The magnitude of F is limited by the dashed circle, but it can be any combination of the components Fx and Fy that does not extend beyond the dashed circle. (For a real-world tire, the circle is likely to be closer to an ellipse, with the y axis slightly longer than the x axis.) In the example, the tire is generating a component of force in the x direction (Fx) which, when transferred to the vehicle's chassis via the suspension system in combination with similar forces from the other tires, will cause the vehicle to turn to the right. Note that there is also a small component of force in the negative y direction (Fy). This represents drag that will, if not countered by some other force, cause the vehicle to decelerate. Drag of this kind is an unavoidable consequence of the mechanism of slip, by which the tire generates lateral force. Circle of forces: The diameter of the circle of forces, and therefore the maximum horizontal force that the tire can generate, depends upon many factors, including the design of the tire and its condition (age and temperature, for example), the qualities of the road surface, and the vertical load on the tire.
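The constraint described above can be written as a simple inequality: the demanded horizontal force components must keep the force vector inside the circle (or, more realistically, the ellipse). The sketch below is illustrative only; the function name and the example force limits are invented, and real tyre limits depend on the factors listed in the article (tyre design and condition, road surface, vertical load).

```python
def within_traction_limit(fx: float, fy: float,
                          fx_max: float, fy_max: float) -> bool:
    """Return True if the demanded horizontal tyre force (fx, fy) lies inside
    the friction ellipse with semi-axes fx_max and fy_max. With
    fx_max == fy_max this reduces to the friction circle."""
    return (fx / fx_max) ** 2 + (fy / fy_max) ** 2 <= 1.0

if __name__ == "__main__":
    # Hypothetical tyre: at most 5.0 kN in x and 5.2 kN in y (made-up numbers).
    print(within_traction_limit(3000.0, 4000.0, 5000.0, 5200.0))  # True
    print(within_traction_limit(4500.0, 4500.0, 5000.0, 5200.0))  # False
```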
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biconic cusp** Biconic cusp: The biconic cusp was one of the earliest suggestions for plasma confinement in a fusion reactor. It consists of two parallel electromagnets with the current running in opposite directions, creating oppositely directed magnetic fields. The two fields interact to form a "null area" between them where the fusion fuel can be trapped. Biconic cusp: Most early work on the cusp design was carried out at the Courant Institute in New York by Harold Grad in the late 1950s and early 1960s. Variations on the basic concept include the picket fence reactor built at Los Alamos in the 1950s and ring reactors. All of these devices leaked their fuel plasma at rates much greater than predicted and most work on the concept ended by the mid-1960s. Mikhail Ioffe later demonstrated why these problems arose. Biconic cusp: A later device that shares some design with the cusp is the polywell concept of the 1990s. This can be thought of as multiple cusps arranged in three dimensions. Description: The magnetic fields in this system were made by electromagnets placed close together. This was a theoretical construct used to model how to contain plasma. The fields were made by two coils of wire facing one another. These electromagnets had poles which faced one another, and in the center was a null point in the magnetic field. This was also termed a zero point field. These devices were explored theoretically by Dr. Harold Grad at NYU's Courant Institute in the late 1950s and early 1960s. Because the fields were planar symmetric, this plasma system was simple to model. Particle behavior: Simulations of these geometries revealed the existence of three classes of particles. The first class moved back and forth far away from the null point. These particles would be reflected close to the poles of the electromagnets and the plane cusp in the center. This reflection was due to the magnetic mirror effect. These are very stable particles, but their motion changes as they radiate energy over time. This radiation loss arose from acceleration or deceleration by the field and can be calculated using the Larmor formula. The second class of particles moved close to the null point in the center. Because these particles passed through locations with no magnetic field, their motions could be straight, with an infinite gyroradius. This straight motion caused them to make a more erratic path through the fields. The third class of particles was a transition between these types. Biconic cusps have recently been revived because of their similar geometry to the polywell fusion reactor.
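The radiated power mentioned above for the first class of particles is given by the Larmor formula. The form shown below is the standard non-relativistic SI expression from classical electrodynamics, quoted here for reference rather than taken from the cusp literature:

```latex
P = \frac{q^{2} a^{2}}{6 \pi \varepsilon_{0} c^{3}}
```

where q is the particle's charge, a its acceleration, ε0 the vacuum permittivity, and c the speed of light.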
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**VC-6** VC-6: SMPTE ST 2117-1, informally known as VC-6, is a video coding format. Overview: The VC-6 codec is optimized for intermediate, mezzanine or contribution coding applications. Typically, these applications involve compressing finished compositions for editing, contribution, primary distribution, archiving and other applications where it is necessary to preserve image quality as close to the original as possible, whilst reducing bitrates, and optimizing processing, power and storage requirements. VC-6, like other codecs in this category uses only intra-frame compressions, where each frame is stored independently and can be decoded with no dependencies on any other frame. The codec implements lossless and lossy compression, depending on the encoding parameters that have been selected. It was standardized in 2020. Earlier variants of the codec have been deployed by V-Nova since 2015 under the trade name Perseus. The codec is based on hierarchical data structures called s-trees, and does not involve DCT or wavelet transform compression. The compression mechanism is independent of the data being compressed, and can be applied to pixels as well as other non-image data.Unlike DCT based codecs, VC-6 is based on hierarchical, repeatable s-tree structures that are similar to modified quadtrees. These simple structures provide intrinsic capabilities, such as massive parallelism and the ability to choose the type of filtering used to reconstruct higher-resolution images from lower-resolution images. In the VC-6 standard an up-sampler developed with an in-loop Convolutional Neural Network is provided to optimize the detail in the reconstructed image, without requiring a large computational overhead. The ability to navigate spatially within the VC-6 bitstream at multiple levels also provides the ability for decoding devices to apply more resources to different regions of the image allowing for Region-of-Interest applications to operate on compressed bitstreams without requiring a decode of the full-resolution image. History: At the NAB Show in 2015, V-Nova claimed "2x–3x average compression gains, at all quality levels, under practical real-time operating scenarios versus H.264, HEVC and JPEG2000.". Making this announcement on 1 April before a major trade show attracted the attention of many compression experts. Since then, V-Nova have deployed and licensed the technology, known at the time as Perseus, in both contribution and distribution applications around the world including Sky Italia, Fast Filmz, Harmonic Inc, and others. A variant of the technology optimized for enhancing distribution codec will soon be standardized as MPEG-5 Part-2 LCEVC. Core concepts: Planes The standard describes a compression algorithm that is applied to independent planes of data. These planes might be RGB or RGBA pixels originating in a camera, YCbCr pixels from a conventional TV-centric video source or some other planes of data. There may be up to 255 independent planes of data, and each plane can have a grid of data values of dimensions up to 65535 x 65535. The SMPTE ST 2117-1 standard focuses on compressing planes of data values, typically pixels. To compress and decompress the data in each plane, VC-6 uses hierarchical representations of small tree-like structure that carry metadata used to predict other trees. There are 3 fundamental structures repeated in each plane. Core concepts: S-tree The core compression structure in VC-6 is the s-tree. It is similar to the quadtree structure common in other schemes. 
An s-tree is composed of nodes arranged in a tree structure, where each node links to 4 nodes in the next layer. The total number of layers above the root node is known as the rise of the s-tree. Compression is achieved in an s-tree by using metadata to signal whether levels can be predicted, with selective carrying of enhancement data in the bitstream. The more data that can be predicted, the less information that is sent, and the better the compression ratio. Core concepts: Tableau The standard defines a tableau as the root node, or the highest layer of an s-tree, that contains nodes for another s-tree. Like the generic s-trees from which they are constructed, tableaux are arranged in layers with metadata in the nodes indicating whether or not higher layers are predicted or transmitted in the bitstream. Core concepts: Echelon The hierarchical s-tree and tableau structures in the standard are used to carry enhancements (called resid-vals) and other metadata to reduce the amount of raw data that needs to be carried in the bitstream payload. The final hierarchical tool is the ability to arrange the tableaux so that data from each plane (i.e. pixels) can be dequantized at different resolutions and used as predictors for higher resolutions. Each of these resolutions is defined by the standard as an echelon. Each echelon within a plane is identified by an index, where a more negative index indicates a lower resolution and a larger, more positive index indicates a higher resolution. Bitstream overview: VC-6 is an example of intra-frame coding, where each picture is coded without referencing other pictures. It is also intra-plane, where no information from one plane is used to predict another plane. As a result, the VC-6 bitstream contains all of the information for all of the planes of a single image. An image sequence is created by concatenating the bitstreams for multiple images, or by packaging them in a container such as MXF, QuickTime or Matroska. Bitstream overview: The VC-6 bitstream is defined in the standard by pseudocode, and a reference decoder has been demonstrated based on that definition. The primary header is the only fixed structure defined by the standard. The secondary header contains marker and sizing information depending on the values in the primary header. The tertiary header is entirely calculated, and then the payload structure is derived from the parameters calculated during header decoding. Decoding overview: The standard defines a process called plane reconstruction for decoding images from a bitstream. The process starts with the echelon having the lowest index. No predictions are used for this echelon. Firstly, the bitstream rules are used to reconstruct residuals. Next, desparsification and entropy decoding processes are performed to fill the grid with data values at each coordinate. These values are then dequantised to create full-range values that can be used as predictions for the echelon with the next highest index. Each echelon uses the upsampler specified in the header to create a predicted plane from the echelon below; this prediction is added to the residual grid of the current echelon, and the result can in turn be upsampled as a prediction for the next echelon. The final, full-resolution echelon defined by the standard is at index 0, and its result is displayed rather than used as a prediction for another echelon. Upsampler options: Basic options The standard defines a number of basic upsamplers to create higher-resolution predictions from lower-resolution echelons. 
There are two linear upsamplers, bicubic and sharp, and a nearest-neighbour upsampler. Convolutional Neural Network Upsampler Six different non-linear upsamplers are defined by a set of processes and coefficients that are provided in JSON format. These coefficients were generated using Convolutional Neural Network techniques.
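The echelon-by-echelon reconstruction described above lends itself to a compact illustration. The following is a minimal sketch, not the normative ST 2117-1 process: the function names, the use of NumPy arrays, the assumption that each echelon doubles the resolution of the one below, and the single scalar quantisation step are all illustrative assumptions.

```python
import numpy as np

def nearest_neighbour_upsample(plane: np.ndarray) -> np.ndarray:
    # Double each dimension by repeating samples (a stand-in for the simple upsampler options).
    return plane.repeat(2, axis=0).repeat(2, axis=1)

def reconstruct_plane(residuals_by_echelon: dict, quant_step: float = 1.0) -> np.ndarray:
    """Illustrative plane reconstruction: residuals_by_echelon maps echelon index
    (negative = lower resolution, 0 = full resolution) to a quantised residual grid,
    each grid twice the size of the one below."""
    prediction = None
    for index in sorted(residuals_by_echelon):                # most negative (lowest) index first
        residual = residuals_by_echelon[index] * quant_step   # stand-in for dequantisation
        if prediction is None:
            plane = residual                                  # lowest echelon: no prediction used
        else:
            plane = nearest_neighbour_upsample(prediction) + residual
        prediction = plane                                    # feeds the next, higher echelon
    return plane                                              # the echelon at index 0
```

Swapping nearest_neighbour_upsample for a sharper filter, or for one derived from the CNN coefficients, is the point at which the upsampler options described above would differ.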
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Auditosensory cortex** Auditosensory cortex: Auditosensory cortex is the part of the auditory system that is associated with the sense of hearing in humans. It occupies the bilateral primary auditory cortex in the temporal lobe of the mammalian brain. The term is used to describe Brodmann area 42 together with the transverse temporal gyri of Heschl. The auditosensory cortex takes part in the reception and processing of auditory nerve impulses, which pass sound information from the thalamus to the brain. Abnormalities in this region are responsible for many disorders in auditory abilities, such as congenital deafness, true cortical deafness, primary progressive aphasia and auditory hallucination. History: The auditosensory cortex defines Brodmann area 42, which is part of the primary auditory cortex. It is also known as the posterior transverse temporal area, located superiorly within the temporal lobe of the cerebral cortex. The cortical area has been studied in a variety of mammals, including humans. It is a functional region found to serve essential roles in hearing. History: Studies by Richard Ladislaus Heschl first revealed the anatomical features of this cortical region in 1878. Heschl found a cortical structure that appeared differently from most of the temporal lobe. The distinct structure occupied Brodmann area 42 and was later named the transverse temporal gyri of Heschl. The discovery provided insight into the anatomical network within the primary cortex. It is the first site to process incoming sound information. Due to the close correspondence, Brodmann area 42 is also referred to as Heschl's gyrus. Mapped by German neurologist Korbinian Brodmann in 1909, the auditosensory cortex is one of the 52 cortical regions identified in the cerebral cortex according to their histological characteristics, density, shape, distribution and cell body size. These subdivided cortical regions later became known as the Brodmann areas. Brodmann was the pioneer of cerebral cortex mapping. He grouped several cortical regions based on their nervous function, two of which are areas 41 and 42 for auditory processing. It has been suggested that Brodmann area 42 is a homotypical acoustic association area. Structure: Anatomical position The primary auditory cortex lies medially in the superior temporal gyrus of the human brain. It is responsible for receiving signals from the medial geniculate nucleus. Within the primary auditory cortex, the auditosensory cortex extends posteromedially over the gyrus. Brodmann area 42 is an auditory core region bordered medially by Brodmann area 41 and laterally by Brodmann area 22. The auditosensory cortex demarcates the lateral edge of Brodmann area 41. Structure: Relationship to the transverse temporal gyri The auditosensory cortex is a differentiated anatomical area within the posterior-medial field of the transverse temporal gyrus of Heschl in the lateral sulcus. The cortex of the transverse temporal gyrus of Heschl forms a homogeneous structural region with Brodmann area 22. In contrast to other temporal lobe gyri, the transverse temporal gyrus has the distinct feature of stretching mediolaterally towards the brain centre. Function: The main and most apparent function of the auditosensory cortex is hearing. Hearing is a sense of sound reception and perception. Sound reception is the receiving of sound stimuli. The sound wave is transmitted to our auditory apparatus from the external environment. 
This sensory signal is then converted to an electrical signal in a process called sensory transduction. This electrical impulse is carried from the inner ear to the brainstem via the vestibulocochlear nerve (cranial nerve VIII). Furthermore, the auditory impulse is recognised, organised and interpreted as sensory information. The different properties of sound waves are necessary to help the comprehension of language and sound. Language competence directly correlates with the ability of the auditosensory cortex, in terms of the strength and frequency of its neuronal activity. Function: Reception and Perception of auditory nerve impulses The primary auditory cortex includes the auditosensory cortex (Brodmann area 42) and the auditopsychic cortex (Brodmann area 41). The primary function of the auditosensory cortex is the sense of hearing. It is the initial cortical destination of auditory nerve impulses from the thalamus. The characteristics of neural activities in this cortex correspond with the physical properties of sound waves. Function: The perception of auditory signals begins with a nervous impulse travelling from the inner ear to the cochlear nuclei of the brainstem, which is the first relay station. In an ascending pathway, various acoustic reflexes and sound localisation are regulated via relay stations. The impulse reaches the auditory cortical projections on the superior temporal gyrus, which is the auditosensory cortex. This is the first site of unprocessed recognition of sound. The impulse propagates across the auditosensory area, the auditopsychic area and eventually the entire temporal lobe. Therefore, this allows the formation of memory and comprehension of sound to take place. The posterior auditopsychic region has a site especially for the understanding of speech called Wernicke's area (Brodmann area 22). Function: The auditosensory cortex alone is insufficient for the complete production and reception of language. The subcortical structures, such as the thalamus, are necessary for controlling emotional and cognitive integration, and the cerebellum for coordinating movements. Function: Analysis of sound properties The auditosensory cortex can analyse acoustic characteristics, namely pitch, loudness and timbre. A higher frequency gives a higher pitch, whereas a lower frequency gives a lower pitch. A larger amplitude gives a higher volume, whereas a smaller amplitude gives a lower volume; amplitude determines intensity. Timbre is the characteristic of a tone that distinguishes sounds with the same pitch and volume. Factors affecting timbre are the harmonics, vibration and envelope of the wave. Function: The transverse temporal gyrus, which contains the auditosensory cortex, processes sound impulses of low frequency. Its lateral aspect maps the sound impulse in a tonotopic organisation that produces a mirror image of spatial gradients of frequency sensitivity. It depends on the duration and intensity of the sound stimuli. The early processing of speech recognition requires the ability of the transverse temporal gyrus to discriminate frequency. Hence, this region can distinguish phonetic characteristics of sound. The responsiveness to prosody corresponds to the sensitivity to slight variations in frequency and duration. Function: Linguistic competence Language competence is acquired from the ability of the auditosensory cortex to interpret sound stimuli. 
The information processing pathway in the transverse temporal gyrus is necessary for the recognition and comprehension of speech, as described by the two-streams hypothesis. The ventral pathway is responsible for processing linguistic semantic information that allows the understanding of meaning. The dorsal stream is responsible for processing phonological information that forms the structure of language. Function: Neuroplasticity extends to our auditory perception, which can be shaped by stimuli from the environment, memory and attentional factors. Neural activities in other brain areas are closely bound up with auditosensory processing in the transverse temporal gyrus. For instance, attention, focus and face perception all influence our language competence. Clinical Significance: There is a strong association between the cerebral cortex and auditory function. Animal studies have shown that extirpation of the auditosensory cortex leads to the loss of responsiveness to previously learnt tones. The locations of auditory cortical neurones and conformations of the primary auditory cortex are unique to every individual. Therefore, any surgical procedure should take these anatomical variations into account to minimise the damage to auditory and language functions. Functional brain mapping (FBM) is one of the pre-operative procedures. Clinical Significance: Congenital deafness Congenital deafness is the loss of hearing present at birth. The primary auditory cortex is never stimulated by auditory signals in these patients. This condition also affects the development of the auditory cortex, which gives rise to auditory functional deficits. There are fewer nerve fibres and less myelination in patients' primary auditory cortex, illustrated by the higher grey matter-to-white matter ratios in the Heschl gyrus. The cells and synapses of the deaf auditory pathway undergo dystrophy. If infants receive cochlear implants during the early critical period, the neurosensory functions can be restored. A recent study concluded that congenital deafness does not damage the general cortical cytoarchitecture. However, there is anatomical dystrophy of deep layers over higher-order cortical fields. The sensory deprivation of auditory neurones induces dystrophy beyond the primary auditory cortex, namely in the dorsal zone of the auditory cortex (DZ) and the secondary auditory cortex (A2). Clinical Significance: True cortical deafness Cortical deafness is characterised by unresponsiveness to both verbal and nonverbal sounds due to cortical lesions. However, this sensorineural hearing loss shows no damage to the auditory pathway from the cochlea to the upper brainstem. The onset is usually during childhood; affected individuals have a severely impaired ability to distinguish different vowel and consonant sounds, and an impaired capability to comprehend auditory information. They have no subjective experience of hearing as they are unable to process acoustic impulses. They may learn how to identify the meaning of nonverbal sounds correctly. Clinical Significance: Primary Progressive Aphasia Primary progressive aphasia is characterised by the progressive impairment of speech production, comprehension and communication. It is secondary to neurodegenerative diseases, for instance, Alzheimer's disease and frontotemporal lobar degeneration. The Heschl gyrus undergoes deterioration, as shown by the low activity of the primary auditory cortex after stimulation. 
The symptoms are difficulty and delay in communication and speech organisation. As the condition advances, patients may become reluctant to communicate or even unable to understand verbal or written language. Clinical Significance: Auditory impairment in Mild Traumatic Brain Injury Difficulty in auditory processing is a complication of mild traumatic brain injury (mTBI). mTBI patients have reduced activation of the primary auditory cortex, as shown by fMRI screening. Neural communication between the left and right primary auditory cortices is poorly transmitted. As a result, the lateralisation and responsiveness of the cerebral cortex are affected. Temporal fine structure processing degenerates, as shown by reduced temporal resolution. It is often due to diffuse axonal injury and demyelination. There may be peripheral and central symptoms, such as reduced auditory understanding in a complex listening environment, central auditory processing disorder and auditory hallucination. mTBI patients can develop hyperacusis, which is hypersensitivity to environmental noise. Clinical Significance: Auditory hallucination in Schizophrenia Auditory hallucination is one of the major symptoms displayed by schizophrenia patients. Studies supported by functional imaging and electrophysiology have shown a possible correlation between the auditory cortex and auditory hallucinations. In an average individual, speaking generates speaking-induced suppression, which reduces the activity in the primary auditory cortex. This acts as a physiological mechanism in the auditory system that allows the speaker to be more focused on externally produced sounds. Clinical Significance: However, this is not demonstrated in individuals with schizophrenia. In contrast, schizophrenia patients experience increased activity in the auditory cortex instead of sound suppression. Even in a silent environment without external auditory stimuli, schizophrenia patients tend to have abnormal activation of the auditory cortex, leading to auditory hallucinations. The volume of the auditory cortex in these individuals is also much smaller than in those without the mental disorder.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pirolate** Pirolate: Pirolate (CP-32,387) is an antihistamine drug with a tricyclic chemical structure which was patented as an "antiallergen". It was never marketed and there are very few references to it in the literature.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Endophthalmitis** Endophthalmitis: Endophthalmitis, or endophthalmia, is inflammation of the interior cavity of the eye, usually caused by an infection. It is a possible complication of all intraocular surgeries, particularly cataract surgery, and can result in loss of vision or loss of the eye itself. Infection can be caused by bacteria or fungi, and is classified as exogenous (infection introduced by direct inoculation as in surgery or penetrating trauma), or endogenous (organisms carried by blood vessels to the eye from another site of infection, which is more common in people who have an immunocompromised state). Other non-infectious causes include toxins, allergic reactions, and retained intraocular foreign bodies. Intravitreal injections are a rare cause, with an incidence rate usually less than 0.05%. Endophthalmitis: Endophthalmitis requires immediate medical attention to ensure the condition is diagnosed as soon as possible and treatment is started in order to reduce the risk of the person losing vision in the eye. Treatment options depend on the cause and whether the condition is caused by an endogenous or exogenous mechanism. For people with suspected exogenous endophthalmitis, a biopsy (vitreous tap) and treatment with antibiotics (usually by injection) is usually the first line of treatment. Once the person's response to the antibiotics is assessed, different further treatment options may be considered, including surgery. Signs and symptoms: People with endophthalmitis often have a history of recent eye surgery or penetrating trauma to the eye. Signs and symptoms: Symptoms include severe pain, vision loss, and intense redness of the conjunctiva. Hypopyon can be present and should be looked for on examination by a slit lamp. It can first present with the 'black dot sign' (Martin-Farina sign), where patients may report a small area of loss of vision that resembles a black dot or fly. Pus is often contained in the inflamed tissue of the eye (purulent). Signs and symptoms: An eye exam should be considered in systemic candidiasis, as up to 3% of cases of candidal blood infections lead to endophthalmitis. Complications Panophthalmitis — Progression to involve all the coats of the eye. Corneal ulcer Orbital cellulitis Impairment of vision Complete loss of vision Loss of eye architecture Enucleation Cause: Bacteria: N. meningitidis, Staphylococcus aureus, S. epidermidis, S. pneumoniae, other streptococcal spp., Cutibacterium acnes, Pseudomonas aeruginosa, other gram negative organisms. Viruses: Herpes simplex virus. Cause: Fungi: Candida spp. Fusarium Parasites: Toxoplasma gondii, Toxocara. A recent systematic review found that the most common source of infectious transmission following cataract surgery was attributed to a contaminated intraocular solution (i.e. irrigation solution, viscoelastic, or diluted antibiotic), although there is a large diversity of exogenous microorganisms that can travel via various routes including the operating room environment, phacoemulsification machine, surgical instruments, topical anesthetics, intraocular lens, autoclave solution, and cotton wool swabs. Late-onset endophthalmitis is mostly caused by Cutibacterium acnes. Causative organisms are not present in all cases. Endophthalmitis can emerge by entirely sterile means, e.g. an allergic reaction to a drug administered intravitreally. Diagnosis: Microbiology testing. PCR. TASS vs Infectious endophthalmitis. 
Prevention: Different approaches have been suggested to prevent exogenous endophthalmitis after cataract surgery. Perioperative antibiotic injections into the eye, specifically cefuroxime at the end of surgery, lower the chance of endophthalmitis. Moderate evidence also supports antibiotic eye drops (levofloxacin or chloramphenicol) combined with antibiotic injections (cefuroxime or penicillin) to reduce the risk of endophthalmitis after cataract surgery compared with injections or eye drops alone. Periocular injection of penicillin along with chloramphenicol-sulphadimidine eye drops, and an intracameral cefuroxime injection with topical levofloxacin, also reduce the risk of developing endophthalmitis following cataract surgery for some people. For people undergoing intravitreal injections, antibiotics are not as effective at preventing this type of infection. Studies have demonstrated no difference between rates of infection with and without antibiotics when intravitreal injections are performed. There is evidence to suggest that a solution of povidone-iodine applied pre-injection may be effective at preventing some cases of endophthalmitis in people undergoing intravitreal injections. Treatment: Urgent medical attention is required if a person has suspected endophthalmitis. An ophthalmologist is preferred, ideally a vitreoretinal specialist. The first step in treatment (usual care) is an intravitreal injection of potent antibiotics and a biopsy to determine the type of infection. Injections of vancomycin (to kill Gram-positive bacteria) and ceftazidime (to kill Gram-negative bacteria) are routine. Even though antibiotics can have negative impacts on the retina in high concentrations, since visual acuity worsens in 65% of endophthalmitis patients and prognosis gets poorer the longer an infection goes untreated, most medical professionals make the clinical judgment that immediate intervention with antibiotics is necessary. People with endophthalmitis may also require urgent surgery (pars plana vitrectomy). In some cases, evisceration may be necessary to remove a severe and intractable infection which could result in a blind and painful eye. Treatment: Steroids may be injected intravitreally if the cause is allergic. In people with acute endophthalmitis, combined steroid treatment with antibiotics has been found to improve visual outcomes compared with patients treated only with antibiotics, but any improvement in the resolution of acute endophthalmitis is unknown.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cyclic steps** Cyclic steps: Cyclic steps are rhythmic bedforms associated with Froude super-critical flow instability. They are a type of sediment wave, and are created when supercritical sediment-laden water (turbidity currents) travels downslope through sediment beds. Each ‘step’ has a steep drop, and together they tend to migrate upstream. On the ocean floor, this phenomenon was first shown to be possible in 2006, although it was observed in open-channel flows over a decade earlier. Geological features appearing to be submarine cyclic steps have been detected in the northern lowlands of Mars in the Aeolis Mensae region, providing evidence of an ancient Martian ocean. Formation: There are many parameters which govern the formation of cyclic steps; bed slope, bed porosity, erosion resistance, sediment concentration, and flow rate all play a role. Tilting flumes can be used to create cyclic steps in subaerial laboratory conditions, provided the Froude number is high enough. If the Froude number is lower than required, antidunes will form instead. Additionally, if the sediment is too fine then chute-and-pool features will form. In subaqueous conditions, most of the work has traditionally been in building mathematical, rather than physical, models of cyclic step formation. However, cyclic steps have attracted increasing scientific attention in the past decade, and numerous real world examples of cyclic steps have now been found.Cyclic steps can be categorized by the rate at which sediment is deposited (the aggradation rate) on different parts of the steps. The categorization concerns the difference in rate on the stoss (flow-facing) and lee (flow-opposing) sides of the feature. Type-1 cyclic steps have more lee erosion than there is stoss aggradation, Type-2 have a roughly equal amount of lee erosion and stoss aggradation, and Type-3 has aggradation on both sides. Type-1 cyclic steps play an important role in canyon formation. Type-2 cyclic steps have been created in the laboratory, in contrast to Type-3 which is common on the sea floor but is harder to create in laboratory conditions - it was first made experimentally in 2013. Types 1, 2, and 3 are also called 'falling', 'transportational', and 'climbing', respectively. Laboratory work has successfully created all three types of cyclic steps in open-channel flows. Relation to other bedforms: In density flows, antidunes can turn into cyclic steps by wave breaking. Fluid flow is Froude-supercritical over the entirety of antidunes, whereas the flow alternates between the sub- and super-criticality over cyclic steps (with hydraulic jumps between cycles). Additionally, cyclic steps tend to have a much larger wavelength-to-flow-thickness ratio and a higher suspension index (ratio of shear velocity to sediment settling velocity). Antidunes are typically unstable (although they can be made stable in laboratory conditions), in contrast to cyclic steps. Despite these differences, it is not uncommon for researchers to incorrectly label a cyclic step as an antidune. Cyclic steps also have similarities to chute-and-pool features. Like cyclic steps, chute-and-pool flows undergo hydraulic jumps, although the flow does not undergo repeated transitions from sub- to super-critical. When the flow remains subcritical over the whole feature, ripples and dunes form instead. Examples: Attention on real world cyclic steps has mostly been focused on the ocean floor and at river deltas. 
Several submarine cyclic steps have been discovered off the coast of California, such as those in the underwater canyons Monterey Canyon and Eel Canyon. They have also been discovered in the South China Sea, at the South Taiwan shoal and the West Penghu submarine canyons. The cyclic step structure at the South Taiwan shoal is the longest ever observed (as of 2015), consisting of 19 steps and ranging over 100 kilometres (62 mi). They have also been discovered in the Japan Sea at the Toyama deep-sea channel. On Mars, they have been observed at Aeolis Mensae. At prodeltas (the portion of a river delta furthest from shore), cyclic steps have been observed in the Mediterranean. The wavelength of prodelta cyclic steps tends to be an order of magnitude smaller than their seafloor counterparts; the Mediterranean cyclic steps have a wavelength ranging from 20 to 100 metres (66 to 328 ft) whereas submarine cyclic steps are typically measured in kilometers.While no modern examples have been found, cyclic steps can also form within rivers. Geologic evidence from the Cambrian-Ordovician Potsdam Group strata indicates that the Quebec Basin once possessed this type of cyclic step. Glaciolacustrine cyclic steps have also been found in modern Quebec. Cyclic steps can also form along underwater volcanos, such as those in the Punta del Rosario fan, as well as along carbonate slopes and under bedrock streams. Cyclic steps do not need to form underwater - wind can cause them too. Katabatic winds may have caused cyclic steps to form on the ice sheet of Antarctica, and are actively forming cyclic steps at Mars’ poles.
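As a rough numerical check of the supercriticality condition discussed under Formation, the sketch below evaluates a densimetric Froude number for a hypothetical turbidity current; the function name and the chosen densities, velocity and thickness are illustrative assumptions rather than values taken from any of the studies cited above.

```python
from math import sqrt

def densimetric_froude(velocity, thickness, rho_current, rho_ambient, g=9.81):
    """Densimetric Froude number Fr = U / sqrt(g' h) for a density (turbidity) current,
    with reduced gravity g' = g * (rho_current - rho_ambient) / rho_ambient."""
    g_reduced = g * (rho_current - rho_ambient) / rho_ambient
    return velocity / sqrt(g_reduced * thickness)

# Illustrative numbers only: a 5 m thick current moving at 3 m/s with about 2% excess density
fr = densimetric_froude(velocity=3.0, thickness=5.0, rho_current=1046.0, rho_ambient=1025.0)
print(f"Fr = {fr:.2f} -> {'supercritical' if fr > 1 else 'subcritical'}")   # Fr is about 3 here
```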
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Addition-chain exponentiation** Addition-chain exponentiation: In mathematics and computer science, optimal addition-chain exponentiation is a method of exponentiation by a positive integer power that requires a minimal number of multiplications. Using the form of the shortest addition chain, with multiplication instead of addition, computes the desired exponent (instead of multiple) of the base. (This corresponds to OEIS sequence A003313 (Length of shortest addition chain for n).) Each exponentiation in the chain can be evaluated by multiplying two of the earlier exponentiation results. More generally, addition-chain exponentiation may also refer to exponentiation by non-minimal addition chains constructed by a variety of algorithms (since a shortest addition chain is very difficult to find). Addition-chain exponentiation: The shortest addition-chain algorithm requires no more multiplications than binary exponentiation and usually fewer. The first example of where it does better is for a^15, where the binary method needs six multiplications but a shortest addition chain requires only five: a^15 = a × (a × [a × a^2]^2)^2 (binary, 6 multiplications); a^15 = ([a^2]^2 × a)^3 (shortest addition chain, 5 multiplications). Addition-chain exponentiation: a^15 = a^3 × ([a^3]^2)^2 (also a shortest addition chain, 5 multiplications). On the other hand, the determination of a shortest addition chain is hard: no efficient optimal methods are currently known for arbitrary exponents, and the related problem of finding a shortest addition chain for a given set of exponents has been proven NP-complete. Even given a shortest chain, addition-chain exponentiation requires more memory than the binary method, because it must potentially store many previous exponents from the chain. So in practice, shortest addition-chain exponentiation is primarily used for small fixed exponents for which a shortest chain can be pre-computed and is not too large. Addition-chain exponentiation: There are also several methods to approximate a shortest addition chain, which often require fewer multiplications than binary exponentiation; binary exponentiation itself is a suboptimal addition-chain algorithm. The optimal algorithm choice depends on the context (such as the relative cost of the multiplication and the number of times a given exponent is re-used). The problem of finding the shortest addition chain cannot be solved by dynamic programming, because it does not satisfy the assumption of optimal substructure. That is, it is not sufficient to decompose the power into smaller powers, each of which is computed minimally, since the addition chains for the smaller powers may be related (to share computations). For example, in the shortest addition chain for a^15 above, the subproblem for a^6 must be computed as (a^3)^2 since a^3 is re-used (as opposed to, say, a^6 = a^2(a^2)^2, which also requires three multiplies). Addition-subtraction–chain exponentiation: If both multiplication and division are allowed, then an addition-subtraction chain may be used to obtain even fewer total multiplications+divisions (where subtraction corresponds to division). However, the slowness of division compared to multiplication makes this technique unattractive in general. For exponentiation to negative integer powers, on the other hand, since one division is required anyway, an addition-subtraction chain is often beneficial. 
One such example is a^−31, where computing 1/a^31 by a shortest addition chain for a^31 requires 7 multiplications and one division, whereas the shortest addition-subtraction chain requires 5 multiplications and one division: a^−31 = a / ((((a^2)^2)^2)^2)^2 (addition-subtraction chain, 5 mults + 1 div). For exponentiation on elliptic curves, the inverse of a point (x, y) is available at no cost, since it is simply (x, −y), and therefore addition-subtraction chains are optimal in this context even for positive integer exponents.
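To make the chain evaluation concrete, here is a small sketch, assuming a chain is supplied as a list of index pairs into the exponents already produced; the function name and data layout are illustrative, not from the article or from any particular library.

```python
def exponentiate_with_chain(base, chain):
    """Evaluate powers along an addition chain.
    chain is a list of (i, j) pairs: each new exponent is exponents[i] + exponents[j],
    so each step costs exactly one multiplication."""
    exponents = [1]
    powers = [base]
    for i, j in chain:
        exponents.append(exponents[i] + exponents[j])
        powers.append(powers[i] * powers[j])
    return exponents, powers

# Chain 1, 2, 3, 6, 12, 15: five multiplications, matching a^15 = a^3 * ((a^3)^2)^2 above
steps_15 = [(0, 0), (1, 0), (2, 2), (3, 3), (4, 2)]
exps, vals = exponentiate_with_chain(2, steps_15)
assert exps[-1] == 15 and vals[-1] == 2 ** 15
```

The encoded chain reaches exponent 15 in five steps, one fewer than the six multiplications the binary method needs for the same power.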
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cytotrienin A** Cytotrienin A: Cytotrienin A (Cyt A) is a secondary metabolite from Streptomyces sp. RK95-74, a strain isolated from soil in Japan in 1997. Cyt A is an ansamycin. Cytotrienin A induces apoptosis in HL-60 cells, as well as inhibiting translation in eukaryotes by inhibiting eukaryotic elongation factor 1A (eEF1A), which can act as an oncogene. These functions lead to the potential of the microbial metabolite acting as an anticancer agent, specifically for blood cancers, as it has proved to be more effective with leukemic cell lines. Cyt A is thought to induce apoptosis by activating c-Jun N-terminal kinase (JNK), p38 mitogen-activated protein kinase (MAPK), and p36 myelin basic protein (MBP) kinase. Structure: Cytotrienin A is an ansamycin with a macrocyclic, twenty-one-carbon lactam, featuring an E,E,E-triene and an unusual aminocyclopropane carboxylic acid side chain. The structure features four stereocenters, 3 of which are contiguous. The absolute stereochemistry of the naturally occurring metabolite revealed it to be (+)-Cytotrienin A. Biology and Biochemistry: Cytotrienin A has a median effective dose value of 7.7 nM to induce apoptosis in HL-60 cells. It was also shown that the metabolite has greater growth inhibitory activity on HL-60 cells at low concentrations and little impact on non-tumorous cells, while at high concentrations the reverse relationship was seen. The apoptosis pathway was shown to involve the proteolytic activation of MST/Krs proteins by caspase-3, which results in the activation of p36 MBP kinase through the creation of ROS. The concentrations needed to induce this pathway were found to be the same as those required to induce apoptosis in HL-60 cells. Cyt A also activates c-Jun N-terminal kinase, and its apoptotic effects are inhibited by dominant-negative c-Jun; the presence of inactive MST/Krs proteins together with dominant-negative c-Jun completely inhibited apoptosis. Additionally, cytotrienin A acts to inhibit eEF-1A, thereby inhibiting translation. This can make the tumor less effective at producing anti-apoptotic mediators and can lower the tumor's drug resistance. The translation inhibition is thought to occur before apoptosis can be detected. Inhibition of HUVEC tube formation was also shown, which indicates an inhibition of angiogenesis and further evidence of its anti-cancer potential. In A549 cells, where cytotrienin A has been shown to be less effective, Cyt A inhibits the expression of ICAM-1. This occurs when TNF receptor 1 is shed through the activation of extracellular signal-regulated kinase (ERK) and p38 MAPK. Cyt A inhibited ICAM-1 expression induced by TNF-α and IL-1α at similar concentrations, but was found to be inhibited by TAPI-2, an inhibitor of the TNF-α-converting enzyme (TACE).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sternocleidomastoid branches of occipital artery** Sternocleidomastoid branches of occipital artery: The two sternocleidomastoid branches of the occipital artery (sternocleidomastoid artery) arise directly from the occipital artery and are the initial two branches of this artery. Uncommonly, the lower sternocleidomastoid branch can branch directly from the external carotid. The lower sternocleidomastoid branch passes infero-external to the hypoglossal nerve before descending into the substance of the muscle from which its name is derived. The upper sternocleidomastoid branch diverges from the main trunk at the deep border of the proximal end of the posterior digastric muscle belly, coursing with the spinal accessory nerve prior to arborising into the sternocleidomastoid.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Coulomb collision** Coulomb collision: A Coulomb collision is a binary elastic collision between two charged particles interacting through their own electric field. As with any inverse-square law, the resulting trajectories of the colliding particles are hyperbolic Keplerian orbits. This type of collision is common in plasmas where the typical kinetic energy of the particles is too large to produce a significant deviation from the initial trajectories of the colliding particles, and the cumulative effect of many collisions is considered instead. The importance of Coulomb collisions was first pointed out by Lev Landau in 1936, who also derived the corresponding kinetic equation which is known as the Landau kinetic equation. Simplified mathematical treatment for plasmas: In a plasma, a Coulomb collision rarely results in a large deflection. The cumulative effect of the many small angle collisions, however, is often larger than the effect of the few large angle collisions that occur, so it is instructive to consider the collision dynamics in the limit of small deflections. Simplified mathematical treatment for plasmas: We can consider an electron of charge −e and mass m_e passing a stationary ion of charge +Ze and much larger mass at a distance b with a speed v. The perpendicular force is Ze²/(4πε₀b²) at the closest approach and the duration of the encounter is about b/v. The product of these expressions is the change in the perpendicular momentum: Δ(m_e v_⊥) ≈ (Ze²/4πε₀) · 1/(vb). Note that the deflection angle is proportional to 1/v². Fast particles are "slippery" and thus dominate many transport processes. The efficiency of velocity-matched interactions is also the reason that fusion products tend to heat the electrons rather than (as would be desirable) the ions. If an electric field is present, the faster electrons feel less drag and become even faster in a "run-away" process. Simplified mathematical treatment for plasmas: In passing through a field of ions with density n, an electron will have many such encounters simultaneously, with various impact parameters (distance to the ion) and directions. The cumulative effect can be described as a diffusion of the perpendicular momentum. The corresponding diffusion constant is found by integrating the squares of the individual changes in momentum. The rate of collisions with impact parameter between b and (b + db) is nv(2πb db), so the diffusion constant is given by D_⊥ = ∫ (Ze²/4πε₀)² · (1/(v²b²)) · nv(2πb db) = (Ze²/4πε₀)² · (2πn/v) ∫ db/b. Obviously the integral diverges toward both small and large impact parameters. The divergence at small impact parameters is clearly unphysical since, under the assumptions used here, the final perpendicular momentum cannot take on a value higher than the initial momentum. Setting the above estimate for Δ(m_e v_⊥) equal to m_e v, we find the lower cut-off to the impact parameter to be about b₀ = (Ze²/4πε₀) · 1/(m_e v²). We can also use πb₀² as an estimate of the cross section for large-angle collisions. Under some conditions there is a more stringent lower limit due to quantum mechanics, namely the de Broglie wavelength of the electron, h/(m_e v), where h is Planck's constant. Simplified mathematical treatment for plasmas: At large impact parameters, the charge of the ion is shielded by the tendency of electrons to cluster in the neighborhood of the ion and other ions to avoid it. 
The upper cut-off to the impact parameter should thus be approximately equal to the Debye length: λ_D = √(ε₀kT_e/(n_e e²)). Coulomb logarithm: The integral of 1/b thus yields the logarithm of the ratio of the upper and lower cut-offs. This number is known as the Coulomb logarithm and is designated by either ln Λ or λ. It is the factor by which small-angle collisions are more effective than large-angle collisions. The Coulomb logarithm was introduced independently by Lev Landau in 1936 and Subrahmanyan Chandrasekhar in 1943. For many plasmas of interest it takes on values between 5 and 15. (For convenient formulas, see pages 34 and 35 of the NRL Plasma Formulary.) The limits of the impact parameter integral are not sharp, but are uncertain by factors on the order of unity, leading to theoretical uncertainties on the order of 1/λ. For this reason it is often justified to simply take the convenient choice λ ≈ 10. The analysis here yields the scalings and orders of magnitude. Mathematical treatment for plasmas accounting for all impact parameters: An N-body treatment accounting for all impact parameters can be performed by taking into account a few simple facts. The main two ones are: (i) The above change in perpendicular momentum is the lowest order approximation in 1/b of a full Rutherford deflection. Therefore, the above perturbative theory can also be done by using this full deflection. This makes the calculation correct up to the smallest impact parameters where this full deflection must be used. (ii) The effect of Debye shielding for large impact parameters can be accommodated by using a Debye-shielded Coulomb potential (see Screening effect and Debye length). This cancels the above divergence at large impact parameters. The above Coulomb logarithm turns out to be modified by a constant of order unity. Historical aspects: In the 1950s, transport due to collisions in non-magnetized plasmas was simultaneously studied by two groups at UC Berkeley's Radiation Laboratory. They quoted each other's results in their respective papers. The first reference deals with the mean-field part of the interaction by using perturbation theory in electric field amplitude. Within the same approximations, a more elegant derivation of the collisional transport coefficients was provided, by using the Balescu–Lenard equation (see Sec. 8.4 of and Secs. 7.3 and 7.4 of ). The second reference uses the Rutherford picture of two-body collisions. The calculation of the first reference is correct for impact parameters much larger than the interparticle distance, while those of the second one work in the opposite case. Both calculations are extended to the full range of impact parameters by each introducing a single ad hoc cutoff, and not two as in the above simplified mathematical treatment, but the transport coefficients depend only logarithmically thereon; both results agree and yield the above expression for the diffusion constant.
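The order-of-magnitude estimates above can be folded into a few lines of code. The following is a minimal sketch, assuming the thermal estimate v² ≈ kT_e/m_e and the two cut-offs quoted in the text; the function name and the example temperature and density are illustrative, and the result should only be trusted to the order-unity accuracy the article itself claims.

```python
from math import log, pi, sqrt

# SI constants
e = 1.602176634e-19        # elementary charge [C]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
me = 9.1093837015e-31      # electron mass [kg]

def coulomb_logarithm(Te_eV, ne, Z=1):
    """Order-of-magnitude estimate ln(lambda_D / b0) using the cut-offs from the text:
    b0 = Z e^2 / (4 pi eps0 me v^2) with v^2 ~ kTe/me, and lambda_D = sqrt(eps0 kTe / (ne e^2))."""
    kTe = Te_eV * e                                  # temperature in joules
    v2 = kTe / me                                    # typical squared electron speed
    b0 = Z * e**2 / (4 * pi * eps0 * me * v2)        # lower cut-off (large-angle deflection)
    lambda_D = sqrt(eps0 * kTe / (ne * e**2))        # upper cut-off (Debye length)
    return log(lambda_D / b0)

# Illustrative parameters: Te = 100 eV, ne = 1e20 m^-3; prints roughly 13,
# inside the 5-15 range quoted above for many plasmas of interest.
print(f"ln Lambda ~ {coulomb_logarithm(100.0, 1e20):.1f}")
```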
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RAND Tablet** RAND Tablet: The RAND Tablet is a graphical computer input device developed by The RAND Corporation. The RAND Tablet is claimed to be the first digital graphic device marketed as a low-cost device. Development of the tablet was funded by the Advanced Research Projects Agency. The RAND Tablet was one of the first devices to utilize a stylus as a highly practical instrument. The tablet is connected to an input of a computer and/or an oscilloscope display. The display registers the input and shows it on the computer screen. History: Development of the RAND Tablet began with research on the Sketchpad, a system where the user could write commands for a computer directly on the tablet, conducted by Ivan Sutherland. A multitude of different experimental systems were developed to recognize handwritten letters and gestures, like Tom Ellis' flowchart-based Graphic Input Language (GRAIL) method. Tom Ellis, an author of many RAND Corporation reports, stated that this GRAIL system was what allowed the natural and real-time recognition of text and symbols written on the flowchart. The RAND Tablet was one of the first devices to recognize freehand drawing, using programs like Ellis'. The RAND Tablet was also called the "Grafacon" and is considered one of the first produced graphics tablets. The original RAND Tablet cost $18,000 and was available to research facilities in 1964 after years of development. However, the RAND Tablet did not catch on commercially, likely due to an inertia in user habits which made consumers more familiar with a keyboard device, and the lack of practical applications for a tablet device during this time period. Description: The RAND Tablet is a large 10"x10" printed-circuit screen with printed-circuit capacitive-coupled encoders and 40 external connections. The surface has a resolution of 100 lines per inch, so it is able to digitize over 1 million locations. This is why the handwriting functionality appeared to be so natural. The tablet connects to the input channel of a general-purpose computer and to an oscilloscope display which controls the multiplexing of the pen position information. The tablet design initially consisted of a woven grid of Formex wires. Each wire provides 0.1" resolution and is driven by a digital signal which indicates its position in the matrix. A free-hand stylus would pick up a signal unique to its position when moving over the surface. By the time of the tablet's production, printed-circuit technology had advanced to allow a grid of copper strips on a bi-axially oriented polyethylene terephthalate (boPET) surface to yield a resolution of 0.01". This surface was then covered with a plastic wear layer and mounted in a metal frame. The stylus used on the RAND Tablet had a tiny click switch that, when depressed, would send a signal to the machine. Capabilities: Handwriting recognition A program was written in IBM 360 Assembler Language to allow an online computer user to write data and directives on the RAND Tablet. Using point-by-point pen location, the scheme could immediately recognize and display 53 letters, numbers, and symbols in multiple printing styles as long as they adhered to coding conventions. The tablet was able to pick up and identify multiple stroke symbols by analyzing the sequence of direction and end-point locations, and was even able to use contextual clues when necessary. The pen track is displayed until the symbol is recognized. 
Capabilities: Chinese-Character Lookup Chinese Character Lookup was a companion project which used the RAND Tablet to provide a translation aid. The desired character could be drawn on the tablet and, when reproduced on the CRT display page, would include the character, its pronunciation, and its identification number in the standard Chinese-English dictionary. By analyzing the point-by-point location of each stroke drawn by the tablet's pen, both Chinese and Roman characters could be identified in milliseconds. Capabilities: Map Annotation Markings, words, and symbols could easily be added to maps using the RAND Tablet. The tablet proved to be a straightforward, convenient, and easy means to position boxes, lines, arrows, text, and other annotations over the displayed map. Digitally scanned maps were believed to be provided by the Defense Intelligence Agency and the USAF R&D facility at the Rome Air Development Center; however, they also could have been produced at RAND to interest the USAF. Capabilities: Videographic System RAND partnered with IBM to create a system which could merge information either provided by user actions or generated by computers with information coming from other sources. The RAND Video Graphic System served 32 consoles, each with a full range of interaction and full graphics, and could accommodate up to 8 different input devices. Several computers could be accessed from any terminal. Scanning and buffer storage were centralized to improve performance and reduce cost. The heart of the Video Graphic system was an IBM 1800 computer and a 36" diameter magnetic disk to store scanned images and keep the parts of a displayed composite image in sync. However, the system was tricky and commercial development quickly overtook the technology, resulting in the program's termination.
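As a back-of-the-envelope illustration of the figures in the Description section, the sketch below quantizes a pen position on a 10" x 10" surface to the stated 0.01" grid; the function name and the 10-bit packing remark are illustrative assumptions, not details of the original RAND design.

```python
def quantize_position(x_inches: float, y_inches: float, pitch: float = 0.01, size: float = 10.0):
    """Map a pen position on a 10" x 10" surface to discrete grid codes at 0.01" pitch.
    1000 positions per axis (10 / 0.01) fit comfortably in a 10-bit code each,
    giving roughly 10^6 addressable locations in total."""
    steps = round(size / pitch)                         # 1000 positions per axis
    x_code = min(int(round(x_inches / pitch)), steps - 1)
    y_code = min(int(round(y_inches / pitch)), steps - 1)
    return x_code, y_code

print(quantize_position(3.25, 8.9))                     # -> (325, 890)
```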
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hemicholinium-3** Hemicholinium-3: Hemicholinium-3 (HC3), also known as hemicholine, is a drug which blocks the reuptake of choline by the high-affinity choline transporter (ChT; encoded in humans by the gene SLC5A7) at the presynapse. The reuptake of choline is the rate-limiting step in the synthesis of acetylcholine; hence, hemicholinium-3 decreases the synthesis of acetylcholine. It is therefore classified as an indirect acetylcholine antagonist.Acetylcholine is synthesized from choline and a donated acetyl group from acetyl-CoA, by the action of choline acetyltransferase (ChAT). Thus, decreasing the amount of choline available to a neuron will decrease the amount of acetylcholine produced. Neurons affected by hemicholinium-3 must rely on the transport of choline from the soma (cell body), rather than relying on reuptake of choline from the synaptic cleft. Toxicity: Hemicholinium-3 is highly toxic because it interferes with cholinergic neurotransmission. The LD50 of hemicholinium-3 for mice is about 35 μg.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**McPixel** McPixel: McPixel is an independently produced puzzle video game by Polish developer Mikołaj Kamiński (also known as Sos Sosowski) in 2012. Gameplay: The game centers around the title character, McPixel, who is a parody of both MacGyver and his other parody, MacGruber. The game features numerous references to popular culture characters. Gameplay: McPixel's objective in the game is to defuse bombs or "save the day" in 20 seconds each level. There are four chapters in the game, each with three levels and an unlockable level. Each level contains six sequences. The puzzle solutions are often absurdist and nonsensical, using cartoon-style physics and logic; typical interactions with people and objects involve McPixel kicking them, and directly kicking the bomb often causes it to explode. In one level, to stop a fire from reaching a bomb, the player may have to get McPixel to urinate on it; in another level, McPixel may have to feed a bomb to another person, causing it to explode inside their stomach and protecting the surroundings from destruction. Release and reception: The game was released on 25 June 2012 for Android and iPhone and as a computer game. Release and reception: McPixel received positive reviews, with a critic score of 76/100 on Metacritic for the PC version, and 83/100 critic score for the iOS version. The Verge gave the game a score of 8 out of 10, stating "McPixel is the step further, a parody of a parody. But it's stranger, grosser, funnier and far more blasphemous."The game's creator and developer, Mikolaj "Sos" Kamiński, said: "The largest force driving attention to McPixel at that time were 'Let's Play' videos. Mostly by Jesse Cox and PewDiePie." Sos promoted the distribution of his game on The Pirate Bay to market it. He found out that McPixel was being torrented from a Reddit post. Due to this event, McPixel became the first game ever to be endorsed by the Pirate Bay.As of September 2012, McPixel had sold 3,056 copies. The game was also the first game to be released via Steam Greenlight.During August 15–22, 2013, McPixel featured alongside four other games in the Humble Bundle Weekly Sale ("Hosted by PewDiePie"), which sold 189,927 units. Release and reception: As of October 2013, a Linux version exists, but is not yet available on Steam. Kamiński has stated on the Steam Forums that this is because the Adobe Air run-time can not be distributed via Steam. To fix this and other issues, Kamiński has stated that he intended to rewrite the game engine to not use Adobe Air. Kamiński announced the rewrite in June 2013, writing that he hoped to be done by September 2013, though there had been no news as of September 2014. As of June 2019, the Linux version is not on Steam, however Proton can be used to run the game. Sequel: A sequel titled McPixel 3 was announced on 17 February 2022 and released on 14 November 2022.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shallow trench isolation** Shallow trench isolation: Shallow trench isolation (STI), also known as box isolation technique, is an integrated circuit feature which prevents electric current leakage between adjacent semiconductor device components. STI is generally used on CMOS process technology nodes of 250 nanometers and smaller. Older CMOS technologies and non-MOS technologies commonly use isolation based on LOCOS.STI is created early during the semiconductor device fabrication process, before transistors are formed. The key steps of the STI process involve etching a pattern of trenches in the silicon, depositing one or more dielectric materials (such as silicon dioxide) to fill the trenches, and removing the excess dielectric using a technique such as chemical-mechanical planarization.[1] Certain semiconductor fabrication technologies also include deep trench isolation, a related feature often found in analog integrated circuits. Shallow trench isolation: The effect of the trench edge has given rise to what has recently been termed the "reverse narrow channel effect" or "inverse narrow width effect". Basically, due to the electric field enhancement at the edge, it is easier to form a conducting channel (by inversion) at a lower voltage. The threshold voltage is effectively reduced for a narrower transistor width. The main concern for electronic devices is the resulting subthreshold leakage current, which is substantially larger after the threshold voltage reduction. Process flow: Stack deposition (oxide + protective nitride) Lithography print Dry etch (Reactive-ion etching) Trench fill with oxide Chemical-mechanical polishing of the oxide Removal of the protective nitride Adjusting the oxide height to Si
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sentilo Platform** Sentilo Platform: Sentilo (“Sensor” in Esperanto) is an open-source software sensor and actuator project. Description: Sentilo is designed to be cross-platform with the objective of sharing information between heterogeneous systems and to easily integrate legacy applications. Sentilo was started in November 2012 by the Barcelona City Council, through the Municipal Institute of Informatics (IMI), for its smart city project called City OS. Since then, the city of Terrassa joined the project, as well as the Cellnex Telecom.Sentilo could be used for other Internet of things applications. Cisco Systems promoted the CityOS project and Sentilo in 2014. Also in 2014, Spanish company Libelium announced they would use it with their sensors. In 2016, some initial tests were discussed. It can be on-premises software or a platform as a service using cloud computing from OpenTrends called Thingtia, announced in January 2017.Sentilo can be downloaded direct from the web or viewed in GitHub. Data such as the number of sensors and transaction rate are available at a web site.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nerolic acid** Nerolic acid: Nerolic acid, also known as (Z)-3,7-dimethyl-2,6-octadienoic acid, is an organically derived chemical. In nature: Nerolic acid is found in the Nasonov scent gland of honey-bees along with geraniol, geranic acid, citral, farnesol, and nerol. Of these, nerolic acid, geraniol, and farnesol are present in the highest proportions. It is one of the five compounds that make up the essential oil of Myrcia ovata. It is also found in great quantity in Myrcia lundiana, and it possesses antifungal properties against pathogens such as Fusarium solani and Lasiodiplodia theobromae. Both Myrcia ovata and Myrcia lundiana belong to the plant family Myrtaceae, and both contain a certain percentage of nerolic acid. In addition, nerolic acid is also a principal chemical compound of the essential oil of lemongrass, and is likewise believed to possess antifungal properties.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Undecanol** Undecanol: Undecanol, also known by its IUPAC name 1-undecanol or undecan-1-ol, and by its trivial names undecyl alcohol and hendecanol, is a fatty alcohol. Undecanol is a colorless, water-insoluble liquid with a melting point of 19 °C and a boiling point of 243 °C. Industrial uses and production: It has a floral, citrus-like odor and a fatty taste, and is used as a flavoring ingredient in foods. It is commonly produced by the reduction of undecanal, the analogous aldehyde. Natural occurrence: 1-Undecanol is found naturally in many foods such as fruits (including apples and bananas), butter, eggs and cooked pork. Toxicity: Undecanol can irritate the skin, eyes and lungs. Ingestion can be harmful, with approximately the toxicity of ethanol.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mannan-binding lectin** Mannan-binding lectin: Mannose-binding lectin (MBL), also called mannan-binding lectin or mannan-binding protein (MBP), is a lectin that is instrumental in innate immunity as an opsonin and via the lectin pathway. Structure: MBL has an oligomeric structure (400-700 kDa), built of subunits that contain three presumably identical peptide chains of about 30 kDa each. Although MBL can form several oligomeric forms, there are indications that dimers and trimers are biologically inactive as opsonins, and at least a tetramer form is needed for activation of complement. Structure: Genes and polymorphisms The human MBL2 gene is located on chromosome 10q11.2-q21. Mice have two homologous genes, but in humans the first of them was lost. A low level of expression of the MBL1 pseudogene 1 (MBL1P1) was detected in liver. The pseudogene encodes a truncated 51-amino acid protein that is homologous to the MBLA isoform in rodents and some primates. Structural mutations in exon 1 of the human MBL2 gene, at codon 52 (Arg to Cys, allele D), codon 54 (Gly to Asp, allele B) and codon 57 (Gly to Glu, allele C), also independently reduce the level of functional serum MBL by disrupting the collagenous structure of the protein. Furthermore, several nucleotide substitutions in the promoter region of the MBL2 gene at position −550 (H/L polymorphism), −221 (X/Y polymorphism) and −427, −349, −336, del (−324 to −329), −70 and +4 (P/Q polymorphisms) affect the MBL serum concentration. Both the frequency of structural mutations and the promoter polymorphisms that are in strong linkage disequilibrium vary among ethnic groups, resulting in seven major haplotypes: HYPA, LYQA, LYPA, LXPA, LYPB, LYQC and HYPD. Differences in the distribution of these haplotypes are the major cause of interracial variations in MBL serum levels. Both HYPA and LYQA are high-producing haplotypes, LYPA is an intermediate-producing haplotype and LXPA a low-producing haplotype, whereas LYPB, LYQC and HYPD are defective haplotypes, which cause a severe MBL deficiency. Both the MBL2 and MBL1P1 genes have been repeatedly hit by mutations throughout the evolution of primates. The latter was eventually silenced by mutations in the glycine residues of the collagen-like region. It has been selectively turned off during evolution through the same molecular mechanisms causing the MBL2 variant alleles in man, suggesting an evolutionary selection for low-producing MBL genes. Structure: Posttranslational modifications In rat hepatocytes, MBL is synthesized in the rough endoplasmic reticulum. While in the Golgi, it undergoes two distinct posttranslational modifications and is assembled into high molecular weight multimeric complexes. The modifications produce MBL in multiple forms of slightly different molecular masses and pI values from 5.7 to 6.2. Proteolytic cleavage resulted in removal of the 20-aa N-terminal signal peptide, and hydroxylation and glycosylation were also detected. Some cysteine residues can be converted to dehydroalanine. Function: MBL belongs to the class of collectins in the C-type lectin superfamily, whose function appears to be pattern recognition in the first line of defense in the pre-immune host. MBL recognizes carbohydrate patterns found on the surface of a large number of pathogenic micro-organisms, including bacteria, viruses, protozoa and fungi. Binding of MBL to a micro-organism results in activation of the lectin pathway of the complement system. 
Function: Another important function of MBL is that this molecule binds senescent and apoptotic cells and enhances engulfment of whole, intact apoptotic cells, as well as cell debris, by phagocytes. Activation The complement system can be activated through three pathways: the classical pathway, the alternative pathway, and the lectin pathway. One way in which the most recently discovered lectin pathway is activated is through the mannose-binding lectin protein. MBL binds to carbohydrates (to be specific, D-mannose and L-fucose residues) found on the surfaces of many pathogens. Function: For example, MBL has been shown to bind to: yeasts such as Candida albicans; viruses such as HIV and influenza A; many bacteria, including Salmonella and Streptococci; parasites such as Leishmania; and SARS-CoV-2. Complexes MBL in the blood is complexed with (bound to) a serine protease called MASP (MBL-associated serine protease). There are three MASPs: MASP-1, MASP-2 and MASP-3, which have protease domains. There are also sMAP (also called MAp19) and MAp44, which do not have protease domains and are thought to be regulatory molecules of MASPs. MASPs also form complexes with ficolins, which are similar to MBL functionally and structurally, with the exception that ficolins recognize their targets through fibrinogen-like domains, unlike MBL. Function: In order to activate the complement system when MBL binds to its target (for example, mannose on the surface of a bacterium), the MASP protein functions to cleave the blood protein C4 into C4a and C4b. The C4b fragments can then bind to the surface of the bacterium, and initiate the formation of a C3-convertase. The subsequent complement cascade catalyzed by C3-convertase results in creating a membrane attack complex, which causes lysis of the pathogen as well as of altered self in the context of apoptotic and necrotic cells. The MBL/MASP-1 complex also has thrombin-like activity (thrombin clots fibrin to initiate blood clots). Mice that genetically lack MBL or MASP-1/3 (but not MASP-2/sMAP) have prolonged bleeding time in experimental injury models, although the mice appear normal if there is no insult to the body. Clinical significance: It is produced in the liver as a response to infection, and is one of many factors termed acute-phase proteins. Expression and function in other organs have also been suggested. The three structural polymorphisms of exon 1 have been reported to cause susceptibility to various common infections, including meningococcal disease. However, evidence has been presented that suggests no harmful effect of these variants with regard to meningococcal disease. MBL deficiency is very common in humans, with approximately 10% of individuals having this deficiency.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Power graph analysis** Power graph analysis: In computational biology, power graph analysis is a method for the analysis and representation of complex networks. Power graph analysis is the computation, analysis and visual representation of a power graph from a graph (networks). Power graph analysis can be thought of as a lossless compression algorithm for graphs. It extends graph syntax with representations of cliques, bicliques and stars. Compression levels of up to 95% have been obtained for complex biological networks. Hypergraphs are a generalization of graphs in which edges are not just couples of nodes but arbitrary n-tuples. Power graphs are not another generalization of graphs, but instead a novel representation of graphs that proposes a shift from the "node and edge" language to one using cliques, bicliques and stars as primitives. Power graphs: Graphical representation Graphs are drawn with circles or points that represent nodes and lines connecting pairs of nodes that represent edges. Power graphs extend the syntax of graphs with power nodes, which are drawn as a circle enclosing nodes or other power nodes, and power edges, which are lines between power nodes. Bicliques are two sets of nodes with an edge between every member of one set and every member of the other set. In a power graph, a biclique is represented as an edge between two power nodes. Cliques are a set of nodes with an edge between every pair of nodes. In a power graph, a clique is represented by a power node with a loop. Stars are a set of nodes with an edge between every member of that set and a single node outside the set. In a power graph, a star is represented by a power edge between a regular node and a power node. Power graphs: Formal definition Given a graph G=(V,E) where V={v0,…,vn} is the set of nodes and E⊆V×V is the set of edges, a power graph G′=(V′,E′) is a graph defined on the power set V′⊆P(V) of power nodes connected to each other by power edges: E′⊆V′×V′ . Hence power graphs are defined on the power set of nodes as well as on the power set of edges of the graph G The semantics of power graphs are as follows: if two power nodes are connected by a power edge, this means that all nodes of the first power node are connected to all nodes of the second power node. Similarly, if a power node is connected to itself by a power edge, this signifies that all nodes in the power node are connected to each other by edges. Power graphs: The following two conditions are required: Power node hierarchy condition: Any two power nodes are either disjoint, or one is included in the other. Power edge disjointness condition: There is an onto mapping from edges of the original graph to power edges. Analogy to Fourier analysis: The Fourier analysis of a function can be seen as a rewriting of the function in terms of harmonic functions instead of t↦x pairs. This transformation changes the point of view from time domain to frequency domain and enables many interesting applications in signal analysis, data compression, and filtering. Similarly, Power graph analysis is a rewriting or decomposition of a network using bicliques, cliques and stars as primitive elements (just as harmonic functions for Fourier analysis). It can be used to analyze, compress and filter networks. There are, however, several key differences. First, in Fourier analysis the two spaces (time and frequency domains) are the same function space - but stricto sensu, power graphs are not graphs. 
Second, there is not a unique power graph representing a given graph. Yet a very interesting class of power graphs are minimal power graphs, which have the fewest power edges and power nodes necessary to represent a given graph. Minimal power graphs: In general, there is no unique minimal power graph for a given graph. In this example, a graph of four nodes and five edges admits two minimal power graphs of two power edges each. The main difference between these two minimal power graphs is the higher nesting level of the second power graph as well as a loss of symmetry with respect to the underlying graph. Loss of symmetry is only a problem in small toy examples since complex networks rarely exhibit such symmetries in the first place. Additionally, one can minimize the nesting level but even then, there is in general not a unique minimal power graph of minimal nesting level. Power graph greedy algorithm: The power graph greedy algorithm relies on two simple steps to perform the decomposition: The first step identifies candidate power nodes through a hierarchical clustering of the nodes in the network based on the similarity of their neighboring nodes. The similarity of two sets of neighbors is taken as the Jaccard index of the two sets. The second step performs a greedy search for possible power edges between candidate power nodes. Power edges abstracting the most edges in the original network are added first to the power graph. Thus bicliques, cliques and stars are incrementally replaced with power edges, until all remaining single edges are also added. Candidate power nodes that are not the end point of any power edge are ignored. Modular decomposition: Modular decomposition can be used to compute a power graph by using the strong modules of the modular decomposition. Modules in modular decomposition are groups of nodes in a graph that have identical neighbors. A strong module is a module that does not overlap with another module. However, in complex networks strong modules are more the exception than the rule. Therefore, the power graphs obtained through modular decomposition are far from minimal. The main difference between modular decomposition and power graph analysis is the emphasis of power graph analysis on decomposing graphs not only using modules of nodes but also modules of edges (cliques, bicliques). Indeed, power graph analysis can be seen as a lossless simultaneous clustering of both nodes and edges. Applications: Biological networks Power graph analysis has been shown to be useful for the analysis of several types of biological networks such as protein-protein interaction networks, domain-peptide binding motifs, gene regulatory networks and homology/paralogy networks. A network of significant disease-trait pairs has also recently been visualized and analyzed with power graphs. Network compression, a new measure derived from power graphs, has been proposed as a quality measure for protein interaction networks. Drug repositioning Power graphs have also been applied to the analysis of drug-target-disease networks for drug repositioning. Social networks Power graphs have been applied to large-scale data in social networks, for community mining or for modeling author types.
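To make the power-edge semantics described above concrete, the following is a minimal sketch (an illustration only, not the published decomposition algorithm; the representation of power nodes as frozensets is an assumption made for this example) that expands the power edges of a power graph back into the plain edges of the underlying graph: a power edge between two distinct power nodes expands to a biclique, and a power edge from a power node to itself expands to a clique.

```python
from itertools import combinations

def expand_power_graph(power_edges):
    """Expand power edges (pairs of frozensets of nodes) into the plain edges they stand for."""
    edges = set()
    for a, b in power_edges:
        if a == b:
            # A power edge from a power node to itself represents a clique:
            # every pair of nodes inside it is connected.
            edges.update(frozenset(pair) for pair in combinations(a, 2))
        else:
            # A power edge between two power nodes represents a biclique:
            # every node of the first is connected to every node of the second.
            edges.update(frozenset((u, v)) for u in a for v in b)
    return edges

# Example: a star from node 'x' to the power node {1, 2, 3}, plus a clique on {1, 2, 3}.
power_edges = [
    (frozenset({'x'}), frozenset({1, 2, 3})),
    (frozenset({1, 2, 3}), frozenset({1, 2, 3})),
]
print(expand_power_graph(power_edges))
```

Running the example prints six plain edges (the three star edges from 'x' plus the three clique edges among 1, 2 and 3), recovered losslessly from just two power edges.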
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tropylium tetrafluoroborate** Tropylium tetrafluoroborate: Tropylium tetrafluoroborate is an organic compound with the formula [C7H7]+[BF4]−. Containing the tropylium cation and the non-coordinating tetrafluoroborate counteranion, tropylium tetrafluoroborate is a rare example of a readily isolable carbocation. It is a white solid. This compound may be prepared by the reaction of cycloheptatriene with phosphorus pentachloride, followed by tetrafluoroboric acid.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mutual recursion** Mutual recursion: In mathematics and computer science, mutual recursion is a form of recursion where two mathematical or computational objects, such as functions or datatypes, are defined in terms of each other. Mutual recursion is very common in functional programming and in some problem domains, such as recursive descent parsers, where the datatypes are naturally mutually recursive. Examples: Datatypes The most important basic example of a datatype that can be defined by mutual recursion is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically: f: [t[1], ..., t[k]]; t: v f A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types. Further, it matches many algorithms on trees, which consist of doing one thing with the value, and another thing with the children. Examples: This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest: t: v [t[1], ..., t[k]] A tree t consists of a pair of a value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which require disentangling to prove results about. Examples: In Standard ML, the tree and forest datatypes can be defined mutually recursively, allowing empty trees. Computer functions Just as algorithms on recursive datatypes can naturally be given by recursive functions, algorithms on mutually recursive data structures can be naturally given by mutually recursive functions. Common examples include algorithms on trees, and recursive descent parsers. As with direct recursion, tail call optimization is necessary if the recursion depth is large or unbounded, such as using mutual recursion for multitasking. Note that tail call optimization in general (when the function called is not the same as the original function, as in tail-recursive calls) may be more difficult to implement than the special case of tail-recursive call optimization, and thus efficient implementation of mutual tail recursion may be absent from languages that only optimize tail-recursive calls. In languages such as Pascal that require declaration before use, mutually recursive functions require forward declaration, as a forward reference cannot be avoided when defining them. Examples: As with directly recursive functions, a wrapper function may be useful, with the mutually recursive functions defined as nested functions within its scope if this is supported. This is particularly useful for sharing state across a set of functions without having to pass parameters between them. Examples: Basic examples A standard example of mutual recursion, which is admittedly artificial, determines whether a non-negative number is even or odd by defining two separate functions that call each other, decrementing by 1 each time. In C, the two functions are based on the observation that the question "is 4 even?" is equivalent to "is 3 odd?", which is in turn equivalent to "is 2 even?", and so on down to 0. This example is mutual single recursion, and could easily be replaced by iteration. 
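A minimal sketch of such a pair of functions (written here in Python rather than the C mentioned above) might look like the following; each call decrements n by one and hands the question to the other function until 0 is reached:

```python
def is_even(n: int) -> bool:
    """A non-negative n is even iff n == 0, or n - 1 is odd."""
    if n == 0:
        return True
    return is_odd(n - 1)

def is_odd(n: int) -> bool:
    """A non-negative n is odd iff n != 0 and n - 1 is even."""
    if n == 0:
        return False
    return is_even(n - 1)

print(is_even(4))  # True: is_even(4) -> is_odd(3) -> is_even(2) -> is_odd(1) -> is_even(0)
```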
In this example, the mutually recursive calls are tail calls, and tail call optimization would be necessary to execute in constant stack space. In C, this would take O(n) stack space, unless rewritten to use jumps instead of calls. This could be reduced to a single recursive function is_even. In that case, is_odd, which could be inlined, would call is_even, but is_even would only call itself. Examples: As a more general class of examples, an algorithm on a tree can be decomposed into its behavior on a value and its behavior on children, and can be split up into two mutually recursive functions, one specifying the behavior on a tree, calling the forest function for the forest of children, and one specifying the behavior on a forest, calling the tree function for each tree in the forest. In Python, the tree function calls the forest function by single recursion, but the forest function calls the tree function by multiple recursion (a sketch of such a pair of functions is given at the end of this article). Examples: Using the Standard ML datatype above, the size of a tree (number of nodes) can be computed via a pair of mutually recursive functions, and a more detailed example in Scheme counts the leaves of a tree. These examples reduce easily to a single recursive function by inlining the forest function in the tree function, which is commonly done in practice: directly recursive functions that operate on trees sequentially process the value of the node and recurse on the children within one function, rather than dividing these into two separate functions. Examples: Advanced examples A more complicated example is given by recursive descent parsers, which can be naturally implemented by having one function for each production rule of a grammar, which then mutually recurse; this will in general be multiple recursion, as production rules generally combine multiple parts. This can also be done without mutual recursion, for example by still having separate functions for each production rule, but having them called by a single controller function, or by putting all the grammar in a single function. Examples: Mutual recursion can also implement a finite-state machine, with one function for each state, and single recursion in changing state; this requires tail call optimization if the number of state changes is large or unbounded. This can be used as a simple form of cooperative multitasking. A similar approach to multitasking is to instead use coroutines which call each other, where rather than terminating by calling another routine, one coroutine yields to another but does not terminate, and then resumes execution when it is yielded back to. This allows individual coroutines to hold state, without it needing to be passed by parameters or stored in shared variables. Examples: There are also some algorithms which naturally have two phases, such as minimax (min and max), which can be implemented by having each phase in a separate function with mutual recursion, though they can also be combined into a single function with direct recursion. Mathematical functions In mathematics, the Hofstadter Female and Male sequences are an example of a pair of integer sequences defined in a mutually recursive manner. Fractals can be computed (up to a given resolution) by recursive functions. This can sometimes be done more elegantly via mutually recursive functions; the Sierpiński curve is a good example. Prevalence: Mutual recursion is very common in functional programming, and is often used for programs written in LISP, Scheme, ML, and similar programming languages. 
For example, Abelson and Sussman describe how a meta-circular evaluator can be used to implement LISP with an eval-apply cycle. In languages such as Prolog, mutual recursion is almost unavoidable. Prevalence: Some programming styles discourage mutual recursion, claiming that it can be confusing to distinguish the conditions which will return an answer from the conditions that would allow the code to run forever without producing an answer. Peter Norvig points to a design pattern which discourages its use entirely, stating: If you have two mutually-recursive functions that both alter the state of an object, try to move almost all the functionality into just one of the functions. Otherwise you will probably end up duplicating code. Terminology: Mutual recursion is also known as indirect recursion, by contrast with direct recursion, where a single function calls itself directly. This is simply a difference of emphasis, not a different notion: "indirect recursion" emphasises an individual function, while "mutual recursion" emphasises the set of functions, and does not single out an individual function. For example, if f calls itself, that is direct recursion. If instead f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, g is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly a set of three or more functions that call each other can be called a set of mutually recursive functions. Conversion to direct recursion: Mathematically, a set of mutually recursive functions are primitive recursive, which can be proven by course-of-values recursion, building a single function F that lists the values of the individual recursive functions in order: F = f1(0), f2(0), f1(1), f2(1), …, and rewriting the mutual recursion as a primitive recursion. Conversion to direct recursion: Any mutual recursion between two procedures can be converted to direct recursion by inlining the code of one procedure into the other. If there is only one site where one procedure calls the other, this is straightforward, though if there are several it can involve code duplication. In terms of the call stack, two mutually recursive procedures yield a stack ABABAB..., and inlining B into A yields the direct recursion (AB)(AB)(AB)... Conversion to direct recursion: Alternately, any number of procedures can be merged into a single procedure that takes as argument a variant record (or algebraic data type) representing the selection of a procedure and its arguments; the merged procedure then dispatches on its argument to execute the corresponding code and uses direct recursion to call itself as appropriate. This can be seen as a limited application of defunctionalization. This translation may be useful when any of the mutually recursive procedures can be called by outside code, so there is no obvious case for inlining one procedure into the other. Such code then needs to be modified so that procedure calls are performed by bundling arguments into a variant record as described; alternately, wrapper procedures may be used for this task.
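Tying the earlier tree/forest example to the conversion techniques just described, here is a hedged sketch (the (value, children) pair representation is an assumption made for illustration, not a fixed convention): size_tree and size_forest are mutually recursive, while size_merged shows the single directly recursive counterpart that dispatches on a tag, in the spirit of the variant-record translation above.

```python
# A tree is modelled here as a (value, forest) pair, where a forest is a list of trees.

def size_tree(tree) -> int:
    """Number of nodes in a tree: one for the root plus the size of its forest of children."""
    _value, children = tree
    return 1 + size_forest(children)

def size_forest(forest) -> int:
    """Number of nodes in a forest: the sum of the sizes of its trees."""
    return sum(size_tree(t) for t in forest)

def size_merged(kind, arg) -> int:
    """The same computation as a single directly recursive function.

    The 'kind' tag plays the role of the variant record: it selects which of the
    two original procedures the call stands for.
    """
    if kind == "tree":
        _value, children = arg
        return 1 + size_merged("forest", children)
    return sum(size_merged("tree", t) for t in arg)  # kind == "forest"

example = ("a", [("b", []), ("c", [("d", [])])])  # four nodes: a, b, c, d
print(size_tree(example), size_merged("tree", example))  # 4 4
```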
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ProQuest Dialog** ProQuest Dialog: Dialog is an online information service owned by ProQuest, who acquired it from Thomson Reuters in mid-2008. Dialog was one of the predecessors of the World Wide Web as a provider of information, though not in form. The earliest form of the Dialog system was completed in 1966 at Lockheed Martin under the direction of Roger K. Summit. According to its literature, it was "the world's first online information retrieval system to be used globally with materially significant databases". In the 1980s, a low-priced dial-up version of a subset of Dialog was marketed to individual users as Knowledge Index. This subset included INSPEC, MathSciNet, and over 200 other bibliographic and reference databases, as well as third-party retrieval vendors who would go to physical libraries to copy materials for a fee and send them to the service subscriber. ProQuest Dialog: While owned by the Thomson Corporation, Dialog consisted of the Dialog, DataStar, Profound, and NewsEdge businesses. Dialog and DataStar were consolidated into Dialog. The news content from Profound and NewsEdge was consolidated, and the market research business from Profound was sold to MarketResearch.com. The NewsEdge business was eventually sold to Acquire Media, now Naviga. Prior to being owned by Thomson, MAID purchased Knight-Ridder Information, which included the Dialog and DataStar businesses. MAID then renamed itself the Dialog Corporation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Telephone numbering plan** Telephone numbering plan: A telephone numbering plan is a type of numbering scheme used in telecommunication to assign telephone numbers to subscriber telephones or other telephony endpoints. Telephone numbers are the addresses of participants in a telephone network, reachable by a system of destination code routing. Telephone numbering plans are defined in each of the administrative regions of the public switched telephone network (PSTN) and in private telephone networks. Telephone numbering plan: For public numbering systems, geographic location typically plays a role in the sequence of numbers assigned to each telephone subscriber. Many numbering plan administrators subdivide their territory of service into geographic regions designated by a prefix, often called an area code or city code, which is a set of digits forming the most-significant part of the dialing sequence to reach a telephone subscriber. Telephone numbering plan: Numbering plans may follow a variety of design strategies which have often arisen from the historical evolution of individual telephone networks and local requirements. A broad division is commonly recognized between closed and open numbering plans. A closed numbering plan, as found in North America, features fixed-length area codes and local numbers, while an open numbering plan has a variance in the length of the area code, local number, or both of a telephone number assigned to a subscriber line. The latter type developed predominantly in Europe. Telephone numbering plan: The International Telecommunication Union (ITU) has established a comprehensive numbering plan, designated E.164, for uniform interoperability of the networks of its member state or regional administrations. It is an open numbering plan, however, imposing a maximum length of 15 digits on telephone numbers. The standard defines a country code for each member region which is prefixed to each national telephone number for international destination routing. Telephone numbering plan: Private numbering plans exist in telephone networks that are privately operated in an enterprise or organizational campus. Such systems may be supported by a private branch exchange (PBX), which provides a central access point to the PSTN and also controls internal calls between telephone extensions. Telephone numbering plan: In contrast to numbering plans, which determine telephone numbers assigned to subscriber stations, dialing plans establish the customer dialing procedures, i.e., the sequence of digits or symbols to be dialed to reach a destination. It is the manner in which the numbering plan is used. Even in closed numbering plans, it is not always necessary to dial all digits of a number. For example, an area code may often be omitted when the destination is in the same area as the calling station. Telephone number structure: National or regional telecommunication administrations that are members of the International Telecommunication Union (ITU) use national telephone numbering plans that conform to international standard E.164. E.164 specifies that a telephone number consists of a country code and a national telephone number. National telephone numbers are defined by national or regional numbering plans, such as the European Telephony Numbering Space, the North American Numbering Plan (NANP), or the UK number plan. Within a national numbering plan, a complete destination telephone number is typically composed of an area code and a subscriber telephone number. 
Telephone number structure: Many national numbering plans have developed from local historical requirements and technological progress, which resulted in a variety of structural characteristics of the telephone numbers assigned to telephones. In the United States, the industry decided in 1947 to unite all local telephone networks under one common numbering plan with a fixed length of ten digits for the national telephone number of each telephone, of which the last seven digits were known as the local directory number, or subscriber number. Such a numbering plan became known as a closed numbering plan. In several European countries, a different strategy prevailed, known as the open numbering plan, which features a variance in the length of the area code, the local number, or both. Telephone number structure: Subscriber number The subscriber number is the address assigned to a telephone line or wireless communication channel terminating at the customer equipment. The first few digits of the subscriber number may indicate smaller geographical scopes, such as towns or districts, based on municipal aspects, or individual telephone exchanges (central office code), such as wire centers. In mobile networks they may indicate the network provider. Callers in a given area sometimes do not need to include area prefixes when dialing within the same area, but devices that dial telephone numbers automatically may include the full number with area and access codes. Telephone number structure: The subscriber number is typically listed in local telephone directories, and is therefore often referred to as the directory number. Telephone number structure: Area code Telephone administrations that manage telecommunication infrastructure of extended size, such as a large country, often divide the territory into geographic areas. This benefits independent management by administrative or historical subdivisions, such as states and provinces, of the territory or country. Each area of subdivision is identified in the numbering plan with a routing code. This concept was first developed in the planning for a nationwide numbering plan for Operator Toll Dialing and direct distance dialing (DDD) in the Bell System in the United States in the 1940s, a system that resulted in the North American Numbering Plan for World Zone 1. AT&T divided the United States and Canada into numbering plan areas (NPAs), and assigned to each NPA a unique three-digit prefix, the numbering plan area code, which became known in short form as NPA code or simply area code. The area code is prefixed to each telephone number issued in its service area. Telephone number structure: Other national telecommunication authorities use various formats and dialing rules for area codes. The size of area code prefixes may either be fixed or variable. Area codes in the NANP have three digits, while two digits are used in Brazil, one digit in Australia and New Zealand. Variable-length formats exist in multiple countries including: Argentina, Austria (1 to 4), Germany (2 to 5 digits), Japan (1 to 5), Mexico (2 or 3 digits), Peru (1 or 2), Syria (1 or 2) and the United Kingdom. In addition to digit count, the format may be restricted to certain digit patterns. For example, the NANP had at times specific restrictions on the range of digits for the three positions, and required that nearby geographical areas not receive similar area codes, to avoid confusion and misdialing. 
Telephone number structure: Some countries, such as Denmark and Uruguay, have merged variable-length area codes and telephone numbers into fixed-length numbers that must always be dialed independently of location. In such administrations, the area code is not distinguished formally in the telephone number. In the UK, area codes were first known as subscriber trunk dialling (STD) codes. Depending on local dialing plans, they are often necessary only when dialed from outside the code area or from mobile phones. In North America, ten-digit dialing is required in areas with overlay numbering plans, in which multiple area codes are assigned to the same area. The strict correlation of a telephone to a geographical area has been broken by technical advances, such as local number portability and voice over IP services. When dialing a telephone number, the area code may be preceded by a trunk prefix or national access code, the international access code, and country code. Telephone number structure: Area codes are often quoted by including the national access code. For example, a number in London may be listed as 020 7946 0321. Users must correctly interpret 020 as the code for London. If they call from another station within London, they may merely dial 7946 0321, or if dialing from another country, the initial 0 should be omitted after the country code. International numbering plan: The E.164 standard of the International Telecommunication Union is an international numbering plan and establishes a country calling code (country code) for each member organization. Country codes are prefixes to national telephone numbers that denote call routing to the network of a subordinate number plan administration, typically a country, or group of countries with a uniform numbering plan, such as the NANP. E.164 permits a maximum length of 15 digits for the complete international phone number consisting of the country code, the national routing code (area code), and the subscriber number. E.164 does not define regional numbering plans; however, it does provide recommendations for new implementations and uniform representation of all telephone numbers. International numbering plan: Country code Country codes are necessary only when dialing telephone numbers in countries other than that of the originating telephone, but many networks permit them for all calls. These are dialed before the national telephone number. International numbering plan: Following ITU-T specification E.123, international telephone numbers are commonly indicated in listings by prefixing the country code with a plus sign (+). This reminds the subscriber to dial the international access code of the country from which the call is placed. For example, the international dialing prefix or access code in all NANP countries is 011, and 00 in most other countries. On modern mobile telephones and many voice over IP services, the plus sign can usually be dialed and functions directly as the international access code. International numbering plan: Special services Within the system of country calling codes, the ITU has defined certain prefixes for special services, and assigns such codes for independent international networks, such as satellite systems, spanning beyond the scope of regional authorities. 
International numbering plan: Some special service codes are the following: 388 5 – shared code for groups of nations 388 3 – European Telephony Numbering Space – Europe-wide services (discontinued) 800 – International Freephone (UIFN) 808 – reserved for Shared Cost Services 878 – Universal Personal Telecommunications services 881 – Global Mobile Satellite System 882 and +883 – International Networks 888 - international disaster relief operations 979 – International Premium Rate Service 991 – International Telecommunications Public Correspondence Service trial (ITPCS) 999 – reserved for future global service Satellite telephone systems Satellite phones are typically issued with telephone numbers with a special country calling code, for example: Inmarsat: 870: SNAC (Single Network Access Code) ICO Global: 881 0, 881 1 Ellipso: 881 2, 881 3 Iridium: 881 6, 881 7 Globalstar: 881 8, 881 9 Emsat: 882 13 Thuraya: 882 16 ACeS: 882 20Some satellite telephones are issued with telephone numbers from a national numbering plan; for example, Globalstar issues NANP telephone numbers. Private numbering plan: Like a public telecommunications network, a private telephone network in an enterprise or within an organizational campus may implement a private numbering plan for the installed base of telephones for internal communication. Such networks operate a private switching system or a private branch exchange (PBX) within the network. The internal numbers assigned are often called extension numbers, as the internal numbering plan extends an official, published main access number for the entire network. A caller from within the network only dials the extension number assigned to another internal destination telephone. Private numbering plan: A private numbering plan provides the convenience of mapping station telephone numbers to other commonly used numbering schemes in an enterprise. For example, station numbers may be assigned as the room number of a hotel or hospital. Station numbers may also be strategically mapped to certain keywords composed from the letters on the telephone dial, such as 4357 (help) to reach a help desk. Private numbering plan: The internal number assignments may be independent of any direct inward dialing (DID) services provided by external telecommunication vendors. For numbers without DID access, the internal switch relays externally originated calls via an operator, an automated attendant or an electronic interactive voice response system. Telephone numbers for users within such systems are often published by suffixing the official telephone number with the extension number, e.g., 1 800 555-0001 x2055. Private numbering plan: Some systems may automatically map a large block of DID numbers (differing only in a trailing sequence of digits) to a corresponding block of individual internal stations, allowing each of them to be reached directly from the public switched telephone network. In some of these cases, a special shorter dial-in number can be used to reach an operator who can be asked for general information, e.g. help looking up or connecting to internal numbers. For example, individual extensions at Universität des Saarlandes can be dialed directly from outside via their four-digit internal extension +49-681-302-xxxx, whereas the university's official main number is +49-681-302-0 (49 is the country code for Germany, 681 is the area code for Saarbrücken, 302 the prefix for the university). 
Private numbering plan: Callers within a private numbering plan often dial a trunk prefix to reach a national or international destination (outside line) or to access a leased line (or tie-line) to another location within the same enterprise. A large manufacturer with factories and offices in multiple cities may use a prefix (such as '8') followed by an internal routing code to indicate a city or location, then an individual four- or five-digit extension number at the destination site. A common trunk prefix for an outside line on North American systems is the digit 9, followed by the outside destination number. Private numbering plan: Additional dial plan customisations, such as single-digit access to a hotel front desk or room service from an individual room, are available at the sole discretion of the PBX owner. Numbering plan indicator: Signaling in telecommunication networks is specific to the technology in use for each link. During signaling, it is common that additional information is passed between switching systems that is not represented in telephone numbers, which serve only as network addresses of endpoints. One such information element is the numbering plan indicator (NPI). It is a number defined in the ITU standard Q.713, paragraph 3.4.2.3.3, indicating the numbering plan of the attached telephone number. NPIs can be found in Signalling Connection Control Part (SCCP) and short message service (SMS) messages. As of 2004, the following numbering plans and their respective numbering plan indicator values have been defined: Subscriber dialing procedures: While a telephone numbering plan specifies the digit sequence assigned to each telephone or wire line, establishing the network addresses needed for routing calls, numbering plan administrators may define certain dialing procedures for placing calls. This may include the dialing of additional prefixes necessary for administrative or technical reasons, or it may permit short code sequences for convenience or speed of service, such as in cases of emergency. The body of dialing procedures of a numbering plan administration is often called a dial plan. Subscriber dialing procedures: A dial plan establishes the expected sequence of digits dialed on subscriber premises equipment, such as telephones, in private branch exchange (PBX) systems, or in other telephone switches to effect access to the telephone networks for the routing of telephone calls, or to effect or activate specific service features by the local telephone company, such as 311 or 411 service. Subscriber dialing procedures: Variable-length dialing Within the North American Numbering Plan (NANP), the administration defines standard and permissive dialing procedures, specifying the number of mandatory digits to be dialed for local calls within a single numbering plan area (NPA), as well as alternate, optional sequences, such as adding the prefix 1 before the telephone number. Subscriber dialing procedures: Despite the closed numbering plan in the NANP, different dialing procedures exist in many of the territories for local and long-distance telephone calls. This means that to call another number within the same city or area, callers need to dial only a subset of the full telephone number. For example, in the NANP, only the seven-digit number may need to be dialed, but for calls outside the local numbering plan area, the full number including the area code is required. 
In these situations, ITU-T Recommendation E.123 suggests listing the area code in parentheses, signifying that in some cases the area code is optional or may not be required. Subscriber dialing procedures: Internationally, an area code is typically prefixed by a domestic trunk access code (usually 0) when dialing from inside a country, but is not necessary when calling from other countries; there are exceptions, such as for Italian land lines. Subscriber dialing procedures: To call a number in Sydney, Australia, for example: xxxx xxxx (within Sydney and other locations within New South Wales and the Australian Capital Territory - no area code required) (02) xxxx xxxx (outside New South Wales and the Australian Capital Territory, but still within Australia - the area code is required) +61 2 xxxx xxxx (outside Australia). The plus character (+) in the markup signifies that the following digits are the country code, in this case 61. Some phones, especially mobile telephones, allow the + to be entered directly. For other devices the user must replace the + with the international access code for their current location. In the United States, most carriers require the caller to dial 011 before the destination country code. New Zealand has a special-case dial plan. While most nations require the area code to be dialed only if it is different, in New Zealand, one needs to dial the area code if the phone is outside the local calling area. For example, the town of Waikouaiti is in the Dunedin City Council jurisdiction, and has phone numbers (03) 465 7xxx. To call the city council in central Dunedin (03) 477 4000, residents must dial the number in full, including the area code, even though the area code is the same, as Waikouaiti and Dunedin lie in different local calling areas (Palmerston and Dunedin, respectively). In many areas of the NANP, the domestic trunk code (long-distance access code) must also be dialed along with the area code for long-distance calls even within the same numbering plan area. For example, to call a number in Regina in area code 306: 306 xxx xxxx — within Regina, Lumsden and other local areas 1 306 xxx xxxx — within Saskatchewan, but not within the Regina local calling area, e.g., Saskatoon 1 306 xxx xxxx — anywhere within the NANP outside Saskatchewan. In many parts of North America, especially in area code overlay complexes, dialing the area code, or 1 and the area code, is required even for local calls. Dialing from mobile phones does not require the trunk code in the US, although it is still necessary for calling all long-distance numbers from a mobile phone in Canada. Many mobile handsets automatically add the area code of the set's telephone number for outbound calls, if not dialed by the user. Subscriber dialing procedures: In some parts of the United States, especially northeastern states such as Pennsylvania served by Verizon Communications, the ten-digit number must be dialed. If the call is not local, the call fails unless the dialed number is preceded by digit 1. Thus: 610 xxx xxxx — local calls within the 610 area code and its overlay (484), as well as calls to or from the neighboring 215 area code and its overlay, 267. Area code is required; one of two completion options for mobile phones within the U.S. 
Subscriber dialing procedures: 1 610 xxx xxxx — calls from numbers outside the 610/484 and 215/267 area codes; second of two completion options for mobile phones within the U.S. In California and New York, because of the existence of both overlay area codes (where an area code must be dialed for every call) and non-overlay area codes (where an area code is dialed only for calls outside the subscriber's home area code), "permissive home area code dialing" of 1 + the area code within the same area code, even if no area code is required, has been permitted since the mid-2000s. For example, in the 559 area code (a non-overlay area code), calls may be dialed as seven digits (XXX-XXXX) or 1 559 + 7 digits. The manner in which a call is dialed does not affect the billing of the call. This "permissive home area code dialing" helps maintain uniformity and eliminates confusion given the different types of area code relief that have made California the nation's most "area code" intensive state. Unlike other states with overlay area codes (Texas, Maryland, Florida and Pennsylvania and others), the California Public Utilities Commission and the New York State Public Service Commission maintain two different dial plans: landlines must dial 1 + area code whenever an area code is part of the dialed digits, while cellphone users can omit the "1" and just dial 10 digits. Subscriber dialing procedures: Many organizations have private branch exchange systems which permit dialing the access digit(s) for an outside line (usually 9 or 8), a "1" and finally the local area code and xxx xxxx in areas without overlays. This aspect is unintentionally helpful for employees who reside in one area code and work in an area code with one, two, or three adjacent area codes. 1+ dialing to any area code by an employee can be done quickly, with all exceptions processed by the private branch exchange and passed on to the public switched telephone network. Subscriber dialing procedures: Full-number dialing In small countries or areas, the full telephone number is used for all calls, even in the same area. This has traditionally been the case in small countries and territories where area codes have not been required. However, there has been a trend in many countries towards making all numbers a standard length, and incorporating the area code into the subscriber's number. This usually makes the use of a trunk code obsolete. Subscriber dialing procedures: For example, to call someone in Oslo in Norway before 1992, it was necessary to dial: xxx xxx (within Oslo - no area code required) (02) xxx xxx (within Norway - outside Oslo) 47 2 xxx xxx (outside Norway). After 1992, this changed to a closed eight-digit numbering plan, e.g.: 22xx xxxx (within Norway - including Oslo) 47 22xx xxxx (outside Norway). However, in other countries, such as France, Belgium, Japan, Switzerland, South Africa and some parts of North America, the trunk code is retained for domestic calls, whether local or national, e.g., Paris 01 xx xx xx xx (outside France +33 1 xxxx xxxx) Brussels 02 xxx xxxx (outside Belgium +32 2 xxx xxxx) Geneva 022 xxx xxxx (outside Switzerland +41 22 xxx xxxx) Cape Town 021 xxx xxxx (outside South Africa +27 21 xxx xxxx) New York 1 212 xxx xxxx (outside the North American Numbering Plan +1 212 xxx xxxx) Fukuoka 092 xxx xxxx (outside the Japanese Numbering Plan +81 92 xxx xxxx) India "0-10 Digit Number" (outside India +91 XXXXXXXXXX). 
In India, due to the availability of multiple operators, the metro cities have short codes which range from 2 to 8 digits. Some countries, like Italy, require the initial zero to be dialed even for calls from outside the country, e.g., Rome 06 xxxxxxxx (outside Italy +39 06 xxxxxxxx). While dialing of full national numbers takes longer than a local number without the area code, the increased use of phones that can store numbers means that this is of decreasing importance. It also makes it easier to display numbers in the international format, as no trunk code is required—hence a number in Prague, Czech Republic, can now be displayed as: 2xx xxx xxx (inside Czech Republic) +420 2xx xxx xxx (outside Czech Republic), as opposed to before September 21, 2002: 02 / xx xx xx xx (inside Czech Republic) +420 2 / xx xx xx xx (outside Czech Republic). Some countries have already switched to a closed plan but later re-added the trunk prefix; for example, in Bangkok, Thailand, before 1997: xxx-xxxx (inside Bangkok) 02-xxx-xxxx (inside Thailand) +66 2-xxx-xxxx (outside Thailand). This was changed in 1997: 2-xxx-xxxx (inside Thailand) +66 2-xxx-xxxx (outside Thailand). The trunk prefix was re-added in 2001: 02-xxx-xxxx (inside Thailand) +66 2-xxx-xxxx (outside Thailand).
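The dialing examples above all follow the same composition rule: the international (E.164) form of a number is the country code followed by the national number with its domestic trunk prefix removed. As a rough, hedged sketch only (the trunk-prefix table below is a tiny hard-coded sample chosen for illustration, not an authoritative dataset), that conversion could be expressed as:

```python
# Illustrative only: a few country codes mapped to their domestic trunk prefix.
TRUNK_PREFIX = {"44": "0", "66": "0", "1": "1"}

def to_international(national_number: str, country_code: str) -> str:
    """Return an E.164-style string: '+', country code, national number without trunk prefix."""
    digits = "".join(ch for ch in national_number if ch.isdigit())
    trunk = TRUNK_PREFIX.get(country_code, "")
    if trunk and digits.startswith(trunk):
        digits = digits[len(trunk):]
    return f"+{country_code}{digits}"

# The London number 020 7946 0321 from the earlier example becomes +442079460321,
# i.e. +44 20 7946 0321 without digit grouping.
print(to_international("020 7946 0321", "44"))
```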
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**If a tree falls in a forest** If a tree falls in a forest: "If a tree falls in a forest and no one is around to hear it, does it make a sound?" is a philosophical thought experiment that raises questions regarding observation and perception. History: While the origin of the phrase is sometimes mistakenly attributed to George Berkeley, there are no extant writings in which he discussed this question. The closest are the following two passages from Berkeley's A Treatise Concerning the Principles of Human Knowledge, published in 1710: But, say you, surely there is nothing easier than for me to imagine trees, for instance, in a park, or books existing in a closet, and nobody by to perceive them. History: The objects of sense exist only when they are perceived; the trees therefore are in the garden... no longer than while there is somebody by to perceive them. Despite these passages bearing a distant resemblance to the question, Berkeley never actually proposed the question itself. However, his work did deal extensively with the question of whether objects could continue to exist without being perceived. In June 1883, in the magazine The Chautauquan, the question was asked, "If a tree were to fall on an island where there were no human beings would there be any sound?" They then went on to answer the query with, "No. Sound is the sensation excited in the ear when the air or other medium is set in motion." The magazine Scientific American corroborated the technical aspect of this question, while leaving out the philosophic side, a year later when they asked the question slightly reworded, "If a tree were to fall on an uninhabited island, would there be any sound?" And gave a more technical answer, "Sound is vibration, transmitted to our senses through the mechanism of the ear, and recognized as sound only at our nerve centers. The falling of the tree or any other disturbance will produce vibration of the air. If there be no ears to hear, there will be no sound." The current phrasing appears to have originated in the 1910 book Physics by Charles Riborg Mann and George Ransom Twiss. The question "When a tree falls in a lonely forest, and no animal is near by to hear it, does it make a sound? Why?" is posed along with many other questions to quiz readers on the contents of the chapter, and as such, is posed from a purely physical point of view. While physicists and good friends Albert Einstein and Niels Bohr were equally instrumental in founding quantum mechanics, the two had very different views on what quantum mechanics said about reality. On one of many daily lunchtime walks with fellow physicist Abraham Pais, who like Einstein was a close friend and associate of Bohr, Einstein suddenly stopped, turned to Pais, and asked: "Do you really believe that the moon only exists if you look at it?" As recorded on the first page of Subtle Is the Lord, Pais' biography of Einstein, Pais responded to the effect of: 'The twentieth century physicist does not, of course, claim to have the definitive answer to this question.' Pais' answer was representative not just of himself and of Bohr, but of the majority of quantum physicists of that time, a situation that over time led to Einstein's effective exclusion from the very group he helped found. As Pais indicated, the majority view of the quantum mechanics community then and arguably to this day is that existence in the absence of an observer is at best a conjecture, a conclusion that can neither be proven nor disproven. 
Metaphysics: The possibility of unperceived existence Can something exist without being perceived by consciousness? – e.g. "is sound only sound if a person hears it?" The most immediate philosophical topic that the riddle introduces involves the existence of the tree (and the sound it produces) outside of human perception. If no one is around to see, hear, touch or smell the tree, how could it be said to exist? What is it to say that it exists when such an existence is unknown? Of course, from a scientific viewpoint, it exists. It is human beings that are able to perceive it. George Berkeley in the 18th century developed subjective idealism, a metaphysical theory to respond to these questions, coined famously as "to be is to be perceived". Today, metaphysicians are split. According to substance theory, a substance is distinct from its properties, while according to bundle theory, an object is merely its sense data. The definition of sound, simplified, is a hearable noise. By that definition the tree will make a sound, even if nobody hears it, simply because it could have been heard. The answer to this question depends on the definition of sound. We can define sound as our perception of air vibrations. Therefore, sound does not exist if we do not hear it. When a tree falls, the motion disturbs the air and sends off air waves. This physical phenomenon, which can be measured by instruments other than our ears, exists regardless of human perception (seeing or hearing) of it. Putting this together, although the tree falling on the island sends off air waves, it does not produce sound if no human is within the distance where the air waves are strong enough for a human to perceive them. However, if we define sound as the waves themselves, then sound would be produced. Metaphysics: Knowledge of the unobserved world Can we assume the unobserved world functions the same as the observed world? – e.g., "does observation affect outcome?" A similar question does not involve whether or not an unobserved event occurs predictably, like it occurs when it is observed. The anthropic principle suggests that the observer, just in its existence, may impose on the reality observed. Metaphysics: However, most people, as well as scientists, assume that the observer doesn't change whether the tree-fall causes a sound or not, but this is an impossible claim to prove. However, many scientists would argue that a truly unobserved event is one which realises no effect (imparts no information) on any other (where 'other' might be e.g., a human, sound-recorder or rock), and it therefore can have no legacy in the present (or ongoing) wider physical universe. It may then be recognized that the unobserved event was absolutely identical to an event which did not occur at all. Of course, the fact that the tree is known to have changed state from 'upright' to 'fallen' implies that the event must be observed to ask the question at all – even if only by the supposed deaf onlooker. 
Metaphysics: The British philosopher of science Roy Bhaskar, credited with developing critical realism, has argued, in apparent reference to this riddle, that: "If men ceased to exist sound would continue to travel and heavy bodies to fall to the earth in exactly the same way, though ex hypothesi there would be no-one to know it." This existence of an unobserved real is integral to Bhaskar's ontology, which contends (in opposition to the various strains of positivism which have dominated both natural and social science in the twentieth century) that 'real structures exist independently of and are often out of phase with the actual patterns of events'. In social science, this has made his approach popular amongst contemporary Marxists – notably Alex Callinicos – who postulate the existence of real social forces and structures which might not always be observable. Metaphysics: The dissimilarity between sensation and reality What is the difference between what something is, and how it appears? – e.g., "sound is the variation of pressure that propagates through matter as a wave" Perhaps the most important topic the riddle offers is the division between perception of an object and how an object really is. If a tree exists outside of perception, then there is no way for us to know that the tree exists. So then, what do we mean by 'existence'; what is the difference between perception and reality? People may also say that, if the tree exists outside of perception (as common sense would dictate), then it will produce sound waves. However, these sound waves will not actually sound like anything. Sound as it is mechanically understood will occur, but sound as it is understood by sensation will not occur. So then, how is it known that 'sound as it is mechanically understood' will occur if that sound is not perceived? In popular culture: Canadian singer-songwriter, social activist and environmentalist Bruce Cockburn poses the question in the chorus of his song "If a Tree Falls," on his 1988 album Big Circumstance. Cockburn's lyrics frame it as a pressing question regarding the cause and effect of deforestation. Washington-state-based wilderness conservatory Northwest Trek used a shortened form of the quote in its mid-1970s television advertisement, as such: "There is no sound unless someone is there to see it or hear it. Experience it at Northwest Trek." A paraphrase of the quote ("When you're falling in a forest and there's nobody around / Do you ever really crash, or even make a sound?") forms the bridge of the protagonist's solo number "Waving Through A Window" in the musical Dear Evan Hansen, in line with the tree motif essential to the plot. The song itself discusses a feeling of isolation through fear of failing in social interactions, as a part of the main character's social anxiety disorder. In the LucasArts adventure game Monkey Island 2: Le Chuck's Revenge, Guybrush Threepwood meets Herman Thootrot on Dinky Island. In their dialogue the young pirate asks Herman to teach him philosophy. His lesson – humorously – focuses on solving this Zen puzzle: "If a tree falls in the forest, and no one is around to hear it ... what color is the tree?"
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Qmake** Qmake: qmake is a utility that automates the generation of makefiles. Makefiles are used by the program make to build executable programs from source code; therefore qmake is a make-makefile tool, or makemake for short. Qmake: The makefiles that qmake produces are tailored to the particular platform on which it is run, based on qmake project files. This way, one set of project instructions can be used to create build instructions for different operating systems. qmake supports code generation for the following operating systems: Linux (including Android), Apple macOS, Apple iOS, FreeBSD, Haiku, Symbian, Microsoft Windows and Microsoft Windows CE. Qmake: qmake was created by Trolltech (now The Qt Company). It is distributed and integrated with the Qt application framework, and automates the creation of moc (meta object compiler) and rcc (resource compiler) sources, which are used in Qt's meta-object system and in the integration of binary resources (e.g., pictures). The qmake tool helps simplify the build process for development projects across different platforms. It automates the generation of Makefiles so that only a few lines of information are needed to create each Makefile. You can use qmake for any software project, whether it is written with Qt or not.
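As a rough illustration of how little information a project file needs to contain, a minimal qmake project file might look like the sketch below (the file, target and source names are made up for the example); running qmake on it produces a platform-appropriate Makefile, and the moc and rcc steps are added automatically when Qt modules and resources are declared:

```
# hello.pro: an illustrative, minimal qmake project file (names are hypothetical)
TEMPLATE   = app            # build an application rather than a library
TARGET     = hello          # name of the resulting executable
QT        += widgets        # link against the Qt Widgets module
SOURCES   += main.cpp mainwindow.cpp
HEADERS   += mainwindow.h
RESOURCES += icons.qrc      # binary resources, compiled in via rcc
```

Typically one would then run qmake on this file followed by make (or nmake on Windows) to build the target for the current platform.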
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MXRA7** MXRA7: Matrix remodeling associated 7 is a protein that in humans is encoded by the MXRA7 gene found on chromosome 17. Loss-of-function studies performed in MXRA7-deficient mice, in line with other types of data, suggest that this gene is involved in multiple physiological or pathological processes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Marker horizon** Marker horizon: Marker horizons (also referred to as chronohorizons, key beds or marker beds) are stratigraphic units of the same age and of such distinctive composition and appearance, that, despite their presence in separate geographic locations, there is no doubt about their being of equivalent age (isochronous) and of common origin. Such clear markers facilitate the correlation of strata, and used in conjunction with fossil floral and faunal assemblages and paleomagnetism, permit the mapping of land masses and bodies of water throughout the history of the earth. They usually consist of a relatively thin layer of sedimentary rock that is readily recognized on the basis of either its distinct physical characteristics or fossil content and can be mapped over a very large geographic area. As a result, a key bed is useful for correlating sequences of sedimentary rocks over a large area. Typically, key beds were created as the result of either instantaneous events or (geologically speaking) very short episodes of the widespread deposition of specific types of sediment. As a result, key beds can often be used both for mapping and correlating sedimentary rocks and for dating them. Volcanic ash beds (tonsteins and bentonite beds), impact spherule beds, and specific megaturbidites are types of key beds created by instantaneous events. The widespread accumulation of distinctive sediments over a geologically short period of time has created key beds in the form of peat beds, coal beds, shell beds, marine bands, black shales in cyclothems, and oil shales. A well-known example of a key bed is the global layer of iridium-rich impact ejecta that marks the Cretaceous–Paleogene boundary (K–T boundary). Marker horizon: Palynology, the study of fossil pollens and spores, routinely works out the stratigraphy of rocks by comparing pollen and spore assemblages with those of well-known layers—a tool frequently used by petroleum exploration companies in the search for new fields. The fossilised teeth or elements of conodonts are an equally useful tool. Marker horizon: The ejecta from volcanoes and bolide impacts create useful markers, as different volcanic eruptions and impacts produce beds with distinctive compositions. Marker horizons of tephra are used as a dating tool in archaeology, since the dates of eruptions are generally well-established. One particular bolide impact 66 million years ago, which formed the Chicxulub crater, produced an iridium anomaly that occurs in a thin, global layer of clay marking the Cretaceous–Paleogene boundary. Iridium layers are associated with bolide impacts and are not unique, but when occurring in conjunction with the extinction of specialised tropical planktic foraminifera and the appearance of the first Danian species, signal a reliable marker horizon for the Cretaceous–Paleogene boundary. Fossil faunal and floral assemblages, both marine and terrestrial, make for distinctive marker horizons. Some marker units are distinctive by virtue of their magnetic qualities. The Water Tower Slates, forming part of the Hospital Hill Series in the Witwatersrand Basin, include a fine-grained ferruginous quartzite which is particularly magnetic. From the same series a ripple-marked quartzite and a speckled bed are used as marker horizons. Marker horizon: On a much smaller time scale, marker horizons may be created by sedimentologists and limnologists in order to measure deposition and erosion rates in a marsh or pond environment.
The materials used for such an artificial horizon are chosen for their visibility and stability and may be brick dust, grog, sand, kaolin, glitter or feldspar clay.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Muldamine** Muldamine: Muldamine is a phytosterol alkaloid isolated from Veratrum californicum. It is the acetate ester of the piperidine steroid teinemine.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PYGB** PYGB: Glycogen phosphorylase, brain (PYGB, GPBB), is an enzyme that in humans is encoded by the PYGB gene on chromosome 20. The protein encoded by this gene is a glycogen phosphorylase found predominantly in the brain. The encoded protein forms homodimers which can associate into homotetramers, the enzymatically active form of glycogen phosphorylase. The activity of this enzyme is positively regulated by AMP and negatively regulated by ATP, ADP, and glucose-6-phosphate. This enzyme catalyzes the rate-determining step in glycogen degradation. [provided by RefSeq, Jul 2008] Structure: The PYGB gene encodes one of three major glycogen phosphorylase isoforms, which are distinguished by their different structures and subcellular localizations: brain (PYGB), muscle (PYGM), and liver (PYGL). GPBB is the longest of the three isozymes, with a length of 862 residues, due to the extended 3'-UTR at the enzyme's C-terminus. Nonetheless, it shares high homology in amino acid sequence with the other two isozymes, with 83% similarity with PYGM and 80% similarity with PYGL. Moreover, both its nucleotide and amino acid sequences and its codon usage share higher similarity with those of PYGM, thus indicating that the two share a closer evolutionary descent by gene duplication and translocation of a common ancestral gene. A possible pseudogene can be found on chromosome 10. Function: As a glycogen phosphorylase, GPBB catalyzes the phosphorolysis of glycogen to yield glucose 1-phosphate. This reaction serves as the rate-determining first step in glycogenolysis and, thus, contributes to the regulation of carbohydrate metabolism. In particular, GPBB is responsible for supplying emergency glucose during periods of stress, including anoxia, hypoglycemia, or ischemia. In normal cell conditions, GPBB is bound to the sarcoplasmic reticulum (SR) membrane by complexing with glycogen. Under stress conditions such as hypoxia, glycogen is degraded and GPBB is released into the cytoplasm. Though GPBB is primarily expressed in adult and fetal brain, it has also been detected in cardiomyocytes and at low levels in other adult and fetal tissues. These other tissues also express PYGL and PYGM, but the purpose of expressing multiple glycogen phosphorylases remains unclear. Nuclear localization was also cited for GPBB in gastrointestinal cancer. Clinical significance: Cancer GPBB overexpression has been associated with several cancers, including colorectal cancer, gastrointestinal cancer, and non-small cell lung cancer (NSCLC). Since GPBB is upregulated during the potential transition of adenoma cells into carcinoma cells, GPBB may be a useful biomarker to detect malignancy potential in precancerous lesions. Clinical significance: Ischemia Since GPBB is released from the SR membrane under ischemic conditions, it may serve as a biomarker for early detection of ischemia. Specifically, its release in acute myocardial ischemia has been attributed to increased glycogenolysis and plasma membrane permeability, and has been correlated with poor outcome. As a highly sensitive marker for myocardial ischemia, GPBB may aid in detection of perioperative myocardial damage and infarction in patients undergoing coronary artery bypass grafting. GPBB levels are also elevated in patients with hypertrophic cardiomyopathy.
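For reference, the phosphorolysis step described above can be written out explicitly (this is the standard textbook form of the glycogen phosphorylase reaction, added here for clarity rather than quoted from the article):

$$\text{glycogen}_{(n)} + \mathrm{P_i} \longrightarrow \text{glycogen}_{(n-1)} + \alpha\text{-D-glucose 1-phosphate}$$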
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jockeying (association football)** Jockeying (association football): In association football, jockeying (also called “shepherding” or "guiding") is the defender's skill of keeping between the attacker and his or her intended target (usually the goal). It requires the defender to slow down or delay the attacker by backing off slowly while at the same time trying to force an error or make a successful tackle. The defender should be in a low position with both knees bent, turned slightly at an angle from the attacker.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hizb Rateb** Hizb Rateb: The Hizb Rateb (Arabic: الحزب الراتب) is a collective recitation of Quran or dhikr or dua or wird done by murids and saliks in Islamic Sufism. Presentation: The Hizb Rateb is a group tilawa of the Quran with one voice, in mosques, zawiyas, kuttabs and Quranic schools. This custom has been practised in the Maghreb countries since the tenth hijri century under the Almohad Caliphate, after Sheikh Abdullah Al-Habati created the rules for collective reading with one tone. It has allocated and known times: it may be recited after the Fajr prayer or after the Maghrib prayer, and it may also be recited before the Zuhr prayer or before the Asr prayer. Thus, in the countries of the Maghreb, Muslims used to recite the Quran together in what is known as the Hizb Rateb, in line with the current custom in these states.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tachykinin receptor** Tachykinin receptor: There are three known mammalian tachykinin receptors termed NK1, NK2 and NK3. All are members of the 7 transmembrane G-protein coupled receptor family and induce the activation of phospholipase C, producing inositol trisphosphate (that is, they are Gq-coupled). Inhibitors of NK-1, known as NK-1 receptor antagonists, can be used as antiemetic agents, such as the drug aprepitant. Binding: The genes and receptor ligands are as follows: (Hökfelt et al., 2001; Page, 2004; Pennefather et al., 2004; Maggi, 2000)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**3D body scanning** 3D body scanning: 3D body scanning is an application of various technologies such as Structured-light 3D scanner, 3D depth sensing, stereoscopic vision and others for ergonomic and anthropometric investigation of the human form as a point-cloud. The technology and practice within research has found 3D body scanning measurement extraction methodologies to be comparable to traditional anthropometric measurement techniques. Applications: While the technology is still developing in its application, it has regularly been applied in the areas of: adapted performance sportswear; fashion design (e.g. garments, accessories); 3D printed figurines (3D selfies); 3D morphometric evaluation (i.e. for weight-loss purposes); ergonomic body measurement; 3D body measurement; body shape classification; and comparison of changes in body positions. However, despite the potential for the technology to have an impact in made-to-measure and mass customisation of items with ergonomic properties, 3D body scanning has yet to reach an early adopter or early majority stage of innovation diffusion. This is in part due to the lack of ergonomic theory relating to how to identify key landmarks on the body morphology. The suitability of 3D body scanning is also context dependent as the measurements taken and the precision of the machine are highly relative to the task in hand rather than being an absolute. Additionally, a key limitation of 3D body scanning has been the upfront cost of the equipment and the required skills by which to collect data and apply it to scientific and technical fields. However, the utilization of depth cameras on recent smartphones helps reduce the cost of 3D scans. One example of this is the recent free face scan app available on the Apple App Store. For detailed investigation of the changes of the body dimensions, high-speed (4D) scanning systems were developed by 3dMD and the Instituto de Biomecánica de Valencia (IBV). Scanning of moving humans with clothing at high resolution (usually 10-60 Hz) is technically possible, as reported multiple times by Chris Lane, Alfredo Ballester and Yordan Kyosev, but the analysis and application of this data seems to be challenging. Main worldwide events for scientific exchange in the area of 3D and 4D body scanning are the annual 3DBody.Tech Conference and the Clothing-Body-Interaction conference.[1] Scanning protocol: Although the process has been established for a considerable amount of time with international conferences held annually for industry and academics (e.g. the International Conference and Exhibition on 3D Body Scanning Technologies), the protocol and process of how to scan individuals is yet to be universally formalised. However, earlier research has proposed a standardised protocol of body scanning based on research and practice that demonstrates how non-standardised protocol and posture significantly influence body measurements, including the hip. The standard scanning protocol alone, however, produces measurements that often fail to meet the precision of manual measurement methods or ISO 20685:2010 tolerances. But through consecutive scanning and a free algorithm called GRYPHON, 97.5% of measurements meet ISO 20685:2010; a precision increase of 327%.
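As an illustrative sketch of what automated measurement extraction from a scan point-cloud can look like (a simplified, hypothetical example; this is not the GRYPHON algorithm or any specific vendor's method), a girth such as the hip circumference can be estimated by slicing the point cloud at a given height and measuring the perimeter of the slice:

```python
# Illustrative sketch: estimate a girth measurement from a body-scan point cloud
# by slicing it at a given height and taking the convex-hull perimeter of the slice.
import numpy as np
from scipy.spatial import ConvexHull

def girth_at_height(points: np.ndarray, height: float, tolerance: float = 0.005) -> float:
    """points: (N, 3) array in metres (x, y, z with z vertical); returns girth in metres."""
    # Keep points within +/- tolerance of the requested height, projected to the XY plane.
    slice_pts = points[np.abs(points[:, 2] - height) < tolerance][:, :2]
    if len(slice_pts) < 3:
        raise ValueError("not enough points in slice; rescan or widen tolerance")
    hull = ConvexHull(slice_pts)
    ring = slice_pts[hull.vertices]                       # hull vertices in order around the slice
    diffs = np.diff(np.vstack([ring, ring[:1]]), axis=0)  # close the loop
    return float(np.sum(np.linalg.norm(diffs, axis=1)))   # perimeter = girth estimate

# Example with a synthetic cylindrical "body" of radius 0.5 m:
theta = np.random.uniform(0, 2 * np.pi, 20000)
z = np.random.uniform(0, 1.8, 20000)
cloud = np.column_stack([0.5 * np.cos(theta), 0.5 * np.sin(theta), z])
print(girth_at_height(cloud, height=0.9))  # ~ 2 * pi * 0.5 = 3.14 m
```

A convex-hull perimeter slightly overestimates girths over concave regions of the body, which is one reason real extraction methods and standards such as ISO 20685 define landmarks and tolerances more carefully.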
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pill dispenser** Pill dispenser: Pill dispensers are items which release medication at specified times, to assist patients in adhering to their prescribed medication regime. They may also alert the patient that it is time to take the medication. Some devices can alert a monitoring station if the patient does not take the medication from the device promptly.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MP3 SX** MP3 SX: mp3 SX (Stereo eXtended) is a program that allows users to upgrade mp3 stereo files to MP3 Surround files. mp3 SX analyzes the existing natural ambience of the stereo material and plays it back through the rear channels. The sound sources remain in the front channels, but are played back through the left, center, and right channel, providing a stable front image even for off-sweet-spot listening. The mp3 SX program preserves the original stereo sound stage, creating additional surround envelopment, with only 15 kB/s additional information. Using this program, Radio Classique, a French classical music station, has been streaming its programming using 5.1 surround sound on the web.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Autologous endometrial coculture** Autologous endometrial coculture: Autologous Endometrial Coculture is a technique of assisted reproductive technology. It involves placing a patient’s fertilized eggs on top of a layer of cells from her own uterine lining, creating a more natural environment for embryo development and maximizing the chance for an in vitro fertilization (IVF) pregnancy. How Coculture is performed: A typical Coculture cycle consists of the following steps: 1. Once a patient has been deemed an appropriate candidate for the procedure, she undergoes an endometrial biopsy during which a small piece of her uterine lining is removed. 2. The uterine lining sample is sent to a research lab, where it is treated, purified and frozen. 3. The patient then undergoes a typical IVF cycle and is given medication to stimulate egg growth in her ovaries. 4. The patient’s eggs are retrieved and mixed with the sperm. At this time, the lab begins thawing and growing her endometrial cells. 5. Once fertilization is confirmed, the patient’s embryos are placed on top of her own (and now thawed) endometrial cells. 6. Over the next two days, the embryos are closely monitored for growth and development. 7. The patient’s embryos are transferred into her uterus for implantation and pregnancy. The potential candidate: Coculture can be an effective treatment for patients who have failed previous IVF cycles or who have poor embryo quality. Advantages: A study of 12,377 embryo cultures showed that endometrial coculture is significantly better than sequential culture media; the rates (fraction) reaching blastocyst stage were 56% versus 46% in the coculture versus the sequential system, respectively, with own oocytes. With eggs from ovum donations, the rates were 71% versus 56%, respectively. Pregnancy rates were 39% vs. 28% and implantation rates were 33% vs. 21%. In addition to being noninvasive and relatively pain free, Coculture can be performed during a short office visit. The procedure also can improve embryo quality and stimulate embryo growth. Risks: The risks of Coculture are minimal. The procedure has been performed in over 1000 patients with no reported detrimental effects on embryo growth. Complications involving uterine infection or damage caused by endometrial biopsy are extremely rare.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dynamics (mechanics)** Dynamics (mechanics): Dynamics is the branch of classical mechanics that is concerned with the study of forces and their effects on motion. Isaac Newton was the first to formulate the fundamental physical laws that govern dynamics in classical non-relativistic physics, especially his second law of motion. Principles: Generally speaking, researchers involved in dynamics study how a physical system might develop or alter over time and study the causes of those changes. In addition, Newton established the fundamental physical laws which govern dynamics in physics. By studying his system of mechanics, dynamics can be understood. In particular, dynamics is mostly related to Newton's second law of motion. However, all three laws of motion are taken into account because these are interrelated in any given observation or experiment. Linear and rotational dynamics: The study of dynamics falls under two categories: linear and rotational. Linear dynamics pertains to objects moving in a line and involves such quantities as force, mass/inertia, displacement (in units of distance), velocity (distance per unit time), acceleration (distance per unit of time squared) and momentum (mass times unit of velocity). Rotational dynamics pertains to objects that are rotating or moving in a curved path and involves such quantities as torque, moment of inertia/rotational inertia, angular displacement (in radians or less often, degrees), angular velocity (radians per unit time), angular acceleration (radians per unit of time squared) and angular momentum (moment of inertia times unit of angular velocity). Very often, objects exhibit linear and rotational motion. Linear and rotational dynamics: For classical electromagnetism, Maxwell's equations describe the kinematics. The dynamics of classical systems involving both mechanics and electromagnetism are described by the combination of Newton's laws, Maxwell's equations, and the Lorentz force. Force: From Newton, force can be defined as an exertion or pressure which can cause an object to accelerate. The concept of force is used to describe an influence which causes a free body (object) to accelerate. It can be a push or a pull, which causes an object to change direction, have new velocity, or to deform temporarily or permanently. Generally speaking, force causes an object's state of motion to change. Newton's laws: Newton described force as the ability to cause a mass to accelerate. His three laws can be summarized as follows: First law: If there is no net force on an object, then its velocity is constant: either the object is at rest (if its velocity is equal to zero), or it moves with constant speed in a single direction. Second law: The rate of change of linear momentum P of an object is equal to the net force Fnet, i.e., dP/dt = Fnet. Third law: When a first body exerts a force F1 on a second body, the second body simultaneously exerts a force F2 = −F1 on the first body. This means that F1 and F2 are equal in magnitude and opposite in direction.Newton's laws of motion are valid only in an inertial frame of reference.
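As a small, self-contained illustration of the second law in the constant-mass case (the numbers and the function name are illustrative, not from the text), dP/dt = Fnet reduces to a = Fnet/m, which can be integrated numerically in small time steps:

```python
# Minimal sketch of Newton's second law in practice: for constant mass,
# dP/dt = F_net reduces to a = F_net / m, integrated here with small time
# steps (semi-implicit Euler).
def simulate(mass, force, duration, dt=0.001):
    """Return final (position, velocity) of a body starting at rest under a constant net force."""
    position, velocity = 0.0, 0.0
    t = 0.0
    while t < duration:
        acceleration = force / mass      # second law, constant mass
        velocity += acceleration * dt    # update velocity (momentum per unit mass) first
        position += velocity * dt        # then update position
        t += dt
    return position, velocity

# A 2 kg body pushed by a 10 N net force for 3 s:
# analytically v = (F/m)t = 15 m/s and x = 0.5(F/m)t^2 = 22.5 m.
print(simulate(2.0, 10.0, 3.0))  # approximately (22.5, 15.0)
```

For a constant net force this reproduces the familiar analytic results v = (F/m)t and x = (1/2)(F/m)t².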
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Traditional transmission** Traditional transmission: Traditional transmission (also called cultural transmission) is one of the 13 design features of language developed by anthropologist Charles F. Hockett to distinguish the features of human language from that of animal communication. Critically, animal communication might display some of the thirteen features but never all of them. It is typically considered as one of the crucial characteristics distinguishing human from animal communication and provides significant support for the argument that language is learned socially within a community and not inborn where the acquisition of information is via the avenue of genetic inheritance. Traditional transmission: In essence, the idea of traditional transmission details the process by which language is passed down from one generation to the next. In this manner, it is often also referred to as cultural transmission where it is a mechanism of iterated learning. Common processes would include imitation or teaching. The model purports that present learners acquire the cultural behaviour, that is language in this instance by observing similar behaviours in others who acquired the language the same way. This is an important distinction made in the Scientific American "The Origin of Speech", where Hockett defines traditional transmission as "the detailed conventions of any one language are transmitted extra-genetically by learning and teaching". While culture is not unique to the human species, the way it exhibits itself as language in human society is very distinctive and one key trait of this uniqueness is the element of social groups. Social groups: Traditional transmission is exemplified by the sociological context of social groups. American sociologist C.H. Cooley classifies social groups on the basis of contact in primary groups. In his paper Social Organization (1909), he describes primary groups (family, playgroups, neighborhoods, community of elders) as "those characterized by intimate face-to-face association and cooperation." Although there are other types of classifications by other sociologists, Cooley's classification and description is more applicable to the concept of traditional transmission. The idea of intimate interactions aligns with how the language is transmitted from the parents to their next generation on a fundamental level. Social groups: Consequently, following this line of thought, social groups play an integral role in the transmission of language from one generation to the next. In support of this notion, the importance of social groups in traditional transmission is illustrated in the presence of social isolation and children end up with an inability to effectively acquire a language. A child raised in social isolation would be commonly known as a "feral child/wild child". The following examples showcase a few classic case studies of rescued "wild" children who have gone through language deprivation and forms credible support for the argument of traditional transmission. Social groups: Some commonly known examples include: 1) Anna - (born in 1932) Anna from Pennsylvania, was raised in private due to illegitimacy. Kept hidden and trapped in an attic, malnourished and unable to move, till she was rescued at six years old. This resulted in her lack of language ability. Once rescued, Anna received linguistic input and showed aptitude for understanding instructions but ultimately never acquired speaking. 
Social groups: 2) Genie - (born in 1957) Genie remains one of the most prominent examples of linguistically isolated children ever studied. Only rescued at 13 years of age, Genie was inadequately exposed to language input and demonstrated no language ability upon rescue. Yet with linguistic input that came from her social circles then, Genie gradually acquired communication, albeit neither fluent nor smooth. Social groups: 3) Victor of Aveyron - (born in 1788) Victor was one of the earliest feral children studied. Rescued at 12, he ran away from civilisation eight times and his case was eventually undertaken by a young medical student who tried to train him to communicate. Victor showed impressive progress with reading and comprehension of simple words. However, he never progressed beyond a rudimentary level. Significance: A key consideration when it comes to traditional transmission and why it is a significant milestone in language acquisition is its influence on language learning patterns. Traditional transmission denotes naturally that learning is acquired through social interactions and built upon by teaching and enforcement. This influences research when it comes to language learning patterns, impacting our understanding of the human cognition as well as language structure. Compellingly, it also determines the direction of how language should be taught and passed down. From the standpoint of traditional transmission where language manifests as a socially learned, culturally transmitted system, language acquisition is mechanical and is directly affected by the present environment the individual is placed in. This removes the premise of language acquisition from that of a biological construct. Instead of having to biologically explain traditional transmission, it introduces the possibility that the design features of language itself stem from traditional transmission. Of course, for the above to be significant, it means that languages engage in cultural selection for learnability where an assumption of innateness is inconsequential and languages adapt over time due to a need for survival. The above plays a major role in the study of languages, especially in their properties, structure and how it developed with time or throughout human history to be the system it is today; providing valuable insights into language and the human race, language and the human cognition as well as language and its path to survival. Significance: Traditional transmission as a design feature is also significant in that it purports that while some aspects of language could possibly be inborn, the human race crucially acquire their language ability from other speakers. This is distinct from many animal communication systems because most animals are born with the innate knowledge, and skills necessary for survival. For example, honey bees have an inborn ability to perform and understand the waggle dance. Controversy & Criticism: The main argument to the validity of traditional transmission has always been one of social versus biological construct. The concept of language being taught and learnt socially rather than being an innate instinct has been a minefield of debate for years. Specifically, the idea of language being taught extra-genetically has been met with countless criticism. These criticisms mostly stem from the proponents of the American linguist, Noam Chomsky and his school of thought. Chomsky was a supporter of generative grammar. 
He and his followers believed that humans possess the innate ability for the learning and acquisition of languages. This theory assumes an inborn capacity for language where language learning takes place from a place of priming. Controversy & Criticism: As such, Chomsky's view is that humans invariably have a certain language input at birth and therefore language learning after happens as enforcement based on the already present structure of grammar in the individual. Chomsky also introduced the concept of linguistic competence as "the system of linguistic knowledge possessed by native speakers of a language", to further support the idea of generative grammar. Linguistic competence as opposed to linguistic performance (what is actually uttered) focuses on the mental states, thought processes and representations related to language. Linguistic performance, on the other hand, is defined by Chomsky as the tangible use of language in concrete situations and circumstances. It involves the ideas of production and comprehension of language. The main distinction between performance and competence is the variable of speech errors where one might have full competence of language but yet, still succumb to speech errors in performance because competence and performance are fundamentally two different aspects of language. Controversy & Criticism: Related to generative grammar, Chomsky also proposed the idea of a "Universal Grammar" where he postulates that a specific set of structural rules of language are universal for all human languages which is also a very much controversial topic discussed widely, one of them being the prominent paper by Evans and Levinson (2009). Specifically, Chomsky believed that the reason that children acquired language so easily is due to the innate predisposition of language principles which then enables them to master complex language operations. This in particular is controversial to the ideas of traditional transmission which posits cultural learning and transmission across generations as the tool children capitalise on to acquire language. All in all, Chomsky's ideas and theories have served as the main opposing view to Hockett's design features, remaining highly controversial and a dominant area of research in the linguistics field of studies even till today. Controversy & Criticism: On another front, in the domain of evolutionary linguistics, Wacewicz & Żywiczyński have argued on the whole against Hockett's design features and why his perspective is largely incompatible to modern language evolution research. For traditional transmission, they argue that "the problem with cultural/traditional transmission so conceived is that, again, it has to do purely with the properties of the medium, i.e. the vocal patterns. Per their argument, this is only superficially, if at all, related to what truly counts about human cultural transmission. In their paper, they suggest that comparative research into areas such as critical periods of the acquisition of vocalizations (Marler and Peters 1987) and other areas of vocal learning might raise alternative views that have relevance to language evolution. Hence, their criticisms towards traditional transmission seem to point at an alternate idea of the innate ability in humans for language, instead of being solely dependent on extra-genetic transmission via learning and teaching.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CDJ** CDJ: A CDJ is a specialized digital music player for DJing. Originally designed to play music from compact discs, many CDJs can play digital music files stored on USB flash drives or SD cards. In typical use, at least two CDJs are plugged into a DJ mixer. CDJs have jog wheels and pitch faders that allow manipulation of the digital music similar to a vinyl record on a DJ turntable. Many have additional features such as loops and beat analysis that are not present on turntables. Additionally, some can function as DJ controllers to control the playback of digital files in DJ software running on a laptop instead of playing the files on the CDJ. CDJ: Many pro audio companies such as Gemini, Denon DJ, Numark, Stanton, and Vestax produced DJ quality CD players. In 1993 Denon DJ was the first to implement a 2-piece rackmounted dual-deck variable-pitch CD player with a jog wheel and instant cue button for DJs. It quickly became the industry standard and was widely adopted in most clubs and by mobile DJs throughout the 90s up until 2004, when Pioneer made an impact with the CDJ-1000. Pioneer DJ CDJs have since become widely regarded as the industry standard. The Pioneer CDJ-400, CDJ-800, CDJ-850K, CDJ-1000, CDJ-900, CDJ-2000 and the latest model CDJ-3000 have a vinyl emulation mode that allows the operator to manipulate music on a CD as if it were on a turntable. Models released prior to the CDJ-1000 lacked this feature. Pioneer CDJs released after the CDJ-400 can play from USB sticks as well as CDs. Pioneer integrated its software rekordbox with the CDJs to prepare music with cue points, accurate BPM, and search/playlist functions. For unknown reasons, the Pioneer CDJ-300 is left out of most popular accounts on CDJs. 1990s: CDJ-500 The CDJ-500 (known as the Mark 1 once the second version was released) was recognized by Pioneer DJ as their first CDJ CD player, released in 1994. However, there was a Pioneer CDJ-300 that was released in 1994 as a budget model for the CDJ-500. The CDJ-500 was the first Pioneer DJ player to have a Jog Dial (although the Technics SL-P1200 was the first ever to feature a jog dial, in 1986), allowing for cueing of the CD unlike rack-mounted CD players that were common at the time. It included a loop function, as well as loop-out adjust, and other facilities associated with looping samples from the track being played. The pitch control was +/- 10% only, and Master Tempo allowed the pitch to be locked despite tempo changes being made. 1990s: All models of the CDJ-500 had top-opening CD loading, which is opposite to all the later ranges of CDJs (starting with the CDJ-100S in 1999) which have since had front slot-loading of discs. CDJ-500II Pioneer later released the CDJ-500II, with the only changes being slightly faster performance, an adjustable Loop Out, and a maximum loop length increased to 10 minutes. CDJ-500S The CDJ-500S (also known as the CDJ-700S in the United States), released in 1997, was a smaller version of the CDJ-500. It marked the first inclusion of an anti-skip system. CDJ-100S The CDJ-100S was a CDJ model that was released in early 1998. The CDJ-100S was a basic CD player with a pitch controller and three sound effect options. 2000s: CMX-5000 The CMX-5000, released in March 2000, was Pioneer's first attempt to enter the 19" rack-mountable dual CD player market (though, with an optional installation bracket, it had previously been possible to install two CDJ-500S players side by side into an industry standard rack) that had previously been dominated by Denon.
The CMX-5000 consisted of a 2U section with a pair of slot-loading CD drives and a 3U 'controller' section with a pair of jog wheels and control buttons for the CD drive below. 2000s: CDJ-1000 The CDJ-1000 (retroactively known as the MK1 after the release of the MK2) was introduced in 2001. Featuring "Vinyl Mode", which dramatically improved jog wheel performance, the CDJ-1000 was generally accepted as the first CD player that could accurately emulate a vinyl turntable - including the ability to scratch - and it soon established itself as an industry standard for DJs. The player implemented a large touch-sensitive platter with a digital display in the middle that could relay information about the position in the music. Although this platter was not driven (meaning that it does not rotate by itself) like a turntable, the display in the center showed positioning information for accurate cueing. There was also an orange cue marker that simulated the stickers used by scratch DJs. The waveform display gave DJs the opportunity to look ahead on tracks to see forthcoming breaks. 2000s: The CDJ-1000 (and its reincarnations) became a popular tool for dance clubs and DJs, and is currently the most widely used DJ-style CD deck in nightclubs. The player supported playback from CD, CD-R and CD-RW and implemented all of the essential features for DJ CD players such as looping and pitch changing in addition to less common features such as reverse play-back and turntable break-stop and start. It included the master tempo-function introduced on the earlier CDJ-500 & CDJ-500S models, whereby the music changed speed while maintaining pitch. The CDJ-1000 is generally regarded as the first CD player to be widely adopted in club use. Before its introduction, few clubs adopted CD decks, either due to their lack of DJ functionality and overall robustness, or due to the fact that DJs still preferred the vinyl format, as most of the music they played was still far more prevalent on vinyl than CD media. The introduction of recordable CD-R and then CD-RW media discs and stand-alone recorders able to record music onto them facilitated widespread adoption of the CDJ-1000. Before this, DJs who wanted to test a new piece of music they might have made themselves in a studio, in either a club or as early promotional items to radio DJs, often had to rely on getting acetate discs produced. These were both expensive to produce and had an inherently short lifespan, as after a few plays the disc would wear out and thus be completely unplayable. 2000s: CDJ-800 The CDJ-800 released in 2002 used a different mechanism for the jog wheel than the 1000 - it could perform "quick return" if the top surface of the wheel was pressed, then released. The general design purpose of the CDJ-800 was to offer DJs the facilities they have in the club on CDJ-1000s at home for a lower price. While the CDJ-1000 had a button to override the pitch slider, the CDJ-800 slider had a center detent, which was "easy to center." The CDJ-800 did not have the CDJ-1000's "hot cue" feature, and had only "one cue, and one loop" at a time, though these could be saved for up to 500 CDs. The CDJ-800 could alter loop "out-points" while playing, but could not alter in-points; loops had to be re-captured. Though the CDJ-1000 would relay (alternate CDs) in both vinyl and CDJ jog modes, the CDJ-800 would only relay in CDJ jog mode.
The CDJ-800 also had an "auto-beat" function that the 1000 does not.The CDJ-800 was introduced in November 2002 and discontinued in February 2006 in favor of the updated second-generation version, called CDJ-800-MK2. 2000s: Dan Morrell, ("DJ Smurf") wrote of liking the CDJ-800 due its excellent sound and low price. 2000s: DMP-555 The DMP-555 was a single deck tabletop CD-player in Pioneer's range for DJs that was introduced in April 2002 and discontinued during 2004. The DMP-555 featured several innovative features, such as playback from SD card, and MP3 playback from either memory card or optical media. It also included the ability (unique in Pioneer's DJ product line) to cue from one media source and playback from another all on the same unit, allowing one to DJ two tracks from a single DMP-555 alone. The product was hobbled by a lack of support and updates, a 2GB limit on SD card capacity, and the inability to write MP3 files directly to the SD card. A special Pioneer-branded writer was required, and transfers had to be encrypted through custom Pioneer software because of music label concerns over copyright infringement. 2000s: CMX-3000 The CMX-3000 was Pioneer's second attempt to enter the market of rack-mountable dual deck CD-players. Released in 2003, in the wake of the CDJ-1000, the player was - and still is - often mistakenly advertised as a 19" inch rack mountable equivalent of dual CDJ-1000s even though the intended target audiences for the products, as well as their comparative pricing, were entirely in different leagues. The misconception is possibly caused by the fact that while Pioneer's earlier dual deck CD-player, the CMX-5000, only had a jog wheel comparable to earlier single deck CD-players for doing pitch bending, the CMX-3000 also allowed distinct jog mode that enabled the user to use the jog wheel for scratching, a feature that thus far was only available on the top-of-the line CDJ-1000. The jog wheel however relied upon the movement of the wheel itself and was not touch sensitive as opposed to the CDJ-1000, CDJ-800 and CDJ-400. Therefore, the scratch was intended as an effect or for cueing a track, and was not appropriate for stopping the track by touch unlike other CDJ models. 2000s: Mainly due to the product's comparative pricing (for the price of two CDJ-1000s, a DJ could get almost three CMX-3000 units with two players each) the CMX-3000s found their way to the setups of many mobile DJ's as well as into the booths of many world's best nightclubs as a backup player in case the industry standard CDJ-1000s fail for some reason during a night. 2000s: CDJ-1000MK2 An updated version of the CDJ-1000, the CDJ-1000 MK2 was released in July 2003 with additional features like an improved jog wheel and faster response time than in the original model. The product was discontinued in 2006 when the MK3 was introduced into the market. CDJ-200 The CDJ-200 was the discontinued budget model CDJ CD player released in 2004. It was similar in size to the CDJ-100S, however features such as MP3 playback capabilities and loop functions were added or improved. Both the CDJ-100S and the CDJ-200 had similar options to manipulate the CD, however they lacked the vinyl modes of other models. 2000s: DVJ-X1 The DVJ-X1 was a DVD quasi-turntable that allowed VJ's to scratch and mix video like a vinyl record. Released in 2004 and designed for professional use in clubs, it featured real-time digital video scratching, looping and instant hot cueing. 
It had the capability to sync video and audio streams even when being pitched or reversed. It also played CDs with features similar to the regular CDJ-1000 CD turntable. In 2006, Pioneer introduced a successor unit, the DVJ-1000. 2000s: CDJ-800-MK2 Pioneer released the CDJ-800-MK2 in February 2006, replacing the CDJ-800. The main difference is that the CDJ-800-MK2 could play MP3 files from CDs. The design had also been changed. CDJ-1000MK3 The third model of the 1000 series known as the CDJ-1000 MK3 was released in March 2006. 2000s: Unlike the earlier versions, the MK3 supported playback of MP3s from CD-R and CD-RW media. Other improvements to earlier versions included bigger, lighter displays; a 100-dot waveform display instead of the earlier 50-dot waveform; the ability to record loops into hot cue slots instead of just cue points. The mechanical resistance of the jog wheel was adjustable to suit different styles of handling by the DJ. Furthermore, the MK3 used newer SD media while the earlier incarnations used MultiMediaCard/MMC as a memory card format. 2000s: Discontinuation Shortly after the introduction of the CDJ-1000's successors, the Pioneer CDJ-900 and the Pioneer CDJ-2000, in a statement, UK sales manager Martin Dockree said: It is with mixed feelings that today we announce to the channel the discontinuation of the CDJ-1000MK3…….thanks to the hard work of our then newly appointed direct retailers, installers and established distribution, as well as the DJs who instantly recognised it as the first real practical DJ CD player, it very quickly became an industry standard fixture in the DJ booth. 2000s: DVJ-1000 The DVJ-1000 was a digital turntable capable of playing back video data on DVDs, as well as CD-Audio, and MP3 audio on both CDs and DVDs. Created by Pioneer Electronics in 2006, it was the successor to the Pioneer DVJ-X1. Unlike the DVJ-X1, the DVJ-1000 was approximately the same dimensions as Pioneer's audio-only CD turntables (CDJ-1000), and could be fitted into existing enclosures with relative ease, allowing for an easy upgrade path for club owners and sound engineers. In addition, the unit borrowed several usability features from the CDJ line of that era, including a brighter fluorescent display on both the information screen and the central on-jog display. Loop adjustment features were carried over as well, and a new automatic 4-beat loop feature had been included on this unit. Because the unit played back DVD material, several new outputs had been added, including S/PDIF, composite outputs, a preview video output, which also doubled as a 'dashboard' for searching through video and MP3 content, as well as control outputs for compatible Pioneer DJ mixers. For the travelling DJ, the unit was multi-system, outputting both PAL and NTSC video signals for near-global compatibility. As part of its marketing strategy, Pioneer had equipped several noted DJs with the new unit, including Sander Kleinenberg. The unit retailed for US$2500 (£1599 GBP), which was about 25 percent less than the introductory pricing on the DVJ-X1. CDJ-400 The CDJ-400 was released in late 2007. It was similar in size to the CDJ-200, but came with scratching abilities and effects, as well as being Pioneer's first model to have a USB input. This made it possible to play MP3 music from a USB memory stick. On the back of the CDJ-400 was another USB connector that could be used to connect the CDJ-400 to a computer.
This enabled MIDI control, so the player could be used to control various types of DJ mix software. The CDJ-400 had a built-in USB sound card. 2000s: MEP-7000 The MEP-7000 was Pioneer's addition to its product range for professional DJs, released in late 2007. At the 2008 NAMM Show, the MEP-7000 was featured along with Pioneer's DJM-3000 19" rackmount DJ mixer. In Australia, the DJM-3000 had been discontinued for sale in late 2006 but was re-released in June 2008 just for the MEP-7000. The player was a 19" rack-mountable twin player capable of playing media formats ranging from normal audio CD/CD-R/CD-RW to digital data files in MP3 and AAC formats written on DVDs as well as USB-connected memory sticks and/or portable hard drives. 2000s: CDJ-900 The Pioneer CDJ-900 was announced simultaneously with the Pioneer CDJ-2000 on September 17, 2009. The player had been available since the end of December 2009. The CDJ-900 was placed below the Pioneer CDJ-2000, but above the Pioneer CDJ-850 and the Pioneer CDJ-350. It included features from the Pioneer CDJ-2000, including a tilted screen, Serato Pro DJ link, and Serato HID support. Features that set it apart from the Pioneer CDJ-850 included a larger screen with dedicated playback and browse screens, quantize, and 0.5-frame step. 2000s: A unique feature was the inclusion of "slip" mode. This was not included on the Pioneer CDJ-2000, Pioneer CDJ-850, or the Pioneer CDJ-350. This allowed DJs to manipulate the track and have it return to where it would have been if the track had not been manipulated. If a loop was engaged at 0:02, for example, and left running for 1 minute, disengaging the loop would jump playback to 1:02, as if the user had never engaged the loop. 2000s: This CDJ allowed playback from USB drives, audio CDs and MP3 CDs, could act as a MIDI controller, and was a Serato accessory for HID playback. HID playback allowed the CDJ-900 to be used as a controller for the computer program Serato at a much higher resolution than MIDI, with access to all the features of the CDJ-900. CDJ-2000 The Pioneer CDJ-2000 was introduced into the CDJ range of digital turntables targeted for professional DJs simultaneously with the CDJ-900 on September 17, 2009. It became available in late December 2009. The Pioneer CDJ-2000 was the replacement for the Pioneer CDJ-1000 MK3. 2010s: CDJ-850 Replacing the CDJ-800MK2, the CDJ-850 was released in 2010, offering some major enhancements over its predecessor. This deck was designed to feel and function like a CDJ-900 or CDJ-2000 and was rekordbox-enabled, while maintaining an affordable price. As compared to the CDJ-900's tracking accuracy of 1 ms, however, the CDJ-850 had an accuracy of only 1 frame (13 ms), which could make seamless looping impossible without constant adjustments. The CDJ-850 also had USB functionality with rekordbox capability. 2010s: CDJ-350 Released in July 2010 as a consumer-friendly CDJ, its features included a playlist button, manual and automatic loop options, a vinyl mode for scratching, a tempo adjuster, and a Master Tempo button that changed the pitch of the song when disengaged. It was capable of using USB thumb drives and CDs, and could also be used as a MIDI controller. 2010s: CDJ-2000NXS The Pioneer CDJ-2000 was discontinued at the end of 2012 and was replaced with the Pioneer CDJ-2000 Nexus, released in September 2012.
New features included a high-resolution screen which displays detailed waveform information as well as Beat-Sync, which allows DJs to automatically beat-match tracks from 2, 3 or 4 players via ProDJLink. The Pioneer CDJ-2000 Nexus was also the first CDJ to allow playback of music stored on a smartphone or tablet via Wi-Fi/USB connection. The Pioneer CDJ-2000 Nexus was discontinued at the beginning of 2016, and was replaced by the CDJ-2000NXS2, released in February 2016. 2010s: CDJ-2000NXS2 In early 2016, Pioneer unveiled a newer model of its flagship CDJ range: the CDJ-2000NXS2. Containing a high definition 7" touchscreen, eight hot cues, as well as several other features, the CDJ-2000NXS2 was met with much success and praise. The CDJ-2000NXS2 was the first Pioneer flagship DJ player that supported the FLAC file format. It was also the first of the CDJ lineup to support external peripherals such as the DDJ-SP1, DDJ-XP1 and DDJ-XP2. The use of an "Add-On" controller was most notable with instant access to all 8 hot cues. 2020s: CDJ-3000 In late 2020, Pioneer released the CDJ-3000, the newest flagship CDJ model. Unlike previous models of CDJ, the CDJ-3000 has no CD drive, making it similar to Pioneer's XDJ line of DJ media players such as the XDJ-1000MK2 which lacked CD drives. The CDJ-3000 introduced a larger 9" touchscreen, a rearranged interface with more hot cue buttons (but the same eight hot cues as available on the CDJ-2000NXS2), more loop and beat jump buttons, and an LCD screen in the middle of the jog wheel. The CDJ-3000 supports FAT, FAT32, exFAT, and HFS+ file systems, but does not support NTFS or macOS GUID partitioning. FAT or FAT32 is required when flashing the firmware.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alkyne metathesis** Alkyne metathesis: Alkyne metathesis is an organic reaction that entails the redistribution of alkyne chemical bonds. The reaction requires metal catalysts. Mechanistic studies show that the conversion proceeds via the intermediacy of metal alkylidyne complexes. The reaction is related to olefin metathesis. History: Metal-catalyzed alkyne metathesis was first described in 1968 by Bailey, et al. The Bailey system utilized a mixture of tungsten and silicon oxides at temperatures as high as 450 °C. In 1974, Mortreux reported the use of a homogeneous catalyst—molybdenum hexacarbonyl at 160 °C—to observe an alkyne scrambling phenomenon, in which an unsymmetrical alkyne equilibrates with its two symmetrical derivatives. History: The Mortreux system consists of the molybdenum precatalyst molybdenum hexacarbonyl Mo(CO)6 and resorcinol cocatalyst. In 1975, T. J. Katz proposed a metal carbyne (i.e. alkylidyne) and a metallacyclobutadiene as intermediates. In 1981, R. R. Schrock characterized several metallacyclobutadiene complexes that were catalytically active. Molybdenum catalysts with aniline-derived ligands are highly effective. Catalysts for alkyne metathesis: The so-called "canopy catalysts" containing tripodal ligands are particularly active and easy to prepare. Thorough experimental and computational studies showed that metallatetrahedranes were isolable but dynamic species within the catalytic cycle. Alkyne metathesis catalysts have also been developed using rhenium(V) complexes. Such catalysts are air-stable and tolerant of diverse functional groups, including carboxylic acids. Catalyst degradation: Typical degradation pathways for these catalysts include hydrolysis and oxidation. Catalyst degradation: Dimerization of the alkylidyne units remains possible, as can be seen from complex 28, which was isolated in small amounts. In addition to the decomposition pathways by bimolecular collision or hydrolysis, Schrock alkylidyne complexes degrade upon attempted metathesis of terminal alkynes. The critical step occurs after formation of the metallacycle and consists of a transannular C-H activation with formation of a deprotio-metallacyclobutadiene and concomitant loss of one alkoxide ligand. This reaction course remains viable for the new alkylidynes with silanolate ligands. Specifically, compound 29 could be isolated upon addition of 1,10-phenanthroline. As a result, terminal alkynes cannot be metathesized with similar efficiency under existing catalyst systems. Catalyst degradation: In practice, 5 Å molecular sieves (MS) are used as a butyne scavenger to shift the equilibrium to products. Ring closing alkyne metathesis: General: Alkyne metathesis can be used in ring-closing operations, and RCAM stands for ring closing alkyne metathesis. The olfactory molecule civetone can be synthesised from a di-alkyne. After ring closure the new triple bond is stereoselectively reduced with hydrogen and the Lindlar catalyst in order to obtain the Z-alkene (cyclic E-alkenes are available through the Birch reduction). An important driving force for this type of reaction is the expulsion of small gaseous molecules such as acetylene or but-2-yne. Ring closing alkyne metathesis: The same two-step procedure was used in the synthesis of the naturally occurring cyclophane turriane. Trisamidomolybdenum(VI) alkylidyne complexes catalyze alkyne metathesis. Natural product synthesis: RCAM can also be used as a strategic step in natural product total synthesis. Some examples show the power of these catalysts.
For example, RCAM can serve as a key step in the total synthesis of the marine prostanoid hybridalactone, where epoxide, internal olefin and ester groups are tolerated. Another example shows that a highly functionalized enyne, which displays a rare thiazolidinone unit, can be metathesized with a Mo(III) catalyst; neither this unusual sulfur-containing heterocycle nor the elimination-prone tertiary glycoside posed any problem in the ring-closing step. Ring closing alkyne metathesis: The total synthesis of spirastrellolide F employs alkyne metathesis in one step. The molecular frame of this potent phosphatase inhibitor is decorated with no fewer than 21 stereogenic centers and features a labile skipped diene in the side chain. Its macrocyclic core incorporates a tetrahydropyran ring, a spiroketal unit, as well as a highly unusual chlorinated bis-spiroketal motif. Specifically, a sequence of RCAM coupled with a gold-catalyzed acetalization successfully builds the polycyclic system at a late stage of the synthesis. Nitrile-alkyne cross-metathesis: By replacing the tungsten alkylidyne with a tungsten nitride and introducing a nitrile, nitrile-alkyne cross-metathesis (NACM) couples two nitrile groups together into a new alkyne. Nitrogen is collected by use of a sacrificial alkyne (elemental N2 is not formed):
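For orientation, the alkyne scrambling equilibrium mentioned in the History section above (an unsymmetrical alkyne equilibrating with its two symmetrical derivatives) can be written schematically as

$$2\,\mathrm{R{-}C{\equiv}C{-}R'} \;\rightleftharpoons\; \mathrm{R{-}C{\equiv}C{-}R} + \mathrm{R'{-}C{\equiv}C{-}R'}$$

In ring-closing alkyne metathesis of methyl-capped (2-butynyl) substrates, R' is a methyl group, so the volatile by-product is but-2-yne, whose removal (for example by the 5 Å molecular sieves mentioned above) drives the equilibrium toward the ring-closed product.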
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bismuth polycations** Bismuth polycations: Bismuth polycations are polyatomic ions of the formula Bix^n+. They were originally observed in solutions of bismuth metal in molten bismuth chloride. It has since been found that these clusters are present in the solid state, particularly in salts where tetrachlorogallate or tetrachloroaluminate anions serve as the counteranions, but also in amorphous phases such as glasses and gels. Bismuth endows materials with a variety of interesting optical properties that can be tuned by changing the supporting material. Commonly-reported structures include the trigonal bipyramidal Bi5^3+ cluster, the octahedral Bi6^2+ cluster, the square antiprismatic Bi8^2+ cluster, and the tricapped trigonal prismatic Bi9^5+ cluster. Known materials: Crystalline: Bi5(AlCl4)3, Bi8(AlCl4)2, Bi5(GaCl4)3, Bi8(GaCl4)2. Metal complexes: [CuBi8][AlCl4]3, [Ru(Bi8)2]6+, [Ru2Bi14Br4][AlCl4]4. Structure and bonding: Bismuth polycations form despite the fact that they possess fewer total valence electrons than would seem necessary for the number of sigma bonds. The shapes of these clusters are generally dictated by Wade's rules, which are based on the treatment of the electronic structure as delocalized molecular orbitals. The bonding can also be described with three-center two-electron bonds in some cases, such as the Bi5^3+ cluster. Structure and bonding: Bismuth clusters have been observed to act as ligands for copper and ruthenium ions. This behavior is possible due to the otherwise fairly inert lone pairs on each of the bismuth atoms, which arise primarily from the s-orbitals left out of Bi–Bi bonding. Optical properties: The variety of electron-deficient sigma aromatic clusters formed by bismuth gives rise to a wide range of spectroscopic behaviors. Of particular interest are the systems capable of low-energy electronic transitions, as these have demonstrated potential as near-infrared light emitters. It is the tendency of electron-deficient bismuth to form sigma-delocalized clusters with small HOMO/LUMO gaps that gives rise to the near-infrared emissions. This property makes these species potentially valuable to the field of near-infrared optical tomography, which exploits the near-infrared window in biological tissue.
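As a worked illustration of the Wade's rules counting invoked above (standard electron bookkeeping for bare main-group clusters, added here for clarity rather than quoted from the article): each bare Bi vertex keeps one outward-pointing lone pair and contributes its remaining three valence electrons to cluster bonding, so for the Bi5^3+ cluster

$$5 \times 3 - 3 = 12 \text{ skeletal electrons} = 6 \text{ pairs} = (n+1) \text{ pairs for } n = 5 \text{ vertices},$$

which is the closo count, consistent with the trigonal bipyramidal geometry reported above.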
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bryan J. Traynor** Bryan J. Traynor: Bryan J. Traynor is a neurologist and a senior investigator at the National Institute on Aging, and an adjunct professor at Johns Hopkins University. Dr. Traynor studies the genetics of human neurological conditions such as amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD). He led the international consortium that identified pathogenic repeat expansions in the C9orf72 gene as a common cause of ALS and FTD. Dr. Traynor also led efforts that identified other Mendelian genes responsible for familial ALS and dementia, including VCP, MATR3, KIF5A, HTT, and SPTLC1. Dr. Traynor is a co-recipient of the Potamkin Prize for Research in Pick's, Alzheimer's, and Related Diseases for the discovery of the C9orf72 repeat expansions, and the Sheila Essey Award for his contributions to our understanding of ALS. He also received the NIH Director’s Award. Education: Dr. Traynor received his medical degree (MB, BCh, BAO, 1993), his Medical Doctorate (MD, 2000), and his Doctor of Philosophy (PhD, 2012) from University College Dublin. He also received his Master of Medical Science (MMSc) in drug development and clinical trial design from Harvard-MIT HST in 2004. He completed his neurology residency and fellowship at Brigham and Women’s Hospital and Massachusetts General Hospital. Awards, prizes, and honors: 2011 National Institute on Aging Director's Award 2012 Derek Denny-Brown Award 2012 Elected fellow of the American Neurological Association 2012 National Institutes of Health Director's Award 2013 Diamond Award 2013 Sheila Essey Award 2016 Potamkin Prize 2018 Health and Life Sciences 50 Honoree 2020 Elected fellow of the Royal College of Physicians of Ireland 2021 Elected fellow of the Royal College of Physicians (London) 2022 Elected member of the Association of American Physicians Notable professional service: Chief, Neuromuscular Diseases Research Section, NIA, NIH Team leader, RNA Therapeutics Laboratory, NCATS, NIH Member, Health Genetics of Health and Disease NIH Study Section (2015-2021) Chair, Congressionally Mandated Department of Defense ALS Research Program (2015-2019) Co-chair, NIH Gene Therapy Task Force Member, Scientific Program Advisory Committee, American Neurological Association Editorial board member, Journal of Neurology, Neurosurgery and Psychiatry; Neurobiology of Aging; JAMA Neurology (2017-2021); Lancet eClinicalMedicine.
**Acetylcholine receptor** Acetylcholine receptor: An acetylcholine receptor (abbreviated AChR) is an integral membrane protein that responds to the binding of acetylcholine, a neurotransmitter. Classification: Like other transmembrane receptors, acetylcholine receptors are classified according to their "pharmacology," or according to their relative affinities and sensitivities to different molecules. Although all acetylcholine receptors, by definition, respond to acetylcholine, they respond to other molecules as well. Nicotinic acetylcholine receptors (nAChR, also known as "ionotropic" acetylcholine receptors) are particularly responsive to nicotine. The nicotinic ACh receptor is also a Na+, K+ and Ca2+ ion channel. Muscarinic acetylcholine receptors (mAChR, also known as "metabotropic" acetylcholine receptors) are particularly responsive to muscarine. Nicotinic and muscarinic receptors are the two main kinds of "cholinergic" receptors. Receptor types: Molecular biology has shown that the nicotinic and muscarinic receptors belong to distinct protein superfamilies. Nicotinic receptors are of two types: Nm and Nn. Nm is located in the neuromuscular junction, where it causes the contraction of skeletal muscles by way of end-plate potentials (EPPs). Nn causes depolarization in autonomic ganglia, resulting in a postganglionic impulse. Nicotinic receptors cause the release of catecholamines from the adrenal medulla, and also site-specific excitation or inhibition in the brain. Both Nm and Nn are Na+ and Ca2+ channel linked, but Nn is also linked with an extra K+ channel. Receptor types: nAChR The nAChRs are ligand-gated ion channels, and, like other members of the "cys-loop" ligand-gated ion channel superfamily, are composed of five protein subunits symmetrically arranged like staves around a barrel. The subunit composition is highly variable across different tissues. Each subunit contains four regions which span the membrane and consist of approximately 20 amino acids. Region II, which sits closest to the pore lumen, forms the pore lining. Receptor types: Binding of acetylcholine to the N termini of each of the two alpha subunits results in a 15° rotation of all M2 helices. The cytoplasmic side of the nAChR has rings of high negative charge that determine the cation specificity of the receptor and remove the hydration shell often formed by ions in aqueous solution. In the intermediate region of the receptor, within the pore lumen, valine and leucine residues (Val 255 and Leu 251) define a hydrophobic region through which the dehydrated ion must pass. The nAChR is found at the edges of junctional folds at the neuromuscular junction on the postsynaptic side; it is activated by acetylcholine released across the synapse. The diffusion of Na+ and K+ across the receptor causes depolarization, the end-plate potential, that opens voltage-gated sodium channels, which allows for firing of the action potential and potentially muscular contraction. Receptor types: mAChR In contrast, the mAChRs are not ion channels but belong instead to the superfamily of G-protein-coupled receptors that activate other ionic channels via a second messenger cascade. The muscarinic cholinergic receptor activates a G-protein when bound to extracellular ACh. The alpha subunit of the G-protein inhibits adenylyl cyclase (reducing intracellular cAMP), while the beta-gamma subunit activates K+ channels and therefore hyperpolarizes the cell. This causes a decrease in cardiac activity.
Origin and evolution: ACh receptors are related to GABA, glycine, and 5-HT3 receptors, and their similar protein sequences and gene structures strongly suggest that they evolved from a common ancestral receptor. In fact, relatively minor mutations, such as a change of three amino acids, can convert a cation-selective channel in many of these receptors into an anion-selective channel gated by acetylcholine, showing that even fundamental properties can change relatively easily in evolution. Pharmacology: Acetylcholine receptor modulators can be classified by which receptor subtypes they act on. Role in health and disease: Nicotinic acetylcholine receptors can be blocked by curare, hexamethonium and toxins present in the venoms of snakes and shellfish, such as α-bungarotoxin. Drugs such as the neuromuscular blocking agents bind reversibly to the nicotinic receptors in the neuromuscular junction and are used routinely in anaesthesia. Nicotinic receptors are the primary mediator of the effects of nicotine. In myasthenia gravis, the receptor at the neuromuscular junction is targeted by antibodies, leading to muscle weakness. Role in health and disease: Muscarinic acetylcholine receptors can be blocked by the drugs atropine and scopolamine. Role in health and disease: Congenital myasthenic syndrome (CMS) is an inherited neuromuscular disorder caused by defects of several types at the neuromuscular junction. Postsynaptic defects are the most frequent cause of CMS and often result in abnormalities in nicotinic acetylcholine receptors. The majority of mutations causing CMS are found in the AChR subunit genes. Out of all mutations associated with CMS, more than half are mutations in one of the four genes encoding the adult acetylcholine receptor subunits. Mutations of the AChR often result in endplate deficiency. Most of the mutations of the AChR are mutations of the CHRNE gene; by contrast, mutations in the gene encoding the alpha-5 nicotinic acetylcholine receptor subunit are associated with increased susceptibility to nicotine addiction. The CHRNE gene codes for the epsilon subunit of the AChR. Most mutations are autosomal recessive loss-of-function mutations, and as a result there is endplate AChR deficiency. CHRNE is associated with changing the kinetic properties of the AChR. One type of mutation of the epsilon subunit of the AChR introduces an Arg into the binding site at the α/ε subunit interface of the receptor. The addition of a cationic Arg into the anionic environment of the AChR binding site greatly reduces the kinetic properties of the receptor. The result of the newly introduced Arg is a 30-fold reduction of agonist affinity, a 75-fold reduction of gating efficiency, and a greatly reduced channel-opening probability. This type of mutation results in a severe form of CMS.
**Heimburg Formation** Heimburg Formation: The Heimburg Formation is a geologic formation in Germany. It preserves fossils dating back to the Cretaceous period.
**Nuclear Overhauser effect** Nuclear Overhauser effect: The nuclear Overhauser effect (NOE) is the transfer of nuclear spin polarization from one population of spin-active nuclei (e.g. 1H, 13C, 15N etc.) to another via cross-relaxation. A phenomenological definition of the NOE in nuclear magnetic resonance spectroscopy (NMR) is the change in the integrated intensity (positive or negative) of one NMR resonance that occurs when another is saturated by irradiation with an RF field. The change in resonance intensity of a nucleus is a consequence of the nucleus being close in space to those directly affected by the RF perturbation. Nuclear Overhauser effect: The NOE is particularly important in the assignment of NMR resonances, and the elucidation and confirmation of the structures or configurations of organic and biological molecules. The 1H two-dimensional NOE spectroscopy (NOESY) experiment and its extensions are important tools for identifying the stereochemistry of proteins and other biomolecules in solution, whereas for solids, single-crystal X-ray diffraction is typically used to identify stereochemistry. The heteronuclear NOE is particularly important in 13C NMR spectroscopy to identify carbons bonded to protons, to provide polarization enhancements to such carbons to increase signal-to-noise, and to ascertain the extent to which the relaxation of these carbons is controlled by the dipole-dipole relaxation mechanism. History: The NOE developed from the theoretical work of American physicist Albert Overhauser, who in 1953 proposed that nuclear spin polarization could be enhanced by the microwave irradiation of the conduction electrons in certain metals. The electron-nuclear enhancement predicted by Overhauser was experimentally demonstrated in 7Li metal by T. R. Carver and C. P. Slichter, also in 1953. History: A general theoretical basis and experimental observation of an Overhauser effect involving only nuclear spins in the HF molecule was published by Ionel Solomon in 1955. Another early experimental observation of the NOE was used by Kaiser in 1963 to show how the NOE may be used to determine the relative signs of scalar coupling constants, and to assign spectral lines in NMR spectra to transitions between energy levels. In this study, the resonance of one population of protons (1H) in an organic molecule was enhanced when a second, distinct population of protons in the same organic molecule was saturated by RF irradiation. The application of the NOE was used by Anet and Bourn in 1965 to confirm the assignments of the NMR resonances for β,β-dimethylacrylic acid and dimethylformamide, thereby showing that conformation and configuration information about organic molecules in solution can be obtained. History: Bell and Saunders reported a direct correlation between NOE enhancements and internuclear distances in 1970, while quantitative measurements of internuclear distances in molecules with three or more spins were reported by Schirmer et al. Richard R. Ernst was awarded the 1991 Nobel Prize in Chemistry for developing Fourier transform and two-dimensional NMR spectroscopy, which was soon adapted to the measurement of the NOE, particularly in large biological molecules. In 2002, Kurt Wüthrich won the Nobel Prize in Chemistry for the development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution, demonstrating how the 2D NOE method (NOESY) can be used to constrain the three-dimensional structures of large biological macromolecules.
Professor Anil Kumar was the first to apply the two-dimensional nuclear Overhauser effect (2D-NOE, now known as NOESY) experiment to a biomolecule, which opened the field for the determination of three-dimensional structures of biomolecules in solution by NMR spectroscopy. Relaxation: The NOE and nuclear spin-lattice relaxation are closely related phenomena. For a single spin-1⁄2 nucleus in a magnetic field there are two energy levels, often labeled α and β, which correspond to the two possible spin quantum states, +1⁄2 and −1⁄2, respectively. At thermal equilibrium, the population of the two energy levels is determined by the Boltzmann distribution, with spin populations given by Pα and Pβ. If the spin populations are perturbed by an appropriate RF field at the transition energy frequency, the spin populations return to thermal equilibrium by a process called spin-lattice relaxation. The rate of transitions from α to β is proportional to the population of state α, Pα, and is a first-order process with rate constant W. The condition where the spin populations are equalized by continuous RF irradiation (Pα = Pβ) is called saturation, and the resonance disappears since transition probabilities depend on the population difference between the energy levels. Relaxation: In the simplest case where the NOE is relevant, the resonances of two spin-1⁄2 nuclei, I and S, are chemically shifted but not J-coupled. The energy diagram for such a system has four energy levels that depend on the spin-states of I and S, corresponding to αα, αβ, βα, and ββ, respectively. The W's are the probabilities per unit time that a transition will occur between the four energy levels, or in other terms the rate at which the corresponding spin flips occur. There are two single quantum transitions: W1I, corresponding to αα ➞ βα and αβ ➞ ββ, and W1S, corresponding to αα ➞ αβ and βα ➞ ββ; a zero quantum transition, W0, corresponding to βα ➞ αβ; and a double quantum transition, W2, corresponding to αα ➞ ββ. Relaxation: While RF irradiation can only induce single-quantum transitions (due to so-called quantum mechanical selection rules) giving rise to observable spectral lines, dipolar relaxation may take place through any of the pathways. The dipolar mechanism is the only common relaxation mechanism that can cause transitions in which more than one spin flips. Specifically, the dipolar relaxation mechanism gives rise to transitions between the αα and ββ states (W2) and between the αβ and the βα states (W0). Relaxation: Expressed in terms of their bulk NMR magnetizations, the experimentally observed steady-state NOE for nucleus I when the resonance of nucleus S is saturated (MS = 0) is defined by the expression ηIS = (MIS − M0I)/M0I, where M0I is the magnetization (resonance intensity) of nucleus I at thermal equilibrium. An analytical expression for the NOE can be obtained by considering all the relaxation pathways and applying the Solomon equations to obtain ηIS = (MIS − M0I)/M0I = (γS/γI)(σIS/ρI) = (γS/γI)·(W2 − W0)/(2W1I + W0 + W2), where ρI = 2W1I + W0 + W2 and σIS = W2 − W0. Here ρI is the total longitudinal dipolar relaxation rate (1/T1) of spin I due to the presence of spin S, σIS is referred to as the cross-relaxation rate, and γI and γS are the magnetogyric ratios characteristic of the I and S nuclei, respectively. Relaxation: Saturation of the degenerate W1S transitions disturbs the equilibrium populations so that Pαα = Pαβ and Pβα = Pββ.
The system's relaxation pathways, however, remain active and act to re-establish an equilibrium, except that the W1S transitions are irrelevant because the population differences across these transitions are fixed by the RF irradiation, while the population differences across the W1I transitions do not change from their equilibrium values. This means that if only the single quantum transitions were active as relaxation pathways, saturating the S resonance would not affect the intensity of the I resonance. Therefore, to observe an NOE on the resonance intensity of I, the contribution of W0 and W2 must be important. These pathways, known as cross-relaxation pathways, only make a significant contribution to the spin-lattice relaxation when the relaxation is dominated by dipole-dipole or scalar coupling interactions, but the scalar interaction is rarely important and is assumed to be negligible. In the homonuclear case where γI = γS, if W2 is the dominant relaxation pathway, then saturating S increases the intensity of the I resonance and the NOE is positive, whereas if W0 is the dominant relaxation pathway, saturating S decreases the intensity of the I resonance and the NOE is negative. Molecular motion: Whether the NOE is positive or negative depends sensitively on the degree of rotational molecular motion. Molecular motion: The three dipolar relaxation pathways contribute to differing extents to the spin-lattice relaxation, depending on a number of factors. A key one is that the balance between W2, W1 and W0 depends crucially on the molecular rotational correlation time, τc, the time it takes a molecule to rotate one radian. NMR theory shows that the transition probabilities are related to τc and the Larmor precession frequencies, ω, by the relations W1I ∝ [3τc/(1 + ωI²τc²)]·(1/r⁶), W0 ∝ [2τc/(1 + (ωI − ωS)²τc²)]·(1/r⁶), and W2 ∝ [12τc/(1 + (ωI + ωS)²τc²)]·(1/r⁶), where r is the distance separating the two spin-1⁄2 nuclei. Molecular motion: For relaxation to occur, the frequency of molecular tumbling must match the Larmor frequency of the nucleus. In mobile solvents, molecular tumbling is much faster than ω; this is the so-called extreme-narrowing limit, where ωτc ≪ 1. Under these conditions the double-quantum relaxation W2 is more effective than W1 or W0, since its numerical coefficient (12) is the largest. When W2 is the dominant relaxation process, a positive NOE results. Molecular motion: In the extreme-narrowing limit these expressions reduce to W1I ∝ γI²γS²·(3τc/r⁶), W0 ∝ γI²γS²·(2τc/r⁶), and W2 ∝ γI²γS²·(12τc/r⁶), so that ηIS(max) = (γS/γI)·(12 − 2)/(2·3 + 2 + 12) = (γS/γI)·(1/2). This expression shows that for the homonuclear case where I = S, most notably for 1H NMR, the maximum NOE that can be observed is 1/2, irrespective of the proximity of the nuclei. In the heteronuclear case where I ≠ S, the maximum NOE is given by (1/2)(γS/γI), which, when observing heteronuclei under conditions of broadband proton decoupling, can produce major sensitivity improvements. The most important example in organic chemistry is observation of 13C while decoupling 1H, which also saturates the 1H resonances. The value of γS/γI is close to 4, which gives a maximum NOE enhancement of 200%, yielding resonances 3 times as strong as they would be without the NOE. Molecular motion: In many cases, carbon atoms have an attached proton, which causes the relaxation to be dominated by dipolar relaxation and the NOE to be near maximum. For non-protonated carbon atoms the NOE enhancement is small, while for carbons that relax by mechanisms other than dipole-dipole interactions the NOE enhancement can be significantly reduced. This is one motivation for using deuterated solvents (e.g. CDCl3) in 13C NMR.
Since deuterium relaxes by the quadrupolar mechanism, there are no cross-relaxation pathways and the NOE is non-existent. Another important case is 15N, whose magnetogyric ratio is negative. Often 15N resonances are reduced, or the NOE may actually null out the resonance, when 1H nuclei are decoupled. It is usually advantageous to take such spectra with pulse techniques that involve polarization transfer from protons to the 15N to minimize the negative NOE. Structure elucidation: While the relationship of the steady-state NOE to internuclear distance is complex, depending on relaxation rates and molecular motion, in many instances for small, rapidly tumbling molecules in the extreme-narrowing limit, the semiquantitative nature of positive NOEs is useful for many structural applications, often in combination with the measurement of J-coupling constants. For example, NOE enhancements can be used to confirm NMR resonance assignments, distinguish between structural isomers, identify aromatic ring substitution patterns and aliphatic substituent configurations, and determine conformational preferences. Nevertheless, the inter-atomic distances derived from the observed NOE can often help to confirm the three-dimensional structure of a molecule. In this application, the NOE differs from the application of J-coupling in that the NOE occurs through space, not through chemical bonds. Thus, atoms that are in close proximity to each other can give an NOE, whereas spin coupling is observed only when the atoms are connected by 2–3 chemical bonds. However, the relation ηIS(max) = 1⁄2 obscures how the NOE is related to internuclear distances, because it applies only for the idealized case where the relaxation is 100% dominated by dipole-dipole interactions between the two nuclei I and S. In practice, the value of ρI contains contributions from other competing mechanisms, which serve only to reduce the influence of W0 and W2 by increasing W1. Sometimes, for example, relaxation due to electron-nuclear interactions with dissolved oxygen or paramagnetic metal ion impurities in the solvent can prohibit the observation of weak NOE enhancements. The observed NOE in the presence of other relaxation mechanisms is given by ηIS = σIS/(ρI + ρ*), where ρ* is the additional contribution to the total relaxation rate from relaxation mechanisms not involving cross-relaxation. Using the same idealized two-spin model for dipolar relaxation in the extreme narrowing limit, ρI ∝ τc/r⁶, and it is easy to show that ηIS ∝ (τc/ρ*)·(1/r⁶). Thus, the two-spin steady-state NOE depends on internuclear distance only when there is a contribution from external relaxation. Bell and Saunders showed that, under strict assumptions, ρ*/τc is nearly constant for similar molecules in the extreme narrowing limit. Therefore, taking ratios of steady-state NOE values can give relative values for the internuclear distance r. While the steady-state experiment is useful in many cases, it can only provide information on relative internuclear distances. On the other hand, the initial rate at which the NOE grows is proportional to rIS⁻⁶, which provides other, more sophisticated alternatives for obtaining structural information via transient experiments such as 2D-NOESY. Two-dimensional NMR: The motivations for using two-dimensional NMR for measuring NOEs are similar to those for other 2-D methods.
Resolution is improved by spreading the affected resonances over two dimensions; therefore more peaks are resolved, larger molecules can be observed and more NOEs can be observed in a single measurement. More importantly, when the molecular motion is in the intermediate or slow motional regimes, where the NOE is either zero or negative, the steady-state NOE experiment fails to give results that can be related to internuclear distances. Nuclear Overhauser effect spectroscopy (NOESY) is a 2D NMR spectroscopic method used to identify nuclear spins undergoing cross-relaxation and to measure their cross-relaxation rates. Since 1H dipole-dipole couplings provide the primary means of cross-relaxation for organic molecules in solution, spins undergoing cross-relaxation are those close to one another in space. Therefore, the cross peaks of a NOESY spectrum indicate which protons are close to each other in space. In this respect, the NOESY experiment differs from the COSY experiment, which relies on J-coupling to provide spin-spin correlation and whose cross peaks indicate which 1H's are close to which other 1H's through the chemical bonds of the molecule. Two-dimensional NMR: The basic NOESY sequence consists of three 90° pulses. The first pulse creates transverse spin magnetization. The spins precess during the evolution time t1, which is incremented during the course of the 2D experiment. The second pulse produces longitudinal magnetization equal to the transverse magnetization component orthogonal to the pulse direction. Thus, the idea is to produce an initial condition for the mixing period τm. During the NOE mixing time, magnetization transfer via cross-relaxation can take place. For the basic NOESY experiment, τm is kept constant throughout the 2D experiment, but chosen for the optimum cross-relaxation rate and build-up of the NOE. The third pulse creates transverse magnetization from the remaining longitudinal magnetization. Data acquisition begins immediately following the third pulse and the transverse magnetization is observed as a function of the pulse delay time t2. The NOESY spectrum is generated by a 2D Fourier transform with respect to t1 and t2. A series of experiments is carried out with increasing mixing times, and the increase in NOE enhancement is followed. The closest protons show the most rapid build-up rates of the NOE. Two-dimensional NMR: Inter-proton distances can be determined from unambiguously assigned, well-resolved, high signal-to-noise NOESY spectra by analysis of cross peak intensities. These may be obtained by volume integration and can be converted into estimates of interproton distances. The distance between two atoms i and j can be calculated from the cross-peak volume Vij and a scaling constant c as rij = (c/Vij)^(1/6), where c can be determined from measurements of known fixed distances. A range of distances can be reported based on known distances and volumes in the spectrum, which give a mean c and a standard deviation cSD, a measurement of multiple regions in the NOESY spectrum showing no peaks, i.e. noise Verr, and a measurement error mv. The parameter x is set so that all known distances are within the error bounds. The lower bound on the NOE-derived distance is then r_lower = ((c − x·cSD)/((1/mv)·Vij + Verr))^(1/6) and the upper bound is r_upper = ((c + x·cSD)/((1/mv)·Vij − Verr))^(1/6). Such fixed distances depend on the system studied.
For example, locked nucleic acids have many atoms in the sugar whose distances vary very little, which allows estimation of the glycosidic torsion angles and has allowed NMR to benchmark LNA molecular dynamics predictions. RNAs, however, have sugars that are much more conformationally flexible and require wider estimates of the lower and upper bounds. In protein structural characterization, NOEs are used to create constraints on intramolecular distances. In this method, each proton pair is considered in isolation and NOESY cross peak intensities are compared with a reference cross peak from a proton pair of fixed distance, such as a geminal methylene proton pair or aromatic ring protons. This simple approach is reasonably insensitive to the effects of spin diffusion or non-uniform correlation times and can usually lead to definition of the global fold of the protein, provided a sufficiently large number of NOEs have been identified. NOESY cross peaks can be classified as strong, medium or weak and can be translated into upper distance restraints of around 2.5, 3.5 and 5.0 Å, respectively. Such constraints can then be used in molecular mechanics optimizations to provide a picture of the solution-state conformation of the protein. Full structure determination relies on a variety of NMR experiments and optimization methods utilizing both chemical shift and NOESY constraints. Some experimental methods: Some examples of one- and two-dimensional NMR experimental techniques exploiting the NOE include: NOESY, nuclear Overhauser effect spectroscopy; HOESY, heteronuclear Overhauser effect spectroscopy; ROESY, rotating-frame nuclear Overhauser effect spectroscopy; TRNOE, transferred nuclear Overhauser effect; and DPFGSE-NOE, the double pulsed field gradient spin echo NOE experiment. NOESY is used to determine the relative orientations of atoms in a molecule, for example a protein or other large biological molecule, producing a three-dimensional structure. HOESY measures NOE cross-correlation between atoms of different elements. ROESY involves spin-locking the magnetization to prevent it from going to zero, and is applied to molecules for which regular NOESY is not applicable. TRNOE measures the NOE between two different molecules interacting in the same solution, as in a ligand binding to a protein. Some experimental methods: The DPFGSE-NOE experiment is a transient experiment that allows for the suppression of strong signals and thus the detection of very small NOEs. Examples of nuclear Overhauser effect: The figure (top) displays how nuclear Overhauser effect spectroscopy can elucidate the structure of a switchable compound. In this example, the proton designated as {H} shows two different sets of NOEs depending on the isomerization state (cis or trans) of the switchable azo groups. In the trans state, proton {H} is far from the phenyl group, showing blue-coloured NOEs; in the cis state, proton {H} is held in the vicinity of the phenyl group, resulting in the emergence of new NOEs (shown in red). Examples of nuclear Overhauser effect: Another example (bottom) where the NOE is useful for assigning resonances and determining configuration is polysaccharides. For instance, complex glucans possess a multitude of overlapping signals, especially in a proton spectrum. Therefore, it is advantageous to utilize 2D NMR experiments, including NOESY, for the assignment of signals. See, for example, NOE of carbohydrates.
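Two of the quantitative relations above lend themselves to a quick numerical check. The following is a minimal Python sketch, not a full NMR analysis: the gyromagnetic ratios are approximate literature values, and the cross-peak volumes and reference distance are invented purely for illustration. It evaluates the maximum heteronuclear NOE, ηmax = γS/(2γI), and the NOESY distance calibration rij = (c/Vij)^(1/6).

```python
# Minimal sketch of two relations discussed above:
#   1) maximum steady-state heteronuclear NOE in the extreme-narrowing limit,
#      eta_max = gamma_S / (2 * gamma_I), for pure dipole-dipole relaxation;
#   2) NOESY distance calibration r_ij = (c / V_ij)**(1/6), with c fixed by a
#      reference proton pair of known distance.
# Gyromagnetic ratios are approximate values in units of 10^7 rad s^-1 T^-1;
# the cross-peak volumes below are invented for illustration only.

GAMMA = {"1H": 26.752, "13C": 6.728, "15N": -2.713}

def eta_max(observed: str, saturated: str = "1H") -> float:
    """Maximum NOE for nucleus `observed` while `saturated` is irradiated."""
    return GAMMA[saturated] / (2 * GAMMA[observed])

def calibrate_c(v_ref: float, r_ref: float) -> float:
    """Scaling constant c from a reference cross-peak volume and known distance."""
    return v_ref * r_ref**6

def noe_distance(v_ij: float, c: float) -> float:
    """Interproton distance estimate from a NOESY cross-peak volume."""
    return (c / v_ij) ** (1 / 6)

print(f"13C{{1H}} eta_max ~ {eta_max('13C'):+.2f}")  # ~ +1.99, the ~200% enhancement
print(f"15N{{1H}} eta_max ~ {eta_max('15N'):+.2f}")  # negative, hence reduced/nulled 15N signals

c = calibrate_c(v_ref=1.0, r_ref=1.78)               # e.g. a geminal CH2 pair near 1.78 Å
print(f"r ~ {noe_distance(v_ij=0.05, c=c):.2f} Å for a weaker cross peak")
```

The first two printed values reproduce the roughly 200% maximum 13C{1H} enhancement and the negative 15N{1H} NOE discussed above; the last line shows how a weaker cross peak maps to a longer interproton distance.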
**Decision-theoretic rough sets** Decision-theoretic rough sets: In the mathematical theory of decisions, decision-theoretic rough sets (DTRS) is a probabilistic extension of rough set classification. First created in 1990 by Dr. Yiyu Yao, the extension makes use of loss functions to derive the α and β region parameters. Like rough sets, the lower and upper approximations of a set are used. Definitions: The following contains the basic principles of decision-theoretic rough sets. Definitions: Conditional risk Using the Bayesian decision procedure, the decision-theoretic rough set (DTRS) approach allows for minimum-risk decision making based on observed evidence. Let A = {a1, …, am} be a finite set of m possible actions and let Ω = {w1, …, ws} be a finite set of s states. P(wj∣[x]) is the conditional probability of an object x being in state wj given the object description [x], and λ(ai∣wj) denotes the loss, or cost, for performing action ai when the state is wj. The expected loss (conditional risk) associated with taking action ai is given by: R(ai∣[x]) = Σj=1..s λ(ai∣wj)·P(wj∣[x]). Definitions: Object classification with the approximation operators can be fitted into the Bayesian decision framework. The set of actions is given by A = {aP, aN, aB}, where aP, aN, and aB represent the three actions of classifying an object into POS(A), NEG(A), and BND(A), respectively. To indicate whether an element is in A or not in A, the set of states is given by Ω = {A, Ac}. Let λ(a⋄∣A) denote the loss incurred by taking action a⋄ when an object belongs to A, and let λ(a⋄∣Ac) denote the loss incurred by taking the same action when the object belongs to Ac. Loss functions Let λPP denote the loss function for classifying an object in A into the POS region, λBP denote the loss function for classifying an object in A into the BND region, and let λNP denote the loss function for classifying an object in A into the NEG region. A loss function λ⋄N denotes the loss of classifying an object that does not belong to A into the region specified by ⋄. The expected losses R(a⋄∣[x]) associated with taking the individual actions can be expressed as: R(aP∣[x]) = λPP·P(A∣[x]) + λPN·P(Ac∣[x]), R(aN∣[x]) = λNP·P(A∣[x]) + λNN·P(Ac∣[x]), R(aB∣[x]) = λBP·P(A∣[x]) + λBN·P(Ac∣[x]), where λ⋄P = λ(a⋄∣A), λ⋄N = λ(a⋄∣Ac), and ⋄ = P, N, or B. Minimum-risk decision rules If we consider the loss functions λPP ≤ λBP < λNP and λNN ≤ λBN < λPN, the following decision rules are formulated (P, N, B): P: If P(A∣[x]) ≥ γ and P(A∣[x]) ≥ α, decide POS(A); N: If P(A∣[x]) ≤ β and P(A∣[x]) ≤ γ, decide NEG(A); B: If β ≤ P(A∣[x]) ≤ α, decide BND(A); where α = (λPN − λBN)/((λBP − λBN) − (λPP − λPN)), γ = (λPN − λNN)/((λNP − λNN) − (λPP − λPN)), β = (λBN − λNN)/((λNP − λNN) − (λBP − λBN)). Definitions: The α, β, and γ values define the three different regions, giving us an associated risk for classifying an object. When α > β, we get α > γ > β and can simplify (P, N, B) into (P1, N1, B1): P1: If P(A∣[x]) ≥ α, decide POS(A); N1: If P(A∣[x]) ≤ β, decide NEG(A); B1: If β < P(A∣[x]) < α, decide BND(A). When α = β = γ, we can simplify the rules (P–B) into (P2–B2), which divide the regions based solely on α: P2: If P(A∣[x]) > α, decide POS(A); N2: If P(A∣[x]) < α, decide NEG(A); B2: If P(A∣[x]) = α, decide BND(A). Data mining, feature selection, information retrieval, and classification are just some of the applications in which the DTRS approach has been successfully used.
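A minimal sketch of the three-way classification above is given below in Python. The threshold formulas are the ones stated in the text; the six loss values are invented purely for illustration (they satisfy λPP ≤ λBP < λNP and λNN ≤ λBN < λPN, so α > γ > β holds and the simplified rules P1, N1, B1 apply).

```python
# Decision-theoretic rough set (DTRS) thresholds and three-way decision rules.
# Loss values are illustrative only: correct classification costs 0, deferral
# to the boundary region is cheap, misclassification is expensive.

def dtrs_thresholds(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
    """alpha, beta, gamma derived from the six loss values (formulas from the text)."""
    alpha = (l_pn - l_bn) / ((l_bp - l_bn) - (l_pp - l_pn))
    gamma = (l_pn - l_nn) / ((l_np - l_nn) - (l_pp - l_pn))
    beta  = (l_bn - l_nn) / ((l_np - l_nn) - (l_bp - l_bn))
    return alpha, beta, gamma

def classify(p_a_given_x, alpha, beta):
    """Rules (P1, N1, B1) for the case alpha > beta."""
    if p_a_given_x >= alpha:
        return "POS"
    if p_a_given_x <= beta:
        return "NEG"
    return "BND"

alpha, beta, gamma = dtrs_thresholds(l_pp=0, l_bp=2, l_np=6,
                                     l_nn=0, l_bn=1, l_pn=4)
print(f"alpha={alpha:.2f}, beta={beta:.2f}, gamma={gamma:.2f}")  # 0.60, 0.20, 0.40
for p in (0.9, 0.5, 0.1):
    print(p, "->", classify(p, alpha, beta))   # POS, BND, NEG
```

With these losses the thresholds come out as α = 0.6, γ = 0.4, β = 0.2, so an object with P(A∣[x]) = 0.9 is accepted into POS(A), 0.5 is deferred to BND(A), and 0.1 is rejected into NEG(A).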
**Sedimentary organic matter** Sedimentary organic matter: Sedimentary organic matter includes the organic carbon component of sediments and sedimentary rocks. Organic matter is usually a component of sedimentary material even when it is present in low abundance (usually below 1%). Petroleum (or oil) and natural gas are particular examples of sedimentary organic matter. Coals and bituminous shales are examples of sedimentary rocks rich in sedimentary organic matter. Origin: Organic matter is essentially synthesized from mineral carbon (CO2) by autotrophic organisms living at the boundaries between the geosphere, the atmosphere and the biosphere.
**Motor tic, Obsessions and compulsions, Vocal tic Evaluation Survey** Motor tic, Obsessions and compulsions, Vocal tic Evaluation Survey: The Motor tic, Obsessions and compulsions, Vocal tic Evaluation Survey (MOVES) is a psychological measure used to screen for tics and other behaviors. It measures "motor tics, vocal tics, obsessions, compulsions, and associated symptoms including echolalia, echopraxia, coprolalia, and copropraxia".
**Dog fashion** Dog fashion: Dog fashion is a popular style or practice, especially in canine clothing and accessories. Dog fashion is a distinctive trend of the style in which people dress their canine companions. This trend dates back to the Egyptian predynastic period and has since expanded due to increased consumer capitalism. Other animals such as cats may also wear fashion. History: There is evidence from ancient Egypt that people were using decorative collars to adorn their dogs. One collar, dating to around 1440 BC, was discovered in the tomb of the ancient Egyptian nobleman Maiherpri. It depicts hunting scenes embossed into leather. The dog's name, Tantanuit, is visible on the collar. The dog was a favorite of the nobleman, who wished to bring it into the afterlife. There are also silver, gold, silk and velvet decorative dog collars from the time of King Henry VIII which were used to represent how many battles the dog had survived. History: During the Renaissance, dogs were seen as objects of possession, and thus collars were fitted with padlocks to which only the owner of the dog had the key. Nobility and the upper class have been decorating their canine companions for centuries, and there is photographic evidence from 1900 of people dressing their dogs in human costumes. History: Today, it is common for people to dress up their dogs, particularly small dogs, and canine clothing has become a global phenomenon. In 2011, there was a dog fashion show in New York called Last Bark at Bryant Park. Dog fashion and style have been greatly influenced by the advent of the internet. New professions have arisen, driven by consumer capitalism, such as the pet style expert. Clothing: Dog clothes are available in various price ranges, from inexpensive to high-end designer styles. Typically toy and small breed dogs, such as Chihuahuas and Yorkshire Terriers, are dressed in dog clothes, although even large breeds like Golden Retrievers can wear clothes too. It is more common to dress small dogs because they are easier to dress and they often suffer from cold temperatures. Dog clothes are made to be either functional or for show. Functional dog clothes offer protection from the elements and allergens. Dog clothes that are purely for show are used as costumes for holidays and special occasions, such as Halloween or weddings. Clothing: Dog coats are most commonly used for protection against the rain and cold and to provide extra warmth. Dog coats are also used as fashion accessories. Dog sweaters are both functional and fashion accessories. They provide extra warmth for dogs that are hairless or suffer from the cold, and come in an array of patterns and styles, such as cable-knitted sweaters or hooded sweatshirts with embellishments. Clothing: Dog shirts can be used to help keep a dog clean and as fashion accessories. They can also be used to help protect a dog that is excessively scratching itself due to allergies, or to prevent hairless dogs from getting sunburned. They are available in a t-shirt style with short or long sleeves, as well as a sleeveless tank top for use during warmer weather. Clothing: Dog dresses are purely fashion accessories and are mainly worn by toy and small breed dogs. Dog tuxedos also exist. Some people may involve dogs in formal wear at their weddings, whether in photos, at parties, or at the ceremony itself. There has been at least one documented case of a dog serving as the owner's best man. Dog hats are a fun alternative to clothes if dogs get too warm wearing garments.
Dog hats can be worn for warmth or to be fashionable. One hat trend has holes for pointy-eared dogs, which allows owners to dress up their dogs without disguising the breed. Fashion shows: There is a clear distinction between pet shows and pet fashion shows. The pet fashion show's emphasis is on the clothes, not on the dog. In countries all over the world, pet fashion shows are becoming increasingly popular. During these shows, well-groomed pets strut down the runway wearing high-fashion clothes. Some well-known designers such as Alexander Wang have designed outfits for dogs. Anthony Rubio, a New York designer, was the first to hold a canine fashion show at New York Fashion Week. Designer fashions: Dog coats, collars, sweaters, shirts, dresses, and booties are some of the items people purchase to adorn their dogs with style. Some major international fashion retailers such as Ralph Lauren have launched their own canine clothing lines. Louis Vuitton has a line of leashes and collars for dogs with their trademark LV pattern. Swarovski also has a line of collars and leashes for dogs with crystals. Statistics: The canine fashion industry has become a multi-billion-dollar business that was set to top £30 billion in 2015. In the US, expenditure on pet supplies, including clothing, has been steadily increasing for the last twenty years, with estimated spending of $13.72 billion in 2014. As of 2014, an estimated 26.7 million US households owned a dog and an estimated 83.3 million dogs were kept as pets in the United States. The dog fashion industry is projected to grow continually. In 2021, the pet apparel market was valued at $5.7 billion, with dog owners accounting for the majority of sales. Sociological perspective: Humans typically have deep attachments to their dogs because dogs are adept at fulfilling emotionally supportive roles in people's lives, which results in high levels of attachment. Dog owners who are single, childless, newly married, empty nesters, divorced, or in a second marriage tend to anthropomorphize their pets more often. Dogs can be emotional substitutes for family members such as children and spouses, and they contribute to the moral maintenance of people who live alone. Dogs have become increasingly important and are treated as unique individuals. In a world where people are increasingly disconnected from their families, they rely more on their pets, specifically dogs, to fill emotional voids. Pets have become a relatively easy and lovable replacement for children or a strong community. Humans have been dependent on animals as sources of companionship and artistic inspiration since the Paleolithic period, and animals have continued to mold the shape of human culture and psychology ever since. Sociological perspective: Consumer capitalism viewpoint Increasing affluence means that more people can spend resources on items that are not necessary, like clothes and costumes. People express themselves through fashion. As Georg Simmel says, "Style is the manifestation of our inner feelings and through style we demonstrate our taste, values, and status. We project all of those qualities onto our dogs when we dress them." The appearance of a dog reflects the status of the owner: dressing a dog is more about the owner than the animal. When owners dress up their dogs, they establish a unique bond that places the relationship at an even more personal and intimate level. Media: Dogs are often shown in movies dressed up in clothing and costumes. This reflects the contemporary trend of dog fashion.
In films such as Oliver & Company, one of the characters is a female dog, Georgette, who indulges in luxury fashion and wears leopard-print scarves, big hats and jeweled collars. Media: In the Disney film Beverly Hills Chihuahua, a family of chihuahuas portrays small dogs wearing fashionable clothes, including sunglasses, hats, shirts, dresses, jeweled collars, and bandanas. In one scene, a wedding takes place between two dogs in which the female dog wears a bridal gown and the male dog wears a tuxedo. In a popular trend in various kids' films, dogs are shown with the capability of human speech and are dressed up in human clothing while remaining quadrupedal. An example is Air Bud (1997), which centers on an escaped circus dog, adopted as a stray, that is taken to a basketball court and discovered to have incredible talent in the sport. Throughout the film, the dog is shown in various human outfits and attire.