**SignPlot** SignPlot: SignPlot is a software application for the design of UK traffic signs and their supports and foundations, developed and sold by Buchanan Computing. The application is unique in that it does not require the use of computer-aided design (CAD) software. History: Initially created in the early 1980s as a university project by Simon Morgan, the program has undergone several updates and is now in its third major version, SignPlot 3. The program is designed to automate almost all the layout and spacing rules of the Traffic Signs Regulations and General Directions, Traffic Signs Manual Chapter 7, and drawings issued by the Welsh Assembly Government for bilingual signing. History: In 2023, graphic designer Margaret Calvert held an exhibition titled Roadworks in Margate, featuring a number of reinterpreted traffic sign designs created with the aid of SignPlot. Calvert had previously worked with Simon Morgan on the Museum of Modern Art's Automania exhibition in New York during 2021, where they used SignPlot software to restore and recreate Calvert's original 1960s designs for the exhibition pieces.
**Nicomorphine** Nicomorphine: Nicomorphine (Vilan, Subellan, Gevilan, MorZet) is the 3,6-dinicotinate ester of morphine. It is a strong opioid agonist analgesic, two to three times as potent as morphine, with a side effect profile similar to that of dihydromorphine, morphine, and diamorphine. Nicomorphine was first synthesized in 1904 and was patented as Vilan by Lannacher Heilmittel G.m.b.H. of Austria in 1957. Medical Use: The hydrochloride salt is available as ampoules of 10 mg/ml solution for injection, 5 mg tablets, and 10 mg suppositories. It is possible that other manufacturers distribute 10 mg tablets and other concentrations of injectable nicomorphine in ampoules and multidose vials. It is used, particularly in the German-speaking countries, elsewhere in Central Europe, and some other countries of Europe and the former USSR, for post-operative, cancer, chronic non-malignant, and other neuropathic pain. It is commonly used in patient-controlled analgesia (PCA) units. The usual starting dose is 5–10 mg given every 3–5 hours. Medical Use: Side effects Nicomorphine's side effects are similar to those of other opioids and include itching, nausea, and respiratory depression. It is considered by doctors to be one of the better analgesics for the comprehensive mitigation of suffering, as opposed to purely clouding the noxious pain stimulus, in the alleviation of chronic pain conditions. Chemistry: The method for synthesis of nicomorphine, which involves treating anhydrous morphine base with nicotinic anhydride at 130 °C, was published by Pongratz and Zirm in Monatshefte für Chemie in 1957, simultaneously with the two analogues nicocodeine and nicodicodeine, in an article about amides and esters of various organic acids. Legality: Nicomorphine is regulated in much the same fashion as morphine worldwide but is a Schedule I controlled substance in the United States and was never introduced there. Nicomorphine may appear on rare occasions on the European black market and through other channels for unsupervised opioid users. It can be produced by end users, as part of a mixture of salts and derivatives of morphine, by treating morphine with nicotinic anhydride or related chemicals in an analogue of the heroin homebake process. CAS number of the hydrochloride: 35055-78-8. US DEA ACSCN: 9312. Free base conversion ratio of the hydrochloride salt: 0.93. Pharmacology: Pharmacodynamics The 3,6-diesters of morphine are drugs with more rapid and complete central nervous system penetration due to increased lipid solubility and other structural considerations. The prototype for this subgroup of semi-synthetic opiates is heroin, and the group also includes dipropanoylmorphine, diacetyldihydromorphine, disalicylmorphine, and others. Whilst this produces an enhanced "bang" when the drug is administered intravenously, it cannot be distinguished from morphine via other routes, although the different side effect profile, including a lower incidence of nausea, is very apparent. Pharmacology: Pharmacokinetics Nicomorphine is rapidly metabolized when administered by the I.V. route, having a half-life of 3 minutes, into morphine and 6-nicotinoylmorphine, the secondary active metabolite.
Half-lives of the metabolites were 3–15 minutes for the nicotinoyl metabolite and 135–190 minutes for morphine. Via the epidural route, a much slower release from the epidural space occurs: nicomorphine remains detectable for 1.5 hours or so and has a longer effect of 18.2 ± 10.1 hours, due to the slower release of the active metabolites, morphine and 6-nicotinoylmorphine. Half-lives for those compounds are as listed for the IV route. Pharmacokinetics via the rectal route differ, with altered metabolism. Eight minutes after administration, morphine appeared rapidly and had a half-life of 1.48 ± 0.48 h. This was in turn metabolized to morphine-3- and morphine-6-glucuronides after another 12 minutes, which had similar half-lives to one another, at about 2.8 h. No 6-mononicotinoylmorphine was found, and the bioavailability of morphine and its active metabolites was 88%. No remaining nicomorphine was found in urine.
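As a back-of-the-envelope illustration of these figures (assuming simple first-order elimination, which the reported half-lives imply), the fraction of the parent drug remaining after time $t$ is

$$C(t) = C_0 e^{-kt}, \qquad k = \frac{\ln 2}{t_{1/2}}$$

so with $t_{1/2} = 3$ minutes for intravenous nicomorphine, only $(1/2)^{15/3} = 1/32 \approx 3\%$ of the parent compound remains after 15 minutes, consistent with its rapid conversion to morphine and 6-nicotinoylmorphine.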
**International Symposium on Personal, Indoor and Mobile Radio Communications** International Symposium on Personal, Indoor and Mobile Radio Communications: The Institute of Electrical and Electronics Engineers (IEEE) is a 501(c)(3) professional association for electronics engineering, electrical engineering, and other related disciplines, with its corporate office in New York City and its operations center in Piscataway, New Jersey. The IEEE was formed in 1963 from the amalgamation of the American Institute of Electrical Engineers and the Institute of Radio Engineers. History: Origins The IEEE traces its founding to 1884 and the American Institute of Electrical Engineers. In 1912, the rival Institute of Radio Engineers was formed. Although the AIEE was initially larger, the IRE attracted more students and was larger by the mid-1950s. The AIEE and IRE merged in 1963. The IEEE is headquartered in New York City, but most business is done at the IEEE Operations Center in Piscataway, New Jersey, opened in 1975. Growth The Australian Section of the IEEE existed between 1972 and 1985, after which it split into state- and territory-based sections. As of 2021, IEEE has over 400,000 members in 160 countries, with the U.S.-based membership no longer constituting a majority. Publications: IEEE claims to produce over 30% of the world's literature in the electrical, electronics, and computer engineering fields, publishing approximately 200 peer-reviewed journals and magazines. IEEE publishes more than 1,200 conference proceedings every year. Publications: The published content in these journals, as well as the content from several hundred annual conferences sponsored by the IEEE, is available in the IEEE Electronic Library (IEL) through the IEEE Xplore platform, for subscription-based access and individual publication purchases. In addition to journals and conference proceedings, the IEEE also publishes tutorials and standards that are produced by its standardization committees. The organization also has its own IEEE paper format. Technical bodies: Technical societies Various technical areas are addressed by IEEE's 39 societies, each one focused on a certain knowledge area. They provide specialized publications, conferences, business networking, and sometimes other services. Other bodies: IEEE Global History Network In September 2008, the IEEE History Committee founded the IEEE Global History Network, which now redirects to the Engineering and Technology History Wiki. Other bodies: IEEE Foundation The IEEE Foundation is a charitable foundation established in 1973 to support and promote technology education, innovation, and excellence. It is incorporated separately from the IEEE, although it has a close relationship to it. Members of the Board of Directors of the foundation are required to be active members of IEEE, and one third of them must be current or former members of the IEEE Board of Directors. Other bodies: Initially, the role of the IEEE Foundation was to accept and administer donations for the IEEE Awards program, but donations increased beyond what was necessary for this purpose, and the scope was broadened. In addition to soliciting and administering unrestricted funds, the foundation also administers donor-designated funds supporting particular educational, humanitarian, historical preservation, and peer recognition programs of the IEEE. As of the end of 2014, the foundation's total assets were nearly $45 million, split equally between unrestricted and donor-designated funds.
Controversies: Huawei ban In May 2019, IEEE restricted Huawei employees from peer reviewing papers or handling papers as editors due to the "severe legal implications" of U.S. government sanctions against Huawei. As members of its standard-setting body, Huawei employees could continue to exercise their voting rights, attend standards development meetings, submit proposals, and comment in public discussions on new standards. The ban sparked outrage among Chinese scientists on social media, and some professors in China decided to cancel their memberships. On June 3, 2019, IEEE lifted restrictions on Huawei's editorial and peer review activities after receiving clearance from the United States government. Controversies: Position on the Russia-Ukraine conflict On February 26, 2022, the chair of the IEEE Ukraine Section, Ievgen Pichkalov, publicly appealed to the IEEE members to "freeze [IEEE] activities and membership in Russia" and requested "public reaction and strict disapproval of Russia's aggression" from the IEEE and IEEE Region 8. On March 17, 2022, an article in the form of a Q&A interview with IEEE Russia (Siberia) senior member Roman Gorbunov, titled "A Russian Perspective on the War in Ukraine", was published in IEEE Spectrum to demonstrate "the plurality of views among IEEE members" and the "views that are at odds with international reporting on the war in Ukraine". On March 30, 2022, activist Anna Rohrbach created an open letter to the IEEE in an attempt to have them directly address the article, stating that the article used "common narratives in Russian propaganda" on the 2022 Russian invasion of Ukraine and requesting that IEEE Spectrum acknowledge "that they have unwittingly published a piece furthering misinformation and Russian propaganda." A few days later, on April 6, a note from the editors was added with an apology "for not providing adequate context at the time of publication", though the editors did not revise the original article.
**RNF20** RNF20: E3 ubiquitin-protein ligase BRE1A is an enzyme that in humans is encoded by the RNF20 gene. The protein encoded by this gene shares similarity with BRE1 of S. cerevisiae. Yeast BRE1 is a ubiquitin ligase required for the ubiquitination of histone H2B and the methylation of histone H3.
**Data processing** Data processing: Data processing is the collection and manipulation of digital data to produce meaningful information. Data processing is a form of information processing, which is the modification (processing) of information in any manner detectable by an observer. The term "Data Processing", or "DP", has also been used to refer to a department within an organization responsible for the operation of data processing programs. Data processing functions: Data processing may involve various processes, including: Validation – ensuring that supplied data is correct and relevant. Sorting – "arranging items in some sequence and/or in different sets." Summarization (statistical or automatic) – reducing detailed data to its main points. Aggregation – combining multiple pieces of data. Analysis – the "collection, organization, analysis, interpretation and presentation of data." Reporting – listing detail or summary data or computed information. Classification – separating data into various categories. History: The history of the United States Census Bureau illustrates the evolution of data processing from manual through electronic procedures. History: Manual data processing Although widespread use of the term data processing dates only from the 1950s, data processing functions have been performed manually for millennia. For example, bookkeeping involves functions such as posting transactions and producing reports like the balance sheet and the cash flow statement. Completely manual methods were augmented by the application of mechanical or electronic calculators. A person whose job was to perform calculations manually or using a calculator was called a "computer." The 1890 United States Census schedule was the first to gather data by individual rather than household. A number of questions could be answered by making a check in the appropriate box on the form. From 1850 to 1880 the Census Bureau employed "a system of tallying, which, by reason of the increasing number of combinations of classifications required, became increasingly complex. Only a limited number of combinations could be recorded in one tally, so it was necessary to handle the schedules 5 or 6 times, for as many independent tallies." "It took over 7 years to publish the results of the 1880 census" using manual processing methods. History: Automatic data processing The term automatic data processing was applied to operations performed by means of unit record equipment, such as Herman Hollerith's application of punched card equipment for the 1890 United States Census. "Using Hollerith's punchcard equipment, the Census Office was able to complete tabulating most of the 1890 census data in 2 to 3 years, compared with 7 to 8 years for the 1880 census. It is estimated that using Hollerith's system saved some $5 million in processing costs" in 1890 dollars, even though there were twice as many questions as in 1880. History: Electronic data processing Computerized data processing, or electronic data processing, represents a later development, with a computer used instead of several independent pieces of equipment. The Census Bureau first made limited use of electronic computers for the 1950 United States Census, using a UNIVAC I system delivered in 1952. Other developments The term data processing has mostly been subsumed by the more general term information technology (IT). The older term "data processing" is suggestive of older technologies.
For example, in 1996 the Data Processing Management Association (DPMA) changed its name to the Association of Information Technology Professionals. Nevertheless, the terms are approximately synonymous. Applications: Commercial data processing Commercial data processing involves a large volume of input data, relatively few computational operations, and a large volume of output. For example, an insurance company needs to keep records on tens or hundreds of thousands of policies, print and mail bills, and receive and post payments. Data analysis In science and engineering, the terms data processing and information systems are considered too broad, and the term data processing is typically used for the initial stage, followed by data analysis in the second stage of the overall data handling. Data analysis uses specialized algorithms and statistical calculations that are less often observed in a typical general business environment. For data analysis, software suites like SPSS or SAS, or their free counterparts such as DAP, gretl or PSPP, are often used. Systems: A data processing system is a combination of machines, people, and processes that, for a set of inputs, produces a defined set of outputs. The inputs and outputs are interpreted as data, facts, information, etc., depending on the interpreter's relation to the system. A term commonly used synonymously with data processing system is information system. With regard particularly to electronic data processing, the corresponding concept is referred to as an electronic data processing system. Examples Simple example A very simple example of a data processing system is the process of maintaining a check register. Transactions (checks and deposits) are recorded as they occur, and the transactions are summarized to determine a current balance. Each month, the data recorded in the register is reconciled with a (hopefully identical) list of transactions processed by the bank. A more sophisticated record keeping system might further identify the transactions, for example deposits by source or checks by type, such as charitable contributions. This information might be used to obtain information like the total of all contributions for the year. The important thing about this example is that it is a system in which all transactions are recorded consistently, and the same method of bank reconciliation is used each time. Real-world example This is a flowchart of a data processing system combining manual and computerized processing to handle accounts receivable, billing, and general ledger.
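To make the check-register example concrete, here is a minimal sketch in Python (the transaction names and amounts are invented for illustration): transactions are recorded consistently, summarized into a balance, and reconciled against the bank's list.

```python
# Minimal check-register sketch: record, summarize, reconcile.

def balance(transactions):
    """Sum the signed amounts: deposits positive, checks negative."""
    return sum(amount for _, amount in transactions)

def reconcile(register, bank_statement):
    """Return transactions present in one list but not the other."""
    return set(register) ^ set(bank_statement)

register = [("deposit: salary", 1500.00),
            ("check #101: rent", -900.00),
            ("check #102: charity", -50.00)]
statement = [("deposit: salary", 1500.00),
             ("check #101: rent", -900.00)]

print(balance(register))               # current balance: 550.0
print(reconcile(register, statement))  # check #102 has not cleared yet
```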
**SQL:2003** SQL:2003: SQL:2003 is the fifth revision of the SQL database query language. The standard consists of 9 parts, which are described in detail in the main SQL article. It was updated by SQL:2006. New features: The SQL:2003 standard makes minor modifications to all parts of SQL:1999 (also known as SQL3) and officially introduces a few new features, such as: XML-related features (SQL/XML); window functions; the sequence generator, which allows standardized sequences; two new column types, auto-generated values and identity columns; the new MERGE statement; extensions to the CREATE TABLE statement to allow "CREATE TABLE AS" and "CREATE TABLE LIKE"; and removal of the poorly implemented "BIT" and "BIT VARYING" data types. OLAP capabilities (initially added in SQL:1999) were extended with a window function. Documentation availability: The SQL standard is not freely available but may be purchased from ISO or ANSI. A late draft is available as a zip archive from Whitemarsh Information Systems Corporation. The zip archive contains a number of PDF files that define the parts of the SQL:2003 specification.
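As an illustration of one of the features listed above, the following sketch runs a SQL:2003-style window function through Python's sqlite3 module (SQLite 3.25+ implements window functions; the table and data here are invented):

```python
# Illustrative only: a SQL:2003 window function (RANK() OVER ...) executed
# via SQLite. Unlike a GROUP BY aggregate, the window function ranks each
# row within its partition without collapsing the rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (dept TEXT, name TEXT, salary INTEGER);
    INSERT INTO emp VALUES
        ('eng', 'ada', 120), ('eng', 'bob', 100), ('ops', 'cai', 90);
""")

query = """
    SELECT dept, name, salary,
           RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS dept_rank
    FROM emp
    ORDER BY dept, dept_rank
"""
for row in conn.execute(query):
    print(row)
# ('eng', 'ada', 120, 1)
# ('eng', 'bob', 100, 2)
# ('ops', 'cai', 90, 1)
```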
**MS-DOS 7** MS-DOS 7: MS-DOS 7 is a real mode operating system for IBM PC compatibles. Unlike earlier versions of MS-DOS, it was not released separately by Microsoft, but included in the Windows 9x family of operating systems. Windows 95 RTM reports to be MS-DOS 7.0, while Windows 95 OSR 2.x and Windows 98 report as 7.1. Windows 9x runs on top of DOS much as Windows 3.1x did, and while according to Microsoft the role of MS-DOS was reduced to that of a bootloader and 16-bit legacy device driver layer, it has been stated that there is almost no difference between the relationship of Windows 9x to its included MS-DOS 7.x and that of Windows 3.x to MS-DOS 6.x. The real-mode MS-DOS 7.x operating system is contained in the IO.SYS file. New features: As the first version in the series, MS-DOS 7.0 added long filename (LFN) awareness; its DIR command, for example, will show long filenames when an LFN driver such as DOSLFN is loaded (earlier versions of MS-DOS would not show long filenames even with such a driver). It also supports larger extended memory (up to 4 GB) via its HIMEM.SYS driver. Various smaller improvements were also introduced, such as enhanced DOS commands, more efficient use of UMB memory (COMMAND.COM and part of the DOS kernel are loaded high automatically), and the ability to use environment variables directly in the DOS command line. New features: MS-DOS 7.1 added FAT32 support (up to 2 TB per volume), while MS-DOS 7.0 and earlier versions of MS-DOS only supported FAT12 and FAT16. Logical block addressing (LBA) is also supported in MS-DOS 7.x for accessing large hard disks, unlike earlier versions which only supported cylinder-head-sector (CHS)-based addressing. Year 2000 support was added to the DIR command via the new /4 option. New features: MS-DOS 7.x added support for running the graphical interface of Windows 9x, which cannot be run on older MS-DOS releases. Even though the VER command usually shows the Windows version, the MS-DOS version is also officially mentioned in other places. For example, if one attempts to run Windows 95 OSR2's or Windows 98's VMM32.VXD file (renamed to VMM32.EXE) directly from an earlier version of MS-DOS, the following message will be immediately displayed: "Cannot run Windows with the installed version of MS-DOS. Upgrade MS-DOS to version 7.1 or higher." In the case of Windows 95 RTM, the version number 7.0 is displayed in place of 7.1. More information: A major difference from earlier versions of MS-DOS is the usage of the MSDOS.SYS file. In version 7 this is not a binary file, but a plain settings file. The older boot style, where Windows is not automatically started and the system boots into a DOS command shell, can be kept by setting BootGUI=0 in the MSDOS.SYS file. Otherwise, Windows from Windows 95 onward will automatically start up on boot; in reality this is only an automatic call to the command WIN.COM, the Windows starting program. Windows 95 and 98 are both dependent on MS-DOS to boot the 32-bit kernel and to run legacy 16-bit MS-DOS device drivers, although MS-DOS 7 is arguably more "hidden" than earlier versions of MS-DOS. This is also true for Windows Me, but Me prevents users from booting MS-DOS without booting the 32-bit Windows kernel. The paths for a Windows directory and boot directory (plausible, but not actually required) are also set in this new version of the MSDOS.SYS file.
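For illustration, the settings file just described might contain entries along these lines (a minimal sketch: the section and key names are those used by Windows 95/98, while the values are examples; BootGUI=0 selects booting to the DOS prompt):

```
[Paths]
WinDir=C:\WINDOWS
WinBootDir=C:\WINDOWS
HostWinBootDrv=C

[Options]
BootGUI=0
BootMulti=1
```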
IO.SYS (although different at the binary level) remains the initial executive startup file that the BIOS boot routines fire up, if located correctly, and the COMMAND.COM file still implements the command prompt. The typical DOS setting files CONFIG.SYS and AUTOEXEC.BAT essentially retained their functions from earlier versions of MS-DOS (although memory allocation was no longer needed). More information: Although only included in Windows releases (the last official standalone release of MS-DOS was version 6.22), MS-DOS 7.x can fairly easily be extracted from Windows 95/98 and used alone on other computers, just like the earlier versions. MS-DOS 7.x actually works fine on many modern (as of 2016) motherboards (at least with PS/2 keyboards), in sharp contrast to Windows 95/98. It has to be installed on a FAT partition, and in the case of MS-DOS 7.0 the partition must be located at "the top" of the hard drive and formatted as FAT12 or FAT16. Another difference is that MS-DOS 7.x requires an 80386 or higher processor; it fails to boot on 80286-class or lower x86 hardware. More information: For manual installation, MS-DOS 7.x can be installed through the SYS command (executing the SYS.COM file), for example from a folder on a RAM drive created by a bootable disc (a sketch follows at the end of this entry). Correct versions of IO.SYS (especially) must exist in the same folder as SYS.COM, together with MSDOS.SYS and COMMAND.COM (and optionally DRVSPACE.BIN, CONFIG.SYS, and AUTOEXEC.BAT). All other files can be copied thereafter. (In Windows 95/98 they are found in either the root folder or the C:\WINDOWS\COMMAND folder.) Notes: A.^ There was an "MS-DOS 7.1" made by China DOS Union, which is the same as the Windows 9x version, but bundled as a standalone OS with a multitude of utilities. Source: WinWorld.
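A sketch of the manual installation mentioned above, assuming the system files were copied to a hypothetical R:\DOS7 folder on a RAM drive (the paths are illustrative; SYS accepts an optional source path before the target drive):

```
R:\>SYS R:\DOS7 C:
R:\>COPY R:\DOS7\CONFIG.SYS C:\
R:\>COPY R:\DOS7\AUTOEXEC.BAT C:\
```

The remaining files can then be copied over as noted, and the machine rebooted from drive C:.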
**Metahexamide** Metahexamide: Metahexamide (INN) is an anti-diabetic drug from the group of sulfonylureas. It is long-acting and belongs to the first-generation cyclohexyl-containing sulfonylureas. It was first described in 1959.
**ARITH Symposium on Computer Arithmetic** ARITH Symposium on Computer Arithmetic: The IEEE International Symposium on Computer Arithmetic (ARITH) is a conference in the area of computer arithmetic. The symposium was established in 1969, initially as a triennial event, then as a biennial event, and, finally, from 2015, as an annual symposium. ARITH topics span from theoretical aspects and algorithms for operations, to hardware implementations of arithmetic units and applications of computer arithmetic. ARITH Symposium on Computer Arithmetic: ARITH symposia are sponsored by the IEEE Computer Society. They have been described as one of "the most prestigious forums for computer arithmetic" by researchers at the National Institute of Standards and Technology, as the main conference forum for new research publications in computer arithmetic by Parhami (2003), and as a forum for interacting with the "international community of arithmeticians" by participants Peter Kornerup and David W. Matula.
**Dynamization** Dynamization: In computer science, dynamization is the process of transforming a static data structure into a dynamic one. Although static data structures may provide very good functionality and fast queries, their utility is limited because of their inability to grow/shrink quickly, making them inapplicable to dynamic problems, where the input data changes. Dynamization techniques provide uniform ways of creating dynamic data structures. Decomposable search problems: We define the problem P of searching for matches to a predicate M in the set S as P(M, S). The problem P is decomposable if the set S can be decomposed into subsets S_i and there exists an operation + of result unification such that P(M, S) = P(M, S_0) + P(M, S_1) + ... + P(M, S_n). Decomposition: Decomposition is a term used in computer science to break static data structures into smaller units of unequal size. The basic principle is the idea that any decimal number can be translated into a representation in any other base. For more details about the topic see Decomposition (computer science). For simplicity, the binary system will be used in this article, but any other base (as well as other possibilities such as Fibonacci numbers) can also be utilized. Decomposition: Using the binary system, a set of n elements is broken down into subsets of 2^i · n_i elements, where n_i is the i-th bit of n in binary. This means that if the i-th bit of n is 0, the corresponding subset contains no elements. Each of the subsets has the same property as the original static data structure. Operations performed on the new dynamic data structure may involve traversing the O(log n) subsets formed by the decomposition. As a result, this adds an O(log n) factor to the cost of the static data structure's operations, but allows insert/delete operations to be supported. Kurt Mehlhorn proved several equations for the time complexity of operations on data structures dynamized according to this idea. Some of these equalities are listed: if P_S(n) is the time to build the static data structure, Q_S(n) is the time to query the static data structure, Q_D(n) is the time to query the dynamic data structure formed by decomposition, and Ī is the amortized insertion time, then Q_D(n) = O(Q_S(n) · log n) and Ī = O((P_S(n)/n) · log n). Decomposition: If Q_S(n) is at least polynomial, then Q_D(n) = O(Q_S(n)).
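The binary decomposition above can be made concrete with a short sketch. The following Python code (illustrative, not taken from Mehlhorn's work) dynamizes membership search, a decomposable problem whose unification operation + is logical "or": bucket i holds a sorted static structure of exactly 2^i elements or is empty, mirroring the bits of n, and an insertion merges equal-sized structures exactly like a carry in binary addition. Deletions, which need extra machinery, are omitted.

```python
# Binary-decomposition dynamization of membership search (insert + query).
from bisect import bisect_left

class DynamicMembership:
    def __init__(self):
        self.buckets = []  # buckets[i]: sorted list of 2**i items, or None

    def insert(self, x):
        carry = [x]
        i = 0
        # Like binary addition: while bucket i is occupied, merge and carry.
        while i < len(self.buckets) and self.buckets[i] is not None:
            carry = sorted(carry + self.buckets[i])  # rebuild static structure
            self.buckets[i] = None
            i += 1
        if i == len(self.buckets):
            self.buckets.append(None)
        self.buckets[i] = carry

    def contains(self, x):
        # Query every non-empty bucket; unify the answers with "or".
        for b in self.buckets:
            if b is not None:
                j = bisect_left(b, x)
                if j < len(b) and b[j] == x:
                    return True
        return False

d = DynamicMembership()
for v in [5, 3, 8, 1, 9]:
    d.insert(v)
print(d.contains(8), d.contains(4))  # True False
```

Each element takes part in at most about log2(n) rebuilds, which is where the amortized Ī = O((P_S(n)/n) · log n) insertion bound quoted above comes from.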
**Cobaltocene** Cobaltocene: Cobaltocene, also known as bis(cyclopentadienyl)cobalt(II) or even "bis Cp cobalt", is an organocobalt compound with the formula Co(C5H5)2. It is a dark purple solid that sublimes readily just above room temperature. Cobaltocene was discovered shortly after ferrocene, the first metallocene. Due to the ease with which it reacts with oxygen, the compound must be handled and stored using air-free techniques. Synthesis: Cobaltocene is prepared by the reaction of sodium cyclopentadienide (NaC5H5) with anhydrous cobalt(II) chloride in THF solution. Sodium chloride is cogenerated, and the organometallic product is usually purified by vacuum sublimation. Structure and bonding: In Co(C5H5)2, the Co centre is "sandwiched" between two cyclopentadienyl (Cp) rings. The Co–C bond lengths are about 2.1 Å, slightly longer than the Fe–C bond in ferrocene. Co(C5H5)2 belongs to a group of organometallic compounds called metallocenes or sandwich compounds. Cobaltocene has 19 valence electrons, one more than usually found in organotransition metal complexes such as its very stable relative ferrocene (see 18-electron rule). This additional electron occupies an orbital that is antibonding with respect to the Co–C bonds. Consequently, the Co–C distances are slightly longer than the Fe–C bonds in ferrocene. Many chemical reactions of Co(C5H5)2 are characterized by its tendency to lose this "extra" electron, yielding the 18-electron cation known as cobaltocenium; for example, oxidation with iodine converts the 19 e− neutral complex into the 18 e− cation: 2 Co(C5H5)2 + I2 → 2 [Co(C5H5)2]+ + 2 I−. The otherwise close relative of cobaltocene, rhodocene, does not exist as a monomer but spontaneously dimerizes by formation of a C–C bond between Cp rings. Reactions: Redox properties Co(C5H5)2 is a common one-electron reducing agent in the laboratory. In fact, the reversibility of the Co(C5H5)2 redox couple is so well-behaved that Co(C5H5)2 may be used in cyclic voltammetry as an internal standard. Its permethylated analogue decamethylcobaltocene (Co(C5Me5)2) is an especially powerful reducing agent, due to inductive donation of electron density from the 10 methyl groups, making the cobalt even more prone to give up its "extra" electron. These two compounds are rare examples of reductants that dissolve in non-polar organic solvents. The reduction potentials of these compounds, referenced to the ferrocene-ferrocenium couple, show that the decamethyl compounds are around 600 mV more reducing than the parent metallocenes. This substituent effect is, however, overshadowed by the influence of the metal: changing from Fe to Co renders the reduction more favorable by over 1.3 volts. Reactions: Carbonylation Treatment of Co(C5H5)2 with carbon monoxide gives the cobalt(I) derivative Co(C5H5)(CO)2, with concomitant loss of one Cp ligand. This conversion is conducted near 130 °C with 500 psi of CO.
**Common plantar digital nerves of lateral plantar nerve** Common plantar digital nerves of lateral plantar nerve: The common plantar digital nerves of lateral plantar nerve are nerves of the foot. The common digital nerve communicates with the third common digital branch of the medial plantar nerve and divides into two proper digital nerves which supply the adjoining sides of the fourth and fifth toes.
**Strobilurin** Strobilurin: Strobilurins are a group of natural products and their synthetic analogs. A number of strobilurins are used in agriculture as fungicides. They are part of the larger group of QoIs (quinone outside inhibitors), which act by inhibiting the respiratory chain at the level of Complex III. The first parent natural products, strobilurins A and B, were extracted from the fungus Strobilurus tenacellus. Commercial strobilurin fungicides were developed through optimization of photostability and activity. Strobilurins represented a major development in fungus-based fungicides. The first was released in 1996, and there are now ten major strobilurin fungicides on the market, which account for 23–25% of global fungicide sales. Examples of commercialized strobilurin derivatives are azoxystrobin, kresoxim-methyl, picoxystrobin, fluoxastrobin, oryzastrobin, dimoxystrobin, pyraclostrobin and trifloxystrobin. Strobilurins are mostly contact fungicides with a long half-life, as they are absorbed into the cuticle and not transported any further. They have a suppressive effect on other fungi, reducing competition for nutrients; they inhibit electron transfer in mitochondria, disrupting metabolism and preventing growth of the target fungi. Natural strobilurins: Strobilurin A Strobilurin A (also known as mucidin) is produced by Oudemansiella mucida, Strobilurus tenacellus, Bolinea lutea, and others. When first isolated it was incorrectly assigned as the (E,E,E) geometric isomer but was later identified by total synthesis as being the (E,Z,E) isomer, as shown. 9-Methoxystrobilurin A 9-Methoxystrobilurin A is produced by Favolaschia spp. Strobilurin B Strobilurin B is produced by S. tenacellus. Strobilurin C Strobilurin C is produced by X. longipes and X. melanotricha. Strobilurin D and G Strobilurin D is produced by Cyphellopsis anomala. Its structure was originally incorrectly assigned and is now considered to be identical to that of strobilurin G, produced by B. lutea. A related material, hydroxystrobilurin D, with an additional hydroxyl group attached to the methyl of the main chain, is produced by Mycena sanguinolenta. Strobilurin E Strobilurin E is produced by Crepidotus fulvotomentosus and Favolaschia spp. Strobilurin F2 Strobilurin F2 is produced by B. lutea. Strobilurin H Strobilurin H is produced by B. lutea. The natural product with a phenolic hydroxy group in place of the aromatic methoxy group of strobilurin H is called strobilurin F1 and is found in C. anomala and Agaricus spp. Strobilurin X Strobilurin X is produced by O. mucida. Oudemansins The oudemansins are closely related to the strobilurins and are also quinone outside inhibitors. Natural strobilurins: Oudemansin A, with R1 = R2 = H, was first described in 1979, after being isolated from mycelial fermentations of the basidiomycete fungus Oudemansiella mucida. Later it was found in cultures of the basidiomycete fungi Mycena polygramma and Xerula melanotricha. The latter fungus also produces oudemansin B, with R1 = MeO and R2 = Cl. Oudemansin X, with R1 = H and R2 = MeO, was isolated from Oudemansiella radicata. Synthetic strobilurins: The discovery of the strobilurin class of fungicides led to the development of a group of commercial fungicides used in agriculture, including the derivatives listed above.
**LibreOffice Writer** LibreOffice Writer: LibreOffice Writer is the free and open-source word processor and desktop publishing component of the LibreOffice software package and is a fork of OpenOffice.org Writer. Writer is a word processor similar to Microsoft Word and Corel's WordPerfect, with many similar features and file format compatibility. LibreOffice Writer is released under the Mozilla Public License v2.0. As with the entire LibreOffice suite, Writer can be used across a variety of platforms, including Linux, FreeBSD, macOS and Microsoft Windows. There are community builds for many other platforms. Ecosystem partner Collabora uses LibreOffice upstream code and provides apps for Android, iOS, iPadOS and ChromeOS. LibreOffice Online is an online office suite which includes the applications Writer, Calc and Impress and provides an upstream for projects such as the commercial Collabora Online. Some features: Writer is capable of opening and saving a number of formats, including OpenDocument (ODT is its default format), Microsoft Word's DOC, DOCX, RTF and XHTML. A spelling and grammar checker (Hunspell) Built-in drawing tools Built-in form building tools Built-in calculation functions Built-in equation editor Export in PDF format, generation of hybrid PDF (a standard PDF with attached source ODF file) and creation of fillable PDF forms The ability to import and edit PDF files. Some features: Ability to edit HTML and XHTML files visually, without using code, with WYSIWYG support Export in HTML, XHTML, XML formats Export in EPUB ebook format Contents, index, bibliography Document signing, password and public-key (GPG) encryption Change tracking during revisions, document comparison (view changes between two files) Database integration, including a bibliography database MailMerge Scriptable and remotely controllable via the UNO API OpenType stylistic sets and character variants of fonts are not selectable from the menus, but can be specified manually in the font window. For example, fontname:ss06&cv03 will set the font to stylistic set 6 and choose character variant 3. This is based on the same syntax used for Graphite font features.
**Handheld game console** Handheld game console: A handheld game console, or simply handheld console, is a small, portable self-contained video game console with a built-in screen, game controls and speakers. Handheld game consoles are smaller than home video game consoles and contain the console, screen, speakers, and controls in one unit, allowing people to carry them and play them at any time or place. In 1976, Mattel introduced the first handheld electronic game with the release of Auto Race. Later, several companies—including Coleco and Milton Bradley—made their own single-game, lightweight table-top or handheld electronic game devices. The first commercially successful handheld console was Merlin, from 1978, which sold more than 5 million units. The first handheld game console with interchangeable cartridges was the Milton Bradley Microvision in 1979. Nintendo is credited with popularizing the handheld console concept with the release of the Game Boy in 1989 and continues to dominate the handheld console market. The first internet-enabled handheld console, and the first with a touchscreen, was the Game.com, released by Tiger Electronics in 1997. The Nintendo DS, released in 2004, introduced touchscreen controls and wireless online gaming to a wider audience, becoming the best-selling handheld console with over 150 million units sold worldwide. History: Timeline This table describes handheld game consoles, across video game generations, with over 1 million sales. History: Origins The origins of handheld game consoles are found in handheld and tabletop electronic game devices of the 1970s and early 1980s. These electronic devices are capable of playing only a single game, they fit in the palm of the hand or on a tabletop, and they may make use of a variety of video displays such as LED, VFD, or LCD. In 1978, handheld electronic games were described by Popular Electronics magazine as "nonvideo electronic games" and "non-TV games", as distinct from devices that required use of a television screen. Handheld electronic games, in turn, find their origins in the synthesis of previous handheld and tabletop electro-mechanical devices, such as Waco's Electronic Tic-Tac-Toe (1972) and Cragstan's Periscope-Firing Range (1951), and the emerging optoelectronic-display-driven calculator market of the early 1970s. This synthesis happened in 1976, when "Mattel began work on a line of calculator-sized sports games that became the world's first handheld electronic games. The project began when Michael Katz, Mattel's new product category marketing director, told the engineers in the electronics group to design a game the size of a calculator, using LED (light-emitting diode) technology." "Our big success was something that I conceptualized—the first handheld game. I asked the design group to see if they could come up with a game that was electronic that was the same size as a calculator." History: —Michael Katz, former marketing director, Mattel Toys. The result was the 1976 release of Auto Race, followed by Football in 1977; the two games were so successful that, according to Katz, "these simple electronic handheld games turned into a '$400 million category.'" Mattel would later win the honor of being recognized by the industry for innovation in handheld game device displays. Soon, other manufacturers including Coleco, Parker Brothers, Milton Bradley, Entex, and Bandai began following up with their own tabletop and handheld electronic games.
History: In 1979 the LCD-based Microvision, designed by Smith Engineering and distributed by Milton Bradley, became the first handheld game console and the first to use interchangeable game cartridges. The Microvision game Cosmic Hunter (1981) also introduced the concept of a directional pad on handheld gaming devices, operated by using the thumb to manipulate the on-screen character in any of four directions. In 1979, Gunpei Yokoi, traveling on a bullet train, saw a bored businessman playing with an LCD calculator by pressing the buttons. Yokoi then thought of an idea for a watch that doubled as a miniature game machine for killing time. Starting in 1980, Nintendo began to release a series of electronic games designed by Yokoi called the Game & Watch games. Taking advantage of the technology used in the credit-card-sized calculators that had appeared on the market, Yokoi designed the series of LCD-based games to include a digital time display in the corner of the screen. For later, more complicated Game & Watch games, Yokoi invented a cross-shaped directional pad, or "D-pad", for control of on-screen characters. Yokoi also included his directional pad on the NES controllers, and the cross-shaped thumb controller soon became standard on game console controllers and has been ubiquitous across the video game industry ever since. When Yokoi began designing Nintendo's first handheld game console, he came up with a device that married the elements of his Game & Watch devices and the Famicom console, including both items' D-pad controller. The result was the Nintendo Game Boy. History: In 1982, the Bandai LCD Solarpower was the first solar-powered gaming device. Some of its games, such as the horror-themed game Terror House, featured two LCD panels, one stacked on the other, for an early 3D effect. In 1983, Takara Tomy's Tomytronic 3D simulated 3D by having two LCD panels that were lit by external light through a window on top of the device, making it the first dedicated home video 3D hardware. History: Beginnings The late 1980s and early 1990s saw the beginnings of the modern-day handheld game console industry, after the demise of the Microvision. As backlit LCD game consoles with color graphics consume a lot of power, they were not battery-friendly like the non-backlit original Game Boy, whose monochrome graphics allowed longer battery life. By this point, rechargeable battery technology had not yet matured, and so the more advanced game consoles of the time, such as the Sega Game Gear and Atari Lynx, did not have nearly as much success as the Game Boy. History: Even though third-party rechargeable batteries were available for the battery-hungry alternatives to the Game Boy, these batteries employed a nickel-cadmium process and had to be completely discharged before being recharged to ensure maximum efficiency; lead-acid batteries could be used with automobile circuit limiters (cigarette lighter plug devices), but such batteries had mediocre portability. The later NiMH batteries, which do not share this requirement for maximum efficiency, were not released until the late 1990s, years after the Game Gear, Atari Lynx, and original Game Boy had been discontinued. During the time when technologically superior handhelds had strict technical limitations, batteries had a very low mAh rating, since batteries with high power density were not yet available. History: Modern game systems such as the Nintendo DS and PlayStation Portable have rechargeable lithium-ion batteries with proprietary shapes.
Other seventh-generation consoles, such as the GP2X, use standard alkaline batteries. Because the mAh rating of alkaline batteries has increased since the 1990s, the power needed for handhelds like the GP2X may be supplied by relatively few batteries. History: Game Boy Nintendo released the Game Boy on April 21, 1989 (September 1990 for the UK). The design team, headed by Gunpei Yokoi, had also been responsible for the Game & Watch system, as well as the Nintendo Entertainment System games Metroid and Kid Icarus. The Game Boy came under scrutiny from Nintendo president Hiroshi Yamauchi, who said that the monochrome screen was too small and the processing power inadequate. The design team had felt that low initial cost and battery economy were more important concerns, and when compared to the Microvision, the Game Boy was a huge leap forward. History: Yokoi recognized that the Game Boy needed a killer app—at least one game that would define the console and persuade customers to buy it. In June 1988, Minoru Arakawa, then-CEO of Nintendo of America, saw a demonstration of the game Tetris at a trade show. Nintendo purchased the rights for the game and packaged it with the Game Boy system as a launch title. It was almost an immediate hit. By the end of the year more than a million units were sold in the US. As of March 31, 2005, the Game Boy and Game Boy Color combined had sold over 118 million units worldwide. History: Atari Lynx In 1987, Epyx created the Handy Game, a device that would become the Atari Lynx in 1989. It is the first color handheld console ever made, as well as the first with a backlit screen. It also features networking support with up to 17 other players, and advanced hardware that allows the zooming and scaling of sprites. The Lynx can also be turned upside down to accommodate left-handed players. However, all these features came at a very high price point, which drove consumers to seek cheaper alternatives. The Lynx is also very unwieldy, consumes batteries very quickly, and lacked the third-party support enjoyed by its competitors. Due to its high price, short battery life, production shortages, a dearth of compelling games, and Nintendo's aggressive marketing campaign, and despite a redesign in 1991, the Lynx became a commercial failure. Despite this, companies like Telegames helped to keep the system alive long past its commercial relevance, and when new owner Hasbro released the rights to develop for the system into the public domain, independent developers like Songbird managed to release new commercial games for the system every year until 2004's Winter Games. History: TurboExpress The TurboExpress is a portable version of the TurboGrafx-16, released in 1990 for $249.99. Its Japanese equivalent is the PC Engine GT. History: It is the most advanced handheld of its time and can play all the TurboGrafx-16's games (which are on small, credit-card-sized media called HuCards). It has a 66 mm (2.6 in.) screen, the same as the original Game Boy, but in a much higher resolution, and can display 64 sprites at once, 16 per scanline, in 512 colors, although the hardware can only handle 481 simultaneous colors. It has 8 kilobytes of RAM. The TurboExpress runs the HuC6280 CPU at 1.79 or 7.16 MHz. History: The optional "TurboVision" TV tuner includes RCA audio/video input, allowing users to use the TurboExpress as a video monitor. The "TurboLink" allowed two-player play. Falcon, a flight simulator, included a "head-to-head" dogfight mode that can only be accessed via TurboLink.
However, very few TG-16 games offered co-op play modes especially designed with the TurboExpress in mind. Bitcorp Gamate The Bitcorp Gamate is one of the first handheld game systems created in response to the Nintendo Game Boy. It was released in Asia in 1990 and distributed worldwide by 1991. History: Like the Sega Game Gear, it was horizontal in orientation; like the Game Boy, it required 4 AA batteries. Unlike many later Game Boy clones, its internal components were professionally assembled (no "glop-top" chips). Unfortunately, the system's fatal flaw was its screen. Even by the standards of the day, its screen was rather difficult to use, suffering from ghosting problems similar to those that were common complaints with first-generation Game Boys. Likely because of this, sales were quite poor, and Bitcorp closed by 1992. However, new games continued to be published for the Asian market, possibly as late as 1994. The total number of games released for the system remains unknown. History: Gamate games were designed for stereo sound, but the console is only equipped with a mono speaker. History: Sega Game Gear The Game Gear, produced by Sega, is the third color handheld console, after the Lynx and the TurboExpress. Released in Japan in 1990 and in North America and Europe in 1991, it is based on the Master System, which gave Sega the ability to quickly create Game Gear games from its large library of games for the Master System. While never reaching the level of success enjoyed by Nintendo, the Game Gear proved to be a fairly durable competitor, lasting longer than any other Game Boy rival. History: While the Game Gear is most frequently seen in black or navy blue, it was also released in a variety of additional colors: red, light blue, yellow, clear, and violet. All of these variations were released in small quantities and frequently only in the Asian market. History: Following Sega's success with the Game Gear, the company began development of a successor during the early 1990s, which was intended to feature a touchscreen interface, many years before the Nintendo DS. However, such technology was very expensive at the time, and the handheld itself was estimated to have cost around $289 were it to be released. Sega eventually chose to shelve the idea and instead release the Genesis Nomad, a handheld version of the Genesis, as the successor. History: Watara Supervision The Watara Supervision was released in 1992 in an attempt to compete with the Nintendo Game Boy. The first model was designed very much like a Game Boy, but it was grey in color and had a slightly larger screen. The second model was made with a hinge across the center and can be bent slightly to provide greater comfort for the user. While the system did enjoy a modest degree of success, it never impacted the sales of Nintendo or Sega. The Supervision was redesigned a final time as "The Magnum". Released in limited quantities, it was roughly equivalent to the Game Boy Pocket. It was available in three colors: yellow, green and grey. Watara designed many of the games itself, but did receive some third-party support, most notably from Sachen. History: A TV adapter, available in both PAL and NTSC formats, could display the Supervision's black-and-white palette in 4 colors, similar in some regards to the Super Game Boy from Nintendo. Hartung Game Master The Hartung Game Master is an obscure handheld released at an unknown point in the early 1990s.
Its graphics fidelity was much lower than that of most of its contemporaries, displaying just 64×64 pixels. It was available in black, white, and purple, and was frequently rebranded by its distributors, such as Delplay, Videojet and Systema. The exact number of games released is not known, but is likely around 20. The system most frequently turns up in Europe and Australia. Late 1990s By this time, the lack of significant development in Nintendo's product line began allowing more advanced systems, such as the Neo Geo Pocket Color and the WonderSwan Color, to be developed. History: Sega Nomad The Nomad was released in October 1995 in North America only. The release came six years into the market span of the Genesis, with an existing library of more than 500 Genesis games. According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear's replacement; he believed that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add-ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price. History: Game Boy Pocket The Game Boy Pocket is a redesigned version of the original Game Boy with the same features. It was released in 1996. Notably, this variation is smaller and lighter. It comes in eight different colors: red, yellow, green, black, clear, silver, blue, and pink. It has space for two AAA batteries, which provide approximately 10 hours of game play. The screen was changed to a true black-and-white display, rather than the "pea soup" monochromatic display of the original Game Boy. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it did notably improve visibility and pixel response time (mostly eliminating ghosting). History: The first model of the Game Boy Pocket did not have an LED to show battery levels, but the feature was added due to public demand. The Game Boy Pocket was not a new software platform and played the same software as the original Game Boy model. History: Game.com The Game.com (pronounced in TV commercials as "game com", not "game dot com", and not capitalized in marketing material) is a handheld game console released by Tiger Electronics in September 1997. It featured many new ideas for handheld consoles and was aimed at an older target audience, sporting PDA-style features and functions such as a touch screen and stylus. However, Tiger hoped it would also challenge Nintendo's Game Boy and gain a following among younger gamers too. Unlike other handheld game consoles, the first Game.com consoles included two slots for game cartridges (something that would not happen again until the Tapwave Zodiac and the DS and DS Lite) and could be connected to a 14.4 kbit/s modem. Later models had only a single cartridge slot. History: Game Boy Color The Game Boy Color (also referred to as GBC or CGB) is Nintendo's successor to the Game Boy and was released on October 21, 1998, in Japan and in November of the same year in the United States. It features a color screen and is slightly bigger than the Game Boy Pocket. The processor is twice as fast as the Game Boy's and has twice as much memory.
It also had an infrared communications port for wireless linking, which did not appear in later versions of the Game Boy, such as the Game Boy Advance. History: The Game Boy Color was a response to pressure from game developers for a new system, as they felt that the Game Boy, even in its latest incarnation, the Game Boy Pocket, was insufficient. The resulting product was backward compatible, a first for a handheld console system, and leveraged the large library of games and large installed base of the predecessor system. This became a major feature of the Game Boy line, since it allowed each new launch to begin with a significantly larger library than any of its competitors. As of March 31, 2005, the Game Boy and Game Boy Color combined had sold 118.69 million units worldwide. History: The console is capable of displaying up to 56 different colors simultaneously on screen from its palette of 32,768, and can add basic four-color shading to games that had been developed for the original Game Boy. It can also give the sprites and backgrounds separate colors, for a total of more than four colors. History: Neo Geo Pocket Color The Neo Geo Pocket Color (or NGPC) was released in 1999 in Japan, and later that year in the United States and Europe. It is a 16-bit color handheld game console designed by SNK, the maker of the Neo Geo home console and arcade machine. It came after SNK's original Neo Geo Pocket monochrome handheld, which debuted in 1998 in Japan. History: In 2000, following SNK's purchase by Japanese pachinko manufacturer Aruze, the Neo Geo Pocket Color was dropped from both the US and European markets, purportedly due to commercial failure. History: The system seemed well on its way to being a success in the U.S. It was more successful than any Game Boy competitor since Sega's Game Gear, but was hurt by several factors, such as SNK's infamous lack of communication with third-party developers and anticipation of the Game Boy Advance. The decision, as a cost-cutting move, to ship U.S. games in cardboard boxes rather than the hard plastic cases that Japanese and European releases were shipped in may have also hurt US sales. History: WonderSwan Color The WonderSwan Color is a handheld game console designed by Bandai. It was released on December 9, 2000, in Japan. Although the WonderSwan Color was slightly larger and heavier (by 7 mm and 2 g) than the original WonderSwan, the color version featured 512 KB of RAM and a larger color LCD screen. In addition, the WonderSwan Color is compatible with the original WonderSwan library of games. History: Prior to the WonderSwan's release, Nintendo had a virtual monopoly in the Japanese video game handheld market. After the release of the WonderSwan Color, Bandai took approximately 8% of the market share in Japan, partly due to its low price of 6,800 yen (approximately US$65). Another reason for the WonderSwan's success in Japan was the fact that Bandai managed to get a deal with Square to port over the original Famicom Final Fantasy games with improved graphics and controls. However, with the popularity of the Game Boy Advance and the reconciliation between Square and Nintendo, the WonderSwan Color and its successor, the SwanCrystal, quickly lost their competitive advantage. History: Early 2000s The 2000s saw a major leap in innovation, particularly in the second half with the release of the DS and PSP.
Game Boy Advance In 2001, Nintendo released the Game Boy Advance (GBA or AGB), which added two shoulder buttons, a larger screen, and more computing power than the Game Boy Color. History: The design was revised two years later when the Game Boy Advance SP (GBA SP), a more compact version, was released. The SP features a "clamshell" design (folding open and closed, like a laptop computer), as well as a frontlit color display and a rechargeable battery. Despite the smaller form factor, the screen remained the same size as that of the original. In 2005, the Game Boy Micro was released. This revision sacrifices screen size and backwards compatibility with previous Game Boys for a dramatic reduction in total size and a brighter backlit screen. A new SP model with a backlit screen was released in some regions around the same time. History: Along with the GameCube, the GBA also introduced the concept of "connectivity": using a handheld system as a console controller. A handful of games use this feature, most notably Animal Crossing, Pac-Man Vs., Final Fantasy Crystal Chronicles, The Legend of Zelda: Four Swords Adventures, The Legend of Zelda: The Wind Waker, Metroid Prime, and Sonic Adventure 2: Battle. As of December 31, 2007, the GBA, GBA SP, and Game Boy Micro combined had sold 80.72 million units worldwide. History: Game Park 32 The original GP32 was released in 2001 by the South Korean company Game Park, a few months after the launch of the Game Boy Advance. It featured a 32-bit 133 MHz CPU, an MP3 and DivX player, and an e-book reader. SmartMedia cards were used for storage, and could hold up to 128 MB of content downloaded from a PC through a USB cable. The GP32 was redesigned in 2003: a front-lit screen was added and the new version was called the GP32 FLU (Front Light Unit). In summer 2004, another redesign, the GP32 BLU, added a backlit screen. This version of the handheld was planned for release outside South Korea; in Europe it was released, for example, in Spain, with VirginPlay as the distributor. While not a commercial success on a level with mainstream handhelds (only 30,000 units were sold), it ended up being used mainly as a platform for user-made applications and emulators of other systems, being popular with developers and more technically adept users. History: N-Gage Nokia released the N-Gage in 2003. It was designed as a combination MP3 player, cellphone, PDA, radio, and gaming device. The system received much criticism alleging defects in its physical design and layout, including its vertically oriented screen and the requirement of removing the battery to change game cartridges. The best known of these was "sidetalking": placing the phone speaker and receiver on an edge of the device instead of one of the flat sides, causing the user to appear as if they are speaking into a taco. History: The N-Gage QD was later released to address the design flaws of the original. However, certain features available in the original N-Gage, including MP3 playback, FM radio reception, and USB connectivity, were removed. A second generation of N-Gage launched on April 3, 2008, in the form of a service for selected Nokia smartphones. History: Cybiko The Cybiko is a Russian hand-held computer introduced in May 2000 by David Yang's company and designed for teenage audiences, featuring its own two-way radio text messaging system. It has over 430 "official" freeware games and applications.
To support the text messaging system, it features a QWERTY keyboard used with a stylus. An MP3 player add-on was made for the unit, as well as a SmartMedia card reader. The company stopped manufacturing the units after two product versions and only a few years on the market. Cybikos can communicate with each other up to a maximum range of 300 metres (0.19 miles), and several Cybikos can chat with each other in a wireless chatroom. History: Cybiko Classic: There were two models of the Classic Cybiko. Visually, the only difference was that the original version had a power switch on the side, whilst the updated version used the "escape" key for power management. Internally, the differences between the two models were in the internal memory and the location of the firmware. Cybiko Xtreme: The Cybiko Xtreme was the second-generation Cybiko handheld. It featured various improvements over the original Cybiko, such as a faster processor, more RAM, more ROM, a new operating system, a new keyboard layout and case design, greater wireless range, a microphone, improved audio output, and smaller size. History: Tapwave Zodiac In 2003, Tapwave released the Zodiac. It was designed to be a PDA-handheld game console hybrid. It supported photos, movies, music, Internet, and documents. The Zodiac used a special version of Palm OS 5 (5.2T) that supported its gaming buttons and graphics chip. Two versions were available, the Zodiac 1 and 2, differing in memory and looks. The Zodiac line ended in July 2005 when Tapwave declared bankruptcy. History: Mid 2000s Nintendo DS The Nintendo DS was released in November 2004. Among its new features were the incorporation of two screens, a touchscreen, wireless connectivity, and a microphone port. As with the Game Boy Advance SP, the DS features a clamshell design, with the two screens aligned vertically on either side of the hinge. History: The DS's lower screen is touch sensitive, designed to be pressed with a stylus, a user's finger or a special "thumb pad" (a small plastic pad attached to the console's wrist strap, which can be affixed to the thumb to simulate an analog stick). More traditional controls include four face buttons, two shoulder buttons, a D-pad, and "Start" and "Select" buttons. The console also features online capabilities via the Nintendo Wi-Fi Connection and ad-hoc wireless networking for multiplayer games with up to sixteen players. It is backwards-compatible with all Game Boy Advance games, but like the Game Boy Micro, it is not compatible with games designed for the Game Boy or Game Boy Color. History: In January 2006, Nintendo revealed an updated version of the DS: the Nintendo DS Lite (released on March 2, 2006, in Japan) with an updated, smaller form factor (42% smaller and 21% lighter than the original Nintendo DS), a cleaner design, longer battery life, and brighter, higher-quality displays with adjustable brightness. It is also able to connect wirelessly with Nintendo's Wii console. History: On October 2, 2008, Nintendo announced the Nintendo DSi, with larger, 3.25-inch screens and two integrated cameras. It has an SD card storage slot in place of the Game Boy Advance slot, plus internal flash memory for storing downloaded games. It was released on November 1, 2008, in Japan, April 2, 2009 in Australia, April 3, 2009 in Europe, and April 5, 2009 in North America.
On October 29, 2009, Nintendo announced a larger version of the DSi, called the DSi XL, which was released on November 21, 2009 in Japan, March 5, 2010 in Europe, March 28, 2010 in North America, and April 15, 2010 in Australia. History: As of December 31, 2009, the Nintendo DS, Nintendo DS Lite, and Nintendo DSi combined had sold 125.13 million units worldwide. History: Game King The GameKing is a handheld game console released by the Chinese company TimeTop in 2004. The first model, while original in design, owes a large debt to Nintendo's Game Boy Advance. The second model, the GameKing 2, is believed to be inspired by Sony's PSP. This model was also upgraded with a backlit screen, albeit with a distracting background transparency (which can be removed by opening up the console). A color model, the GameKing 3, apparently exists, but was only made for a brief time and was difficult to purchase outside of Asia. Whether intentionally or not, the GameKing has the most primitive graphics of any handheld released since the Game Boy of 1989. As many of the games have an "old school" simplicity, the device has developed a small cult following. The GameKing's speaker is quite loud, and the cartridges' sophisticated looping soundtracks (sampled from other sources) are seemingly at odds with its primitive graphics. History: TimeTop made at least one additional device sometimes labeled as "GameKing" which, while it seems to possess more advanced graphics, is essentially an emulator that plays a handful of multi-carts (like the GB Station Light II). Outside of Asia (especially China), however, the GameKing remains relatively unheard of, owing to the enduring popularity of Japanese handhelds such as those manufactured by Nintendo and Sony. History: PlayStation Portable The PlayStation Portable (officially abbreviated PSP) is a handheld game console manufactured and marketed by Sony Computer Entertainment. Development of the console was first announced during E3 2003, and it was unveiled on May 11, 2004, at a Sony press conference before E3 2004. The system was released in Japan on December 12, 2004, in North America on March 24, 2005, and in the PAL region on September 1, 2005. History: The PlayStation Portable is the first handheld video game console to use an optical disc format, Universal Media Disc (UMD), for the distribution of its games. UMD Video discs with movies and television shows were also released. The PSP utilized the Sony/SanDisk Memory Stick Pro Duo format as its primary storage medium. Other distinguishing features of the console include its large viewing screen, multi-media capabilities, and connectivity with the PlayStation 3, other PSPs, and the Internet. History: Gizmondo Tiger's Gizmondo came out in the UK in March 2005 and was released in the U.S. in October 2005. It was designed to play music, movies, and games, with a camera for taking and storing photos, GPS functions, and Internet capabilities, and it includes phone features for sending text and multimedia messages. Email was promised at launch but was never delivered before the downfall of Gizmondo, and ultimately of Tiger Telematics, in early 2006. Users who obtained the unreleased second service pack hoped to find such functionality, but Service Pack B did not activate email. History: GP2X Series The GP2X is an open-source, Linux-based handheld video game console and media player created by GamePark Holdings of South Korea, designed for homebrew developers as well as commercial developers.
It is commonly used to run emulators of game consoles, home computers, and arcade systems, such as the Neo Geo, Genesis, Master System, Game Gear, Amstrad CPC, Commodore 64, Nintendo Entertainment System, TurboGrafx-16, MAME and others. History: A new version called the "F200" was released October 30, 2007, and features a touchscreen, among other changes. It was followed by the GP2X Wiz (2009) and the GP2X Caanoo (2010). History: Late 2000s Dingoo The Dingoo A-320 is a micro-sized gaming handheld that resembles the Game Boy Micro and is open to game development. It also supports music, radio, 8-bit and 16-bit emulators, and video playback, with its own interface much like the PSP's. There is also an onboard radio and recording program. It is currently available in two colors: white and black. Other similar products from the same manufacturer are the Dingoo A-330 (also known as Geimi), Dingoo A-360, Dingoo A-380 (available in pink, white and black) and the more recently released Dingoo A-320E. History: PSP Go The PSP Go is a version of the PlayStation Portable handheld game console manufactured by Sony. It was released on October 1, 2009, in American and European territories, and on November 1 in Japan. It was revealed prior to E3 2009 through Sony's Qore VOD service. Although its design is significantly different from other PSPs, it is not intended to replace the PSP 3000, which Sony continued to manufacture, sell, and support. On April 20, 2011, the manufacturer announced that the PSP Go would be discontinued so that Sony could concentrate on the PlayStation Vita. Sony later said that only the European and Japanese versions were being cut, and that the console would still be available in the US. History: Unlike previous PSP models, the PSP Go does not feature a UMD drive, but instead has 16 GB of internal flash memory to store games, video, pictures, and other media. This can be extended by up to 32 GB with the use of a Memory Stick Micro (M2) flash card. Also unlike previous PSP models, the PSP Go's rechargeable battery is not removable or replaceable by the user. The unit is 43% lighter and 56% smaller than the original PSP-1000, and 16% lighter and 35% smaller than the PSP-3000. It has a 3.8" 480 × 272 LCD (compared to the larger 4.3" 480 × 272 pixel LCD on previous PSP models). The screen slides up to reveal the main controls. The overall shape and sliding mechanism are similar to those of Sony's mylo COM-2 internet device. History: Pandora The Pandora is a handheld game console/UMPC/PDA hybrid designed to take advantage of existing open source software and to be a target for home-brew development. It runs a full distribution of Linux, and in functionality is like a small PC with gaming controls. It is developed by OpenPandora, which is made up of former distributors and community members of the GP32 and GP2X handhelds. History: OpenPandora began taking pre-orders for one batch of 4000 devices in November 2008 and, after manufacturing delays, began shipping to customers on May 21, 2010. History: FC-16 Go The FC-16 Go is a portable Super NES hardware clone manufactured by Yobo Gameware in 2009. It features a 3.5-inch display, two wireless controllers, and CRT cables that allow cartridges to be played on a television screen. Unlike other Super NES clone consoles, it has region tabs that only allow NTSC North American cartridges to be played. Later revisions feature stereo sound output, larger shoulder buttons, and a slightly re-arranged button, power, and A/V output layout.
History: 2010s Nintendo 3DS The Nintendo 3DS is the successor to Nintendo's DS handheld. The autostereoscopic device is able to project stereoscopic three-dimensional effects without requiring the active shutter or passive polarized glasses that most current 3D televisions need to display the 3D effect. The 3DS was released in Japan on February 26, 2011; in Europe on March 25, 2011; in North America on March 27, 2011; and in Australia on March 31, 2011. The system features backward compatibility with Nintendo DS series software, including Nintendo DSi software, except software that requires the Game Boy Advance slot. It also features an online service called the Nintendo eShop, launched on June 6, 2011, in North America and June 7, 2011, in Europe and Japan, which allows owners to download games, demos, applications and information on upcoming film and game releases. On November 24, 2011, a limited edition Legend of Zelda 25th Anniversary 3DS was released, containing a unique Cosmo Black unit decorated with gold Legend of Zelda related imagery, along with a copy of The Legend of Zelda: Ocarina of Time 3D. History: There are also other models, including the Nintendo 2DS and the New Nintendo 3DS, the latter with a larger (XL/LL) variant like the original Nintendo 3DS, as well as the New Nintendo 2DS XL. History: Xperia Play The Sony Ericsson Xperia PLAY is a handheld game console smartphone produced by Sony Ericsson under the Xperia smartphone brand. The device runs Android 2.3 Gingerbread and is the first to be part of the PlayStation Certified program, which means that it can play PlayStation Suite games. The device is a horizontally sliding phone with its original form resembling the Xperia X10, while the slider below resembles the slider of the PSP Go. The slider features a D-pad on the left side, a set of standard PlayStation face buttons (Circle, Cross, Square and Triangle) on the right, a long rectangular touchpad in the middle, start and select buttons on the bottom right corner, a menu button on the bottom left corner, and two shoulder buttons (L and R) on the back of the device. It is powered by a 1 GHz Qualcomm Snapdragon processor and a Qualcomm Adreno 205 GPU, and features a display measuring 4.0 inches (100 mm) (854 × 480), an 8-megapixel camera, 512 MB RAM, 8 GB internal storage, and a micro-USB connector. It supports microSD cards, versus the Memory Stick variants used in PSP consoles. The device was revealed officially for the first time in a Super Bowl ad on Sunday, February 6, 2011. On February 13, 2011, at Mobile World Congress (MWC) 2011, it was announced that the device would be shipping globally in March 2011, with a launch lineup of around 50 software titles. History: PlayStation Vita The PlayStation Vita is the successor to Sony's PlayStation Portable (PSP) handheld series. It was released in Japan on December 17, 2011, and in Europe, Australia, and North and South America on February 22, 2012. The handheld includes two analog sticks, a 5-inch (130 mm) OLED/LCD multi-touch capacitive touchscreen, and supports Bluetooth, Wi-Fi and optional 3G. Internally, the PS Vita features a four-core ARM Cortex-A9 MPCore processor and a four-core SGX543MP4+ graphics processing unit, as well as LiveArea software as its main user interface, which succeeds the XrossMediaBar. History: The device is fully backwards-compatible with PlayStation Portable games digitally released on the PlayStation Network via the PlayStation Store.
However, PSone Classics and PS2 titles were not compatible at the time of the primary public release in Japan. The Vita's dual analog sticks are supported in selected PSP games, and the graphics of PSP releases are up-scaled, with a smoothing filter to reduce pixelation. History: On September 20, 2018, Sony announced at Tokyo Game Show 2018 that the Vita would be discontinued in 2019, ending its hardware production. Production of Vita hardware officially ended on March 1, 2019. Razer Switchblade The Razer Switchblade was a prototype pocket-sized device, about the size of a Nintendo DSi XL, designed to run Windows 7. It featured a multi-touch LCD screen and an adaptive keyboard whose keys changed depending on the game being played, and it was also to feature full mouse support. It was first unveiled on January 5, 2011, at the Consumer Electronics Show (CES), where it won The Best of CES 2011 People's Voice award. It remained in development with no announced release date, and the project has likely been suspended indefinitely. Nvidia Shield Project Shield, later released as the Nvidia Shield Portable, is a handheld system developed by Nvidia and announced at CES 2013. It runs Android 4.2 and uses the Nvidia Tegra 4 SoC. The hardware includes a 5-inch multitouch screen with support for HD graphics (720p). The console allows for the streaming of games running on a compatible desktop PC or laptop. History: The Nvidia Shield Portable received mixed reviews from critics. Generally, reviewers praised the performance of the device but criticized the cost and lack of worthwhile games. Engadget's review noted the system's "extremely impressive PC gaming", but also that due to its high price, the device was "a hard sell as a portable game console", especially when compared to similar handhelds on the market. CNET's Eric Franklin stated in his review of the device that "The Nvidia Shield is an extremely well made device, with performance that pretty much obliterates any mobile product before it; but like most new console launches, there is currently a lack of available games worth your time." Eurogamer's comprehensive review provides a detailed account of the device and its features, concluding: "In the here and now, the first-gen Shield Portable is a gloriously niche, luxury product - the most powerful Android system on the market by a clear stretch and possessing a unique link to PC gaming that's seriously impressive in beta form, and can only get better." Nintendo Switch The Nintendo Switch is a hybrid console that can either be used in handheld form or inserted into a docking station attached to a television to play on a bigger screen. The Switch features two detachable wireless controllers, called Joy-Con, which can be used individually or attached to a grip to provide a traditional gamepad form. A handheld-only revision named the Nintendo Switch Lite was released on September 20, 2019. History: The Switch Lite had sold about 1.95 million units worldwide by September 30, 2019, only 10 days after its launch. History: 2020s Evercade The Evercade is a handheld game console developed and manufactured by UK company Blaze Entertainment. It focuses on retrogaming with ROM cartridges that each contain a number of emulated games. Development began in 2018, and the console was released in May 2020, after a few delays. Upon its launch, the console offered 10 game cartridges with a combined total of 122 games.
History: Arc System Works, Atari, Data East, Interplay Entertainment, Bandai Namco Entertainment and Piko Interactive have released emulated versions of their games for the Evercade. Pre-existing homebrew games have also been re-released for the console by Mega Cat Studios. The Evercade is capable of playing games originally released for the Atari 2600, the Atari 7800, the Atari Lynx, the NES, the SNES, and the Sega Genesis/Mega Drive. History: Analogue Pocket The Analogue Pocket is an FPGA-based handheld game console designed and manufactured by Analogue, Inc. It is designed to play games made for handhelds of the fourth, fifth and sixth generations of video game consoles. The console features a design reminiscent of the Game Boy, with additional buttons for the supported platforms. It features a 3.5" 1600×1440 LTPS LCD display, an SD card port, and a link cable port compatible with Game Boy link cables. The Analogue Pocket uses an Altera Cyclone V FPGA and is compatible with original Game Boy, Game Boy Color and Game Boy Advance cartridges out of the box. With cartridge adapters (sold separately), the Analogue Pocket can play Game Gear, Neo Geo Pocket, Neo Geo Pocket Color and Atari Lynx game cartridges. The Analogue Pocket includes an additional FPGA, allowing third-party FPGA development. The Analogue Pocket was released in December 2021. History: Steam Deck The Steam Deck is a handheld computer device, developed by Valve, which runs SteamOS 3.0, a tailored distribution of Arch Linux, and includes support for Proton, a compatibility layer that allows most Microsoft Windows games to be played on the Linux-based operating system. In terms of hardware, the Deck includes a custom AMD APU based on their Zen 2 and RDNA 2 architectures, with the CPU running a four-core/eight-thread unit and the GPU running on eight compute units with a total estimated performance of 1.6 TFLOPS. Both the CPU and GPU use variable timing frequencies, with the CPU running between 2.4 and 3.5 GHz and the GPU between 1.0 and 1.6 GHz based on current processor needs. Valve stated that the CPU has performance comparable to Ryzen 3000 desktop computer processors and the GPU to the Radeon RX 6000 series. The Deck includes 16 GB of LPDDR5 RAM in a quad-channel configuration. Valve revealed the Steam Deck on July 15, 2021, with pre-orders opening the next day. The Deck was expected to ship in December 2021 to the US, Canada, the EU and the UK, but was delayed to February 2022, with other regions to follow in 2022. Pre-orders were limited to those with Steam accounts opened before June 2021 to prevent resellers from controlling access to the device. Pre-order reservations opened on July 16, 2021, through the Steam storefront and briefly crashed the servers due to demand. While initial shipments remained planned for February 2022, Valve told new purchasers that wider availability would come later, with the 64 GB model and 256 GB NVMe model due in Q2 2022, and the 512 GB NVMe model by Q3 2022. The Steam Deck was released on February 25, 2022.
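The 1.6 TFLOPS estimate quoted for the Deck's GPU can be sanity-checked from the compute unit count and peak clock. The sketch below assumes the standard RDNA 2 figures of 64 shader lanes per compute unit and 2 floating-point operations per lane per clock (one fused multiply-add); those two constants are assumptions, not stated above.

```python
# Rough check of the Steam Deck GPU's quoted ~1.6 TFLOPS figure.
# Assumed: 64 shader lanes per RDNA 2 compute unit, and 2 FLOPs per lane
# per clock (one fused multiply-add).
compute_units   = 8
lanes_per_cu    = 64
flops_per_clock = 2
peak_clock_ghz  = 1.6

tflops = compute_units * lanes_per_cu * flops_per_clock * peak_clock_ghz / 1000
print(round(tflops, 2))  # 1.64, in line with the quoted estimate
```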
**Howard Griffiths (scientist)** Howard Griffiths (scientist): Howard Griffiths is a physiological ecologist. He is Professor of Plant Ecology in the Department of Plant Sciences at the University of Cambridge, and a Fellow of Clare College, Cambridge. He formerly worked for the University of Dundee in the Department of Biological Sciences. He applies molecular biology techniques and physiology to investigate the regulation of photosynthesis and plant water-use efficiency. Research: Griffiths' specializations include: Responses to climate change, reflected by his membership of the Cambridge Centre for Climate Science (CCfCS). Global food security, a University of Cambridge Research Theme. Research: Conservation and bioenergy crops, through his membership of the Cambridge Conservation Initiative. Griffiths has a particular interest in introducing the dynamics of plant processes without the need for time-lapse photography. His lectures demonstrate how the spatial segregation of photosystem I and photosystem II creates a highly dynamic system, with lateral mobility and migration of damaged photosynthetic reaction centers through thylakoid membranes. He studies the reaction mechanism of RuBisCO and how plants have evolved. His primary focus is the types of "carbon dioxide concentrating mechanisms" (CCMs) which enhance the operating efficiency of RuBisCO and thereby CO₂ fixation. CCMs of interest include crassulacean acid metabolism (CAM), the biochemical C4 pathway, and the biophysical CCM found within algae, cyanobacteria and hornworts. He uses stable isotopes of carbon and oxygen to compare how different types of plants have evolved their own methods of photosynthesis. Study of these isotopes can also reveal the water use of plants and insects. He collaborated on an international project investigating the possibility of introducing the algal CCM into terrestrial plants, called the Combining Algal and Plant Photosynthesis (CAPP) project. In 2016, the project achieved successful results, and the team now hopes to use this technique to increase the rate of photosynthesis in plants and hence increase crop yields. His goal is not only to discover new molecular and ecological insights but also to use those insights to sustain plant diversity and combat climate change. As part of his work, Griffiths was a Visiting Research Fellow at the Australian National University in 2006 and 2008. He takes part in peer review for the Natural Environment Research Council. He has also conducted many field work expeditions to countries including Trinidad, Venezuela, and Panama as part of his research. As of 2021, his projects focus on: "Food security: sustainability and equality in crop production systems" - in collaboration with the Global Food Security Interdisciplinary Research Centre. "Defining the algal chloroplast pyrenoid" - a continuation of his RuBisCO work. Research: "Carbon assimilation and hydraulic constraints in C3, C4 and CAM systems". "Epiphyte environmental interactions and climate change" - focussing on samples collected during field work. Publications: Griffiths has a blog documenting his and his students' research in physiological ecology. He is the author, co-author or editor of several textbooks and monographs, including The Carbon Balance of Forest Biomes with Paul Gordon Jarvis. According to Google Scholar and Scopus, his most highly cited peer-reviewed publications were in The Journal of Experimental Botany, Oecologia, New Phytologist, and Functional Plant Biology.
**Segmented mirror** Segmented mirror: A segmented mirror is an array of smaller mirrors designed to act as segments of a single large curved mirror. The segments can be either spherical or asymmetric (if they are part of a larger parabolic reflector). They are used as objectives for large reflecting telescopes. To function, all the mirror segments have to be polished to a precise shape and actively aligned by a computer-controlled active optics system using actuators built into the mirror support cell. Segmented mirror: The concept was pioneered by Guido Horn D'Arturo, who built the first working segmented mirror in 1952, after twenty years of research; it was later independently rediscovered and further developed under the leadership of Dr. Jerry Nelson at the Lawrence Berkeley National Laboratory and the University of California during the 1980s. Since then, all the necessary technologies have spread worldwide, to the point that essentially all future large optical telescopes plan to use segmented mirrors. Application: There is a technological limit for primary mirrors made of a single rigid piece of glass. Such non-segmented, or monolithic, mirrors cannot be constructed larger than about eight meters in diameter. The largest monolithic mirrors in use are currently the two primary mirrors of the Large Binocular Telescope, each with a diameter of 8.4 meters. The use of segmented mirrors is therefore a key component for large-aperture telescopes. Using a monolithic mirror much larger than 5 meters is prohibitively expensive due to the cost of both the mirror and the massive structure needed to support it. A mirror beyond that size would also sag slightly under its own weight as the telescope was rotated to different positions, changing the precision shape of the surface. Segments are also easier to fabricate, transport, install, and maintain than very large monolithic mirrors. Application: Segmented mirrors do have drawbacks: each segment may require a precise asymmetrical shape, and the segments rely on a complicated computer-controlled mounting system. All of the segments also cause diffraction effects in the final image. Another application for segmented mirrors can be found in the augmented reality sector, to minimize the size of the optical components. A partially reflective segmented mirror array is used by tooz to out-couple the light from their light guides, serving as an optical smartglass element. Telescopes using segmented mirrors: Some of the largest optical telescopes in the world use segmented primary mirrors. These include, but are not limited to, the following telescopes: Keck Telescopes The twin telescopes are the most prominent of the Mauna Kea Observatories at an elevation of 4,145 meters (13,600 ft) near the summit of Mauna Kea in Hawaii, United States. Both telescopes feature 10 m (33 ft) primary mirrors. Hobby-Eberly Telescope The HET is a 9.2-meter (30-foot) telescope located at the McDonald Observatory, West Texas, at an altitude of 2,026 m (6,647 ft). Its primary mirror is constructed from 91 hexagonal segments. The telescope's main mirror is fixed at a 55 degree angle and can rotate around its base. A target is tracked by moving the instruments at the focus of the telescope; this allows access to about 70–81% of the sky at its location, and a single target can be tracked for up to two hours. Southern African Large Telescope The SALT is a 10-meter telescope dedicated to spectroscopy for most of its observing time.
It shares similarities with the Hobby-Eberly Telescope and also consists of 91 hexagonal mirror segments, each 1 meter across, resulting in a total hexagonal mirror of 11.1 m by 9.8 m. It is located close to the town of Sutherland in the semi-desert region of the Karoo, South Africa. It is a facility of the South African Astronomical Observatory, the national optical observatory of South Africa. Gran Telescopio Canarias Also known as the GranTeCan, the Canaries Great Telescope has a primary mirror made of 36 segments. With a primary mirror of 10.4 m (34 ft), it is currently the world's largest optical telescope, located at the Roque de los Muchachos Observatory on the island of La Palma, in the Canary Islands in Spain. LAMOST The Large Sky Area Multi-Object Fibre Spectroscopic Telescope is a survey telescope located in the Hebei Province of China. It consists of two rectangular mirrors, made up of 24 and 37 segments, respectively. Each hexagonal segment is 1.1 metres across. James Webb Space Telescope The James Webb Space Telescope's 18 mirror segments were mostly fabricated in 2011. The space telescope was launched by an Ariane 5 from the Guiana Space Centre on December 25, 2021. Next-generation telescopes: Three extremely large telescopes will be the next generation of segmented-mirror telescopes and are planned to be commissioned in the 2020s. The Giant Magellan Telescope uses seven large segments and is grouped either with segmented-mirror telescopes or in a category of its own. The Thirty Meter Telescope is to be built at the Mauna Kea Observatories in Hawaii, though construction is on hold; it will use 492 hexagonal segments. The European Extremely Large Telescope will be the largest of all three, using a total of 798 segments for its primary mirror. Its first light is expected in 2027.
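For the hexagonally tiled designs above, segment counts follow directly from the ring geometry: ring k around the central position holds 6k segments. A minimal sketch of that arithmetic (the function is illustrative; the TMT's 492 segments come from a differently sectored layout that this simple formula does not cover):

```python
# Segment count of a hexagonally tiled mirror with `rings` complete rings
# around the central position; ring k contributes 6*k segments.
def hex_segments(rings, with_center=True):
    count = sum(6 * k for k in range(1, rings + 1))  # equals 3*rings*(rings+1)
    return count + 1 if with_center else count

print(hex_segments(3, with_center=False))  # 36, as in the Keck and GTC primaries
print(hex_segments(5))                     # 91, as in the HET and SALT primaries
```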
**Associative classifier** Associative classifier: An associative classifier (AC) is a kind of supervised learning model that uses association rules to assign a target value. The term associative classification was coined by Bing Liu et al., in which the authors defined a model made of rules "whose right-hand side are restricted to the classification class attribute". Model: The model generated by an AC and used to label new records consists of association rules, where the consequent corresponds to the class label. As such, they can also be seen as a list of "if-then" clauses: if the record matches some criteria (expressed in the left side of the rule, also called the antecedent), it is then labeled according to the class on the right side of the rule (the consequent). Model: Most ACs read the list of rules in order, and apply the first matching rule to label the new record. Metrics: The rules of an AC inherit some of the metrics of association rules, like the support or the confidence. Metrics can be used to order or filter the rules in the model and to evaluate their quality. Implementations: The first proposal of a classification model made of association rules was FBM. The approach was popularized by CBA, although other authors had also previously proposed the mining of association rules for classification. Other authors have since proposed multiple changes to the initial model, like the addition of a redundant-rule pruning phase or the exploitation of Emerging Patterns. Notable implementations include CMAR, CPAR, L³, CAEP, GARC, and ADT.
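As a minimal sketch of how such an ordered rule list labels a record (the attributes, rules, and class labels below are invented for illustration, and are not taken from any of the systems named above):

```python
# An associative classifier as an ordered list of (antecedent, class) rules.
# A record is labeled by the first rule whose antecedent it matches;
# a default class covers records that match no rule.
rules = [
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "overcast"}, "yes"),
    ({"windy": "false"}, "yes"),
]
DEFAULT_CLASS = "no"

def classify(record):
    for antecedent, label in rules:
        if all(record.get(attr) == value for attr, value in antecedent.items()):
            return label  # first matching rule wins
    return DEFAULT_CLASS

print(classify({"outlook": "sunny", "humidity": "high"}))  # -> no
print(classify({"outlook": "rain", "windy": "false"}))     # -> yes
```

The support and confidence mentioned under Metrics are computed for each rule over the training set in the usual association-rule sense: the fraction of records matching both antecedent and consequent, and the fraction of antecedent matches that also carry the consequent's class.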
**March 2006 lunar eclipse** March 2006 lunar eclipse: A penumbral lunar eclipse took place on 14 March 2006, the first of two lunar eclipses in 2006. This was a relatively rare total penumbral lunar eclipse, with the moon passing entirely within the penumbral shadow without entering the darker umbral shadow. Visibility: It was completely visible over Africa and Europe, seen rising over eastern North America and all of South America, and setting over western Asia. A simulated view of the earth from the center of the moon at maximum eclipse. Relation to other lunar eclipses: Eclipses of 2006 A penumbral lunar eclipse on 14 March. A total solar eclipse on 29 March. A partial lunar eclipse on 7 September. An annular solar eclipse on 22 September. Relation to other lunar eclipses: Saros series The eclipse belongs to Saros series 113, and is the 63rd of 71 lunar eclipses in the series. The first penumbral eclipse of Saros cycle 113 occurred on 29 April 888 AD, the first partial eclipse on 14 July 1014, and the first total eclipse on 20 March 1429. The last total eclipse occurred on 7 August 1645 and the last partial on 21 February 1970; the final penumbral eclipse of the series will occur on 10 June 2150. Relation to other lunar eclipses: Half-Saros cycle A lunar eclipse is preceded and followed by solar eclipses 9 years and 5.5 days away (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 120. Metonic cycles (19 years) The Metonic cycle repeats nearly exactly every 19 years and represents a Saros cycle plus one lunar year. Because the eclipse occurs on the same calendar date, the earth's shadow will be in nearly the same location relative to the background stars. Eclipse season This is the first eclipse of this season; the second is the total solar eclipse of 29 March 2006.
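The "Saros plus one lunar year" relation can be checked numerically, since all three periods are whole numbers of synodic months (223, 12, and 235, respectively); the mean month length below is a standard value, supplied here as an assumption:

```python
# Metonic cycle = Saros cycle + lunar year, counted in synodic months.
SYNODIC_MONTH = 29.530589                 # mean days per lunation (assumed)
saros      = 223 * SYNODIC_MONTH          # ~6585.3 days
lunar_year =  12 * SYNODIC_MONTH          # ~354.4 days
metonic    = 235 * SYNODIC_MONTH          # ~6939.7 days

print(round(saros + lunar_year, 3) == round(metonic, 3))  # True: 223 + 12 = 235
print(round(19 * 365.2425, 1))  # 6939.6 days: 19 calendar years, nearly equal
```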
**Space of flows** Space of flows: The space of flows is a high-level cultural abstraction of space and time, and their dynamic interactions with digital age society. The concept was created by the sociologist and cybernetic culture theoretician Manuel Castells to "reconceptualize new forms of spatial arrangements under the new technological paradigm"; a new type of space that allows distant, synchronous, real-time interaction. The space of flows was first mentioned in The Informational City: Information Technology, Economic Restructuring, and the Urban Regional Process (1989). Definitions: Castells defines the concept as follows: "The material arrangements that allow for simultaneity of social practices without territorial contiguity. It is not purely electronic space... It is made up first of all of a technological infrastructure of information systems, telecommunications, and transportation lines". Theoretic: Traditionally, the concept of space is considered a passive entity, while time is considered a separate and active entity. Space should not be disconnected from time, because space is a dynamic entity related to time. Castells rejected the contention that space will disappear upon the creation of the global city, because space is "the material support of time-sharing social practices". Thus, the space of flows is "the material organization of time-sharing social practices that work through flows". In 2001, Castells wrote: "the space of flows ... links up distant locales around shared functions and meanings on the basis of electronic circuits and fast transportation corridors, while isolating and subduing the logic of experience embodied in the space of places". Practical: Space is the physical support of the way people live in time. Real-world time, the space-and-time to which people are accustomed, is the "space of places", which is unlike the "space of flows" because it lacks the three elements of (i) a proper flow medium, (ii) the proper items composing the flow traversing through it, and (iii) the organisational nodes through which these flows circulate. The space of flows concept comprehends human action and interaction occurring dynamically and at a distance, effected via telecommunications technology containing continuous flows of time-sensitive communications, and the nodes of global computer systems. These informational flows connect people to a continuous, real-time cybernetic community that differs from the global village because the groups' positions in time become more important than their places.
**Planing mill** Planing mill: A planing mill is a facility that takes cut and seasoned boards from a sawmill and turns them into finished dimensional lumber. Machines used in the mill include the planer and matcher, the molding machines, and varieties of saws. In the planing mill, planer operators use machines that smooth and cut the wood for many different uses.
**Superior dental plexus** Superior dental plexus: The superior dental plexus is a nerve plexus that innervates the upper (maxillary) teeth and adjacent structures. It is formed by the anterior superior alveolar nerve (ASAN), the middle superior alveolar nerve (MSAN), and the posterior superior alveolar nerve (PSAN). It issues dental branches and gingival branches. A cadaveric study found the plexus to be situated in the alveolar process of the maxilla. Anatomy: The PSAN forms the posterior portion of the plexus and is distributed to the upper molar teeth and adjacent gingiva, as well as the mucosa of the cheek. The MSAN forms the middle portion of the plexus and is distributed to the upper premolar teeth and the lateral wall of the maxillary sinus. The ASAN forms the anterior portion of the plexus and is distributed to the canine and incisor teeth, as well as the anterior portion of the maxillary sinus.
**SOAtest** SOAtest: Parasoft SOAtest is a testing and analysis tool suite for testing and validating APIs and API-driven applications (e.g., cloud, mobile apps, SOA). Basic testing functionality includes functional unit testing, integration testing, regression testing, system testing, security testing, simulation and mocking, runtime error detection, web UI testing, interoperability testing, WS-* compliance testing, and load testing. Supported technologies include Web services, REST, JSON, MQ, JMS, TIBCO, HTTP, XML, EDI, mainframes, and custom message formats. Parasoft SOAtest introduced service virtualization via server emulation and stubs in 2002; by 2007, it provided an intelligent stubs platform that emulated the behavior of dependent services that were otherwise difficult to access or configure during development and testing. Extended service virtualization functionality is now in Parasoft Virtualize, while SOAtest provides intelligent stubbing. SOAtest: SOAtest is used by organizations such as Cisco, IBM, HP, Fidelity, Bloomberg, Vanguard, AT&T, IRS, CDC, Tata Consultancy Services, Comcast and Sabre. It was recognized as a leader in Forrester Research's The Forrester Wave™: Modern Application Functional Test Automation Tools, Q4 2016, which evaluated 9 functional test automation tool vendors across 40 criteria. Forrester Research gave SOAtest the highest score among all vendors in the Current Offering category, citing its strength in API testing, UI automation, and key integrations. It is also part of the solution recognized as "innovation and technology leader" in Voke's service virtualization market mover array. SOAtest was recognized as a leader by Forrester in the 2018 Forrester Wave Omnichannel Functional Test Tools. The report said "Parasoft shined in our evaluation specifically around effective test maintenance, strong CI/CD and application lifecycle management (ALM) platform integration". In 2018, SOAtest won an award for "Best in DevOps APIs" in the 2018 API Awards from API:WORLD.
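SOAtest's own test interface is a commercial product, so as a rough illustration of what a minimal functional API test does (the kind of check such tools automate), here is a plain Python sketch using the requests library; the endpoint and the expected payload are hypothetical:

```python
# Minimal functional API test: call an endpoint, assert on status and content.
# The URL and the expected fields are hypothetical examples, not from SOAtest.
import requests

def test_get_account():
    resp = requests.get("https://api.example.com/accounts/42", timeout=5)
    assert resp.status_code == 200          # the call succeeded
    body = resp.json()
    assert body["id"] == 42                 # the right resource came back
    assert "balance" in body                # a required field is present

if __name__ == "__main__":
    test_get_account()
    print("functional API test passed")
```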
**Journal of Approximation Theory** Journal of Approximation Theory: The Journal of Approximation Theory is "devoted to advances in pure and applied approximation theory and related areas."
**Аналитик** Аналитик: Analitik (Russian: Аналитик) is a programming language, developed in 1968 at the Institute of Cybernetics of the Academy of Sciences of the Ukrainian SSR in the USSR. It is a development of the ALMIR-65 language, keeping compatibility with it. Distinctive features of the language are abstract data types, calculations in arbitrary algebras, and analytic transformations. It was implemented on MIR-2 machines. Later, a version called Analitik-74 was developed and implemented on MIR-3 machines. Today, the language lives on in the computer algebra system Analitik-2010, which is being developed jointly by the Institute of Mathematical Machines and Systems of the National Academy of Sciences of Ukraine and the Poltava National Technical University.
**Bristlr** Bristlr: Bristlr is a location-based social search mobile app that facilitates communication between bearded men and women who love beards, allowing matched users to chat. Overview: Bristlr was founded by John Kershaw in 2014. The app is popular in Canada, the Netherlands, the United Kingdom and the United States.
**Photorefractive effect** Photorefractive effect: The photorefractive effect is a nonlinear optical effect seen in certain crystals and other materials that respond to light by altering their refractive index. The effect can be used to store temporary, erasable holograms and is useful for holographic data storage. It can also be used to create a phase-conjugate mirror or an optical spatial soliton. Mechanism: The photorefractive effect occurs in several stages: A photorefractive material is illuminated by coherent beams of light. (In holography, these would be the signal and reference beams.) Interference between the beams results in a pattern of dark and light fringes throughout the crystal. In regions where a bright fringe is present, electrons can absorb the light and be photoexcited from an impurity level into the conduction band of the material, leaving an electron hole (a net positive charge). Impurity levels have an energy intermediate between the energies of the valence band and conduction band of the material. Once in the conduction band, the electrons are free to move and diffuse throughout the crystal. Since the electrons are being excited preferentially in the bright fringes, the net electron diffusion current is towards the dark-fringe regions of the material. Mechanism: While in the conduction band, the electrons may with some probability recombine with the holes and return to the impurity levels. The rate at which this recombination takes place determines how far the electrons diffuse, and thus the overall strength of the photorefractive effect in that material. Once back in the impurity level, the electrons are trapped and can no longer move unless re-excited back into the conduction band (by light). Mechanism: With the net redistribution of electrons into the dark regions of the material, leaving holes in the bright areas, the resulting charge distribution causes an electric field, known as a space-charge field, to be set up in the crystal. Since the electrons and holes are trapped and immobile, the space-charge field persists even when the illuminating beams are removed. Mechanism: The internal space-charge field, via the electro-optic effect, causes the refractive index of the crystal to change in the regions where the field is strongest. This causes a spatially varying refractive index grating to occur throughout the crystal. The pattern of the grating that is formed follows the light interference pattern originally imposed on the crystal. The refractive index grating can now diffract light shone into the crystal, with the resulting diffraction pattern recreating the original pattern of light stored in the crystal. Application: The photorefractive effect can be used for dynamic holography and, in particular, for the cleaning of coherent beams. For example, in the case of a hologram, illuminating the grating with just the reference beam causes the reconstruction of the original signal beam. When two coherent laser beams (usually obtained by splitting a laser beam by the use of a beamsplitter into two, and then suitably redirecting by mirrors) cross inside a photorefractive crystal, the resultant refractive index grating diffracts the laser beams. As a result, one beam gains energy and becomes more intense at the expense of light intensity reduction of the other. This phenomenon is an example of two-wave mixing. In this configuration, the Bragg diffraction condition is automatically satisfied.
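A short way to see why the Bragg condition is automatic: the two writing beams create the grating with exactly the geometry that Bragg matching requires. For beams of vacuum wavelength λ crossing at half-angle θ inside a medium of refractive index n, the fringe period and the first-order Bragg condition read (a standard textbook relation, given here as a sketch):

```latex
% Fringe period written by the two beams, and the Bragg condition for
% reading that same grating at the same wavelength:
\[
  \Lambda = \frac{\lambda}{2 n \sin\theta}, \qquad
  2 \Lambda \sin\theta_B = \frac{\lambda}{n}
  \;\Rightarrow\; \sin\theta_B = \sin\theta ,
\]
% so each writing beam is already incident at the Bragg angle of the
% grating it helped to create.
```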
Application: The pattern stored inside the crystal persists until the pattern is erased; this can be done by flooding the crystal with uniform illumination which will excite the electrons back into the conduction band and allow them to be distributed more uniformly. Photorefractive materials include barium titanate (BaTiO3), lithium niobate (LiNbO3), vanadium doped zinc telluride (ZnTe:V), organic photorefractive materials, certain photopolymers, and some multiple quantum well structures.
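The size of the index modulation in the final stage can be estimated from the linear electro-optic (Pockels) relation Δn ≈ ½ n³ r_eff E_sc. A minimal numeric sketch, using typical textbook values for lithium niobate that are assumptions rather than figures from the text:

```python
# Order-of-magnitude index change from the space-charge field via the
# Pockels effect: delta_n ~ 0.5 * n**3 * r_eff * E_sc.
# Material constants below are typical textbook values for LiNbO3 (assumed).
n     = 2.2         # refractive index
r_eff = 30.8e-12    # electro-optic coefficient r33, m/V
E_sc  = 1.0e6       # space-charge field, V/m (10 kV/cm)

delta_n = 0.5 * n**3 * r_eff * E_sc
print(f"{delta_n:.1e}")  # ~1.6e-04: small, yet enough for an efficient grating
```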
**Fluorine etching** Fluorine etching: Fluorine etching is a printmaking technique developed by a circle of artists working in Cracow and Warsaw in the first two decades of the twentieth century. It is likely that both the detrimental effects on the health of engravers and the fragility of the material resulted in this technique being abandoned. History: The novel way of printmaking known as fluorine etching was developed by the Polish chemist Tadeusz Estreicher (1871–1952) in the early years of the twentieth century. Commercial glass-makers everywhere had been using etching as a standard decoration for drinking and other glasses since the mid-nineteenth century as a cheaper alternative to engraving. Toxic hydrofluoric acid was also used to mark glass tubes used in science laboratories. For fluorine etching, however, as first described in 1912 by Hieronim Wilder, a normal glass plate was coated with colophony (a solid resin obtained from certain types of conifers), followed by the artist etching a design into it, which was then treated with hydrofluoric acid. The glass plate was inked, wiped and printed using a rolling press in the same way as a traditional etched copperplate, with the printer carefully avoiding the danger of the glass breaking under the pressure of the rollers. The glass easily shattered, which may have been why not many prints in this technique were made or have survived. History: A glass matrix had previously been used in France for cliché-verres by François Millet, Camille Corot and Charles Daubigny, among others. In this method, instead of etching an image, a drawing was made with a needle on a glass panel coated with collodion, a chemical used in classical photography. The composition was then exposed to light on a photosensitive paper, as in a photographic process. Current uses: Estreicher suggested to a friend of his, the Polish painter Stanisław Wyspiański (1869–1907), the idea of applying the technique used to mark glass tubes to a glass plate for artistic purposes. Wyspiański developed many compositions in this technique, dubbed fluorine etching, but now only five are known, of which three are represented in Polish collections. This unusual printmaking technique was met with interest among other Polish artists working in Cracow and Warsaw, such as Ferdynand Ruszczyc (1870–1936), Ewa Aszer-Librowicz (1883–1943) – sister of the Jewish architect and artist Jerzy Aszer (1884–1944) – Janina Bobińska (1894–1973), Bolesław Czarkowski (1873–1937), Mikalojus Konstantinas Čiurlionis (1875–1911), Ignacja Johnowa (1867–1953), Jadwiga Kernbaumówna (life dates unknown, but in 1912 she was a member of the Association Jung Art), Eugeniusz Morawski-Dąbrowa (1871–1948), Maria Płonowska (1878–1955), Emilia Wysocka (1888–1973) and Zygmunt Kamiński (1888–1969). Another artist fascinated by the new technique was Leon Wyczółkowski (1852–1936), who, as mentioned in his memoirs, made his first fluorine etching, a self-portrait in five colours, in May 1904. Current uses: The last artist to use the fluorine etching technique was Stanisław Dąbrowski (1882–1973), by whom two prints of 1920, today kept in the National Library in Warsaw, are known.
**Implant failure** Implant failure: Implant failure refers to the failure of any medical implant to meet the claims of its manufacturer or the health care provider involved in its installation. Implant failure can have any number of causes. The rates of failure vary for different implants. The monitoring of the safety of implants is conducted within the context of broader pharmacovigilance. Common types of failure: Material degradation Implant failure can occur due to the degradation of the material an implant is made of. With time, mechanical degradation, in the form of wear or fatigue, or electrochemical degradation, in the form of corrosion, can occur. Biotoxicity, particularly in metal implants, can arise due to ion release. Bacterial infection Implants made of synthetic materials are naturally coated by the body with a biofilm, which may function as a favorable medium for bacterial growth. Implant failure due to bacterial infection of the implant can occur at any point in the implant's lifetime. Bacteria may already reside on the implant or be introduced during the implantation. Typical failure mechanisms include tissue damage and implant detachment due to bacteria-generated biofilm. Hip replacement failure Hip replacement implants can fail. Outcomes are normally recorded in a joint replacement registry to ensure failure patterns are picked up on. In 2013, Johnson & Johnson shared documents which indicated that 40% of a class of hip replacement implants which it manufactured had failed. Common types of failure: Pacemaker failure Pacemaker failure is the inability of an implanted artificial pacemaker to perform its intended function of regulating the beating of the heart. It is defined by the requirement of a repeat surgical pacemaker-related procedure after the initial implantation. Causes of pacemaker failure include: lead-related failure (lead migration, lead fracture, ventricular perforation), unit malfunction (battery failure or component malfunction), problems at the insertion site (infections, tissue breakdown, battery pack migration), and failures related to exposure to high-voltage electricity or high-intensity microwaves. Common types of failure: Cochlear implant failure Cochlear implants are used to treat severe to profound hearing loss by electrically stimulating the hearing nerve. Clinical symptoms of cochlear implant failure include auditory symptoms (tinnitus, buzzing, roaring, popping sounds), non-auditory symptoms (pain, shocking sensation, burning sensation, facial stimulation, itching), and a decrease in the patient's hearing performance. When such symptoms occur, the patient's clinical team evaluates the patient and the device using in-situ methods, and determines if revision surgery is necessary. The most commonly reported device failures are due to impacts, loss of hermeticity, and electrode lead malfunctions. Most manufacturers provide on their websites the survival rate of their marketed implants, although they are not required to do so. In order to improve and standardize failure reporting practices to the public, the AAMI is developing an American standard for cochlear implants in collaboration with the FDA, major cochlear implant manufacturers, the CALCE center for reliability, doctors, and clinicians. Common types of failure: Dental implant failure Failure of a dental implant is often related to the failure of the implant to osseointegrate correctly with the bone, or vice versa.
Common types of failure: A dental implant is considered to be a failure if it is lost, mobile or shows peri-implant (around the implant) bone loss of greater than 1.0 mm in the first year and greater than 0.2 mm a year thereafter. Dental implant failure has been studied: persons who smoke habitually prior to having dental implants are significantly more likely to have their implants fail, and individuals who have diabetes and those who disregard general oral hygiene are also at higher risk of having their implants fail. Responses to implant failure: In 2012, the Royal College of Surgeons of England and the British Orthopaedic Association called for increased regulation of implants to prevent implant failure. A 2011 study by Dr. Diana Zuckerman and Paul Brown of the National Research Center for Women and Families, and Dr. Steven Nissen of the Cleveland Clinic, published in the Archives of Internal Medicine, showed that most medical devices recalled in the last five years for "serious health problems or death" had been previously approved by the FDA using the less stringent, and cheaper, 510(k) process. In a few cases the devices had been deemed so low-risk that they did not need FDA regulation. Of the 113 devices recalled, 35 were for cardiovascular issues. This may lead to a reevaluation of FDA procedures and better oversight.
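The quantitative part of the dental criterion quoted above translates directly into a simple check; the sketch below covers only the bone-loss thresholds (real assessment also covers implant loss, mobility, and clinical signs), and the function name and sample data are illustrative:

```python
# Bone-loss part of the dental implant failure criterion: more than 1.0 mm
# in the first year, or more than 0.2 mm in any later year.
def exceeds_bone_loss_threshold(annual_loss_mm):
    if not annual_loss_mm:
        return False
    if annual_loss_mm[0] > 1.0:             # first year after placement
        return True
    return any(loss > 0.2 for loss in annual_loss_mm[1:])  # later years

print(exceeds_bone_loss_threshold([0.8, 0.1, 0.3]))  # True: year-3 loss of 0.3 mm
```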
**Filmjölk** Filmjölk: Filmjölk (Swedish: [ˈfîːl.ˌmjœlk]), also known as fil, is a traditional fermented milk product from Sweden, and a common dairy product within the Nordic countries. It is made by fermenting cow's milk with a variety of bacteria from the species Lactococcus lactis and Leuconostoc mesenteroides. The bacteria metabolize lactose, the sugar naturally found in milk, into lactic acid, which means people who are lactose intolerant can tolerate it better than other dairy products. The acid gives filmjölk a sour taste and causes proteins in the milk, mainly casein, to coagulate, thus thickening the final product. The bacteria also produce a limited amount of diacetyl, a compound with a buttery flavor, which gives filmjölk its characteristic taste. Filmjölk has a mild and slightly acidic taste. It has a shelf-life of around 10–14 days at refrigeration temperature. Overview: In the Nordic countries, filmjölk is often eaten with breakfast cereal, muesli or crushed crisp bread on top. Some people add sugar, jam, apple sauce, cinnamon, ginger, fruits, or berries for extra flavor. In Norwegian it is called surmelk (Nynorsk: surmjølk) ('sour milk') but the official name is kulturmelk (Nynorsk: kulturmjølk). The drink is also popular in Latvian kitchens, where it is called rūgušpiens, rūgtpiens ('fermented milk' or 'sour milk') and can be bought ready from stores but is more commonly made at home. It can also be purchased and is popular in the neighboring country, Lithuania, where it is called rūgpienis or raugintas pienas ('sour/fermented milk'). Due to its popularity, it can be bought in many stores alongside kefir. Overview: Manufactured filmjölk is made from pasteurised, homogenised, and standardised cow's milk. Although homemade filmjölk has been around for a long time (written records from the 18th century speak of filmjölk-like products, but it has probably been around since the Viking Age or longer), it was first introduced to the Swedish market as a consumer product in 1931 by the Swedish dairy cooperative Arla. The first filmjölk was unflavoured and contained 3% milkfat. Since the 1960s, different varieties of unflavoured filmjölk have been marketed in Swedish grocery stores. Långfil, a more elastic variant of filmjölk, was introduced in 1965; lättfil, filmjölk with 0.5% milkfat, was introduced in 1967; and mellanfil, filmjölk with 1.5% milkfat, was introduced in 1990. In 1997, Arla introduced its first flavoured filmjölk: strawberry-flavoured filmjölk. The flavoured filmjölk was so popular that different flavours soon followed. By 2001, almost one third of the filmjölk sold in Sweden was flavoured filmjölk. Since 2007, variations of filmjölk include filmjölk with various fat content, filmjölk flavoured with fruit, vanilla, or honey, as well as filmjölk with probiotic bacteria that is claimed to be extra healthful, such as Onaka fil, which contains Bifidobacterium lactis (a strain of bacteria popular in Japan), and Verum Hälsofil, which contains Lactococcus lactis L1A in quantities of at least 10 billion live bacteria per deciliter. In English: There is no single accepted English term for fil or filmjölk. Fil and/or filmjölk has been translated to English as sour milk, soured milk, acidulated milk, fermented milk, and curdled milk, all of which are nearly synonymous and describe filmjölk but do not differentiate filmjölk from other types of soured/fermented milk. Filmjölk has also been described as viscous fermented milk and viscous mesophilic fermented milk.
Furthermore, articles written in English can be found that use the Swedish term filmjölk, as well as the Anglicised spellings filmjolk, fil mjölk, and fil mjolk. In baking, when filmjölk is called for, cultured buttermilk can be substituted. In Finland Swedish: In Finland Swedish, the dialects spoken by the Swedish-speaking population of Finland, fil is the equivalent of filbunke in Sweden. Not all variants of filmjölk are found in Finland, normally only filbunke and långfil. Swedish-speakers in Finland usually use the word surmjölk, which is the older name for filmjölk (also in Sweden), or piimä (in Finnish), which is a fermented milk product that is thinner than filmjölk and resembles cultured buttermilk. Types in Sweden: In Sweden, there are five Swedish dairy cooperatives that produce filmjölk: Arla Foods, Falköpings Mejeri, Gefleortens Mejeri, Norrmejerier, and Skånemejerier. In addition, Wapnö AB, a Swedish dairy company, and Valio, a Finnish dairy company, also sell a limited variety of filmjölk in Sweden. Prior to the industrial manufacture of filmjölk, many families made filmjölk at home. A fil culture is a mixture of bacteria from the species Lactococcus lactis and Leuconostoc mesenteroides; e.g., Arla's fil culture contains Lactococcus lactis subsp. lactis, Lactococcus lactis subsp. cremoris, Lactococcus lactis biovar. diacetylactis, and Leuconostoc mesenteroides subsp. cremoris. Homemade filmjölk: To make filmjölk, a small amount of bacteria from an active batch of filmjölk is normally transferred to pasteurised milk and then left for one to two days to ferment at room temperature or in a cool cellar. The fil culture is needed when using pasteurised milk because the bacteria occurring naturally in milk are killed during the pasteurisation process. Tätmjölk: A variant of filmjölk called tätmjölk, filtäte, täte or långmjölk is made by rubbing the inside of a container with leaves of certain plants: sundew (Drosera, Swedish: sileshår) or butterwort (Pinguicula, Swedish: tätört). Lukewarm milk is added to the container and left to ferment for one to two days. More tätmjölk can then be made by adding completed tätmjölk to milk. In Flora Lapponica (1737), Carl von Linné described a recipe for tätmjölk and wrote that any species of butterwort could be used to make it. Sundew and butterwort are carnivorous plants that have enzymes that degrade proteins, which makes the milk thick. How butterwort influences the production of tätmjölk is not completely understood – lactic acid bacteria have not been isolated during analyses of butterwort.
**Neats and scruffies** Neats and scruffies: In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 1970s and was a subject of discussion until the mid-1980s. "Neats" use algorithms based on a single formal paradigm, such as logic, mathematical optimization or neural networks. Neats verify that their programs are correct using theorems and mathematical rigor. Neat researchers and analysts tend to express the hope that this single formal paradigm can be extended and improved to achieve general intelligence and superintelligence. Neats and scruffies: "Scruffies" use any number of different algorithms and methods to achieve intelligent behavior. Scruffies rely on incremental testing to verify their programs, and scruffy programming requires large amounts of hand coding or knowledge engineering. Scruffies have argued that general intelligence can only be implemented by solving a large number of essentially unrelated problems, and that there is no magic bullet that will allow programs to develop general intelligence autonomously. John Brockman compares the neat approach to physics, in that it uses simple mathematical models as its foundation. The scruffy approach is more like biology, where much of the work involves studying and categorizing diverse phenomena. Modern AI has elements of both scruffy and neat approaches. In the 1990s, AI researchers applied mathematical rigor to their programs, as the neats did. They also express the hope that there is a single paradigm (a "master algorithm") that will cause general intelligence and superintelligence to emerge. But modern AI also resembles the scruffies: modern machine learning applications require a great deal of hand-tuning and incremental testing; while the general algorithm is mathematically rigorous, accomplishing the specific goals of a particular application is not. Also, in the early 2000s, the field of software development embraced extreme programming, which is a modern version of the scruffy methodology -- try things and test them, without wasting time looking for more elegant or general solutions. Origin in the 1970s: The distinction between neat and scruffy originated in the mid-1970s with Roger Schank. Schank used the terms to characterize the difference between his work on natural language processing (which represented commonsense knowledge in the form of large amorphous semantic networks) and the work of John McCarthy, Allen Newell, Herbert A. Simon, Robert Kowalski and others whose work was based on logic and formal extensions of logic. Schank described himself as an AI scruffy. He made this distinction in linguistics as well, arguing strongly against Chomsky's view of language. The distinction was also partly geographical and cultural: "scruffy" attributes were exemplified by AI research at MIT under Marvin Minsky in the 1970s. The laboratory was famously "freewheeling" and researchers often developed AI programs by spending long hours fine-tuning programs until they showed the required behavior. Important and influential "scruffy" programs developed at MIT included Joseph Weizenbaum's ELIZA, which behaved as if it spoke English, without any formal knowledge at all, and Terry Winograd's SHRDLU, which could successfully answer queries and carry out actions in a simplified world consisting of blocks and a robot arm.
SHRDLU, while successful, could not be scaled up into a useful natural language processing system, because it lacked a structured design. Maintaining a larger version of the program proved to be impossible; i.e., it was too scruffy to be extended. Origin in the 1970s: Other AI laboratories (of which the largest were Stanford, Carnegie Mellon University and the University of Edinburgh) focused on logic and formal problem solving as a basis for AI. These institutions supported the work of John McCarthy, Herbert Simon, Allen Newell, Donald Michie, Robert Kowalski, and other "neats". Origin in the 1970s: The contrast between MIT's approach and that of other laboratories was also described as a "procedural/declarative distinction". Programs like SHRDLU were designed as agents that carried out actions; they executed "procedures". Other programs were designed as inference engines that manipulated formal statements (or "declarations") about the world and translated these manipulations into actions. In his 1983 presidential address to the Association for the Advancement of Artificial Intelligence, Nils Nilsson discussed the issue, arguing that "the field needed both". He wrote "much of the knowledge we want our programs to have can and should be represented declaratively in some kind of declarative, logic-like formalism. Ad hoc structures have their place, but most of these come from the domain itself." Alex P. Pentland and Martin Fischler of SRI International concurred about the anticipated role of deduction and logic-like formalisms in future AI research, but not to the extent that Nilsson described. Scruffy projects in the 1980s: The scruffy approach was applied to robotics by Rodney Brooks in the mid-1980s. He advocated building robots that were, as he put it, Fast, Cheap and Out of Control, the title of a 1989 paper co-authored with Anita Flynn. Unlike earlier robots such as Shakey or the Stanford cart, they did not build up representations of the world by analyzing visual information with algorithms drawn from mathematical machine learning techniques, and they did not plan their actions using formalizations based on logic, such as the 'Planner' language. They simply reacted to their sensors in a way that tended to help them survive and move. Douglas Lenat's Cyc project, initiated in 1984 as one of the earliest and most ambitious projects to capture all of human knowledge in machine-readable form, is "a determinedly scruffy enterprise". The Cyc database contains millions of facts about all the complexities of the world, each of which must be entered one at a time by knowledge engineers. Each of these entries is an ad hoc addition to the intelligence of the system. While there may be a "neat" solution to the problem of commonsense knowledge (such as machine learning algorithms with natural language processing that could study the text available over the internet), no such project has yet been successful. The Society of Mind: In 1986 Marvin Minsky published The Society of Mind, which advocated a view of intelligence and the mind as an interacting community of modules or agents, each handling different aspects of cognition, where some modules were specialized for very specific tasks (e.g. edge detection in the visual cortex) and other modules were specialized to manage communication and prioritization (e.g. planning and attention in the frontal lobes). Minsky presented this paradigm as a model of both biological human intelligence and as a blueprint for future work in AI.
This paradigm is explicitly "scruffy" in that it does not expect there to be a single algorithm that can be applied to all of the tasks involved in intelligent behavior. Minsky wrote: What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. The Society of Mind: As of 1991, Minsky was still publishing papers evaluating the relative advantages of the neat versus scruffy approaches, e.g. "Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy". Modern AI as both neat and scruffy: New statistical and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as mathematical optimization and neural networks. Pamela McCorduck wrote that "As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms." This general trend towards more formal methods in AI was described as "the victory of the neats" by Peter Norvig and Stuart Russell in 2003. However, by 2021, Russell and Norvig had changed their minds. Deep learning networks and machine learning in general require extensive fine-tuning -- they must be iteratively tested until they begin to show the desired behavior. This is a scruffy methodology. Well-known examples: Neats: John McCarthy, Allen Newell, Herbert A. Simon, Edward Feigenbaum, Robert Kowalski, Judea Pearl. Scruffies: Rodney Brooks, Terry Winograd, Marvin Minsky, Roger Schank, Doug Lenat.
**Azoic hypothesis** Azoic hypothesis: The Azoic hypothesis (sometimes referred to as the Abyssus theory) is a superseded scientific theory proposed by Edward Forbes in 1843, stating that the abundance and variety of marine life decrease with increasing depth; by extrapolation of his own measurements, Forbes calculated that marine life would cease to exist below 300 fathoms (1,800 ft; 550 m). Overview: The theory was based upon Forbes' findings aboard HMS Beacon (1832), a survey ship to which he had been appointed naturalist by the ship's commander, Captain Thomas Graves. With Forbes aboard, HMS Beacon set sail from Malta on 17 April 1841 to survey the Aegean Sea. It was at this point that Forbes began to take dredging samples at various depths of the ocean; he observed that samples from greater depths displayed a narrower diversity of creatures, which were generally smaller in size. Forbes reported his findings from the Aegean Sea in his 1843 report to the British Association entitled Report on the Mollusca and Radiata of the Aegean Sea. His findings were widely accepted by the scientific community and were bolstered by other scientific figures of the time. David Page (1814–1879), a respected geologist, reinforced the theory by stating that "according to experiment, water at the depth of 1000 feet is compressed 1⁄340th of its own bulk; and at this rate of compression we know that at great depths animal and vegetable life as known to us cannot possibly exist – the extreme depressions of seas being thus, like the extreme elevations of the land, barren and lifeless solitudes." The theory was not disproven until the late 1860s, when biologist Michael Sars, Professor of Zoology at Christiania (now Oslo) University, discovered life at a depth greater than 300 fathoms. Sars listed 427 animal species which had been found along the Norwegian coast at a depth of 450 fathoms, and gave a description of a crinoid, Rhizocrinus lofotensis, which his son had recovered from a depth of 300 fathoms in Lofoten. Overview: In 1869, Charles Wyville Thomson dredged marine life from a depth of 2,345 fathoms (14,070 ft; 4,289 m), finally dispelling Forbes' azoic theory. In light of this evidence, the Azoic hypothesis came to be seen as a false hypothesis and gave way to vastly increased efforts in deep-sea exploration and the study of associated marine life. Since being discredited, the theory has been referenced widely in popular culture and alluded to in documentaries that explore and showcase deep-sea marine life.
**Axicon** Axicon: An axicon is a specialized type of lens which has a conical surface. An axicon transforms a laser beam into a ring-shaped distribution. Axicons can be convex or concave and can be made of any optical material. Combining an axicon with other axicons or lenses allows a wide variety of beam patterns to be generated. It can be used to turn a Gaussian beam into a non-diffractive Bessel-like beam. Axicons were first proposed in 1954 by John McLeod. Axicons are used in atomic traps and for generating plasma in wakefield accelerators. They are used in eye surgery in cases where a ring-shaped spot is useful. An axicon is usually characterized by the ratio d/l of the ring diameter to the distance from the lens tip to the image plane. Special features and Bessel beam shaping: Single axicons are usually used to generate an annular light distribution which is laterally constant along the optical axis over a certain range. This special feature results from the generation of (non-diffracting) Bessel-like beams with properties mainly determined by the axicon angle α. Special features and Bessel beam shaping: There are two areas of interest for a variety of applications: a long range with an almost constant intensity distribution (a) and a ring-shaped distant-field intensity distribution (b). The distance (a) depends on the angle α of the axicon and the diameter (ØEP) of the incident beam. The diameter of the annular distant-field intensity distribution (b) is proportional to the length l. The width of the ring is about half the diameter of the incident beam. Applications: One application of axicons is in telescopes, where the usual spherical objective is replaced by an axicon. Such a telescope can be simultaneously in focus for targets at distances from less than a meter to infinity, without making any adjustments. It can be used to simultaneously view two or more small sources placed along the line of sight. Applications: Axicons can be used in laser eye surgery. Their ability to focus a laser beam into a ring is useful in surgery for smoothing and ablating corneal tissue. Using a combination of positive and negative axicons, the diameter of the ring of light can be adjusted to obtain the best performance. Axicons are also used in optical trapping. The ring of light creates attractive and repulsive forces which can trap and hold microparticles and cells in the center of the ring. Applications: Other applications include solar concentrators, laser resonators, breakdown in light filaments, gradient-index and grating axicons, and illumination. Reflaxicons: The reflective axicon or "reflaxicon" was described in 1973 by W. R. Edmonds. The reflaxicon uses a pair of coaxial, conical reflecting surfaces to duplicate the functionality of the transmissive axicon. The use of reflection rather than transmission improves the damage threshold, chromatic aberration, and group velocity dispersion compared to conventional axicons. Research: In research at the Physikalisch-Chemisches Institut, Heidelberg, Germany, axicon lenses have been used in laser diagnostics of mechanical properties of thin films and solids by surface-wave spectroscopy. In these experiments, laser radiation is focused on the surfaces in a concentric ring. The laser pulse generates concentric surface acoustic waves, with an amplitude that reaches a maximum in the center of the ring. This approach makes it possible to study mechanical properties of materials under extreme conditions.
Axicons have been used by the research team at the Beckman Laser Institute and Medical Clinic to focus a parallel beam into a beam with a long focal depth and a highly confined lateral spot, in order to develop a novel optical coherence tomography (OCT) system. InPhase Technologies researchers use axicons in holographic data storage. Their goal is to determine the effects of axicons on the Fourier distribution of the random binary data spectrum of a spatial light modulator (SLM). Research: Wendell T. Hill, III's research group at the University of Maryland is focused on creating elements of atom optics, such as beam splitters and beam switches, out of hollow laser beams. These beams, made using axicons, provide an ideal optical trap to channel cold atoms. An article published by the research team at St. Andrews University in the UK in the Sept. 12 issue of Nature describes axicon use in optical tweezers, which are commonly used for manipulating microscopic particles such as cells and colloids. The tweezers use lasers with a Bessel beam profile, produced by illuminating an axicon with a Gaussian beam, which can trap several particles along the beam's axis.
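The ring and Bessel-zone geometry described in the beam-shaping section above can be made concrete with a small numeric sketch. It assumes the standard small-angle thin-axicon relations (deflection angle β ≈ (n − 1)α); the function name and the example values are illustrative assumptions, not figures from this article:

```python
import math

def axicon_geometry(alpha_deg, n, beam_diameter_mm, l_mm):
    """Small-angle estimates for a thin refractive axicon.

    alpha_deg        -- axicon (cone) angle in degrees
    n                -- refractive index of the axicon material
    beam_diameter_mm -- diameter of the incident collimated beam (ØEP)
    l_mm             -- distance from the axicon to the observation plane
    """
    beta = (n - 1.0) * math.radians(alpha_deg)         # deflection angle (assumed small)
    z_max = (beam_diameter_mm / 2.0) / math.tan(beta)  # zone (a): near-constant Bessel-like range
    ring_diameter = 2.0 * l_mm * math.tan(beta)        # zone (b): ring diameter, proportional to l
    ring_width = beam_diameter_mm / 2.0                # about half the incident beam diameter
    return z_max, ring_diameter, ring_width

# Example: a 1-degree axicon in glass with n = 1.52, a 10 mm beam, screen at 1 m.
print(axicon_geometry(alpha_deg=1.0, n=1.52, beam_diameter_mm=10.0, l_mm=1000.0))
```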
**Recursive tree** Recursive tree: In graph theory, a recursive tree (i.e., an unordered tree) is a labeled, rooted tree. A size-n recursive tree's vertices are labeled by distinct positive integers 1, 2, …, n, where the labels are strictly increasing starting at the root, which is labeled 1. Recursive trees are non-planar, which means that the children of a particular vertex are not ordered; for example, the following two size-3 recursive trees are equivalent: 3/1\2 = 2/1\3. Recursive tree: Recursive trees also appear in the literature under the name Increasing Cayley trees. Properties: The number of size-n recursive trees is given by Tn = (n − 1)!. Hence the exponential generating function T(z) of the sequence Tn is given by T(z) = log(1/(1 − z)). Combinatorially, a recursive tree can be interpreted as a root followed by an unordered sequence of recursive trees. Let F denote the family of recursive trees. Then F = ∘ ∗ exp(F), where ∘ denotes the node labeled by 1, × the Cartesian product and ∗ the partition product for labeled objects. By translation of the formal description one obtains the differential equation for T(z): T′(z) = exp(T(z)), with T(0) = 0. Bijections: There are bijective correspondences between recursive trees of size n and permutations of size n − 1. Applications: Recursive trees can be generated using a simple stochastic process; see the sketch below. Such random recursive trees are used as simple models for epidemics.
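The stochastic process mentioned above is the standard uniform-attachment construction: start from the root 1 and let each new node pick its parent uniformly at random among the nodes already present. A minimal Python sketch (the function name is ours):

```python
import math
import random

def random_recursive_tree(n):
    """Grow a size-n random recursive tree.

    Node 1 is the root; each new node k (for k = 2..n) attaches to a
    parent chosen uniformly at random among nodes 1..k-1, so labels
    increase along every root-to-leaf path by construction.
    Returns a dict mapping each node k >= 2 to its parent.
    """
    return {k: random.randint(1, k - 1) for k in range(2, n + 1)}

# The attachment steps have 1, 2, ..., n-1 choices respectively, which
# re-derives the count Tn = (n - 1)! given in the Properties section.
n = 5
assert math.factorial(n - 1) == 1 * 2 * 3 * 4  # 24 distinct size-5 recursive trees
print(random_recursive_tree(n))  # e.g. {2: 1, 3: 1, 4: 2, 5: 4}
```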
**Intraocular pressure** Intraocular pressure: Intraocular pressure (IOP) is the fluid pressure inside the eye. Tonometry is the method eye care professionals use to determine this. IOP is an important aspect in the evaluation of patients at risk of glaucoma. Most tonometers are calibrated to measure pressure in millimeters of mercury (mmHg). Physiology: Intraocular pressure is determined by the production of aqueous humour by the ciliary body and its drainage via the trabecular meshwork and uveoscleral outflow. This is because the vitreous humour in the posterior segment has a relatively fixed volume and thus does not affect intraocular pressure regulation. Physiology: An important quantitative relationship (Goldmann's equation) is as follows: Po = (F − U)/C + Pv, where: Po is the IOP in millimeters of mercury (mmHg); F is the rate of aqueous humour formation in microliters per minute (μL/min); U is the resorption of aqueous humour through the uveoscleral route (μL/min); C is the facility of outflow in microliters per minute per millimeter of mercury (μL/min/mmHg); and Pv is the episcleral venous pressure in millimeters of mercury (mmHg). The above factors are those that drive IOP. Measurement: Palpation is one of the oldest, simplest, and least expensive methods for approximate IOP measurement; however, it is very inaccurate unless the pressure is very high. Intraocular pressure is measured with a tonometer as part of a comprehensive eye examination. Measurement: Measured values of intraocular pressure are influenced by corneal thickness and rigidity. As a result, some forms of refractive surgery (such as photorefractive keratectomy) can cause traditional intraocular pressure measurements to appear normal when in fact the pressure may be abnormally high. A newer transpalpebral and transscleral tonometry method is not influenced by corneal biomechanics and does not need to be adjusted for corneal irregularities, as the measurement is made over the upper eyelid and sclera. Classification: Current consensus among ophthalmologists and optometrists defines normal intraocular pressure as that between 10 mmHg and 20 mmHg. The average value of intraocular pressure is 15.5 mmHg, with fluctuations of about 2.75 mmHg. Ocular hypertension (OHT) is defined by intraocular pressure being higher than normal, in the absence of optic nerve damage or visual field loss. Ocular hypotension, hypotony, or ocular hypotony, is typically defined as intraocular pressure equal to or less than 5 mmHg. Such low intraocular pressure could indicate fluid leakage and deflation of the eyeball. Influencing factors: Daily variation Intraocular pressure varies throughout the night and day. The diurnal variation for normal eyes is between 3 and 6 mmHg, and the variation may increase in glaucomatous eyes. During the night, intraocular pressure may not decrease despite the slower production of aqueous humour. Glaucoma patients' 24-hour IOP profiles may differ from those of healthy individuals. Fitness and exercise Research on whether exercise affects IOP is inconclusive, with some studies reporting increases and others decreases. Musical instruments Playing some musical wind instruments has been linked to increases in intraocular pressure. A 2011 study focused on brass and woodwind instruments observed "temporary and sometimes dramatic elevations and fluctuations in IOP".
Another study found that the magnitude of increase in intraocular pressure correlates with the intraoral resistance associated with the instrument, and linked intermittent elevation of intraocular pressure from playing high-resistance wind instruments to the incidence of visual field loss. The range of intraoral pressure involved in various classes of ethnic wind instruments, such as Native American flutes, has been shown to be generally lower than that of Western classical wind instruments. Influencing factors: Drugs Intraocular pressure also varies with a number of other factors such as heart rate, respiration, fluid intake, systemic medication and topical drugs. Alcohol and marijuana consumption leads to a transient decrease in intraocular pressure, and caffeine may increase intraocular pressure. Taken orally, glycerol (often mixed with fruit juice to reduce its sweet taste) can cause a rapid, temporary decrease in intraocular pressure. This can be a useful initial emergency treatment of severely elevated pressure. The depolarising muscle relaxant succinylcholine, which is used in anaesthesia, transiently increases IOP by around 10 mmHg for a few minutes. This is significant, for example, if the patient requires anaesthesia for a trauma and has sustained an eye (globe) perforation. The mechanism is not clear, but it is thought to involve contraction of tonic myofibrils and transient dilation of choroidal blood vessels. Ketamine also increases IOP. Significance: Ocular hypertension is the most important risk factor for glaucoma. Intraocular pressure has been measured as an outcome in a systematic review comparing the effect of neuroprotective agents in slowing the progression of open angle glaucoma. Differences in pressure between the two eyes are often clinically significant, and potentially associated with certain types of glaucoma, as well as iritis or retinal detachment. Significance: Intraocular pressure may become elevated due to anatomical problems, inflammation of the eye, genetic factors, or as a side-effect of medication. The behaviour of intraocular pressure follows fundamentally from physics, and any kind of intraocular surgery should take intraocular pressure fluctuation into account. A sudden increase in intraocular pressure can lead to intraocular micro-barotrauma and cause ischemic effects and mechanical stress to the retinal nerve fiber layer. A sudden drop in intraocular pressure can lead to intraocular decompression that generates micro-bubbles, potentially causing multiple micro-emboli and leading to hypoxia, ischemia and damage to the retinal microstructure.
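As a numeric illustration of Goldmann's equation from the Physiology section above, here is a minimal sketch; the input values are assumed, typical-range figures, not measurements from this article:

```python
def goldmann_iop(F, U, C, Pv):
    """Po = (F - U) / C + Pv.

    F, U in uL/min; C in uL/min/mmHg; Pv in mmHg. Returns Po in mmHg.
    """
    return (F - U) / C + Pv

# Assumed illustrative values: 2.5 uL/min formation, 0.4 uL/min uveoscleral
# outflow, 0.25 uL/min/mmHg outflow facility, 8 mmHg episcleral venous pressure.
print(goldmann_iop(F=2.5, U=0.4, C=0.25, Pv=8.0))  # 16.4 mmHg, inside the 10-20 mmHg norm
```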
**Heavy mineral sands ore deposits** Heavy mineral sands ore deposits: Heavy mineral sands are a class of ore deposit which is an important source of zirconium, titanium, thorium, tungsten, rare-earth elements, the industrial minerals diamond, sapphire and garnet, and occasionally precious metals or gemstones. Heavy mineral sands are placer deposits formed most usually in beach environments by concentration due to the specific gravity of the mineral grains. It is equally likely that some concentrations of heavy minerals (aside from the usual gold placers) exist within streambeds, but most are of a low grade and are relatively small. Grade and tonnage distribution: The grade of a typical heavy mineral sand ore deposit is usually low. In the 21st century, the lowest cut-off grade of heavy minerals, as a total heavy mineral (THM) concentrate from the bulk sand, in most ore deposits of this type is around 1% heavy minerals, although several deposits are higher grade. Grade and tonnage distribution: Of this total heavy mineral concentrate (THM), the components are typically: zircon, from 1% to upwards of 50% of THM; ilmenite, generally 10% to 60% of THM; rutile, from 5% to 25% of THM; leucoxene, from 1% to 10% of THM; gangue, typically quartz, magnetite, garnet, chromite and kyanite, which usually accounts for the remaining bulk of the THM content; and slimes, typically the minerals above plus heavy clay minerals, too fine to be economically extracted. Generally, as zircon is the most valuable and a critical ore component, high-zircon sands are the most valuable; rutile, leucoxene and then ilmenite follow in terms of the value they give to the ore. As a generality, the valuable components of the THM concentrate rarely exceed 30%. Grade and tonnage distribution: Being ancient stranded dune systems, most deposits have tonnages in excess of several tens of millions of tonnes to several hundred million tonnes. For example, the medium-sized Coburn mineral sands deposit, Western Australia, is 230 million tonnes at 1.1% heavy minerals, and is 13 km long. Grade and tonnage distribution: The Tormin mine, on the coastline of South Africa's Western Cape, occupies a unique ocean-beach setting which exposes the resource to wave and tidal conditions that produce a natural jigging effect. Large proportions of the quartz and light-heavies waste material are removed by the ocean's tidal action, resulting in run-of-mine (ROM) grades as high as 86% heavy mineral concentrate (HMC). Sources: Heavy mineral sands originate from a hardrock source within the erosional area of a river, which carries its load of sediment into the ocean, where the sediments are caught up in littoral or longshore drift. Rocks eroded directly by wave action occasionally shed detritus, which is caught up in longshore drift and washed up onto beaches, where the lighter minerals are winnowed away. Sources: The source rocks which provide the heavy mineral sands determine the composition of the economic minerals. The source of zircon, monazite, rutile, sometimes tungsten, and some ilmenite is usually granite. The source of ilmenite, garnet, sapphire and diamond is ultramafic and mafic rocks, such as kimberlite or basalt. Garnet is also sourced commonly from metamorphic rocks, such as amphibolite schists. Precious metals are sourced from ore deposits hosted within metamorphic rocks.
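As a back-of-envelope check on the grade-and-tonnage figures above, the Coburn example (230 million tonnes at 1.1% THM) can be turned into contained-mineral tonnages. The zircon share of THM below is an assumed illustrative value within the 1–50% range quoted, not a figure from this article:

```python
# Contained-mineral arithmetic for a heavy mineral sands deposit.
ore_tonnes = 230e6            # total sand, tonnes (Coburn example)
thm_grade = 0.011             # 1.1% total heavy minerals (THM)
zircon_share_of_thm = 0.20    # assumed: zircon is 20% of the THM concentrate

thm_tonnes = ore_tonnes * thm_grade                 # ~2.53 million t of THM
zircon_tonnes = thm_tonnes * zircon_share_of_thm    # ~0.51 million t of zircon
print(f"{thm_tonnes:.3g} t THM, {zircon_tonnes:.3g} t zircon")
```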
Transport: The accumulation of a heavy mineral deposit requires a supply of heavy-mineral-bearing sediment onto a beach system at a rate which exceeds the rate of removal from the trap site. For this reason, not all beaches supplied with sands containing heavy minerals will form economic concentrations of the minerals. This factor can be qualitatively or quantitatively measured through the ZTR index. Trap: The heavy minerals within the source sediments attain an economic concentration by accumulating within low-energy environments in streams and, most usually, on beaches. In beach placer deposits the lowest-energy zone on the beach is the swash zone, where turbulent surf washes up on the beach face and loses energy. In this zone heavier grains accumulate and become stranded because they are denser than the quartz grains they accompany. It is for this reason that beach placer deposits are often referred to as "strand-line deposits". Trap: The size and position of a heavy mineral deposit is a function of the wave energy reaching the beach, the mean grain size of the beach sediments, and the current height of the ocean. Trap: Anecdotal reports of certain beach placers forming in modern times suggest that the greatest enrichment tends to occur in storm events energetic enough to remove most of the beach's sediment load, preferentially stripping out the lighter minerals. The resultant 'clinker' sands left behind were mined at low tide following major storm events, suggesting that most beach placer deposits are formed during such cycles. Trap: Fossilised dune systems are often exploited for heavy mineral sands because they are stranded remnants of previous interglacial sea-level highstands. Trap: Tectonic activity, which results in coastlines rising from the ocean, may also cause a beach system to become stranded above the high-water mark and lock in the heavy mineral sands. Similarly, a beach system that is drowned by the subsidence of a coastline may be preserved, often for millions of years, until it either is covered by sedimentation or rises from the ocean. Trap: Specific trap sites for heavy mineral sand placer deposits are beaches on the leeward side of headlands, where low-energy zones trap sediments carried along by the longshore drift. Sand bars developed at the mouths of the rivers that feed the placer deposits are also rich trap sites where the winnowing action of the waves is most efficient, because minerals too heavy to be moved will deposit at an isthmus in preference to drifting farther down the beach. Diamond sands: The coast of Namibia hosts economic diamantiferous beach sands, which are exploited by building sea walls and isolating stretches of coastline. The beaches are so isolated that they are sometimes processed in their entirety, down to the bedrock, in search of diamonds. Such deposits have been sought around the world, with sporadic reports of high-value stones but no instances of economic quantities of sediment. Environmental concerns: The mining of beach sands and of fossilized beach placers is often controversial because the operation requires the strip mining of large areas. Often this land is in ecologically sensitive surroundings and contains fragile ecosystems built up on poor sandy soils.
Environmental concerns: The mining process is ideally modelled on the extraction operations underway in Australia, where the strip mining is followed by rehabilitation of the mined areas, including intensive re-vegetation with ecologically similar species, re-contouring of the land to its original shape, including dunes, and management of groundwater resources. Modern mining practices tend to favor dry mining rather than dredging operations, due to the advent of electrostatic mineral separation processes. Environmental concerns: In practice, not all mining of sub-Saharan African deposits is carried out in such an environmentally responsible manner, although some South African mines do practice dune rehabilitation. The mining of the coast of South America, in particular Chile and Ecuador, is carried out in an environmentally responsible manner. Environmental concerns: Examples of environmentally and politically sensitive mineral sand mining operations that have gained public attention and galvanised environmental activism in response to mining proposals include the Tuart-Ludlow mineral sands mine, Western Australia, and the culmination of conservationist efforts to preserve Rainbow Beach and Fraser Island, Queensland, Australia. The latter campaigns successfully lobbied government and saw Fraser Island and Rainbow Beach protected by the High Court of Australia; however, the Tuart-Ludlow campaign failed to prevent mining works in the Tuart forests in coastal Western Australia. Similar mineral sand mining operations were carried out for 35 years in and adjacent to national parks at Hawks Nest, New South Wales, and continue on Stradbroke Island, Queensland.
**Polyalkylimide** Polyalkylimide: Polyalkylimide is a polymer whose structure contains no free monomers. It is used in permanent dermal fillers to treat soft tissue deficits such as facial lipoatrophy, gluteal atrophy, acne, and scars. In plastic and reconstructive surgery it is used for building facial volume in the cheeks, chin, jaw, and lips. Reports of infections and migration of polyalkylimide in the face have led Canada to remove it from the market and the manufacturer of Bio-Alcamid to cease production. A class action lawsuit was filed against the company.
**Kepler-63** Kepler-63: Kepler-63 is a G-type main-sequence star about 638 light-years away. The star is much younger than the Sun, at 0.21 billion years old. Kepler-63 is similar to the Sun in its concentration of heavy elements. Kepler-63: The star exhibits strong starspot activity, with relatively cold (4700±300 K) starspots concentrated in two mid-latitude bands, similar to the Sun, changing their position in a cycle with a period of 1.27±0.16 years. Due to the high magnetic activity associated with its young age, Kepler-63 has a very hot corona heated to 8 million degrees, and produces over ten times as many X-rays as the Sun. Multiplicity surveys had not detected any stellar companions to Kepler-63 as of 2016. Planetary system: In 2013, a transiting hot Jupiter, planet b, was detected on a tight orbit. The orbit is nearly polar with respect to the equatorial plane of the star.
**Amiga Hunk** Amiga Hunk: Hunk is the executable file format of tools and programs of the Amiga Operating System, based on the Motorola 68000 CPU and other processors of the same family. The file format was originally defined by MetaComCo as part of TRIPOS, which formed the basis for AmigaDOS. This kind of executable got its name from the fact that software programmed on the Amiga is divided internally into many pieces called hunks, each of which can contain either code or data. Hunk structure: The hunks in an Amiga executable file exist in various types. There are 32-bit hunks, 16-bit hunks, and even some 8-bit hunks. Hunk structure: Types of hunks were standardized in AmigaOS, and well documented in The AmigaDOS Manual published by Commodore to explain to programmers how to code on the Amiga, during the years in which Commodore manufactured Amiga computers. Their structure was officially codified and could be changed only by a Commodore committee, which then communicated the modifications to the developers for new releases of the Amiga operating system. Hunk structure: The structure of an Amiga hunk is very simple: there is a header at the beginning of the hunk indicating that this "portion of code" is a known and valid Amiga hunk type, then follows an ID which indicates the length of the hunk itself, and at the bottom is the segment of the hunk which contains the real code or data. Features of Amiga executable files: Amiga executable files can be launched either from the graphical shell of the Amiga, the Workbench, or from the Amiga's command line interpreter (called CLI, later AmigaShell). Features of Amiga executable files: No particular filename extension is required for Amiga executable files. For example, the calculator applet "Calculator" can be renamed to "Calculator.com", "Calculator.exe", "Calculator.bin", or even "Calculator.jpeg". These are all valid names for programs or tools, because AmigaOS does not differentiate between filename extensions. AmigaOS adopted another method to recognize that it is dealing with a valid executable: there is a particular sequence of bytes in the file header, yielding the hexadecimal value $000003f3. This sequence, which signifies an executable file and lets it be self-running, is called a magic cookie (from the magic cookies in Alice's Adventures in Wonderland by Lewis Carroll). This kind of solution to identify executables on the Amiga was taken from similar solutions adopted by UNIX/Unix-like operating systems, where magic cookies are called magic numbers. Structure of an Amiga executable file: The internal structure of an Amiga executable file is very simple. At the beginning of the file there is the magic cookie, then the total number of hunks in the executable is declared, followed by the progressive numbering of the hunks starting from "0" (zero). The first hunk is always numbered zero, so if the executable is (for example) subdivided into three hunks, they will be numbered "0" for the first one, "1" for the second and "2" for the third. Just before the real hunks start there is a table containing information about the length of every hunk present in the executable, and in the last part of the file are positioned the real hunks, each one described by its type name: HUNK_CODE, HUNK_DATA, et cetera.
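To make the layout just described concrete, here is a minimal Python sketch that reads the start of a Hunk executable. It follows the header layout described above (magic cookie, resident-library name list, hunk count, first/last hunk numbers, then one size entry per hunk); the function name is ours, and the handling of the memory-type flag bits is simplified:

```python
import struct

HUNK_HEADER = 0x000003F3  # the "magic cookie" marking an Amiga executable

def read_hunk_header(data):
    """Parse the header of an Amiga Hunk executable (big-endian 32-bit words)."""
    count = len(data) // 4
    words = struct.unpack(">%dI" % count, data[: count * 4])
    if words[0] != HUNK_HEADER:
        raise ValueError("not a Hunk executable: magic cookie missing")
    i = 1
    while words[i] != 0:       # skip the (normally empty) resident library names
        i += words[i] + 1
    i += 1
    table_size, first, last = words[i], words[i + 1], words[i + 2]
    n = last - first + 1       # number of size entries that follow
    # Each size entry counts 32-bit longwords; the top bits flag memory type
    # (chip/fast RAM), so they are masked off here.
    sizes = [(w & 0x3FFFFFFF) * 4 for w in words[i + 3 : i + 3 + n]]
    return table_size, first, last, sizes  # sizes in bytes

# e.g. with open("Calculator", "rb") as f: print(read_hunk_header(f.read()))
```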
Representation of the structure: Hunk Types: (a table of the known hunk types for the Amiga, including the Extended Hunk Format, is omitted here) Metadata: The Amiga could save metadata into hunks, as the hunk structure could easily be adapted to support this feature, but the hunk format of executables was abandoned in favour of ELF, and there is no central authority (such as the defunct Commodore) which could implement this feature as one of the Amiga standards. The Amiga saves some metadata into sidecar files known as ".info" files (so called from the name of their extension suffix). ".info" files may be created any time a project (datafile) is saved to disk. Example: when the user saves a file called "MyProject", two files may be created on disk, called "MyProject" and "MyProject.info". Metadata: The "MyProject" file contains the real data of the project, while the "MyProject.info" file contains the icon and the information regarding the software which originated the file, so any time the project icon is invoked by clicking on it with the mouse, the parent software will be opened (users can change this information at any time, allowing other programs to believe they created the project file rather than the original software which physically created it). Metadata: Application binding does not exist in AmigaOS as it does in other systems like MacOS. The ".info" file also contains some particular characteristics of the project file and the user comments. ".info" files do not appear on the Workbench screen (Workbench is the default Amiga desktop GUI). On the desktop screen only the icon of the project file, taken out of the ".info" file, appears. In fact the icon is the virtual medium that connects the project itself and the metadata stored in the ".info" file. When the user clicks on the icon with the left mouse button, the project's ".info" calls the program which originated it. When the user clicks on the icon and chooses the appropriate menu item, a dialog box appears, allowing the user to interact with the metadata contained in the ".info" file. The ".info" files are copied or moved together with their associated project file by moving the icon with the mouse, and can be viewed as standalone files through the command line interfaces of the Amiga, such as AmigaShell, or using third-party file managers or directory listers like Directory Opus or DiskMaster. Metadata: If the ".info" file represents an executable program, then it contains information about the stack space that should be reserved for the executable (e.g. 4096, 8192, 16384 or more bytes of RAM) and even the arguments that could otherwise be invoked using a command line interface. For example, an Amiga program could open its own graphical user interface screen, independent of the desktop screen. By entering arguments such as "Screen=800x600" and "Depth=8" into the ".info" file dialog box, the user can save this information into the associated ".info" file, and the program would then open into its own screen sized 800×600 with 8 bits of colour depth (equal to 256 colors). Metadata: The user can also delete ".info" files, but then they renounce the benefits of having an icon representing the project file on the desktop, and also lose all the metadata contained in the file. Metadata: Icons: The icons contained in ".info" metadata files are raw bitmap data; they are not standard Amiga IFF/LBM files.
Users can edit icons with the AmigaOS standard program "IconEdit", present in the operating system since its early versions. Starting from AmigaOS version 2.0, IconEdit could import and save normal IFF/LBM files, used as standard graphics files in AmigaOS. Some Amiga programs, like Personal Paint from Cloanto, are able to view, load and save bitmap data as normal Amiga icons or into already-existing Amiga ".info" files. Metadata: Legacy Amiga icons can have two states, using two different bitmap images. The first bitmap contains the data of the "quiet" icon, also known as the "quiet state" of the icon. The second bitmap image contains the data of the "selected" state of the icon. When the user clicks on an icon and activates it, the quiet icon bitmap data is replaced by the selected icon bitmap data. This behaviour gives Amiga icons the effect of a small animation. In case this second bitmap does not exist in the ".info" file (it is not mandatory to create both bitmaps), an inverse colour effect is used when the icon is selected. Metadata: Third-party icon "engines" exist which try to keep the look of AmigaOS up to date with the modern standards of other operating systems. These programs patch the OS routines dedicated to icon handling, replacing them with custom ones. One such attempt, NewIcons, became almost the de facto standard for AmigaOS 3.x. It was so popular that the new icon system used in AmigaOS 3.5 and above, GlowIcons, is based on its icon file format. Metadata: All modern Amiga-like operating systems (AmigaOS 4, MorphOS and AROS) can associate raw bitmap data, IFF/LBM files, or PNG files as the standard internal bitmap image of any icon. Overlaid executables: The HUNK_OVERLAY type was intended to reduce the amount of RAM needed to run a program. Executables with an overlay structure have a root node which is in memory at all times, and the rest of the program is split into smaller modules which are loaded and unloaded automatically when needed. The overlay format works by adding small stubs to the code, so that a branch into a sub-module calls an overlay manager, which loads the requisite module. Commodore defined a standard overlay manager so that C code could automatically have these stubs inserted, and also generate an overlay table which the standard overlay manager knew how to read. Overlaid executables: However, the overlay format was rarely used, especially in the way it was intended; it was more commonly used with a custom overlay manager. A popular use of the overlay format was with the Titanics Cruncher, which compressed executables. Instead of loading the entire compressed executable into memory before unpacking, the Titanics Cruncher used an overlay, so only a tiny decruncher was loaded into memory, which then read and decompressed data as it went. Other executable file formats used on Amiga: With third-party add-ons, AmigaOS up to 3.9 recognizes various kinds of executable files other than the Hunk format created for the Motorola 68000. ELF Phase5 implemented ELF executables for its PowerUP accelerator boards. The format was found cumbersome due to its dynamic linking. It was later adopted as standard by AmigaOS 4.0, MorphOS and AROS. ELF support was added to WarpUp by third-party developers, and Hyperion Entertainment released a number of WarpUp games in ELF format only. Other executable file formats used on Amiga: Extended Hunk Format In 1997, Haage & Partner developed the WarpUp PowerPC kernel for PowerUP accelerator boards.
Instead of the ELF binary format, they chose to extend the existing hunk format. The problem with the ELF binary format was that users had to patch their system to load ELF executables, and mixing PPC/68k code was not possible. The Extended Hunk Format (EHF), developed by Haage & Partner, allowed PPC and 68k code to be mixed in a single executable and worked without modifying the existing system if a PowerPC accelerator was not installed. Other executable file formats used on Amiga: AmigaOS 4 and MorphOS AmigaOS 4.0 and MorphOS can run ELF natively, but as these systems were designed to run on PowerPC processor-based machines, the developers also added compatibility for WarpUp software, used in AmigaOS 3.9. In addition, MorphOS implements compatibility with PowerUp software as implemented by Phase5 for PowerUP accelerator cards. Both new operating systems can also run the Amiga Hunk format, because they implement the old Amiga API environment based on AmigaOS 3.1 and can run 68000 code through emulation. Notes: See also the pages regarding the history of the PPC processor on the Amiga at the Amiga History site. EHF specifications (also in English) are available on the Haage & Partner site.
**Barrier pointing** Barrier pointing: Barrier pointing (or "edge pointing") is a term used in human–computer interaction to describe a design technique in which targets are placed on the peripheral borders of touchscreen interfaces to aid motor control. Where targets are placed alongside raised edges on mobile devices, the user has a physical barrier to aid navigation, useful for situational impairments such as walking; similarly, screen edges that stop the cursor mean that targets placed along them require less precise movements to select. This allows the most common or important functions to be placed on the edge of a user interface, while other functions that may require more precision can utilise the interface's 'open space'. Barrier pointing: Barrier pointing is also a term used in accessible design, as a design technique that makes targets easier to press. For example, barrier pointing using raised edges on touchscreens, alongside a stylus and a 'lift-off' or 'take-off' selection mode, can improve usability for a user with cerebral palsy. One example of assistive technology focused on barrier pointing is the SUPPLE system, which redesigns the size, shape, and arrangement of interfaces based on its measurement of motor articulation input.
**N-Acetylglutamate synthase deficiency** N-Acetylglutamate synthase deficiency: N-Acetylglutamate synthase deficiency is an autosomal recessive urea cycle disorder. Signs and symptoms: The symptoms are visible within the first week of life, and if the condition is not detected and correctly diagnosed immediately, the consequences are fatal. Genetics: The chromosome carrying the gene that encodes N-acetylglutamate synthase is chromosome 17q (q denotes the long arm of the chromosome) in humans and chromosome 11 in mice. In both organisms, the gene consists of seven exons and six introns plus non-coding sequence. The cause of this disorder is a single-base deletion that leads to a frameshift mutation, and thus an error in the gene's coding for this specific enzyme. Mechanism: Carbamoyl phosphate synthase I is an enzyme found in the mitochondrial matrix, and it catalyzes the very first reaction of the urea cycle, in which carbamoyl phosphate is produced. Carbamoyl phosphate synthase I, abbreviated CPS1, is activated by its natural activator N-acetylglutamate, which in turn is synthesized from acetyl-CoA and glutamic acid in a reaction catalyzed by N-acetylglutamate synthase, commonly called NAGS. N-acetylglutamate is required for the urea cycle to take place. Deficiency in N-acetylglutamate synthase, or a mutation in the gene coding for the enzyme, leads to urea cycle failure in which ammonia is not converted to urea but rather accumulates in the blood, leading to the condition called type I hyperammonemia. This is a severe neonatal disorder with fatal consequences if not detected immediately upon birth. Early symptoms include lethargy, vomiting, and deep coma. Treatment: Although there is currently no cure, treatment includes injections of a structurally similar compound, carglumic acid, an analogue of N-acetylglutamate. This analogue likewise activates CPS1. This treatment mitigates the intensity of the disorder. If symptoms are detected early enough and the patient is injected with this compound, the degree of severe mental retardation can be slightly lessened, but brain damage is irreversible. Other treatments include hemodialysis for an emergent hyperammonemic crisis; sodium benzoate, sodium phenylacetate and sodium phenylbutyrate; a low-protein diet supplemented with an essential amino acid mixture plus arginine and citrulline; experimental attempts at gene therapy; liver transplantation (which is curative); and N-carbamylglutamate supplementation.
**NanoGagliato** NanoGagliato: NanoGagliato is an invitational gathering of scientists, physicians, business leaders, artists, and researchers to discuss the most current challenges and opportunities in the fields of nanomedicine and the nanosciences from a multidisciplinary perspective. This series of events takes place each year, at the end of July, in the town of Gagliato, Calabria, Italy. Format: During two days of intense scientific exchanges, the participants in NanoGagliato address the challenges of translating research to the clinic by deploying technological advances born in the field of nanotechnology. On the last two days of the event, the group goes on excursions to renowned localities in Calabria and neighboring regions. Format: Public session The culminating event of NanoGagliato is a town-hall-style meeting, traditionally attended by hundreds of people of all ages from Gagliato and neighboring towns. The event is organized in concert with local citizens' associations and public institutions, and is held in Gagliato's town square. Highlights of the attending scientists' research are presented. Time is reserved for an open Q&A session, where members of the public are encouraged to ask about the impact of the presented research, the promise of new treatments for disease, and any ethical concerns. Founding session: The first NanoGagliato was convened in 2008 by Mauro Ferrari, Ph.D., with the help of his wife, Paola, and hosted at his summer residence in Gagliato. Attendees were asked to provide their own transportation. Several countries were represented, including Japan, the United Kingdom, Portugal, the United States, and France. Hospitality was kindly provided by local private residents. Establishment of L'Accademia di Gagliato delle Nanoscienze At the end of the first NanoGagliato, the participants agreed to establish a non-profit association, named L'Accademia di Gagliato delle Nanoscienze, and its children's chapter, La Piccola Accademia. These associations are now the organizers of the NanoGagliato events and are supported by donations from individuals and corporations. Successive sessions: During the 2010 session, four scholarships were awarded to four young Italian researchers in the field of biomedical engineering, in honor of Prof. Salvatore Venuta, the late Magnifico Rettore of the Magna Græcia University of Catanzaro. As part of the award, the finalists were invited to join the NanoGagliato events alongside the scientists. Children's activities: La Piccola Accademia di Gagliato is the children's chapter of L'Accademia di Gagliato. The first series of educational activities for schoolchildren was launched in the summer of 2010. Inspired by the NanoDays developed by the NISE (Nanoscale Informal Science Education) Network, La Piccola Accademia organized a lively and very successful program including games, presentations, trading cards, and a Q&A session with the scientists. Approximately fifty children from Gagliato and nearby towns attended the first session. Recognition of Gagliato: In recognition of the unique role that the town of Gagliato has come to play as an international magnet for global leaders in nanotechnology, and as host of the NanoGagliato events, Gagliato has received the official appellation of "Paese delle NanoScienze" (town of the Nanosciences), conferred by the City Council.
**Mixed-valence complex** Mixed-valence complex: Mixed-valence complexes contain an element which is present in more than one oxidation state. Well-known mixed-valence compounds include the Creutz–Taube complex, Prussian blue, and molybdenum blue. Many solids are mixed-valence, including indium chalcogenides. Robin–Day classification: Mixed-valence compounds are subdivided into three groups, according to the Robin–Day classification: Class I, where the valences are trapped (localized on a single site), such as Pb3O4 and antimony tetroxide. There are distinct sites with different specific valences in the complex that cannot easily interconvert. Robin–Day classification: Class II, which are intermediate in character. There is some localization of distinct valences, but there is a low activation energy for their interconversion. Some thermal activation is required to induce electron transfer from one site to another via the bridge. These species exhibit an intense intervalence charge transfer (IT or IVCT) band, a broad intense absorption in the infrared or visible part of the spectrum, and also exhibit magnetic exchange coupling at low temperatures. The degree of interaction between the metal sites can be estimated from the absorption profile of the IVCT band and the spacing between the sites. This type of complex is common when the metals are in different ligand fields. For example, Prussian blue is an iron(II,III)–cyanide complex in which an iron(II) atom, surrounded by the six carbon atoms of six cyanide ligands, is bridged to an iron(III) atom via their nitrogen ends. In the Turnbull's blue preparation, an iron(II) solution is mixed with an iron(III) cyanide (C-linked) complex. An electron-transfer reaction occurs via the cyanide ligands to give iron(III) associated with an iron(II)–cyanide complex. Robin–Day classification: Class III, wherein the mixed valence is not distinguishable by spectroscopic methods, as the valence is completely delocalized. The Creutz–Taube complex is an example of this class. These species also exhibit an IT band. Each site exhibits an intermediate oxidation state, which can be half-integer in value. This class is possible when the ligand environment is similar or identical for each of the two metal sites in the complex. In fact, Robson-type dianionic tetraimino-diphenolate ligands, which provide equivalent N2O2 environments for two metal centres, have stabilized class III mixed-valence diiron complexes. The bridging ligand needs to be very good at electron transfer, be highly conjugated, and be easily reduced. Robin–Day classification: Creutz–Taube ion The Creutz–Taube complex is a robust, readily analyzed mixed-valence complex consisting of otherwise equivalent Ru(II) and Ru(III) centers bridged by pyrazine. This complex serves as a model for the bridged intermediate invoked in inner-sphere electron transfer. Mixed valence organic compounds: Organic mixed-valence compounds are also known. Mixed valency in fact seems to be required for organic compounds to exhibit electrical conductivity.
**Personal pronoun** Personal pronoun: Personal pronouns are pronouns that are associated primarily with a particular grammatical person – first person (as I), second person (as you), or third person (as he, she, it, they). Personal pronouns may also take different forms depending on number (usually singular or plural), grammatical or natural gender, case, and formality. The term "personal" is used here purely to signify the grammatical sense; personal pronouns are not limited to people and can also refer to animals and objects (as the English personal pronoun it usually does). Personal pronoun: The re-use in some languages of one personal pronoun to indicate a second personal pronoun with formality or social distance – commonly a second person plural to signify second person singular formal – is known as the T–V distinction, from the Latin pronouns tu and vos. Examples are the majestic plural in English and the use of vous in place of tu in French. Personal pronoun: For specific details of the personal pronouns used in the English language, see English personal pronouns. Types and forms: Pronoun vs pro-form Pronoun is a category of words. A pro-form is a type of function word or expression that stands in for (expresses the same content as) another word, phrase, clause or sentence where the meaning is recoverable from the context. Pronouns mostly function as pro-forms, but there are pronouns that are not pro-forms and pro-forms that are not pronouns.[p. 239] [1] It's a good idea. (pronoun and pro-form) [2] It's raining. (pronoun but not pro-form) [3] I asked her to help, and she did so right away. (pro-form but not pronoun) In [1], the pronoun it "stands in" for whatever was mentioned and is a good idea. In [2], the pronoun it doesn't stand in for anything. No other word can function there with the same meaning; we don't say "the sky is raining" or "the weather is raining". So, it is a pronoun but not a pro-form. Finally, in [3], did so is a verb phrase, not a pronoun, but it is a pro-form standing for "help". Types and forms: Person and number Languages typically have personal pronouns for each of the three grammatical persons: first-person pronouns normally refer to the speaker, in the case of the singular (as the English I), or to the speaker and others, in the case of the plural (as the English we). second-person pronouns normally refer to the person or persons being addressed (as the English you); in the plural they may also refer to the person or persons being addressed together with third parties. third-person pronouns normally refer to third parties other than the speaker or the person being addressed (as the English he, she, it, they). As noted above, within each person there are often different forms for different grammatical numbers, especially singular and plural. Languages which have other numbers, such as dual (e.g. Slovene), may also have distinct pronouns for these. Types and forms: Some languages distinguish between inclusive and exclusive first-person plural pronouns – those that do and do not include their audience. For example, Tok Pisin has seven first-person pronouns according to number (singular, dual, trial, plural) and clusivity, such as mitripela ("they two and I") and yumitripela ("you two and I"). Some languages do not have third-person personal pronouns, instead using demonstratives (e.g. Macedonian) or full noun phrases. 
Latin used demonstratives rather than third-person pronouns (in fact the third-person pronouns in the Romance languages are descended from the Latin demonstratives). Types and forms: In some cases personal pronouns can be used in place of indefinite pronouns, referring to someone unspecified or to people generally. In English and other languages the second-person pronoun can be used in this way: instead of the formal one should hold one's oar in both hands (using the indefinite pronoun one), it is more common to say you should hold your oar in both hands. Types and forms: Gender Personal pronouns, particularly those of the third person, differ depending on the gender of their antecedent or referent. This occurs in English with the third-person singular pronouns, where (simply put) he is used when referring to a man, she to a woman, singular they to a person whose gender is unknown or unspecified at the time that the pronoun is being used or to a person who does not identify as either a man or a woman, and it to something inanimate or an animal of unspecific sex. This is an example of pronoun selection based on natural gender; many languages also have selection based on grammatical gender (as in French, where the pronouns il and elle are used with masculine and feminine antecedents respectively, as are the plurals ils and elles). Sometimes natural and grammatical gender do not coincide, as with the German noun Mädchen ("girl"), which is grammatically neuter but naturally feminine. (See Grammatical gender § Grammatical vs. natural gender for more details.) Issues may arise when the referent is someone of unspecified or unknown gender. In a language such as English, it is derogatory to use the inanimate pronoun it to refer to a person (except in some cases to a small child), and although it is traditional to use the masculine he to refer to a person of unspecified gender, the movement towards gender-neutral language requires that another method be found, such as saying he or she. A common solution, particularly in informal language, is to use singular they. For more details see Gender in English. Types and forms: Similar issues arise in some languages when referring to a group of mixed gender; these are dealt with according to the conventions of the language in question (in French, for example, the masculine ils "they" is used for a group containing both men and women or antecedents of both masculine and feminine gender). Types and forms: A pronoun can still carry gender even if it does not inflect for it; for example, in the French sentence je suis petit ("I am small") the speaker is male and so the pronoun je is masculine, whereas in je suis petite the speaker is female and the pronoun is treated as feminine, the feminine ending -e consequently being added to the predicate adjective. Types and forms: On the other hand, many languages do not distinguish female and male in the third person pronoun. 
Types and forms: Some languages have or had a non-gender-specific third person pronoun:
Malay (including Indonesian and Malaysian standards), Malagasy of Madagascar, Philippine languages, Māori, Rapa Nui, Hawaiian, and other Austronesian languages
Chinese, Burmese, and other Sino-Tibetan languages
Vietnamese and other Mon–Khmer languages
Igbo, Yoruba, and other Volta-Niger languages
Swahili and other Bantu languages
Haitian Creole
Turkish and other Turkic languages
Luo and other Nilo-Saharan languages
Hungarian, Finnish, Estonian, and other Uralic languages
Hindi-Urdu
Georgian
Japanese
Armenian
Korean
Mapudungun
Basque
Persian
Some of these languages started to distinguish gender in the third person pronoun due to influence from European languages. Mandarin, for example, introduced in the early 20th century a different character for she (她), which is pronounced identically to he (他) and thus is still indistinguishable in speech (tā). Types and forms: Korean geunyeo (그녀) is found in writing to translate "she" from European languages. In the spoken language it still sounds awkward and rather unnatural, as it literally translates to "that female". Types and forms: Formality Many languages have different pronouns, particularly in the second person, depending on the degree of formality or familiarity. It is common for different pronouns to be used when addressing friends, family, children and animals than when addressing superiors and adults with whom the speaker is less familiar. Examples of such languages include French, where the singular tu is used only for familiars, the plural vous being used as a singular in other cases (Russian follows a similar pattern); German, where the third-person plural sie (capitalized as Sie) is used as both singular and plural in the second person in non-familiar uses; and Polish, where the noun pan ("gentleman") and its feminine and plural equivalents are used as polite second-person pronouns. For more details, see T–V distinction. Types and forms: Some languages, such as Japanese, Korean and many Southeast Asian languages like Vietnamese, Thai, and Indonesian, have pronouns that reflect deep-seated societal categories. In these languages there is generally a small set of nouns that refer to the discourse participants, but these referential nouns are not usually used (pronoun avoidance), with proper nouns, deictics, and titles being used instead (and once the topic is understood, usually no explicit reference is made at all). A speaker chooses which word to use depending on the rank, job, age, gender, etc. of the speaker and the addressee. For instance, in Japanese, in formal situations, adults usually refer to themselves as watashi or the even more polite watakushi, while young men may use the student-like boku and police officers may use honkan ("this officer"). In informal situations, women may use the colloquial atashi, and men may use the rougher ore. Types and forms: Case Pronouns also often take different forms based on their syntactic function, and in particular on their grammatical case. English distinguishes the nominative form (I, you, he, she, it, we, they), used principally as the subject of a verb, from the oblique form (me, you, him, her, it, us, them), used principally as the object of a verb or preposition. Languages whose nouns inflect for case often inflect their pronouns according to the same case system; for example, German personal pronouns have distinct nominative, genitive, dative and accusative forms (ich, meiner, mir, mich; etc.). 
Pronouns often retain more case distinctions than nouns – this is true of both German and English, and also of the Romance languages, which (with the exception of Romanian) have lost the Latin grammatical case for nouns, but preserve certain distinctions in the personal pronouns. Types and forms: Other syntactic types of pronouns which may adopt distinct forms are disjunctive pronouns, used in isolation and in certain distinct positions (such as after a conjunction like and), and prepositional pronouns, used as the complement of a preposition. Types and forms: Strong and weak forms Some languages have strong and weak forms of personal pronouns, the former being used in positions with greater stress. Some authors further distinguish weak pronouns from clitic pronouns, which are phonetically less independent. Examples are found in Polish, where the masculine third-person singular accusative and dative forms are jego and jemu (strong) and go and mu (weak). English has strong and weak pronunciations for some pronouns, such as them (pronounced /ðɛm/ when strong, but /ðəm/, /ɛm/, /əm/ or even /m̩/ when weak). Types and forms: Free vs. bound pronouns Some languages—for instance, most Australian Aboriginal languages—have distinct classes of free and bound pronouns. These are distinguished by their morphological independence/dependence on other words respectively. In Australian languages, it is common for free pronouns to be reserved exclusively for human (and sometimes other animate) referents. Examples of languages with animacy restrictions on free pronouns include Wanyjirra, Bilinarra, Warrongo, Guugu Yimidhirr and many others. Bound pronouns can take a variety of forms, including verbal prefixes (these are usually subject markers—see Bardi—but can mark objects as well—see Guniyandi), verbal enclitics (including possessive markers) and auxiliary morphemes. Examples of these various forms include free pronouns (Wangkatja), verb prefixes (Bardi), enclitics (Ngiyambaa), auxiliary morphemes (Wambaya), and possessive clitics (Ngaanyatjarra). Types and forms: Reflexive and possessive forms Languages may also have reflexive pronouns (and sometimes reciprocal pronouns) closely linked to the personal pronouns. English has the reflexive forms myself, yourself, himself, herself, themself, theirself, itself, ourselves, yourselves, theirselves, themselves (there is also oneself, from the indefinite pronoun one). These are used mainly to replace the oblique form when referring to the same entity as the subject of the clause; they are also used as intensive pronouns (as in I did it myself). Types and forms: Personal pronouns are also often associated with possessive forms. English has two sets of such forms: the possessive determiners (also called possessive adjectives) my, your, his, her, its, our and their, and the possessive pronouns mine, yours, his, hers, its (rare), ours, theirs (for more details see English possessive). In informal usage both types of words may be called "possessive pronouns", even though the former kind do not function in place of nouns, but qualify a noun, and thus do not themselves function grammatically as pronouns. Types and forms: Some languages, such as the Slavic languages, also have reflexive possessives (meaning "my own", "his own", etc.). These can be used to make a distinction from ordinary third-person possessives. For example, in Slovene: Eva je dala Maji svojo knjigo ("Eva gave Maja her [reflexive] book", i.e. Eva's own book) Eva je dala Maji njeno knjigo ("Eva gave Maja her [non-reflexive] book", i.e. 
Maja's book). The same phenomenon occurs in the North Germanic languages, for example Danish, which can produce the sentences Anna gav Maria sin bog and Anna gav Maria hendes bog, the distinction being analogous to that in the Slovene example above. Syntax: Antecedents Third-person personal pronouns, and sometimes others, often have an explicit antecedent – a noun phrase which refers to the same person or thing as the pronoun (see anaphora). The antecedent usually precedes the pronoun, either in the same sentence or in a previous sentence (although in some cases the pronoun may come before the antecedent). The pronoun may then be said to "replace" or "stand for" the antecedent, and to be used so as to avoid repeating the antecedent. Some examples: John hid and we couldn't find him. (John is the antecedent of him) After he lost his job, my father set up a small grocer's shop. (my father is the antecedent of he, although it comes after the pronoun) We invited Mary and Tom. He came but she didn't. (Mary is the antecedent of she, and Tom of he) I loved those bright orange socks. Can you lend them to me? (those bright orange socks is the antecedent of them) Jane and I went out cycling yesterday. We did 30 miles. (Jane and I is the antecedent of we) Sometimes pronouns, even third-person ones, are used without specific antecedent, and the referent has to be deduced from the context. In other cases there may be ambiguity as to what the intended antecedent is: Alan was going to discuss it with Bob. He's always dependable. (the meaning of he is ambiguous; the intended antecedent may be either Alan or Bob) Syntax: Pronoun dropping In some languages, subject or object pronouns can be dropped in certain situations (see Pro-drop language). In particular, in a null-subject language, it is permissible for the subject of a verb to be omitted. Information about the grammatical person (and possibly gender) of the subject may then be provided by the form of the verb. In such languages it is common for personal pronouns to appear in subject position only if they are needed to resolve ambiguity or if they are stressed. Syntax: Dummy pronouns In some cases pronouns are used purely because they are required by the rules of syntax, even though they do not refer to anything; they are then called dummy pronouns. This can be seen in English with the pronoun it in such sentences as it is raining and it is nice to relax. (This is less likely in pro-drop languages, since such pronouns would probably be omitted.) Capitalization: Personal pronouns are not normally capitalized, except in particular cases. In English the first-person subject pronoun I is always capitalized, and in some Christian texts the personal pronouns referring to Jesus or God are capitalized (He, Thou, etc.). In many European languages, but not English, the second-person pronouns are often capitalized for politeness when they refer to the person one is writing to (such as in a letter). For details, see Capitalization § Pronouns. Examples: He shook her* hand. Why do you always rely on me to do your* homework for you? They tried to run away from the hunter, but he set his* dogs after them. *Words like her, your and his are sometimes called (possessive) pronouns; other terms are possessive determiner or possessive adjective.
**Streaming instability** Streaming instability: In planetary science a streaming instability is a hypothetical mechanism for the formation of planetesimals in which the drag felt by solid particles orbiting in a gas disk leads to their spontaneous concentration into clumps which can gravitationally collapse. Small initial clumps increase the orbital velocity of the gas, slowing radial drift locally, leading to their growth as they are joined by faster drifting isolated particles. Massive filaments form that reach densities sufficient for gravitational collapse into planetesimals the size of large asteroids, bypassing a number of barriers to the traditional formation mechanisms. The formation of streaming instabilities requires solids that are moderately coupled to the gas and a local solid to gas ratio of one or greater. The growth of solids large enough to become moderately coupled to the gas is more likely outside the ice line and in regions with limited turbulence. An initial concentration of solids with respect to the gas is necessary to suppress turbulence sufficiently to allow the solid to gas ratio to reach greater than one at the mid-plane. A wide variety of mechanisms to selectively remove gas or to concentrate solids have been proposed. In the inner Solar System the formation of streaming instabilities requires a greater initial concentration of solids or the growth of solids beyond the size of chondrules. Background: Planetesimals and larger bodies are traditionally thought to have formed via hierarchical accretion, the formation of large objects via the collision and mergers of small objects. This process begins with the collision of dust due to Brownian motion, producing larger aggregates held together by van der Waals forces. The aggregates settle toward the mid-plane of the disk and collide due to gas turbulence, forming pebbles and larger objects. Further collisions and mergers eventually yield planetesimals 1–10 km in diameter held together by self-gravity. The growth of the largest planetesimals then accelerates, as gravitational focusing increases their effective cross-section, resulting in runaway accretion forming the larger asteroids. Later, gravitational scattering by the larger objects excites relative motions, causing a transition to slower oligarchic accretion that ends with the formation of planetary embryos. In the outer Solar System the planetary embryos grow large enough to accrete gas, forming the giant planets. In the inner Solar System the orbits of the planetary embryos become unstable, leading to giant impacts and the formation of the terrestrial planets. A number of obstacles to this process have been identified: barriers to growth via collisions, the radial drift of larger solids, and the turbulent stirring of planetesimals. As a particle grows the time required for its motion to react to changes in the motion of the gas in turbulent eddies increases. The relative motions of particles, and hence their collision velocities, therefore increase with the mass of the particles. For silicates the increased collision velocities cause dust aggregates to compact into solid particles that bounce rather than stick, ending growth at the size of chondrules, roughly 1 mm in diameter. Icy solids may not be affected by the bouncing barrier but their growth can be halted at larger sizes due to fragmentation as collision velocities increase. Radial drift is the result of the pressure support of the gas, which enables it to orbit at a slower velocity than the solids. 
Solids orbiting through this gas lose angular momentum and spiral toward the central star at rates that increase as they grow. At 1 AU this produces a meter-sized barrier, with the rapid loss of large objects in as little as ~1000 orbits, ending with their vaporization as they approach too close to the star. At greater distances the growth of icy bodies can become drift limited at smaller sizes when their drift timescales become shorter than their growth timescales. Turbulence in the protoplanetary disk can create density fluctuations which exert torques on planetesimals, exciting their relative velocities. Outside the dead zone the higher random velocities can result in the destruction of smaller planetesimals, and the delay of the onset of runaway growth until planetesimals reach radii of 100 km. Some evidence exists that planetesimal formation may have bypassed these barriers to incremental growth. In the inner asteroid belt all of the low-albedo asteroids that have not been identified as part of a collisional family are larger than 35 km. A change in the slope of the size distribution of asteroids at roughly 100 km can be reproduced in models if the minimal diameter of the planetesimals was 100 km and the smaller asteroids are debris from collisions. A similar change in slope has been observed in the size distribution of the Kuiper belt objects. The low number of small craters on Pluto has also been cited as evidence that the largest KBOs formed directly. Furthermore, if the cold classical KBOs formed in situ from a low mass disk, as suggested by the presence of loosely bound binaries, they are unlikely to have formed via the traditional mechanism. The dust activity of comets indicates a low tensile strength that would be the result of a gentle formation process with collisions at free-fall velocities. Description: Streaming instabilities, first described by Andrew Youdin and Jeremy Goodman, are driven by differences in the motions of the gas and solid particles in the protoplanetary disk. The gas is hotter and denser closer to the star, creating a pressure gradient that partially offsets gravity from the star. The partial support of the pressure gradient allows the gas to orbit at roughly 50 m/s below the Keplerian velocity at its distance. The solid particles, however, are not supported by the pressure gradient and would orbit at Keplerian velocities in the absence of the gas. The difference in velocities results in a headwind that causes the solid particles to spiral toward the central star as they lose momentum to aerodynamic drag. The drag also produces a back reaction on the gas, increasing its velocity. When solid particles cluster in the gas, the reaction reduces the headwind locally, allowing the cluster to orbit faster and undergo less inward drift. The slower drifting clusters are overtaken and joined by isolated particles, increasing the local density and further reducing radial drift, fueling an exponential growth of the initial clusters. In simulations the clusters form massive filaments that can grow or dissipate, and that can collide and merge or split into multiple filaments. The separation of filaments averages 0.2 gas scale heights, roughly 0.02 AU at the distance of the asteroid belt. 
The densities of the filaments can exceed a thousand times the gas density, sufficient to trigger the gravitational collapse and fragmentation of the filaments into bound clusters. The clusters shrink as energy is dissipated by gas drag and inelastic collisions, leading to the formation of planetesimals the size of large asteroids. Impact speeds are limited during the collapse of the smaller clusters that form 1–10 km asteroids, reducing the fragmentation of particles, leading to the formation of porous pebble pile planetesimals with low densities. Gas drag slows the fall of the smallest particles and less frequent collisions slow the fall of the largest particles during this process, resulting in the size sorting of particles, with mid-sized particles forming a porous core and a mix of particle sizes forming denser outer layers. The impact speeds and the fragmentation of particles increase with the mass of the clusters, lowering the porosity and increasing the density of the larger objects such as the 100 km asteroids that form from a mixture of pebbles and pebble fragments. Collapsing swarms with excess angular momentum can fragment, forming binary or in some cases trinary objects resembling those in the Kuiper belt. In simulations the initial mass distribution of the planetesimals formed via streaming instabilities fits a power law, dn/dM ∝ M^−1.6, which is slightly steeper than that of small asteroids, with an exponential cutoff at larger masses. Continued accretion of chondrules from the disk may shift the size distribution of the largest objects toward that of the current asteroid belt. In the outer Solar System the largest objects can continue to grow via pebble accretion, possibly forming the cores of giant planets. Requirements: Streaming instabilities form only in the presence of rotation and the radial drift of solids. The initial linear phase of a streaming instability begins with a transient region of high pressure within the protoplanetary disk. The elevated pressure alters the local pressure gradient supporting the gas, reducing the gradient on the region's inner edge and increasing the gradient on the region's outer edge. The gas therefore must orbit faster near the inner edge and is able to orbit slower near the outer edge. The Coriolis forces resulting from these relative motions support the elevated pressure, creating a geostrophic balance. The motions of the solids near the high pressure regions are also affected: solids at its outer edge face a greater headwind and undergo faster radial drift, solids at its inner edge face a lesser headwind and undergo a slower radial drift. This differential radial drift produces a buildup of solids in higher pressure regions. The drag felt by the solids moving toward the region also creates a back reaction on the gas that reinforces the elevated pressure, leading to a runaway process. As more solids are carried toward the region by radial drift this eventually yields a concentration of solids sufficient to drive the increase of the velocity of the gas and reduce the local radial drift of solids seen in streaming instabilities. Streaming instabilities form when the solid particles are moderately coupled to the gas, with Stokes numbers of 0.01–3; the local solid to gas ratio is near or larger than 1; and the vertically integrated solid to gas ratio is a few times solar. The Stokes number is a measure of the relative influences of inertia and gas drag on a particle's motion. 
In this context it is the product of the timescale for the exponential decay of a particle's velocity due to drag and the angular frequency of its orbit. Small particles like dust are strongly coupled and move with the gas, while large bodies such as planetesimals are weakly coupled and orbit largely unaffected by the gas. Moderately coupled solids, sometimes referred to as pebbles, range from roughly cm- to m-sized at asteroid belt distances and from mm- to dm-sized beyond 10 AU. These objects orbit through the gas like planetesimals but are slowed due to the headwind and undergo significant radial drift. The moderately coupled solids that participate in streaming instabilities are those dynamically affected by changes in the motions of gas on scales similar to those of the Coriolis effect, allowing them to be captured by regions of high pressure in a rotating disk. Moderately coupled solids also retain influence on the motion of the gas. If the local solid to gas ratio is near or above 1, this influence is strong enough to reinforce regions of high pressure and to increase the orbital velocity of the gas and slow radial drift. Reaching and maintaining this local solid to gas ratio at the mid-plane requires an average solid to gas ratio in a vertical cross section of the disk that is a few times solar. When the average solid to gas ratio is 0.01, roughly that estimated from measurements of the current Solar System, turbulence at the mid-plane generates a wavelike pattern that puffs up the mid-plane layer of solids. This reduces the solid to gas ratio at the mid-plane to less than 1, suppressing the formation of dense clumps. At higher average solid to gas ratios the mass of solids dampens this turbulence, allowing a thin mid-plane layer to form. Stars with higher metallicities are more likely to reach the minimum solid to gas ratio, making them favorable locations for planetesimal and planet formation. A high average solid to gas ratio may be reached due to the loss of gas or by the concentration of solids. Gas may be selectively lost due to photoevaporation late in the gas disk epoch, causing solids to be concentrated in a ring at the edge of a cavity that forms in the gas disk, though the mass of planetesimals that forms may be too small to produce planets. The solid to gas ratio can also increase in the outer disk due to photoevaporation, but in the giant planet region the resulting planetesimal formation may be too late to produce giant planets. If the magnetic field of the disk is aligned with its angular momentum the Hall effect increases viscosity, which can result in a faster depletion of the inner gas disk. A pile-up of solids in the inner disk can occur due to slower rates of radial drift as Stokes numbers decline with increasing gas densities. This radial pile-up is reinforced as the velocity of the gas increases with the surface density of solids and could result in the formation of bands of planetesimals extending from sublimation lines to a sharp outer edge where solid to gas ratios first reach critical values. For some ranges of particle size and gas viscosity outward flow of the gas may occur, reducing its density and further increasing the solid to gas ratio. The radial pile-ups may be limited due to a reduction in the gas density as the disk evolves, however, and shorter growth timescales of solids closer to the star could instead result in the loss of solids from the inside out. 
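The coupling and drift relations described above can be written compactly. The following is a sketch using the standard conventions of the planetesimal-formation literature; the symbols (t_s, Ω_K, η, v_K) are conventional notation, not taken from this article:

```latex
% Stokes number: the drag stopping time t_s (the timescale for exponential
% decay of a particle's velocity relative to the gas) times the Keplerian
% orbital angular frequency Omega_K -- the "product" described above.
\mathrm{St} = t_{\mathrm{s}}\,\Omega_{\mathrm{K}}

% Pressure support lets the gas orbit below the Keplerian velocity v_K by
% a small fraction eta (roughly 50 m/s in the inner Solar System):
v_{\mathrm{gas}} = (1 - \eta)\,v_{\mathrm{K}}

% Resulting steady-state radial drift speed of a solid particle:
v_{r} = -\,\frac{2\,\mathrm{St}}{1 + \mathrm{St}^{2}}\,\eta\,v_{\mathrm{K}}
```

Drift is fastest for St ≈ 1, which corresponds to roughly meter-sized bodies at 1 AU; this underlies the meter-sized barrier mentioned in the Background section.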
Radial pile-ups also occur at locations where rapidly drifting large solids fragment into smaller, slower drifting solids, for example, inside the ice line where silicate grains are released as icy bodies sublimate. This pile-up can also increase the local velocity of the gas, extending the pile-up to outside the ice line where it is enhanced by the outward diffusion and recondensation of water vapor. The pile-up could be muted, however, if the icy bodies are highly porous, which slows their radial drift. Icy solids can be concentrated outside the ice line due to the outward diffusion and recondensation of water vapor. Solids are also concentrated in radial pressure bumps, where the pressure reaches a local maximum. At these locations radial drift converges from both closer and farther from the star. Radial pressure bumps are present at the inner edge of the dead zone, and can form due to the magnetorotational instability. Pressure bumps may also be produced due to the back-reaction of dust on the gas creating self-induced dust traps. The ice line has also been proposed as the site of a pressure bump; however, this requires a steep viscosity transition. If the back-reaction from the concentration of solids flattens the pressure gradient, the planetesimals formed at a pressure bump may be smaller than predicted at other locations. If the pressure gradient is maintained, streaming instabilities may form at the location of a pressure bump even in viscous disks with significant turbulence. Local pressure bumps also form in the spiral arms of a massive self-gravitating disk and in anti-cyclonic vortices. The break-up of vortices could also leave a ring of solids from which a streaming instability may form. Solids may also be concentrated locally if disk winds lower the surface density of the inner disk, slowing or reversing their inward drift, or due to thermal diffusion. Streaming instabilities are more likely to form in regions of the disk where: the growth of solids is favored, the pressure gradient is small, and turbulence is low. Inside the ice line the bouncing barrier may prevent the growth of silicates large enough to take part in streaming instabilities. Beyond the ice line hydrogen bonding allows particles of water ice to stick at higher collision velocities, possibly enabling the growth of large, highly porous icy bodies to Stokes numbers approaching 1 before their growth is slowed by erosion. The condensation of vapor diffusing outward from sublimating icy bodies may also drive the growth of compact dm-sized icy bodies outside the ice line. A similar growth of bodies due to recondensation of water could occur over a broader region following an FU Orionis event. At greater distances the growth of solids could again be limited if they are coated with a layer of CO2 or other ices that reduce the collision velocities where sticking occurs. A small pressure gradient reduces the rate of radial drift, limiting the turbulence generated by streaming instabilities. A smaller average solid to gas ratio is then necessary to suppress turbulence at the mid-plane. The diminished turbulence also enables the growth of larger solids by lowering impact velocities. Hydrodynamic models indicate that the smallest pressure gradients occur near the ice line and in the inner parts of the disk. The pressure gradient also decreases late in the disk's evolution as the accretion rate and the temperature decline. A major source of turbulence in the protoplanetary disk is the magnetorotational instability. 
The impacts of turbulence generated by this instability could limit streaming instabilities to the dead zone, estimated to form near the mid-plane at 1–20 AU, where the ionization rate is too low to sustain the magnetorotational instability. In the inner Solar System the formation of streaming instabilities requires a larger enhancement of the solid to gas ratio than beyond the ice line. The growth of silicate particles is limited by the bouncing barrier to ~1 mm, roughly the size of the chondrules found in meteorites. In the inner Solar System particles this small have Stokes numbers of ~0.001. At these Stokes numbers a vertically integrated solid to gas ratio greater than 0.04, roughly four times that of the overall gas disk, is required to form streaming instabilities. The required concentration may be reduced by half if the particles are able to grow to roughly cm size. This growth, possibly aided by dusty rims that absorb impacts, may occur over a period of 10^5 years if a fraction of collisions result in sticking due to a broad distribution of collision velocities. Or, if turbulence and the collision velocities are reduced inside initial weak clumps, a runaway process may occur in which clumping aids the growth of solids and their growth strengthens clumping. A radial pile-up of solids may also lead to conditions that support streaming instabilities in a narrow annulus at roughly 1 AU. However, this would require a shallow initial disk profile and that the growth of solids be limited by fragmentation instead of bouncing, allowing cm-sized solids to form. The growth of particles may be further limited at high temperatures, possibly leading to an inner boundary of planetesimal formation where temperatures reach 1000 K. Alternatives: Instead of actively driving their own concentration, as in streaming instabilities, solids may be passively concentrated to sufficient densities for planetesimals to form via gravitational instabilities. In an early proposal dust settled at the mid-plane until sufficient densities were reached for the disk to gravitationally fragment and collapse into planetesimals. The difference in orbital velocities of the dust and gas, however, produces turbulence which inhibits settling, preventing sufficient densities from being reached. If the average dust to gas ratio is increased by an order of magnitude at a pressure bump or by the slower drift of small particles derived from fragmenting larger bodies, this turbulence may be suppressed, allowing the formation of planetesimals. The cold classical Kuiper belt objects may have formed in a low mass disk dominated by cm-sized or smaller objects. In this model the gas disk epoch ends with km-sized objects, possibly formed via gravitational instability, embedded in a disk of small objects. The disk remains dynamically cool due to inelastic collisions among the cm-sized objects. The slow encounter velocities result in efficient growth with a sizable fraction of the mass ending in the large objects. The dynamical friction from the small bodies would also aid in the formation of binaries. Planetesimals may also be formed from the concentration of chondrules between eddies in a turbulent disk. In this model the particles are split unequally when large eddies fragment, increasing the concentrations of some clumps. As this process cascades to smaller eddies, a fraction of these clumps may reach densities sufficient to be gravitationally bound and slowly collapse into planetesimals. 
Recent research, however, indicates that larger objects such as conglomerates of chondrules may be necessary and that the concentrations produced from chondrules may instead act as the seeds of streaming instabilities. Icy particles are more likely to stick and to resist compression in collisions, which may allow the growth of large porous bodies. If the growth of these bodies is fractal, with their porosity increasing as larger porous bodies collide, their radial drift timescales become long, allowing them to grow until they are compressed by gas drag and self-gravity, forming small planetesimals. Alternatively, if the local solid density of the disk is sufficient, they may settle into a thin disk that fragments due to a gravitational instability, forming planetesimals the size of large asteroids, once they grow large enough to become decoupled from the gas. A similar fractal growth of porous silicates may also be possible if they are made up of nanometer-sized grains formed from the evaporation and recondensation of dust. However, the fractal growth of highly porous solids may be limited by the infilling of their cores with small particles generated in collisions due to turbulence; by erosion as the impact velocity due to the relative rates of radial drift of large and small bodies increases; and by sintering as they approach ice lines, reducing their ability to absorb collisions, resulting in bouncing or fragmentation during collisions. Collisions at velocities that would result in the fragmentation of equal-sized particles can instead result in growth via mass transfer from the small to the larger particle. This process requires an initial population of 'lucky' particles that have grown larger than the majority of particles. These particles may form if collision velocities have a wide distribution, with a small fraction occurring at velocities that allow objects beyond the bouncing barrier to stick. However, the growth via mass transfer is slow relative to radial drift timescales, although it may occur where radial drift is halted at a pressure bump, allowing the formation of planetesimals in 10^5 yrs. Planetesimal accretion could reproduce the size distribution of the asteroids if it began with 100 meter planetesimals. In this model collisional dampening and gas drag dynamically cool the disk, and the bend in the size distribution is caused by a transition between growth regimes. This, however, requires a low level of turbulence in the gas and some mechanism for the formation of 100 meter planetesimals. Size-dependent clearing of planetesimals due to secular resonance sweeping could also remove small bodies, creating a break in the size distribution of asteroids. Secular resonances sweeping inward through the asteroid belt as the gas disk dissipated would excite the eccentricities of the planetesimals. As their eccentricities were damped due to gas drag and tidal interaction with the disk, the largest and smallest objects would be lost as their semi-major axes shrank, leaving behind the intermediate-sized planetesimals.
**User interface design** User interface design: User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that clearly communicate to the user what's important. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design). User interface design: User interfaces are the points of interaction between users and designs. There are three types: Graphical user interfaces (GUIs) – users interact with visual representations on a computer's screen. The desktop is an example of a GUI. Interfaces controlled through voice – users interact with these through their voices. Most smart assistants, such as Siri on smartphones or Alexa on Amazon devices, use voice control. User interface design: Interactive interfaces utilizing gestures – users interact with 3D design environments through their bodies, e.g., in virtual reality (VR) games. Interface design is involved in a wide range of projects, from computer systems, to cars, to commercial planes; all of these projects involve many of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered on their expertise, whether it is software design, user research, web design, or industrial design. User interface design: Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design and typography are utilized to support its usability, influencing how the user performs certain interactions and improving the aesthetic appeal of the design; design aesthetics may enhance or detract from the ability of users to use the functions of the interface. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs. Compared to UX design: Compared to UX design, UI design is more about the surface and overall look of a design. User interface design is a craft in which designers perform an important function in creating the user experience. UI design should keep users informed about what is happening, giving appropriate feedback in a timely manner. The visual look and feel of UI design sets the tone for the user experience. On the other hand, the term UX design refers to the entire process of creating a user experience. Compared to UX design: Don Norman and Jakob Nielsen said: It’s important to distinguish the total user experience from the user interface (UI), even though the UI is obviously an extremely important part of the design. As an example, consider a website with movie reviews. Even if the UI for finding a film is perfect, the UX will be poor for a user who wants information about a small independent release if the underlying database only contains movies from the major studios. Processes: User interface design requires a good understanding of user needs. 
It mainly focuses on the needs of the platform and its user expectations. There are several phases and processes in user interface design, some of which receive more attention than others, depending on the project. (Note: for the remainder of this section, the word system is used to denote any project whether it is a website, application, or device.) Functionality requirements gathering – assembling a list of the functionality required by the system to accomplish the goals of the project and the potential needs of the users. Processes: User and task analysis – a form of field research, it is the analysis of the potential users of the system by studying how they perform the tasks that the design must support, and conducting interviews to elaborate their goals. Typical questions involve: What would the user want the system to do? How would the system fit in with the user's normal workflow or daily activities? How technically savvy is the user and what similar systems does the user already use? What interface look & feel styles appeal to the user? Information architecture – development of the process and/or information flow of the system (i.e. for phone tree systems, this would be an option tree flowchart and for web sites this would be a site flow that shows the hierarchy of the pages). Processes: Prototyping – development of wire-frames, either in the form of paper prototypes or simple interactive screens. These prototypes are stripped of all look & feel elements and most content in order to concentrate on the interface. Processes: Usability inspection – letting an evaluator inspect a user interface. This is generally considered to be cheaper to implement than usability testing (see step below), and can be used early on in the development process since it can be used to evaluate prototypes or specifications for the system, which usually cannot be tested on users. Some common usability inspection methods include cognitive walkthrough, which focuses on how simply new users can accomplish tasks with the system, heuristic evaluation, in which a set of heuristics are used to identify usability problems in the UI design, and pluralistic walkthrough, in which a selected group of people step through a task scenario and discuss usability issues. Processes: Usability testing – testing of the prototypes on an actual user—often using a technique called think aloud protocol where you ask the user to talk about their thoughts during the experience. User interface design testing allows the designer to understand the reception of the design from the viewer's standpoint, and thus facilitates creating successful applications. Processes: Graphical user interface design – actual look and feel design of the final graphical user interface (GUI). A GUI comprises the design's control panels and faces; voice-controlled interfaces involve oral-auditory interaction, while gesture-based interfaces have users engaging with 3D design spaces through bodily motions. It may be based on the findings developed during the user research, and refined to fix any usability problems found through the results of testing. Depending on the type of interface being created, this process typically involves some computer programming in order to validate forms, establish links or perform a desired action. Processes: Software maintenance – after the deployment of a new interface, occasional maintenance may be required to fix software bugs, change features, or completely upgrade the system. 
Once a decision is made to upgrade the interface, the legacy system will undergo another version of the design process, and will begin to repeat the stages of the interface life cycle. Requirements: The dynamic characteristics of a system are described in terms of the dialogue requirements contained in seven principles of part 10 of the ergonomics standard ISO 9241. This standard establishes a framework of ergonomic "principles" for the dialogue techniques with high-level definitions and illustrative applications and examples of the principles. The principles of the dialogue represent the dynamic aspects of the interface and can be mostly regarded as the "feel" of the interface. The seven dialogue principles are: Suitability for the task: the dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task. Requirements: Self-descriptiveness: the dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request. Controllability: the dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met. Conformity with user expectations: the dialogue conforms with user expectations when it is consistent and corresponds to the user characteristics, such as task knowledge, education, experience, and to commonly accepted conventions. Error tolerance: the dialogue is error-tolerant if, despite evident errors in input, the intended result may be achieved with either no or minimal action by the user. Suitability for individualization: the dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user. Requirements: Suitability for learning: the dialogue is suitable for learning when it supports and guides the user in learning to use the system. The concept of usability is defined in the ISO 9241 standard in terms of effectiveness, efficiency, and satisfaction of the user. Part 11 gives the following definition of usability: Usability is measured by the extent to which the intended goals of use of the overall system are achieved (effectiveness). Requirements: The resources that have to be expended to achieve the intended goals (efficiency). The extent to which the user finds the overall system acceptable (satisfaction). Effectiveness, efficiency, and satisfaction can be seen as quality factors of usability. To evaluate these factors, they need to be decomposed into sub-factors, and finally, into usability measures. Requirements: The information presented is described in Part 12 of the ISO 9241 standard for the organization of information (arrangement, alignment, grouping, labels, location), for the display of graphical objects, and for the coding of information (abbreviation, colour, size, shape, visual cues) by seven attributes. The "attributes of presented information" represent the static aspects of the interface and can be generally regarded as the "look" of the interface. The attributes are detailed in the recommendations given in the standard. Each of the recommendations supports one or more of the seven attributes. Requirements: The seven presentation attributes are: Clarity: the information content is conveyed quickly and accurately. Discriminability: the displayed information can be distinguished accurately. Conciseness: users are not overloaded with extraneous information. 
Consistency: a unique design, conformity with the user's expectations. Detectability: the user's attention is directed towards information required. Legibility: information is easy to read. Requirements: Comprehensibility: the meaning is clearly understandable, unambiguous, interpretable, and recognizable. Part 13 of the ISO 9241 standard describes that user guidance information should be readily distinguishable from other displayed information and should be specific for the current context of use. User guidance can be given by the following five means: Prompts indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input. Requirements: Feedback informing the user about input in a timely, perceptible, and non-intrusive way. Status information indicating the continuing state of the application, the system's hardware and software components, and the user's activities. Error management including error prevention, error correction, user support for error management, and error messages. On-line help for system-initiated and user-initiated requests with specific information for the current context of use. Research: User interface design has been a topic of considerable research, including on its aesthetics. Standards have been developed as far back as the 1980s for defining the usability of software products. Research: One structural basis has been the IFIP user interface reference model. The model proposes four dimensions to structure the user interface: The input/output dimension (the look) The dialogue dimension (the feel) The technical or functional dimension (the access to tools and services) The organizational dimension (the communication and co-operation support). This model has greatly influenced the development of the international standard ISO 9241 describing the interface design requirements for usability. Research: The desire to understand application-specific UI issues early in software development, even as an application was being developed, led to research on GUI rapid prototyping tools that might offer convincing simulations of how an actual application might behave in production use. Some of this research has shown that a wide variety of programming tasks for GUI-based software can, in fact, be specified through means other than writing program code. Research in recent years is strongly motivated by the increasing variety of devices that can, by virtue of Moore's law, host very complex interfaces.
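The error-management and feedback requirements above lend themselves to a small concrete illustration. The sketch below is hypothetical (the function and field names are invented for this example, not drawn from ISO 9241 or any real product); it shows input validation that gives timely, specific, corrective feedback rather than a bare rejection:

```python
# Minimal sketch of ISO 9241-style error management for one form field:
# a specific prompt, perceptible feedback, and support for error correction.
# All names here are illustrative, not a standard API.

def validate_age(raw: str) -> tuple[bool, str]:
    """Validate a user-entered age; return (ok, feedback message)."""
    text = raw.strip()
    if not text:
        return False, "Age is required. Please enter a whole number, e.g. 34."
    if not text.isdigit():
        return False, f"'{raw}' is not a number. Please enter digits only, e.g. 34."
    age = int(text)
    if not 0 < age < 130:
        return False, f"{age} is outside the accepted range (1-129)."
    return True, f"Age {age} accepted."

if __name__ == "__main__":
    for attempt in ["", "thirty", "34"]:
        ok, message = validate_age(attempt)
        print(f"input={attempt!r:10} ok={ok} -> {message}")
```

Each failure message names the problem and suggests a correction, which is the error-correction support the standard's dialogue principles call for.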
**Co-NP-complete** Co-NP-complete: In complexity theory, computational problems that are co-NP-complete are those that are the hardest problems in co-NP, in the sense that any problem in co-NP can be reformulated as a special case of any co-NP-complete problem with only polynomial overhead. If P is different from co-NP, then no co-NP-complete problem is solvable in polynomial time. If there exists a way to solve a co-NP-complete problem quickly, then that algorithm can be used to solve all co-NP problems quickly. Co-NP-complete: Each co-NP-complete problem is the complement of an NP-complete problem. There are some problems in both NP and co-NP, for example all problems in P or integer factorization. However, it is not known if the sets are equal, although inequality is thought more likely. See co-NP and NP-complete for more details. Fortune showed in 1979 that if any sparse language is co-NP-complete (or even just co-NP-hard), then P = NP, a critical foundation for Mahaney's theorem. Formal definition: A decision problem C is co-NP-complete if it is in co-NP and if every problem in co-NP is polynomial-time many-one reducible to it. This means that for every co-NP problem L, there exists a polynomial time algorithm which can transform any instance of L into an instance of C with the same truth value. As a consequence, if we had a polynomial time algorithm for C, we could solve all co-NP problems in polynomial time. Example: One example of a co-NP-complete problem is tautology, the problem of determining whether a given Boolean formula is a tautology; that is, whether every possible assignment of true/false values to variables yields a true statement. This is closely related to the Boolean satisfiability problem, which asks whether there exists at least one such assignment, and is NP-complete.
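To make the tautology example concrete, here is a minimal brute-force sketch (the function and variable names are illustrative only). As expected for a co-NP-complete problem, it runs in exponential time: checking n variables requires 2^n evaluations.

```python
# Brute-force TAUTOLOGY check: try every truth assignment.
# TAUTOLOGY is co-NP-complete, so no polynomial-time algorithm is known;
# this exhaustive search takes 2^n evaluations for n variables.
from itertools import product

def is_tautology(formula, variables):
    """Return True iff `formula` evaluates to True under every assignment.

    `formula` is a function taking one bool argument per variable.
    """
    return all(formula(*values)
               for values in product([False, True], repeat=len(variables)))

# (p or not p) is a tautology; (p or q) is not.
print(is_tautology(lambda p: p or not p, ["p"]))      # True
print(is_tautology(lambda p, q: p or q, ["p", "q"]))  # False
```

Note the complement relationship described above: a formula is a tautology exactly when its negation is unsatisfiable, which is what pairs TAUTOLOGY with the NP-complete satisfiability problem.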
**NCEP/NCAR Reanalysis** NCEP/NCAR Reanalysis: The NCEP/NCAR Reanalysis is an atmospheric reanalysis produced by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). It is a continually updated globally gridded data set that represents the state of the Earth's atmosphere, incorporating observations and numerical weather prediction (NWP) model output from 1948 to present. Accessing the data: The data is available for free download from the NOAA Earth System Research Laboratory and NCEP. It is distributed in NetCDF and GRIB files, for which a number of tools and libraries exist. It is available for download through the NCAR CISL Research Data Archive on the NCEP/NCAR Reanalysis main data page. Uses: Initializing a smaller-scale atmospheric model; climate assessment. Subsequent updates: Since the original release, the NCEP-DOE Reanalysis 2 and the NCEP CFS Reanalysis have been released. The former focuses on fixing existing bugs in the NCEP/NCAR Reanalysis system – most notably in surface energy and the use of observed precipitation forcing for the land surface – but otherwise uses a similar numerical model and data assimilation system. The latter is based on the NCEP Climate Forecast System.
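Since the files are plain NetCDF, any NetCDF-aware library can read them. Below is a minimal sketch using the xarray library; the file name follows the monthly-mean naming commonly seen on the NOAA download pages, but treat it and the variable key ("air") as assumptions rather than guarantees:

```python
# Minimal sketch: inspecting an NCEP/NCAR Reanalysis NetCDF file with xarray.
# Assumes a local copy of a monthly-mean air temperature file; the exact
# file name and variable key are assumptions, not confirmed by this article.
import xarray as xr

ds = xr.open_dataset("air.mon.mean.nc")   # hypothetical local file
print(ds)                                  # dimensions, coordinates, variables

# Pick the grid point nearest to 40N, 255E and show one year of values.
series = ds["air"].sel(lat=40.0, lon=255.0, method="nearest")
print(series.values[:12])
```

The same pattern works for the GRIB distribution with a GRIB-capable backend, and for subsetting in time or space before analysis.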
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ADONIS (software)** ADONIS (software): ADONIS is a Business Process Management (BPM) tool, used for documentation, analysis, and optimization of business processes. It enables the visualization of process flows and standard operating procedures (SOPs), provides visibility into how they operate, and helps increase their efficiency by revealing redundant efforts and opportunities for improvement. The ADONIS BPM suite is manufactured and marketed by BOC Group and represents their flagship Business Process Management product. Overview: ADONIS is designed as a golden source of organizations' process data, and thus supports the end-to-end management and improvement of their business processes. It is an HTML5 web-based application, fully BPMN 2.0 compliant, and conformant with various other international standards and best practices such as BPMM, DMN and ISO 9000. Capabilities and Application Scenarios: The tool covers a wide application field and assists its users in the following domains: Process Management, Quality Management & Operational Excellence, Digitalization and Automation, Customer Journey Mapping and Ideation, Audit and Compliance, and SAP/ERP Integration. ADONIS provides an array of different functionalities, including, but not limited to, business process modelling using the BPMN notation, graphical analysis and reporting capabilities, process simulation and data-driven insights, process versioning, publishing and team collaboration, as well as process automation using BPMN 2.0 XML (BPMN DI) and XPDL2. In addition to its out-of-the-box features, ADONIS offers different configuration and customization possibilities, and can be integrated with other tools via its RESTful API. Some of the common integration scenarios include the Enterprise Architecture suite ADOIT, Atlassian Confluence, SAP Solution Manager and others. History: ADONIS was first released in 1995 by the BPMS Group (Business Process Management Systems) at the Computer Science and Business Informatics Institute of the University of Vienna, which later evolved into the company BOC Group as its spin-off. Since then, ADONIS has been continuously advanced, and is currently available in its latest version, ADONIS 13.0, released in December 2021. In addition to its commercial editions (Enterprise Edition and Starter Edition), in 2008 ADONIS became one of the first BPM tools to release a Community Edition freeware (ADONIS:Community Edition), which now counts over 200,000 registered community users. Awards and Recognitions: Recognized as a 2021 Customers' Choice Tool in the Gartner Voice of the Customer (EBPA) market report; featured in the 2021 Gartner Market Guide for Enterprise Business Process Analysis (EBPA) report; featured in the 2021 Gartner Market Guide for Technologies Supporting a Digital Twin of an Organization report; certified for accessibility by BIK Hamburg; certified SaaS for ISO 27000; listed as a top performer for interoperability of BPM tools; OMG Award 2008 "Best BPM Application Using Standards".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Modular Lie algebra** Modular Lie algebra: In mathematics, a modular Lie algebra is a Lie algebra over a field of positive characteristic. The theory of modular Lie algebras is significantly different from the theory of real and complex Lie algebras. This difference can be traced to the properties of the Frobenius automorphism and to the failure of the exponential map to establish a tight connection between properties of a modular Lie algebra and the corresponding algebraic group. Although serious study of modular Lie algebras was initiated by Nathan Jacobson in the 1950s, their representation theory in the semisimple case was advanced only recently due to the influential Lusztig conjectures, which as of 2007 had been partially proved.
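A classical first illustration of this difference, not drawn from the article above but standard in the subject: in characteristic p, Lie algebras that are simple in characteristic zero can acquire a nontrivial centre.

```latex
% Over a field k with char(k) = p, consider
%   sl_p(k) = { X in M_p(k) : tr(X) = 0 }.
% The scalar matrices lie inside it, since for any \lambda in k
\[
  \operatorname{tr}(\lambda I_p) \;=\; p\,\lambda \;=\; 0
  \qquad \text{in characteristic } p,
\]
% so k I_p is a nonzero central ideal of sl_p(k). In characteristic 0
% the centre of sl_n is trivial and sl_n is simple; in characteristic p
% one must instead pass to the quotient psl_p = sl_p / k I_p.
```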
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ATP10 protein** ATP10 protein: In molecular biology, ATP10 protein (mitochondrial ATPase complex subunit ATP10) is an ATP synthase assembly factor. It is essential for the assembly of the mitochondrial F1-F0 complex. A yeast nuclear gene (ATP10) encodes a product that is essential for the assembly of a functional mitochondrial ATPase complex. Mutations in ATP10 induce a loss of rutamycin sensitivity in the mitochondrial ATPase, but do not affect the respiratory enzymes. ATP10 has a molecular weight of 30,293 Da and its primary structure is not related to any known subunit of the yeast or mammalian mitochondrial ATPase complexes. ATP10 is associated with the mitochondrial membrane. It is suggested that the ATP10 product is not a subunit of the ATPase complex but rather a protein required for the assembly of the F0 sector of the complex.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GNUnet** GNUnet: GNUnet is a software framework for decentralized, peer-to-peer networking and an official GNU package. The framework offers link encryption, peer discovery, resource allocation, communication over many transports (such as TCP, UDP, HTTP, HTTPS, WLAN and Bluetooth) and various basic peer-to-peer algorithms for routing, multicast and network size estimation. GNUnet's basic network topology is that of a mesh network. GNUnet includes a distributed hash table (DHT) which is a randomized variant of Kademlia that can still efficiently route in small-world networks. GNUnet offers a "F2F topology" option for restricting connections to only the users' trusted friends. The users' friends' own friends (and so on) can then indirectly exchange files with the users' computer, never using its IP address directly. GNUnet: GNUnet uses uniform resource identifiers (not approved by IANA, although an application has been made). GNUnet URIs consist of two major parts: the module and the module-specific identifier. A GNUnet URI is of the form gnunet://module/identifier, where module is the module name and identifier is a module-specific string. GNUnet: The primary codebase is written in C, but there are bindings in other languages to produce an API for developing extensions in those languages. GNUnet is part of the GNU Project. It has gained interest in the hacker community after the PRISM revelations. GNUnet consists of several subsystems, of which the essential ones are the Transport and Core subsystems. The Transport subsystem provides insecure link-layer communications, while Core provides peer discovery and encryption. On top of the Core subsystem various applications are built. GNUnet: GNUnet includes various P2P applications in the main distribution of the framework, including filesharing, chat and VPN; additionally, a few external projects (such as secushare) are also extending the GNUnet infrastructure. GNUnet is unrelated to the older Gnutella P2P protocol. Gnutella is not an official GNU project, while GNUnet is. Transport: Originally, GNUnet used UDP for the underlying transport. Now the GNUnet transport subsystem provides multiple options, such as TCP and SMTP. The communication port, officially registered at IANA, is 2086 (tcp + udp). Trust system: GNUnet provides a trust system based on an excess-based economic model. The idea of employing an economic system is taken from the MojoNation network. The GNUnet network has no trusted entities, so it is impossible to maintain a global reputation. Instead, each peer maintains its own trust for each of its local links. Trust system: When resources, such as bandwidth and CPU time, are in excess, the peer provides them to all requesting neighbors without reducing trust or otherwise charging them. When a node is under stress it drops requests from its neighbor nodes having lower internal trust value. However, when the peer has fewer resources than needed to fulfill everyone's requests, it denies requests of those neighbors that it trusts less and charges others by reducing their trust. File sharing: The primary application at this point is anonymous, censorship-resistant file-sharing, allowing users to anonymously publish or retrieve information of all kinds. The GNUnet protocol which provides anonymity is called GAP (GNUnet anonymity protocol). GNUnet FS can additionally make use of GNU libextractor to automatically annotate shared files with metadata.
File sharing: File encoding Files shared with GNUnet are ECRS (An Encoding for Censorship-Resistant Sharing) coded. All content is represented as GBlocks. Each GBlock contains 1024 bytes. There are several types of GBlocks, each of which serves a particular purpose. Any GBlock B is uniquely identified by its RIPEMD-160 hash H(B). DBlocks store actual file contents and nothing else. The file is split at 1024-byte boundaries and the resulting chunks are stored in DBlocks. DBlocks are linked together into a Merkle tree by means of IBlocks that store DBlock identifiers. File sharing: Blocks are encrypted with a symmetric key derived from H(B) when they are stored in the network. Queries and replies The GNUnet Anonymity Protocol consists of queries and replies. Depending on the load of the forwarding node, messages are forwarded to zero or more nodes. Queries are used to search for content and request data blocks. A query contains a resource identifier, reply address, priority and TTL (time-to-live). File sharing: The resource identifier of a datum Q is a triple-hash H(H(H(Q))). A peer that replies to a query provides H(H(Q)) to prove that it indeed has the requested resource, without providing H(Q) to intermediate nodes, so intermediate nodes cannot decrypt Q (a toy illustration of this triple-hash arrangement appears at the end of this article). The reply address is the major difference compared to the Freenet protocol. While in Freenet a reply always propagates back using the same path as the query, in GNUnet the path may be shorter. A peer receiving a query may drop it, forward it without rewriting the reply address, or indirect it by replacing the reply address with its own address. By indirecting queries a peer provides cover traffic for its own queries, while by forwarding them a peer avoids being a link in reply propagation and preserves its bandwidth. This feature allows the user to trade anonymity for efficiency. The user can specify an anonymity level for each publish, search and download operation. An anonymity level of zero can be used to select non-anonymous file-sharing. GNUnet's DHT infrastructure is only used if non-anonymous file-sharing is specified. The anonymity level determines how much cover traffic a peer must have to hide the user's own actions. File sharing: Priority specifies how much of its trust the user wants to spend in case of a resource shortage. TTL is used to prevent queries from staying in the network for too long. File sharing URIs The fs module identifier consists of either chk, sks, ksk or loc followed by a slash and a category-specific value. Most URIs contain hashes, which are encoded in base32hex.
File size is required to determine the shape of the tree. sks identifies files within namespaces, typically: gnunet://fs/sks/NAMESPACE/IDENTIFIER ksk identifies search queries, typically: gnunet://fs/ksk/KEYWORD[+KEYWORD]* loc identifies a datum on a specific machine, typically: gnunet://fs/loc/PEER/QUERY.TYPE.KEY.SIZE Examples A type of GNUnet filesharing URI pointing to a specific copy of the GNU GPL license text: gnunet://fs/chk/9E4MDN4VULE8KJG6U1C8FKH5HA8C5CHSJTILRTTPGK8MJ6VHORERHE68JU8Q0FDTOH1DGLUJ3NLE99N0ML0N9PIBAGKG7MNPBTT6UKG.1I823C58O3LKS24LLI9KB384LH82LGF9GUQRJHACCUINSCQH36SI4NF88CMAET3T3BHI93D4S0M5CC6MVDL1K8GFKVBN69Q6T307U6O.17992 Another type of GNUnet filesharing URI, pointing to the search results of a search with keyword "gpl": gnunet://fs/ksk/gpl GNU Name System: GNUnet includes an implementation of the GNU Name System (GNS), a decentralized and censorship-resistant replacement for DNS. In GNS, each user manages their own zones and can delegate subdomains to zones managed by other users. Lookups of records defined by other users are performed using GNUnet's DHT. Protocol translation: GNUnet can tunnel IP traffic over the peer-to-peer network. If necessary, GNUnet can perform IPv4-IPv6 protocol translation in the process. GNUnet provides a DNS application-level gateway to proxy DNS requests and map addresses to the desired address family as necessary. This way, GNUnet offers a possible technology to facilitate IPv6 transition. Furthermore, in combination with GNS, GNUnet's protocol translation system can be used to access hidden services — IP-based services that run locally at some peer in the network and which can only be accessed by resolving a GNS name. Social API: In early September 2013, Gabor X Toth published a thesis presenting the design of a social messaging service for the GNUnet peer-to-peer framework that offers scalability, extensibility, and end-to-end encrypted communication. The scalability property is achieved through multicast message delivery, while extensibility is made possible by using PSYC (Protocol for SYnchronous Conferencing), which provides an extensible RPC (Remote Procedure Call) syntax that can evolve over time without having to upgrade the software on all nodes in the network. Another key feature provided by the PSYC layer is stateful multicast channels, which are used to store, e.g., user profiles. End-to-end encrypted communication is provided by the mesh service of GNUnet, upon which the multicast channels are built. Pseudonymous users and social places in the system have cryptographic identities — identified by their public key — and these are mapped to human-memorable names using GNS (GNU Name System), where each pseudonym has a zone pointing to its places. Social API: That is the required building block for turning the GNUnet framework into a fully peer-to-peer social networking platform. Chat: A chat has been implemented in the CADET module, for which a GTK interface for GNOME exists, specifically designed for the emerging Linux phones (such as the Librem 5 or the PinePhone).
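Returning to the query mechanism described in the file-sharing section: the triple-hash arrangement is easy to sketch in code. The toy example below uses SHA-256 as a stand-in for GNUnet's actual hash function, so it illustrates the idea rather than the wire protocol.

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 stands in for the hash used by GNUnet's ECRS encoding;
    # only the triple-hash structure matters for this illustration.
    return hashlib.sha256(data).digest()

datum_q = b"contents of some 1024-byte GBlock"

key_material = h(datum_q)   # H(Q): lets a holder decrypt the block
proof = h(key_material)     # H(H(Q)): revealed by the replying peer
query_id = h(proof)         # H(H(H(Q))): what the query asks for

# Any intermediate node can check that a reply matches the query...
assert h(proof) == query_id
# ...but `proof` alone does not reveal H(Q), so intermediaries
# forwarding the reply still cannot decrypt the datum Q.
```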
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SIR-Spheres** SIR-Spheres: SIR-Spheres microspheres are used to treat patients with unresectable liver cancer. These are mostly patients with hepatocellular carcinoma (HCC), metastatic colorectal cancer (mCRC), or metastatic neuroendocrine tumours (mNET). Therapy goals are local disease control, downstaging to resection, bridging to transplantation, and extended survival. Description: SIR-Spheres microspheres contain resin-based microspheres with an average diameter between 20 and 60 micrometres. The microspheres are impregnated with 90Y, a beta-emitting isotope of yttrium with a half-life of 64.1 hours. Mode of action: Once injected into the hepatic artery via a catheter by an interventional radiologist, the microspheres preferentially lodge in the vasculature of the tumour. The radiation leads to damage of tumour tissue and, in the best case, to complete elimination of the tumour. Due to the half-life, almost all of the radiation is delivered within two weeks. After one month almost no radioactivity will remain. Mode of action: The procedure is also known as selective internal radiation therapy (SIRT) or radioembolization.
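The two-week claim follows directly from the quoted half-life; a quick back-of-the-envelope check (our arithmetic, not from the source):

```latex
% Fraction of the total dose delivered after time t, half-life t_{1/2}:
\[
  f(t) \;=\; 1 - 2^{-t/t_{1/2}},
  \qquad t_{1/2} = 64.1\,\mathrm{h} \approx 2.67\,\mathrm{d}.
\]
% f(14 d) = 1 - 2^{-14/2.67} \approx 0.974   (two weeks: ~97% delivered)
% f(30 d) = 1 - 2^{-30/2.67} \approx 0.9996  (one month: essentially all)
```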
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mailsort** Mailsort: Mailsort was a five-digit address-coding scheme used by the Royal Mail (the UK's postal service) and its business customers for the automatic direction of mail until 2012. Mail users who could present mail sorted by Mailsort code and in quantities of 4,000 upwards (1,000 upwards for large letters and packets) received a discounted postal rate. Use: Mailsort was not widely known to the British public and the code was not written as part of the address; rather, it appeared elsewhere on the envelope or label. Although the majority of people in the UK use the postcode, the Mailsort code was used for automated sorting. As the system was only used by a closed group of Royal Mail customers, the scheme could be entirely re-coded from time to time (every 18–24 months). The last such update occurred in September 2010. Use: Unlike posting by ordinary mail, it was possible to specify service levels other than 1st or 2nd class, with longer delivery times offered. Four Mailsort products were available – known as 70, 120, 700 and 1400 – each based on the customer's ability to sort into increasingly smaller geographical areas. A further Walksort product was available to those who wished to post to many of the addresses in an area and who could present mail sorted first by Mailsort code and then by walk number (the second half of the postcode). Use: Service levels Two further services — Presstream 1 and Presstream 2 — were available to publishers of magazines and other periodicals. These services were similar to Mailsort 1400 but offered a greater discount for publications that met certain criteria and had been successfully registered with Royal Mail. Structure: The first three digits, the Residue Selection code, corresponded to an area which could vary in size from one postal district to several postcode areas, although most codes corresponded exactly to one postcode area. For example: 406 corresponded to the KA and ML postcode areas; 451 and 452 corresponded to the LS postcode area; 491 corresponded to the London SW postal area; and 502 corresponded to the BN postcode area. The last two digits, called the Direct Selection code, corresponded to one or more postal districts. Structure: Mailsort codes were sometimes prefixed by a letter (A-P) which corresponded to sixteen regional divisions of the country, although the letter did not form part of the Mailsort code. The letter prefix was used by the sender to ensure that, when mail was presented to Royal Mail, those items with the furthest to travel were handed over and processed first, while those in the same region as the sender were dealt with last. When mail was presented to Royal Mail it was therefore not given in strict Mailsort sequence, and furthermore the sequence used would differ from one location in the country to another.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reclazepam** Reclazepam: Reclazepam is a drug which is a benzodiazepine derivative. It has sedative and anxiolytic effects similar to those produced by other benzodiazepine derivatives, and has a short duration of action. Synthesis: The reduction of the lactam in delorazepam with lithium aluminium hydride gives CID:20333776 (1). Condensation with 2-chloroacetylisocyanate [4461-30-7] (2) affords the urea CID:20333773 (3). Reaction of that with sodium iodide and base probably proceeds initially by halogen exchange of iodine for chlorine (Finkelstein reaction). Subsequent replacement of the iodide by the enol anion of the urea oxygen results in formation of the oxazolone ring. There is thus obtained reclazepam (4).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bit, byte, gebissen** Bit, byte, gebissen: Bit, byte, gebissen was a German radio program. It was the first program on computer topics produced by the Bayerischer Rundfunk (Bavarian Broadcasting). Bit, byte, gebissen was broadcast from October 1985 to September 1993. The idea for the radio program was born out of the boom of home computers and video game consoles, which had begun to fascinate youngsters at the beginning of the 1980s. Another successful program on computer topics for adolescent radio listeners was Chippie from the Hessischer Rundfunk (Hessian Broadcasting), starting in 1990.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Slow architecture** Slow architecture: Slow architecture is a term believed to have grown from the slow food movement of the mid-1980s. Slow architecture is generally architecture that is created gradually and organically, as opposed to being built quickly for short-term goals. It is often combined with an ecological, environmentally sustainable approach. Slow architecture could also be interpreted literally to mean architecture that has taken a very long time to build, for example the Sagrada Família in Barcelona. When Eduardo Souto de Moura won the 2011 Pritzker Prize, a jury member described his buildings as slow architecture, because they required careful consideration to appreciate their intricacies. Professor Kenneth Frampton said "Souto de Moura's work is sort of more grounded in a way... They have their character coming from the way in which they have been developed as structures." 2012 Pritzker winner Wang Shu was described as "China's champion of Slow architecture". Slow architecture examples: Canada Professor John Brown of the University of Calgary has launched a not-for-profit website designed to promote "slow homes". This follows ten years of research. A slow home is described as attractive, in harmony with the neighbourhood, and energy efficient, with a smaller carbon footprint. Slow architecture examples: China Pritzker Prize-winning architect Wang Shu has been described as "China's champion of Slow architecture". His buildings evoke the densely packed architecture of China's older cities, with intimate courtyards, tilting walls and a variety of sloping roofs. "Cities today have become far too large. I'm really worried, because it's happening too fast and we have already lost so much," he says. Slow architecture examples: Ireland The slow architecture project in Ireland launched a touring exhibition by canal boat in 2010. The boat travelled between seven locations over a six-week period, with artists and architects holding workshops and lectures at each stopping point. United States In 2008, architects from leading US practices took part in a San Francisco-based project called Slow Food Nation. They created constructions that were generally food-related and ecologically motivated, including a variety of pavilions, a water station made from recycled bottles, a compost exhibit and a "soapbox" for farmers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mathematics of three-phase electric power** Mathematics of three-phase electric power: In electrical engineering, three-phase electric power systems have at least three conductors carrying alternating voltages that are offset in time by one-third of the period. A three-phase system may be arranged in delta (∆) or star (Y) (also denoted as wye in some areas, as symbolically it is similar to the letter 'Y'). A wye system allows the use of two different voltages from all three phases, such as a 230/400 V system which provides 230 V between the neutral (centre hub) and any one of the phases, and 400 V across any two phases. A delta system arrangement provides only one voltage, but it has a greater redundancy as it may continue to operate normally with one of the three supply windings offline, albeit at 57.7% of total capacity. Harmonic current in the neutral may become very large if nonlinear loads are connected. Definitions: In a star (wye) connected topology, with rotation sequence L1 - L2 - L3, the time-varying instantaneous voltages can be calculated for each phase A, C, B respectively by: $V_{L1\text{-}N} = V_P \sin\theta$, $V_{L2\text{-}N} = V_P \sin(\theta - \tfrac{2}{3}\pi) = V_P \sin(\theta + \tfrac{4}{3}\pi)$, $V_{L3\text{-}N} = V_P \sin(\theta - \tfrac{4}{3}\pi) = V_P \sin(\theta + \tfrac{2}{3}\pi)$, where: $V_P$ is the peak voltage, $\theta = 2\pi f t$ is the phase angle in radians, $t$ is the time in seconds, $f$ is the frequency in cycles per second, and voltages L1-N, L2-N and L3-N are referenced to the star connection point. Diagrams: The below images demonstrate how a system of six wires delivering three phases from an alternator may be replaced by just three. A three-phase transformer is also shown. Balanced loads: Generally, in electric power systems, the loads are distributed as evenly as is practical among the phases. It is usual practice to discuss a balanced system first and then describe the effects of unbalanced systems as deviations from the elementary case. Constant power transfer An important property of three-phase power is that the instantaneous power available to a resistive load, $P = VI = \tfrac{V^2}{R}$, is constant at all times. Indeed, let $P_{Li} = \tfrac{V_{Li}^2}{R}$ and $P_{TOT} = \sum_i P_{Li}$. To simplify the mathematics, we define a nondimensionalized power for intermediate calculations, $p = \tfrac{1}{V_P^2} P_{TOT} R = \sin^2\theta + \sin^2(\theta - \tfrac{2}{3}\pi) + \sin^2(\theta - \tfrac{4}{3}\pi) = \tfrac{3}{2}$. Hence (substituting back): $P_{TOT} = \tfrac{3 V_P^2}{2R}$. Since we have eliminated $\theta$ we can see that the total power does not vary with time. This is essential for keeping large generators and motors running smoothly. Balanced loads: Notice also that using the root mean square voltage $V = \tfrac{V_P}{\sqrt{2}}$, the expression for $P_{TOT}$ above takes the following more classic form: $P_{TOT} = \tfrac{3 V^2}{R}$. The load need not be resistive for achieving a constant instantaneous power since, as long as it is balanced or the same for all phases, it may be written as $Z = |Z| e^{j\varphi}$, so that the peak current is $I_P = \tfrac{V_P}{|Z|}$ for all phases and the instantaneous currents are $I_{L1} = I_P \sin(\theta - \varphi)$, $I_{L2} = I_P \sin(\theta - \tfrac{2}{3}\pi - \varphi)$, $I_{L3} = I_P \sin(\theta - \tfrac{4}{3}\pi - \varphi)$. Now the instantaneous powers in the phases are $P_{L1} = V_P I_P \sin\theta \sin(\theta - \varphi)$, $P_{L2} = V_P I_P \sin(\theta - \tfrac{2}{3}\pi) \sin(\theta - \tfrac{2}{3}\pi - \varphi)$, $P_{L3} = V_P I_P \sin(\theta - \tfrac{4}{3}\pi) \sin(\theta - \tfrac{4}{3}\pi - \varphi)$. Using angle subtraction formulae: $P_{L1} = \tfrac{V_P I_P}{2} [\cos\varphi - \cos(2\theta - \varphi)]$, $P_{L2} = \tfrac{V_P I_P}{2} [\cos\varphi - \cos(2\theta - \tfrac{4}{3}\pi - \varphi)]$, $P_{L3} = \tfrac{V_P I_P}{2} [\cos\varphi - \cos(2\theta - \tfrac{8}{3}\pi - \varphi)]$, which add up for a total instantaneous power $P_{TOT} = \tfrac{V_P I_P}{2} \{ 3\cos\varphi - [\cos(2\theta - \varphi) + \cos(2\theta - \tfrac{4}{3}\pi - \varphi) + \cos(2\theta - \tfrac{8}{3}\pi - \varphi)] \}$. Since the three terms enclosed in square brackets are a three-phase system, they add up to zero and the total power becomes $P_{TOT} = \tfrac{3 V_P I_P}{2} \cos\varphi$ or, equivalently, $P_{TOT} = \tfrac{3 V_P^2}{2 |Z|} \cos\varphi$, showing the above contention. Balanced loads: Again, using the root mean square voltage $V = \tfrac{V_P}{\sqrt{2}}$, $P_{TOT}$ can be written in the usual form $P_{TOT} = \tfrac{3 V^2}{|Z|} \cos\varphi$. No neutral current For the case of equal loads on each of three phases, no net current flows in the neutral. The neutral current is the inverted vector sum of the line currents. See Kirchhoff's circuit laws.
Balanced loads: $I_{L1} = \tfrac{V_{L1\text{-}N}}{R}$, $I_{L2} = \tfrac{V_{L2\text{-}N}}{R}$, $I_{L3} = \tfrac{V_{L3\text{-}N}}{R}$, and $-I_N = I_{L1} + I_{L2} + I_{L3}$. We define a non-dimensionalized current, $i = \tfrac{I_N R}{V_P}$: $i = -\left[\sin\theta + \sin(\theta - \tfrac{2}{3}\pi) + \sin(\theta - \tfrac{4}{3}\pi)\right] = 0$. Since we have shown that the neutral current is zero, we can see that removing the neutral core will have no effect on the circuit, provided the system is balanced. Such connections are generally used only when the load on the three phases is part of the same piece of equipment (for example a three-phase motor), as otherwise switching loads and slight imbalances would cause large voltage fluctuations. Unbalanced systems: In practice, systems rarely have perfectly balanced loads, currents, voltages and impedances in all three phases. The analysis of unbalanced cases is greatly simplified by the use of the techniques of symmetrical components. An unbalanced system is analysed as the superposition of three balanced systems, each with the positive, negative or zero sequence of balanced voltages. Unbalanced systems: When specifying wiring sizes in a three-phase system, we only need to know the magnitude of the phase and neutral currents. The neutral current can be determined by adding the three phase currents together as complex numbers and then converting from rectangular to polar co-ordinates. If the three-phase root mean square (RMS) currents are $I_{L1}$, $I_{L2}$, and $I_{L3}$, the neutral RMS current is: $I_N = I_{L1} + I_{L2}\left(\cos\tfrac{2}{3}\pi + j\sin\tfrac{2}{3}\pi\right) + I_{L3}\left(\cos\tfrac{4}{3}\pi + j\sin\tfrac{4}{3}\pi\right)$, which resolves to $I_N = I_{L1} - \tfrac{1}{2} I_{L2} - \tfrac{1}{2} I_{L3} + j \tfrac{\sqrt{3}}{2} (I_{L2} - I_{L3})$. The polar magnitude of this is the square root of the sum of the squares of the real and imaginary parts, which reduces to $|I_N| = \sqrt{I_{L1}^2 + I_{L2}^2 + I_{L3}^2 - I_{L1} I_{L2} - I_{L1} I_{L3} - I_{L2} I_{L3}}$. Non-linear loads With linear loads, the neutral only carries the current due to imbalance between the phases. Devices that utilize rectifier-capacitor front ends (such as switch-mode power supplies for computers, office equipment and the like) introduce third-order harmonics. Third-harmonic currents are in phase on each of the supply phases and therefore add together in the neutral, which can cause the neutral current in a wye system to exceed the phase currents. Revolving magnetic field: Any polyphase system, by virtue of the time displacement of the currents in the phases, makes it possible to easily generate a magnetic field that revolves at the line frequency. Such a revolving magnetic field makes polyphase induction motors possible. Indeed, where induction motors must run on single-phase power (such as is usually distributed in homes), the motor must contain some mechanism to produce a revolving field, otherwise the motor cannot generate any stand-still torque and will not start. The field produced by a single-phase winding can provide energy to a motor already rotating, but without auxiliary mechanisms the motor will not accelerate from a stop. Revolving magnetic field: A rotating magnetic field of steady amplitude requires that all three phase currents be equal in magnitude, and accurately displaced one-third of a cycle in phase. Unbalanced operation results in undesirable effects on motors and generators. Conversion to other phase systems: Provided two voltage waveforms have at least some relative displacement on the time axis, other than a multiple of a half-cycle, any other polyphase set of voltages can be obtained by an array of passive transformers. Such arrays will evenly balance the polyphase load between the phases of the source system.
For example, balanced two-phase power can be obtained from a three-phase network by using two specially constructed transformers, with taps at 50% and 86.6% of the primary voltage. This Scott T connection produces a true two-phase system with 90° time difference between the phases. Another example is the generation of higher-phase-order systems for large rectifier systems, to produce a smoother DC output and to reduce the harmonic currents in the supply. Conversion to other phase systems: When three-phase is needed but only single-phase is readily available from the electricity supplier, a phase converter can be used to generate three-phase power from the single-phase supply. A motor–generator is often used in factory industrial applications. System measurements: In a three-phase system, at least two transducers are required to measure power when there is no neutral, or three transducers when there is a neutral. Blondel's theorem states that the number of measurement elements required is one less than the number of current-carrying conductors.
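As a numerical sanity check on the neutral-current expressions in the Unbalanced systems section above (our own illustration, not part of the original article), the complex phasor sum and the closed-form magnitude can be compared directly:

```python
import cmath, math

def neutral_rms_phasor(i1: float, i2: float, i3: float) -> float:
    """|I_N| from the complex phasor sum, phases displaced by 120 degrees."""
    a = cmath.exp(2j * math.pi / 3)      # 120-degree rotation operator
    return abs(i1 + i2 * a + i3 * a**2)

def neutral_rms_closed_form(i1: float, i2: float, i3: float) -> float:
    """|I_N| from the reduced expression derived in the text."""
    return math.sqrt(i1**2 + i2**2 + i3**2 - i1*i2 - i1*i3 - i2*i3)

print(neutral_rms_phasor(10, 10, 10))     # balanced load -> ~0 (up to rounding)
print(neutral_rms_phasor(10, 8, 6))       # unbalanced -> ~3.464
print(neutral_rms_closed_form(10, 8, 6))  # matches the phasor sum
```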
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Peptaibol** Peptaibol: Peptaibols are biologically active peptides containing between seven and twenty amino acid residues, some of which are non-proteinogenic amino acids. In particular, they contain α-aminoisobutyric acid along with other unusual amino acids such as ethylnorvaline, isovaline and hydroxyproline; the N-terminus is acetylated, and the C-terminal amino acid is reduced to an amino alcohol. They are named peptaibols because they are peptides containing α-aminoisobutyric acid (Aib) and ending in an alcohol. They are produced by certain fungi, mainly in the genus Trichoderma, as secondary metabolites which function as antibiotics and antifungal agents. Some are referred to as trichorzianines. They are amphipathic, which allows them to form voltage-dependent ion channels in cell membranes; these create holes in the membrane, making it leaky and leading to the death of the cell. As of 2001, over 317 peptaibols had been identified. The most widely known peptaibol is alamethicin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UMER** UMER: The University of Maryland Electron Ring, or UMER, is a scaled electron beam accelerator located at the University of Maryland. The primary purpose of UMER is to investigate accelerator dynamics for beams with intense space charge, such as one finds in ion accelerators and photoinjectors. It deliberately enhances space charge forces by operating at low energies but relatively high currents.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Snaps** Snaps: Snaps (pronounced [ˈsnaps] in Danish and Swedish) is a Danish and Swedish word for a small shot of a strong alcoholic beverage taken during the course of a meal. A ritual that is associated with drinking snaps is a tradition in Scandinavia, especially in Denmark and Sweden, where it is very common to drink snaps at holidays such as Midsummer, Christmas and Easter. This ritual has been described by one author as follows: A group of people are clustered around a table for a typical lunch that will include several courses and a clear, fiery drink. The host pours the ice-cold liquid into frosty, conical glasses with long stems. He raises his glass, at which point the diners turn to one another and make eye contact, making certain not to leave anyone out. "Skål!" calls out the host, and everyone takes a sip. Again there is eye contact, and then the glasses are set on the table, not to be lifted again until the host raises his. The liquid is aquavit. The ritual is virtually the same throughout Scandinavia. In Denmark, a snaps will always be akvavit, although there are many varieties of it. In Sweden, snaps is a more general term; it is usually akvavit, although it may also be vodka, bitters/bitter liqueurs or some other kind of brännvin/brændevin. Spirits such as whisky or brandy are seldom drunk as snaps. One of Finland's strongest alcohol drinks served with snaps is Marskin ryyppy, named after Marshal C. G. E. Mannerheim.The word "snaps" also has the same meaning as German Schnapps (German: [ʃnaps]), in the sense of "any strong alcoholic drink". Culture: An entrée consisting of pickled herring and potatoes is typically served with snaps, as is the Swedish surströmming. Swedes, Danes and Swedish-speaking Finns have a tradition of singing songs (called snapsvisor) before drinking snaps. These snapsvisor are typically odes to the joys of drinking snaps. They may praise the flavour of snaps or express a craving for it. Snaps and snapsvisor are essential elements of Swedish crayfish parties, which are notoriously tipsy affairs. Dozens of songs may be sung during such a party, and every song requires a round of snaps. However, the glass need not be emptied every time. Home liquor production in Scandinavia: Distilling snaps at home is illegal in Scandinavian countries, unless approved by tax authorities. Illegal home distilling, however, is a widespread tradition, especially in many rural communities of Sweden, Finland and Norway. Home liquor production in Scandinavia: A tradition of home flavouring snaps exists in Scandinavia. This tradition is strongest in the southern areas, particularly Denmark. A snaps enthusiast will typically buy a commercially made, neutral-tasting snaps, and then add flavour to it by adding herbs found in nature or grown in a garden. For instance, in northern Denmark, various spices are added to snaps to produce a version called "bjesk", which roughly translated means "bitter". In Hirtshals, the Hirtshals Museum tells the story of the "bjesk". Home liquor production in Scandinavia: Popular flavours for home flavouring include Blackthorn, Bog-myrtle, Dill, Persian Walnut, St. John's Wort, Woodruff, and Wormwood. The herbs are commonly used singly, but some enthusiasts experiment with mixing them to achieve the perfect flavour.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SCOFF questionnaire** SCOFF questionnaire: The SCOFF questionnaire utilizes an acronym in a simple five-question test devised for use by non-professionals to assess the possible presence of an eating disorder. It was devised by Morgan et al. in 1999. The original SCOFF questionnaire was devised for use in the United Kingdom, thus the original acronym needs to be adjusted for users in the United States and Canada. The "S" in SCOFF stands for "Sick", which in British English means specifically to vomit; in American English and Canadian English it is synonymous with "ill". The "O" is used in the acronym to denote "one stone". A "stone" is an Imperial unit of weight made up of 14 lb (equivalent to 6.35 kg). The letters in the full acronym are taken from key words in the questions: Sick, Control, One stone (14 lb/6.35 kg), Fat, Food. Scoring: One point is assigned for every "yes"; a score of two or more (≥2) indicates a possible case of anorexia nervosa or bulimia nervosa.
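The scoring rule is mechanical enough to state as code. The sketch below is an illustration of the arithmetic only, with the question wording paraphrased, and is of course no substitute for clinical assessment.

```python
SCOFF_QUESTIONS = [  # paraphrased; key words capitalised
    "Do you make yourself Sick because you feel uncomfortably full?",
    "Do you worry you have lost Control over how much you eat?",
    "Have you recently lost more than One stone (14 lb / 6.35 kg) in three months?",
    "Do you believe yourself to be Fat when others say you are thin?",
    "Would you say that Food dominates your life?",
]

def scoff_score(answers):
    """answers: five booleans (True = 'yes'). Returns (score, flag),
    where the flag marks a score of two or more as a possible case."""
    score = sum(bool(a) for a in answers)
    return score, score >= 2

print(scoff_score([True, False, True, False, False]))  # (2, True)
```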
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Unipolar brush cell** Unipolar brush cell: Unipolar brush cells (UBCs) are a class of excitatory glutamatergic interneuron found in the granular layer of the cerebellar cortex and also in the granule cell domain of the cochlear nucleus. Structure: The UBC has a round or oval cell body with usually a single short dendrite that ends in a brush-like tuft of short dendrioles (dendrites unique to UBCs). These brush dendrioles form very large synaptic junctions. The dendritic brush and the large endings of the axonal branches are involved in the formation of cerebellar glomeruli. The UBC has one short dendrite where the granule cell has four or five. Structure: The brush dendrioles emit numerous thin evaginations called filopodia, unique to UBCs. The filopodia emanate from all over the neuron, in some cells even including the dendritic stem and the cell body. Although UBC filopodia do not bear synaptic junctions, they are nevertheless involved in cell signaling. Function: UBCs are intrinsically firing neurons and are considered a class of excitatory "local circuit neurons". They work together with vestibular fibres to integrate signals involving the orientation of the head that modulate reflex behaviour. UBCs function to amplify inputs from the vestibular ganglia and nuclei by spreading and prolonging excitation within the granular layer. Each UBC receives glutamatergic input on its dendritic brush from a single mossy fibre terminal, in the form of a giant glutamatergic synapse, and makes glutamatergic synapses with granule cells and other UBCs. Location: UBCs are plentiful in those regions linked to vestibular functions. In mammals, UBCs show an uneven distribution within the granule cell domains of the hindbrain, being most dense in the vermis, part of the flocculus/paraflocculus complex, and layers 2–4 of the dorsal cochlear nucleus. In the rat cerebellum, UBCs outnumber Golgi cells by a factor of 3 and approximately equal the number of Purkinje cells. Like other glutamatergic cells of the cerebellum, UBCs originate in the rhombic lip. History: UBCs were first described in 1977 by Altman and Bayer, who called them "pale cells". The term "unipolar brush cell" was first introduced in the early 1990s, reclassifying pale cells, Rat-302 cells, monodendritic cells, chestnut cells and mitt cells under the same name. The Federative International Committee on Anatomical Terminology (FICAT), a subcommittee of the International Federation of Associations of Anatomists (IFAA), officially recognized the "unipolar brush cell" as a new cell type of the cerebellar cortex in 2008. Pathological significance: UBCs situated in cerebellar lobule VII are affected in some cases of Pick's disease, where they develop cytoskeletal anomalies and are recognized by antibodies to abnormally hyperphosphorylated tau proteins. UBCs have also been implicated in the dysfunction of balance and motor coordination present in Down syndrome.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lecture circuit** Lecture circuit: The "lecture circuit" is a euphemistic reference to a planned schedule of regular lectures and keynote speeches given by celebrities, often ex-politicians, for which they receive an appearance fee. In Western countries, the lecture circuit has become a way for ex-politicians to earn an income after leaving office, or to raise money and their public profile in advance of a run for higher office. The Oxford Dictionary defines the term simply as "A regular itinerary of venues or events for touring lecturers or public speakers". Lecture circuit: In the United States, the modern lecture circuit was preceded by the Lyceum movement, popular during the 19th century. It encouraged local organisations and institutions to sponsor lectures, debates and instructional talks as a form of adult education and entertainment. The subsequent 20th-century formalisation of the lecture circuit as a genuine and accepted vocation has led to the establishment of agencies and the employment of agents dedicated to identifying and filling lucrative speaking engagements, creating a specific media market where speakers are able to put their message to an audience uninterrupted and without challenge. Examples: In an article about the lucrative nature of the Canadian lecture circuit, National Post columnist Tristin Hopper noted: Liberal leader Justin Trudeau was assailed in the House of Commons for skipping work to deliver speaking gigs, CBC anchor Peter Mansbridge had to answer questions about giving a paid speech to the oil lobby and CBC host Amanda Lang has been accused of getting too cozy with RBC after the bank paid her to give speeches. Examples: Having stepped down as United States Secretary of State in 2013, Hillary Clinton has received more than $200,000, in some instances, to deliver lectures to industry associations, universities and other groups. She delivered 14 such speeches in the five months after leaving office. While still a Member of Parliament, former UK Prime Minister Gordon Brown declared significant income from the lecture circuit. Former Prime Minister Tony Blair, too, is said to have declared approximately £12 million in lecture-circuit income per year since leaving office, receiving almost £400,000, in one instance, for two half-hour speeches in the Philippines. In popular culture: In the television series The West Wing, Alan Alda's character Arnold Vinick is urged to go on the lecture circuit after his unsuccessful campaign for the office of President of the United States in order to maintain the lifestyle to which he had become accustomed as a member of the United States Senate. The US version of the television series The Office includes a two-part episode titled Lecture Circuit.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polar organic chemical integrative sampler** Polar organic chemical integrative sampler: A polar organic chemical integrative sampler (POCIS) is a passive sampling device which allows for the in situ collection of a time-integrated average of hydrophilic organic contaminants, developed by researchers with the United States Geological Survey in Columbia, Missouri. POCIS provides a means for estimating the toxicological significance of waterborne contaminants. The POCIS sampler mimics the respiratory exposure of organisms living in the aquatic environment and can provide an understanding of bioavailable contaminants present in the system. POCIS can be deployed in a wide range of aquatic environments and is commonly used to assist in environmental monitoring studies. Background: The first passive sampling devices were developed in the 1970s to determine concentrations of contaminants in the air. In 1980 this technology was first adapted for the monitoring of organic contaminants in water. The initial type of passive sampler developed for aquatic monitoring purposes was the semipermeable membrane device (SPMD). SPMD samplers are most effective at absorbing hydrophobic pollutants with a log octanol-water partition coefficient (log Kow) ranging from 4 to 8. As the global emission of bioconcentratable persistent organic pollutants (POPs) was shown to result in adverse ecological effects, industry developed a wide range of increasingly water-soluble, polar hydrophilic organic compounds (HpOCs) to replace them. These compounds generally have lower bioconcentration factors. However, there is evidence that large fluxes of these HpOCs into aquatic environments may be responsible for a number of adverse effects to aquatic organisms, such as altered behavior, neurotoxicity, endocrine disruption, and impaired reproduction. In the late 1990s research was underway to develop a new passive sampler in order to monitor HpOCs with a log Kow value of less than 3. In 1999 the POCIS sampler was under development at the University of Missouri-Columbia. It gathered more support in the early 2000s as concern increased regarding the effects of pharmaceutical and personal care products in surface waters. The United States Geological Survey (USGS) has been heavily involved in the development of passive samplers and has articles in its database regarding the development of POCIS as early as 2000. The USGS Columbia Environmental Research Center (CERC) is a self-proclaimed international leader in the field of passive sampling. There have been recent efforts by the USGS to connect people who have an interest in passive sampling. An international workshop and symposium on passive sampling was held by the USGS in 2013 to connect developers, policy makers and end users in order to discuss ways of monitoring environmental pollution. Fundamentals: The POCIS device was developed and patented by Jimmie D. Petty, James N. Huckins, and David A. Alvarez, of the Columbia Environmental Research Center. Integrative passive samplers are an effective way to monitor the concentration of organic contaminants in aquatic systems over time. Most aquatic monitoring programs rely on collecting individual samples, often called grab samples, at a specific time. The grab sampling method is associated with many disadvantages that can be resolved by passive sampling techniques. When contaminants are present in trace amounts, grab sampling may require the collection of large volumes of water.
Also, lab analysis of the sample can only provide a snapshot of contaminant levels at the time of collection. This approach therefore has drawbacks when monitoring in environments where water contamination varies over time and episodic contamination events occur. Passive sampling techniques have been able to provide a time-integrated sample of water contamination with low detection limits and in situ extraction of analytes. Fundamentals: POCIS set-up The POCIS sampler consists of an array of sampling disks mounted on a support rod. Each disk consists of a solid sorbent sandwiched between two polyethersulfone (PES) microporous membranes, which are then compressed between two stainless steel rings that expose a sampling area. A standard POCIS disk has a sampling surface area to sorbent mass ratio of approximately 180 cm²/g. Because the amount of chemical sampled is directly related to the sampling surface area, it is sometimes necessary to combine extracts from multiple POCIS disks into one sample. Stainless steel rings, or other rigid inert material, are essential to prevent sorbent loss, as the PES membranes are not able to be heat-sealed. The POCIS array is then inserted and deployed within a protective canister. This canister is usually made of stainless steel or PVC and works to deflect debris that might displace the POCIS array during its deployment. The PES membrane acts as a semipermeable barrier between the sorbent and the surrounding aquatic environment. It allows dissolved contaminants to pass through to the sorbent while selectively excluding any particles larger than 100 nm. The membrane resists biofouling because polyethersulfone is less prone to it than other materials. The POCIS is versatile in that the sorbents can be changed to target different classes of contaminants. However, only two sorbent classes are considered as standards of all POCIS deployments to date. Fundamentals: Theory and modeling Each POCIS disk will sample a certain volume of water per day. The volume of water sampled varies from chemical to chemical and is dependent on the physical and chemical properties of the compound as well as the duration of sampling. The sampling rate of POCIS can vary with changes in water flow, turbulence, temperature, and the buildup of solids on the sampler's surface. The accumulation of contaminants into a POCIS device is the result of three successive processes occurring at the same time. First, the contaminants have to diffuse across the water boundary layer. The thickness of this layer is dependent on water flow and turbulence around the sampler and can significantly alter sampling rates. Second, the contaminant must transport across the membrane, either through the water-filled pores or through the membrane itself. Finally, contaminants transfer from the membrane into the sorbent material, mainly through adsorption. These last two steps make the modeling, understanding, and prediction of accumulation by a POCIS device challenging. To date, a limited number of chemical sampling rates have been determined. Accumulation of chemicals by a POCIS device generally follows first-order kinetics. The kinetics are characterized by an initial integrative phase, followed by an equilibrium partitioning phase. During the integrative phase of uptake, a passive sampling device accumulates residues linearly relative to time, assuming constant exposure concentrations.
Based on current results, the POCIS sampler remains in a linear phase for at least 30 days, and this has been observed up to 56 days. Therefore, both laboratory and field data justify the use of a linear uptake model for the calculation of sampling rates. In order to estimate the ambient water concentration of contaminants sampled by a POCIS device, there must be available calibration data applicable to in situ conditions for the target compound. Currently, this information is limited. Applicability: POCIS can be deployed in a wide range of aquatic environments including stagnant pools, rivers, springs, estuarine systems, and wastewater streams. However, there has been little research into the use of POCIS in strictly marine environments. Prior to deployment of a POCIS device, it is essential to select a study site that will maximize the effectiveness of the sampler. Selecting an area that is shaded will help prevent light-sensitive chemicals from being degraded. The site should also allow the sampler to be submerged in the water without being buried in the sediment. It is ideal to place the sampler in moving water in order to increase sampling rates; however, areas with an extremely turbulent water flow should be avoided so as to prevent damage to the POCIS device. Passive samplers are very vulnerable to vandalism and it is therefore important to secure the sampler in areas that are not easily visible and that are away from areas frequently used by people. POCIS samplers can be deployed for a period of time ranging from weeks to months. The shortest deployment lengths are typically 7 days but average 2–3 months. It is important to have a long enough deployment period to allow for adequate detection of contaminants at ambient environmental concentrations. Often, the two different types of POCIS devices will be deployed together in order to provide the greatest understanding of contamination. It is also important to deploy enough POCIS devices to ensure a large enough sample of contaminant is recovered for chemical analysis. An estimate of the number of samplers needed at a given site can be determined by the following inequality (a small numerical sketch appears at the end of this article). Applicability: Rs × t × n × Cc × Pr × Et > MQL × Vi, where Cc is the predicted environmental concentration of the contaminant, t is the deployment time in days, Rs is the sampling rate in liters of water extracted by the passive sampler per day (L/day), Pr is the overall method recovery for the analyte (expressed as a fraction of one; therefore 0.9 is used for 90 percent recovery), n is the number of passive samplers combined into a single sample, Et is the fraction of the total sample extract which is injected into the instrument for quantification, MQL is the method quantification limit, and Vi is the volume of standard injection (commonly 1 μL). Applicability: Relevant contaminants Any compound with a log Kow of less than or equal to 3 can concentrate in a POCIS sampler. Applicable classes of contaminants measured by POCIS are pharmaceuticals, household and industrial products, hormones, herbicides, and polar pesticides (Table 1). Currently, there are two POCIS configurations that are targeted at different classes of contaminants. A general POCIS design contains a sorbent that is used to collect pesticides, natural as well as synthetic hormones, and wastewater-related chemicals. The pharmaceutical POCIS configuration contains a sorbent that is designed to specifically target classes of pharmaceuticals. Applicability: Applicable contaminants that concentrate in a POCIS device.
Not to be considered a complete list. Applicability: POCIS processing Before the POCIS is constructed, all the hardware as well as the sorbents and membrane must be thoroughly cleaned so that any potential interference is removed. During and after sampling, the only cleaning necessary is the removal of any sediment that has adhered to the surface of the sampler. After assembly, and prior to deployment, the samplers are stored in frozen airtight containers to avoid any contamination. The samplers should be kept in airtight containers during transportation both to and from the sampling site so that airborne contaminants do not contaminate the sampler. It is ideal to keep the samplers cold while transporting them in order to preserve the integrity of the samples. After the POCIS is retrieved from the field, the membrane is gently cleaned to reduce the possibility of any contamination of the sorbent. The sorbent is placed into a chromatography column so that the sampled chemicals can be recovered using an organic solvent. The solvent used is specifically chosen based on the type of sorbent and chemicals sampled. The sample can go through further processing, such as cleanup or fractionation, depending on the desired use of the sample. Applicability: Data analysis After the sample has been processed, the extract can be analysed using a variety of data analysis techniques. The chemical analysis and analytical instrumentation used depend on the goal of the study. Many analyses require multiple samples, although in some cases a single POCIS sample can be used for multiple analyses. It is vital to use quality control (QC) procedures when using passive samplers. It is common practice for 10% to 50% of the total number of samples to be used for QC purposes. The number of QC samples depends on the study objectives. The QC samples are used to address issues such as sample contamination and analyte recovery. The types of QC samples commonly used include reagent blanks, field blanks, matrix spikes, and procedural spikes. A large number of studies have been performed in which POCIS data was combined with bioassays to measure biological endpoints. Testing POCIS extracts in biological assays is useful, as a POCIS device samples over its entire deployment period and biologically active compounds can be effectively monitored. It can also be argued that the use of POCIS is more relevant from an ecotoxicological perspective, as the use of a passive sampler mimics the uptake of compounds by organisms. Another strength in using bioassays to test environmental samples is that they can provide an integrative measure of the toxic potential of a group of chemical compounds, rather than a single contaminant. Other passive samplers: There are many types of passive samplers in use that specialize in absorbing different classes of aquatic contaminants found in the environment. Chemcatcher and SPMD are two types of passive samplers that are also commonly used. Monitoring programs use SPMDs to measure hydrophobic organic contaminants. SPMDs are designed to mimic the bioconcentration of contaminants in fatty tissues (ITRC, 2006). Contaminants applicable to the use of an SPMD include, but are not limited to, polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), organochlorine pesticides, dioxins, and furans. The SPMD consists of a thin-walled, nonporous, polyethylene membrane tube that is filled with a high-molecular-weight lipid.
These tubes are approximately 90 cm long and wrap around the inside of a stainless steel deployment canister. SPMDs are efficient at absorbing pollutants with a log Kow of 4–8. This slightly overlaps with the range of contaminants absorbed by POCIS. Because of this, SPMDs and POCIS devices are often used together in monitoring studies to achieve a more representative understanding of contamination. Future development: The POCIS system is continually evaluated for the potential to sample a wide range of contaminants. Calibration data and analyte recovery methods are currently being generated by researchers around the world. Techniques to merge the POCIS device with bioassays are also under development. The POCIS sampler already serves as a versatile, economical, and robust tool for monitoring studies and for observing trends in both space and time. However, sampling rates are not yet robust enough to supply reliable contaminant concentrations, particularly with regard to environmental quality standards. A limited number of sampling rates have been determined for chemicals, and the determination of additional sampling rate data is necessary for the advancement of passive sampling technology.
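Returning to the sampler-count inequality given in the Applicability section, the smallest integer n satisfying it is straightforward to compute. The sketch below is an illustration with made-up input values; all quantities must be supplied in mutually consistent units.

```python
import math

def min_samplers(cc, t_days, rs, pr, et, mql, vi):
    """Smallest integer n with  Rs*t*n*Cc*Pr*Et > MQL*Vi,
    using the variable meanings defined in the Applicability section.
    Inputs must be in mutually consistent units."""
    collected_per_sampler = rs * t_days * cc * pr * et
    return math.floor(mql * vi / collected_per_sampler) + 1  # strict '>'

# Hypothetical values, for illustration only:
# Cc = 5 (conc. units/L), 28-day deployment, Rs = 0.1 L/day,
# 90% recovery, 5% of extract injected, MQL*Vi = 50 in matching units.
print(min_samplers(cc=5.0, t_days=28, rs=0.1, pr=0.9, et=0.05, mql=50.0, vi=1.0))  # -> 80
```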
**Wei-Shou Hu** Wei-Shou Hu: Wei-Shou Hu is a Taiwanese-American chemical engineer. He earned his B.S. in agricultural chemistry from National Taiwan University in 1974 and his Ph.D. in biochemical engineering from the Massachusetts Institute of Technology, under the guidance of Daniel I.C. Wang, in 1983. He has been a professor at the University of Minnesota since 1983. Dr. Hu has shaped the field of cell culture bioprocessing since its infancy by steadfastly introducing quantitative and systematic analysis into it. His work, which covers areas such as modeling and controlling cell metabolism, modulating glycosylation, and process data mining, has helped shape advances in biopharmaceutical process technology. He recently led an industrial consortium that embarked on genomic research on Chinese hamster ovary cells, the main workhorse of biomanufacturing, and that promotes post-genomic research in cell bioprocessing. His research focuses on cell culture bioprocessing, particularly metabolic control of the physiological state of the cell. In addition to his work with Chinese hamster ovary cells, his work has enabled the use of process engineering for cell therapy, especially with liver cells. Dr. Hu has written four biotechnology books. He is the 2005 recipient of the Marvin Johnson Award from the American Chemical Society, the distinguished service award of the Society for Biological Engineering, a special award from the Asia Pacific Biochemical Engineering Conference (2009), and the Amgen Award from Engineering Conferences International, as well as both the distinguished service award and the Division Award from the Food, Pharmaceuticals and Bioengineering Division of the American Institute of Chemical Engineers. He has authored the books Bioseparations, Cell Culture Technology for Pharmaceutical and Cell-Based Therapies, and Cell Culture Bioprocess Engineering. He is currently the Distinguished McKnight University Professor of Chemical Engineering and Materials Science at the University of Minnesota.
**When Topology Meets Chemistry** When Topology Meets Chemistry: When Topology Meets Chemistry: A Topological Look At Molecular Chirality is a book in chemical graph theory on the graph-theoretic analysis of chirality in molecular structures. It was written by Erica Flapan, based on a series of lectures she gave in 1996 at the Institut Henri Poincaré, and was published in 2000 by the Cambridge University Press and the Mathematical Association of America as the first volume in their shared Outlooks book series.

Topics: A chiral molecule is a molecular structure that is different from its mirror image. This property, while seemingly abstract, can have significant consequences in biochemistry, where the shape of molecules is essential to their chemical function, and where a chiral molecule can have very different biological activity from that of its mirror-image molecule. When Topology Meets Chemistry concerns the mathematical analysis of molecular chirality.

Topics: The book has seven chapters, beginning with an introductory overview and ending with a chapter on the chirality of DNA molecules.

Topics: Other topics covered in the book include the rigid geometric chirality of tree-like molecular structures such as tartaric acid, and the stronger topological chirality of molecules that cannot be deformed into their mirror image without breaking and re-forming some of their molecular bonds. It discusses results of Flapan and Jonathan Simon on molecules with the molecular structure of Möbius ladders, according to which every embedding of a Möbius ladder with an odd number of rungs is chiral, while Möbius ladders with an even number of rungs have achiral embeddings. In a result on the symmetries of graphs, it shows that the symmetries of certain graphs can always be extended to topological symmetries of three-dimensional space, from which it follows that non-planar graphs with no self-inverse symmetry are always chiral. It discusses graphs for which every embedding is topologically knotted or linked, and it includes material on the use of knot invariants to detect topological chirality.

Audience and reception: The book is self-contained and requires only an undergraduate level of mathematics. It includes many exercises, making it suitable for use as a textbook at both the advanced undergraduate and introductory graduate levels. Reviewer Buks van Rensburg describes the book's presentation as "efficient and intuitive", and recommends the book to "every mathematician or chemist interested in the notions of chirality and symmetry".
**McCay cubic** McCay cubic: In mathematics, in triangle geometry, the McCay cubic (also called M'Cay cubic or Griffiths cubic) is a cubic plane curve in the plane of the reference triangle, associated with it and having several remarkable properties. It is the third cubic curve in Bernard Gibert's Catalogue of Triangle Cubics and is assigned the identification number K003.

Definition: The McCay cubic can be defined by locus properties in several ways. For example, the McCay cubic is the locus of a point P such that the pedal circle of P is tangent to the nine-point circle of the reference triangle ABC. The McCay cubic can also be defined as the locus of a point P such that the circumcevian triangle of P and ABC are orthologic.

Equation of the McCay cubic: The equation of the McCay cubic in barycentric coordinates x : y : z is

$$\sum_{\text{cyclic}} a^2\,(b^2 + c^2 - a^2)\; x\,(c^2 y^2 - b^2 z^2) = 0.$$

The equation in trilinear coordinates α : β : γ is

$$\alpha\,(\beta^2 - \gamma^2)\cos A + \beta\,(\gamma^2 - \alpha^2)\cos B + \gamma\,(\alpha^2 - \beta^2)\cos C = 0.$$

McCay cubic as a stelloid: A stelloid is a cubic that has three real concurrent asymptotes making 60° angles with one another. The McCay cubic is a stelloid in which the three asymptotes concur at the centroid of triangle ABC. A circum-stelloid having the same asymptotic directions as the McCay cubic, with its asymptotes concurring at a certain (finite) point, is called a McCay stelloid. The point where the asymptotes concur is called the "radial center" of the stelloid. Given a finite point X there is one and only one McCay stelloid with X as its radial center.
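As a quick plausibility check on the barycentric equation above, the following sympy sketch (an illustration added here, with the point list chosen for the check rather than taken from the source) verifies symbolically that the vertices, the incenter X1, and the circumcenter X3 all satisfy the equation for arbitrary side lengths a, b, c.

```python
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
x, y, z = sp.symbols('x y z')

# Barycentric equation of the McCay cubic K003, written out cyclically.
F = (a**2*(b**2 + c**2 - a**2) * x * (c**2*y**2 - b**2*z**2)
     + b**2*(c**2 + a**2 - b**2) * y * (a**2*z**2 - c**2*x**2)
     + c**2*(a**2 + b**2 - c**2) * z * (b**2*x**2 - a**2*y**2))

# Points expected to lie on the cubic: the vertices A, B, C,
# the incenter X1 = (a : b : c), and the circumcenter X3.
points = {
    'A': (1, 0, 0),
    'B': (0, 1, 0),
    'C': (0, 0, 1),
    'incenter X1': (a, b, c),
    'circumcenter X3': (a**2*(b**2 + c**2 - a**2),
                        b**2*(c**2 + a**2 - b**2),
                        c**2*(a**2 + b**2 - c**2)),
}
for name, (px, py, pz) in points.items():
    value = sp.expand(F.subs({x: px, y: py, z: pz}))
    print(name, value == 0)  # prints True for every point
```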
**Dressage judge** Dressage judge: A dressage judge is a certified official responsible for assessing a dressage test. Dressage tests are assessed at all levels of the sport, and the discipline depends on its judges, who must score the rider throughout the test. A dressage judge is open and transparent and judges what he or she sees at that moment. A dressage judge must first obtain a certificate to judge; the judge is then a certified official with the authority to judge official national and, where qualified, international competitions. To become a judge, a candidate must be educated through the national sports federation of the country in which he or she is active. A judge starts at the lowest level and then trains up to higher levels; the form of this education differs per national federation. The highest level to judge is the Grand Prix, which is also the highest level in dressage.

Dressage judge: Certified national Grand Prix judges can follow the training to become an international judge with the FEI if their national federation puts them forward. The highest status for an international judge is Level 4, formerly known as an 'O' judge or 5* judge. With this status, a Level 4 judge is authorized to judge major championships, such as the World Equestrian Games and the Olympic Games.

International judge: International judges are authorized to judge at international competitions. It is only possible to become an international judge if the judge is registered by the national federation to follow the FEI training for international judges. The FEI is the umbrella organization for equestrian sports and is responsible for training and supervising judges; only certified FEI judges have the authority to judge international competitions. The international competitions are organized only under the FEI and are known as Concours de Dressage International.

International judge: There are four different levels for an FEI judge:
Level 1 Judge (the entry level for national judges who do not have a Grand Prix education system in their country): Level 1 judges are licensed to judge internationally through Prix St. Georges and Intermediate I level, with a limited range of competitions.
Level 2 Judge (the entry level for national judges who do have a Grand Prix education system in their country): Level 2 judges are licensed to judge internationally through Grand Prix level, except 4* or higher-level competitions, FEI Championships, World Cups and the Olympic Games.

International judge: Level 3 Judge (former 'I' or 4* international judge): Level 3 judges are licensed to judge all international Grand Prix competitions, including FEI Championships, except the World Equestrian Games and the Olympic Games.
Level 4 Judge (former 'O' or 5* Olympic judge): Level 4 judges are licensed to judge all international Grand Prix competitions, including FEI Championships, the World Equestrian Games, and the Olympic Games. This is the highest level an international dressage judge can reach.

International judge: Dressage judges worldwide There are currently 192 licensed FEI dressage judges from different countries worldwide. The list below shows how many FEI judges come from which countries (in 2020), across Levels 1 to 4.

International judge: Former champion riders became judges All judges must have competed themselves and have ridden at a certain level.
Many former international competition riders decided to become judges after their riding careers in order to stay involved in the sport. A number of well-known former top riders, who once took part in major championships such as the European Championships, World Championships or Olympic Games, have since become international judges. Elisabeth Max-Theurer became Olympic champion at the 1980 Olympic Games and was promoted to Level 4 judge in 2018. Other former riders who now focus on international judging include Lars Andersson, Olympian Ricky MacMillan, Sandy Phillips, Marian Cunningham, Lorraine Stubbs, Charlotte Bredahl, Hilda Gurney, Karen Pavicic, Sven Rothenberger, Peter Storr and Jennie Loriston-Clarke.
**Rabacfosadine** Rabacfosadine: Rabacfosadine, sold under the brand name Tanovea-CA1, is a guanine nucleotide analog used for the treatment of lymphoma in dogs. The drug was granted conditional approval by the U.S. Food and Drug Administration under application number 141-475 for use in treating canine lymphoma in December 2016, pending a full demonstration of effectiveness, and became the first drug to receive full approval for the treatment of canine lymphoma in July 2021. Originally developed by Gilead Sciences as GS-9219, rabacfosadine is no longer being pursued for use in the treatment of lymphoma in humans.

Rabacfosadine: The active form of rabacfosadine is a chain-terminating inhibitor of the major deoxyribonucleic acid (DNA) polymerases. In vitro studies have demonstrated that rabacfosadine inhibits DNA synthesis, resulting in S phase arrest and induction of apoptosis. It also inhibits the proliferation of lymphocytes in dogs with naturally occurring lymphoma.

Veterinary uses: In July 2021, the U.S. Food and Drug Administration (FDA) approved Tanovea to treat lymphoma in dogs. Lymphoma, also called lymphosarcoma, is a type of cancer that can affect many species, including dogs. Tanovea is the first conditionally approved new animal drug for dogs to achieve the FDA's full approval.

Adverse effects: Common side effects of rabacfosadine are decreased white blood cell count, diarrhea, vomiting, decreased appetite or loss of appetite, weight loss, decreased activity level, and skin problems. Other side effects may occur.
**Compu-Read** Compu-Read: Compu-Read is an educational program originally developed by Sherwin Steffin of Edu-Ware Services in 1979 for the Apple II. It consists of four modules that train the user to rapidly increase comprehension and retention: character recognition, high-speed word recognition, synonyms, and sentence comprehension. In each, the user selects the initial difficulty level, and the computer matches the display speed to the user's performance.

Compu-Read: Steffin first wrote Compu-Read as a text-based program while serving as a research analyst at UCLA. The first version was published by Programma International, but after being laid off from the university, Steffin revised Compu-Read and used it to launch his new company, Edu-Ware. Edu-Ware upgraded the program to high-resolution graphics using its EWS3 graphics engine in 1981, renamed it Compu-Read 3.0, and ported it to the Atari 8-bit family, Commodore 64, and IBM PC. Compu-Read was featured in Edu-Ware's catalogs until the company's closure in 1985.
**Indicator bacteria** Indicator bacteria: Indicator bacteria are types of bacteria used to detect and estimate the level of fecal contamination of water. They are not themselves dangerous to human health but are used to indicate the presence of a health risk.

Indicator bacteria: Each gram of human feces contains approximately 100 billion (1×10^11) bacteria. These may include species of pathogenic bacteria, such as Salmonella or Campylobacter, associated with gastroenteritis. In addition, feces may contain pathogenic viruses, protozoa and parasites. Fecal material can enter the environment from many sources, including wastewater treatment plants, livestock or poultry manure, sanitary landfills, septic systems, sewage sludge, pets and wildlife. If sufficient quantities are ingested, fecal pathogens can cause disease. The variety and often low concentrations of pathogens in environmental waters make them difficult to test for individually. Public agencies therefore use the presence of other, more abundant and more easily detected fecal bacteria as indicators of fecal contamination. Such bacteria are found not only in fecal matter but also in oral and gut contents.

Criteria for indicator organisms: The US Environmental Protection Agency (EPA) lists the following criteria for an organism to be an ideal indicator of fecal contamination:
The organism should be present whenever enteric pathogens are present.
The organism should be useful for all types of water.
The organism should have a longer survival time than the hardiest enteric pathogen.
The organism should not grow in water.
The organism should be found in warm-blooded animals' intestines.
None of the types of indicator organisms currently in use fits all of these criteria perfectly; however, when cost is considered, the use of indicators becomes necessary.

Types of indicator organisms: Commonly used indicator bacteria include total coliforms, or a subset of this group, fecal coliforms, which are found in the intestinal tracts of warm-blooded animals. Total coliforms were used as fecal indicators by public agencies in the US as early as the 1920s. These organisms can be identified by the fact that they all metabolize the sugar lactose, producing both acid and gas as byproducts. Fecal coliforms are more useful as indicators in recreational waters than total coliforms, which include species that are naturally found in plants and soil; however, even some species of fecal coliforms do not have a fecal origin, such as Klebsiella pneumoniae. Perhaps the biggest drawback of using coliforms as indicators is that they can grow in water under certain conditions.

Types of indicator organisms: Escherichia coli (E. coli) and enterococci are also used as indicators.

Current methods of detection: Membrane filtration and culture on selective media Indicator bacteria can be cultured on media specifically formulated to allow the growth of the species of interest and to inhibit the growth of other organisms. Typically, environmental water samples are filtered through membranes with small pore sizes, and the membrane is then placed onto a selective agar. It is often necessary to vary the volume of water sample filtered in order to prevent too few or too many colonies from forming on a plate. Bacterial colonies can be counted after 24 to 48 hours, depending on the type of bacteria. Counts are reported as colony-forming units per 100 mL (cfu/100 mL).
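As a small worked example of the reporting convention just described, the sketch below scales a raw plate count to cfu/100 mL. The numbers are hypothetical, and real protocols also reject plates whose colony counts fall outside an acceptable counting range.

```python
# Convert a membrane-filtration plate count to the standard cfu/100 mL
# reporting unit. Inputs are hypothetical.

def cfu_per_100ml(colonies_counted: int, volume_filtered_ml: float) -> float:
    """Scale a plate count to the standard reporting volume of 100 mL."""
    return colonies_counted * 100.0 / volume_filtered_ml

# Example: 42 colonies on a plate prepared from 25 mL of filtered sample.
print(cfu_per_100ml(42, 25.0))  # 168.0 cfu/100 mL
```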
Current methods of detection: Fast detection using chromogenic substances One technique for detecting indicator organisms is the use of chromogenic compounds, which are added to conventional or newly devised media used for isolating the indicator bacteria. These chromogenic compounds change color or fluoresce when modified by specific bacterial enzymes or metabolites. This enables easy detection and avoids the need to isolate pure cultures and run confirmatory tests.

Current methods of detection: Application of antibodies Immunological methods using monoclonal antibodies can be used to detect indicator bacteria in water samples. Precultivation in a selective medium must precede detection to avoid detecting dead cells. ELISA antibody technology has been developed to allow naked-eye detection for rapid identification of coliform microcolonies. Other uses of antibodies in detection involve magnetic beads coated with antibodies for the concentration and separation of oocysts and cysts, as described below for immunomagnetic separation (IMS) methods.

Current methods of detection: IMS/culture and other rapid culture-based methods In immunomagnetic separation, purified antibodies are biotinylated and bound to streptavidin-coated paramagnetic particles. The raw sample is mixed with the beads, then a magnet is used to hold the target organisms against the vial wall while the non-bound material is poured off. This method can be used to recover specific indicator bacteria.

Gene sequence-based methods Gene sequence-based methods depend on the recognition of gene sequences unique to specific strains of organisms. Polymerase chain reaction (PCR) and fluorescence in situ hybridization (FISH) are gene sequence-based methods currently being used to detect specific strains of indicator bacteria.

Water quality standards for bacteria: Drinking water standards The World Health Organization Guidelines for Drinking Water Quality state that, as an indicator organism, Escherichia coli provides conclusive evidence of recent fecal pollution and should not be present in water meant for human consumption. In the U.S., the EPA Total Coliform Rule states that a public water system is out of compliance if more than 5 percent of its monthly water samples contain coliforms.

Water quality standards for bacteria: Recreational standards Early studies showed that individuals who swam in waters with geometric mean coliform densities above 2300/100 mL for three days had higher illness rates. In the 1960s, these numbers were converted to fecal coliform concentrations assuming that 18 percent of total coliforms were fecal. Consequently, the National Technical Advisory Committee in the US recommended the following standard for recreational waters in 1968: no more than 10 percent of total samples during any 30-day period should exceed 400 fecal coliforms/100 mL, and the log mean should not exceed 200/100 mL (based on a minimum of 5 samples taken over not more than a 30-day period). Despite criticism, EPA recommended this criterion again in 1976; however, the Agency initiated numerous studies in the 1970s and 1980s to overcome the weaknesses of the earlier studies. In 1986, EPA revised its bacteriological ambient water quality criteria recommendations to include E. coli and enterococci.
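To illustrate the 1968 criterion in code, here is a minimal sketch that checks a batch of fecal coliform results against the two conditions quoted above: a log (geometric) mean of at most 200/100 mL over at least 5 samples in a 30-day window, and no more than 10 percent of samples above 400/100 mL. The sample values are invented for the example.

```python
# Check fecal coliform samples (cfu/100 mL) against the 1968 NTAC-style
# recreational criterion described above. Sample data are hypothetical.
import math

def meets_1968_criterion(samples):
    if len(samples) < 5:
        raise ValueError("the criterion requires at least 5 samples")
    geo_mean = math.exp(sum(math.log(s) for s in samples) / len(samples))
    frac_over_400 = sum(s > 400 for s in samples) / len(samples)
    return geo_mean <= 200 and frac_over_400 <= 0.10

print(meets_1968_criterion([120, 80, 310, 150, 95]))  # True (geo mean ~134)
```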
Water quality standards for bacteria: Canada's National Agri-Environmental Standards Initiative characterizes the risks associated with fecal water pollution by comparing bacterial water quality at agricultural sites with that at reference sites away from human or livestock sources. This approach generally results in lower levels of E. coli being used as the standard or “benchmark”, based on a study in which pathogens were detected in 80% of water samples containing less than 100 cfu E. coli per 100 mL.

Risk assessment for exposure to pathogens in recreational waters: Most cases of bacterial gastroenteritis are caused by food-borne enteric microorganisms, such as Salmonella and Campylobacter; however, it is also important to understand the risk of exposure to pathogens via recreational waters. This is especially the case in watersheds where human or animal wastes are discharged to streams and downstream waters are used for swimming or other recreational activities. Important pathogens other than bacteria include viruses such as rotavirus, hepatitis A and hepatitis E, and protozoa such as Giardia, Cryptosporidium and Naegleria fowleri. Due to the difficulties associated with monitoring pathogens in the environment, risk assessments often rely on the use of indicator bacteria.

Risk assessment for exposure to pathogens in recreational waters: Epidemiological studies In the 1950s, a series of epidemiological studies were done in the US to determine the relationship between the quality of natural waters and the health of bathers. The results indicated that swimmers were more likely than non-swimmers to have gastrointestinal symptoms, eye infections, skin complaints, ear, nose, and throat infections, and respiratory illness, and in some cases higher coliform levels correlated with a higher incidence of gastrointestinal illness, although the sample sizes in these studies were small. Since then, studies have been done to confirm causal relationships between swimming and certain health outcomes. A review of 22 studies in 1998 confirmed that the health risks for swimmers increased as the number of indicator bacteria in recreational waters increased, and that E. coli and enterococci concentrations correlated best with health outcomes among all the indicators studied. The relative risk (RR) of illness for swimmers in polluted freshwater versus swimmers in unpolluted water was between 1 and 2 for the majority of the data sets reviewed. The same study concluded that bacterial indicators were not well correlated with virus concentrations.

Risk assessment for exposure to pathogens in recreational waters: Fate and transport of pathogens Survival of pathogens in waste materials, soil, or water depends on many environmental factors, including temperature, pH, organic matter content, moisture, exposure to light, and the presence of other organisms. Fecal material can be directly deposited, washed into waters by overland runoff, transported through the ground, or discharged to surface waters via sewer lines, pipes, or drainage tiles.
Risk of exposure to humans requires:
pathogens to survive and be present;
individuals to recreate in surface waters;
individuals to come into contact with the water for sufficient time, or to ingest sufficient volumes of water, to receive an infectious dose.
Die-off rates of bacteria in the environment are often exponential; therefore, direct deposition of fecal material into waters generally contributes higher concentrations of pathogens than material that must be transported overland or through the subsurface.

Risk assessment for exposure to pathogens in recreational waters: Human exposure In general, children, the elderly, and immunocompromised individuals require a lower dose of a pathogenic organism in order to contract an infection. Presently there are very few studies that quantify the amount of time people are likely to spend in recreational waters and how much water they are likely to ingest. In general, children swim more often, stay in the water longer, submerge their heads more often, and swallow more water, all of which increases their exposure to waterborne pathogens.

Risk assessment for exposure to pathogens in recreational waters: Quantitative microbiological risk assessment Quantitative microbiological risk assessments (QMRAs) combine pathogen concentrations in water with dose-response relationships and data reflecting potential exposure to estimate the risk of infection (a worked sketch follows at the end of this article).

Risk assessment for exposure to pathogens in recreational waters: Data on water exposure are generally collected using questionnaires, but may also be determined from actual measurements of water ingested, or estimated from previously published data. Respondents are asked to report the frequency, timing, and location of exposures, detailed information about the amount of water swallowed and head submersion, and basic demographic characteristics such as age, gender, socioeconomic status, and family composition. Once sufficient data are collected and determined to be representative of the general population, they are usually fit with distributions, and these distribution parameters are then used in the risk assessment equations. Monitoring data representing the occurrence of pathogens, direct measurements of pathogen concentrations, or estimates deriving pathogen concentrations from indicator bacteria concentrations are also fit with distributions. Dose is calculated by multiplying the pathogen concentration per unit volume by the volume of water ingested. Dose-response relationships can also be fit with a distribution.

Risk assessment for exposure to pathogens in recreational waters: Risk management and policy implications The more assumptions that are made, the more uncertain the resulting estimates of pathogen-related risk will be. However, even with considerable uncertainty, QMRAs are a good way to compare different risk scenarios. In a study comparing estimated health risks from exposure to recreational waters impacted by human and non-human sources of fecal contamination, QMRA determined that the risk of gastrointestinal illness from exposure to waters impacted by cattle was similar to that from waters impacted by human waste, and both were higher than for waters impacted by gull, chicken, or pig faeces. Such studies can be useful to risk managers in determining how best to focus their limited resources; however, risk managers must be aware of the limitations of the data used in these calculations. For example, this study used data describing concentrations of Salmonella in chicken feces published in 1969.
Methods for quantifying bacteria, changes in animal housing practices and sanitation, and many other factors may have changed the prevalence of Salmonella since that time. Also, such an approach often ignores the complicated fate and transport processes that determine bacteria concentrations from the source to the point of exposure.

Addressing bacterial water quality problems: In the US, individual states are allowed to develop their own water quality standards based on EPA's recommendations under the Clean Water Act of 1977. Once water quality standards are approved, states are tasked with monitoring their surface waters to determine where impairments occur, and watershed plans called Total Maximum Daily Loads (TMDLs) are developed to direct water quality improvement efforts, including changes to allowable bacteria loading by point sources and recommendations for changes to practices that reduce nonpoint-source contributions to bacteria loads. Also, many states have beach monitoring programs to warn swimmers when high levels of indicator bacteria are detected.
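As a concluding illustration of the QMRA calculation described in the risk assessment section above, the sketch below multiplies a pathogen concentration by an ingested volume to obtain a dose, then passes the dose through the widely used exponential dose-response model P = 1 − exp(−r × dose). All parameter values are hypothetical placeholders, not published dose-response data.

```python
# Minimal QMRA sketch: dose = concentration x ingested volume, then an
# exponential dose-response model. All numbers are hypothetical.
import math

def infection_risk(conc_per_litre: float, volume_litres: float, r: float) -> float:
    """Estimate infection probability via P = 1 - exp(-r * dose)."""
    dose = conc_per_litre * volume_litres
    return 1.0 - math.exp(-r * dose)

# Example: 2 organisms/L, 50 mL swallowed while swimming, assumed r = 0.05.
print(infection_risk(2.0, 0.05, 0.05))  # ~0.005
```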
**Aortic body** Aortic body: The aortic bodies are one of several small clusters of peripheral chemoreceptors located along the aortic arch. They are important in measuring the partial pressures of oxygen and carbon dioxide in the blood, and blood pH.

Structure: The aortic bodies are collections of chemoreceptors present on the aortic arch. Most are located above the aortic arch, while some are located on the posterior side of the aortic arch, between it and the pulmonary artery below. They consist of glomus cells and sustentacular cells. Some sources equate the "aortic bodies" and "paraaortic bodies", while other sources explicitly distinguish between the two. When a distinction is made, the "aortic bodies" are chemoreceptors which regulate the circulatory system, while the "paraaortic bodies" are the chromaffin cells which manufacture catecholamines.

Function: The aortic bodies measure the partial gas pressures and composition of the arterial blood flowing past them. The changes they detect include the oxygen partial pressure, the carbon dioxide partial pressure, and pH (indirectly affected by the carbon dioxide concentration). They are particularly sensitive to changes in pH. Aortic bodies are more sensitive detectors of total arterial blood oxygen content than the carotid body chemoreceptors, which are more sensitive detectors of the partial pressure of oxygen in the arterial blood. The aortic bodies give feedback to the medulla oblongata, specifically to the dorsal respiratory group, via the afferent branches of the vagus nerve (cranial nerve X). The medulla oblongata, in turn, regulates breathing and blood pressure.

Clinical significance: A paraganglioma, also known as a chemodectoma, is a tumor that may involve an aortic body. Swelling can also occur.
**Lenovo Essential desktops** Lenovo Essential desktops: Lenovo’s line of Essential desktops is a collection of budget-conscious machines designed for consumers, and advertised as being "affordable, space saving, and energy efficient". The Essential desktop line is distinct from both Lenovo’s ThinkCentre line and Lenovo’s IdeaCentre line. Lenovo defines its ThinkCentre desktops as business-oriented computers, while the IdeaCentre desktops are meant primarily for entertainment. The Essential range of desktops can be categorized as sitting between the two, meant more for ordinary everyday use. The Essential desktops are frequently (and erroneously) referred to as IdeaCentre desktops. For example, Desktop Review indicated that the C300 was an IdeaCentre. However, the Lenovo U.S. Web site indicates that the C300 is part of their value line, or ‘Essential’ line. The only brand associated with these desktops is ‘Lenovo’ – ‘Essential’ represents a range of products and is not a brand in itself.

Product series: There are two lines of Lenovo-branded desktops sold under the ‘Essential’ banner: the C Series and the H Series.

C Series: The Lenovo C Series desktops launched by Lenovo are the C100, C200, C300, and C315.

2010: The Lenovo C Series desktops launched in 2010 were the C200 and C315.

Product series: C200 The C200 was an all-in-one (AIO) desktop launched by Lenovo in April 2010. Hardware Bistro indicated that the desktop had entry-level specifications, making it more affordable than the B500. The review also indicated that the desktop’s unique selling point was its 18.5-inch touchscreen display. The desktop was made available with the Intel Atom D510 processor and 2GB of RAM, and could be configured with up to 500GB of storage capacity. The desktop offered options for both integrated and discrete graphics: the integrated option was Intel GMA 3150, while the discrete option was the Nvidia Ion graphics card with 256MB of video RAM. Additional features included a DVD multiburner, an integrated Web camera, integrated stereo speakers, LAN, and WiFi.

Product series: C315 The C315 was released in 2010 by Lenovo. In its review, silentpcreview said that the “C315 is one of the more interesting all-in-one PCs with which we've crossed paths.” The C315 was equipped with an AMD Athlon II X2 250u processor, a low-voltage processor with a speed of 1.6 GHz. The hard disk storage capacity was 500GB. The desktop also offered discrete graphics, with an ATI Mobility Radeon HD4530 graphics card, as well as 4GB of DDR2 RAM and a slim dual-layer DVD writer. Detailed specifications of the desktop are given below:
Chipset: AMD 690M
ATSC Tuner: Built-in
Networking: 10/100 Ethernet, 802.11g
Card Reader: 6-in-1
Webcam: 0.3 megapixel (maximum resolution of 640x480)
USB Ports: 6 USB 2.0
Operating System: Microsoft Windows 7 Professional x64
Dimensions: 19.05 x 14.12 x 2.56 inches
Weight: 16.3 lbs

2009: The Lenovo C Series desktops launched in 2009 by Lenovo were the C100 and C300.

Product series: C100 Announced in September 2009, the C100 was an all-in-one (AIO) desktop designed for consumer use. The 18.5-inch display was 2 inches deep, with an aspect ratio of 16:9. The desktop also included software such as Lenovo’s OneKey Antivirus and OneKey Recovery, which allowed one-button system scanning and restoration. The dimensions of the desktop were 18.5 x 14.5 x 4 inches.
The desktop was also made available with options for either the Intel Atom 230 single-core processor or the Intel Atom 330 dual-core processor. In addition, the desktop included a DVD reader/writer and four USB ports.

Product series: C300 The C300 was an AIO launched in 2009 as part of Lenovo’s Essential product line. Desktop Review listed the pros of the desktop as the good 20-inch display with a resolution of 1600x900, the 3.5-inch hard disk drive, and the optional discrete graphics. The cons were listed as the keyboard and the standard single-core Intel Atom 230 1.6 GHz processor. The desktop’s dimensions were 19.05 x 14.12 x 3.28 inches. Additional specifications of the desktop are as follows:
Operating system: Windows Vista Home Basic
Memory: 3GB DDR2
Hard drive: 320GB
Optical drive: 8X DVD+/-RW
Audio: Integrated HD audio
Speakers: built-in
Graphics: Intel GMA 950
Wireless networking: 802.11b/g
Card reader: Built-in SDHC memory card reader

H Series: The Lenovo H Series desktops launched by Lenovo are the H200, H210, H215, H230, H320, and H405.

Product series: 2011 The Lenovo H Series desktops released in 2011 were the H215, H220, and H320.

Product series: H215 The H215 offered AMD Athlon II X2 dual-core processors, 2GB of DDR3 RAM, and a 320GB hard disk drive. Additional detailed specifications for the H215 are given below:
Chipset: AMD 760G
Graphics: ATI Radeon HD 3000 (integrated)
Optical drive: dual-layer DVD reader/writer
Audio: integrated HD audio
Media card reader: integrated, 16-in-1
Operating system: Microsoft Windows 7 Home Premium (32-bit)
USB ports: 6 USB 2.0

H220: The specifications of the H220 desktop are as follows:
Operating System: DOS
Processor: 3 GHz Intel E5500
RAM: 2GB DDR3
Storage: 320GB
Optical drive: DVD reader/writer

H320: The H320 was a small-form-factor desktop in the Lenovo H Series line released in 2011. Desktop Review called the H320 “a little - but not too little - box that does it all”, listing the pros of the desktop as the Blu-ray drive, the Intel Core i5 processor, and the small form factor. The cons were indicated to be the low graphics capabilities, few USB ports, and the lack of wireless networking. Detailed specifications of the desktop are given below:
Processor: 3.20 GHz Intel Core i5-650
RAM: 6GB DDR3
Storage: 640GB 7200 RPM SATA2
Operating system: Windows 7 Home Premium 64-bit
Optical drive: Blu-ray ROM DVD reader/writer
Graphics: Nvidia GeForce 310

2010: The Lenovo H Series desktop launched in 2010 was the H230.

Product series: H230 The H230 was launched at the same time as the Lenovo IdeaCentre K300 desktop. The desktop offered an Intel Core 2 Duo processor, Intel GMA integrated graphics, 4GB of RAM, a 640GB hard disk drive, and a DVD reader/writer.

2009: The Lenovo H Series desktops released in 2009 were the H200 and the H210.

Product series: H200 The H200 was announced by Lenovo at CES 2009. It offered the Intel Atom 230 processor, 1GB of RAM, and a 160GB hard disk drive. It was Lenovo’s first desktop with the low-power Intel Atom processor. The CPU incorporated a fanless design, minimizing desktop noise and, according to tech2, making the H200 Lenovo’s quietest desktop. The display was a 15.4-inch thin-film transistor (TFT) screen.

Product series: H210 The Lenovo H210 was also released in 2009 as part of the Essential range of desktops. PCWorld listed the pros of the desktop as above-average performance for a desktop that cost less than US$500. The cons were listed as average expandability.
Although PCWorld reported that the desktop was “one of the better sub-$500 systems”, it was reported not to handle games well. The inability to handle games stemmed from the integrated graphics, an Intel GMA 3100. The H210 could not run PCWorld’s Unreal Tournament 3 benchmark and offered only 24 frames per second in Far Cry (at a resolution of 1280x1024 with no antialiasing). Additional specifications of the H210 include:
Processor: 2.5 GHz Intel Pentium Dual Core E5200
RAM: 4GB DDR2-667
Storage: 500GB
Operating System: Microsoft Windows Vista Home Premium (32 bit)
PCI Express x16 slots: 1
PCI Express x1 slots: 2
PCI slots: 1
Optical drive: DVD reader/writer
USB ports: 6

2008: The Lenovo H Series desktop released in 2008 was the H215.

Product series: H215 The H215, released in October 2008, was an entry-level addition to Lenovo’s Essential line of budget PCs. It was praised for its large storage capacity, a total of 1TB. While performance was reported by About.com to be "decent", it was indicated that options to upgrade the desktop were limited. This was due to the low-wattage power supply commonly used in small-form-factor PCs, as opposed to traditional tower PCs. Another point not in the desktop's favor was the recessed optical drive, which About.com described as difficult to open and appearing out of place. Detailed specifications of the desktop are as follows:
Processor: AMD Athlon II X2 250 Dual Core
RAM: 4GB PC3-8500 DDR3
Storage: 1TB 7200rpm SATA Hard Drive
Optical drive: 16x DVD+/-RW Dual Layer Burner
Graphics: ATI Radeon HD 3000 Integrated Graphics Processor
Audio: 7.1 Audio Support
Ports and slots: six USB 2.0, HDMI, VGA, 16-in-1 Card Reader
**Telomerization (dimerization)** Telomerization (dimerization): Telomerization is the linear dimerization of 1,3-dienes with the simultaneous addition of a nucleophile in a catalytic reaction.

Reaction: The reaction was discovered independently by E. J. Smutny at Shell and by Takahashi at Osaka University in the late 1960s. For 1,3-butadiene and a nucleophile Nu−H, the general reaction equation is

$$2\,\mathrm{CH_2{=}CH{-}CH{=}CH_2} + \mathrm{Nu{-}H} \longrightarrow \mathrm{Nu{-}CH_2{-}CH{=}CH{-}(CH_2)_3{-}CH{=}CH_2}$$

The linear telomer shown, a 1-substituted octa-2,7-diene, is the main product; the formation of several isomers, such as the branched 3-substituted octa-1,7-diene, is also possible. In addition to 1,3-butadiene, substituted dienes such as isoprene or cyclic dienes such as cyclopentadiene can be used. A variety of substances, such as water, ammonia, alcohols, or C−H-acidic compounds, can serve as nucleophiles. When water is used, for example, di-unsaturated alcohols are obtained.

Reaction: The catalysts used are mainly organometallic palladium and nickel compounds. In 1991, Kuraray implemented the production of 1-octanol on an industrial scale (5000 t/a). The commercial route to 1-octene based on butadiene, as developed by Dow Chemical, came on stream in Tarragona in 2008. The telomerization of butadiene with methanol in the presence of a palladium catalyst yields 1-methoxy-2,7-octadiene, which is fully hydrogenated to 1-methoxyoctane in the next step. Subsequent cracking of 1-methoxyoctane gives 1-octene and methanol for recycle.

Mechanism: While the reaction is catalyzed by Pd(0) complexes, the pre-catalyst can also be a Pd(II) compound that is reduced in situ. Once the Pd(0) catalyst is formed, it can coordinate two butadienes, whose oxidative coupling gives the intermediate B. Even though the oxidative coupling is facile, it is nonetheless reversible; the latter is illustrated by the fact that B is only stable at high butadiene concentration. Subsequent protonation of this intermediate by NuH at the 6-position of the η3-,η1-octadienyl ligand leads to intermediate C. Now direct attack of the nucleophile can take place at either the 1- or 3-position of the η3-octadienyl chain, which leads to the linear or branched product complexes Dn and Diso, respectively. Upon displacement by new 1,3-butadiene, the product telomer is liberated while the catalyst is regenerated and can continue the cycle.

Mechanism: While for purely steric reasons nucleophilic attack at the less substituted end of the allyl group is favored, the regioselectivity of nucleophilic attack can depend heavily on the exact nature of the ligands positioned trans to the allyl group.
**Hyperprolactinaemia** Hyperprolactinaemia: Hyperprolactinaemia is the presence of abnormally high levels of prolactin in the blood. Normal levels average about 13 ng/mL in women and 5 ng/mL in men, with the upper limit of normal serum prolactin being 15-25 ng/mL for both sexes. When fasting blood prolactin levels exceed this upper limit, hyperprolactinemia is indicated.

Hyperprolactinaemia: Prolactin (PRL) is a peptide hormone produced by lactotroph cells in the anterior pituitary gland. PRL is involved in lactation after pregnancy and plays a vital role in breast development. Hyperprolactinemia may cause galactorrhea (production and spontaneous flow of breast milk), infertility, and disruptions in the normal menstrual period in women, as well as hypogonadism, infertility and erectile dysfunction in men.

Hyperprolactinaemia: Although hyperprolactinemia can result from normal physiological changes during pregnancy and breastfeeding, it can also have other etiologies. For example, high prolactin levels can result from diseases affecting the hypothalamus and pituitary gland. Other organs, such as the liver and kidneys, can affect prolactin clearance and, consequently, prolactin levels in the serum. The disruption of prolactin regulation can also be attributed to external sources such as medications. In the general population, the prevalence of hyperprolactinemia is 0.4%. The prevalence increases to as high as 17% in women with reproductive diseases, such as polycystic ovary syndrome. In cases of tumor-related hyperprolactinemia, prolactinoma is the most common cause of consistently high prolactin levels, as well as the most common type of pituitary tumor. In non-tumor-related hyperprolactinemia, the most common cause is medication-induced prolactin secretion. In particular, antipsychotics, which are commonly classified by their prolactin-raising or prolactin-sparing mechanisms, have been linked to a majority of non-tumor-related cases. Typical antipsychotics have been shown to induce significant, dose-dependent increases in prolactin levels of up to 10-fold the normal limit. Atypical antipsychotics vary in their ability to elevate prolactin levels; however, medications in this class such as risperidone and paliperidone carry the highest potential to induce hyperprolactinemia, in a dose-dependent manner similar to typical antipsychotics.

Signs and symptoms: In women, high blood levels of prolactin are typically associated with hypoestrogenism, anovulatory infertility, and changes in menstruation. Menstrual disturbances commonly manifest as amenorrhea or oligomenorrhea; in the latter case, irregular menstrual flow may result in abnormally heavy and prolonged bleeding (menorrhagia). Women who are not pregnant or nursing may also unexpectedly begin producing breast milk (galactorrhea), a condition that is not always associated with high prolactin levels. For instance, many premenopausal women with hyperprolactinemia do not experience galactorrhea, and only some women who experience galactorrhea will be diagnosed with hyperprolactinemia. Thus, galactorrhea may be observed in individuals with normal prolactin levels and does not necessarily indicate hyperprolactinemia. This is likely because galactorrhea requires adequate levels of progesterone or estrogen to prepare the breast tissue.
Additionally, some women may experience loss of libido and breast pain, particularly when prolactin levels first rise, as the hormone promotes tissue changes in the breast. In men, the most common symptoms of hyperprolactinemia are decreased libido, sexual dysfunction, erectile dysfunction/impotence, infertility, and gynecomastia. Unlike women, men have no reliable indicator of elevated prolactin, such as menstruation, to prompt immediate medical consultation. As a result, the early signs of hyperprolactinemia are generally more difficult to detect and may go unnoticed until more severe symptoms are present. For instance, symptoms such as loss of libido and sexual dysfunction are subtle, arise gradually, and may falsely suggest a different cause. Many men with pituitary tumor-associated hyperprolactinemia do not seek clinical help until they begin to experience serious endocrine and vision complications, such as major headaches or eye problems. Long-term hyperprolactinaemia can lead to detrimental changes in bone metabolism as a result of hypoestrogenism and hypoandrogenism. Studies have shown that chronically elevated prolactin levels lead to increased bone resorption and suppressed bone formation, resulting in reduced bone density and increased risk of fractures and osteoporosis. Chronic hyperprolactinemia can lead to hypogonadism and osteolysis in men.

Causes: Prolactin secretion is regulated by both stimulatory and inhibitory mechanisms. Dopamine acts on pituitary lactotroph D2 receptors to inhibit prolactin secretion, while other peptides and hormones, such as thyrotropin-releasing hormone (TRH), stimulate prolactin secretion. As a result, hyperprolactinemia may be caused by disinhibition (e.g., compression of the pituitary stalk or reduced dopamine levels) or by excess production. The most common cause of hyperprolactinemia is prolactinoma (a type of pituitary adenoma). A blood serum prolactin level of 1000–5000 mIU/L (47-235 ng/mL) may arise from either mechanism; however, levels >5000 mIU/L (>235 ng/mL) are likely due to the activity of an adenoma. Prolactin blood levels are typically correlated with tumor size: pituitary tumors smaller than 10 mm in diameter, or microadenomas, tend to produce prolactin levels <200 ng/mL, while macroadenomas larger than 10 mm in diameter produce prolactin levels >1000 ng/mL. Hyperprolactinemia inhibits the secretion of gonadotropin-releasing hormone (GnRH) from the hypothalamus, which in turn inhibits the release of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) from the pituitary gland and results in diminished gonadal sex hormone production (termed hypogonadism). This is the cause of many of the symptoms described above.

Causes: In many people, elevated prolactin levels remain unexplained and may represent a form of hypothalamic–pituitary–adrenal axis dysregulation.

Causes: Physiological causes Physiological (i.e., non-pathological) causes include ovulation, pregnancy, breastfeeding, chest wall injury, stress, stress-associated REM sleep, and exercise. During pregnancy, prolactin levels can range up to 600 ng/mL, depending on the estrogen concentration. At 6 weeks after birth (postpartum), estradiol concentrations decrease and prolactin concentrations return to normal, even during breastfeeding. Stress-related factors include physical stress, exercise, hypoglycemia, myocardial infarction, and surgery. Coitus and sleep can also contribute to increased prolactin release.
Causes: Medications Prolactin secretion in the pituitary is normally suppressed by the brain chemical dopamine, which binds to dopamine receptors. Drugs that block the effects of dopamine at the pituitary or deplete dopamine stores in the brain may cause the pituitary to secrete prolactin without this inhibitory control. These drugs include the typical antipsychotics: phenothiazines such as chlorpromazine (Thorazine) and butyrophenones such as haloperidol (Haldol); atypical antipsychotics such as risperidone (Risperdal) and paliperidone (Invega); gastroprokinetic drugs used to treat gastro-esophageal reflux and medication-induced nausea (such as that from chemotherapy), namely metoclopramide (Reglan) and domperidone; less often, alpha-methyldopa and reserpine, used to control hypertension; and TRH. Estrogen-containing oral contraceptives are also known to increase prolactin levels when taken in high doses (>35 μg). The sleep drug ramelteon (Rozerem) also increases the risk of hyperprolactinaemia. The dopamine antagonists metoclopramide and domperidone in particular are both powerful prolactin stimulators and have been used to stimulate breast milk secretion for decades. Since prolactin is antagonized by dopamine and the body depends on the two being in balance, the risk of prolactin stimulation is generally present with all drugs that deplete dopamine, either directly or as a rebound effect.

Causes: Specific diseases Prolactinoma or other tumors arising in or near the pituitary, such as those that cause acromegaly, may block the flow of dopamine from the brain to the prolactin-secreting cells; division of the pituitary stalk or hypothalamic disease can do the same. Other causes include chronic kidney failure, hypothyroidism, bronchogenic carcinoma and sarcoidosis. Some women with polycystic ovary syndrome may have mildly elevated prolactin levels.

Causes: Nonpuerperal mastitis may induce transient hyperprolactinemia (neurogenic hyperprolactinemia) of about three weeks' duration; conversely, hyperprolactinemia may contribute to nonpuerperal mastitis. Apart from diagnosing hyperprolactinemia and hypopituitarism, prolactin levels are often checked by physicians in those who have had a seizure, when there is a need to differentiate between an epileptic and a non-epileptic seizure. Shortly after epileptic seizures, prolactin levels often rise, whereas they remain normal in non-epileptic seizures.

Diagnosis: An appropriate diagnosis of hyperprolactinemia starts with taking a complete clinical history before any treatment is performed. Physiological causes, systemic disorders, and the use of certain drugs must be ruled out before the condition is diagnosed. Screening is indicated for those who are asymptomatic and those with elevated prolactin without an associated cause.

Diagnosis: The most common causes of hyperprolactinemia are prolactinomas, drug-induced hyperprolactinemia, and macroprolactinemia. Individuals with hyperprolactinemia may present with symptoms including galactorrhea, hypogonadal effects, and/or infertility. The magnitude of the prolactin elevation can be used as an indicator of the etiology of the hyperprolactinemia. Prolactin levels over 250 ng/mL may suggest prolactinoma.
Prolactin levels less than 100 ng/mL may suggest drug-induced hyperprolactinemia, macroprolactinemia, nonfunctioning pituitary adenomas, or systemic disorders (a small worked example of these thresholds appears at the end of this article). Elevated prolactin blood levels are typically assessed in women with unexplained breast milk secretion (galactorrhea), irregular menses or infertility, and in men with impaired sexual function and milk secretion. If high prolactin levels are present, all known conditions and medications which raise prolactin secretion must be assessed and excluded before diagnosis. After other causes have been ruled out and prolactin levels remain high, TSH levels are assessed. If TSH levels are elevated, the hyperprolactinemia is secondary to hypothyroidism and is treated accordingly. If TSH levels are normal, an MRI or CT scan is performed to look for a pituitary adenoma. Although hyperprolactinemia is uncommon in postmenopausal women, prolactinomas detected after menopause are typically macroadenomas. While a plain X-ray of the bones surrounding the pituitary may reveal the presence of a large macroadenoma, small microadenomas will not be apparent. Magnetic resonance imaging (MRI) is the most sensitive test for detecting pituitary tumors and determining their size, and MRI scans may be repeated periodically to assess tumor progression and the effects of therapy. Computed tomography (CT) also gives an image of the pituitary and can indicate abnormalities in pituitary gland size, but it is less sensitive than MRI. In addition to assessing the size of the pituitary tumor, physicians also look for damage to surrounding tissues and perform tests to assess whether production of the other pituitary hormones is normal. Depending on the size of the tumor, physicians may request an eye exam that includes measurement of the visual fields. However, a high prolactin measurement may also result from the presence of macroprolactin, otherwise known as 'big prolactin' or 'big-big prolactin', in the serum. Macroprolactin arises when prolactin molecules polymerize and bind with IgG to form complexes. Although this can produce high prolactin readings in some assays, macroprolactin is biologically inactive and does not cause the symptoms typical of hyperprolactinemia. In those who are asymptomatic or lack an obvious cause of hyperprolactinemia, macroprolactin should be assessed and ruled out.

Treatment: Treatment for hyperprolactinemia usually depends on its cause, which may range from hypothyroidism, drug-induced hyperprolactinemia, hypothalamic disease, idiopathic hyperprolactinemia, and macroprolactinemia to prolactinoma. Therefore, to manage hyperprolactinemia properly, the pathological form must be differentiated from a physiological increase in prolactin levels, and the correct cause must be identified before treatment. For functional asymptomatic hyperprolactinemia, the treatment of choice is removal of the associated cause, including antipsychotic therapy; however, prolactin levels should be drawn and monitored both before any discontinuation or change of therapy and afterwards. In symptomatic hyperprolactinemia, stopping antipsychotic drugs for a short trial period is not recommended, due to the risk of exacerbation or relapse of symptoms. Options for treatment include decreasing the dose of antipsychotics, adding aripiprazole as an adjunctive therapy, and switching antipsychotics as a last resort.
In pharmacologic hyperprolactinemia, the drug in question can be switched to another treatment or discontinued entirely. Vitex agnus-castus extract may be tried in cases of mild hyperprolactinemia. No treatment is required for asymptomatic macroprolactinemia; instead, serial prolactin measurements and pituitary imaging are monitored at regular follow-up appointments. Medical therapy is the preferred treatment for prolactinomas. In most cases, dopamine agonists, such as cabergoline and bromocriptine (often preferred when pregnancy is possible), are the treatment of choice used to decrease prolactin levels and tumor size in the presence of microadenomas or macroadenomas. A systematic review and meta-analysis has shown that cabergoline is more effective than bromocriptine in the treatment of hyperprolactinemia. Other dopamine agonists that have been used less commonly to suppress prolactin include dihydroergocryptine, ergoloid, lisuride, metergoline, pergolide, quinagolide, and terguride. If the prolactinoma does not initially respond to dopamine agonist therapy, such that prolactin levels are still high or the tumor is not shrinking as expected, the dose of the dopamine agonist can be increased in a stepwise fashion to the maximum tolerated dose. Another option is to switch between dopamine agonists: a prolactinoma can be resistant to bromocriptine but respond well to cabergoline, and vice versa. Surgical therapy can be considered if pharmacologic options have been exhausted. There is also evidence that radiotherapy and surgery improve outcomes in hyperprolactinemic individuals who have been shown to be resistant to or intolerant of dopamine agonists, the treatment of choice.
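As promised in the diagnosis section, here is a minimal numeric sketch of the thresholds and unit conversion quoted in this article. The mIU/L-to-ng/mL factor is back-calculated from the 5000 mIU/L ≈ 235 ng/mL equivalence given above; conversion factors vary between assays, so treat it as an assumption, and treat the function as a reading aid rather than clinical guidance.

```python
# Hypothetical reading aid for the prolactin thresholds quoted in this
# article; not a clinical tool.

MIU_PER_NG = 5000 / 235  # ~21.3 mIU/L per ng/mL, back-calculated from the text

def ng_per_ml(prolactin_miu_per_l: float) -> float:
    """Convert a serum prolactin level from mIU/L to ng/mL."""
    return prolactin_miu_per_l / MIU_PER_NG

def rough_etiology_hint(prolactin_ng_ml: float) -> str:
    """Map a level onto the rough etiologies quoted in the diagnosis section."""
    if prolactin_ng_ml > 250:
        return "may suggest prolactinoma"
    if prolactin_ng_ml < 100:
        return ("may suggest drug-induced hyperprolactinemia, "
                "macroprolactinemia, or a systemic disorder")
    return "indeterminate: exclude physiological causes, drugs, hypothyroidism"

# Example: 5500 mIU/L converts to ~258 ng/mL, above the 250 ng/mL mark.
print(rough_etiology_hint(ng_per_ml(5500.0)))
```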
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lactic acid fermentation** Lactic acid fermentation: Lactic acid fermentation is a metabolic process by which glucose or other six-carbon sugars (also disaccharides of six-carbon sugars, e.g. sucrose or lactose) are converted into cellular energy and the metabolite lactate, which is lactic acid in solution. It is an anaerobic fermentation reaction that occurs in some bacteria and animal cells, such as muscle cells. If oxygen is present in the cell, many organisms will bypass fermentation and undergo cellular respiration; however, facultative anaerobic organisms will both ferment and undergo respiration in the presence of oxygen. Even when oxygen is present and aerobic metabolism is occurring in the mitochondria, fermentation will still take place if pyruvate builds up faster than it can be metabolized. Lactic acid fermentation: Lactate dehydrogenase catalyzes the interconversion of pyruvate and lactate with concomitant interconversion of NADH and NAD+. In homolactic fermentation, one molecule of glucose is ultimately converted to two molecules of lactic acid. Heterolactic fermentation, in contrast, yields carbon dioxide and ethanol in addition to lactic acid, in a process called the phosphoketolase pathway. History: During the 19th century, several chemists worked out fundamental concepts of organic chemistry. One of them was the French chemist Joseph Louis Gay-Lussac, who was especially interested in fermentation processes and passed this fascination on to one of his best students, Justus von Liebig. Some years apart, each of them, together with colleagues, described the chemical structure of the lactic acid molecule as we know it today. They had a purely chemical understanding of the fermentation process: in their view it did not involve living organisms and could only be influenced by chemical catalysts. In 1857, the French chemist Louis Pasteur first described lactic acid as the product of a microbial fermentation. At the time, he worked at the University of Lille, where a local distillery had asked him for advice concerning some fermentation problems. By chance, and despite the poorly equipped laboratory he had at the time, he was able to discover that two fermentations were taking place in this distillery, a lactic acid one and an alcoholic one, both induced by microorganisms. He then continued this research in Paris, where he published theories that stood in direct contradiction to the purely chemical account represented by Liebig and his followers. Even though Pasteur described some concepts that are still accepted today, Liebig refused to accept them. But even Pasteur himself wrote that he was "driven" to a completely new understanding of this chemical phenomenon. Even if Pasteur did not uncover every detail of this process, he still discovered the main mechanism by which microbial lactic acid fermentation works. He was the first to describe fermentation as a "form of life without air." Although this chemical process had not been properly described before Pasteur's work, people had been using microbial lactic acid fermentation for food production much earlier. Chemical analysis of archeological finds shows that the use of milk fermentation predates the historical period; its first applications were probably part of the Neolithic Revolution.
Since milk naturally contains lactic acid bacteria, the fermentation process was easy to discover: it happens spontaneously at a suitable temperature. The problem for these first farmers was that fresh milk is nearly indigestible for adults, so they had a strong incentive to exploit this mechanism. In fact, lactic acid bacteria contain the enzymes needed to digest lactose, and their populations multiply strongly during fermentation. Therefore, milk fermented for even a short time contains enough enzymes to digest the lactose molecules once the milk is in the human body, which allows adults to consume it. Even safer was a longer fermentation, as practiced for cheesemaking. This process was also discovered very long ago, as proven by recipes for cheese production in cuneiform texts, the first written documents that exist, and somewhat later in Babylonian and Egyptian texts. History: One interesting theory concerns the competitive advantage of fermented milk products. The idea is that the women of these first settled farmer clans could shorten the interval between two children thanks to the additional lactose uptake from milk consumption. This factor may have given them an important advantage in out-competing hunter-gatherer societies. History: With the increasing consumption of milk products, these societies developed lactase persistence, meaning that the milk-digesting enzyme lactase remained present in their bodies throughout life, so they could drink unfermented milk as adults too. This early habituation to lactose consumption in the first settler societies can still be observed today in regional differences in this mutation's frequency. It is estimated that about 65% of the world's population still lacks it. Since these first societies came from regions stretching from around eastern Turkey to central Europe, the mutation appears more frequently there and in North America, which was settled by Europeans. It is because of the dominance of this mutation that Western cultures consider lactose intolerance unusual, when it is in fact more common than the mutation. By contrast, lactose intolerance is much more prevalent in Asian countries. Milk products and their fermentation have had an important influence on some cultures' development. This is the case in Mongolia, where people often practice a pastoral form of agriculture. The milk they produce and consume in these cultures is mainly mare's milk, and its use has a long tradition. But not every part or product of the fresh milk has the same meaning. For instance, the fattier part on the top, the "deež", is seen as the most valuable part and is therefore often used to honor guests. History: Fermentation products of mare's milk, such as the slightly alcoholic yogurt kumis, are also very important and often carry traditional meaning. Consumption of these peaks during cultural festivities such as the Mongolian lunar new year (in spring). The time of this celebration is called the "white month", which indicates that milk products (called "white food", together with starchy vegetables, in contrast to meat products, called "black food") are a central part of this tradition. The purpose of these festivities is to "close" the past year – clean the house or the yurt, honor the animals for having provided their food, and prepare everything for the coming summer season – to be ready to "open" the new year.
Consuming white food in this festive context is a way to connect to the past and to a national identity, the great Mongolian empire personified by Genghis Khan. During the time of this empire, fermented mare's milk was the drink used to honor and thank warriors and leaders; it was not meant for everybody. Although it eventually became a drink for ordinary people, it has kept its honorable meaning. Like many other traditions, this one has felt the influence of globalization. Other products, like industrial yogurt coming mainly from China and Western countries, have tended to replace it more and more, mainly in urban areas. However, in rural and poorer regions it is still of great importance. Biochemistry: Homofermentative process Homofermentative bacteria convert glucose to two molecules of lactate and use this reaction to perform substrate-level phosphorylation to make two molecules of ATP: glucose + 2 ADP + 2 Pi → 2 lactate + 2 ATP Heterofermentative process Heterofermentative bacteria produce less lactate and less ATP, but produce several other end products: glucose + ADP + Pi → lactate + ethanol + CO2 + ATP Examples include Leuconostoc mesenteroides, Lactobacillus bifermentous, and Leuconostoc lactis. Biochemistry: Bifidum pathway Bifidobacterium bifidum utilizes a lactic acid fermentation pathway that produces more ATP than either homolactic fermentation or heterolactic fermentation: 2 glucose + 5 ADP + 5 Pi → 3 acetate + 2 lactate + 5 ATP (a yield comparison sketch appears below). Major genera of lactose-fermenting bacteria: Some major bacterial strains identified as being able to ferment lactose are in the genera Escherichia, Citrobacter, Enterobacter and Klebsiella. All four of these groups fall under the family Enterobacteriaceae. These four genera can be distinguished from one another by biochemical testing, and simple biological tests are readily available. Apart from whole-sequence genomics, common tests include H2S production, motility and citrate use, and the indole, methyl red and Voges-Proskauer tests. Applications: Lactic acid fermentation is used in many areas of the world to produce foods that cannot be produced through other methods. The most commercially important genus of lactic acid-fermenting bacteria is Lactobacillus, though other bacteria and even yeast are sometimes used. Two of the most common applications of lactic acid fermentation are in the production of yogurt and sauerkraut. Pickles Fermented fish In some Asian cuisines, fish is traditionally fermented with rice to produce lactic acid that preserves the fish. Examples of these dishes include burong isda of the Philippines; narezushi of Japan; and pla ra of Thailand. The same process is also used for shrimp in the Philippines in the dish known as balao-balao. Kimchi Kimchi also uses lactic acid fermentation. Applications: Sauerkraut Lactic acid fermentation is also used in the production of sauerkraut. The main type of bacteria used in the production of sauerkraut is of the genus Leuconostoc. As in yogurt, when the acidity rises due to lactic acid-fermenting organisms, many other pathogenic microorganisms are killed. The bacteria produce lactic acid, as well as simple alcohols and other hydrocarbons. These may then combine to form esters, contributing to the unique flavor of sauerkraut. Applications: Sour beer Lactic acid is a component in the production of sour beers, including Lambics and Berliner Weisses.
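To make the relative energy yields of the three pathways concrete, here is a minimal Python sketch comparing ATP produced per glucose; the stoichiometric figures come straight from the Biochemistry section above, while the dictionary layout and names are my own:

```python
# Stoichiometries as given in the Biochemistry section:
#   homolactic:   glucose + 2 ADP + 2 Pi -> 2 lactate + 2 ATP
#   heterolactic: glucose + ADP + Pi -> lactate + ethanol + CO2 + ATP
#   bifidum:      2 glucose + 5 ADP + 5 Pi -> 3 acetate + 2 lactate + 5 ATP
PATHWAYS = {
    "homolactic":   (1, 2),  # (glucose consumed, ATP produced)
    "heterolactic": (1, 1),
    "bifidum":      (2, 5),
}

for name, (glucose, atp) in PATHWAYS.items():
    print(f"{name:12s} {atp / glucose:.1f} ATP per glucose")
# Prints 2.0, 1.0 and 2.5 respectively: the bifidum pathway yields the
# most ATP per glucose, as the text notes.
```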
Applications: Yogurt The main method of producing yogurt is through the lactic acid fermentation of milk with harmless bacteria. The primary bacteria used are typically Lactobacillus bulgaricus and Streptococcus thermophilus, and both United States and European law require all yogurts to contain these two cultures (though others may be added as probiotic cultures). These bacteria produce lactic acid in the milk culture, decreasing its pH and causing it to congeal. The bacteria also produce compounds that give yogurt its distinctive flavor. An additional effect of the lowered pH is that the acidic environment is incompatible with many other types of harmful bacteria. For a probiotic yogurt, additional types of bacteria such as Lactobacillus acidophilus are also added to the culture. Applications: In vegetables Lactic acid bacteria (LAB) already exist as part of the natural flora in most vegetables. Lettuce and cabbage were examined to determine the types of lactic acid bacteria that exist in the leaves. Different types of LAB will produce different types of silage fermentation, which is the fermentation of the leafy foliage. Silage fermentation is an anaerobic reaction that reduces sugars to fermentation byproducts like lactic acid. Applications: Physiological Lactobacillus fermentation and the accompanying production of acid provide a protective vaginal microbiome that protects against the proliferation of pathogenic organisms. Applications: Lactate fermentation and muscle cramps During the 1990s, the lactic acid hypothesis was created to explain why people experience burning or muscle cramps during and after intense exercise. The hypothesis proposes that a lack of oxygen in muscle cells results in a switch from cellular respiration to fermentation, and that lactic acid created as a byproduct of the fermentation of pyruvate from glycolysis accumulates in muscles, causing a burning sensation and cramps. Applications: Research from 2006 has suggested that acidosis is not the main cause of muscle cramps. Instead, cramps may be due to a lack of potassium in muscles, leading to contractions under high stress. Animals, in fact, do not produce lactic acid during fermentation: despite the common use of the term lactic acid in the literature, the byproduct of fermentation in animal cells is lactate. Another revision to the lactic acid hypothesis is that when sodium lactate is present in the body, the period of exhaustion after exercise is longer. Lactate fermentation is important to muscle cell physiology. When muscle cells are undergoing intense activity, like sprinting, they need energy quickly. There is only enough ATP stored in muscle cells to last a few seconds of sprinting. The cells then default to fermentation, since they are in an anaerobic environment. Through lactate fermentation, muscle cells are able to regenerate NAD+ to continue glycolysis, even under strenuous activity. [5] The vaginal environment is heavily influenced by lactic acid-producing bacteria. Lactobacillus spp. that live in the vaginal canal assist in pH control. If the pH in the vagina becomes too basic, more lactic acid will be produced to lower the pH back to a more acidic level. Lactic acid-producing bacteria also act as a protective barrier against possible pathogens, such as the species responsible for bacterial vaginosis and vaginitis, various fungi, and protozoa, through the production of hydrogen peroxide and antibacterial compounds.
It is unclear whether lactic acid produced by further fermentation plays any additional role in the vaginal canal. [6] Benefits for the lactose intolerant In small amounts, lactic acid is good for the human body, providing energy and substrates as it moves through the cycle. Small studies have shown that the fermentation of lactose to lactic acid can help lactose intolerant people. The process of fermentation limits the amount of lactose available: with the amount of lactose lowered, there is less build-up inside the body, reducing bloating. The success of lactic fermentation was most evident in yogurt cultures. Further studies are being conducted on other milk products, such as acidophilus milk.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Experimental radio station** Experimental radio station: Experimental station (also: experimental radio station) is – according to article 1.98 of the International Telecommunication Union's (ITU) ITU Radio Regulations (RR) – defined as «A station utilizing radio waves in experiments with a view to the development of science or technique. This definition does not include amateur stations.» Each radio station shall be classified by the radiocommunication service in which it operates permanently or temporarily.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Viracor-IBT Laboratories** Viracor-IBT Laboratories: Viracor Eurofins Laboratories is a diagnostic laboratory specializing in infectious disease, immunology and allergy testing for immunocompromised and critical patients. Viracor Eurofins works with medical professionals, transplant teams, reference labs and bio-pharmaceutical companies. Viracor-IBT has CLIA clinical laboratory certification as both an Infectious Disease Laboratory and an Allergy & Immunology Laboratory. History: Viracor-IBT was created through the merger of two specialty diagnostic testing labs, Viracor Laboratories and IBT Laboratories. Founded by Dr. Konstance Knox and Dr. Donald Carrigan in Milwaukee County in 2000, Viracor was among the first to commercially offer real-time quantitative PCR assays to diagnose patients with adenovirus, BK virus and JC virus, among others. Founded in 1983, IBT was the first laboratory to offer a test to definitively diagnose autoimmune causes of chronic hives and developed the first commercially available test to measure pneumococcal antibodies. History: On 1 July 2014, Ampersand Capital Partners completed the sale of Viracor-IBT Laboratories to Eurofins Scientific for $255 million. The company continued to be known as Viracor-IBT. Expertise: The core areas of expertise for Viracor-IBT are custom assay development, infectious disease testing, immune monitoring, and biomarkers & bioanalytics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Announcer's test** Announcer's test: An announcer's test is a test sometimes given to those wanting to be a radio or television announcer. The tests usually involve retention, memory, repetition, enunciation, diction, and using every letter in the alphabet a variety of times. History: Origins Announcer's tests originated in the early days of radio broadcasting, around 1920. The tests involved the pronunciation of difficult words, as well as retention, memory, repetition, enunciation, diction, and using every letter in the alphabet a variety of times. An excerpt of one early test, forwarded from Phillips Carlin, who was known for co-announcing the 1926, 1927, and 1928 World Series with Graham McNamee, is: Penelope Cholmondely raised her azure eyes from the crabbed scenario. She meandered among the congeries of her memoirs. There was the Kinetic Algernon, a choleric artificer of icons and triptychs, who wanted to write a trilogy. For years she had stifled her risibilities with dour moods. His asthma caused him to sough like the zephyrs among the tamarack. History: In around 1930, CBS Radio established a school for announcers. The school was headed by Frank Vizetelly, who trained announcers to develop voices that were "clear, clean-cut, pleasant, and carry with them the additional charm of personal magnetism." At about the same time, NBC Radio published standard pronunciation guidelines for its sponsors. According to announcer André Baruch, NBC used to test potential announcers using copy filled with tongue-twisters and foreign names, such as: The seething sea ceased to see, then thus sufficeth thus. History: Another test for an announcer candidate might be to "describe the studio in which you are seated so that a listener can readily visualize it." One hen, two ducks: One of the better-known tests originated at Radio Central New York in the early 1940s as a cold reading test given to prospective radio talent to demonstrate their speaking ability and breath control. Del Moore, a long-time friend of Jerry Lewis, took this test at Radio Central New York in 1941 and passed it on to him. Lewis performed this test on radio, television and stage for many years, and it has become a favorite tongue-twister (and memory challenge) for his fans around the world. Professional announcers would be asked to perform the entire speaking test within a single breath without sounding rushed or out of breath. History:
One hen
Two ducks
Three squawking geese
Four limerick oysters
Five corpulent porpoises
Six pairs of Don Alverzo's tweezers
Seven thousand Macedonians in full battle array
Eight brass monkeys from the ancient sacred crypts of Egypt
Nine apathetic, sympathetic, diabetic old men on roller skates, with a marked propensity toward procrastination and sloth
Ten lyrical, spherical, diabolical denizens of the deep who all stall around the corner of the quo of the quay of the quivery, all at the same time.
There are many variations to this version, many having been passed from one person to another by word of mouth. One variant is known as the Tibetan Memory Trick and has been performed by Danny Kaye as well as Flo & Eddie of The Turtles. Flo and Eddie incorporated the trick into Frank Zappa's performance of Billy the Mountain at Carnegie Hall with the Mothers of Invention.
It was also used by Boston's WBZ disc jockey Dick Summer in the 1960s as the Nightlighter's Password. The test has also been adopted and adapted for use as a "repeat after me" chant by various Boy Scout units and camps, with several variations in the wording, some including an eleventh line: "Eleven neutramatic synthesizing systems owned by the seriously cybernetic marketing department, shipped via relativistic space flight through the draconian sector seven." This last line may have originated as a tribute to Douglas Adams and the Hitchhiker's Guide to the Galaxy books and has since been corrupted by the oral transmission of this script. The books include references to "Nutrimatic Drink Dispenser systems" owned by the "Sirius Cybernetics Corporation." In So Long, and Thanks for All the Fish, one character mentions, "This hedgehog, that chimney pot, the other pair of Don Alfonso's tweezers." A variant appears in the 1997 novel Matters of Chance by Jeannette Haien. History: Classical radio announcer's audition In the early 1950s, Mike Nichols wrote the following announcer test for radio station WFMT in Chicago. History: The WFMT announcer's lot is not a happy one. In addition to uttering the sibilant, mellifluous cadences of such cacophonous sounds as Hans Schmidt-Isserstedt, Carl Schuricht, Nicanor Zabaleta, Hans Knappertsbusch and the Hammerklavier Sonata, he must thread his vocal way through the complications of L'Orchestre de la Suisse Romande, the Concertgebouw Orchestra of Amsterdam, the Leipzig Gewandhaus Orchestra and other complicated nomenclature. History: However, it must by no means be assumed that the ability to pronounce L'Orchestre de la Société des Concerts du Conservatoire de Paris with fluidity and verve outweighs an ease, naturalness and friendliness of delivery when at the omnipresent microphone. For example, when delivering a diatribe concerning Claudia Muzio, Beniamino Gigli, Hetty Plümacher, Giacinto Prandelli, Hilde Rössel-Majdan and Lina Pagliughi, five out of six is good enough if the sixth one is mispronounced plausibly. Jessica Dragonette and Margaret Truman are taken for granted. History: Poets, although not such a constant annoyance as polysyllabically named singers, creep in now and then. Of course Dylan Thomas and W.B. Yeats are no great worry. Composers occur almost incessantly, and they range all the way from Albéniz, Alfvén and Auric through Wolf-Ferrari and Zeisl. Let us reiterate that a warm, simple tone of voice is desirable, even when introducing the Bach Cantata "Ich hatte viel Bekümmernis," or Monteverdi's opera "L'Incoronazione di Poppea." Such then, is the warp and woof of an announcer's existence "in diesen heil'gen Hallen."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PIGT** PIGT: GPI transamidase component PIG-T is an enzyme that in humans is encoded by the PIGT gene. This gene encodes a protein that is involved in glycosylphosphatidylinositol (GPI)-anchor biosynthesis. The GPI-anchor is a glycolipid found on many blood cells and serves to anchor proteins to the cell surface. This protein is an essential component of the multisubunit enzyme GPI transamidase. GPI transamidase mediates GPI anchoring in the endoplasmic reticulum by catalyzing the transfer of fully assembled GPI units to proteins. Interactions: PIGT has been shown to interact with PIGK and GPAA1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BCKDHA** BCKDHA: 2-oxoisovalerate dehydrogenase subunit alpha, mitochondrial, is an enzyme that in humans is encoded by the BCKDHA gene. BCKDHA is a protein-coding gene whose product is part of the branched-chain alpha-keto acid dehydrogenase (BCKD) complex. Discovery: The path to BCKDHA began with John Menkes in 1954. After he had seen a family in which four children died only a few months after birth, he found that their urine smelled sweet like maple syrup. While he was not the one to identify the specific gene, he did discover maple syrup urine disease (MSUD). The BCKD complex is made up of three different catalytic components. In 1960, Dancis localized the defect itself; Menkes's discovery of the disease had led to this further investigation of its origin. Dancis found that examining the branched-chain amino acids and their corresponding alpha-keto acids helped establish that these were the pathogenetic compounds, and he specifically tracked down the enzymatic defect in MSUD to the level of the decarboxylation of the branched-chain amino acids. Gene location: The cytogenetic location of BCKDHA is on human chromosome 19, at cytogenetic band 19q13.2, that is, on the long arm (q) of chromosome 19 at position 13.2. At the molecular level, the gene spans base pairs 41,397,789 to 41,425,005 on chromosome 19. The protein encoded by this gene localizes to the mitochondrial matrix. Function: The second major step in the catabolism of the branched-chain amino acids (isoleucine, leucine, and valine) is catalyzed by the branched-chain alpha-keto acid dehydrogenase complex (BCKD; EC 1.2.4.4), an inner-mitochondrial enzyme complex that consists of 3 catalytic components: a heterotetrameric (alpha2, beta2) branched-chain alpha-keto acid decarboxylase (E1), a homo-24-meric dihydrolipoyl transacylase (E2; MIM 248610), and a homodimeric dihydrolipoamide dehydrogenase (E3; MIM 238331). The reaction is irreversible and constitutes the first committed step in BCAA oxidation. The complex also contains 2 regulatory enzymes, a kinase and a phosphorylase. The BCKDHA gene encodes the alpha subunit of E1, and the BCKDHB gene (MIM 248611) encodes the beta subunit of E1. [supplied by OMIM] The normal function of the BCKDHA gene is to provide instructions for making the alpha subunit of the branched-chain alpha-keto acid dehydrogenase (BCKD) enzyme complex. Two beta subunits, produced from the BCKDHB gene, connect to two alpha subunits to form the E1 (decarboxylase) component. The BCKD enzyme complex catalyzes one step in the breakdown of the amino acids leucine, isoleucine, and valine. The BCKD enzyme complex is found in the mitochondria, an organelle known as the powerhouse of the cell. All three amino acids are found in protein-rich foods, and when broken down they can be used for energy. Mutations in the BCKDHA gene can lead to maple syrup urine disease. Clinical significance: Disease-causing changes in the BCKDHA gene are single point mutations affecting the alpha subunit of the BCKD enzyme complex. In earlier reported cases, the most frequent mutation replaced the amino acid tyrosine with asparagine.
The complication with mutations in the BCKDHA gene is that they disrupt the normal function of the BCKD enzyme complex, so the complex can no longer break down leucine, isoleucine, and valine. When these compounds accumulate, they produce a toxic environment for cells and tissues, particularly in the nervous system. This can lead to seizures and developmental delay, and most importantly to maple syrup urine disease. Clinical significance: BCKDHA has been pinpointed as the cause in people with maple syrup urine disease, with over 80 mutations identified in the gene. Severe symptoms arise from these mutations, and the disease presents soon after birth. Because of the sweet odor of the urine, the disease was termed maple syrup urine disease. The disease causes loss of appetite, nausea, lethargy, and delayed development. Clinical significance: BCKDHA mutation: maple syrup urine disease Maple syrup urine disease is an autosomal recessive inborn error of metabolism, meaning, as stated earlier, that there is a defect (i.e. error) in a single gene that codes for an enzyme. Such enzymes promote the conversion of various substrates into products. In maple syrup urine disease, the enzyme defect occurs in the metabolic pathway of the branched-chain amino acids leucine, isoleucine, and valine. The buildup of these amino acids leads to encephalopathy and progressive neurodegeneration, along with other complications. Clinical significance: There are five forms of maple syrup urine disease: classic, intermediate, intermittent, thiamine-responsive, and E3-deficient. The form of disease depends on the clinical prognosis, dietary protein tolerance, response to thiamine, and level of enzyme activity. Intermediate maple syrup urine disease is a milder form in which branched-chain amino acids and some keto acids are persistently raised. Individuals with this form have a partial BCKD enzyme deficiency, which means the disease may show up sporadically or respond to dietary thiamine therapy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Custom home** Custom home: A custom home is a one-of-a-kind house that is designed for a specific client and for a particular location. The custom home builder may use plans created by an architect or by a professional home designer. Custom homes provide consumers with the opportunity to control layout, lot size, and accessibility. In most cases, custom home builders construct on land the home buyer already owns. Some developers sell fully serviced lots specifically for the construction of custom homes. This makes it easy to build a custom home, since the lot is construction-ready and builders can focus purely on the design of the home. 4 main types of home builders:
Production home builders – build only on land the builder owns; tend to use stock plans, but usually offer a variety of plan choices, upgrades and options; build all types of housing (single-family, condos, town houses, and rental properties); are large-volume builders; generally build for all price points; cost is usually the lowest of all homes built ($).
Semi-custom home builders – utilize stock or standard home plans that can be customized with a limited number of options for selections; build on land the buyer already owns or the builder owns; do not typically require an architect; build single-family homes; more affordable but with less flexibility; cost is usually slightly more than production homes ($$).
Custom home builders – build on land the client already owns (some also build on land they themselves own, known as a spec home, short for speculative); build unique houses: a custom home is a site-specific home built from a unique set of plans for the wishes of a specific client; generally work collaboratively with an architect, and some may offer design/build services; build single-family homes; are generally small-volume builders; tend to build high-end homes; cost is usually more than production or semi-custom ($$$).
Fine luxury or estate home builders – build on land the client already owns; build luxurious one-of-a-kind homes; work with celebrities and discerning clientele; collaborate with other professionals such as an architect, interior designer and landscape architect; build single-family homes, generally over 5,000 sq ft; focus the home design on the client's lifestyle, with features often including custom woodworking, energy efficiency, aging in place, home automation and home security; cost is usually the highest ($$$$$).
Industry: Australia The construction industry is the fourth largest contributor to GDP in the Australian economy, employing 9.1% of the Australian workforce. Industry: U.K. Amid the economic recovery, house building is increasing rapidly. Housing construction grew for a 15th consecutive month in April, the longest period of growth since 2006/07. According to The Times, 13,000 of the 200,000 homes built each year in the UK are self- or custom-built (April 2019). Since 2016, individuals in England have had the right to build their own bespoke home. The Right to Build requires local authorities in England to maintain a register of people who want to custom or self build in their area. This legislation was put in place to accelerate the growth of new housing by empowering consumers to create their ideal home, built to their specification. Within the UK there are several routes for custom homes, such as serviced plots, customisable homes and collective custom build. Industry: United States In the United States, the home building industry accounts for $82 billion in revenue with annual growth of 5.8%.
The industry currently comprises 398,391 employees across 163,843 businesses.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alces (journal)** Alces (journal): Alces is a peer-reviewed scientific journal that publishes original papers on the biology and management of moose (Alces alces) throughout their circumpolar distribution, as well as on other ungulate or carnivore species that overlap their range. It has been edited and published at Lakehead University (Thunder Bay, Ontario) since 1978. A single volume per year is published; a volume has one or sometimes two issues, with occasional supplements. History: The history of the Alces journal is connected with the North American Moose Conference and Workshop, whose annual meetings have taken place since 1963. From the early days, a summary of the events was produced for each meeting in mimeographed form. Formal publication of conference proceedings began with the fifth meeting of the conference, in Alaska in 1968, and became a regular annual series in 1972. These proceedings are considered the predecessor of the Alces journal and are included in the numbering of its volumes. The name Alces was adopted for the journal in 1981 (volume 17). Special issues: Alces has also had special issues for several of the International Moose Symposia. While the materials of the First International Moose Symposium (Québec City, 1973) and the Second International Moose Symposium (Uppsala, 1984) appeared in other journals, the proceedings of the Third (Syktyvkar, 1990), Fourth (Fairbanks, 1997) and Fifth (Øyer, Norway) International Moose Symposia appeared as supplements or special issues of Alces. Content: While the majority of the articles in Alces still originate from conference papers, an increasing proportion of manuscripts are submitted directly to the journal by researchers in Canada, the United States, Norway, Sweden, Finland, Germany, Russia, and China. Staff: The first permanent editor of Alces (from 1978 to 1982) was Harold Cumming of the Lakehead University School of Forestry. A number of Canadian and US scientists have edited the journal since then; as of 2006, the co-editors are Art Rodgers and Gerry Redmond (Maritime College of Forest Technology, Fredericton, New Brunswick).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tadd Dameron turnaround** Tadd Dameron turnaround: In jazz, the Tadd Dameron turnaround, named for Tadd Dameron, "is a very common turnaround in the jazz idiom", derived from a typical I−vi−ii−V turnaround through the application of tritone substitution to all but the first chord, thus yielding, in C major, CM7–E♭7–A♭7–D♭7 rather than the more conventional CM7–A7–Dm7–G7. The Tadd Dameron turnaround may also feature major seventh chords, derived through a series of substitutions that successively alter the chord qualities; the last step, changing to the major seventh chord, is optional. Tadd Dameron turnaround: Dameron was the first composer to use the turnaround, in his standard "Lady Bird", which contains a modulation down a major third (from C to A♭). This key relation is also implied by the first and third chords of the turnaround, CM7 and A♭M7. It has been suggested that this motion down by major thirds would eventually lead to John Coltrane's Coltrane changes; the Dameron turnaround has alternately been called the "Coltrane turnaround". Further examples of pieces including this turnaround are Miles Davis' "Half-Nelson" and John Carisi's "Israel".
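Since the derivation above amounts to "shift each root up a tritone and adjust the chord quality", it is easy to mechanize. Here is a minimal Python sketch; the flat-based note spellings and the chord-tuple layout are my own simplifications for illustration, not a music-theory library:

```python
# Pitch classes with simplified flat spellings; index = semitones above C.
PITCH_CLASSES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
NAME_TO_PC = {name: i for i, name in enumerate(PITCH_CLASSES)}

def tritone_substitute(root: str) -> str:
    """Return the root a tritone (6 semitones) away."""
    return PITCH_CLASSES[(NAME_TO_PC[root] + 6) % 12]

# Conventional turnaround in C: I-vi-ii-V, with the vi played as a dominant (A7).
conventional = [("C", "maj7"), ("A", "7"), ("D", "m7"), ("G", "7")]

# Substitute every chord except the first; the substituted chords are voiced
# as dominant sevenths (optionally changed to major sevenths afterwards).
dameron = [conventional[0]] + [
    (tritone_substitute(root), "7") for root, _ in conventional[1:]
]

print("conventional:", " ".join(r + q for r, q in conventional))  # Cmaj7 A7 Dm7 G7
print("dameron:     ", " ".join(r + q for r, q in dameron))       # Cmaj7 Eb7 Ab7 Db7
```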
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Logographic cues** Logographic cues: Logographic cues are visual images embedded with specific, widely understood meaning; they are pictures that represent certain words or concepts. These pictures are "designed to offer readers a high-utility message in a minimum amount of space." Some languages are written in logographic scripts, in which single glyphs represent whole morphemes; examples include many East Asian languages, such as the Chinese varieties (e.g. Mandarin, Cantonese, Min, and Wu), and, in part, Korean and Japanese. Examples of logographic cues include traffic signs, restroom signs, and pictorial flashcards. Unsurprisingly, logographic cues tend to be processed in the right brain hemisphere, the side more actively engaged with visuospatial input. Due to advances in technology and the media, where logographic cues such as brand logos abound, the ability and tendency to draw meaning from pictures has become more widespread and intuitive. Utility to education: Logographic cues have also become increasingly useful in the domain of education, specifically in the development of reading skills. Many sources of educational advice suggest the use of logographic cues to tap into visual learning and intelligence, which usually takes a subordinate role to verbal education in schools; such sources include literacy expert Kylene Beers and a nationwide reading program, All America Reads. Utility to education: Specific activities that utilize logographic cues include students making symbols within the margins of print text, worksheets that provide a pictorial summary of the information given, and picture flash cards that foster vocabulary development. Teaching methods employing logographic cues can help to encourage and increase word recognition, text reformulation and information organization. The method also helps to tap into the sensory stimulation that encodes information into long-term memory. Utility to education: Criticisms The use of this method has also received some criticism. In reference to the use of logographic cues to develop word recognition, the International Journal of Disability, Development and Education writes that "the results of controlled studies show it to be ineffective and potentially detrimental to student learning." The particular study documented in this journal suggested similar but modified alternatives such as Integrated Picture Cueing or the Handle Technique. The Integrated Picture Cueing (IPC) technique makes pictures out of the desired words themselves, rather than symbolic pictorial depictions. The Handle Technique depicts the word with an extra serif (handle) that helps students encode the word and its meaning. Despite these findings and alternatives, logographic cues remain widely used and encouraged in education.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electrostatic discharge** Electrostatic discharge: Electrostatic discharge (ESD) is a sudden and momentary flow of electric current between two electrically charged objects caused by contact, an electrical short or dielectric breakdown. A buildup of static electricity can be caused by tribocharging or by electrostatic induction. ESD occurs when differently charged objects are brought close together or when the dielectric between them breaks down, often creating a visible spark. Electrostatic discharge: ESD can create spectacular electric sparks (lightning, with the accompanying sound of thunder, is a large-scale ESD event), but also less dramatic forms which may be neither seen nor heard, yet still be large enough to cause damage to sensitive electronic devices. Electric sparks require a field strength above approximately 40 kV/cm in air, as notably occurs in lightning strikes. Other forms of ESD include corona discharge from sharp electrodes and brush discharge from blunt electrodes. Electrostatic discharge: ESD can cause harmful effects of importance in industry, including explosions in gas, fuel vapor and coal dust, as well as failure of solid state electronics components such as integrated circuits, which can suffer permanent damage when subjected to high voltages. Electronics manufacturers therefore establish electrostatic protective areas free of static, using measures to prevent charging, such as avoiding highly charging materials, and measures to remove static, such as grounding human workers, providing antistatic devices, and controlling humidity. Electrostatic discharge: ESD simulators may be used to test electronic devices, for example with a human body model or a charged device model. Causes: One of the causes of ESD events is static electricity. Static electricity is often generated through tribocharging, the separation of electric charges that occurs when two materials are brought into contact and then separated. Examples of tribocharging include walking on a rug, rubbing a plastic comb against dry hair, rubbing a balloon against a sweater, rising from a fabric car seat, or removing some types of plastic packaging. In all these cases, the breaking of contact between two materials results in tribocharging, thus creating a difference of electrical potential that can lead to an ESD event. Causes: Another cause of ESD damage is electrostatic induction. This occurs when an electrically charged object is placed near a conductive object isolated from the ground. The presence of the charged object creates an electrostatic field that causes electrical charges on the surface of the other object to redistribute. Even though the net electrostatic charge of the object has not changed, it now has regions of excess positive and negative charges. An ESD event may occur when the object comes into contact with a conductive path. For example, charged regions on the surfaces of styrofoam cups or bags can induce potential on nearby ESD-sensitive components via electrostatic induction, and an ESD event may occur if the component is touched with a metallic tool. Causes: ESD can also be caused by energetic charged particles impinging on an object, which leads to increasing surface and deep charging. This is a known hazard for most spacecraft. Types: The most spectacular form of ESD is the spark, which occurs when a strong electric field creates an ionized conductive channel in air.
This can cause minor discomfort to people, severe damage to electronic equipment, and fires and explosions if the air contains combustible gases or particles. Types: However, many ESD events occur without a visible or audible spark. A person carrying a relatively small electric charge may not feel a discharge that is sufficient to damage sensitive electronic components. Some devices may be damaged by discharges as small as 30 V. These invisible forms of ESD can cause outright device failures, or less obvious forms of degradation that may affect the long-term reliability and performance of electronic devices. The degradation in some devices may not become evident until well into their service life. Types: Sparks A spark is triggered when the electric field strength exceeds approximately 4–30 kV/cm (the dielectric field strength of air). This may cause a very rapid increase in the number of free electrons and ions in the air, temporarily causing the air to abruptly become an electrical conductor in a process called dielectric breakdown. Types: Perhaps the best known example of a natural spark is lightning. In this case the electric potential between a cloud and ground, or between two clouds, is typically hundreds of millions of volts. The resulting current that flows through the stroke channel causes an enormous transfer of energy. On a much smaller scale, sparks can form in air during electrostatic discharges from objects charged to as little as 380 V (Paschen's law). Types: Earth's atmosphere consists of 21% oxygen (O2) and 78% nitrogen (N2). During an electrostatic discharge, such as a lightning flash, the affected atmospheric molecules become electrically overstressed. The diatomic oxygen molecules are split and then recombine to form ozone (O3), which is unstable, or reacts with metals and organic matter. If the electrical stress is high enough, nitrogen oxides (NOx) can form. Both products are toxic to animals, and nitrogen oxides are essential for nitrogen fixation. Ozone attacks all organic matter by ozonolysis and is used in water purification. Types: Sparks are an ignition source in combustible environments that may lead to catastrophic explosions in concentrated fuel environments. Most explosions can be traced back to a tiny electrostatic discharge, whether it was an unexpected combustible fuel leak invading a known open-air sparking device, or an unexpected spark in a known fuel-rich environment. The result is the same if oxygen is present and the three criteria of the fire triangle have been combined. Damage prevention in electronics: Many electronic components, especially integrated circuits and microchips, can be damaged by ESD. Sensitive components need to be protected during and after manufacture, during shipping and device assembly, and in the finished device. Grounding is especially important for effective ESD control. It should be clearly defined and regularly evaluated. Damage prevention in electronics: Protection during manufacturing In manufacturing, prevention of ESD is based on an Electrostatic Discharge Protected Area (EPA). The EPA can be a small workstation or a large manufacturing area. The main principle of an EPA is that there are no highly charging materials in the vicinity of ESD-sensitive electronics, all conductive and dissipative materials are grounded, workers are grounded, and charge build-up on ESD-sensitive electronics is prevented.
International standards are used to define a typical EPA and can be found, for example, from the International Electrotechnical Commission (IEC) or the American National Standards Institute (ANSI). Damage prevention in electronics: ESD prevention within an EPA may include using appropriate ESD-safe packing material, the use of conductive filaments on garments worn by assembly workers, conductive wrist straps and foot-straps to prevent high voltages from accumulating on workers' bodies, anti-static mats or conductive flooring materials to conduct harmful electric charges away from the work area, and humidity control. Humid conditions prevent electrostatic charge generation because the thin layer of moisture that accumulates on most surfaces serves to dissipate electric charges. Damage prevention in electronics: Ionizers are used especially when insulative materials cannot be grounded. Ionization systems help to neutralize charged surface regions on insulative or dielectric materials. Insulating materials prone to triboelectric charging of more than 2,000 V should be kept at least 12 inches away from sensitive devices to prevent accidental charging of devices through field induction. On aircraft, static dischargers are used on the trailing edges of wings and other surfaces. Damage prevention in electronics: Manufacturers and users of integrated circuits must take precautions to avoid ESD. ESD prevention can be part of the device itself and include special design techniques for device input and output pins. External protection components can also be used with circuit layout. Damage prevention in electronics: Due to the dielectric nature of electronic components and assemblies, electrostatic charging cannot be completely prevented during handling of devices. Most ESD-sensitive electronic assemblies and components are also so small that manufacturing and handling are done with automated equipment. ESD prevention activities are therefore important in those processes where components come into direct contact with equipment surfaces. In addition, it is important to prevent ESD when an electrostatically sensitive component is connected with other conductive parts of the product itself. An efficient way to prevent ESD is to use materials that are not too conductive but will slowly conduct static charges away. These materials are called static dissipative and have resistivity values below 10^12 ohm-meters. Materials in automated manufacturing that will touch conductive areas of ESD-sensitive electronics should be made of dissipative material, and the dissipative material must be grounded. These special materials are able to conduct electricity, but do so very slowly. Any built-up static charges dissipate without the sudden discharge that can harm the internal structure of silicon circuits. Damage prevention in electronics: Protection during transit Sensitive devices need to be protected during shipping, handling, and storage. The buildup and discharge of static can be minimized by controlling the surface resistance and volume resistivity of packaging materials. Packaging is also designed to minimize frictional or triboelectric charging of packs due to rubbing together during shipping, and it may be necessary to incorporate electrostatic or electromagnetic shielding in the packaging material.
A common example is that semiconductor devices and computer components are usually shipped in an antistatic bag made of a partially conductive plastic, which acts as a Faraday cage to protect the contents against ESD. Simulation and testing for electronic devices: For testing the susceptibility of electronic devices to ESD from human contact, an ESD simulator with a special output circuit, called the human body model (HBM), is often used. This consists of a capacitor in series with a resistor. The capacitor is charged to a specified high voltage from an external source and then suddenly discharged through the resistor into an electrical terminal of the device under test. One of the most widely used models is defined in the JEDEC 22-A114-B standard, which specifies a 100 picofarad capacitor and a 1,500 ohm resistor. Other similar standards are MIL-STD-883 Method 3015 and the ESD Association's ESD STM5.1. For compliance with European Union standards for information technology equipment, the IEC/EN 61000-4-2 test specification is used. Another specification, referenced by equipment maker Schaffner, calls for C = 150 pF and R = 330 Ω, which is said to provide high-fidelity results. While the theory is largely established, very few companies measure the real ESD survival rate. Guidelines and requirements are given for test cell geometries, generator specifications, test levels, discharge rate and waveform, types and points of discharge on the "victim" product, and functional criteria for gauging product survivability. Simulation and testing for electronic devices: A charged device model (CDM) test is used to define the ESD a device can withstand when the device itself has an electrostatic charge and discharges due to metal contact. This discharge type is the most common type of ESD in electronic devices and causes most ESD damage during their manufacturing. CDM discharge depends mainly on parasitic parameters of the discharge and strongly depends on the size and type of the component package. One of the most widely used CDM simulation test models is defined by JEDEC. Simulation and testing for electronic devices: Other standardized ESD test circuits include the machine model (MM) and transmission line pulse (TLP).
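To make the HBM numbers above concrete, here is a minimal Python sketch of the ideal series-RC discharge implied by the JEDEC values (100 pF into 1,500 Ω). The 2 kV precharge is an assumed stress level chosen for illustration, and real simulators include parasitic inductance and capacitance that this ideal model ignores:

```python
import math

# Ideal HBM discharge: a precharged capacitor C discharging through a series
# resistor R into the device under test, i(t) = (V/R) * exp(-t / (R*C)).
C = 100e-12   # 100 pF capacitor (JEDEC 22-A114-B)
R = 1500.0    # 1,500 ohm series resistor (JEDEC 22-A114-B)
V = 2000.0    # assumed 2 kV precharge, for illustration only

tau = R * C           # time constant: 150 ns
i_peak = V / R        # ideal peak current: ~1.33 A

for t_ns in (0, 150, 300, 450):
    i = i_peak * math.exp(-(t_ns * 1e-9) / tau)
    print(f"t = {t_ns:3d} ns   i = {i:.3f} A")
```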
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Drug-induced QT prolongation** Drug-induced QT prolongation: QT prolongation is a measure of delayed ventricular repolarisation, which means the heart muscle takes longer than normal to recharge between beats. It is an electrical disturbance which can be seen on an electrocardiogram (ECG). Excessive QT prolongation can trigger tachycardias such as torsades de pointes (TdP). QT prolongation is an established side effect of antiarrhythmics, but can also be caused by a wide range of non-cardiac medicines, including antibiotics, antihistamines, opioids, and complementary medicines. On an ECG, the QT interval represents the summation of action potentials in cardiac muscle cells; prolongation can be caused by an increase in inward current through sodium or calcium channels, or a decrease in outward current through potassium channels. By binding to and inhibiting the "rapid" delayed rectifier potassium current protein, certain drugs are able to decrease the outward flow of potassium ions and extend the length of phase 3 myocardial repolarization, resulting in QT prolongation. Background: The QT interval is a value measured on an electrocardiogram, from the start of the Q wave to the end of the T wave. The value indicates the time a ventricle takes from the beginning of contraction to the end of relaxation. The normal QT interval is similar in males and females from birth up to adolescence. During infancy, a normal QTc is defined as 400 +/- 20 milliseconds. Before puberty, the 99th percentile of QTc values is 460 milliseconds. After puberty, this value increases to 470 milliseconds in males and 480 milliseconds in females. Torsades de pointes (TdP) is an arrhythmia; more specifically, it is one form of polymorphic ventricular tachycardia that presents with a long QT interval. Diagnosis is made by electrocardiogram (ECG), which shows rapid irregular QRS complexes. The term "torsades de pointes" is translated from French as "twisting of the peaks", because the complexes appear to undulate, or twist around, the EKG baseline. TdP can be acquired by inheritance of a congenital long QT syndrome or, more commonly, from ingestion of a pharmacologic drug. During TdP episodes, patients have a heart rate of 200 to 250 beats/minute, which may present as palpitations or syncope. TdP often self-resolves; however, it may lead to ventricular fibrillation and cause sudden cardiac death. Risk factors: Although it is difficult to predict which individuals will be affected by drug-induced long QT syndrome, there are general risk factors associated with the use of certain medications. Generally, as the dose of a drug increases, the risk of QT prolongation increases as well. In addition, factors such as rapid infusion, concurrent use of more than one drug known to prolong the QT interval, diuretic treatment, electrolyte derangements (hypokalemia, hypomagnesemia, or hypocalcemia), advanced age, bradyarrhythmias, and female sex have all been shown to be risk factors for developing drug-induced QT prolongation. TdP has been shown to occur up to three times more often in female patients compared with males, likely as a result of post-pubertal hormonal influence on cardiac ion channels. The QTc interval is longer in females, who also show a stronger response to IKr-blocking agents. In males, the presence of testosterone upregulates IKr channels and therefore shortens the QT interval.
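The age- and sex-specific upper limits quoted in the Background section reduce to a simple classification rule. A minimal Python sketch follows; the function and argument names are my own, and this is an illustration of the quoted cut-offs, not clinical software:

```python
# Upper limits of normal QTc from the Background section: 460 ms before
# puberty; after puberty, 470 ms in males and 480 ms in females.
def qtc_prolonged(qtc_ms: float, post_pubertal: bool, sex: str) -> bool:
    """Return True if QTc exceeds the quoted upper limit of normal."""
    if not post_pubertal:
        limit = 460.0
    else:
        limit = 480.0 if sex == "female" else 470.0
    return qtc_ms > limit

# A 475 ms QTc is prolonged for an adult male but not for an adult female.
print(qtc_prolonged(475, post_pubertal=True, sex="male"))    # True
print(qtc_prolonged(475, post_pubertal=True, sex="female"))  # False
```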
In other words, estrogens prolong the QT interval, while androgens shorten it and decrease the response to IKr-blocking agents. Structural heart disease, such as heart failure, myocardial infarction, and left ventricular hypertrophy, is also a risk factor. Diuretic-induced hypokalemia and/or hypomagnesemia in patients treated for heart failure can induce proarrhythmia, and the ischemia that results from myocardial infarction also induces QT prolongation. Risk factors: Drugs that cause QT prolongation The main groups of drugs that can cause QT prolongation are antiarrhythmic medications, psychiatric medications, and antibiotics. Other drugs include antivirals and antifungals. Risk factors: Antiarrhythmic agents Class IA antiarrhythmic drugs, such as disopyramide, procainamide, propafenone, and quinidine, work by blocking sodium and potassium channels. Blocking sodium channels tends to shorten the action potential duration, while blocking potassium channels prolongs the action potential. At low to normal drug concentrations, the potassium channel blocking activity takes precedence over the sodium channel blocking activity. Because of this predominance of potassium-blocking activity, TdP is seen more frequently with therapeutic levels of quinidine; sodium-blocking activity is dominant at subtherapeutic levels, which does not lead to QT prolongation and TdP. Risk factors: Class III Class III antiarrhythmic drugs are potassium channel blockers that cause QT prolongation and are associated with TdP; they include amiodarone, dofetilide, ibutilide, and sotalol. Amiodarone works in many ways: it blocks sodium, potassium, and calcium channels, as well as alpha and beta adrenergic receptors. Because of its multiple actions, amiodarone causes QT prolongation but TdP is rarely observed. Ibutilide differs from other class III antiarrhythmic agents in that it activates the slow, delayed inward sodium current rather than inhibiting outward potassium channels. Sotalol also has beta-blocking activity; approximately 2 to 7 percent of patients taking at least 320 mg/day experience proarrhythmia, most often in the form of TdP, and the risks and effects are dose-dependent. Psychiatric medications Psychiatric medications shown to lengthen the QT interval and induce TdP, especially when given intravenously or in higher concentrations, include antipsychotics and antidepressants. Typical antipsychotics include chlorpromazine, haloperidol, and thioridazine (especially high risk; withdrawn by the manufacturer for this precise reason). Haloperidol functions by blocking the KCNH2 channel, the same pathway blocked by other LQTS-inducing drugs. Patients taking haloperidol are at higher risk if they also have electrolyte abnormalities (such as hypokalemia and/or hypomagnesemia), congenital LQTS, cardiac abnormalities, or hypothyroidism, or if they are concurrently taking other medications known to lengthen the QT interval. Atypical antipsychotics include quetiapine, risperidone, and ziprasidone: overdoses of quetiapine cause QT prolongation in patients with cardiac risks, and risperidone can cause mild QT prolongation, though no specific drug warnings are associated with this. SSRIs An ECG is recommended before patients are prescribed the SSRI agents citalopram and escitalopram if the prescribed dose is above 40 mg or 20 mg per day, respectively.
Risk factors: Other SSRIs associated with QT prolongation include fluoxetine, paroxetine, and sertraline; SNRIs include venlafaxine; and tricyclic antidepressants include amitriptyline, desipramine, doxepin, and imipramine. Antibiotics Macrolides include azithromycin, clarithromycin, and erythromycin. When taken independently, erythromycin has been shown to cause both QT prolongation and TdP. Erythromycin also inhibits the CYP3A enzyme, so patients who have low CYP3A activity and are concurrently taking other medications such as disopyramide are at increased risk of QT prolongation and TdP. Risk factors: Fluoroquinolones include ciprofloxacin, levofloxacin, and moxifloxacin. Other agents include chloroquine, cisapride, domperidone, famotidine, foscarnet, hydroxychloroquine, ketoconazole, methadone, octreotide, ondansetron, tacrolimus, and tamoxifen. Pathophysiology: IKr blockade On EKG, the QT interval represents the summation of action potentials in cardiac muscle cells. QT prolongation therefore results from action potential prolongation, which can be caused by an increase in inward current through sodium or calcium channels, or a decrease in outward current through potassium channels. By binding to and inhibiting the “rapid” delayed rectifier potassium current protein, IKr, which is encoded by the hERG gene, certain drugs are able to decrease the outward flow of potassium ions and extend the length of phase 3 myocardial repolarization, which is reflected as QT prolongation. Diagnosis: Most patients with drug-induced QT prolongation are asymptomatic and are diagnosed solely by EKG in association with a history of using medications known to cause QT prolongation. A minority of patients are symptomatic and typically present with one or more signs of arrhythmia, such as lightheadedness, syncope, or palpitations. If the arrhythmia persists, patients may experience sudden cardiac arrest. Management: Treatment requires identifying and removing any causative medications and correcting any underlying electrolyte abnormalities. While TdP often self-resolves, cardioversion may be indicated if patients become hemodynamically unstable, as evidenced by signs such as hypotension, altered mental status, chest pain, or heart failure. Intravenous magnesium sulfate has been proven to be highly effective for both the treatment and prevention of TdP. Managing patients with TdP depends on the patient's stability. Vital signs, level of consciousness, and current symptoms are used to assess stability. Patients who are stable should be managed by removing the underlying cause and correcting electrolyte abnormalities, especially hypokalemia. An EKG should be obtained, a cardiac monitor should be attached, IV access should be established, supplemental oxygen should be given, and blood samples should be sent for appropriate studies. Patients should be continually re-evaluated for signs of deterioration until the TdP resolves. In addition to correcting the electrolyte abnormalities, magnesium given intravenously has also been shown to be helpful: magnesium sulfate can be given as a 2 g IV bolus mixed with D5W over a period of 15 minutes in patients without cardiac arrest. Atrial pacing or administering isoproterenol can normalize the heart rate. Unstable patients exhibit signs of chest pain, hypotension, elevated heart rate, and/or heart failure. Patients who develop cardiac arrest will be pulseless and unconscious; defibrillation and resuscitation are indicated in these cases.
Patients with cardiac arrest should be given IV magnesium sulfate over a period of two minutes. After diagnosing and treating the cause of LQTS, it is also important to perform a thorough history and EKG screening. Immediate family members should also be screened for inherited and congenital causes of drug-induced QT syndrome. Incidence: There is no definitive figure for the incidence of drug-induced QT prolongation, as most data are obtained from case reports or small observational studies. Although QT interval prolongation is one of the most common reasons for drug withdrawal from the market, the overall incidence of drug-induced QT prolongation is difficult to estimate. One study in France estimated that between 5% and 7% of reports of ventricular tachycardia, ventricular fibrillation, or sudden cardiac death were in fact due to drug-induced QT prolongation and torsades de pointes. An observational study from the Netherlands showed that 3.1% of patients who experienced sudden cardiac death were also using a QT-prolonging drug.
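The Background section above quotes QTc cutoffs without describing how a rate-corrected QT value is obtained. As an illustration only, the Python sketch below applies Bazett's correction (QTc = QT/√RR), a widely used formula that the article itself does not name, together with the post-pubertal thresholds quoted above:

```python
import math

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """Rate-corrected QT interval via Bazett's formula (an assumed
    correction method; the text above does not specify one).
    The RR interval is expressed in seconds."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

def is_prolonged(qtc_ms: float, sex: str) -> bool:
    """Post-pubertal 99th-percentile cutoffs quoted in the article:
    470 ms for males, 480 ms for females."""
    return qtc_ms > (470.0 if sex == "male" else 480.0)

# Example: a measured QT of 400 ms at 80 bpm gives a QTc of about 462 ms,
# below both cutoffs.
qtc = qtc_bazett(400.0, 80.0)
print(f"QTc = {qtc:.0f} ms, prolonged (male): {is_prolonged(qtc, 'male')}")
```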
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fisher consistency** Fisher consistency: In statistics, Fisher consistency, named after Ronald Fisher, is a desirable property of an estimator asserting that if the estimator were calculated using the entire population rather than a sample, the true value of the estimated parameter would be obtained. Definition: Suppose we have a statistical sample X1, ..., Xn where each Xi follows a cumulative distribution Fθ which depends on an unknown parameter θ. If an estimator of θ based on the sample can be represented as a functional of the empirical distribution function F̂n, $\hat{\theta} = T(\hat{F}_n)$, the estimator is said to be Fisher consistent if $T(F_\theta) = \theta$. Definition: As long as the Xi are exchangeable, an estimator T defined in terms of the Xi can be converted into an estimator T′ that can be defined in terms of F̂n by averaging T over all permutations of the data. The resulting estimator will have the same expected value as T and its variance will be no larger than that of T. Definition: If the strong law of large numbers can be applied, the empirical distribution functions F̂n converge pointwise to Fθ, allowing us to express Fisher consistency as a limit — the estimator is Fisher consistent if $\lim_{n\to\infty} T(\hat{F}_n) = \theta$. Finite population example: Suppose our sample is obtained from a finite population Z1, ..., Zm. We can represent our sample of size n in terms of the proportion of the sample ni / n taking on each value in the population. Writing our estimator of θ as T(n1 / n, ..., nm / n), the population analogue of the estimator is T(p1, ..., pm), where pi = P(X = Zi). Thus we have Fisher consistency if T(p1, ..., pm) = θ. Finite population example: Suppose the parameter of interest is the expected value μ and the estimator is the sample mean, which can be written $n^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m} I(X_i = Z_j)\, Z_j$, where I is the indicator function. The population analogue of this expression is $n^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m} p_j Z_j = n^{-1}\sum_{i=1}^{n}\mu = \mu$, so we have Fisher consistency. Role in maximum likelihood estimation: Maximising the likelihood function L gives an estimate that is Fisher consistent for a parameter b if $\mathrm{E}\!\left[\frac{\partial \ln L}{\partial b}\right] = 0$ at $b = b_0$, where b0 represents the true value of b. Relationship to asymptotic consistency and unbiasedness: The term consistency in statistics usually refers to an estimator that is asymptotically consistent. Fisher consistency and asymptotic consistency are distinct concepts, although both aim to define a desirable property of an estimator. While many estimators are consistent in both senses, neither definition encompasses the other. For example, suppose we take an estimator Tn that is both Fisher consistent and asymptotically consistent, and then form Tn + En, where En is a deterministic sequence of nonzero numbers converging to zero. This estimator is asymptotically consistent, but not Fisher consistent for any n. Relationship to asymptotic consistency and unbiasedness: The sample mean is a Fisher consistent and unbiased estimate of the population mean, but not all Fisher consistent estimates are unbiased. Suppose we observe a sample from a uniform distribution on (0,θ) and we wish to estimate θ. The sample maximum is Fisher consistent, but downwardly biased. Conversely, the sample variance is an unbiased estimate of the population variance, but is not Fisher consistent. Role in decision theory: A loss function is Fisher consistent if the population minimizer of the risk leads to the Bayes optimal decision rule.
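A small simulation makes the uniform-distribution example above concrete: the sample maximum is Fisher consistent for θ (the plug-in functional returns the supremum of the support) yet biased downward, while the sample mean is Fisher consistent and unbiased for μ. A minimal sketch in Python, assuming only NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0            # true parameter of the Uniform(0, theta) population
n, reps = 50, 20_000   # sample size and number of simulated samples

# Sample maximum: applied to the population distribution the functional
# returns the supremum of the support, so T(F_theta) = theta (Fisher
# consistent), but in finite samples E[max] = theta * n/(n+1) < theta.
maxima = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
print("mean of sample maxima:", maxima.mean())   # ~ 1.96, below theta = 2

# Sample mean: Fisher consistent and unbiased for the population mean.
means = rng.uniform(0, theta, size=(reps, n)).mean(axis=1)
print("mean of sample means:", means.mean())     # ~ theta / 2 = 1.0
```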
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Poppy-seed bagel theorem** Poppy-seed bagel theorem: In physics, the poppy-seed bagel theorem concerns interacting particles (e.g., electrons) confined to a bounded surface (or body) A when the particles repel each other pairwise with a magnitude that is proportional to the inverse distance between them raised to some positive power s. In particular, this includes the Coulomb law observed in electrostatics and Riesz potentials extensively studied in potential theory. Other classes of potentials, which do not necessarily involve the Riesz kernel, for example nearest neighbor interactions, are also described by this theorem in the macroscopic regime. Poppy-seed bagel theorem: For N such particles, a stable equilibrium state, which depends on the parameter s, is attained when the associated potential energy of the system is minimal (the so-called generalized Thomson problem). For large numbers of points, these equilibrium configurations provide a discretization of A which may or may not be nearly uniform with respect to the surface area (or volume) of A. The poppy-seed bagel theorem asserts that for a large class of sets A, the uniformity property holds when the parameter s is larger than or equal to the dimension of the set A. For example, when the points ("poppy seeds") are confined to the 2-dimensional surface of a torus embedded in 3 dimensions (or "surface of a bagel"), one can create a large number of points that are nearly uniformly spread on the surface by imposing a repulsion proportional to the inverse square distance between the points, or any stronger repulsion (s≥2). From a culinary perspective, to create the nearly perfect poppy-seed bagel where bites of equal size anywhere on the bagel would contain essentially the same number of poppy seeds, impose at least an inverse square distance repelling force on the seeds. Formal definitions: For a parameter s>0 and an N-point set ωN={x1,…,xN}⊂Rp, the s-energy of ωN is defined as $E_s(\omega_N) = \sum_{1 \le i \ne j \le N} \frac{1}{\|x_i - x_j\|^{s}}$. For a compact set A we define its minimal N-point s-energy as $\mathcal{E}_s(A,N) = \min_{\omega_N \subset A} E_s(\omega_N)$, where the minimum is taken over all N-point subsets of A; i.e., ωN⊂A. Configurations ωN that attain this infimum are called N-point s-equilibrium configurations. Poppy-seed bagel theorem for bodies: We consider compact sets A⊂Rp with the Lebesgue measure λ(A)>0 and s⩾p. For every N⩾2 fix an N-point s-equilibrium configuration ωN∗={x1,N,…,xN,N} and set $\nu_N = \frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i,N}}$, where δx is a unit point mass at point x. Under these assumptions, $\nu_N \to \mu$ in the sense of weak convergence of measures, where μ is the Lebesgue measure restricted to A; i.e., μ(B)=λ(A∩B)/λ(A). Furthermore, it is true that $\lim_{N\to\infty} \frac{\mathcal{E}_s(A,N)}{N^{1+s/p}} = \frac{C_{s,p}}{\lambda(A)^{s/p}}$, where the constant Cs,p does not depend on the set A and, therefore, $C_{s,p} = \lim_{N\to\infty} \frac{\mathcal{E}_s([0,1]^p,N)}{N^{1+s/p}}$, where [0,1]p is the unit cube in Rp. Poppy-seed bagel theorem for manifolds: Consider a smooth d-dimensional manifold A embedded in Rp and denote its surface measure by σ. We assume σ(A)>0 and s⩾d. As before, for every N⩾2 fix an N-point s-equilibrium configuration ωN∗={x1,N,…,xN,N} and set $\nu_N = \frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i,N}}$. Then, $\nu_N \to \mu$ in the sense of weak convergence of measures, where μ(B)=σ(A∩B)/σ(A). If Hd is the d-dimensional Hausdorff measure normalized so that Hd([0,1]d)=1, then $\lim_{N\to\infty} \frac{\mathcal{E}_s(A,N)}{N^{1+s/d}} = \frac{C_{s,d}}{\mathcal{H}_d(A)^{s/d}}$; here αd=πd/2/Γ(1+d/2) denotes the volume of the unit d-ball. The constant Cs,p: For p=1, it is known that Cs,1=2ζ(s), where ζ(s) is the Riemann zeta function.
Using a modular form approach to linear programming, Viazovska and coauthors established in a 2022 paper that in dimensions p=8 and p=24, the values of Cs,p, s>p, are given by the Epstein zeta function associated with the E8 lattice and the Leech lattice, respectively. The constant Cs,p: It is conjectured that for p=2, the value of Cs,p is similarly determined as the value of the Epstein zeta function for the hexagonal lattice. Finally, in every dimension p≥1 it is known that when s=p, the scaling of Es(A,N) becomes $N^2 \log N$ rather than $N^{1+s/p} = N^2$, and the value of Cs,p can be computed explicitly as the volume of the unit p-dimensional ball: $C_{p,p} = \alpha_p = \frac{\pi^{p/2}}{\Gamma(1+p/2)}$. A connection between the constant Cs,p and the problem of sphere packing is also known, expressing the large-s behaviour of Cs,p in terms of the volume αp of a p-ball and a supremum taken over all families P of non-overlapping unit balls for which the corresponding limit exists.
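As a numerical illustration of the theorem's setting (not part of the original statement), the following Python sketch minimizes the Riesz s-energy of N points constrained to a torus by projected gradient descent; with s = 2 ≥ dim(A) = 2, the theorem predicts that minimizers spread out nearly uniformly with respect to surface area. The radii, step size, and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, s, R, r = 100, 2.0, 2.0, 1.0   # point count, Riesz exponent, torus radii

def project(p):
    """Closest-point projection of points in R^3 onto the torus with
    center-circle radius R and tube radius r."""
    u = np.arctan2(p[:, 1], p[:, 0])
    centers = np.stack([R * np.cos(u), R * np.sin(u), np.zeros(len(p))], axis=1)
    w = p - centers
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    return centers + r * w

p = project(rng.normal(size=(N, 3)) + np.array([R, 0.0, 0.0]))
for step in range(3000):
    diff = p[:, None, :] - p[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)            # exclude self-interaction
    # gradient of sum_{i != j} dist_ij^(-s) with respect to each point
    grad = -s * np.sum(diff * dist[:, :, None] ** (-s - 2), axis=1)
    p = project(p - 1e-3 * grad)              # descend, then re-project

dist = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)
print("smallest pairwise distance:", dist.min())   # points end up well separated
```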
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eosinophil** Eosinophil: Eosinophils, sometimes called eosinophiles or, less commonly, acidophils, are a variety of white blood cells and one of the immune system components responsible for combating multicellular parasites and certain infections in vertebrates. Along with mast cells and basophils, they also control mechanisms associated with allergy and asthma. They are granulocytes that develop during hematopoiesis in the bone marrow before migrating into blood, after which they are terminally differentiated and do not multiply. They form about 2 to 3% of white blood cells in the body. Eosinophil: These cells are eosinophilic or "acid-loving" due to their large acidophilic cytoplasmic granules, which show their affinity for acids by their affinity for coal tar dyes: normally transparent, the granules appear brick-red after staining with eosin, a red dye, using the Romanowsky method. The staining is concentrated in small granules within the cellular cytoplasm, which contain many chemical mediators, such as eosinophil peroxidase, ribonuclease (RNase), deoxyribonucleases (DNase), lipase, plasminogen, and major basic protein. These mediators are released by a process called degranulation following activation of the eosinophil, and are toxic to both parasite and host tissues. Eosinophil: In normal individuals, eosinophils make up about 1–3% of white blood cells, and are about 12–17 micrometres in size with bilobed nuclei. While eosinophils are released into the bloodstream, they reside in tissue. They are found in the medulla and the junction between the cortex and medulla of the thymus, and in the lower gastrointestinal tract, ovaries, uterus, spleen, and lymph nodes, but not in the lungs, skin, esophagus, or some other internal organs under normal conditions. The presence of eosinophils in these latter organs is associated with disease. For instance, patients with eosinophilic asthma have high levels of eosinophils that lead to inflammation and tissue damage, making it more difficult for patients to breathe. Eosinophils persist in the circulation for 8–12 hours, and can survive in tissue for an additional 8–12 days in the absence of stimulation. Pioneering work in the 1980s elucidated that eosinophils are unique granulocytes, having the capacity to survive for extended periods of time after their maturation, as demonstrated by ex-vivo culture experiments. Development: TH2 and ILC2 cells both express the transcription factor GATA-3, which promotes the production of TH2 cytokines, including the interleukins (ILs). IL-5 controls the development of eosinophils in the bone marrow, as they differentiate from myeloid precursor cells. Their lineage fate is determined by transcription factors, including GATA and C/EBP. Eosinophils produce and store many secondary granule proteins prior to their exit from the bone marrow. After maturation, eosinophils circulate in blood and migrate to inflammatory sites in tissues, or to sites of helminth infection, in response to chemokines like CCL11 (eotaxin-1), CCL24 (eotaxin-2), CCL5 (RANTES), 5-hydroxyicosatetraenoic acid and 5-oxo-eicosatetraenoic acid, and certain leukotrienes like leukotriene B4 (LTB4) and MCP1/4. Interleukin-13, another TH2 cytokine, primes eosinophilic exit from the bone marrow by lining vessel walls with adhesion molecules such as VCAM-1 and ICAM-1. Development: When eosinophils are activated, they undergo cytolysis, where the breaking of the cell releases eosinophilic granules found in extracellular DNA traps.
High concentrations of these DNA traps are known to cause cellular damage, as the granules they contain are responsible for the ligand-induced secretion of eosinophilic toxins which cause structural damage. There is evidence to suggest that eosinophil granule protein expression is regulated by the non-coding RNA EGOT. Function: Following activation, eosinophil effector functions include production of the following: cationic granule proteins and their release by degranulation; reactive oxygen species such as hypobromite, superoxide, and peroxide (hypobromous acid, which is preferentially produced by eosinophil peroxidase); lipid mediators like the eicosanoids from the leukotriene (e.g., LTC4, LTD4, LTE4) and prostaglandin (e.g., PGE2) families; enzymes, such as elastase; growth factors such as TGF beta, VEGF, and PDGF; and cytokines such as IL-1, IL-2, IL-4, IL-5, IL-6, IL-8, IL-9, IL-13, and TNF alpha. Eosinophils also play a role in fighting viral infections, which is evident from the abundance of RNases they contain within their granules, and in fibrin removal during inflammation. Eosinophils, along with basophils and mast cells, are important mediators of allergic responses and asthma pathogenesis and are associated with disease severity. They also fight helminth (worm) colonization and may be slightly elevated in the presence of certain parasites. Eosinophils are also involved in many other biological processes, including postpubertal mammary gland development, oestrus cycling, allograft rejection, and neoplasia. They have also been implicated in antigen presentation to T cells. Eosinophils are responsible for tissue damage and inflammation in many diseases, including asthma. High levels of interleukin-5 have been observed to upregulate the expression of adhesion molecules, which then facilitate the adhesion of eosinophils to endothelial cells, thereby causing inflammation and tissue damage. An accumulation of eosinophils in the nasal mucosa is considered a major diagnostic criterion for allergic rhinitis (nasal allergies). Granule proteins: Following activation by an immune stimulus, eosinophils degranulate to release an array of cytotoxic granule cationic proteins that are capable of inducing tissue damage and dysfunction. These include major basic protein (MBP), eosinophil cationic protein (ECP), eosinophil peroxidase (EPX), and eosinophil-derived neurotoxin (EDN). Major basic protein, eosinophil peroxidase, and eosinophil cationic protein are toxic to many tissues. Eosinophil cationic protein and eosinophil-derived neurotoxin are ribonucleases with antiviral activity. Major basic protein induces mast cell and basophil degranulation, and is implicated in peripheral nerve remodelling. Eosinophil cationic protein creates toxic pores in the membranes of target cells, allowing potential entry of other cytotoxic molecules to the cell; it can also inhibit proliferation of T cells, suppress antibody production by B cells, induce degranulation by mast cells, and stimulate fibroblast cells to secrete mucus and glycosaminoglycan. Eosinophil peroxidase forms reactive oxygen species and reactive nitrogen intermediates that promote oxidative stress in the target, causing cell death by apoptosis and necrosis.
Clinical significance: Eosinophilia An increase in eosinophils, i.e., the presence of more than 500 eosinophils/microlitre of blood, is called eosinophilia, and is typically seen in people with a parasitic infestation of the intestines; autoimmune and collagen vascular disease (such as rheumatoid arthritis) and systemic lupus erythematosus; malignant diseases such as eosinophilic leukemia, clonal hypereosinophilia, and Hodgkin lymphoma; lymphocyte-variant hypereosinophilia; extensive skin diseases (such as exfoliative dermatitis); Addison's disease and other causes of low corticosteroid production (corticosteroids suppress blood eosinophil levels); reflux esophagitis (in which eosinophils will be found in the squamous epithelium of the esophagus) and eosinophilic esophagitis; and with the use of certain drugs such as penicillin. However, perhaps the most common cause of eosinophilia is an allergic condition such as asthma. In 1989, contaminated L-tryptophan supplements caused a deadly form of eosinophilia known as eosinophilia-myalgia syndrome, which was reminiscent of the toxic oil syndrome in Spain in 1981. Clinical significance: Eosinophils play an important role in asthma, as the number of accumulated eosinophils corresponds to the severity of the asthmatic reaction. Eosinophilia in mouse models has been shown to be associated with high interleukin-5 levels. Furthermore, mucosal bronchial biopsies conducted on patients with diseases such as asthma have been found to have higher levels of interleukin-5, leading to higher levels of eosinophils. The infiltration of eosinophils at these high concentrations causes an inflammatory reaction. This ultimately leads to airway remodelling and difficulty breathing. Eosinophils can also cause tissue damage in the lungs of asthmatic patients. High concentrations of eosinophil major basic protein and eosinophil-derived neurotoxin that approach cytotoxic levels are observed at degranulation sites in the lungs as well as in asthmatic sputum. Clinical significance: Treatment Treatments used to combat autoimmune diseases and conditions caused by eosinophils include: corticosteroids – promote apoptosis, and numbers of eosinophils in blood are rapidly reduced; monoclonal antibody therapy – e.g., mepolizumab or reslizumab against IL-5, which prevents eosinophilopoiesis, or benralizumab against the IL-5 receptor, which eliminates eosinophils through ADCC; antagonists of leukotriene synthesis or receptors; and imatinib (STI571) – inhibits PDGF-BB in hypereosinophilic leukemia. Monoclonal antibodies such as dupilumab and lebrikizumab target IL-13 and its receptor, which reduces eosinophilic inflammation in patients with asthma by lowering the number of adhesion molecules available for eosinophils to bind to, thereby decreasing inflammation. Mepolizumab (against IL-5) and benralizumab (against the alpha subunit of the IL-5 receptor) are other treatment options that reduce the number of developing eosinophils as well as the number of eosinophils driving inflammation, through antibody-dependent cell-mediated cytotoxicity and eosinophil apoptosis. Animal studies: Within the fat (adipose) tissue of CCR2-deficient mice, there is an increased number of eosinophils, greater alternative macrophage activation, and a propensity towards type 2 cytokine expression. Furthermore, this effect was exaggerated when the mice became obese from a high-fat diet. Mouse models of eosinophilia from mice infected with T. canis showed an increase in IL-5 mRNA in the spleen.
Mouse models of ovalbumin (OVA)-induced asthma show a heightened TH2 response. When mice are administered IL-12 to induce a TH1 response, the TH2 response becomes suppressed, showing that mice without TH2 cytokines are significantly less likely to express asthma symptoms.
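The eosinophilia threshold above is stated as an absolute count, while normal levels are quoted as a percentage of white blood cells. A small sketch of the standard conversion between the two (the formula, absolute count = total white-cell count × eosinophil fraction, is assumed here rather than given in the article):

```python
def absolute_eosinophil_count(wbc_per_ul: float, eos_percent: float) -> float:
    """Absolute eosinophil count per microlitre of blood, from the total
    white-cell count and the eosinophil share of the differential."""
    return wbc_per_ul * eos_percent / 100.0

# A typical WBC count of 7,000/uL with 3% eosinophils gives 210/uL,
# well below the >500/uL eosinophilia threshold quoted above.
print(absolute_eosinophil_count(7000, 3))   # 210.0
```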
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Slump test** Slump test: The slump test is an orthopedic test used to determine whether a patient has sciatic nerve impingement. Purpose: The purpose of this test is to place tension on the dural sheath of the sciatic nerve. Procedure: The patient sits on the edge of the table while the examiner stands at the patient's side. The slump test consists of several steps: first, the patient slumps forward, rounding the shoulders, and the examiner applies pressure to maintain trunk flexion. Next, the patient brings the chin to the chest, and the knee is then actively extended. Afterwards, the ankle is dorsiflexed. If pain is produced during any of the steps, the examiner does not need to continue the test. The test has several modifications, all of which use different sequences of motions that create tension on the dural sheath. Mechanism: The dural sheath around the sciatic nerve is stressed or stretched as the patient changes positions. Results: A positive sign is any kind of sciatic pain (radiating, sharp, shooting pain) or reproduction of other neurological symptoms. This indicates impingement of the sciatic nerve, dural lining, spinal cord, or nerve roots. The test can produce many false positives and should be used together with other orthopedic tests to make the final diagnosis. History: Charles Lasègue is credited with creating the slump test. Used with the Bragard maneuver, it has been considered the gold standard in the medical community.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bhutanese animation** Bhutanese animation: Bhutanese animation is a relatively new industry in Bhutan. Local animations have been primarily used for public awareness campaigns about relevant social, economic, and political issues or as a means to promote local culture. Bhutanese animators and clients view the use of animation in public awareness as more effective than brochures and pamphlets. Ap Naka is an example of a Bhutanese animated public awareness video which seeks to educate people on earthquake preparedness. Pema Tshering D made the first Bhutanese 3D-animated film, which was released in 2001. Tshering's first animation was that of a beetle dance, and his first public awareness video was Oye Penjor. In 2005, KLK anImagine and Druk Vision Studio, which are major animation studios in Bhutan, were established. KLK is owned by Kinga Sithup, and Druk Vision Studio is owned by Pema Tshering D. The first local 2D animation was by KLK, an awareness campaign on rubella, while the first 3D animation, Oye Penjor, was about AIDS and took around three months to produce. It is a common practice among Bhutanese animators to spread 10 pictures across 25 frames or to animate at 24 pictures per second (see the sketch below). By 2008, Druk Vision Studio had produced around seven animated films, and KLK had produced around 20. The first full-length 3D animated film, produced by Athang Animation Studio (established in 2010), was Ap Bokto. It was first screened to the public in September 2014.
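The timing practice mentioned above amounts to holding each drawing over several playback frames. The hypothetical Python helper below illustrates one way to distribute 10 pictures evenly across 25 frames; it is an illustration of the arithmetic, not a description of any studio's actual pipeline:

```python
def spread_drawings(num_drawings: int, num_frames: int) -> list[int]:
    """Assign a drawing index to each playback frame so that a limited
    set of drawings is held evenly across the frame count."""
    return [i * num_drawings // num_frames for i in range(num_frames)]

# 10 drawings over 25 frames: each drawing is held for 2-3 frames,
# comparable to "animating on twos" at ordinary playback rates.
print(spread_drawings(10, 25))
# -> [0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 6, 6, 6, 7, 7, 8, 8, 8, 9, 9]
```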
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tafamidis** Tafamidis: Tafamidis, sold under the brand names Vyndaqel and Vyndamax, is a medication used to delay disease progression in adults with certain forms of transthyretin amyloidosis. It can be used to treat both hereditary forms, familial amyloid cardiomyopathy and familial amyloid polyneuropathy, as well as wild-type transthyretin amyloidosis, which was formerly called senile systemic amyloidosis. It works by stabilizing the quaternary structure of the protein transthyretin. In people with transthyretin amyloidosis, transthyretin falls apart and forms clumps called amyloid that harm tissues including nerves and the heart. The U.S. Food and Drug Administration considers tafamidis to be a first-in-class medication. Medical use: Tafamidis is used to delay nerve damage in adults who have transthyretin amyloidosis with polyneuropathy, or heart disease in adults who have transthyretin amyloidosis with cardiomyopathy. It is taken by mouth. Women should not get pregnant or breastfeed while taking it. People with familial amyloid polyneuropathy who have received a liver transplant should not take it. Adverse effects: More than 10% of people in clinical trials had one or more of urinary tract infections, vaginal infections, upper abdominal pain, or diarrhea. Interactions: Tafamidis does not appear to interact with cytochrome P450, but it inhibits ATP-binding cassette super-family G member 2, so it is likely to affect the levels of certain drugs including methotrexate, rosuvastatin, and imatinib. It also inhibits organic anion transporter 1 and organic anion transporter 3/solute carrier family 22 member 8, so it is likely to interact with non-steroidal anti-inflammatory agents and other drugs that rely on those transporters. Pharmacology: Tafamidis is a pharmacological chaperone that stabilizes the correctly folded tetrameric form of the transthyretin protein by binding in one of the two thyroxine-binding sites of the tetramer. In people with familial amyloid polyneuropathy, the individual monomers fall away from the tetramer, misfold, and aggregate; the aggregates harm nerves. The maximum plasma concentration is achieved around two hours after dosing; in plasma it is almost completely bound to proteins. Based on preclinical data, it appears to be metabolized by glucuronidation and excreted via bile; in humans, around 59% of a dose is recovered in feces, and approximately 22% in urine. Chemistry: The chemical name of tafamidis is 2-(3,5-dichlorophenyl)-1,3-benzoxazole-6-carboxylic acid. The molecule has two crystalline forms and one amorphous form; it is manufactured in one of the possible crystalline forms. It is marketed as a meglumine salt. It is slightly soluble in water. History: The laboratory of Jeffery W. Kelly at The Scripps Research Institute began looking for ways to inhibit transthyretin fibril formation in the 1990s. Tafamidis was eventually discovered by Kelly's team using a structure-based drug design strategy; the chemical structure was first published in 2003. In 2003, Kelly co-founded a company called FoldRx with Susan Lindquist of the Massachusetts Institute of Technology and the Whitehead Institute, and FoldRx developed tafamidis up through submitting an application for marketing approval in Europe in early 2010. FoldRx was acquired by Pfizer later that year. Tafamidis was approved by the European Medicines Agency in November 2011, to delay peripheral nerve impairment in adults with transthyretin-related hereditary amyloidosis. The U.S.
Food and Drug Administration rejected the application for marketing approval in 2012, on the basis that the clinical trial did not show efficacy based on a functional endpoint, and requested further clinical trials. In May 2019, the FDA approved two tafamidis preparations, Vyndaqel (tafamidis meglumine) and Vyndamax (tafamidis), for the treatment of transthyretin-mediated cardiomyopathy. The drug was approved in Japan in 2013; regulators there made the approval dependent on further clinical trials showing better evidence of efficacy. The FDA approved tafamidis meglumine based primarily on evidence from a clinical trial of 441 adult patients conducted at 60 sites in Belgium, Brazil, Canada, the Czech Republic, Spain, France, Greece, Italy, Japan, the Netherlands, Sweden, Great Britain, and the United States. There was one trial that evaluated the benefits and side effects of tafamidis for the treatment of transthyretin amyloidosis with cardiomyopathy, in which patients were randomly assigned to receive either tafamidis (either 20 or 80 mg) or placebo for 30 months. About 90% of patients in the trial were taking other drugs for heart failure (consistent with the standard of care). The European Medicines Agency designated tafamidis an orphan medicine, and the Food and Drug Administration also designated tafamidis meglumine as an orphan drug. Society and culture: Legal status Tafamidis was approved in the European Union in 2011 for the treatment of transthyretin amyloidosis with polyneuropathy, and in Japan in 2013. In the United States, it was rejected for the treatment of transthyretin amyloidosis with polyneuropathy because the Food and Drug Administration saw insufficient evidence for its efficacy. Tafamidis can also be used to treat transthyretin amyloidosis with cardiomyopathy. It was approved for the treatment of this form of the disease in the United States in 2019 and in the European Union in 2020. In the United States, there are two approved preparations: tafamidis meglumine (Vyndaqel) and tafamidis (Vyndamax). The two preparations have the same active moiety, tafamidis, but they are not substitutable on a milligram-to-milligram basis. Tafamidis (Vyndamax) and tafamidis meglumine (Vyndaqel) were approved for medical use in Australia in March 2020.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SGCG** SGCG: Gamma-sarcoglycan is a protein that in humans is encoded by the SGCG gene. The α to δ-sarcoglycans are expressed predominantly (β) or exclusively (α, γ and δ) in striated muscle. A mutation in any of the sarcoglycan genes may lead to a secondary deficiency of the other sarcoglycan proteins, presumably due to destabilisation of the sarcoglycan complex. The disease-causing mutations in the α to δ genes cause disruptions within the dystrophin-associated protein (DAP) complex in the muscle cell membrane. The transmembrane components of the DAP complex link the cytoskeleton to the extracellular matrix in adult muscle fibres, and are essential for the preservation of the integrity of the muscle cell membrane. Function: Gamma-sarcoglycan is one of several sarcolemmal transmembrane glycoproteins that interact with dystrophin, probably to provide a link between the membrane-associated cytoskeleton and the extracellular matrix. Defects in the protein can lead to early-onset autosomal recessive muscular dystrophy, in particular limb-girdle muscular dystrophy type 2C (LGMD2C). Structure: Gene The human SGCG gene maps to chromosome 13 at q12, spans over 100 kb of DNA, and includes 8 exons. Protein Gamma-sarcoglycan is a type II transmembrane protein and consists of 291 amino acids. It has a 35 amino acid intracellular N-terminal region, a 25 amino acid single transmembrane domain, and a 231 amino acid extracellular C-terminus. Clinical significance: Sarcoglycanopathies are autosomal recessive limb-girdle muscular dystrophies (LGMDs) caused by mutations in any of the four sarcoglycan genes: α (LGMD2D), β (LGMD2E), γ (LGMD2C) and δ (LGMD2F). Severe childhood autosomal recessive muscular dystrophy (SCARMD) is a progressive muscle-wasting disorder that segregates with microsatellite markers at the γ-sarcoglycan gene. Mutations in the γ-sarcoglycan gene were first described in the Maghreb countries of North Africa, where γ-sarcoglycanopathy has a higher than usual incidence. One common mutation, Δ-521T, which causes a severe phenotype, occurs both in the Maghreb population and in other countries. A Cys283Tyr mutation causing a severe phenotype has been identified in the Romani (Gypsy) population, and a Leu193Ser mutation causes a mild phenotype. Interactions: SGCG has been shown to interact with FLNC.
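The three domain lengths quoted above sum exactly to the 291-residue protein (35 + 25 + 231 = 291). A minimal sketch of the implied type II topology follows; the residue boundaries are inferred from those lengths and are an assumption, not coordinates from the article:

```python
# Residue ranges inferred from the domain lengths quoted above; the exact
# boundary numbering is an assumption for illustration only.
DOMAINS = {
    "intracellular N-terminal region": (1, 35),
    "single transmembrane domain": (36, 60),
    "extracellular C-terminus": (61, 291),
}

# The domain lengths must account for the full 291-residue protein.
assert sum(end - start + 1 for start, end in DOMAINS.values()) == 291

for name, (start, end) in DOMAINS.items():
    print(f"{name}: residues {start}-{end} ({end - start + 1} aa)")
```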
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Minion (chat widget)** Minion (chat widget): Minion (Hangul: 미니온) is a chat widget developed by DevArzz, a South Korean server operator. It is written in Python 2.7.x using the twistedmatrix 11.0 library. Minion can be embedded in web browsers. There are public and non-public channels; non-public channels are usually installed on private web pages, but they can later be upgraded to public channels for a fee. Functions: Minion has the functions that most chat programs have, and it is used in much the same way. Minion has six main functions: Call: calls other users in the chat, making a beeping sound while calling them. Whisper: chats privately with another user. The text appears purple/pink while whispering, and no other users can see the whisper log. Private Channel: creates a private channel. The creator of a private channel can invite other users to the channel; users cannot enter the channel unless they have been invited. Status: the user can change their status. There are three statuses: away, do not disturb, and online. This is similar to the status system in Windows Messenger or Skype. IP Ban: bans a user's IP from the channel. An API key is required to activate this function. Functions: Chat cutoff: blocks a user from chatting for 30 seconds. The user can still watch the chat and whisper to other users. This also requires an API key, and it can likewise be used by a user in a private channel they created. Chat is also cut off automatically if a user either types the same text for more than three lines or types 10 lines within 15 seconds. Four warnings are given, after which the user's IP is automatically banned from the server. Servers: Minion has different servers to hold all the users accessing it from different locations. There are 11 servers in total, with a capacity of 66,500 users and 724 channels. All Minion servers run through the DevArzz server, so the service does not shut down because of excess traffic. The servers are named after planets and their satellites. Server #1 EARTH EARTH: holds up to 500 users, has 31 open channels. MOON: holds up to 500 users, has 50 open channels. ISS: holds up to 500 users, has no open channels. Server #2 MARS MARS: holds up to 5,000 users, has 160 open channels. PHOBOS: holds up to 5,000 users, has 186 open channels. DEIMOS: holds up to 5,000 users, has 192 open channels. VENUS: holds up to 10,000 users, has no open channels. Server #3 JUPITER JUPITER: holds up to 10,000 users, has 64 open channels. IO: holds up to 10,000 users, has 9 open channels. EUROPA: holds up to 10,000 users, has 27 open channels. GANYMEDE: holds up to 10,000 users, has 5 open channels. Web Browsers: Minion runs in most web browsers, but it cannot run in Safari on the iPod. Minion is developed with JavaScript and Adobe Flash, and since Safari for the iPhone (and similar Apple devices) does not support Flash movies or .swf files, Minion cannot run in Safari there. However, Minion can run on Apple devices using the Minion apps.
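The automatic cutoff rule described above (the same text repeated over more than three lines, or 10 lines within 15 seconds, with four warnings before an IP ban) is a simple flood-control policy. Since Minion itself is written in Python, a hedged sketch of such a check is given below; the class and method names are hypothetical and not taken from Minion's actual source:

```python
import time
from collections import defaultdict, deque

class FloodGuard:
    """Hypothetical reimplementation of the cutoff rule described above:
    the same text on more than three consecutive lines, or 10 lines
    within 15 seconds, triggers a cutoff; four warnings lead to an IP ban."""

    def __init__(self):
        self.recent = defaultdict(deque)               # ip -> line timestamps
        self.last_text = defaultdict(lambda: ("", 0))  # ip -> (text, repeats)
        self.warnings = defaultdict(int)
        self.banned = set()

    def allow(self, ip: str, text: str, now=None) -> bool:
        if ip in self.banned:
            return False
        now = time.time() if now is None else now

        # Rule 1: the same text repeated on more than three lines.
        prev, count = self.last_text[ip]
        count = count + 1 if text == prev else 1
        self.last_text[ip] = (text, count)

        # Rule 2: ten lines within a sliding 15-second window.
        window = self.recent[ip]
        window.append(now)
        while window and now - window[0] > 15:
            window.popleft()

        if count > 3 or len(window) >= 10:
            self.warnings[ip] += 1
            if self.warnings[ip] >= 4:   # fourth warning -> automatic IP ban
                self.banned.add(ip)
            return False                 # a 30-second cutoff would start here
        return True
```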
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Great Norwegian Encyclopedia** Great Norwegian Encyclopedia: The Great Norwegian Encyclopedia (Norwegian: Store Norske Leksikon, abbreviated SNL) is a Norwegian-language online encyclopedia. The online encyclopedia is among the most-read Norwegian websites, with more than two million unique visitors per month. Paper editions 1978–2007: The SNL was created in 1978, when the two publishing houses Aschehoug and Gyldendal merged their encyclopedias and created the company Kunnskapsforlaget. Up until 1978 the two publishing houses of Aschehoug and Gyldendal, Norway's two largest, had published Aschehougs konversasjonsleksikon and Gyldendals konversasjonsleksikon, respectively. The respective first editions were published in 1907–1913 (Aschehoug) and 1933–1934 (Gyldendal). The slump in sales of paper-based encyclopedias around the turn of the 21st century hit Kunnskapsforlaget hard, but a fourth edition of the paper encyclopedia was secured by a grant of ten million Norwegian kroner from the foundation Fritt Ord in 2003. The fourth edition consisted of 16 volumes, a total of 12,000 pages and 280,000 entries. Paper editions 1978–2007: List of paper editions First edition, 1978–1981, 12 volumes. Chief editors Olaf Kortner, Preben Munthe, Egil Tveterås. Second edition, 1986–1989, 15 volumes. Chief editors Olaf Kortner, Preben Munthe, Egil Tveterås. Third edition, 1995–1998, 16 volumes. Chief editor Petter Henriksen. Fourth edition, 2005–2007, 16 volumes. Chief editor Petter Henriksen. Online encyclopedia: The online edition of SNL was launched in 2000, and had both private and institutional subscribers. The paywall was removed on 25 February 2009, and the online encyclopedia became free to use. On 12 March 2010, Kunnskapsforlaget announced that it would close the online encyclopedia because of lacklustre sales and failing revenue. It was also announced that the articles would not be given to the Wikimedia Foundation, with chief editor Petter Henriksen stating: "It is important that the people behind the articles remain visible". In 2011, the foundations Fritt Ord and Sparebankstiftelsen DNB acquired the encyclopedia, hired Anne Marit Godal as the new chief editor, and established a new organisation, assisted by the Norwegian Academy of Science and Letters and the Norwegian Non-Fiction Writers and Translators Association. In 2014 Foreningen Store norske leksikon ('the Great Norwegian Encyclopedia Association') was established; members of the association are Norwegian universities and other non-profit organisations. In 2016 Erik Bolstad became the new chief editor. As of 2019, the SNL has around 200,000 articles online, updated by approximately 800 affiliated academics. The SNL accepts contributions from users, but all changes to the articles are verified by a topic expert before publication.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quadrisecant** Quadrisecant: In geometry, a quadrisecant or quadrisecant line of a space curve is a line that passes through four points of the curve. This is the largest possible number of intersections that a generic space curve can have with a line, and for such curves the quadrisecants form a discrete set of lines. Quadrisecants have been studied for curves of several types: Knots and links in knot theory, when nontrivial, always have quadrisecants, and the existence and number of quadrisecants has been studied in connection with knot invariants including the minimum total curvature and the ropelength of a knot. Quadrisecant: The number of quadrisecants of a non-singular algebraic curve in complex projective space can be computed by a formula derived by Arthur Cayley. Quadrisecants of arrangements of skew lines touch subsets of four lines from the arrangement. They are associated with ruled surfaces and the Schläfli double six configuration. Definition and motivation: A quadrisecant is a line that intersects a curve, surface, or other set in four distinct points. It is analogous to a secant line, a line that intersects a curve or surface in two points, and a trisecant, a line that intersects a curve or surface in three points. Compared to secants and trisecants, quadrisecants are especially relevant for space curves, because they have the largest possible number of intersection points of a line with a generic curve. In the plane, a generic curve can be crossed arbitrarily many times by a line; for instance, small generic perturbations of the sine curve are crossed infinitely often by the horizontal axis. In contrast, if an arbitrary space curve is perturbed by a small distance to make it generic, there will be no lines through five or more points of the perturbed curve. Nevertheless, any quadrisecants of the original space curve will remain present nearby in its perturbation. One explanation for this phenomenon is visual: looking at a space curve from far away, the space of such points of view can be described as a two-dimensional sphere, one point corresponding to each direction. Pairs of strands of the curve may appear to cross from all of these points of view, or from a two-dimensional subset of them. Three strands will form a triple crossing when the point of view lies on a trisecant, and four strands will form a quadruple crossing from a point of view on a quadrisecant. Each constraint that the crossing of a pair of strands lies on another strand reduces the number of degrees of freedom by one (for a generic curve), so the points of view on trisecants form a one-dimensional (continuously infinite) subset of the sphere, while the points of view on quadrisecants form a zero-dimensional (discrete) subset. C. T. C. Wall writes that the fact that generic space curves are crossed at most four times by lines is "one of the simplest theorems of the kind", a model case for analogous theorems on higher-dimensional transversals. Additionally, for generic space curves, the quadrisecants form a discrete set of lines, in contrast to the trisecants, which, when they occur, form continuous families of lines. Depending on the properties of the curve, it may have no quadrisecants, finitely many, or infinitely many. These considerations make it of interest to determine conditions for the existence of quadrisecants, or to find bounds on their number in various special cases, such as knotted curves, algebraic curves, or arrangements of lines.
For special classes of curves: Knots and links In three-dimensional Euclidean space, every nontrivial tame knot or link has a quadrisecant. Originally established in the case of knotted polygons and smooth knots by Erika Pannwitz, this result was extended to knots in suitably general position and links with nonzero linking number, and later to all nontrivial tame knots and links. Pannwitz proved more strongly that, for a locally flat disk having the knot as its boundary, the number of singularities of the disk can be used to construct a lower bound on the number of distinct quadrisecants. The existence of at least one quadrisecant follows from the fact that any such disk must have at least one singularity. Morton & Mond (1982) conjectured that the number of distinct quadrisecants of a given knot is always at least n(n−1)/2, where n is the crossing number of the knot. Counterexamples to this conjecture have since been discovered. Two-component links have quadrisecants in which the points on the quadrisecant appear in alternating order between the two components, and nontrivial knots have quadrisecants in which the four points, ordered cyclically as abcd on the knot, appear in order acbd along the quadrisecant. The existence of these alternating quadrisecants can be used to derive the Fáry–Milnor theorem, a lower bound on the total curvature of a nontrivial knot. Quadrisecants have also been used to find lower bounds on the ropelength of knots. G. T. Jin and H. S. Kim conjectured that, when a knotted curve K has finitely many quadrisecants, K can be approximated with an equivalent polygonal knot with its vertices at the points where the quadrisecants intersect K, in the same order as they appear on K. However, their conjecture is false: in fact, for every knot type, there is a realization for which this construction leads to a self-intersecting polygon, and another realization where this construction produces a knot of a different type. For special classes of curves: It has been conjectured that every wild knot has an infinite number of quadrisecants. For special classes of curves: Algebraic curves Arthur Cayley derived a formula for the number of quadrisecants of an algebraic curve in three-dimensional complex projective space, as a function of its degree and genus. For a curve of degree d and genus g, the number of quadrisecants is $\frac{(d-2)(d-3)^2(d-4)}{12}-\frac{g(d^2-7d+13-g)}{2}$. This formula assumes that the given curve is non-singular; adjustments may be necessary if it has singular points. For special classes of curves: Skew lines In three-dimensional Euclidean space, every set of four skew lines in general position has either two quadrisecants (also in this context called transversals) or none. Any three of the four lines determine a hyperboloid, a doubly ruled surface in which one of the two sets of ruled lines contains the three given lines, and the other ruling consists of trisecants to the given lines. If the fourth of the given lines pierces this surface, it has two points of intersection, because the hyperboloid is defined by a quadratic equation. The two trisecants of the ruled surface, through these two points, form two quadrisecants of the given four lines. On the other hand, if the fourth line is disjoint from the hyperboloid, then there are no quadrisecants.
In spaces with complex number coordinates rather than real coordinates, four skew lines always have exactly two quadrisecants. The quadrisecants of sets of lines play an important role in the construction of the Schläfli double six, a configuration of twelve lines intersecting each other in 30 crossings. If five lines ai (for i=1,2,3,4,5) are given in three-dimensional space, such that all five are intersected by a common line b6 but are otherwise in general position, then each of the five quadruples of the lines ai has a second quadrisecant bi, and the five lines bi formed in this way are all intersected by a common line a6. These twelve lines and the 30 intersection points aibj form the double six. An arrangement of n complex lines with a given number of pairwise intersections and otherwise skew may be interpreted as an algebraic curve with degree n and with genus determined from its number of intersections, and Cayley's aforementioned formula used to count its quadrisecants. The same result as this formula can also be obtained by classifying the quadruples of lines by their intersections, counting the number of quadrisecants for each type of quadruple, and summing over all quadruples of lines in the given set.
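A short Python helper evaluates Cayley's count as quoted in the algebraic-curves section above; note that the closed form there is a reconstruction from standard references, so the values below should be read as illustrative:

```python
def cayley_quadrisecants(d: int, g: int) -> int:
    """Number of quadrisecants of a non-singular space curve of degree d
    and genus g, per the (reconstructed) Cayley formula quoted above."""
    return ((d - 2) * (d - 3) ** 2 * (d - 4)) // 12 \
        - (g * (d * d - 7 * d + 13 - g)) // 2

# A twisted cubic (d=3, g=0) and a smooth elliptic quartic (d=4, g=1)
# have no quadrisecants, consistent with classical facts about curves
# lying on quadric surfaces; a generic rational sextic (d=6, g=0) has six.
for d, g in [(3, 0), (4, 1), (6, 0)]:
    print(f"d={d}, g={g}: {cayley_quadrisecants(d, g)} quadrisecants")
```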
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scleral buckle** Scleral buckle: A scleral buckle is one of several ophthalmologic procedures that can be used to repair a retinal detachment. Retinal detachments are usually caused by retinal tears, and a scleral buckle can be used to close the retinal break, both for acute and chronic retinal detachments. Scleral buckles come in many shapes and sizes. A silicone sponge (with air-filled cells) is a cylindrical element that comes in various sizes. An encircling band is a thin silicone band sewn around the circumference of the sclera of the eye. A solid silicone grooved tyre element is also used. Buckles are often placed under a band to create a dimple on the eye wall. The scleral buckle is secured around the eyeball under the conjunctiva. This moves the wall of the eye closer to the detached retina. This alteration in the relationships of the tissues seems to allow the fluid which has formed under the retina to be pumped out, and the retina to re-attach. The physics or physiology of this process is not fully understood. Retinal detachment surgery usually also involves the use of cryotherapy or laser photocoagulation. The laser or cryotherapy forms a permanent adhesion around the retinal break and prevents further accumulation of fluid and re-detachment. Among surgeons, the use of a scleral buckle is a source of debate only for complex retinal detachment surgery, and research has been conducted to compare the safety and effectiveness of scleral buckling, pars plana vitrectomy with a scleral buckle, and pars plana vitrectomy without a scleral buckle. Scleral buckles are placed using local or general anesthesia, often as outpatient procedures. In the majority of treatments the buckle is left in place permanently, although in some instances the buckle can be removed after the retina heals. The buckle may also be removed in the event of infection. Scleral buckle: A link between scleral buckles and Adie syndrome may exist. Results from three randomized controlled trials of 274 patients comparing retinal detachment outcomes from pneumatic retinopexy versus scleral buckle found some evidence suggesting that scleral buckle was less likely to result in a recurrence of retinal detachment than pneumatic retinopexy, but the overall evidence is of low quality and insufficient.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Modchip** Modchip: A modchip (short for modification chip) is a small electronic device used to alter or disable artificial restrictions of computers or entertainment devices. Modchips are mainly used in video game consoles, but also in some DVD or Blu-ray players. They introduce various modifications to the host system's function, including the circumvention of region coding, digital rights management, and copy protection checks, for the purpose of using media intended for other markets, copied media, or unlicensed third-party (homebrew) software. Function and construction: Modchips operate by replacing or overriding a system's protection hardware or software. They achieve this by either exploiting existing interfaces in an unintended or undocumented manner, or by actively manipulating the system's internal communication, sometimes to the point of re-routing it to substitute parts provided by the modchip. Function and construction: Most modchips consist of one or more integrated circuits (microcontrollers, FPGAs, or CPLDs), often complemented with discrete parts, usually packaged on a small PCB to fit within the console system it is designed for. Although there are modchips that can be reprogrammed for different purposes, most modchips are designed to work within only one console system or even only one specific hardware version. Function and construction: Modchips typically require some degree of technical skill to install, since they must be connected to a console's circuitry, most commonly by soldering wires to select traces or chip legs on a system's circuit board. Some modchips allow for installation by directly soldering the modchip's contacts to the console's circuit ("quicksolder"), by the precise positioning of electrical contacts ("solderless"), or, in rare cases, by plugging them into a system's internal or external connector. Function and construction: Memory cards or cartridges that offer functions similar to modchips work on a completely different concept, namely by exploiting flaws in the system's handling of media. Such devices are not referred to as modchips, even if they are frequently traded under this umbrella term. Function and construction: The diversity of hardware modchips operate on and the varying methods they use mean that while modchips are often used for the same goal, they may work in vastly different ways, even if they are intended for use on the same console. Some of the first modchips for the Nintendo Wii, known as drive chips, modify the behaviour and communication of the optical drive to bypass security. On the Xbox 360, a common modchip took advantage of the fact that short periods of instability in the CPU could be used to fairly reliably lead it to incorrectly compare security signatures. The precision required in this attack meant that the modchip had to make use of a CPLD. Other modchips, such as the XenoGC and clones for the GameCube, invoke a debug mode where security measures are reduced or absent (in which case a stock Atmel AVR microcontroller was used). A more recent innovation is the optical disk drive emulator (ODDE), which replaces the optical disk drive and allows data to come from another source, bypassing the need to circumvent any security. These often make use of FPGAs to enable them to accurately emulate the timing and performance characteristics of the optical drives. History: Most cartridge-based console systems did not have modchips produced for them.
They usually implemented copy protection and regional lockout with game cartridges, at both the hardware and software levels. Converters or passthrough devices have been used to circumvent the restrictions, while flash memory devices (game backup devices) were widely adopted in later years to copy game media. Early in the transition from solid-state to optical media, CD-based console systems did not have regional market segmentation or copy protection measures, due to the rarity and high cost of user-writable media at the time. History: Modchips started to surface with the PlayStation system, due to the increasing availability and affordability of CD writers and the increasing sophistication of DRM protocols. At the time, a modchip's sole purpose was to allow the use of imported and copied game media. History: Today, modchips are available for practically every current console system, often in a great number of variations. In addition to circumventing regional lockout and copy protection mechanisms, modern modchips may introduce more sophisticated modifications to the system, such as allowing the use of user-created software (homebrew), expanding the hardware capabilities of the host system, or even installing an alternative operating system to completely re-purpose the host system (e.g. for use as a home theater PC). Anti-modchip measures: Most modchips open the system to copied media; therefore, the availability of a modchip for a console system is undesirable for console manufacturers. They react by removing the intrusion points exploited by a modchip from subsequent hardware or software versions, changing the PCB layout the modchips are customized for, or by having the firmware or software detect an installed modchip and refuse operation as a consequence. Since modchips often hook into fundamental functions of the host system that cannot be removed or adjusted, these measures may not completely prevent a modchip from functioning but only prompt an adjustment of its installation process or programming, e.g. to include measures to make it undetectable ("stealth") to its host system. Anti-modchip measures: With the advent of online services used by video game consoles, some manufacturers have exercised their rights under the service's license agreement to ban consoles equipped with modchips from using those services. In an effort to dissuade modchip creation, some console manufacturers included the option to run homebrew software or even an alternative operating system on their consoles. However, some of these features have been withdrawn at a later date. An argument can be made that a console system remains largely untouched by modchips as long as its manufacturer provides an official way of running unlicensed third-party software. Legality: One of the most prominent functions of many modchips—the circumvention of copy protection mechanisms—is outlawed by many countries' copyright laws, such as the Digital Millennium Copyright Act in the United States, the European Copyright Directive and its various implementations by the EU member countries, and the Australian Copyright Act. Other laws may apply to the many diversified functions of a modchip; for example, Australian law specifically allows the circumvention of region coding. Legality: The ambiguity of applicable law, its nonuniform interpretation by the courts, and constant profound changes and amendments to copyright law do not allow for a definitive statement on the legality of modchips.
A modchip's legality under a country's legislation may only be individually asserted in court. Legality: Most of the very few cases that have been brought before a court ended with the conviction of the modchip merchant or manufacturer under the respective country's anti-circumvention laws. A small number of cases in the United Kingdom and Australia were dismissed under the argument that a system's copy protection mechanism would not be able to prevent the actual infringement of copyright—the actual process of copying game media—and therefore cannot be considered an effective technical protection measure protected by anti-circumvention laws. In 2006, Australian copyright law was amended to effectively close this legal loophole. In a 2017 lawsuit against a retailer, a Canadian court ruled in favor of Nintendo under anti-circumvention provisions in Canadian copyright law, which prohibit any breaching of technical protection measures. Even though the retailer claimed the products could be used for homebrew, thus asserting exemptions for maintaining interoperability, the court ruled that because Nintendo offers development kits for its platforms, interoperability could be achieved without breaching TPMs, and thus the defence was invalid. Alternatives: An alternative to installing a modchip is softmodding a device. A softmodded device does not need any additional hardware pieces permanently installed inside. Instead, the software of the device or of one of its internal parts is modified in order to change the device's behaviour.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded