**Apache Attic** Apache Attic: The Apache Attic is a project of the Apache Software Foundation that provides processes to make it clear when an Apache project has reached its end of life. The Attic project was created in November 2008, and the resources of retired projects are retained there. Projects may not stay in the Attic forever: e.g. Apache XMLBeans is now a project of Apache POI, but was previously in the Attic from July 2013 until June 2018. Sub-Projects: This is a (non-exhaustive) list of Apache Attic projects: Avalon: Apache Avalon was a computer software framework that provided a reusable component framework for container (server) applications. Apex: Apache Apex was a YARN-native platform that unified stream and batch processing. AxKit: Apache AxKit was an XML publishing framework written in Perl. Beehive: Apache Beehive was a Java application framework designed to make the development of Java EE based applications quicker and easier. C++ Standard Library: A set of classes and functions written in the core language (code name stdcxx). Click: Apache Click was a page- and component-oriented web application framework for Java EE, built on top of the Java Servlet API. Crimson: Crimson was a Java XML parser which supported XML 1.0 through Java API for XML Processing (JAXP) 1.1, SAX 2.0, SAX2 Extensions version 1.0, and the DOM Level 2 Core Recommendation. Excalibur: The Apache Excalibur project produced a set of libraries for component-based programming in the Java language. Harmony: Apache Harmony was an open-source, free Java implementation. HiveMind: Apache HiveMind was a top-level software project for a framework written in Java, taking the form of a services and configuration microkernel. iBATIS: iBATIS was a persistence framework which automated the mapping between SQL databases and objects in Java, .NET, and Ruby on Rails. Jakarta: The Jakarta Project created and maintained open source software for the Java platform. 
Cactus: Cactus was a simple test framework for unit testing server-side Java code (Servlets, EJBs, Tag libs, ...) from the Jakarta Project. ECS: ECS (Element Construction Set) was a Java API for generating elements for any of a variety of markup languages, such as HTML 4.0 and XML. ORO: ORO was a set of text-processing Java classes that provided Perl5-compatible regular expressions, AWK-like regular expressions, glob expressions, and utility classes for performing substitutions, splits, filtering filenames, etc. Regexp: Regexp was a pure Java regular expression package. Slide: Slide was an open-source content management system from the Jakarta Project, written in Java and implementing the WebDAV protocol. Taglibs: Taglibs was a large collection of JSP tag libraries. ODE: ODE was a Java-based workflow engine to manage business processes expressed in the Web Services Business Process Execution Language (WS-BPEL). Ojb: Apache ObJectRelationalBridge (OJB) was an object/relational mapping tool that allowed transparent persistence for Java objects against relational databases. Quetzalcoatl: Quetzalcoatl was a project charged with the creation and maintenance of open-source software related to mod_python and the Python programming language. Shale: Shale was a web application framework fundamentally based on JavaServer Faces. Shindig: Shindig was an OpenSocial container; it provided the code to render gadgets, proxy requests, and handle REST and RPC requests. Stratos: Stratos was a highly extensible Platform-as-a-Service (PaaS) framework that helped run Apache Tomcat, PHP, and MySQL applications, and could be extended to support many more environments on all major cloud infrastructures. Xang: Apache Xang was an XML web framework that aggregated multiple data sources, made that data URL addressable, and defined custom methods to access that data. Xindice: Apache Xindice was a native XML database. 
Wink: Apache Wink was an open source framework that enabled the development and consumption of REST-style web services.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bancroft's sign** Bancroft's sign: Bancroft's sign, also known as Moses' sign, is a clinical sign found in patients with deep vein thrombosis of the lower leg involving the posterior tibial veins. The sign is positive if pain is elicited when the calf muscle is compressed forwards against the tibia, but not when the calf muscle is compressed from side to side. Like other clinical signs for deep vein thrombosis, such as Homans sign and Lowenberg's sign, this sign is neither sensitive nor specific for the presence of thrombosis.
**Customer edge router** Customer edge router: The customer edge router (CE) is the router at the customer premises that is connected to the provider edge router of a service provider IP/MPLS network. The CE router peers with the provider edge router (PE) and exchanges routes with the corresponding VRF inside the PE. Routing between them can be static or dynamic, using an interior gateway protocol such as OSPF or an exterior gateway protocol such as BGP. Customer edge router: The customer edge router can be owned either by the customer or by the service provider.
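The CE-PE peering described above can be pictured with a minimal configuration sketch. This is a Cisco IOS-style fragment that is not from the source; the IP addresses, AS numbers, and the VRF name CUSTOMER-A are illustrative assumptions, shown only to make the CE/PE/VRF roles concrete.

```
! CE router (customer premises) -- illustrative addresses and AS numbers
router bgp 65001
 neighbor 192.0.2.1 remote-as 64512      ! eBGP session toward the PE
 address-family ipv4
  network 10.10.0.0 mask 255.255.0.0     ! advertise a customer prefix to the PE
 exit-address-family

! PE router -- the CE-facing session terminates inside the customer's VRF
router bgp 64512
 address-family ipv4 vrf CUSTOMER-A
  neighbor 192.0.2.2 remote-as 65001     ! routes learned here populate the VRF
 exit-address-family
```

In the static alternative mentioned in the text, the BGP session would be replaced by static routes on both sides (e.g. a VRF-scoped static route on the PE and a default route on the CE).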
**Bolus (digestion)** Bolus (digestion): In digestion, a bolus (from Latin bolus, "ball") is a ball-like mixture of food and saliva that forms in the mouth during the process of chewing (which is largely an adaptation for plant-eating mammals). It has the same color as the food being eaten, and the saliva gives it an alkaline pH. Under normal circumstances, the bolus is swallowed, and travels down the esophagus to the stomach for digestion.
**OR4C16** OR4C16: Olfactory receptor 4C16 is a protein that in humans is encoded by the OR4C16 gene. Olfactory receptors interact with odorant molecules in the nose to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
**Pelvic fascia** Pelvic fascia: The pelvic fasciae are the fascia of the pelvis and can be divided into: (a) the fascial sheaths of the Obturator internus muscle (fascia of the Obturator internus), of the Piriformis muscle (fascia of the Piriformis), and of the pelvic floor; and (b) the fascia associated with the organs of the pelvis. Structure: Fascia of pelvic organs Pelvic fascia extends to cover the organs within the pelvis. It is attached to the fascia that runs along the pelvic floor along the tendinous arch. The fascia which covers the pelvic organs can be divided according to the organs that are covered: The front is known as the "vesical layer". It forms the anterior and lateral ligaments of the bladder. In males, its middle lamina crosses the floor of the pelvis between the rectum and vesiculæ seminales as the rectovesical septum; in the female this is perforated by the cervix and is named the transverse cervical ligament. At the back, the fascia passes to the side of the rectum; it forms a loose sheath for the rectum, but is firmly attached around the anal canal. This portion is known as the "rectal layer". Fascia of the pelvic floor Superior The part of the pelvic fascia on the pelvic floor covers both surfaces of the Levatores ani muscle. The layer covering the upper surface of the pelvic diaphragm follows, above, the line of origin of the Levator ani and is therefore somewhat variable. In front it is attached to the back of the pubic symphysis about 2 cm above its lower border. It can then be traced laterally across the back of the superior ramus of the pubis for a distance of about 1.25 cm, when it reaches the obturator fascia. It is attached to this fascia along a line which pursues a somewhat irregular course to the spine of the ischium. The irregularity of this line arises because the origin of the Levator ani, which in lower forms is from the pelvic brim, is in man lower down, on the obturator fascia. 
Tendinous fibers of origin of the muscle are therefore often found extending up toward, and in some cases reaching, the pelvic brim, and on these the fascia is carried. Structure: Inferior The diaphragmatic part of the pelvic fascia covers both surfaces of the Levatores ani. The inferior layer is known as the anal fascia. It is attached above to the obturator fascia along the line of origin of the Levator ani, while below it is continuous with the superior fascia of the urogenital diaphragm and with the fascia on the Sphincter ani internus.
**Winaero** Winaero: Winaero is a website hosting freeware tweaking tools for Microsoft Windows. It is made by a Russian software developer, Sergey Tkachenko. The website offers 50+ freeware tools for modifying the behavior of Microsoft Windows. Notable among these is Skip Metro Suite, which allows skipping the Windows 8 Start screen, booting straight to the Windows desktop, and customizing the Modern UI hot corners. Other notable tools include Ribbon Disabler, which allows disabling the Explorer ribbon interface, and Personalization Panel, which replicates the full personalization options restricted in low-end editions of Windows. The latest addition is Winaero Tweaker, which unifies most of the tools under a single tool to modify hidden Windows settings. Winaero: The website also regularly gives tips on tweaking Windows through its blog and offers free themes, visual styles, and HD wallpapers to customize Windows. Winaero posts daily topics ranging from tweaks and tips for Windows, to troubleshooting guides, to free visual styles and theme packs. History: Winaero started in July 2011 as a simple English download page for Winreview, Tkachenko's original Russian project. Winreview later ceased to operate and Winaero became the sole focus of the developer. From August 2012, Winaero started to publish English articles on its blog. Recognition: Winaero's software utilities have been recognized by WinSuperSite, Lifehacker, PCWorld, Engadget, and several other reputable news sites and blogs. A number of Winaero tools have been featured on these sites.
**W. Wallace McDowell Award** W. Wallace McDowell Award: The W. Wallace McDowell Award is awarded by the IEEE Computer Society for outstanding theoretical, design, educational, practical, or related innovative contributions that fall within the scope of Computer Society interest. It is the highest technical award made solely by the IEEE Computer Society, where selection of the awardee is based on the "highest level of technical accomplishment and achievement". The IEEE Computer Society (with over 85,000 members from every field of computing) is "dedicated to advancing the theory, practice, and application of computer and information processing technology." Another award considered to be the "most prestigious technical award in computing" is the A. M. Turing Award, awarded by the Association for Computing Machinery (ACM) and popularly referred to as "computer science's equivalent of the Nobel Prize". The W. Wallace McDowell Award is sometimes popularly referred to as the "IT Nobel". The award is named after W. Wallace McDowell, who was director of engineering at IBM during the development of the landmark IBM 701. McDowell was responsible for the transition from electro-mechanical techniques to electronics, and for the subsequent transition to solid-state devices. The first recipient, in 1966, was Fernando J. Corbató, then of the Massachusetts Institute of Technology, a prominent American computer scientist and pioneer in the development of time-sharing operating systems. The second recipient, in 1967, was John Backus, who received the McDowell Award for the development of FORTRAN and the syntactical forms incorporated in ALGOL; FORTRAN was for years one of the best known and most used programming systems in the world.
**ACS Applied Energy Materials** ACS Applied Energy Materials: ACS Applied Energy Materials is a monthly peer-reviewed scientific journal that was established in 2018 by the American Chemical Society. It covers aspects of materials, engineering, chemistry, physics, and biology relevant to sustainable applications in energy conversion and storage. The editor-in-chief is Kirk S. Schanze. According to the Journal Citation Reports, the journal has a 2021 impact factor of 6.4. Scope: ACS Applied Energy Materials publishes letters, articles, reviews, spotlights on applications, forum articles, and comments across a given subject area. Specific materials of interest include, but are not limited to: fuel cells, supercapacitors, thermoelectrics, photovoltaics, and photo-electrosynthesis cells.
**Circuit Breakers (video game)** Circuit Breakers (video game): Circuit Breakers is a racing game developed by Supersonic Software and published by Mindscape for the PlayStation. It is the sequel to Supersonic Racers. It was the first (and possibly only) PlayStation title ever to receive expansion packs, through demo discs released with Official UK PlayStation Magazine. A remake for the PlayStation 2 was released in Europe only, under the name Circuit Blasters, in 2005. Reception: The game received average reviews according to the review aggregation website GameRankings. Edge gave it a favourable review over a month before it was released in Europe. Next Generation said, "If you possess a multitap and three willing friends, this game should be at the very top of your 'must buy' list." However, GameSpot gave the European version a negative review a few months before it was released Stateside.
**Eukaryotic transcription** Eukaryotic transcription: Eukaryotic transcription is the elaborate process that eukaryotic cells use to copy genetic information stored in DNA into units of transportable complementary RNA. Gene transcription occurs in both eukaryotic and prokaryotic cells. Unlike the prokaryotic RNA polymerase, which initiates the transcription of all different types of RNA, RNA polymerase in eukaryotes (including humans) comes in three variations, each transcribing a different type of gene. A eukaryotic cell has a nucleus that separates the processes of transcription and translation. Eukaryotic transcription occurs within the nucleus, where DNA is packaged into nucleosomes and higher order chromatin structures. The complexity of the eukaryotic genome necessitates a great variety and complexity of gene expression control. Eukaryotic transcription: Eukaryotic transcription proceeds in three sequential stages: initiation, elongation, and termination. The RNAs transcribed serve diverse functions. For example, structural components of the ribosome are transcribed by RNA polymerase I. Protein coding genes are transcribed by RNA polymerase II into messenger RNAs (mRNAs) that carry the information from DNA to the site of protein synthesis. More abundantly made are the so-called non-coding RNAs, which account for the large majority of the transcriptional output of a cell and perform a variety of important cellular functions. RNA polymerase: Eukaryotes have three nuclear RNA polymerases, each with distinct roles and properties. RNA polymerase: RNA polymerase I (Pol I) catalyses the transcription of all rRNA genes except 5S. These rRNA genes are organised into a single transcriptional unit and are transcribed into a continuous transcript. This precursor is then processed into three rRNAs: 18S, 5.8S, and 28S. 
The transcription of rRNA genes takes place in a specialised structure of the nucleus called the nucleolus, where the transcribed rRNAs are combined with proteins to form ribosomes. RNA polymerase II (Pol II) is responsible for the transcription of all mRNAs, some snRNAs, siRNAs, and all miRNAs. Many Pol II transcripts exist transiently as single-stranded precursor RNAs (pre-RNAs) that are further processed to generate mature RNAs. For example, precursor mRNAs (pre-mRNAs) are extensively processed before exiting into the cytoplasm through the nuclear pore for protein translation. RNA polymerase: RNA polymerase III (Pol III) transcribes small non-coding RNAs, including tRNAs, 5S rRNA, U6 snRNA, SRP RNA, and other stable short RNAs such as ribonuclease P RNA. RNA polymerase: RNA polymerases I, II, and III contain 14, 12, and 17 subunits, respectively. All three eukaryotic polymerases have five core subunits that exhibit homology with the β, β′, αI, αII, and ω subunits of E. coli RNA polymerase. An identical ω-like subunit (RPB6) is used by all three eukaryotic polymerases, while the same α-like subunits are used by Pol I and III. The three eukaryotic polymerases share four other common subunits among themselves. The remaining subunits are unique to each RNA polymerase. The additional subunits found in Pol I and Pol III relative to Pol II are homologous to Pol II transcription factors. Crystal structures of RNA polymerases I and II provide an opportunity to understand the interactions among the subunits and the molecular mechanism of eukaryotic transcription in atomic detail. RNA polymerase: The carboxyl-terminal domain (CTD) of RPB1, the largest subunit of RNA polymerase II, plays an important role in bringing together the machinery necessary for the synthesis and processing of Pol II transcripts. 
Long and structurally disordered, the CTD contains multiple repeats of heptapeptide sequence YSPTSPS that are subject to phosphorylation and other posttranslational modifications during the transcription cycle. These modifications and their regulation constitute the operational code for the CTD to control transcription initiation, elongation and termination and to couple transcription and RNA processing. Initiation: The initiation of gene transcription in eukaryotes occurs in specific steps. First, an RNA polymerase along with general transcription factors binds to the promoter region of the gene to form a closed complex called the preinitiation complex. The subsequent transition of the complex from the closed state to the open state results in the melting or separation of the two DNA strands and the positioning of the template strand to the active site of the RNA polymerase. Without the need of a primer, RNA polymerase can initiate the synthesis of a new RNA chain using the template DNA strand to guide ribonucleotide selection and polymerization chemistry. However, many of the initiated syntheses are aborted before the transcripts reach a significant length (~10 nucleotides). During these abortive cycles, the polymerase keeps making and releasing short transcripts until it is able to produce a transcript that surpasses ten nucleotides in length. Once this threshold is attained, RNA polymerase passes the promoter and transcription proceeds to the elongation phase. Initiation: Eukaryotic promoters and general transcription factors Pol II-transcribed genes contain a region in the immediate vicinity of the transcription start site (TSS) that binds and positions the preinitiation complex. This region is called the core promoter because of its essential role in transcription initiation. Different classes of sequence elements are found in the promoters. 
For example, the TATA box is the highly conserved DNA recognition sequence for the TATA box binding protein, TBP, whose binding initiates transcription complex assembly at many genes. Initiation: Eukaryotic genes also contain regulatory sequences beyond the core promoter. These cis-acting control elements bind transcriptional activators or repressors to increase or decrease transcription from the core promoter. Well-characterized regulatory elements include enhancers, silencers, and insulators. These regulatory sequences can be spread over a large genomic distance, sometimes located hundreds of kilobases from the core promoters. General transcription factors are a group of proteins involved in transcription initiation and regulation. These factors typically have DNA-binding domains that bind specific sequence elements of the core promoter and help recruit RNA polymerase to the transcriptional start site. Initiation: General transcription factors for RNA polymerase II include TFIID, TFIIA, TFIIB, TFIIF, TFIIE, and TFIIH. Initiation: Assembly of the preinitiation complex For transcription, a complete set of general transcription factors and RNA polymerase needs to be assembled at the core promoter to form the ~2.5-million-dalton preinitiation complex. For example, for promoters that contain a TATA box near the TSS, recognition of the TATA box by the TBP subunit of TFIID initiates the assembly of a transcription complex. The next proteins to enter are TFIIA and TFIIB, which stabilize the DNA-TFIID complex and recruit Pol II in association with TFIIF and additional transcription factors. TFIIB serves as the bridge between the TATA-bound TBP and the RNA polymerase. It also helps to place the active centre of the polymerase in the correct position to initiate transcription. One of the last transcription factors to be recruited to the preinitiation complex is TFIIH, which plays an important role in promoter melting and escape. 
Initiation: Promoter melting and open complex formation For Pol II-transcribed genes, and unlike bacterial RNA polymerase, promoter melting requires hydrolysis of ATP and is mediated by TFIIH. TFIIH is a ten-subunit complex with both ATPase and protein kinase activities. While the upstream promoter DNA is held in a fixed position by TFIID, TFIIH pulls downstream double-stranded DNA into the cleft of the polymerase, driving the separation of DNA strands and the transition of the preinitiation complex from the closed to the open state. TFIIB aids in open complex formation by binding the melted DNA and stabilizing the transcription bubble. Initiation: Abortive initiation Once the initiation complex is open, the first ribonucleotide is brought into the active site to initiate the polymerization reaction in the absence of a primer. This generates a nascent RNA chain that forms a hetero-duplex with the template DNA strand. However, before entering the elongation phase, the polymerase may terminate prematurely and release a short, truncated transcript. This process is called abortive initiation. Many cycles of abortive initiation may occur before the transcript grows to sufficient length to promote polymerase escape from the promoter. Throughout abortive initiation cycles, RNA polymerase remains bound to the promoter and pulls downstream DNA into its catalytic cleft in a scrunching kind of motion. Initiation: Promoter escape When a transcript attains the threshold length of ten nucleotides, it enters the RNA exit channel. The polymerase breaks its interactions with the promoter elements and with any regulatory proteins associated with the initiation complex that it no longer needs. Promoter escape in eukaryotes requires ATP hydrolysis and, in the case of Pol II, phosphorylation of the CTD. Meanwhile, the transcription bubble collapses down to 12-14 nucleotides, providing the kinetic energy required for the escape. 
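The template-directed RNA synthesis described above, in which the template DNA strand guides ribonucleotide selection, can be reduced to a toy sketch. This is an illustrative model only, not from the source; the function name and the example sequences are invented for the demonstration.

```python
# Toy model of template-guided RNA synthesis (illustrative only).
# The polymerase reads the template strand 3'->5' and builds RNA 5'->3';
# here the template is given 3'->5', so the output reads 5'->3' directly.
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5: str) -> str:
    """Return the RNA complementary to the DNA template strand."""
    return "".join(PAIRING[base] for base in template_3to5)

# The RNA matches the coding (non-template) strand, with U in place of T.
print(transcribe("TACGGT"))  # -> AUGCCA
```

The sketch captures only the base-pairing rule; the steps the text emphasizes (ATP-dependent melting, abortive cycles, the ~10-nucleotide escape threshold) are deliberately left out.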
Elongation: After escaping the promoter and shedding most of the transcription factors for initiation, the polymerase acquires new factors for the next phase of transcription: elongation. Transcription elongation is a processive process. Double-stranded DNA that enters from the front of the enzyme is unzipped to expose the template strand for RNA synthesis. For every DNA base pair separated by the advancing polymerase, one hybrid RNA:DNA base pair is immediately formed. The DNA strands and the nascent RNA chain exit from separate channels; the two DNA strands reunite at the trailing end of the transcription bubble while the single-stranded RNA emerges alone. Elongation: Elongation factors Among the proteins recruited to the polymerase are elongation factors, so called because they stimulate transcription elongation. There are different classes of elongation factors. Some factors can increase the overall rate of transcribing, some can help the polymerase through transient pausing sites, and some can assist the polymerase to transcribe through chromatin. One of the elongation factors, P-TEFb, is particularly important. P-TEFb phosphorylates the second residue (Ser-2) of the CTD repeats (YSPTSPS) of the bound Pol II. P-TEFb also phosphorylates and activates SPT5 and TAT-SF1. SPT5 is a universal transcription factor that helps recruit the 5'-capping enzyme to Pol II with a CTD phosphorylated at Ser-5. TAT-SF1 recruits components of the RNA splicing machinery to the Ser-2-phosphorylated CTD. P-TEFb also helps suppress transient pausing of the polymerase when it encounters certain sequences immediately following initiation. Elongation: Transcription fidelity Transcription fidelity is achieved through multiple mechanisms. RNA polymerases select the correct nucleoside triphosphate (NTP) substrate to prevent transcription errors. Only the NTP which correctly base pairs with the coding base in the DNA is admitted to the active center. 
RNA polymerase performs two known proofreading functions to detect and remove misincorporated nucleotides: pyrophosphorolytic editing and hydrolytic editing. The former removes the incorrectly inserted ribonucleotide by a simple reversal of the polymerization reaction, while the latter involves backtracking of the polymerase and cleavage of a segment of error-containing RNA product. Elongation factor TFIIS (InterPro: IPR006289; TCEA1, TCEA2, TCEA3) stimulates an inherent ribonuclease activity in the polymerase, allowing the removal of misincorporated bases through limited local RNA degradation. Note that all reactions (phosphodiester bond synthesis, pyrophosphorolysis, phosphodiester bond hydrolysis) are performed by RNA polymerase using a single active center. Elongation: Pausing, poising, and backtracking Transcription elongation is not a smooth ride along double-stranded DNA, as RNA polymerase undergoes extensive co-transcriptional pausing during transcription elongation. In general, RNA polymerase II does not transcribe through a gene at a constant pace. Rather, it pauses periodically at certain sequences, sometimes for long periods of time, before resuming transcription. This pausing is especially pronounced at nucleosomes, and arises in part through the polymerase entering a transcriptionally incompetent backtracked state. The duration of these pauses ranges from seconds to minutes or longer, and exit from long-lived pauses can be promoted by elongation factors such as TFIIS. This pausing is also sometimes used for proofreading; here the polymerase backs up, erases some of the RNA it has already made, and has another go at transcription. In extreme cases, for example when the polymerase encounters a damaged nucleotide, it comes to a complete halt. More often, an elongating polymerase is stalled near the promoter. 
Promoter-proximal pausing during early elongation is a commonly used mechanism for regulating genes poised to be expressed rapidly or in a coordinated fashion. Pausing is mediated by a complex called NELF (negative elongation factor) in collaboration with DSIF (DRB-sensitivity-inducing factor, containing SPT4/SPT5). The blockage is released once the polymerase receives an activation signal, such as the phosphorylation of Ser-2 of the CTD tail by P-TEFb. Other elongation factors such as ELL and TFIIS stimulate the rate of elongation by limiting the length of time that the polymerase pauses. Elongation: RNA processing The elongating polymerase is associated with a set of protein factors required for various types of RNA processing. mRNA is capped as soon as it emerges from the RNA-exit channel of the polymerase. After capping, dephosphorylation of Ser-5 within the CTD repeats may be responsible for dissociation of the capping machinery. Further phosphorylation of Ser-2 causes recruitment of the RNA splicing machinery that catalyzes the removal of non-coding introns to generate mature mRNA. Alternative splicing expands the protein complement of eukaryotes. Just as with 5'-capping and splicing, the CTD tail is involved in recruiting the enzymes responsible for 3'-polyadenylation, the final RNA processing event that is coupled with the termination of transcription. Termination: The last stage of transcription is termination, which leads to the dissociation of the complete transcript and the release of RNA polymerase from the template DNA. The process differs for each of the three RNA polymerases. The mechanism of termination is the least understood of the three transcription stages. Termination: Factor-dependent The termination of transcription of pre-rRNA genes by Pol I is performed by a system that needs a specific transcription termination factor. The mechanism used bears some resemblance to rho-dependent termination in prokaryotes. 
Eukaryotic cells contain hundreds of ribosomal DNA repeats, sometimes distributed over multiple chromosomes. Termination of transcription occurs in the ribosomal intergenic spacer region, which contains several transcription termination sites upstream of a Pol I pausing site. Through an as yet unknown mechanism, the 3'-end of the transcript is cleaved, generating a large primary rRNA molecule that is further processed into the mature 18S, 5.8S, and 28S rRNAs. Termination: As Pol II reaches the end of a gene, two protein complexes carried by the CTD, CPSF (cleavage and polyadenylation specificity factor) and CSTF (cleavage stimulation factor), recognize the poly-A signal in the transcribed RNA. Poly-A-bound CPSF and CSTF recruit other proteins to carry out RNA cleavage and then polyadenylation. Poly-A polymerase adds approximately 200 adenines to the cleaved 3' end of the RNA without a template. The long poly-A tail is unique to transcripts made by Pol II. Termination: In the process of terminating transcription by Pol I and Pol II, the elongation complex does not dissolve immediately after the RNA is cleaved. The polymerase continues to move along the template, generating a second RNA molecule associated with the elongation complex. Two models have been proposed to explain how termination is finally achieved. The allosteric model states that when transcription proceeds through the termination sequence, it causes disassembly of elongation factors and/or assembly of termination factors that cause conformational changes of the elongation complex. The torpedo model suggests that a 5' to 3' exonuclease degrades the second RNA as it emerges from the elongation complex; the polymerase is released as the highly processive exonuclease overtakes it. An emerging view proposes a merger of these two models. Termination: Factor-independent RNA polymerase III can terminate transcription efficiently without the involvement of additional factors. 
The Pol III termination signal consists of a stretch of thymines (on the non-template strand) located within 40 bp downstream of the 3' end of mature RNAs. The poly-T termination signal pauses Pol III. Eukaryotic transcriptional control: The regulation of gene expression in eukaryotes is achieved through the interaction of several levels of control that act both locally, to turn on or off individual genes in response to a specific cellular need, and globally, to maintain a chromatin-wide gene expression pattern that shapes cell identity. Because the eukaryotic genome is wrapped around histones to form nucleosomes and higher-order chromatin structures, the substrates for the transcriptional machinery are in general partially concealed. Without regulatory proteins, many genes are expressed at low levels or not expressed at all. Transcription requires displacement of the positioned nucleosomes to enable the transcriptional machinery to gain access to the DNA. All steps in transcription are subject to some degree of regulation. Transcription initiation in particular is the primary level at which gene expression is regulated. Targeting the rate-limiting initial step is the most efficient in terms of energy costs for the cell. Transcription initiation is regulated by cis-acting elements (enhancers, silencers, insulators) within the regulatory regions of the DNA, and by sequence-specific trans-acting factors that act as activators or repressors. Gene transcription can also be regulated post-initiation by targeting the movement of the elongating polymerase. Eukaryotic transcriptional control: Global control and epigenetic regulation The eukaryotic genome is organized into a compact chromatin structure that allows only regulated access to DNA. The chromatin structure can be globally "open" and more transcriptionally permissive, or globally "condensed" and transcriptionally inactive. The former (euchromatin) is lightly packed and rich in genes under active transcription. 
The latter (heterochromatin) includes gene-poor regions such as telomeres and centromeres, but also regions of normal gene density that are transcriptionally silenced. Transcription can be silenced by histone modification (deacetylation and methylation), RNA interference, and/or DNA methylation. The gene expression patterns that define cell identity are inherited through cell division; this process is called epigenetic regulation. DNA methylation is reliably inherited through the action of maintenance methylases that modify the nascent DNA strand generated by replication. In mammalian cells, DNA methylation is the primary marker of transcriptionally silenced regions. Specialized proteins can recognize the marker and recruit histone deacetylases and methylases to re-establish the silencing. Nucleosome histone modifications may also be inherited during cell division; however, it is not clear whether they can act independently of direction by DNA methylation. Eukaryotic transcriptional control: Gene-specific activation The two main tasks of transcription initiation are to provide RNA polymerase with access to the promoter and to assemble general transcription factors with polymerase into a transcription initiation complex. Diverse mechanisms of initiating transcription by overriding inhibitory signals at the gene promoter have been identified. Eukaryotic genes have acquired extensive regulatory sequences that encompass a large number of regulator-binding sites and spread over kilobases (sometimes hundreds of kilobases), both upstream and downstream of the promoter. The regulator binding sites are often clustered together into units called enhancers. Enhancers can facilitate highly cooperative action of several transcription factors (which constitute enhanceosomes). Remote enhancers allow transcription regulation at a distance. Insulators situated between enhancers and promoters help define the genes that an enhancer can or cannot influence.
Eukaryotic transcriptional control: Eukaryotic transcriptional activators have separate DNA-binding and activating functions. Upon binding to its cis-element, an activator can recruit polymerase directly or recruit other factors needed by the transcriptional machinery. An activator can also recruit nucleosome modifiers that alter chromatin in the vicinity of the promoter and thereby help initiation. Multiple activators can work together, either by recruiting a common component, or two mutually dependent components, of the transcriptional machinery, or by helping each other bind to their DNA sites. These interactions can integrate multiple signaling inputs and produce intricate transcriptional responses to address cellular needs. Eukaryotic transcriptional control: Gene-specific repression Eukaryotic transcription repressors share some of the mechanisms used by their prokaryotic counterparts. For example, by binding to a site on DNA that overlaps with the binding site of an activator, a repressor can inhibit binding of the activator. More frequently, however, eukaryotic repressors inhibit the function of an activator by masking its activating domain, preventing its nuclear localization, promoting its degradation, or inactivating it through chemical modifications. Repressors can directly inhibit transcription initiation by binding to a site upstream of a promoter and interacting with the transcriptional machinery. Repressors can indirectly repress transcription by recruiting histone modifiers (deacetylases and methylases) or nucleosome remodeling enzymes that affect the accessibility of the DNA. Repressive histone and DNA modifications are also the basis of transcriptional silencing, which can spread along the chromatin and switch off multiple genes. Eukaryotic transcriptional control: Elongation and termination control The elongation phase starts once assembly of the elongation complex has been completed, and progresses until a termination sequence is encountered.
The post-initiation movement of RNA polymerase is the target of another class of important regulatory mechanisms. For example, the transcriptional activator Tat affects elongation rather than initiation during its regulation of HIV transcription. In fact, many eukaryotic genes are regulated by releasing a block to transcription elongation called promoter-proximal pausing. Pausing can influence chromatin structure at promoters to facilitate gene activity and lead to rapid or synchronous transcriptional responses when cells are exposed to an activation signal. Pausing is associated with the binding of two negative elongation factors, DSIF (SPT4/SPT5) and NELF, to the elongation complex. Other factors can also influence the stability and duration of the paused polymerase. Pause release is triggered by the recruitment of the P-TEFb kinase. Transcription termination has also emerged as an important area of transcriptional regulation. Termination is coupled with the efficient recycling of polymerase. The factors associated with transcription termination can also mediate gene looping and thereby determine the efficiency of re-initiation. Transcription-coupled DNA repair: When transcription is arrested by the presence of a lesion in the transcribed strand of a gene, DNA repair proteins are recruited to the stalled RNA polymerase to initiate a process called transcription-coupled repair. Central to this process is the general transcription factor TFIIH, which has ATPase activity. TFIIH causes a conformational change in the polymerase that exposes the transcription bubble trapped inside, so that the DNA repair enzymes can gain access to the lesion. Thus, RNA polymerase serves as a damage-sensing protein in the cell, targeting repair enzymes to genes that are being actively transcribed. Comparisons between prokaryotic and eukaryotic transcription: Eukaryotic transcription is more complex than prokaryotic transcription.
For instance, in eukaryotes the genetic material (DNA), and therefore transcription, is primarily localized to the nucleus, where it is separated from the cytoplasm (in which translation occurs) by the nuclear membrane. This allows for the temporal regulation of gene expression through the sequestration of the RNA in the nucleus, and allows for selective transport of mature RNAs to the cytoplasm. Bacteria do not have a distinct nucleus separating DNA from ribosomes, and mRNA is translated into protein as soon as it is transcribed. The coupling between the two processes provides an important mechanism for prokaryotic gene regulation. At the level of initiation, RNA polymerase in prokaryotes (bacteria in particular) binds strongly to the promoter region and initiates a high basal rate of transcription. No ATP hydrolysis is needed for the closed-to-open transition; promoter melting is driven by binding reactions that favor the melted conformation. Chromatin greatly impedes transcription in eukaryotes: assembly of a large multi-protein preinitiation complex is required for promoter-specific initiation, and promoter melting in eukaryotes requires hydrolysis of ATP. As a result, eukaryotic RNA polymerases exhibit a low basal rate of transcription initiation. Regulation of transcription in cancer: In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated, the gene becomes silenced. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation. Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs.
In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-expressed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dihydroxybenzenes** Dihydroxybenzenes: In organic chemistry, dihydroxybenzenes (benzenediols) are organic compounds in which two hydroxyl groups (−OH) are substituted onto a benzene ring (C6H6). These aromatic compounds are classed as phenols. There are three structural isomers: 1,2-dihydroxybenzene (the ortho isomer) is commonly known as catechol, 1,3-dihydroxybenzene (the meta isomer) is commonly known as resorcinol, and 1,4-dihydroxybenzene (the para isomer) is commonly known as hydroquinone. Dihydroxybenzenes: All three of these compounds are colorless to white granular solids at room temperature and pressure, but upon exposure to oxygen they may darken. All three isomers have the chemical formula C6H6O2. Dihydroxybenzenes: Similar to other phenols, the hydroxyl groups on the aromatic ring of a benzenediol are weakly acidic. Each benzenediol can lose an H+ from one of the hydroxyls to form a type of phenolate ion. The Dakin oxidation is an organic redox reaction in which an ortho- or para-hydroxylated phenyl aldehyde (−CH=O) or ketone (>C=O) reacts with hydrogen peroxide in base to form a benzenediol and a carboxylate. Overall, the carbonyl group (C=O) is oxidized, and the hydrogen peroxide is reduced.
**Catastrophe modeling** Catastrophe modeling: Catastrophe modeling (also known as cat modeling) is the process of using computer-assisted calculations to estimate the losses that could be sustained due to a catastrophic event such as a hurricane or earthquake. Cat modeling is especially applicable to analyzing risks in the insurance industry and is at the confluence of actuarial science, engineering, meteorology, and seismology. Catastrophes/ Perils: Natural catastrophes (sometimes referred to as "nat cat") that are modeled include: Hurricane (main peril is wind damage; some models can also include storm surge and rainfall) Earthquake (main peril is ground shaking; some models can also include tsunami, fire following earthquakes, liquefaction, landslide, and sprinkler leakage damage) Severe thunderstorm or severe convective storm (main sub-perils are tornado, straight-line winds and hail) Flood Extratropical cyclone (commonly referred to as European windstorm) Wildfire Winter storm. Human catastrophes include: Terrorism events Warfare Casualty/liability events Forced displacement crises Cyber data breaches Lines of business modeled: Cat modeling involves many lines of business, including: Personal property Commercial property Workers' compensation Automobile physical damage Limited liabilities Product liability Business interruption Inputs, Outputs, and Use Cases: The input into a typical cat modeling software package is information on the exposures being analyzed that are vulnerable to catastrophe risk. The exposure data can be categorized into three basic groups: Information on the site locations, referred to as geocoding data (street address, postal code, county/CRESTA zone, etc.) Information on the physical characteristics of the exposures (construction, occupation/occupancy, year built, number of stories, number of employees, etc.)
Information on the financial terms of the insurance coverage (coverage value, limit, deductible, etc.). The output of a cat model is an estimate of the losses that the model predicts would be associated with a particular event or set of events. When running a probabilistic model, the output is either a probabilistic loss distribution or a set of events that could be used to create a loss distribution; probable maximum losses ("PMLs") and average annual losses ("AALs") are calculated from the loss distribution. When running a deterministic model, losses caused by a specific event are calculated; for example, Hurricane Katrina or "a magnitude 8.0 earthquake in downtown San Francisco" could be analyzed against the portfolio of exposures. Inputs, Outputs, and Use Cases: Cat models have a variety of use cases for a number of industries, including: Insurers and risk managers use cat modeling to assess the risk in a portfolio of exposures. This might help guide an insurer's underwriting strategy or help them decide how much reinsurance to purchase. Some state departments of insurance allow insurers to use cat modeling in their rate filings to help determine how much premium their policyholders are charged in catastrophe-prone areas. Insurance rating agencies such as A. M. Best and Standard & Poor's use cat modeling to assess the financial strength of insurers that take on catastrophe risk. Reinsurers and reinsurance brokers use cat modeling in the pricing and structuring of reinsurance treaties. European insurers use cat models to derive the required regulatory capital under the Solvency II regime. Cat models are used to derive catastrophe loss probability distributions which are components of many Solvency II internal capital models. Likewise, cat bond investors, investment banks, and bond rating agencies use cat modeling in the pricing and structuring of a catastrophe bond.
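The probabilistic loss metrics described above (AAL and PML) can be illustrated with a short sketch. The year-loss table below is entirely synthetic — the event frequency and severity parameters are invented for illustration — whereas a real cat model would derive annual losses from its hazard, vulnerability, and financial modules.

```python
import random

random.seed(7)
n_years = 10_000

# Build a synthetic year-loss table: a random number of catastrophe
# events per year, each with an exponentially distributed loss.
year_losses = []
for _ in range(n_years):
    loss = 0.0
    while random.random() < 0.2:             # chance of (another) event this year
        loss += random.expovariate(1 / 5e6)  # event loss, mean $5M (made up)
    year_losses.append(loss)

# Average annual loss (AAL): the mean of the annual loss distribution.
aal = sum(year_losses) / n_years

# Probable maximum loss (PML) at a given return period, read off the
# empirical loss distribution (1-in-250 years = 99.6th percentile).
def pml(losses, return_period):
    ranked = sorted(losses)
    idx = int(len(ranked) * (1 - 1 / return_period))
    return ranked[idx]

print(f"AAL: {aal:,.0f}")
print(f"1-in-100-year PML: {pml(year_losses, 100):,.0f}")
print(f"1-in-250-year PML: {pml(year_losses, 250):,.0f}")
```

A deterministic analysis, by contrast, would skip the simulation and compute the loss for one specified scenario against the portfolio.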
Open catastrophe modeling: The Oasis Loss Modelling Framework ("LMF") is an open source catastrophe modeling platform. It was developed by a nonprofit organisation funded and owned by the insurance industry to promote open access to models and transparency. Additionally, some firms within the insurance industry are currently working with the Association for Cooperative Operations Research and Development (ACORD) to develop an industry standard for collecting and sharing exposure data.
**Propagation constraint** Propagation constraint: In database systems, a propagation constraint "details what should happen to a related table when we update a row or rows of a target table" (Paul Beynon-Davies, 2004, p.108). Tables are linked using primary key to foreign key relationships. It is possible for users to update one table in a relationship in such a way that the relationship is no longer consistent; this is known as breaking referential integrity. For example, if a table of employees includes a department number for 'Housewares', which is a foreign key to a table of departments, and a user deletes that department from the department table, then Housewares employee records would refer to a non-existent department number. Propagation constraints are methods used by relational database management systems (RDBMS) to solve this problem by ensuring that relationships between tables are preserved without error. In his database textbook, Beynon-Davies explains the three ways that an RDBMS can handle deletions of target and related tuples: Restricted Delete - the user cannot delete the target row until all rows that point to it (via foreign keys) have been deleted. This means that all Housewares employees would need to be deleted, or their departments changed, before the department could be removed from the department table. Propagation constraint: Cascades Delete - the user can delete the target row, and all rows that point to it (via foreign keys) are also deleted. The effect is the same as a restricted delete, except that the RDBMS deletes the Housewares employees automatically before removing the department. Nullifies Delete - the user can delete the target row, and all foreign keys pointing to it are set to null. In this case, after removing the Housewares department, employees who worked in this department would have a NULL (unknown) value for their department.
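Beynon-Davies' three delete behaviours correspond directly to the SQL foreign-key actions RESTRICT, CASCADE, and SET NULL. The sketch below demonstrates all three with SQLite from Python; the table and column names (department, employee, dept_no) are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

def build(on_delete):
    # Rebuild the department/employee pair with the chosen propagation constraint.
    conn.executescript(f"""
        DROP TABLE IF EXISTS employee;
        DROP TABLE IF EXISTS department;
        CREATE TABLE department (dept_no INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE employee (
            emp_no  INTEGER PRIMARY KEY,
            dept_no INTEGER REFERENCES department(dept_no) ON DELETE {on_delete}
        );
        INSERT INTO department VALUES (10, 'Housewares');
        INSERT INTO employee VALUES (1, 10);
    """)

# Restricted delete: the delete fails while an employee still references the row.
build("RESTRICT")
try:
    conn.execute("DELETE FROM department WHERE dept_no = 10")
except sqlite3.IntegrityError:
    print("restricted: delete blocked")

# Cascades delete: deleting the department also deletes its employees.
build("CASCADE")
conn.execute("DELETE FROM department WHERE dept_no = 10")
print(conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0])  # 0 remain

# Nullifies delete: the employee survives, but dept_no becomes NULL.
build("SET NULL")
conn.execute("DELETE FROM department WHERE dept_no = 10")
print(conn.execute("SELECT dept_no FROM employee").fetchone()[0])  # None
```

Which behaviour applies is declared per foreign key at table-creation time, so different relationships in the same schema can propagate deletes differently.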
**Subjectile** Subjectile: The subjectile is a kind of ground used in artistic painting. The word was also used by Antonin Artaud, and Jacques Derrida commented on its use. The subjectile is seen as a theory, not a fact; as a theory, the subjectile is a tool that can be employed to analyse art objects and to generate hypotheses concerning the relationship between subject and object in art. Subjectile: Derrida mentions that the word subjectile appears in an essay on Pierre Bonnard, published in 1921, where it refers to Bonnard's use of cardboard for painting. Subjectile: The Concise French dictionary translates subjectile as "Art: support (beneath paint, etc.)". Without a support and ground, the subject of a painting could not exist, as it would fall away. Derrida argues that Artaud's subjectile is both ground and support. It is stretched out, extended, beyond, through and behind the subject; it is not alien to the subject, yet ‘It has two situations’. Derrida holds that the subjectile functions as a hypothesis, and is a subjectile itself. ‘Subjectile, the word or the thing, can take the place of the subject or of the object – being neither one nor the other.’ Artaud mentions the subjectile three times in his writing. Derrida, in his essay "To Unsense the Subjectile", states ‘All three times, he is speaking of his own drawings, in 1932, 1946, and 1947’. The first time Artaud used the word was in a letter to André Rolland de Renéville: ‘Herewith a bad drawing in which what is called the subjectile betrayed me.’ In 1946: ‘This drawing is a grave attempt to give life and existence to what until today had never been accepted in art, the botching of the subjectile, the piteous awkwardness of forms crumbling around an idea after having for so many eternities labored to join it. The page is soiled and spoiled, the paper crumpled, the people drawn with the consciousness of a child.’ Finally, in February 1947: ‘The figures on the inert page said nothing under my hand.
They offered themselves to me like millstones which would not inspire the drawing, and which I could cut. Scrap, file, sew, unsew, slash, and stitch without the subjectile ever complaining through father or through mother.’ Derrida's essay was first published in French, titled Forcener le Subjectile, in 1986 by Gallimard; an abridged English translation was published later (in 1998 by MIT Press). For copyright reasons the images published in the Gallimard book were excluded from the later English translation, which contains instead photographs of Artaud taken by Georges Pastier in 1947, the year before Artaud died. The subjectile is also commented on by Susan Sontag in her introduction to the edited translation of Artaud's works, and further in The Antonin Artaud Critical Reader, which includes texts by Gilles Deleuze, Derrida, and Sontag.
**Police car** Police car: A police car (also called a police cruiser, police interceptor, patrol car, area car, cop car, prowl car, squad car, radio car, or radio motor patrol) is a ground vehicle used by police and law enforcement for transportation during patrols and responses to calls for service. A type of emergency vehicle, police cars are used by police officers to patrol a beat, quickly reach incident scenes, and transport and temporarily detain suspects, all while establishing a police presence and providing visible crime deterrence. Police car: Police cars are traditionally sedans, though SUVs, crossovers, station wagons, hatchbacks, pickup trucks, utes, vans, trucks, off-road vehicles, and even performance cars have seen use in both standard patrol roles and specialized applications. Most police cars are existing vehicle models sold on the civilian market that may or may not be modified variants of their original models (such as the Ford Crown Victoria Police Interceptor being a variant of the Ford Crown Victoria); the few purpose-built examples include the canceled Carbon Motors E7 and the Lenco BearCat armored vehicle. Police car: Police cars usually contain communication devices, issued weaponry, and a variety of equipment, with emergency lights, a siren, and livery markings to distinguish the vehicle as a police car. History: The first police car was an electric wagon used by the Akron Police Department in Akron, Ohio in 1899. The first operator of the police patrol wagon was Officer Louis Mueller, Sr. It could reach 16 mph (26 km/h) and travel 30 mi (48 km) before its battery needed to be recharged. The car was built by city mechanical engineer Frank Loomis. The US$2,400 vehicle was equipped with electric lights, gongs, and a stretcher. 
The car's first assignment was to pick up a drunken man at the junction of Main and Exchange streets. Ford introduced the flathead V8 in the 1932 Ford as the first mass-marketed V8 car; this low-priced V8 became popular with police in the United States, establishing strong brand loyalty that continued into the 21st century. Starting in the 1940s, major American automakers, namely the Big Three, began to manufacture specialized police cars. Over time, these became their own dedicated police fleet offerings, such as the Ford Police Interceptor and Chevrolet 9C1. History: In the United Kingdom, Captain Athelstan Popkess, Chief Constable of the Nottingham City Police from 1930 to 1959, transformed British police from their Victorian-era foot patrol beat model to the modern car-based reactive response model through his development of the "Mechanized Division", which used two-way radio communication between police command and police cars. Under Popkess, the Nottingham City Police began to use police cars as an asset that police tactics centered around, such as overlaying police car patrol sectors over foot patrol beats and using police cars to pick up foot patrol officers while responding to crimes. As car ownership increased during the post-World War II economic expansion, police cars became significantly more common in a majority of developed countries: police jurisdictions expanded farther out into residential and suburban areas, car-oriented urban planning and highways dominated cities, vehicular crimes and police evasion in cars increased, and more equipment was issued to police officers, to the point that vehicles became practically necessary for modern law enforcement. Types: Various types of police car exist.
Depending on the organization of the law enforcement agency, the class of vehicle used as a police car, and the environmental factors of the agency's jurisdiction, many of the types below may or may not exist in certain fleets, or their capabilities may be merged to create all-rounded units with shared vehicles as opposed to specialized units with separate vehicles. Types: Patrol car A patrol car is a police car used for standard patrol. Used to replace traditional foot patrols, the patrol car's primary function is to provide transportation for regular police duties, such as responding to calls, enforcing laws, or simply establishing a more visible police presence while on patrol. Driving a patrol car allows officers to reach their destinations more quickly and to cover more ground compared to other methods. Patrol cars are typically designed to be identifiable as police cars to the public and thus almost always have proper markings, roof-mounted emergency lights, and sirens. Types: Response car A response car, also known as a pursuit car, area car, rapid response unit, or fast response car, is a police car used to ensure quick responses to emergencies compared to patrol cars. It is likely to be of a higher specification, capable of higher speeds, and often fitted with unique markings and increased-visibility emergency lights. These cars are generally only used to respond to emergency incidents and may carry specialized equipment not used in regular patrol cars, such as long arms. Types: Traffic car A traffic car, also known as a highway patrol car, traffic enforcement unit, speed enforcement unit, or road policing unit, is a police car tasked with enforcing traffic laws and conducting traffic stops, typically on major roadways such as highways. They are often relatively high-performance vehicles compared to patrol cars, as they must be capable of catching up to fast-moving vehicles. 
They may have specific markings or special emergency lights to either improve or hinder visibility. Alternatively, some traffic cars may use the same models as patrol cars, and may barely differ from them aside from markings, radar speed guns, and traffic-oriented equipment. Types: Unmarked car An unmarked car is a police car that lacks markings and often easily visible or roof-mounted emergency lights. They are generally used for varying purposes, ranging from standard patrol and traffic enforcement to sting operations and detective work. They have the advantage of not being immediately recognizable, and are considered a valuable tool in catching criminals in the commission of a crime or by surprise. The resemblance an unmarked police car has to a civilian vehicle varies based on its application: it may use the same model as marked patrol cars, and may be virtually identical to them aside from the lack of roof-mounted emergency lights, with pushbars and spotlights clearly visible; alternatively, it may use a common civilian vehicle model that blends in with traffic, with emergency lights embedded in the grille or capable of being hidden and revealed, such as Japanese unmarked cars having retractable beacons built into the car's roof. Unmarked cars typically use regular civilian license plates, occasionally even in jurisdictions where emergency vehicles and government vehicles use unique license plates, though some agencies or jurisdictions may use the unique plates anyway; for example, American federal law enforcement agencies may use either government plates or regular license plates. The term "undercover car" is often used to describe unmarked cars.
However, this usage is erroneous; unmarked cars are police cars that lack markings but have police equipment, emergency lights, and sirens, while undercover cars lack these entirely and are essentially civilian vehicles used by law enforcement in undercover operations to avoid detection. The close resemblance of unmarked cars to civilian cars has created concerns of police impersonation. Some police officers advise motorists that they do not have to pull over in a secluded location and can instead wait until they reach somewhere safer. In the UK, officers must be wearing uniforms in order to make traffic stops. Motorists can also ask for the officer's badge and identification, or call an emergency number or a police non-emergency number, to confirm that the police unit is genuine. Types: Ghost car A ghost car, also known as a stealth car or semi-marked car, is a police car that combines elements of both an unmarked car and a marked patrol car, featuring markings that are either similar in color to the vehicle's body paint, or are reflective graphics that are difficult to see unless illuminated by lights or viewed at certain angles. Ghost cars are often used for traffic enforcement, though they may also be used in lieu of unmarked cars in jurisdictions where unmarked cars are prohibited or have their enforcement capabilities limited, such as being unable to conduct traffic stops. In these instances, the markings on ghost cars may be sufficient for them to legally count as marked police cars, despite the markings being difficult to see. Types: Utility vehicle A utility vehicle is a police car used for utility or support purposes as opposed to regular police duties. Utility vehicles are usually all-wheel drive vehicles with cargo space such as SUVs, pickup trucks, vans, utes, or off-road vehicles.
They are often used to transport or tow assets such as trailers, equipment, or other vehicles such as police boats; they are alternatively used for or are capable of off-roading, especially in fleets where most other vehicles cannot do so. They can also be used for animal control, if that is the responsibility of police within that jurisdiction. Some utility vehicles can be used for transporting teams of officers and occasionally have facilities to securely detain and transport a small number of suspects, provided there is enough seating space. Types: Police dog vehicle A police dog vehicle, also known as a K-9 vehicle or a police dog unit, is a police car modified to transport police dogs. The models used for these vehicles range from the same as patrol cars to dedicated SUVs, pickup trucks, or vans. To provide sufficient space for the police dog, there is usually a cage in the trunk or rear seats with enough space for the dog, though some agencies may put the cage in the front passenger seat, or may lack a cage entirely and simply have the dog in the rear compartment. There may or may not be space to transport detainees or additional officers. Police dog vehicles almost always have markings noting they have a police dog on board, typically just the agency's standard markings with the added notice. Types: Decoy car A decoy car is a police car used to establish a police presence, typically to deter traffic violations or speeding, without a police officer actually being present. They may be older models retired from use, civilian cars modified to resemble police cars, or demonstration vehicles. In some instances, a "decoy car" may not be a vehicle at all, but rather a life-sized cutout or sign depicting a police car. Use of decoy cars is intended to ensure crime deterrence without having to commit manpower, allowing the officer that would otherwise be there to be freed up for other assignments. 
In the United Kingdom, decoy liveried police cars and vans may be parked on filling station forecourts to deter motorists from dispensing fuel and then making off without payment, also known as "bilking". The use of decoy cars is entirely up to the agency, though in 2005, the Virginia General Assembly considered a bill that would make decoy cars a legal requirement for police. The bill stated in part: "Whenever any law-enforcement vehicle is permanently taken out of service ... such vehicle shall be placed at a conspicuous location within a highway median in order to deter violations of motor vehicle laws at that location. Such vehicles shall ... be rotated from one location to another as needed to maintain their deterrent effect." Surveillance car A surveillance car is a police car used for surveillance purposes. Usually SUVs, vans, or trucks, surveillance cars can be marked, unmarked, undercover, or disguised, and may be crewed or remotely monitored. They are used to gather evidence of criminal offenses or provide better vantage points at events or high-traffic areas. The surveillance method used varies, and may include CCTV, hidden cameras, wiretapping devices, or even aerial platforms. Some surveillance cars may also be used as bait cars, deployed to catch car thieves. Types: Armored vehicle A police armored vehicle, also known as a SWAT vehicle, tactical vehicle, or rescue vehicle, is an armored vehicle used in a police capacity. They are typically four-wheeled armored vehicles with configurations similar to military light utility vehicles, infantry mobility vehicles, internal security vehicles, MRAPs, or similar armored personnel carriers, but lack mounted and installed weaponry. As their name suggests, they are typically used to transport police tactical units such as SWAT teams, though they may also be used in riot control or to establish police presence at events.
Types: Mobile command center A mobile command center, also known as an emergency operations center, mobile command post, or mobile police station, is a truck used to provide a central command center at the scene of an incident, or to establish a visible police presence or temporary police station at an event. Types: Bomb disposal vehicle A bomb disposal vehicle is a vehicle used by bomb disposal squads to transport equipment and bomb disposal robots, or to store bombs for later disposal. They are often vans or trucks, typically with at least one bomb containment chamber installed in the rear of the vehicle, and ramps to allow bomb disposal robots to access the vehicle. Bomb disposal vehicles are generally not explosive-resistant and are only used for transporting explosives for disposal, not actively disposing of them. Types: Armed vehicle An armed police vehicle is a police vehicle that has lethal weaponry installed on it. These are often technicals or light utility vehicles with machine gun turrets, and may or may not have emergency lights and sirens. Armed police vehicles are very rare and are usually only used in wartime, in regions with very high violent crime rates, or where combat with organized crime or insurgencies is common to the point that armed police vehicles are necessary; for example, the Iraqi Police received technicals during the Iraq War, and the National Police of Ukraine used armed vehicles during the 2022 Russian invasion of Ukraine, including the STREIT Group Spartan and a modified BMW 6 Series with a mounted machine gun. These should not be confused with police vehicles that have turrets but do not have guns, which are often just police armored vehicles or, if less-lethal munitions are used, riot control vehicles. Types: Riot control vehicle A riot control vehicle, also known as a riot suppression vehicle or simply a riot vehicle, is an armored or reinforced police vehicle used for riot control.
A wide array of vehicles, from armored SUVs and vans to dedicated trucks and armored personnel carriers, are used by law enforcement to suppress or intimidate riots, protests, and public order crimes; hold and reinforce a police barricade to keep the scene contained; or simply transport officers and equipment at the scene more safely than a standard police car could. Common modifications include tear gas launchers, shields, and caged windows. Some riot control vehicles also include less-lethal weaponry and devices, such as water cannons and long-range acoustic devices. Types: Community engagement, liaison, and demonstration vehicles A community engagement vehicle, also known as a liaison vehicle, demonstration vehicle, or parade car, is a police car used for display and community policing purposes, but not for patrol duties. These are often high-performance, modified cars, classic police cars, or vehicles seized from convicted criminals and converted to police cars, used to represent the agency in parades, promote a specific program (such as the D.A.R.E. program), or help build connections between law enforcement and certain groups that the vehicle appeals to. Some cars may be visibly marked but not fitted with audio or visual warning devices. These are used by community liaison officers for transport to engagements and for making appearances at community events. Some vehicles are produced by automotive manufacturers with police markings to showcase them to police departments; these are usually concepts, prototypes, or reveals of their police fleet offerings. Emergency light and siren manufacturers such as Federal Signal, Whelen, and Code 3 also use unofficial police cars to demonstrate their emergency vehicle equipment. Equipment: Police cars are usually passenger car models which are upgraded to the specifications required by the purchasing police service.
Several vehicle manufacturers provide a "police package" option, which is built to police specifications at the factory. Agencies may supplement these modifications by adding their own equipment and making their own modifications after purchasing a vehicle. Equipment: Mechanical modifications Modifications a police car might undergo include adjustments for higher durability, speed, high-mileage driving, and long periods of idling at a higher temperature. This is usually accomplished by installing heavy-duty suspension, brakes, tires, alternator, transmission, and cooling systems, along with a calibrated speedometer. The car's stock engine may be modified or replaced by a more powerful engine from another vehicle by the same manufacturer. The car's electrical system may also be upgraded to accommodate the additional electronic police equipment. Equipment: Warning systems Police vehicles are often fitted with audible and visual warning systems to alert other motorists of their approach or position on the road. In many countries, use of the audible and visual warnings affords the officer a degree of exemption from road traffic laws (such as the right to exceed speed limits, or to treat red stop lights as a yield sign) and may also impose a duty on other motorists to yield to the police car and allow it to pass. Equipment: Warning systems on a police vehicle can be of two types: passive or active. Equipment: Passive visual warnings Passive visual warnings are the livery markings on the vehicle. Police vehicle markings usually make use of bright colors or strong contrast with the base color of the vehicle. Some police cars have retroreflective markings that reflect light for better visibility at night, though others may only have painted-on or non-reflective markings. Examples of markings and designs used in police liveries include black and white, Battenburg markings, Sillitoe tartan, and "jam sandwich" markings.
Equipment: Police vehicle markings include, at the very least, the word "police" (or a similar applicable phrase if the agency does not use that term, such as "sheriff", "gendarmerie", "state trooper", "public safety", etc.) and the agency's name or jurisdiction (such as "national police" or "Chicago Police"). Also common are the agency's seal, the jurisdiction's seal, and a unit number. Text is usually in the national or local language, though other languages may be used where appropriate, such as in ethnic enclaves or areas with large numbers of tourists. Equipment: Unmarked vehicles generally lack passive visual warnings, while ghost cars have markings that are visible only at certain angles, such as from the rear or sides, making them appear unmarked when viewed from the front. Another unofficial passive visual warning of a police vehicle can simply be the vehicle's silhouette, if its use as a police car is common, such as that of the Ford Crown Victoria in North America, or the presence of emergency vehicle equipment on the vehicle, such as a pushbar or a roof-mounted lightbar. Equipment: Active visual warnings Active visual warnings are the emergency lights on the vehicle. These lights are used while responding to attract the attention of other road users and prompt them to yield so the police car can pass. The colors used by police car lights depend on the jurisdiction, though they are commonly blue and red. Several types of flashing lights are used, such as rotating beacons, halogen lamps, or LED strobes. Some agencies use arrow sticks to direct traffic, or message display boards to provide short messages or instructions to motorists. The headlights and tail lights of some vehicles can be made to flash, or small strobe lights can be fitted in the vehicle lights. Equipment: Audible warnings Audible warnings are the sirens on the vehicle.
These sirens alert road users to the presence of an emergency vehicle before it can be seen, to warn of its approach. The first audible warnings were mechanical bells, mounted to either the front or roof of the car. A later development was the rotating air siren, which makes noise as air moves past it. Most modern police vehicles use electronic sirens, which can produce a range of different noises. Different models and manufacturers have distinct siren noises; one siren model, the Rumbler, emits a low-frequency sound that can be felt through vibrations, allowing those who would not otherwise hear the siren or see the emergency vehicle to still know it is approaching. Different siren noises may be used depending on traffic conditions and the context. For example, on a clear road, "wail" (a long up-and-down unbroken tone) is often used, whereas in heavy slow traffic or at intersections, "yelp" (essentially a sped-up wail) may be preferred. Other noises are used in certain countries and jurisdictions, such as "phaser" (a series of brief sped-up beeps) and "hi-lo" (a two-tone up-down sound). Some vehicles may also be fitted with electronic airhorns. Equipment: Police-specific equipment A wide range of equipment is carried in police cars, used to make police work easier or safer. The installation of this equipment partially transforms the police car into a mobile workstation. Police officers use their car to fill out forms, print documents, type on a computer or console, and examine various screens, sometimes while driving. Ergonomics in the layout and installation of these items plays an important role in the comfort and safety of police officers at work, and in preventing injuries such as back pain and musculoskeletal disorders. Equipment: Communication devices Police radio systems are generally standard equipment in police cars, used to communicate between the officers assigned to the car and the dispatcher.
Mobile data terminals are also common as alternative ways to communicate with the dispatcher or receive important information; these are typically tablets or dashboard-mounted laptops. Equipment: Suspect transport enclosure Suspect transport enclosures are typically located at the rear of the vehicle, taking up the rear seats or rear compartment. The seats are sometimes modified to be a hard metal or plastic bench. Separating the transport enclosure is often a partition, a barrier between the front and rear compartments typically made of metal with a window of reinforced glass, clear plastic, or metal mesh or bars. Some police cars do not have partitions; in these instances, another officer may have to sit in the rear to secure the detainee, or a dedicated transport vehicle may be called. Equipment: Weapon storage Weapons may be stored in the trunk or front compartment of the vehicle. In countries where police officers are already armed with handguns, long guns such as rifles or shotguns may be kept on a gun rack in the front or in the trunk, alongside ammunition. In countries where police are not armed or do not keep their guns on them, handguns may be kept in the car instead; for example, Norwegian Police Service officers are issued handguns, but keep them in a locked compartment in their car that requires high-ranking authorization to access. Less-lethal weaponry and riot gear may also be stored in the trunk. Equipment: Rescue equipment Rescue equipment such as first aid kits, dressings, fire extinguishers, defibrillators, and naloxone kits is often kept in police cars to provide first aid and rescue when necessary. Equipment: Scene equipment Tools such as barricade tape, traffic cones, traffic barricades, and road flares are often kept in police cars to secure scenes for further investigation.
Equipment: Recording equipment Recording equipment such as dashcams and interior cameras is installed in some police cars to make audio and video recordings of incidents, police interactions, and evidence. Equipment: Detectors Detector devices such as radar speed guns, automatic number-plate recognition, and LoJack are used in some police cars, typically in traffic enforcement, to detect speeding violations, read multiple plates for flags (such as warrants or lack of insurance) without having to check manually, and track stolen cars, respectively. Equipment: Pushbar Pushbars, also known as bullbars, rambars, or nudge bars, are fitted to the chassis of a police car to augment the front bumper. They allow the car to push disabled vehicles out of a roadway, breach small and light objects, and conduct PIT maneuvers with less damage to the front of the vehicle. Pushbar designs vary; some are small and only protect the grille, while others have extensions that shield as far as the headlights. Some pushbars also have emergency lights installed on them, providing additional visual warnings. Equipment: Spotlights Spotlights are small searchlights typically installed on the A-pillar of a police car. They are used to provide light in darkened areas or where necessary, such as down alleyways or into a suspect's car during a nighttime traffic stop. These spotlights can be aimed and activated by the officers inside the vehicle. Usually, one or two are installed on the car, though more may occasionally be installed on the roof, grille, bumper, or pushbar. Equipment: Run lock Run locks allow the vehicle's engine to be left running without the keys in the ignition. This allows adequate power to be supplied to the vehicle's equipment at the scene of an incident without draining the battery. The vehicle can only be driven after the keys are inserted; if they are not, the engine will switch off when the handbrake is disengaged or the footbrake is activated.
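The run-lock behaviour described above is essentially a small piece of interlock logic. The following is a minimal illustrative sketch (hypothetical, not any manufacturer's actual implementation): the engine keeps running with the key removed, but cuts out if the car could be driven away without the key.

```python
# Hypothetical sketch of run-lock interlock logic, as described in the text.
class RunLock:
    def __init__(self):
        self.engine_running = True   # engine left idling at the scene
        self.key_inserted = False    # officer has taken the keys

    def insert_key(self):
        # With the key back in the ignition, normal driving is possible.
        self.key_inserted = True

    def release_handbrake(self):
        if not self.key_inserted:
            self.engine_running = False  # no key present: cut the engine

    def press_footbrake(self):
        if not self.key_inserted:
            self.engine_running = False  # same interlock on the footbrake

car = RunLock()
car.release_handbrake()    # attempt to drive off without the key
print(car.engine_running)  # False: the engine has switched off
```

With the key inserted first, the same brake actions leave the engine running, matching the described behaviour.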
Equipment: Ballistic protection Some police cars can be optionally upgraded with bullet-resistant armor in the car doors. The armor is typically made from ceramic ballistic plates and aramid baffles. A 2016 news report said that Ford sells 5 to 10 percent of its American police vehicles with ballistic protection in the doors. In 2017, New York City Mayor Bill de Blasio announced that all NYPD patrol cars would have bullet-resistant door panels and bullet-resistant window inserts installed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**JD Decompiler** JD Decompiler: JD (Java Decompiler) is a decompiler for the Java programming language. JD is provided as a GUI tool as well as in the form of plug-ins for the Eclipse (JD-Eclipse) and IntelliJ IDEA (JD-IntelliJ) integrated development environments. JD supports most versions of Java from 1.1.8 through 1.7.0, as well as JRockit 90_150, Jikes 1.2.2, the Eclipse Java Compiler, and Apache Harmony, and is thus often used where the formerly popular JAD was once employed. Variants: In 2011, Alex Kosinsky initiated a variant of JD-Eclipse, JDEclipse-Realign, which supports aligning decompiled code with the line numbers of the originals, which are often included in the original bytecode as debug information. In 2012, a branch of JDEclipse-Realign by Martin "Mchr3k" Robertson extended this functionality with manual decompilation control and support for Eclipse 4.2 (Juno).
**FMN reductase (NADPH)** FMN reductase (NADPH): FMN reductase (NADPH) (EC 1.5.1.38, FRP, flavin reductase P, SsuE) is an enzyme with systematic name FMNH2:NADP+ oxidoreductase. This enzyme catalyses the following chemical reaction: FMNH2 + NADP+ ⇌ FMN + NADPH + H+. The enzymes from bioluminescent bacteria contain FMN.
**5-Methylmethiopropamine** 5-Methylmethiopropamine: 5-Methylmethiopropamine (5-MMPA, mephedrene) is a stimulant drug which is a ring-substituted derivative of methiopropamine. It is not a substituted cathinone derivative like mephedrone, as it lacks a ketone group at the β position of the aliphatic side chain, but instead more closely resembles substituted amphetamines. It has been sold as a designer drug, first being identified in Germany in June 2020.
**Snake antivenom** Snake antivenom: Snake antivenom is a medication made up of antibodies used to treat snake bites by venomous snakes. It is a type of antivenom. Snake antivenom: It is a biological product that typically consists of venom neutralizing antibodies derived from a host animal, such as a horse or sheep. The host animal is hyperimmunized to one or more snake venoms, a process which creates an immunological response that produces large numbers of neutralizing antibodies against various components (toxins) of the venom. The antibodies are then collected from the host animal, and further processed into snake antivenom for the treatment of envenomation. Snake antivenom: It is on the World Health Organization's List of Essential Medicines. Production: Antivenoms are typically produced using a donor animal, such as a horse or sheep. The donor animal is hyperimmunized with non-lethal doses of one or more venoms to produce a neutralizing antibody response. Then, at certain intervals, the blood from the donor animal is collected and neutralizing antibodies are purified from the blood to produce an antivenom. Regulations: Human Medicine: In the United States, antivenom production and distribution is regulated by the Food and Drug Administration. Veterinary Medicine: In the United States, antivenom production and distribution is regulated by the United States Department of Agriculture's Center for Veterinary Biologics. Classification: Monovalent vs. polyvalent Snake antivenom can be classified by which antigens (venoms) were used in the production process. If the hyperimmunizing venom is obtained from a single species, then it is considered a monovalent antivenom. If the antivenom contains neutralizing antibodies raised against two or more species of snakes, then the composition is considered polyvalent. Classification: Antibody composition Compositions of the antivenom can be classified as whole IgG, or fragments of IgG. 
Whole antibody products consist of the entire antibody molecule, often immunoglobulin G (IgG), whereas antibody fragments are derived by digesting the whole IgG into Fab (monomeric binding) or F(ab')2 (dimeric binding) fragments. The fragment antigen binding, or Fab, is the selective antigen-binding region. An antibody such as IgG can be digested by papain to produce three fragments: two Fab fragments and one Fc fragment. An antibody can also be digested by pepsin to produce two fragments: a F(ab')2 fragment and a pFc' fragment. Classification: The fragment antigen-binding (Fab fragment) is a region on an antibody that binds to antigens, such as venoms. The molecular size of Fab is approximately 50 kDa, making it smaller than F(ab')2, which is approximately 110 kDa. These size differences greatly affect tissue distribution and rates of elimination. Cross neutralization properties: Antivenoms may also offer some cross protection against a variety of venoms from snakes within the same family or genera. For instance, Antivipmyn (Instituto Bioclon) is made from the venoms of Crotalus durissus and Bothrops asper. Antivipmyn has been shown to cross-neutralize the venoms of all North American pit vipers. Cross neutralization affords antivenom manufacturers the ability to hyperimmunize with fewer venom types to produce geographically suitable antivenoms. Availability: Snake antivenom is complicated for manufacturers to produce. Weighed against profitability (especially for sale in poorer regions), the result is that many snake antivenoms worldwide are very expensive. Availability also varies from region to region. Availability: Antivenom shortage for New World coral snake As of 2012, the relative rarity of coral snake bites, combined with the high costs of producing and maintaining an antivenom supply, means that antivenom (also called "antivenin") production in the United States has ceased.
According to Pfizer, the owner of the company that used to make the antivenom Coralmyn, it would take between $5 million and $10 million to research a new synthetic antivenom. The cost was too high in comparison to the small number of cases presented each year. The existing American coral snake antivenom stock technically expired in 2008, but the U.S. Food and Drug Administration has extended the expiration date every year, through to at least 30 April 2017. Foreign pharmaceutical manufacturers have produced other coral snake antivenoms, but the costs of licensing them in the United States have stalled availability. Instituto Bioclon is developing a coral snake antivenom. In 2013, Pfizer was reportedly working on a new batch of antivenom but had not announced when it would become available. As of 2016, the Venom Immunochemistry, Pharmacology and Emergency Response (VIPER) institute of the University of Arizona College of Medicine was enrolling participants in a clinical trial of INA2013, a "novel antivenom", according to the Florida Poison Information Center. Families of venomous snakes: Over 600 species are known to be venomous, about a quarter of all snake species. The following table lists some major species.
**Quarter cubic honeycomb** Quarter cubic honeycomb: The quarter cubic honeycomb, quarter cubic cellulation or bitruncated alternated cubic honeycomb is a space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of tetrahedra and truncated tetrahedra in a ratio of 1:1. It is called "quarter-cubic" because its symmetry unit – the minimal block from which the pattern is developed by reflections – is four times that of the cubic honeycomb. Quarter cubic honeycomb: It is vertex-transitive with 6 truncated tetrahedra and 2 tetrahedra around each vertex. A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space. It is one of the 28 convex uniform honeycombs. The faces of this honeycomb's cells form four families of parallel planes, each with a 3.6.3.6 tiling. Its vertex figure is an isosceles antiprism: two equilateral triangles joined by six isosceles triangles. John Horton Conway calls this honeycomb a truncated tetrahedrille, and its dual oblate cubille. The vertices and edges represent a Kagome lattice in three dimensions, which is the pyrochlore lattice. Construction: The quarter cubic honeycomb can be constructed in slab layers of truncated tetrahedra and tetrahedral cells, seen as two trihexagonal tilings. Two tetrahedra are stacked by a vertex and a central inversion. In each trihexagonal tiling, half of the triangles belong to tetrahedra, and half belong to truncated tetrahedra. 
These slab layers must be stacked with tetrahedral triangles matched to truncated tetrahedral triangles to construct the uniform quarter cubic honeycomb. Slab layers of hexagonal prisms and triangular prisms can be alternated for elongated honeycombs, but these are not uniform. Symmetry: Cells can be shown in two different symmetries. The reflection-generated form, represented by its Coxeter-Dynkin diagram, has two colors of truncated tetrahedra. The symmetry can be doubled by relating the pairs of ringed and unringed nodes of the Coxeter-Dynkin diagram, which can be shown with one color of tetrahedral and truncated tetrahedral cells. Related polyhedra: This honeycomb is one of five distinct uniform honeycombs constructed by the A~3 Coxeter group. The symmetry can be multiplied by the symmetry of rings in the Coxeter–Dynkin diagrams: The quarter cubic honeycomb is related to a matrix of 3-dimensional honeycombs: q{2p,4,2q}
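The claim above that the vertices and edges form the pyrochlore lattice (the midpoints of the bonds of a diamond lattice) can be checked numerically. The sketch below, an illustration not taken from the source, builds one cubic cell of that lattice (lattice constant 1) and verifies that every vertex has exactly 6 nearest neighbours, matching the kagome-like coordination.

```python
import math

# FCC basis points of a unit cube (lattice constant 1)
FCC = [(0, 0, 0), (0, 0.5, 0.5), (0.5, 0, 0.5), (0.5, 0.5, 0)]
# Each diamond-lattice site bonds to four neighbours along these offsets
BONDS = [(0.25, 0.25, 0.25), (0.25, -0.25, -0.25),
         (-0.25, 0.25, -0.25), (-0.25, -0.25, 0.25)]

def wrap(p):
    """Reduce a point into the unit cell [0,1)^3."""
    return tuple(round(c % 1.0, 6) for c in p)

# Pyrochlore sites = bond midpoints of the diamond lattice (16 per cubic cell)
sites = sorted({wrap(tuple(a[i] + b[i] / 2 for i in range(3)))
                for a in FCC for b in BONDS})

def dist(p, q):
    """Minimum-image (periodic) distance within the unit cell."""
    return math.sqrt(sum(min(abs(p[i] - q[i]), 1 - abs(p[i] - q[i])) ** 2
                         for i in range(3)))

NN = math.sqrt(2) / 4  # nearest-neighbour separation
coordination = [sum(abs(dist(p, q) - NN) < 1e-9 for q in sites if q != p)
                for p in sites]
print(len(sites), set(coordination))  # 16 sites, every one with 6 neighbours
```

Each site's six neighbours split into two triangles, three around each endpoint of its diamond-lattice bond, consistent with the isosceles-antiprism vertex figure described above.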
**Oxidizable carbon ratio dating** Oxidizable carbon ratio dating: Oxidizable carbon ratio dating is a method of dating in archaeology and earth science that can be used to derive or estimate the age of soil and sediment samples up to 35,000 years old. The method is experimental, and it is not as widely used in archaeology as other chronometric methods such as radiocarbon dating. The methodology was introduced by the Archaeology Consulting Team from Essex Junction in 1992. Process: This dating method works by measuring the ratio of oxidizable carbon to organic carbon. If the sample is freshly burned there will be no oxidizable carbon, because it would all have been removed by the combustion process. Over time this changes, and the amount of organic carbon decreases, replaced by oxidizable carbon. By measuring the ratio of oxidizable carbon to organic carbon (the OCR) and applying it to the dating equation, the age of the sample can be determined with a very low standard error. Process: The OCR date is computed from the measured OCR together with the sample's depth, the local mean temperature, mean rainfall, mean soil texture, and pH, using the constant 14.4888. Evaluations and applications of the method: The OCR dating method is, like any scientific procedure, subject to testing, evaluation, and refinement. Evaluations and applications of the method: The Oxidizable Carbon Ratio method was the subject of a Point–CounterPoint feature of the Society for American Archaeology Bulletin in 1999. In that article, Killick, Jull, and Burr suggest (1) that the OCR method has never been described in a peer-reviewed journal article, (2) that no "scientifically acceptable" demonstration of the accuracy and precision of OCR dating has been published, and (3) that the equation underlying the OCR method is questionable because of site-specific environmental factors.
Frink's rejoinder points out (1) that the OCR method has indeed been described in a peer-reviewed journal article, (2) that the accuracy and precision of the method have been reported in multiple venues and that the concept of "scientifically acceptable" is context- and person-specific (and therefore a red herring), and (3) that the equation underlying the OCR method takes into account the seven factors of soil formation, and that these factors are routinely used in soil science applications without question. In the end, Frink concludes that the OCR method, like any scientific advance, warrants further study, and he points out that even the now venerable "scientifically acceptable" method of radiocarbon dating was much maligned when it was first introduced. Evaluations and applications of the method: Frink and others have published multiple studies demonstrating that OCR dates can correlate well with radiocarbon dates (see the list of published references provided below). Fullen's study of the Sarah Peralta site in Louisiana found that the OCR method served as an effective means of inferring time at the site in the absence of radiometrically datable charcoal. He concludes that whereas debate remains concerning the OCR procedure, given "the well-corroborated dates that the LSU Museum of Natural Science has had returned on material processed with OCR and conventional radiocarbon dating...the dates returned on material from Zone 3 will be considered accurate until such time that OCR dating is proven invalid." (ibid. p. 65) The OCR method has been used in a large number of archaeological and geomorphological studies, and an incomplete list of published references is provided below. It has been used to evaluate soil development in a range of temperature regimes including arid, semi-arid, thermic, mesic, and frigid. It has also been applied to a variety of landforms including stratified fluvial deposits, desert pavements and vesicular soils, and glacial deposits.
Analyses also include monumental earthworks and geoglyphs. Published references: Abbott, James T., Raymond Mauldin, Patience E. Patterson, W. Nicholas Trierweiler, Robert J. Hard, Christopher R. Lintz, and Cynthia L. Tennis 1997 Significance Standards for Prehistoric Archeological Sites at Fort Bliss: A Design for Further Research and the Management of Cultural Resources. TRC Mariah Associates Inc., Austin, Texas. pp. 70–71. Bradbury, Andrew P. 1995 A National Register Evaluation of Twelve Sites in Adair, Cumberland and Metcalfe Counties, Kentucky. Contract Publication Series 95-69. Cultural Resource Analysts, Inc., Lexington, Kentucky. Burkett, Kenneth 1999 Prehistoric Occupations at Fishbasket. Pennsylvania Archaeologist 69(1):1-100. Cable, John S., Kenneth F. Styer, and Charles E. Cantley n.d. Data Recovery Excavations at the Maple Swamp (38HR309) and Big Jones (38HR315) Sites on the Conway Bypass, Horry County, South Carolina: Prehistoric Sequence and Settlement on the North Coastal Plain of South Carolina. New South Associates, Inc., Stone Mountain, Georgia. Submitted to the South Carolina Department of Transportation, Columbia, South Carolina. Cantley, Charles E., Leslie E. Raymer, Johannes H. N. Loubser, and Mary Beth Reed 1997 Phase III Data Recovery at Four Prehistoric Sites in the Horton Creek Reservoir Project Area, Fayette County, Georgia. New South Associates, Inc., Stone Mountain, Georgia. Submitted to Mallett & Associates, Inc., Smyrna, Georgia. Cantley, Charles E., Lotta Danielsson-Murphy, Thad Murphy, Undine McEvoy, Leslie E. Raymer, John S. Cable, Robert Yallop, Cindy Rhodes, Mary Beth Reed, and Lawerence A. Abbott 1997 Fort Polk, Louisiana: A Phase I Archaeological Survey of 14,622 Acres in Vernon Parish. New South Associates, Inc., Stone Mountain, Georgia. Submitted to the National Park Service, Atlanta, Georgia. Childress, Mitchell R. and Guy G. Weaver In Prep.
(1998) National Register Eligibility Assessment of Four Sites on Upper Roubidoux Creek (23PU483, 23PU458, 23PU354, 23PU264), Fort Leonard Wood, Missouri. Brockington and Associates, Inc., Memphis. Submitted to the United States Army Construction Engineering Research Laboratories (USACERL), Champaign, Illinois. Dorn, Ronald I., Edward Stasack, Diane Stasack, and Persis Clarkson 2001 Analyzing Petroglyphs and Geoglyphs with Four New Perspectives: Evaluating What's There and What's Not. American Indian Rock Art 27:77-96. Elliott, Rita F., Johannes H. N. Loubser, Leslie E. Raymer, Mary Beth Reed, and Charles E. Cantley 1995 Archaeological Testing of Three Sites Along the SR 21, Effingham and Screven Counties, Georgia. New South Associates, Inc., Stone Mountain, Georgia. Submitted to the Georgia Department of Transportation, Atlanta, Georgia. Frink, Douglas S. and Ronald I. Dorn 2001 Beyond Taphonomy: Pedogenic Transformations of the Archaeological Record in Monumental Earthworks. Journal of the Arizona-Nevada Academy of Science 33(3):182-202. Frink, Douglas S. and Timothy K. Perttula 2001 Analysis of the 39 Oxidizable Carbon Ratio Dates from Mound A, Mound B, and the Village Area at the Calvin Davis or Morse Mounds Site (41SY27). North American Archaeologist 22(2):143-160. [3] Fullen, Steven R. 2005 Temporal Trends in Tchula Period Pottery in Louisiana. Unpublished MA thesis, Department of Geography and Anthropology, Louisiana State University and Agricultural and Mechanical College. [4] Gunn, Joel D., Thomas G. Lilly, Cheryl Claassen, John Byrd, and Andrea Brewer Shea 1995 Archaeological Data Recovery Investigations at Sites 38BU905 and 38BU921 Along the Hilton Head Cross Island Expressway, Beaufort County, South Carolina. Garrow & Associates, Inc., Raleigh, North Carolina. Harrison, Rodney, and Frink, Douglas S.
2000 The OCR Carbon Dating Procedure in Australia: New Dates from Wilinyjibari Rockshelter, Southeast Kimberley, Western Australia. Australian Archaeology 51:6-15. Hoffman, Curtiss, Maryanne MacLeod, and Alan Smith 1999 Symbols in Stone: Chiastolites in New England Archaeology. Bulletin of the Massachusetts Archaeological Society 60(1). Johnson, Jay K., Gena M. Aleo, Rodney T. Stuart, and John Sullivan 1998 The 1996 Excavations at the Batesville Mounds: A Woodland Period Platform Mound Complex in Northwest Mississippi. Submitted to the Panola County Industrial Authority. Keith, Scot 1998 OCR Dating of Prehistoric Features at the Sandhill Site (22-WA-676), Southeast Mississippi. Mississippi Archaeology 33(2):77-114. Killick, D.J., A.J.T. Jull, and G.S. Burr 1999 Point/Counterpoint: Failure to Discriminate: Querying Oxidizable Carbon Ratio (OCR) Dating. SAA Bulletin 17(5):32-36. Response: Frink, Douglas S. [5] Kindall, Sheldon 1997 The Oxidizable Carbon Ratio (OCR) Technique: A New, Low-Cost Dating Method. The Steward: Collected Papers on Texas Archeology 4:91-94. Messick, Denise P., Johannes Loubser, Theresa M. Hamby, Joe W. Joseph, Mary Beth Reed, and Leslie Raymer n.d. Prehistoric and Historic Excavations at Site 9Gw347, Annistown Road Improvement Project, Gwinnett County, Georgia. New South Associates, Inc., Stone Mountain, Georgia. Submitted to the Gwinnett County Department of Transportation, Lawrenceville, Georgia and Moreland Altobelli Associates, Atlanta, Georgia. Nami, Hugo, and Frink, Douglas S. 1999 Cronologia Obtenida por la Tasa del Carbono Organico Oxidable (OCR) en Markatch Aike 1 (Cuenca del Rio Chico, Santa Cruz). Anales del Instituto de la Patagonia 27:231-237. Patterson, Leland W. 1998 Oxidizable Carbon Ratio Dating. La Tierra: Journal of the Southern Texas Archaeological Association 25(1):46-48. n.d. Dates for Formation of Huntington Mound, Fort Bend Co., Texas. Submitted to Houston Archeological Society Journal. Patterson, L.W., J.D. Hudgins, S.M.
Kindall, W.L. McClure, and S.D. Pollan 1995 Excavations at Site 41WH24, Wharton Co., Texas. Journal of the Houston Archeological Society 113:11-21. Patterson, L.W., J.D. Hudgins, W.L. McClure 1996 Additional Excavations at Marik Site, Wharton Co., Texas. Journal of the Houston Archeological Society 115:9-15. Patterson, L.W., S.D. Hemming, and W.L. McClure 1997 Investigations at Site 41FB245, Fort Bend County, Texas. Fort Bend Archeological Society 5. Perttula, Timothy K., Douglas S. Frink 2001 Results of Recent Oxidizable Carbon Ratio Dating at Lake Naconiche Sites. East Texas Archaeological Society Newsletter 8(6):3-5. Perttula, Timothy K., Mike Turner, and Bo Nelson 1997 Radiocarbon and Oxidizable Carbon Ratio Dates from the Camp Joy Mound (41UR144) in Northeast Texas. Caddoan Archeology 7(4):10-16. Perttula, Timothy K. 1997 A Compendium of Radiocarbon and Oxidizable Carbon Ratio Dates from Archaeological Sites in East Texas, with a Discussion of the Age and Dating of Select Components and Phases. Radiocarbon 39(3):305-342. Saunders, Joe W., Rolfe D. Mandel, Roger T. Saucier, E. Thurman Allen, C.T. Hallmark, Jay K. Johnson, Edwin H. Jackson, Charles M. Allen, Gary L. Stringer, Douglas S. Frink, James K. Feathers, Stephen Williams, Kristen J. Gremillion, Malcolm F. Vidrine, and Reca Jones 1997 A Mound Complex in Louisiana at 5400-5000 Years Before Present. Science 277:1796-1799. Steen, Carl, Christopher Judge, and James Legg 1995 An Archaeological Survey of the Nature Conservancy's Peachtree Rock Preserve. Diachronic Research Foundation, Columbia, S.C. Tennis, Cynthia L. (Ed.), I. Waynne Cox, Jeffrey J. Durst, Donna D. Edmondson, Barbara A. Meissner, Steve A. Tomka, Douglas S. Frink, John G. Jones, and Rick C. Robinson 2001 Archaeological Investigations at Four San Antonio Missions: Mission Trails Underground Conversion Project.
Center for Archaeological Research, The University of Texas at San Antonio, Archaeological Survey Report 297. Webb, Paul A., and David S. Leigh 1995 Geomorphological and Archaeological Investigations of a Buried Site on the Yadkin River Floodplain. Southern Indian Studies 44:1-36. Wesler, Kit 1997 The Wickliffe Mounds Project: Implications for Late Mississippi Period Chronology, Settlement and Mortuary Patterns in Western Kentucky. Proceedings of the Prehistoric Society 63:261-283. [6] 2001 Excavations at Wickliffe Mounds. The University of Alabama Press, Tuscaloosa. [7] Worth, J.E. 1996 Upland Lamar, Vining, and Cartersville: An Interim Report from Raccoon Ridge. Early Georgia 24(1): 34-83.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Balloon satellite** Balloon satellite: A balloon satellite, sometimes referred to as a "satelloon", is inflated with gas after it has been put into orbit. Echo 1 and Echo 2 balloon satellites: The first flying body of this type was Echo 1, which was launched into a 1,600-kilometer (990 mi) high orbit on August 12, 1960, by the United States. It originally had a spherical shape measuring 30 meters (98 ft) in diameter, with a thin metal-coated plastic shell made of Mylar. It served for testing as a "passive" communication and geodetic satellite. Echo 1 and Echo 2 balloon satellites: One of the first radio contacts using the satellite was successful at a distance of nearly 4,000 kilometers (2,500 mi) (between the east coast of the US and California). By the time Echo 1 burned up in 1968, the measurements of its orbit by several dozen earth stations had improved our knowledge of the precise shape of the planet by nearly a factor of ten. Its successor was the similarly built Echo 2 (1964 to about 1970). This satellite circled the Earth about 400 kilometers (250 mi) lower, not at an angle of 47° like that of Echo 1, but in a polar orbit with an average angle of 81°. This enabled radio contact and measurements to be made at higher latitudes. Taking part in the Echo orbit checks to analyze disturbances in its orbit and in the Earth's gravitational field were thirty to fifty professional earth stations, as well as around two hundred amateur astronomers across the planet in "Moonwatch" stations; these contributed around half of all sightings. Range of radio waves, visibility: The Pythagorean theorem makes it easy to calculate how far away a satellite at such a great height remains visible. It can be determined that a satellite in a 1,500-kilometer (930 mi) orbit rises and sets when the horizontal distance is 4,600 kilometers (2,900 mi). However, the atmosphere causes this figure to vary slightly.
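The rise-and-set figure quoted above follows from the right triangle formed by the Earth's center, the ground station, and a satellite sitting on the station's horizon. A quick sketch (the Earth-radius value is an assumed mean, not from the source):

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean Earth radius

def horizon_range_km(altitude_km):
    """Slant range from a ground station to a satellite exactly on the
    station's horizon, via the Pythagorean theorem:
    range^2 + R^2 = (R + h)^2, so range = sqrt((R + h)^2 - R^2)."""
    r = EARTH_RADIUS_KM
    return math.sqrt((r + altitude_km) ** 2 - r ** 2)

# A satellite at 1,500 km altitude: close to the ~4,600 km quoted in the text.
print(round(horizon_range_km(1500)))  # 4622
```

The small discrepancy against the quoted 4,600 km reflects rounding and the atmospheric refraction the text mentions.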
Thus if two radio stations are 9,000 kilometers (5,600 mi) apart and the satellite's orbit goes between them, they may be able to receive each other's reflected radio signals if the signals are strong enough. Range of radio waves, visibility: Optical visibility is, however, lower than the range of radio waves, because: the satellite must be illuminated by the sun; the observer needs a dark sky (that is, he must be within the Earth's own shadow, on the planet's twilight or night side); the brightness of a sphere depends on the angle between the incident light and the observer (see phases of the moon); and the brightness of a sphere is much reduced as it approaches the horizon, as atmospheric extinction swallows up as much as 90% of the light. Despite this, there is no problem observing a flying body such as Echo 1 for precise purposes of satellite geodesy down to a 20° elevation, which corresponds to a distance of 2,900 kilometers (1,800 mi). In theory this means that distances of up to 5,000 kilometers (3,100 mi) between measuring points can be "bridged", and in practice this can be accomplished at up to 3,000–4,000 kilometers (1,900–2,500 mi). Range of radio waves, visibility: For visual and photographic observation of bright satellites and balloons, and regarding their geodetic use, see Echo 1 and Pageos for further information. Other balloon satellites: For special testing purposes, two or three satellites of the Explorer series were constructed as balloons (possibly Explorer 19 and 38). Echo 1 was an acknowledged success of radio engineering, but the passive principle of telecommunications (reflection of radio waves on the balloon's surface) was soon replaced by active systems. Telstar 1 (1962) and Early Bird (1965) were able to transmit several hundred audio channels simultaneously in addition to a television program exchanged between continents.
Other balloon satellites: Satellite geodesy with Echo 1 and 2 was able to fulfill all expectations not only for the planned 2–3 years, but for nearly 10 years. For this reason NASA soon planned the launch of the even larger 40-meter (130 ft) balloon Pageos. The name is from "passive geodetic satellite", and sounds similar to "Geos", a successful active electronic satellite from 1965. Other balloon satellites: Pageos and the global network: Pageos was specially launched for the "global network of satellite geodesy", which occupied about 20 full-time observing teams all over the world until 1973. Altogether they recorded 3000 usable photographic plates from 46 tracking stations with calibrated all-electronic BC-4 cameras (1:3 / focal lengths 30 and 45 cm (12 and 18 in)). From these images they were able to calculate the stations' positions three-dimensionally with a precision of about 4 meters (13 ft). The coordinator of this project was Professor Hellmut Schmid of ETH Zurich. Other balloon satellites: Three stations of the global network were situated in Europe: Catania in Sicily, Hohenpeißenberg in Bavaria and Tromsø in northern Norway. For the completion of the navigational network exact distance measurements were needed; these were taken on four continents and across Europe with a precision of 0.5 millimeters (0.020 in) per kilometer. The global network enabled the calculation of a "geodetic datum" (the geocentric position of the measurement system) on different continents, to within a few meters. By the early 1970s reliable values for nearly 100 coefficients of the Earth's gravity field could be calculated. 1965–1975: Success with flashing light beacons: Bright balloon satellites are easily visible and were measurable on fine-grained (less sensitive) photographic plates, even at the beginning of space travel, but there were problems with the exact chronometry of a satellite's track. In those days it could only be determined to within a few milliseconds.
Since satellites circle the earth at about 7–8 kilometers per second (4.3–5.0 mi/s), a time error of 0.002 second translates into a deviation of about 15 meters (49 ft). In order to meet a new goal of measuring the tracking stations precisely within a couple of years, a method of flashing light beacons was adopted around 1960. To build a three-dimensional measuring network, geodesy needs exactly defined target points, more so than a precise time. This precision is easily reached by having two tracking stations record the same series of flashes from one satellite. Flash beacon technology was already mature when the small electronic satellite Geos (later named Geos 1) was launched in November 1965. With its companion Geos 2, launched in January 1968, the GEOS system brought about a remarkable increase in measurement precision. From about 1975 on, almost all optical measurement methods lost their importance, as they were overtaken by speedy progress in electronic distance measurement. Only newly developed methods of observation using CCDs and the highly precise star positions of the astrometry satellite Hipparcos made further improvement possible in the measurement of distance. List of balloon satellites: abbreviations: ado = atmospheric density observations. pcr = passive communications reflector, satellite reflects microwave signals. spc = solar pressure calculations, estimate impact of solar wind on orbit. tri = satellite triangulation, measuring the Earth's surface. Sources: NSSDC Master Catalog, Heavens-Above, Jonathan's Space Report (HUGE: 5MB!), Astronautix, Mir EO-9
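The timing arithmetic earlier in this section (a 0.002 s clock error at orbital speed) can be checked directly; a trivial sketch:

```python
def along_track_error_m(speed_km_s, timing_error_s):
    """Position error along the orbit caused by a clock error, assuming the
    satellite moves at a constant speed over the interval."""
    return speed_km_s * 1000.0 * timing_error_s

print(along_track_error_m(7.5, 0.002))  # 15.0 meters, matching the quoted figure
```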
**Apprehension (understanding)** Apprehension (understanding): In psychology, apprehension (Lat. ad, "to"; prehendere, "to seize") is a term applied to a model of consciousness in which nothing is affirmed or denied of the object in question, but the mind is merely aware of ("seizes") it. "Judgment" (says Reid, ed. Hamilton, i. p. 414) "is an act of the mind, specifically different from simple apprehension or the bare conception of a thing". "Simple apprehension or conception can neither be true nor false." This distinction provides for the large class of mental acts in which we are simply aware of, or "take in", a number of familiar objects, about which we in general make no judgment, unless our attention is suddenly called by a new feature. Or again, two alternatives may be apprehended without any resultant judgment as to their respective merits. Similarly, G.F. Stout stated that while we have a very vivid idea of a character or an incident in a work of fiction, we can hardly be said in any real sense to have any belief or to make any judgment as to its existence or truth. With this mental state may be compared the purely aesthetic contemplation of music, wherein, apart from, say, a false note, the faculty of judgment is for the time inoperative. To these examples may be added the fact that one can fully understand an argument in all its bearings without in any way judging its validity. Without going into the question fully, it may be pointed out that the distinction between judgment and apprehension is relative. In every kind of thought, there is judgment of some sort in a greater or less degree of prominence. Judgment and thought are in fact psychologically distinguishable merely as different, though correlative, activities of consciousness. Professor Stout further investigates the phenomena of apprehension, and comes to the conclusion that "it is possible to distinguish and identify a whole without apprehending any of its constituent details."
On the other hand, if the attention focuses itself for a time on the apprehended object, there is an expectation that such details will, as it were, emerge into consciousness. Hence, he describes such apprehension as "implicit", and insofar as the implicit apprehension determines the order of such emergence, he describes it as "schematic". A good example of this process is the use of formulae in calculations; ordinarily the formula is used without question; if attention is fixed upon it, the steps by which it is shown to be universally applicable emerge, and the "schema" is complete in detail. With this result may be compared Kant's theory of apprehension as a synthetic act (the "synthesis of apprehension") by which the sensory elements of a perception are subjected to the formal conditions of time and space.
**Automatic bug fixing** Automatic bug fixing: Automatic bug fixing is the automatic repair of software bugs without the intervention of a human programmer. It is also commonly referred to as automatic patch generation, automatic bug repair, or automatic program repair. The typical goal of such techniques is to automatically generate correct patches that eliminate bugs in software programs without causing software regression. Specification: Automatic bug fixing is done according to a specification of the expected behavior, which can be, for instance, a formal specification or a test suite. A test suite – input/output pairs that specify the functionality of the program, possibly captured in assertions – can be used as a test oracle to drive the search. This oracle can in fact be divided between the bug oracle, which exposes the faulty behavior, and the regression oracle, which encapsulates the functionality any program repair method must preserve. Note that a test suite is typically incomplete and does not cover all possible cases. Therefore, it is often possible for a validated patch to produce expected outputs for all inputs in the test suite but incorrect outputs for other inputs. The existence of such validated but incorrect patches is a major challenge for generate-and-validate techniques. Recent successful automatic bug-fixing techniques often rely on additional information beyond the test suite, such as information learned from previous human patches, to further identify correct patches among validated patches. Another way to specify the expected behavior is to use formal specifications. Verification against full specifications that describe the whole program behavior, including functionality, is less common, because such specifications are typically not available in practice and the computational cost of such verification is prohibitive. For specific classes of errors, however, implicit partial specifications are often available.
For example, there are targeted bug-fixing techniques validating that the patched program can no longer trigger overflow errors in the same execution path. Techniques: Generate-and-validate: Generate-and-validate approaches compile and test each candidate patch to collect all validated patches that produce expected outputs for all inputs in the test suite. Such a technique typically starts with a test suite of the program, i.e., a set of test cases, at least one of which exposes the bug. An early generate-and-validate bug-fixing system is GenProg. The effectiveness of generate-and-validate techniques remains controversial, because they typically do not provide patch correctness guarantees. Nevertheless, the reported results of recent state-of-the-art techniques are generally promising. For example, on 69 systematically collected real-world bugs in eight large C software programs, the state-of-the-art bug-fixing system Prophet generates correct patches for 18 out of the 69 bugs. One way to generate candidate patches is to apply mutation operators to the original program. Mutation operators manipulate the original program, potentially via its abstract syntax tree representation, or a more coarse-grained representation such as operating at the statement level or block level. Earlier genetic improvement approaches operate at the statement level and carry out simple delete/replace operations, such as deleting an existing statement or replacing an existing statement with another statement from the same source file. Recent approaches use more fine-grained operators at the abstract syntax tree level to generate a more diverse set of candidate patches. Notably, the statement deletion mutation operator, and more generally removing code, is a reasonable repair strategy, or at least a good fault localization strategy. Another way to generate candidate patches consists of using fix templates. Fix templates are typically predefined changes for fixing specific classes of bugs.
Examples of fix templates include inserting a conditional statement to check whether the value of a variable is null, to fix a null pointer exception, or changing an integer constant by one to fix off-by-one errors. Techniques: Synthesis-based: Repair techniques exist that are based on symbolic execution. For example, SemFix uses symbolic execution to extract a repair constraint. Angelix introduced the concept of the angelic forest in order to deal with multiline patches. Under certain assumptions, it is possible to state the repair problem as a synthesis problem. SemFix uses component-based synthesis. Dynamoth uses dynamic synthesis. S3 is based on syntax-guided synthesis. SearchRepair converts potential patches into an SMT formula and queries for candidate patches that allow the patched program to pass all supplied test cases. Techniques: Data-driven: Machine learning techniques can improve the effectiveness of automatic bug-fixing systems. One example of such techniques learns from past successful patches from human developers, collected from open source repositories on GitHub and SourceForge. It then uses the learned information to recognize and prioritize potentially correct patches among all generated candidate patches. Alternatively, patches can be directly mined from existing sources. Example approaches include mining patches from donor applications or from Q&A web sites. Getafix is a language-agnostic approach developed and used in production at Facebook. Given a sample of code commits where engineers fixed a certain kind of bug, it learns human-like fix patterns that apply to future bugs of the same kind. Besides using Facebook's own code repositories as training data, Getafix learnt some fixes from open source Java repositories. When new bugs get detected, Getafix applies its previously learnt patterns to produce candidate fixes and ranks them within seconds.
It presents only the top-ranked fix for final validation by tools or an engineer, in order to save resources and, ideally, to act fast enough that no human time has yet been spent on fixing the same bug. Techniques: Template-based repair: For specific classes of errors, targeted automatic bug-fixing techniques use specialized templates: null pointer exception repair, with insertion of a conditional statement to check whether the value of a variable is null; integer overflow repair; buffer overflow repair; memory leak repair, with automated insertion of missing memory deallocation statements. Compared to generate-and-validate techniques, template-based techniques tend to have better bug-fixing accuracy but a much narrower scope. Use: There are multiple uses of automatic bug fixing: In a development environment: when encountering a bug, the developer activates a feature to search for a patch (for instance by clicking on a button). This search can also happen in the background, when the IDE proactively searches for solutions to potential problems, without waiting for explicit action from the developer. At runtime: when a failure happens at runtime, a binary patch can be searched for and applied online. An example of such a repair system is ClearView, which does repair on x86 code, with x86 binary patches. Search space: In essence, automatic bug fixing is a search activity, whether deductive-based or heuristic-based. The search space of automatic bug fixing is composed of all edits that can possibly be made to a program. There have been studies to understand the structure of this search space. Qi et al. showed that the original fitness function of GenProg is no better than random search at driving the search. Long et al.'s study indicated that correct patches can be considered sparse in the search space and that incorrect overfitting patches are vastly more abundant (see also the discussion of overfitting below).
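The generate-and-validate search over this edit space — enumerate small candidate edits, keep only those that pass every test — can be sketched minimally. The buggy function, the single AST-level mutation operator, and the test suite below are hypothetical illustrations, not taken from any real repair system:

```python
import ast

# Hypothetical buggy program: max2 returns the wrong variable in one branch.
BUGGY = """
def max2(a, b):
    if a < b:
        return a
    return a
"""

# The test suite acts as the oracle: (args, expected-output) pairs.
TESTS = [((1, 2), 2), ((5, 3), 5), ((0, 0), 0)]

def candidates(src):
    """Crude mutation operator: for each returned variable name, try
    substituting the other parameter, yielding one candidate per edit."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, ast.Return) and isinstance(node.value, ast.Name):
            for repl in ("a", "b"):
                if repl != node.value.id:
                    old = node.value.id
                    node.value.id = repl
                    yield ast.unparse(tree)  # candidate patch as source text
                    node.value.id = old      # undo the mutation

def validate(src):
    """Run the candidate against the whole test suite."""
    ns = {}
    exec(src, ns)
    return all(ns["max2"](*args) == want for args, want in TESTS)

patches = [p for p in candidates(BUGGY) if validate(p)]
print(len(patches))  # 1 — only the edit that restores correct behavior survives
```

Here the single mutation operator already reaches a correct patch; real systems face vastly larger edit spaces, which is exactly the search-space explosion discussed in the article.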
Overfitting: Sometimes, in test-suite based program repair, tools generate patches that pass the test suite yet are actually incorrect; this is known as the "overfitting" problem. "Overfitting" in this context refers to the fact that the patch overfits to the test inputs. There are different kinds of overfitting: incomplete fixing means that only some buggy inputs are fixed; regression introduction means some previously working features are broken after the patch (because they were poorly tested). Early prototypes for automatic repair suffered heavily from overfitting: on the ManyBugs C benchmark, Qi et al. reported that 104 of 110 plausible GenProg patches were overfitting. In the context of synthesis-based repair, Le et al. obtained more than 80% overfitting patches. Overfitting: One way to avoid overfitting is to filter out the generated patches. This can be done based on dynamic analysis. Alternatively, Tian et al. propose heuristic approaches to assess patch correctness. Limitations of automatic bug fixing: Automatic bug-fixing techniques that rely on a test suite do not provide patch correctness guarantees, because the test suite is incomplete and does not cover all cases. A weak test suite may cause generate-and-validate techniques to produce validated but incorrect patches that have negative effects such as eliminating desirable functionality, causing memory leaks, or introducing security vulnerabilities. One possible approach is to amplify the failing test suite by automatically generating further test cases that are then labelled as passing or failing. To minimize the human labelling effort, an automatic test oracle can be trained that gradually learns to classify test cases as passing or failing automatically, and only engages the bug-reporting user for uncertain cases. A limitation of generate-and-validate repair systems is search space explosion.
For a program, there are a large number of statements to change, and for each statement there are a large number of possible modifications. State-of-the-art systems address this problem by assuming that a small modification is enough to fix a bug, resulting in a search space reduction. Limitations of automatic bug fixing: The limitation of approaches based on symbolic analysis is that real-world programs are often converted to intractably large formulas, especially when modifying statements with side effects. Benchmarks: Benchmarks of bugs typically focus on one specific programming language. Benchmarks: In C, the ManyBugs benchmark collected by the GenProg authors contains 69 real-world defects and is widely used to evaluate many other bug-fixing tools for C. In Java, the main benchmark is Defects4J, now extensively used in most research papers on program repair for Java. Alternative benchmarks exist, such as the QuixBugs benchmark, which contains original bugs for program repair. Other benchmarks of Java bugs include Bugs.jar, based on past commits. Example tools: Automatic bug fixing is an active research topic in computer science. There are many implementations of various bug-fixing techniques, especially for C and Java programs. Note that most of these implementations are research prototypes for demonstrating their techniques, i.e., it is unclear whether their current implementations are ready for industrial usage or not. C: ClearView: A generate-and-validate tool for generating binary patches for deployed systems. It is evaluated on 10 security vulnerability cases. A later study shows that it generates correct patches for at least 4 of the 10 cases. GenProg: A seminal generate-and-validate bug-fixing tool. It has been extensively studied in the context of the ManyBugs benchmark. SemFix: The first solver-based bug-fixing tool for C. CodePhage: The first bug-fixing tool that directly transfers code across programs to generate patches for C programs.
Note that although it generates C patches, it can extract code from binary programs without source code. LeakFix: A tool that automatically fixes memory leaks in C programs. Prophet: The first generate-and-validate tool that uses machine learning techniques to learn useful knowledge from past human patches to recognize correct patches. It is evaluated on the same benchmark as GenProg and generates correct patches (i.e., equivalent to human patches) for 18 out of 69 cases. SearchRepair: A tool for replacing buggy code using snippets of code from elsewhere. It is evaluated on the IntroClass benchmark and generates much higher quality patches on that benchmark than GenProg, RSRepair, and AE. Angelix: An improved solver-based bug-fixing tool. It is evaluated on the GenProg benchmark. For 10 out of the 69 cases, it generates patches that are equivalent to human patches. Learn2Fix: The first human-in-the-loop semi-automatic repair tool. It extends GenProg to learn the condition under which a semantic bug is observed, via systematic queries to the user who is reporting the bug. It only works for programs that take and produce integers. Java: PAR: A generate-and-validate tool that uses a set of manually defined fix templates. QACrashFix: A tool that fixes Java crash bugs by mining fixes from Q&A web sites. ARJA: A repair tool for Java based on multi-objective genetic programming. NpeFix: An automatic repair tool for NullPointerException in Java, available on GitHub. Other languages: AutoFixE: A bug-fixing tool for the Eiffel language. It relies on the contracts (i.e., a form of formal specification) in Eiffel programs to validate generated patches. Getafix: Operates purely on AST transformations and thus requires only a parser and formatter. At Facebook it has been applied to Hack, Java and Objective-C.
Proprietary: DeepCode integrates public and private GitHub, GitLab and Bitbucket repositories to identify code fixes and improve software. Kodezi utilizes open-source data from GitHub repositories and Stack Overflow, together with privately trained models, to analyze code and instantly provide solutions and descriptions of coding bugs.
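The overfitting problem discussed above is easy to reproduce by hand. The candidate patch and the weak test suite below are hypothetical; the "fix" is validated by the suite yet wrong on untested inputs:

```python
def plausible_patch(x):
    """A candidate 'fix' for an absolute-value function. It passes the weak
    test suite below, yet is wrong for untested negative inputs — an
    overfitting patch in the article's sense."""
    return 2 if x < 0 else x

# Weak test suite: (input, expected) pairs; only one negative input is covered.
weak_suite = [(3, 3), (0, 0), (-2, 2)]

validated = all(plausible_patch(i) == o for i, o in weak_suite)
print(validated)                 # True: the patch is "validated" by the suite
print(plausible_patch(-5) == 5)  # False: but it is not actually correct
```

Amplifying the suite with further labelled inputs, as the article suggests, would reject this patch.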
**LED art** LED art: LED art is a form of light art constructed from light-emitting diodes. LEDs (light emitting diodes) are very inexpensive to purchase and have become a new way to make street art. Many artists who use LEDs are guerrilla artists, incorporating LEDs to produce temporary pieces in public places. LEDs may be used in installation art, sculptural pieces and interactive artworks. Infamous LED art: In early 2007, there was a bomb scare in Boston, Massachusetts in the United States caused by a guerrilla marketing campaign. An advertising firm working for Cartoon Network to promote the network's animated television show Aqua Teen Hunger Force, hired two artists to produce art for the ad campaign. The artists placed LED signs in various locations across ten cities. However, Boston was the only city that reacted by shutting down bridges and bringing in bomb squads to remove the LEDs. The majority of the light boards were removed and the artists were arrested. Cartoon Network general manager Jim Samples resigned as a result of the incident. Artists and works incorporating LEDs: Jenny Holzer - One of the most well known artists who incorporates LEDs into her work. She uses familiar statements and reinterprets them to alter their meanings. Artists and works incorporating LEDs: Liz LaManche - creates large paintings illuminated by color-changing LED light for a motion effect Liu Dao - an art collective in China that uses actors and filmmakers to make animated LED portraits. The group also combines traditional Chinese arts like papercutting with LEDs to highlight China's journey from tradition and modernity, and is directed by Thomas Charvériat to find originality through international collaboration. 
Artists and works incorporating LEDs: Titia Ex - artist from Amsterdam who is known for her installation Flower of the Universe. Leo Villareal - combines LED lights and encoded computer programming to create illuminated displays. Mel and Dorothy Tanner - creators of Lumonics, a multi-sensory art experience that features their LED-based sculptures, connected to a DMX lighting controller. Blinkies are small electronic devices that make very bright (usually flashing) light using LEDs and small batteries. They are often sold by vendors at night-time events that have fireworks displays such as Independence Day, Canada Day, or Guy Fawkes Night. They are also popular at raves, New Year's Eve parties and nighttime sporting events. Artists and works incorporating LEDs: There is no industry standard or official name for blinkies, but most common names use some combination of the terms flash, magnet, strobe, body, blink, light, and/or jewelry. Common examples are blinkies, blinkees, body lights, blinky body lights, magnetic flashers, or flashing jewelry. Artists and works incorporating LEDs: Uses: Blinkies are most often used for amusement at raves, parties and nighttime events. Their other uses include: blinkies imprinted with company logos at conventions; safety lights for children during Halloween or nighttime events; fun and safety during camping trips; emergency flashers for disabled automobiles or lost hikers (most blinkies have over a one-mile visibility range at night). The term blinky is often used for bicycle lights which flash. In some countries, blinkies can be used as a primary light on a bicycle. Artists and works incorporating LEDs: Blinkies also can be attached to mobiles (cell phones). When the mobile turns on, makes a call, or receives a call, the blinky will keep flashing. "Winky blinkies" can refer to stage and film props which display lighting effects, or "gags", during a dramatic production.
Artists and works incorporating LEDs: Construction: Body: A typical blinky is a small plastic two-piece cylinder wide enough to accept a button cell battery, with a small etched circuit board on the face and threads on the open end, paired with a cylinder cap which screws on to seal and secure the two as one. The flashing LED circuit board face can be round and enclosed by the cylinder, or one of a variety of larger colored shapes, such as logos, that are glued to the outside face of the cylinder end. Common designs utilize a rubber ring gasket as an on/off switch. When placed between the batteries and circuit board inside the front cover, tightening the screw base deforms and flattens the gasket, forcing the battery tip to contact the back of the printed circuit board, which completes the circuit. Modern blinkees are more likely to use a battery case that opens with a small eyeglasses-type screw and a push-button on/off switch. Artists and works incorporating LEDs: Back: The most common designs use a set of strong magnets, one at the back of the blinky, and another that can be removed. This allows the light to be easily attached to clothes, or stuck onto any magnetic metal such as buttons or belt buckles. Clips are often used to make earrings, a loop can make a pendant, or a ring can be welded to the back to make a finger ring. Double-sided adhesive pads are sometimes used to stick the blinky directly to the body, most often in the navel. Artists and works incorporating LEDs: Circuit board: The circuit board typically has anywhere from 2 to as many as 25 micro-LEDs. LEDs that emit different colors within the visible spectrum are commonly used, whereas ultraviolet or infrared LEDs are less common in blinkies. Blue, white, violet, and ultraviolet LEDs often require two or more battery cells, due to their higher voltage requirements. The visible side of the etched circuit board can be constructed to flash in a variety of ways, especially where there are multiple LEDs in multiple colours.
A clear plastic conformal coating material such as silicone, acrylic or epoxy protects the fragile LEDs on the exposed front of the board. Shaped boards have literally hundreds of variations combined with imprinting. Common shapes (besides the classic small circle) are stars, hearts, flowers, flags, animals, holiday symbols (like Halloween jack-o-lanterns), and sports team logos.
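The remark above that blue, white, violet, and ultraviolet LEDs often need two or more cells follows from their higher forward voltage. A rough sketch, in which the 3 V cell voltage and the per-color forward voltages are typical assumed figures, not values from the source:

```python
import math

def cells_needed(led_forward_voltage, cell_voltage=3.0):
    """Minimum number of series button cells whose combined voltage reaches
    the LED's forward voltage (3.0 V assumes a lithium coin cell)."""
    return math.ceil(led_forward_voltage / cell_voltage)

print(cells_needed(2.0))  # 1 — a typical red LED runs on a single 3 V coin cell
print(cells_needed(3.2))  # 2 — blue/white/UV LEDs need two or more cells
```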
**Ky Fan lemma** Ky Fan lemma: In mathematics, Ky Fan's lemma (KFL) is a combinatorial lemma about labellings of triangulations. It is a generalization of Tucker's lemma. It was proved by Ky Fan in 1952. Definitions: KFL uses the following concepts. Bn: the closed n-dimensional ball. Sn−1: its boundary sphere. T: a triangulation of Bn. T is called boundary antipodally symmetric if the subset of simplices of T which are in Sn−1 provides a triangulation of Sn−1 in which, if σ is a simplex, then so is −σ. L: a labeling of the vertices of T, which assigns to each vertex a non-zero integer: L:V(T)→Z∖{0}. L is called boundary odd if, for every vertex v∈Sn−1, L(−v)=−L(v). An edge of T is called a complementary edge of L if the labels of its two endpoints have the same size and opposite signs, e.g. {−2, +2}. An n-dimensional simplex of T is called an alternating simplex of T if its labels have different sizes with alternating signs, e.g. {−1, +2, −3} or {+3, −5, +7}. Statement: Let T be a boundary-antipodally-symmetric triangulation of Bn and L a boundary-odd labeling of T. If L has no complementary edge, then L has an odd number of n-dimensional alternating simplices. Corollary: Tucker's lemma: By definition, an n-dimensional alternating simplex must have labels with n + 1 different sizes. This means that, if the labeling L uses only n different sizes (i.e. L:V(T)→{+1,−1,+2,−2,…,+n,−n}), it cannot have an n-dimensional alternating simplex. Hence, by KFL, L must have a complementary edge. Proof: KFL can be proved constructively based on a path-based algorithm. The algorithm starts at a certain point or edge of the triangulation, then goes from simplex to simplex according to prescribed rules, until it is not possible to proceed any more. It can be proved that the path must end in an alternating simplex. The proof is by induction on n. Proof: The basis is n=1. In this case, Bn is the interval [−1,1] and its boundary is the set {−1,1}.
The labeling L is boundary-odd, so L(−1)=−L(+1). Without loss of generality, assume that L(−1)=−1 and L(+1)=+1. Start at −1 and go right. At some edge e, the labeling must change from negative to positive. Since L has no complementary edges, e must have a negative label and a positive label with a different size (e.g. −1 and +2); this means that e is a 1-dimensional alternating simplex. Moreover, if at any point the labeling changes again from positive to negative, then this change makes a second alternating simplex, and by the same reasoning as before there must be a third alternating simplex later. Hence, the number of alternating simplices is odd. Proof: The following description illustrates the induction step for n=2. In this case Bn is a disc and its boundary is a circle. The labeling L is boundary-odd, so in particular L(−v)=−L(v) for some point v on the boundary. Split the boundary circle into two semi-circles and treat each semi-circle as an interval. By the induction basis, this interval must have an alternating simplex, e.g. an edge with labels (+1,−2). Moreover, the number of such edges on both intervals is odd. Using the boundary criterion, on the boundary we have an odd number of edges where the smaller number is positive and the larger negative, and an odd number of edges where the smaller number is negative and the larger positive. We call the former decreasing, the latter increasing. Proof: There are two kinds of triangles. If a triangle is not alternating, it must have an even number of increasing edges and an even number of decreasing edges. If a triangle is alternating, it must have one increasing edge and one decreasing edge. Summing the number of increasing edges over all triangles counts each interior edge twice and each boundary edge once; since the boundary contains an odd number of increasing edges, this sum is odd, so the number of alternating triangles must be odd. By induction, this proof can be extended to any dimension.
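The one-dimensional case argued above can be sketched in code. This is an illustrative check, not part of the lemma's literature; the labeling and the function names are invented for the example:

```python
# Sketch of the 1-dimensional case of Ky Fan's lemma: labels along a
# triangulation of [-1, 1], read left to right, with L(-1) = -1 and
# L(+1) = +1 (boundary-odd).  Function names are illustrative.

def complementary_edges(labels):
    # Edges whose endpoint labels have the same size and opposite signs.
    return [(a, b) for a, b in zip(labels, labels[1:]) if a == -b]

def alternating_edges(labels):
    # 1-dimensional alternating simplices: opposite signs, different sizes.
    return [(a, b) for a, b in zip(labels, labels[1:])
            if a * b < 0 and abs(a) != abs(b)]

labels = [-1, -3, +2, -4, +5, +1]        # an arbitrary boundary-odd labeling
assert not complementary_edges(labels)   # hypothesis of the lemma holds
# With no complementary edge, the number of alternating edges is odd:
assert len(alternating_edges(labels)) % 2 == 1   # here it is 3
```

Any boundary-odd labeling without complementary edges must change sign an odd number of times between −1 and +1, which is exactly the parity the assertions verify.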
**Creatinase** Creatinase: In enzymology, a creatinase (EC 3.5.3.3) is an enzyme that catalyzes the chemical reaction creatine + H2O ⇌ sarcosine + urea. Thus, the two substrates of this enzyme are creatine and H2O, whereas its two products are sarcosine and urea. Creatinase: The native enzyme was shown to be made up of two subunit monomers via SDS-polyacrylamide gel electrophoresis. The molecular weight of each subunit was estimated to be 47,000 g/mol. The enzyme works as a homodimer, and is induced by choline chloride. Each monomer of creatinase has two clearly defined domains, a small N-terminal domain and a large C-terminal domain. Each of the two active sites is made by residues of the large domain of one monomer and some residues of the small domain of the other monomer. Following inhibition experiments, it has been suggested that a sulfhydryl group is located on or near the active site of the enzyme. Creatinase has been found to be most active at pH 8 and is most stable between pH 6 and 8 for 24 hours at 37 °C. This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in linear amidines. The systematic name of this enzyme class is creatine amidinohydrolase. This enzyme participates in arginine and proline metabolism. Structural studies: As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1CHM and 1KP0.
**KATNA1** KATNA1: Katanin p60 ATPase-containing subunit A1 is an enzyme that in humans is encoded by the KATNA1 gene. Microtubules, polymers of alpha and beta tubulin subunits, form the mitotic spindle of a dividing cell and help to organize membranous organelles during interphase. Katanin is a heterodimer that consists of a 60 kDa ATPase (p60 subunit A 1) and an 80 kDa accessory protein (p80 subunit B 1). The p60 subunit acts to sever and disassemble microtubules, while the p80 subunit targets the enzyme to the centrosome. This gene encodes the p60 subunit. This protein is a member of the AAA family of ATPases.
**Drysdallite** Drysdallite: Drysdallite is a rare molybdenum selenium sulfide mineral with formula Mo(Se,S)2. It crystallizes in the hexagonal system as small pyramidal crystals or in cleavable masses. It is an opaque metallic mineral with a Mohs hardness of 1 to 1.5 and a specific gravity of 6.25. Like molybdenite it is pliable with perfect cleavage. It was first described in 1973 for an occurrence in an oxidized uranium deposit near Solwezi, Zambia. It was named for Alan Roy Drysdall, the director of the Zambian geological survey.
**Cilomilast** Cilomilast: Cilomilast (INN, codenamed SB-207,499, proposed trade name Ariflo) is a drug which was developed for the treatment of respiratory disorders such as asthma and chronic obstructive pulmonary disease (COPD). It is orally active and acts as a selective phosphodiesterase-4 inhibitor. Phosphodiesterase (PDE) inhibitors, such as theophylline, have been used to treat COPD for decades; however, the clinical benefits of these agents have never been shown to outweigh the risks of their numerous adverse effects. Four clinical trials evaluating the efficacy of cilomilast were identified, all using the usual randomized, double-blind, placebo-controlled protocols. It showed reasonable efficacy for treating COPD, but side effects were problematic, and it is unclear whether cilomilast will be marketed or merely used in the development of newer drugs. Cilomilast is a second-generation PDE4 inhibitor with anti-inflammatory effects that target the bronchoconstriction, mucus hypersecretion, and airway remodeling associated with COPD. History: GlaxoSmithKline (GSK) filed for drug approval with the U.S. FDA at the end of 2002 and in January 2003 with the European Medicines Evaluation Agency (EMEA). In October 2003 the FDA issued an approvable letter for use of cilomilast in maintenance of lung function in COPD patients poorly responsive to salbutamol, despite an earlier decision by the FDA advisory panel to reject approval. The rejection was based on concerns over the efficacy of the agent, as well as gastrointestinal side effects. Before issuing final approval, however, the FDA requested additional efficacy and safety data. The development of the drug was finally abandoned by GSK.
**4-Hydroxy-2-oxopentanoic acid** 4-Hydroxy-2-oxopentanoic acid: 4-Hydroxy-2-oxopentanoic acid, also known as 4-hydroxy-2-oxovalerate, is formed by the decarboxylation of 4-oxalocrotonate by 4-oxalocrotonate decarboxylase. It is degraded by 4-hydroxy-2-oxovalerate aldolase, forming acetaldehyde and pyruvate, and is reversibly dehydrated by 2-oxopent-4-enoate hydratase to 2-oxopent-4-enoate.
**MPLAB devices** MPLAB devices: The MPLAB series of devices are programmers and debuggers for Microchip PIC and dsPIC microcontrollers, developed by Microchip Technology. The ICD family of debuggers has been produced since the release of the first Flash-based PIC microcontrollers, and the latest ICD 3 currently supports all current PIC and dsPIC devices. It is the most popular combination debugging/programming tool from Microchip. The REAL ICE emulator is similar to the ICD, with the addition of better debugging features, and various add-on modules that expand its usage scope. The ICE is a family of discontinued in-circuit emulators for PIC and dsPIC devices, and is currently superseded by the REAL ICE. MPLAB ICD: The MPLAB ICD is the first in-circuit debugger product by Microchip, and is currently discontinued and superseded by the ICD 2. The ICD connected to the engineer's PC via RS-232, and connected to the device via ICSP. The ICD supported devices within the PIC16C and PIC16F families, and supported full speed execution, or single step interactive debugging. Only one hardware breakpoint was supported by the ICD. MPLAB ICD 2: The MPLAB ICD 2 is a discontinued in-circuit debugger and programmer by Microchip, and is currently superseded by the ICD 3. The ICD 2 connects to the engineer's PC via USB or RS-232, and connects to the device via ICSP. The ICD 2 supports most PIC and dsPIC devices within the PIC10, PIC12, PIC16, PIC18, dsPIC, rfPIC and PIC32 families, and supports full speed execution, or single step interactive debugging. At breakpoints, data and program memory can be read and modified using the MPLAB IDE. The ICD 2 firmware is field upgradeable using the MPLAB IDE. The ICD 2 can be used to erase, program or reprogram PIC MCU program memory, while the device is installed on target hardware, using ICSP. Target device voltages from 2.0V to 6.0V are supported.
MPLAB ICD 3: The MPLAB ICD 3 is an in-circuit debugger and programmer by Microchip, and is the latest in the ICD series. The ICD 3 connects to the engineer's PC via USB, and connects to the device via ICSP. The ICD 3 is entirely USB-bus-powered, and is 15x faster than the ICD 2 for programming devices. The ICD 3 supports all current PIC and dsPIC devices within the PIC10, PIC12, PIC16, PIC18, dsPIC, rfPIC and PIC32 families, and supports full speed execution, or single step interactive debugging. At breakpoints, data and program memory can be read and modified using the MPLAB IDE. The ICD 3 firmware is field upgradeable using the MPLAB IDE. The ICD 3 can be used to erase, program or reprogram PIC MCU program memory, while the device is installed on target hardware, using ICSP. Target device voltages from 2.0V to 5.5V are supported. The ICD 3 has over-voltage protection in the probe drivers to guard against power surges from the target. All lines have over-current protection. The ICD 3 can also provide power to a target, up to 100 mA. MPLAB REAL ICE: The MPLAB REAL ICE (In-Circuit Emulator) is a high-speed emulator for Microchip devices. It debugs and programs PIC and dsPIC microcontrollers in conjunction with the MPLAB IDE, while the target device is "in-circuit". The REAL ICE is significantly faster than the ICD 2 for programming and debugging. The REAL ICE connects to the engineer's PC via a USB 2.0 interface, and connects to the target device via ICSP (PGC/PGD programming pins), typically using an RJ11 connector. LVDS is also available for high-speed data transfer between the device and the REAL ICE. MPLAB REAL ICE is field upgradeable through firmware downloads in MPLAB IDE. MPLAB REAL ICE: The REAL ICE supports 8-bit devices (PIC10, PIC12, PIC16, PIC18), 16-bit devices (PIC24, dsPIC) and 32-bit devices (PIC32MX).
Performance Pak: The REAL ICE Performance Pak is an optional add-on to the REAL ICE that consists of a High Speed Probe Driver and Receiver employing two CAT5 cables. Debug pins are driven using LVDS communications, and the additional trace connections allow high speed serial trace uploads to the PC. MPLAB REAL ICE: Isolator: The REAL ICE Isolator is an optional add-on to the REAL ICE that enables connectivity to AC and high-voltage applications not referenced to ground. Control signals are magnetically or optically isolated, providing up to 2.5 kV equivalent isolation protection. The isolator acts as an isolated bridge, where signals are passed through with complete transparency to the MPLAB REAL ICE or MPLAB IDE. MPLAB ICE2000: The MPLAB ICE2000 is a discontinued in-circuit emulator for PIC and dsPIC devices. It has been superseded by the REAL ICE. The ICE2000 connects to the engineer's PC via a parallel port interface, and a USB converter is available. The ICE2000 requires emulator modules, and the test hardware must provide a socket which can take either an emulator module or a production device. MPLAB ICE4000: The MPLAB ICE4000 is a discontinued in-circuit emulator for PIC and dsPIC devices. It has been superseded by the REAL ICE. The ICE4000 is no longer directly advertised on Microchip's website, and Microchip states that it is not recommended for new designs. The ICE4000 connects to the engineer's PC via a USB 2.0 interface. PIC devices under debug with the ICE4000 ran at full speed, and the emulator supported unlimited breakpoints and complex break/trigger logic. The emulator supported multiple external inputs and external outputs to sync with other instruments.
**Thoracic ganglia** Thoracic ganglia: The thoracic ganglia are paravertebral ganglia. The thoracic portion of the sympathetic trunk typically has 12 thoracic ganglia. Emerging from the ganglia are thoracic splanchnic nerves (the cardiopulmonary, the greater, lesser, and least splanchnic nerves) that help provide sympathetic innervation to thoracic and abdominal structures. The thoracic part of the sympathetic trunk lies posterior to the costovertebral pleura and is hence not a content of the posterior mediastinum. Also, the ganglia of the thoracic sympathetic trunk have both white and gray rami communicantes. The white rami communicantes carry sympathetic fibers arising in the spinal cord into the sympathetic trunk, while the gray rami communicantes carry postganglionic nerve fibers of the sympathetic nervous system back to the spinal nerves.
**Thermodynamic process** Thermodynamic process: Classical thermodynamics considers three main kinds of thermodynamic process: (1) changes in a system, (2) cycles in a system, and (3) flow processes. Thermodynamic process: (1) A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, which depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress. Thermodynamic process: As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact. Thermodynamic process: A really possible or actual thermodynamic process, considered closely, involves friction.
This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may occur with a theoretical slowness that avoids friction. It also contrasts with idealized frictionless processes in the surroundings, which may be thought of as including 'purely mechanical systems'; this difference comes close to defining a thermodynamic process. (2) A cyclic process carries the system through a cycle of stages, starting and being completed in some particular state. The descriptions of the staged states of the system are not the primary concern. The primary concern is the sums of matter and energy inputs and outputs to the cycle. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation, while the concept of the thermodynamic state variable was being developed. Thermodynamic process: (3) Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. Flow processes are of interest in engineering. Kinds of process: Cyclic process Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle. The descriptions of the staged states of the system may be of little or even no interest. A cycle is a sequence of a small number of thermodynamic processes that, repeated indefinitely often, returns the system to its original state. For this, the staged states themselves are not necessarily described, because it is the transfers that are of interest.
It is reasoned that if the cycle can be repeated indefinitely often, then it can be assumed that the states are recurrently unchanged. The condition of the system during the several staged processes may be of even less interest than is the precise nature of the recurrent states. If, however, the several staged processes are idealized and quasi-static, then the cycle is described by a path through a continuous progression of equilibrium states. Kinds of process: Flow process Defined by flows through a system, a flow process is a steady state of flow into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. The states of the inflow and outflow materials consist of their internal states, and of their kinetic and potential energies as whole bodies. Very often, the quantities that describe the internal states of the input and output materials are estimated on the assumption that they are bodies in their own states of internal thermodynamic equilibrium. Because rapid reactions are permitted, the thermodynamic treatment may be approximate, not exact. A cycle of quasi-static processes: A quasi-static thermodynamic process can be visualized by graphically plotting the path of idealized changes to the system's state variables. In the example, a cycle consisting of four quasi-static processes is shown. Each process has a well-defined start and end point in the pressure-volume state space. In this particular example, processes 1 and 3 are isothermal, whereas processes 2 and 4 are isochoric. The PV diagram is a particularly useful visualization of a quasi-static process, because the area under the curve of a process is the amount of work done by the system during that process. 
Thus work is considered to be a process variable, as its exact value depends on the particular path taken between the start and end points of the process. Similarly, heat may be transferred during a process, and it too is a process variable. Conjugate variable processes: It is often useful to group processes into pairs, in which each variable held constant is one member of a conjugate pair. Pressure – volume The pressure–volume conjugate pair is concerned with the transfer of mechanical energy as the result of work. An isobaric process occurs at constant pressure. An example would be to have a movable piston in a cylinder, so that the pressure inside the cylinder is always at atmospheric pressure, although it is separated from the atmosphere. In other words, the system is dynamically connected, by a movable boundary, to a constant-pressure reservoir. Conjugate variable processes: An isochoric process is one in which the volume is held constant, with the result that the mechanical PV work done by the system will be zero. On the other hand, work can be done isochorically on the system, for example by a shaft that drives a rotary paddle located inside the system. It follows that, for the simple system of one deformation variable, any heat energy transferred to the system externally will be absorbed as internal energy. An isochoric process is also known as an isometric process or an isovolumetric process. An example would be to place a closed tin can of material into a fire. To a first approximation, the can will not expand, and the only change will be that the contents gain internal energy, evidenced by increase in temperature and pressure. Mathematically, δQ=dU . The system is dynamically insulated, by a rigid boundary, from the environment. Conjugate variable processes: Temperature – entropy The temperature-entropy conjugate pair is concerned with the transfer of energy, especially for a closed system. An isothermal process occurs at a constant temperature. 
An example would be a closed system immersed in and thermally connected with a large constant-temperature bath. Energy gained by the system, through work done on it, is lost to the bath, so that its temperature remains constant. Conjugate variable processes: An adiabatic process is a process in which there is no matter or heat transfer, because a thermally insulating wall separates the system from its surroundings. For the process to be natural, either (a) work must be done on the system at a finite rate, so that the internal energy of the system increases; the entropy of the system increases even though it is thermally insulated; or (b) the system must do work on the surroundings, which then suffer increase of entropy, as well as gaining energy from the system. Conjugate variable processes: An isentropic process is customarily defined as an idealized quasi-static reversible adiabatic process of transfer of energy as work. Otherwise, for a constant-entropy process, if work is done irreversibly, heat transfer is necessary, so that the process is not adiabatic, and an accurate artificial control mechanism is necessary; such is therefore not an ordinary natural thermodynamic process. Conjugate variable processes: Chemical potential – particle number The processes just above have assumed that the boundaries are also impermeable to particles. Otherwise, we may assume boundaries that are rigid, but are permeable to one or more types of particle. Similar considerations then hold for the chemical potential–particle number conjugate pair, which is concerned with the transfer of energy via this transfer of particles. Conjugate variable processes: In a constant chemical potential process the system is particle-transfer connected, by a particle-permeable boundary, to a constant-µ reservoir. Conjugate variable processes: The conjugate here is a constant particle number process. These are the processes outlined just above.
There is no energy added to or subtracted from the system by particle transfer. The system is particle-transfer-insulated from its environment by a boundary that is impermeable to particles, but permissive of transfers of energy as work or heat. These processes are the ones by which thermodynamic work and heat are defined, and for them, the system is said to be closed. Thermodynamic potentials: Any of the thermodynamic potentials may be held constant during a process. For example: An isenthalpic process introduces no change in enthalpy in the system. Polytropic processes: A polytropic process is a thermodynamic process that obeys the relation PV^n = C, where P is the pressure, V is the volume, n is any real number (the "polytropic index"), and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion of a gas, but in some cases also liquids and solids. Processes classified by the second law of thermodynamics: According to Planck, one may think of three main classes of thermodynamic process: natural, fictively reversible, and impossible or unnatural. Processes classified by the second law of thermodynamics: Natural process Only natural processes occur in nature. For thermodynamics, a natural process is a transfer between systems that increases the sum of their entropies, and is irreversible. Natural processes may occur spontaneously upon the removal of a constraint, or upon some other thermodynamic operation, or may be triggered in a metastable or unstable system, as for example in the condensation of a supersaturated vapour. Planck emphasised the occurrence of friction as an important characteristic of natural thermodynamic processes that involve transfer of matter or energy between system and surroundings.
Processes classified by the second law of thermodynamics: Fictively reversible process To describe the geometry of graphical surfaces that illustrate equilibrium relations between thermodynamic functions of state, one can fictively think of so-called "reversible processes". They are convenient theoretical objects that trace paths across graphical surfaces. They are called "processes" but do not describe naturally occurring processes, which are always irreversible. Because the points on the paths are points of thermodynamic equilibrium, it is customary to think of the "processes" described by the paths as fictively "reversible". Reversible processes are always quasistatic processes, but the converse is not always true. Processes classified by the second law of thermodynamics: Unnatural process Unnatural processes are logically conceivable but do not occur in nature. They would decrease the sum of the entropies if they occurred. Quasistatic process A quasistatic process is an idealized or fictive model of a thermodynamic "process" considered in theoretical studies. It does not occur in physical reality. It may be imagined as happening infinitely slowly so that the system passes through a continuum of states that are infinitesimally close to equilibrium.
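The quasi-static work integral discussed above (work as the area under the process path on a PV diagram) and the polytropic relation PV^n = C can be illustrated numerically. This is a hedged sketch: the gas values (nRT, the volumes, the polytropic index) are arbitrary assumed numbers, and the closed forms W = nRT ln(V2/V1) and W = (P1V1 - P2V2)/(n - 1) are standard textbook results, not quoted from this article:

```python
# Illustrative sketch: for a quasi-static process, work done by the system
# is the area under its path on a PV diagram, W = integral of P dV.
# Checked numerically for an ideal-gas isothermal expansion (PV = nRT)
# and a polytropic process (PV^n = C).  All numeric values are assumed.
import math

def work_numeric(P, V1, V2, steps=100000):
    """Trapezoidal integral of P(V) dV from V1 to V2."""
    dV = (V2 - V1) / steps
    total = 0.0
    for i in range(steps):
        Va, Vb = V1 + i * dV, V1 + (i + 1) * dV
        total += 0.5 * (P(Va) + P(Vb)) * dV
    return total

nRT = 2494.2          # J, roughly 1 mol of ideal gas at 300 K (assumed)
V1, V2 = 0.01, 0.02   # m^3, assumed start and end volumes

# Isothermal: P = nRT / V, closed form W = nRT * ln(V2/V1)
W_iso = work_numeric(lambda V: nRT / V, V1, V2)
assert abs(W_iso - nRT * math.log(V2 / V1)) < 1.0

# Polytropic with index n = 1.4: P = C / V^n,
# closed form W = (P1*V1 - P2*V2) / (n - 1)
n = 1.4
C = nRT * V1 ** (n - 1)          # choose C so that P1 * V1 = nRT
P1, P2 = C / V1 ** n, C / V2 ** n
W_poly = work_numeric(lambda V: C / V ** n, V1, V2)
assert abs(W_poly - (P1 * V1 - P2 * V2) / (n - 1)) < 1.0
```

The numeric integral and the closed forms agree, which is exactly the "area under the curve" reading of the PV diagram; work depends on the path taken, which is why the two processes give different values between the same volumes.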
**Dynamic knowledge repository** Dynamic knowledge repository: The dynamic knowledge repository (DKR) is a concept developed by Douglas C. Engelbart as a primary strategic focus for allowing humans to address complex problems. He proposed that a DKR will enable us to develop a collective IQ greater than any individual's IQ. References and discussion of Engelbart's DKR concept are available at the Doug Engelbart Institute. Definition: A knowledge repository is a computerized system that systematically captures, organizes and categorizes an organization's knowledge. The repository can be searched and data can be quickly retrieved. Effective knowledge repositories include factual, conceptual, procedural and meta-cognitive knowledge. The key features of knowledge repositories include communication forums. Definition: A knowledge repository can take many forms to "contain" the knowledge it holds. A customer database is a knowledge repository of customer information and insights – or electronic explicit knowledge. A library is a knowledge repository of books – physical explicit knowledge. A community of experts is a knowledge repository of tacit knowledge or experience. The nature of the repository only changes to contain and manage the type of knowledge it holds. A repository (as opposed to an archive) is designed to get knowledge out. It should therefore have some rules of structure, classification, taxonomy, record management, etc., to facilitate user engagement.
**Dibenzylaniline** Dibenzylaniline: Dibenzylaniline or N,N-dibenzylaniline is a chemical compound consisting of aniline with two benzyl groups as substituents on the nitrogen. The substance crystallizes in the monoclinic crystal system. The space group is P21/n. The unit cell dimensions are a=11.751 Å, b=9.060 Å, c=29.522 Å, and β=94.589°. Each unit cell contains two molecules. In the solid, van der Waals forces hold the molecules together. The substance can also crystallize in an alternate monoclinic form. Production: One method to produce dibenzylaniline uses a mixture of dibutyltin dichloride and dibutylstannane with N-benzylideneaniline, along with hexamethylphosphoric triamide dissolved in tetrahydrofuran, which yields a tin amide compound. This then reacts with benzyl bromide to yield dibenzylaniline. Another method uses aniline and benzyl bromide. Use: It is used to make dyes. A nitroso derivative (made using nitrite and hydroxylamine) can be used in a colourimetric test for palladium.
**Ligamentum venosum** Ligamentum venosum: The ligamentum venosum, also known as Arantius' ligament, is the fibrous remnant of the ductus venosus of the fetal circulation. Usually, it is attached to the left branch of the portal vein within the porta hepatis. It may be continuous with the round ligament of liver. It is invested by the peritoneal folds of the lesser omentum within a fissure on the visceral/posterior surface of the liver between the caudate and main parts of the left lobe. It is grouped with the liver in Terminologia Anatomica.
**Hemstitch** Hemstitch: Hemstitch or hem-stitch is a decorative drawn thread work or openwork hand-sewing technique for embellishing the hem of clothing or household linens. Unlike an ordinary hem, hemstitching can employ embroidery thread in a contrasting color so as to be noticeable. In hemstitching, one or more threads are drawn out of the fabric parallel and next to the turned hem, and stitches bundle the remaining threads in a variety of decorative patterns while securing the hem in place. Multiple rows of drawn thread work may be used.Hand hemstitching can be imitated by a hemstitching machine which has a piercer that pierces holes into the fabric and two separate needles that sew the hole open. There are also hemstitcher attachments for home sewing machines, and simple decorative stitches can be used over drawn threads to suggest hand-hemstitching.
**Solar eclipse of April 8, 2005** Solar eclipse of April 8, 2005: A total solar eclipse occurred at the Moon's ascending node on April 8, 2005. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Solar eclipse of April 8, 2005: This eclipse was a hybrid event: a narrow total eclipse that began and ended as an annular eclipse. It was visible within a narrow corridor in the Pacific Ocean. The path of the eclipse started south of New Zealand, crossed the Pacific Ocean in a diagonal path and ended in the extreme northwestern part of South America. The total solar eclipse was not visible on any land, while the annular solar eclipse was visible in the southern tip of Puntarenas Province of Costa Rica, Panama, Colombia and Venezuela. Related eclipses: Eclipse season This is the first eclipse this season. Second eclipse this season: 24 April 2005 Penumbral Lunar Eclipse Eclipses of 2005 A hybrid solar eclipse on April 8. A penumbral lunar eclipse on April 24. An annular solar eclipse on October 3. A partial lunar eclipse on October 17. Related eclipses: Tzolkinex Preceded: Solar eclipse of February 26, 1998 Followed: Solar eclipse of May 20, 2012 Half-Saros Preceded: Lunar eclipse of April 4, 1996 Followed: Lunar eclipse of April 15, 2014 Tritos Preceded: Solar eclipse of May 10, 1994 Followed: Solar eclipse of March 9, 2016 Solar Saros 129 Preceded: Solar eclipse of March 29, 1987 Followed: Solar eclipse of April 20, 2023 Inex Preceded: Solar eclipse of April 29, 1976 Followed: Solar eclipse of March 20, 2034 Solar eclipses 2004–2007 This eclipse is a member of a semester series.
An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. Related eclipses: Saros 129 It is part of Saros cycle 129, repeating every 18 years, 11 days, and containing 80 events. The series started with a partial solar eclipse on October 3, 1103. It contains annular eclipses from May 6, 1464 through March 18, 1969, hybrid eclipses from March 29, 1987 through April 20, 2023, and total eclipses from April 30, 2041 through July 26, 2185. The series ends at member 80 as a partial eclipse on February 21, 2528. The longest duration of totality will be 3 minutes, 43 seconds on June 25, 2131. All eclipses in this series occur at the Moon's ascending node. Related eclipses: Metonic series The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur on nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that, or every 3.8 years (1387.94 days). All eclipses in this table occur at the Moon's ascending node.
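The cycle lengths quoted above (a semester of about 177 days 4 hours, the saros of 18 years 11 days, the metonic cycle of 6939.69 days) can be checked with simple date arithmetic. The sketch below is a rough illustration only: it projects the next member of each cycle forward from the April 8, 2005 eclipse by rounding each period to whole days, so results can be off by a day because of the fractional hours (the saros, for instance, carries an extra 8 hours that shifts each successor's visibility a third of the way around the globe).

```python
from datetime import date, timedelta

base = date(2005, 4, 8)  # the hybrid eclipse of this article

# Approximate cycle lengths in days; fractional parts shift the
# longitude of visibility but barely move the calendar date.
cycles = {
    "semester": 177.18,    # ~177 days 4 hours
    "saros":    6585.32,   # 18 years, 11 days, 8 hours
    "metonic":  6939.69,   # 19 years
    "inex":     10571.95,
}

for name, days in cycles.items():
    nxt = base + timedelta(days=round(days))
    print(f"{name:>8}: next member near {nxt}")
```

Projecting the saros lands within a day of the April 20, 2023 successor named above, and the inex lands near the March 20, 2034 successor, which is a quick sanity check on the listed related eclipses.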
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spare tire** Spare tire: A spare tire (or stepney in some countries) is an additional tire (or tyre – see spelling differences) carried in a motor vehicle as a replacement for one that goes flat, has a blowout, or suffers another emergency. "Spare tire" is generally a misnomer, as almost all vehicles actually carry an entire wheel with a tire mounted on it as a spare rather than just a tire, since fitting a tire to a wheel would require a motorist to carry additional, specialized equipment. However, some spare tires ("space-saver" and "donut" types) are not meant to be driven long distances. Space-savers have a maximum speed of around 50 mph (80 km/h). When replacing a damaged tire, placing the compact spare on a non-drive axle will prevent damage to the drivetrain. If placed on a drive axle, the smaller-diameter tire can put stress on the differential, causing damage and reducing handling. History: The early days of motor travel took place on primitive roads that were littered with stray horseshoe nails. Punctures (flat tires) were all too common and required the motorist to remove the wheel from the car, demount the tire, patch the inner tube, re-mount the tire, inflate the tire, and re-mount the wheel. To alleviate this time-consuming process, Walter and Tom Davies of Llanelli, Wales, invented the spare tire in 1904. At the time, motor cars were made without spare wheels. The wheel was so successful that the brothers started their own company, Stepney Spare Motor Wheel Limited (named after the location of their workshop on Stepney Street in Llanelli), and started marketing the wheel in Britain, Europe, and the British Empire and colonies. The word "stepney" is sometimes used interchangeably with "spare tire" in countries that were once part of the British Empire, such as Pakistan, Bangladesh, India, and Malta. The first to equip cars with an inflated spare wheel-and-tire assembly were the Ramblers made by the Thomas B. Jeffery Company.
The Rambler's interchangeable wheel with a mounted and inflated spare tire meant the motorist could exchange it quickly for the punctured tire, which could then be repaired at a more convenient time and place. The pre-mounted spare tire and wheel combination proved so popular with motorists that carrying up to two spare tires became common. Automakers often equipped cars with one or two sidemounts. The spares were mounted behind the front fenders, where they blended into the running boards (a narrow footboard serving as a step beneath the doors). History: In 1941, the U.S. government temporarily prohibited spare tires on new cars as part of the nation's World War II rationing strategy, which led to quotas and laws designed to force conservation of materials, including rubber, which was produced overseas and difficult to obtain. A similar prohibition was implemented by the U.S. during the Korean War in 1951. Usage in the 21st century: Contemporary vehicles may come equipped with full-size spares, limited-use minispares, or run-flat capability. Usage in the 21st century: The spare tire may be identical in type and size to those on the vehicle. The spare may be mounted either on a plain steel rim or on a matching road wheel as found on the vehicle. Among passenger vehicles, full-sized spares are usually provided for sport utility vehicles and light trucks, since a "limited use" spare would adversely affect such vehicles with their higher centers of gravity. Additionally, a "limited use" spare may not be safe on a fully loaded truck or one that is towing a trailer. Due to the size of the full-sized spare, it is often mounted on the outside, such as on the rear door of SUVs, and occasionally on the front hood. Many vehicles are provided with a "limited use" spare tire, also known as a "space-saver", temporary spare, "donut", or "compact" spare tire, in an attempt to reduce cost, lower the vehicle's weight, and/or save the space that would be needed for a full-size spare tire.
Introduced in the late 1970s, temporary spares came standard on 53 percent of 2017 models in the U.S. A space-saver is typically 7 kg (15 lb) lighter than a full-sized wheel, and in some cars the so-called "space-saver" may actually save little to no space. There is also the difficulty of transporting the full-sized wheel and tire once the space-saver has been fitted. The spare is usually mounted on a plain steel rim. They are typically smaller than the normal tires on the vehicle and can only be used for limited distances because of their short life expectancy and low speed rating. In addition, because a donut differs in size from a regular wheel, electronic stability control and traction control systems will not operate properly and should be disabled until the original wheel is restored. Space-saver spare tires can compromise the braking (especially on cars not fitted with anti-lock brakes) and handling of the car. In some cases, automobiles may be equipped with run-flat tires and thus not require a separate spare tire. Other vehicles may carry a can of tire repair foam to repair punctured tires, although these often do not work in the case of larger punctures and are useless in the event of a blow-out. Newer vehicles often do not come with a spare tire at all. Omitting the spare tire improves fuel economy, lowers the cost of the vehicle, and reduces production waste. Storage: Spare tires in automobiles are often stored in a spare tire well – a recessed area in the trunk of a vehicle, usually in the center, where the spare tire is stored while not in use. In most cars, the spare tire is secured with a bolt and wing-nut-style fastener. Usually a stiff sheet of cardboard lies on top of the spare tire well, with the trunk carpet on top of it, to hide the spare tire and provide a pleasant look and a flat surface for the trunk space. Storage: Other storage locations include a cradle underneath the rear of the vehicle.
This cradle is usually secured by a bolt that is accessible from inside the trunk, for security. This arrangement has advantages over storing the tire inside the trunk, including not having to empty the contents of the trunk to access the wheel, and it may also save space in some applications. However, it has disadvantages: the tire gets dirty, making the act of changing it more unpleasant, and the mechanism may rust on older cars, making it difficult to free the spare. The cradle arrangement is usually only practical on front-wheel-drive cars, as the cradle would get in the way of the rear axle on most rear- or four-wheel-drive cars. A similar arrangement is also often found on trucks, where the spare is often stored beneath the truck bed. Storage: Many sport utility vehicles (SUVs) and off-road vehicles have the spare wheel mounted externally – usually on the rear door, but others may mount it on the roof, the side, or even the bonnet (hood). In mid-engined and rear-engined cars, the spare tire is generally stored in the front boot. Some vehicles stored the spare tire in the engine bay, such as the Renault 14, the first-generation Fiat Panda, and older Subaru vehicles such as the Subaru Leone. Vehicles like the Volkswagen Beetle used the spare tire for ancillary purposes such as supplying air pressure to the windscreen washer system. Storage: Many models of Bristol cars – those from the 404 of 1953 to the Fighter of 2004 – carried a full-size spare wheel and tire in a pannier compartment built into the left-hand wing. This not only increased luggage space and allowed easy access to the spare without having to unload the trunk, but improved weight distribution by keeping as much mass as possible within the wheelbase and balancing the weight of the battery, mounted in a similar compartment in the right-hand wing.
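The drivetrain and stability-control cautions above both stem from one geometric fact: a compact spare has a smaller outer diameter than the wheels it temporarily replaces, so it must spin faster to cover the same road distance. The sketch below quantifies that mismatch using illustrative diameters chosen by the author (the article gives no specific sizes), not figures from any particular vehicle.

```python
import math

def wheel_rpm(speed_kmh: float, diameter_m: float) -> float:
    """Revolutions per minute of a wheel with the given outer diameter."""
    circumference_m = math.pi * diameter_m      # metres per revolution
    metres_per_min = speed_kmh * 1000 / 60
    return metres_per_min / circumference_m

# Illustrative diameters (assumed for this example):
full_size = 0.66   # roughly a 26-inch overall diameter
donut     = 0.60   # a typical compact spare runs a few cm smaller

v = 80  # km/h, the space-saver speed cap mentioned above
mismatch = wheel_rpm(v, donut) / wheel_rpm(v, full_size) - 1
print(f"The donut spins {mismatch:.0%} faster than the full-size tires")
```

With these assumed sizes the donut turns about 10% faster at any speed. On a drive axle, an open differential must absorb that constant speed difference, which is the wear mechanism the article describes; the same mismatch is what confuses wheel-speed-based stability and traction control systems.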
**Physics outreach** Physics outreach: Physics outreach encompasses facets of science outreach and physics education, and a variety of activities by schools, research institutes, universities, clubs and institutions such as science museums aimed at broadening the audience for, and awareness and understanding of, physics. While the general public may sometimes be the focus of such activities, physics outreach often centers on developing and providing resources and making presentations to students, educators in other disciplines, and in some cases researchers within different areas of physics. History: Ongoing efforts to expand the understanding of physics to a wider audience have been undertaken by individuals and institutions since the early 19th century. Historic works, such as the Dialogue Concerning the Two Chief World Systems and Two New Sciences by Galileo Galilei, sought to present revolutionary knowledge in astronomy, frames of reference, and kinematics in a manner that a general audience could understand, to great effect. History: In the mid-1800s, the English physicist and chemist Michael Faraday gave a series of nineteen lectures aimed at young adults in the hope of conveying scientific phenomena. His intentions were to raise awareness, inspire his audience, and generate revenue for the Royal Institution. This series became known as the Christmas Lectures and still continues today. By the early 20th century, the public fame of physicists such as Albert Einstein and Marie Curie, and inventions such as radio, led to a growing interest in physics. In 1921, in the United States, the establishment of the Sigma Pi Sigma physics honor society at universities was instrumental in expanding the number of physics presentations, and led to the creation of physics clubs open to all students. Museums were an important form of outreach, but most early science museums were generally focused on natural history.
Some specialized museums, such as the Cavendish Museum at the University of Cambridge, housed many of the historically important pieces of apparatus that contributed to the major discoveries of Maxwell, Thomson, Rutherford, and others. However, such venues provided little opportunity for hands-on learning or demonstrations. History: In August 1969, Frank Oppenheimer dedicated his new Exploratorium in San Francisco primarily to interactive science exhibits that demonstrated principles in physics. The Exploratorium published the details of its own exhibits in "Cookbooks" that served as an inspiration to many other museums around the world, and has since diversified into many outreach programs. Oppenheimer had researched European science museums while on a Guggenheim Fellowship in 1965. He noted that three museums served as important influences on the Exploratorium: the Palais de la Découverte, which displayed models to teach scientific concepts and employed students as demonstrators, a practice that directly inspired the Exploratorium's much-lauded High School Explainer Program; the South Kensington Museum of Science and Art, which Oppenheimer and his wife visited frequently; and the Deutsches Museum in Munich, the world's largest science museum, which had a number of interactive displays that impressed the Oppenheimers. In the ensuing years, physics outreach, and science outreach more generally, continued to expand and took on new popular forms, including highly successful television shows such as Cosmos: A Personal Voyage, first broadcast in 1980. History: As a form of outreach within the physics education community for teachers and students, in 1997 the US National Science Foundation (NSF) and the Department of Energy (USDOE) established QuarkNet, a professional teacher development program. In 2012, the University of Notre Dame received a $6.1M, five-year grant to support a nationwide expansion of the QuarkNet program.
Also in 1997, the European Particle Physics Outreach Group, led by Christopher Llewellyn Smith, FRS, then Director General of CERN, was formed to create a community of scientists, science educators, and communication specialists in science education and public outreach for particle physics. This group became the International Particle Physics Outreach Group (IPPOG) in 2011 after the start-up of the LHC. Innovation: Many contemporary initiatives in physics outreach have begun to shift focus, transcending traditional field boundaries and seeking to engage students and the public by integrating elements of aesthetic design and popular culture. The goal has been not only to push physics out of a strictly science-education framework but also to draw in professionals and students from other fields to bring their perspectives on physical phenomena. Such work includes artists creating sculptures using ferrofluids, and art photography using high-speed and ultra-high-speed photography. Innovation: Other efforts, such as the University of Cambridge's Physics at Work program, have created annual events to demonstrate to secondary students the uses of physics in everyday life, along with a Senior Physics Challenge. Seeing the importance of these initiatives, Cambridge has established a full-time physics outreach organization, an Educational Outreach Office, with aspirations for a Centre of Physics and expanded industrial partnerships that "would include a well equipped core team of outreach officers dedicated to demonstrating the real life applications of physics, showing that physics is an accessible and relevant subject". The French research group La Physique Autrement (Physics Reimagined), of the Laboratoire de Physique des Solides, researches new ways to present modern solid-state physics and to engage the general public.
In 2013, Physics Today covered this group in an article entitled "Quantum Physics For Everyone", which discussed how, with the help of designers and unconventional demonstrations, the project sought out and succeeded in engaging people who never thought of themselves as interested in science. The Science & Entertainment Exchange was developed by the United States National Academy of Sciences (NAS) to increase public awareness, knowledge, and understanding of science and advanced science technology through its representation in television, film, and other media. It was officially launched in 2008 as a partnership between the NAS and Hollywood, and is based in Los Angeles, California. Museums and public venues primarily focused on physical phenomena: Canada Montreal Science Centre (Montreal, Quebec) displays many hands-on activities involving various physics phenomena. Finland Heureka (Helsinki) is a non-profit science center run by the Finnish Science Centre Foundation with a broad spectrum of physics-related exhibits. France Cité des Sciences et de l'Industrie (Paris) is the largest French science museum and contains permanent exhibits and hands-on experiments. Palais de la Découverte (Paris) contains permanent exhibits and interactive experiments with commentaries by lecturers. It includes a Zeiss planetarium with a 15-metre dome. It was created in 1937 by the French Nobel Prize-winning physicist Jean Baptiste Perrin. Musée des Arts et Métiers (Paris) focuses on the preservation of scientific instruments and inventions. Other science museums that are part of the Cultural Center of Science, Technology and Industry (CCSTI) exist all across France: Espace des Sciences (Rennes), La Casemate (Grenoble), and the Cité de l'espace (Toulouse). Germany Deutsches Museum (Munich) is the world's largest science museum. One of its most popular events is the high-voltage demonstration of a Faraday cage as part of its series on electric power.
Islamic Republic of Iran Iran Science and Technology Museum (Tehran) is the largest science museum in Iran. By holding varied scientific and educational programs, the museum provides a setting for the creation and propagation of scientific thought in society. One of these programs is the "Physics Show". Netherlands NEMO (Amsterdam) is the largest science center in the Netherlands, with hands-on science exhibitions. United States Exploratorium (San Francisco) is one of the foremost interactive science and art museums in the United States, dedicated to exploring how the world works through interactive exhibits, experiences and curious exploration. The Exploratorium opened in 1969 and now attracts over a million visitors annually. The American Museum of Natural History in New York City is both a museum and a research facility with a department of astrophysics. As a natural history museum, it focuses on educating the public about human cultures, the natural world, and the universe, and has many interactive programs and lectures all year round. The Franklin Institute in Philadelphia is one of the oldest centers for science education and research in the United States. Scientific institutions and societies with physics outreach programs: Canada The Perimeter Institute for Theoretical Physics, founded in 1999 in Waterloo, Ontario, Canada, is a center for scientific research, training and educational outreach in theoretical physics. Located in Vancouver, British Columbia, TRIUMF is Canada's national laboratory for particle and nuclear physics and accelerator-based science. In addition to its science mission, the laboratory is committed to physics outreach, offering public tours of its facilities, public talks, an artist-in-residence program, student fellowships, and other opportunities.
The Canadian Association of Physicists (CAP), or in French Association canadienne des physiciens et physiciennes (ACP), is a Canadian professional society that focuses on creating awareness among Canadians and Canadian legislators of physics issues, sponsors physics-related events and outreach, and publishes Physics in Canada. France The French Physics Society has a specific section devoted to outreach and the popularization of science. The European Physical Society (EPS) is based in France but works to promote physics and physicists throughout Europe. Scientific institutions and societies with physics outreach programs: Germany The Deutsche Physikalische Gesellschaft (DPG, German Physical Society) is the world's largest organization of physicists. The DPG actively participates in communication between physics and the general public through several popular scientific publications and events such as the "Highlights of Physics", an annual physics festival organized jointly by the DPG and the Federal Ministry of Education and Research. This festival is the largest of its kind in Germany and attracts about 30,000 visitors every year. Scientific institutions and societies with physics outreach programs: United Kingdom The Institute of Physics is an international charitable institution that aims to advance physics education, research and application. United States American Association for the Advancement of Science American Association of Physics Teachers American Institute of Physics (AIP) has an outreach program focused on advocating science policy to the US Congress and the general public. Scientific institutions and societies with physics outreach programs: The American Physical Society (APS) has a program dedicated to "Communicating the excitement and importance of physics to everyone."
Leonardo, the International Society for the Arts, Sciences and Technology (Leonardo/ISAST) is a nonprofit organization that serves a global network of distinguished scholars, artists, scientists, researchers and thinkers. The institution focuses on interdisciplinary work, creative output and innovation. Its journal, Leonardo, is published by MIT Press. Media and Internet: Media The Big Bang Theory is an American sitcom, created in 2007, that revolves around the lives of scientists at the California Institute of Technology. The show has been widely recognized for popularizing science and was noted by the New York Times as "helping physics and fiction collide". In 2014, the program was the most popular sitcom and most popular non-sports program on American TV, with an average of 20 million viewers. However, the show has been criticized for sometimes portraying the scientific community inaccurately. Media and Internet: C'est pas sorcier is a French educational television program that first aired on November 5, 1994; 20 of its shows dealt with astronomy and space topics and 13 with physics. Media and Internet: Particle Fever is a 2013 documentary film that provides an intimate and accessible view of the first experiments at the Large Hadron Collider from the perspectives of the experimental physicists at CERN who run the experiments, as well as the theoretical physicists who attempt to provide a conceptual framework for the LHC's results. Reviewers praised the film for making theoretical arguments seem comprehensible, for making scientific experiments seem thrilling, and for making particle physicists seem human. Media and Internet: Through the Wormhole is an American science documentary television series narrated and hosted by American actor Morgan Freeman, and has featured physicists such as Michio Kaku and Brian Cox. Internet MinutePhysics is a series of educational videos created by Henry Reich and disseminated through its YouTube channel.
It displays a series of pedagogical short videos about various physics phenomena and theories. Physics World, a publication run by the Institute of Physics, explains scientific concepts through its YouTube channel. The Palais de la Découverte in Paris hosts online videos featuring various interviews about science, including physics. Unisciel, a French online university, hosts educational videos through its YouTube channel. Veritasium is a series of educational videos created by Derek Muller and disseminated through its YouTube channel. It displays a series of pedagogical short videos about science, including physics. Saint Mary's Physics Demonstrations is an online repository for physics classroom demonstrations. It shows teachers the experiments they can do in class while also hosting videos of those experiments. Periodic Videos is a portal of educational videos explaining the characteristics of each chemical element and supporting topics such as nuclear reactions. The project is sponsored by the University of Nottingham and hosted by Prof. Sir Martyn Poliakoff. Prominent individuals: Austria Fritjof Capra is an Austrian-born American physicist who attended the University of Vienna, where he earned his Ph.D. in theoretical physics in 1966. He is a founding director of the Center for Ecoliteracy in Berkeley, California, and is on the faculty of Schumacher College. Capra is the author of several books, including The Tao of Physics (1975), and has also done research in Paris and London. Prominent individuals: France Camille Flammarion was a French astronomer and author of many popular science books. Étienne Klein is a French physicist and philosopher of science involved in outreach efforts about particle and quantum physics. Roland Lehoucq is a French astrophysicist known for his outreach efforts, especially in relationship with fiction and science fiction. Hubert Reeves is a French-Canadian astrophysicist and popularizer of science.
United Kingdom Brian Cox is a British physicist and musician best known to the public as the presenter of a number of science programs for the BBC. Prominent individuals: Wendy J. Sadler promotes science and engineering as part of popular culture through Science Made Simple, an educational spin-off company of Cardiff University that reaches students through live presentations. She also trains scientists and engineers to improve their communication skills, enabling them to extend their research to a broader audience. Sadler was the IoP Young Professional Physicist of the Year in 2005. Prominent individuals: Robert Matthews is a Fellow of the Royal Statistical Society, a Chartered Physicist, a Member of the Institute of Physics, and a Fellow of the Royal Astronomical Society. Matthews is a distinguished science journalist. He is currently anchorman for the science magazine BBC Focus and a freelance columnist for the Financial Times. In the past, he has been science correspondent for the Sunday Telegraph. Prominent individuals: United States Richard Feynman was a Nobel-prize-winning theoretical physicist also known as a science popularizer through his books and lectures, ranging from physics topics (quantum physics, nanophysics, and more) to autobiographical essays. George Gamow was a theoretical physicist and cosmologist who also wrote popular books on science, some of which are still in print more than a half-century after their original publication. Brian Greene is a theoretical physicist involved in various outreach activities (books, TV shows). He co-founded the World Science Festival in 2008. Clifford Victor Johnson is a theoretical physicist involved in various outreach activities (blog, TV shows, and more). Michio Kaku is a theoretical physicist, futurist, and communicator and popularizer of physics.
He is best known for his three New York Times Best Sellers on physics: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014). Prominent individuals: Lawrence M. Krauss is an American theoretical physicist and cosmologist who is Foundation Professor of the School of Earth and Space Exploration at Arizona State University. He is known as an advocate of the public understanding of science, of public policy based on sound empirical data, of scientific skepticism and of science education, and works to reduce the impact of superstition and religious dogma in popular culture. Prominent individuals: Don Lincoln is a physicist at Fermi National Accelerator Laboratory. While his research focuses on the Large Hadron Collider, he is known for his efforts to spread public awareness of physics and cosmology. He is the face of the Fermilab YouTube channel, where he has made over 150 videos. He is also a frequent contributor to CNN, Forbes, and many other online outlets, and the author of several books, including "Understanding the Universe", published by World Scientific, and "The Large Hadron Collider: The Extraordinary Story of the Higgs Boson and Other Things That Will Blow Your Mind", published by Johns Hopkins University Press. Prominent individuals: Jennifer Ouellette is the former director of the Science & Entertainment Exchange, an initiative of the National Academy of Sciences (NAS) designed to connect entertainment industry professionals with top scientists and engineers to help the creators of television shows, films, video games, and other productions incorporate science into their work. She is currently a freelance writer contributing to the physics outreach dialogue with articles in a variety of publications such as Physics World, Discover magazine, New Scientist, Physics Today, and The Wall Street Journal.
Prominent individuals: Carl Sagan was an astrophysicist and science popularizer; one of his important contributions was the 1980 television series Cosmos: A Personal Voyage. Neil deGrasse Tyson is an astrophysicist and science communicator who has participated in TV and radio shows and written various outreach books. Jearl Walker is a physics professor at Cleveland State University. He wrote the Amateur Scientist column in Scientific American from 1978 to 1988 and authored the popular science book The Flying Circus of Physics. Funding sources: The American Physical Society awards grants of up to $10,000 to help APS members develop new physics outreach activities. The Institute for Complex Adaptive Matter (ICAM) provides grants and fellowships for physics outreach. The Wellcome Trust, while mostly focused on the biological sciences, also touches on physics and encourages physics outreach; it aims to improve biology, chemistry, and physics A levels in the UK. Institute of Physics (IoP) The IoP aims to provide positive and compelling experiences of physics for public audiences through engaging and entertaining activities and events. Its public engagement grant scheme is designed to give financial support of up to £1500 to individuals and organisations running physics-based events and activities in the UK and Ireland. Awards: The Kalinga Prize for the Popularization of Science is an award given by UNESCO for exceptional skill in presenting scientific ideas to lay people. The Klopsteg Memorial Award is presented by the American Association of Physics Teachers in memory of the physicist Paul E. Klopsteg. The Kelvin Prize is awarded by the Institute of Physics to acknowledge outstanding contributions to the public understanding of physics. Awards: The Michael Faraday Prize for communicating science to a UK audience is awarded by the Royal Society. The Prix Jean Perrin for popularization in physics is awarded by the French Physics Society.
**Bokashi (horticulture)** Bokashi (horticulture): Bokashi is a process that converts food waste and similar organic matter into a soil amendment which adds nutrients and improves soil texture. It differs from traditional composting methods in several respects. The most important are: the input matter is fermented by specialist bacteria, not decomposed; and the fermented matter is fed directly to field or garden soil, without requiring further time to mature. As a result, virtually all input carbon, energy and nutrients enter the soil food web, having been neither emitted as greenhouse gases and heat nor leached out. Other names for this process include bokashi composting, bokashi fermentation and fermented composting. Nomenclature: The name bokashi is transliterated from spoken Japanese (ぼかし). However, Japanese-English dictionaries give the word an older artistic meaning: "shading or gradation" of images, especially as applied to woodblock prints. This later extended to mean pixellation or fogging in censored photographs. Its application to fermented organic matter is therefore of uncertain origin; if both uses are related, the unifying concept may be "alteration" or "fading away". Nomenclature: Bokashi as a food waste process is borrowed into many other languages. As a noun, it has various meanings depending on context – in particular the process itself, the inoculant, or the fermented output. This variety can lead to confusion. As an adjective, it qualifies any related noun, such as bokashi bin (a household fermentation vessel), bokashi soil (after adding the preserve), and even bokashi composting – a contradiction in terms. Process: The basic stages of the process are: Organic matter is inoculated with Lactobacilli. These convert a fraction of the carbohydrates in the input to lactic acid by homolactic fermentation.
Fermented anaerobically (more precisely, microaerobically) for a few weeks at typical room temperatures in an airtight vessel, the organic matter is preserved by the acid, in a process closely related to the making of some fermented foods and silage. The preserve is normally applied to soil when ready, or can be stored unopened for later use. The preserve is mixed into soil containing naturally occurring micro-organisms. Process: When water is present (as in the preserve itself or in the soil), the lactic acid progressively dissociates by losing protons to become lactate – the acid's conjugate base (the lactate ion). Lactate is a fundamental energy carrier in biological processes. It can pass through cell membranes, and almost all living organisms have the enzyme lactate dehydrogenase to convert it to pyruvate for energy production. Process: Suffused with lactate, the preserve is readily consumed by the indigenous soil life, primarily the bacteria, "disappearing" within a few weeks at normal temperatures. Earthworm action is typically prominent as the bacteria are themselves consumed, such that the amended soil acquires a texture associated with vermicompost. Characteristics: Accepted inputs The process is typically applied to food waste from households, workplaces and catering establishments, because such waste normally holds a good proportion of carbohydrates. It can be applied to other organic waste by supplementing carbohydrates and hence lactic acid production. Recipes for large-scale bokashi in horticulture often include rice, and molasses or sugar; any carbohydrate-poor waste stream benefits from such supplements. Characteristics: Homolactic fermentation can process significantly more kinds of food waste than home composting. Even items such as cooked leftovers, meat and skin, fat, cheese and citrus waste are, in effect, pre-digested to enable soil life to consume them.
Large pieces may take longer to ferment and concave surfaces may trap air, in which cases support literature advises cutting pieces down. Pieces of input are discarded if they are already badly rotten, or show green or black mould: these harbour putrefying organisms which may overwhelm the fermentation. Characteristics: Emissions Carbon, gases and energy Homolactic fermentation, like similar anaerobic fermentation pathways in general, provides a very small amount of energy to the cell compared to the aerobic process. In homolactic fermentation, 2 ATP molecules are made when one glucose molecule (produced by digesting complex carbohydrates) is converted to 2 lactic acid molecules, only 1⁄15 of what aerobic respiration provides. The process will also halt before all available carbohydrates are used, as the acidity ends up inhibiting all bacteria. As a result, a bokashi bucket barely heats up and remains at ambient temperature. As a waste processing technique, bokashi is notable for losing minimal mass through offgassing. Compost, which is aerobic, "burns up" much of the carbon into carbon dioxide to sustain the metabolism of microbes as it matures. Biogas production does not burn the carbon, but the bacterial culture is optimized to extract the carbon in the form of methane – a potent greenhouse gas and a useful fuel. In addition, compost can also lose the key plant nutrient nitrogen (in the potent greenhouse gas nitrous oxide and in ammonia), while bokashi loses almost none. Characteristics: Runoff When fermentation begins, physical structures start to break down and release some of the input's water content as a liquid runoff. Over time this constitutes more than 10% of the input by weight. The quantity varies with the input: for example cucumber and melon flesh lead to a noticeable increase. The liquid leaches out a valuable fraction of proteins, nutrients and lactic acid.
To recover them, and to avoid drowning the fermentation, runoff is captured from the fermentation vessel, either through a tap, into a base of absorbent material such as biochar or waste cardboard, or into a lower chamber. The runoff is sometimes called "bokashi tea". Characteristics: The uses of bokashi tea are not the same as those of "compost tea". It is used most effectively when diluted and sprinkled over a targeted area of soil to feed the soil ecosystem. Dilution makes it less acidic and thus less dangerous to plants. Dilution also causes more acid to convert into lactate which is an attractive food for soil microbes. Other uses are either potentially damaging (e.g. feeding plants with acidic water) or wasteful (e.g. cleaning drains with plant nutrients, feeding plants with nutrients in a form they cannot take up). Characteristics: Volumes Household containers ("bokashi bins") typically give a batch size of 5–10 kilograms (11–22 lb). This is accumulated over a few weeks of regular additions. Each regular addition is best accumulated in a caddy, because it is recommended that one opens the bokashi bin no more frequently than once per day to let anaerobic conditions predominate. Characteristics: In horticultural settings batches can be orders of magnitude greater. Silage technology may be usable if it is adapted to capture runoff. An industrial-scale technique mimics the windrows of large-scale composting, except that bokashi windrows are compacted, covered tightly and left undisturbed, all to promote anaerobic conditions. One study suggests that such windrows lose only minor amounts of carbon, energy and nitrogen. Characteristics: Hygiene Bokashi is inherently hygienic in the following senses: Lactic acid is a strong natural bactericide, with well-known antimicrobial properties. It is an active ingredient of some toilet cleaners. 
As more is produced, it eventually suppresses even its own makers, the acid-resistant lactobacilli, such that bokashi fermentation slows and stops itself. There is also evidence that mesophilic (ambient temperature) fermentation kills eggs of the Ascaris worm – a parasite of humans – in 14 days. Characteristics: The fermentation bin does not release smells when it is closed. A household bin is only opened for a minute or so to add and inoculate input via the lid or to drain runoff via the tap. At these times the user encounters the sour odour of lacto-fermentation (often described as a "pickle" smell) which is much less offensive than the odour of decomposition. Characteristics: When closed, an airtight fermentation bin cannot attract insects. Bokashi literature claims that scavengers dislike the fermented matter and avoid it in gardens. Characteristics: Addition to soil Fermented bokashi is added to a suitable area of soil. The approach usually recommended by suppliers of household bokashi is along the lines of "dig a trench in the soil in your garden, add the waste and cover over." In practice, regularly finding suitable sites for trenches that will later underlie plants is difficult in an established plot. To address this, an alternative is a 'soil factory'. This is a bounded area of soil into which several loads of bokashi preserve are mixed over time. Amended soil can be taken from it for use elsewhere. It may be of any size. It may be permanently sited or in rotation. It may be enclosed, wire-netted or covered to keep out surface animals. Spent soil or compost, and organic amendments such as biochar, may be added, as may non-fermented material, in which case the boundary between bokashi and composting becomes blurred. Characteristics: A proposed alternative is to homogenise (and potentially dilute) the preserve into a slurry, which is spread on the soil surface.
This approach requires energy for homogenisation but, logically from the characteristics set out above, should confer several advantages: thoroughly oxidising the preserve; disturbing no deeper layers, except by increased worm action; being of little use to scavenging animals; being applicable to large areas; and, if done repeatedly, being able to sustain a more extensive soil ecosystem. History: The practice of bokashi is believed to have its earliest roots in ancient Korea. This traditional form ferments waste directly in soil, relying on native bacteria and on careful burial for an anaerobic environment. A modernised horticultural method called Korean Natural Farming includes fermentation by indigenous micro-organisms (IM or IMO) harvested locally, but has numerous other elements too. A commercial Japanese bokashi method was developed by Teruo Higa in 1982 under the 'EM' trademark (short for Effective Microorganisms). EM became the best known form of bokashi worldwide, mainly in household use, claiming to have reached over 120 countries. While none have disputed that EM starts homolactic fermentation and hence produces a soil amendment, other claims have been contested robustly. Controversy relates partly to other uses, such as direct inoculation of soil with EM and direct feeding of EM to animals, and partly to whether the soil amendment's effects are due simply to the energy and nutrient values of the fermented material rather than to particular microorganisms. Arguably, EM's heavy focus on microorganisms has diverted scientific attention away from both the bokashi process as a whole and the particular roles in it of lactic acid, lactate, and soil life above the bacterial level. Alternative approaches: Some organisms in EM, specifically photosynthetic bacteria and yeast, appear to be logically superfluous, as they will first be suppressed by the dark and anaerobic environment of homolactic fermentation, then killed by its lactic acid.
Consequently, practitioners have sought to reduce costs and to widen the scale of operations. Success has been reported with: Self-harvested micro-organisms, tested for lacto-fermentation; Lactobacilli alone, i.e. without other EM micro-organisms (useful sources include acid whey from yogurt and sauerkraut juice). Alternative approaches: Alternative substrates for inoculant, such as newsprint; Home-made airtight fermentation vessels; Larger scale than a household, for example a group of small farmers; No intentional addition of microbes at all, similar to the original Korean method. The resulting mixture will smell worse, as acetic acid, propanoic acid, and butyric acid can form instead of lactic acid (see mixed acid fermentation), but it works equally well as a soil amendment. Uses: The main use of bokashi described above is to recover value from organic waste by converting it into a soil amendment. In Europe, food and drink material that is sent to animal feed does not legally constitute waste because it is regarded as 'redistribution.' This may apply to bokashi made from food, because it enters the soil food web, and furthermore is inherently pathogen-free. Uses: A side effect of diverting organic waste to the soil food web is to divert it away from local waste management streams and their associated costs of collection and disposal. To encourage this, most UK local authorities, for example, subsidise household bokashi starter kits through a National Home Composting Framework. Another side effect is to increase the organic carbon content of the amended soil. Some of this is a relatively long-term carbon sink – insofar as the soil ecosystem creates humus – and some is temporary for as long as the richer ecosystem is sustained by measures such as permanent planting, no-till cultivation and organic mulch. An example of these measures is seen at the Ferme du Bec Hellouin in France.
Bokashi would therefore have potential uses in enabling communities to speed up the conversion of land from chemical to organic horticulture and agriculture, to regenerate degraded soil, and to develop urban and peri-urban horticulture close to the sources of input. Uses: The anti-pathogenic nature of bokashi is applied to sanitation, in particular to the treatment of faeces. Equipment and supplies to treat pet faeces are sold commercially but do not always give prominence to the hygiene risks. Treatment of human faeces for soil amendment has been extensively studied, notably with the use of biochar (a soil improver in its own right) to remove odours and retain nutrients. Social acceptability is a major obstacle, but niche markets such as emergency aid sanitation, outdoor events and temporary workplaces may develop the technology into a disruptive innovation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Comic Sans** Comic Sans: Comic Sans MS (commonly known simply as Comic Sans) is a sans-serif typeface designed by Vincent Connare and released in 1994 by Microsoft Corporation. It is a non-connecting script inspired by comic book lettering, intended for use in cartoon speech bubbles, as well as in other casual environments, such as informal documents and children's materials. The typeface has been supplied with Microsoft Windows since the introduction of Windows 95, initially as a supplemental font in the Microsoft Plus! Pack and later in Microsoft Comic Chat. Describing it, Microsoft has explained that "this casual but legible face has proved very popular with a wide variety of people." The typeface's widespread use, often in situations it was not intended for, has been the subject of criticism and ridicule. History: Development and release Microsoft designer Vincent Connare began working on Comic Sans in 1994, after having already created other fonts for various applications. When he saw a beta version of Microsoft Bob that used Times New Roman in the word balloons of its cartoon characters, he believed the typeface gave the software an overly formal appearance, inappropriate for the aesthetics of a program created to introduce younger users to computers. To make Microsoft Bob look more suitable for its intended purposes, he decided to create a new typeface with only a mouse and cursor, based on the lettering style of comic books he had in his office, specifically The Dark Knight Returns (lettered by John Costanza) and Watchmen (lettered by Dave Gibbons). He completed Comic Sans too late for inclusion in Microsoft Bob, and the typeface went unreleased until the programmers of Microsoft 3D Movie Maker, which also used cartoon guides and speech bubbles, adopted it. The speech bubbles were eventually phased out and replaced by actual sound, but Comic Sans stayed for the program's pop-up windows and help sections.
The typeface later shipped with the Windows 95 Plus! Pack, and was subsequently included as a system font in the OEM versions of Windows 95. Finally, it became one of the default fonts for Microsoft Publisher and Microsoft Internet Explorer. Comic Sans is also used in Microsoft Comic Chat, which was released in 1996 with Internet Explorer 3.0. History: Comic Sans is pre-installed in macOS and Windows Phone but not in Android, iOS or Linux. History: Comic Sans Pro (2011) Comic Sans Pro is an updated version of Comic Sans created by Terrance Weinzierl of Monotype Imaging. While retaining the original designs of the core characters, it expands the typeface by adding new italic variants, in addition to swashes, small capitals, extra ornaments and symbols including speech bubbles, onomatopoeia and dingbats, as well as text figures and other stylistic alternates. Originally appearing as part of the Ascender 2010 Font Pack as Comic Sans 2010, it was first released on April Fools' Day, causing some to initially assume it was a joke. The italicized variant later appeared in Windows 8. Misuse: Comic Sans has become most infamous for its use in serious circumstances, like warning signs and formal documents, in which it might appear too informal, unprofessional, or inappropriate. During the summer of 2010, NBA superstar LeBron James left the Cleveland Cavaliers in free agency, in a highly publicized media affair that culminated in a TV special called The Decision. The majority owner of the team at the time, Dan Gilbert, reacted by posting a letter to Cavalier fans. The letter was criticized for its use of Comic Sans. In October 2012, a Dutch World War II memorial called Verzoening ("Reconciliation") was revealed on which the names of Jewish, Allied and German military dead alike were written alongside each other in Comic Sans. The names were eventually scraped off after complaints from Jewish organizations, but the rewritten message was once again in Comic Sans.
According to the city government, this was done because the letters fit the shape of the stone and were easily visible from a distance. It was, however, criticized for making the memorial appear "ugly" and "cheap". In September 2014, The Sydney Morning Herald printed a front page with Comic Sans, causing an uproar, despite its use being within speech bubbles in keeping with the origin of the typeface. In August 2015, a number of Greek Prime Minister Alexis Tsipras's Syriza party members split and formed a new party, headed by Panagiotis Lafazanis. The official document of resignation was allegedly written in Comic Sans. In July 2018, a statue of former Chilean President Pedro Aguirre Cerda was inaugurated in Santiago. The plaques on the monument were written in Comic Sans, drawing negative attention on social media. In October 2019, when the United States House Intelligence Committee requested that two of Rudy Giuliani's associates, Lev Parnas and Igor Fruman, present documentation regarding their involvement in the Ukraine scandal, former Trump attorney John Dowd penned a letter of explanation printed in Comic Sans. That same month, as part of the United Kingdom's Brexit debate, the Conservative Party tweeted an image stating "MPs must come together and get Brexit done" using Comic Sans. The post was heavily mocked, but some commentators saw it as a deliberate attempt to use the typeface's notoriety to bring their message to a wider audience. Legibility: A research article published in Cognition in 2010 showed that disfluency could lead to improved retention and classroom performance. The article stated that disfluency can be produced merely by adopting fonts that are slightly more difficult to read. In the case studies cited in the article, Comic Sans was used to introduce disfluency.
A 2010 Princeton University study involving presenting students with text in a font slightly harder to read found that they consistently retained more information from material displayed in fonts perceived as ugly or disfluent (Monotype Corsiva, Haettenschweiler, and Comic Sans Italic) than in a simpler, more traditional typeface such as Helvetica. More often, however, Comic Sans is described as especially legible, and it is frequently used in school settings or as an aid for people with dyslexia. Some people have reported that typing in Comic Sans has helped to clear writer's block, claiming that its casual appearance and high legibility create less mental tension. Compared to other typefaces, Comic Sans has fewer rotated and mirror-image glyphs (e.g. the letters "b", "d", "p", and "q"), has particularly wide letter spacing, and is sans serif. Reception and legacy: Several reinterpretations of Comic Sans have been created as a result of its popularity. In April 2014, font designer Craig Rozynski released a modernized version of Comic Sans called Comic Neue. In 2015, graphic designer Ben Harman created Comic Papyrus (later renamed "Comic Parchment" for legal reasons), which combines the features of Comic Sans with the similarly panned typeface Papyrus. In 2019, Tabular Type Foundry released Comic Code, a monospaced version of the typeface. In 2017, it was reported that Vincent Connare, the typeface's designer, had only used it once. Reception and legacy: Opposition Because of its ubiquity and misuse, Comic Sans has been opposed by graphic designers. The Boston Phoenix reported on disgruntlement over the widespread use of the typeface, especially its incongruous use for writing on serious subjects, with the complaints urged on by a campaign started by two Indianapolis graphic designers, Dave and Holly Combs, via their website "Ban Comic Sans".
The movement was conceived in 1999 by the two designers after an employer insisted that one of them use Comic Sans in a children's museum exhibit. The website's main argument is that a typeface should match the tone of its text, and that the humorous appearance of Comic Sans often contrasted with a serious message, such as a "do not enter" sign. The movement ran until 2019, when it was renamed "Use Comic Sans" because Dave Combs believed the hatred had "gotten out of hand" and "it's gotten to be so bad that it's almost cool again." Dave Gibbons, whose work was one of the inspirations for Comic Sans, said that it was "a shame they couldn't have used just the original font, because [Comic Sans] is a real mess. I think it's a particularly ugly letter form." Film producer and The New York Times essayist Errol Morris wrote in an August 2012 posting, "The conscious awareness of Comic Sans promotes—at least among some people—contempt and summary dismissal." With the help of a professor, he conducted an online experiment and found that Comic Sans, in comparison with five other typefaces (Baskerville, Helvetica, Georgia, Trebuchet MS, and Computer Modern), makes readers slightly less likely to believe that a statement they are reading is true. Reception and legacy: Defense In the Netherlands, radio DJs Coen Swijnenberg and Sander Lantinga decided to celebrate the typeface by holding a Comic Sans Day on the first Friday of July. Comic Sans Day has been held since 2009, and some Dutch companies set their websites in Comic Sans on this day. According to a 2020 Twitter poll held by TES, 44% of teachers sampled used Comic Sans in their teaching resources. Comic Sans is widely used in schools due to its high legibility. Other reasons include: It is more suitable for dyslexic students. Reception and legacy: It is well-suited for modeling handwriting, due to the single-story lowercase "a" and "g", and the distinct appearances of the letters I and l.
Reception and legacy: It is aesthetically pleasing to some children. Vincent Connare is reportedly not offended by the negative backlash over Comic Sans. At the Fourth Annual Boring Conference, he claimed to find the contempt for his work "mildly amusing." He has stated that he is proud of his creation, offering different rationales. One of these was that "Comic Sans does what it was commissioned to do, it is loved by kids, mums, dads and many family members. So it did its job very well. It matched the brief!" He has also referred to it as "the best joke I've ever told." In 2014, commenting on Comic Sans' critics and fans alike, Connare said, "If you love it, you don't know much about typography, [but] if you hate it, you really don't know much about typography either, and you should get another hobby." Lauren Hudgins of The Establishment argued that people who use Comic Sans should be treated with respect, not mockery, because "people without dyslexia need empathy for those who need concessions to manage the disability." In popular culture: In the 2005 session of the youth model parliament in Ontario, Canada, the New Democratic Party included the clause "Ban the font known as Comic Sans" in an omnibus ban bill. On May 22, 2012, The Comic Sans Song was released by YouTube content creator and musician Gunnarolla, in collaboration with musician Andrew Huang. The song makes reference to Comic Sans and features commentary on the impact the font has had on pop culture. Reception and legacy: In July 2012, when the discovery of the Higgs boson was announced at CERN, Fabiola Gianotti, the spokesperson of the ATLAS experiment, attracted comment by using Comic Sans in her presentation of the results.
As a 2014 April Fools' Day joke, CERN claimed that it would be switching all its publications to Comic Sans. The Internet meme Doge, which became popular in late 2013, consists of different colored sets of words in broken English written in Comic Sans around the head of a Shiba Inu dog. In April 2014, OpenBSD announced the LibreSSL project in Comic Sans, claiming to be the first to "weaponize" it as a means of soliciting donations. In the 2015 video game Undertale and its follow-up Deltarune, the character Sans is a comic (i.e. comedian) named after the typeface. His dialogue is displayed in lowercase Comic Sans. He is paired with his brother named Papyrus, in reference to the typeface of the same name. In October 2022, Comic Sans became the representative of Dyslexia Scotland and their ad campaign, There's Nothing Comic About Dyslexia. The campaign's purpose is to inform people of the typeface's benefits for dyslexic people, and to encourage the creation of typefaces that are more formal, but also dyslexic-friendly. Reception and legacy: The song "Tacky" by "Weird Al" Yankovic, a parody of "Happy" by Pharrell Williams, features Yankovic listing a number of "tacky" behaviors, ranging from stylish faux pas to obnoxious and rude behaviors, as examples of what makes a person potentially "tacky". Among them is writing a resume in Comic Sans.
**Variation and Evolution in Plants** Variation and Evolution in Plants: Variation and Evolution in Plants is a book written by G. Ledyard Stebbins, published in 1950. It is one of the key publications embodying the modern synthesis of evolution and genetics, as the first comprehensive publication to discuss the relationship between genetics and natural selection in plants. The book has been described by plant systematist Peter H. Raven as "the most important book on plant evolution of the 20th century" and it remains one of the most cited texts on plant evolution.[1] Origin: The book is based on the Jesup Lectures that Stebbins delivered at Columbia University in October and November 1946 and is a synthesis of his ideas and the then-current research on the evolution of seed plants in terms of genetics. Contents: The book is written in fourteen parts:
1. Description and analysis of variation patterns
2. Examples of variation patterns within species and genera
3. The basis of individual variation
4. Natural selection and variation in populations
5. Genetic systems as factors in evolution
6. Isolation and the origin of species
7. Hybridization and its effects
8. Polyploidy I: occurrence and nature of polyploid types
9. Polyploidy II: geographic distribution and significance of polyploidy
10. Apomixis in relation to variation and evolution
11. Structural hybridity and the genetic system
12. Evolutionary trends I: the karyotype
13. Evolutionary trends II: external morphology
14. Fossils, modern distribution patterns and rates of evolution
Significance: The 643-page book cites more than 1,250 references and was the longest of the four books associated with the modern evolutionary synthesis. The other key works of the modern synthesis, whose publication also followed their authors' Jesup lectures, are Theodosius Dobzhansky's Genetics and the Origin of Species, Ernst Mayr's Systematics and the Origin of Species and George Gaylord Simpson's Tempo and Mode in Evolution.
The great significance of Variation and Evolution in Plants is that it effectively killed any serious belief in alternative mechanisms of evolution for plants, such as Lamarckian evolution or soft inheritance, which were still upheld by some botanists.[2] Legacy: Stebbins's book Flowering Plants: Evolution Above the Species Level was published in 1974 and was based on the Prather Lectures which he gave at Harvard; it is considered an update to Variation and Evolution. In January 2000 a colloquium was held in Irvine, California, to celebrate the fiftieth anniversary of the publication of Variation and Evolution in Plants.[3] A 16-chapter book entitled Variation and Evolution in Plants and Microorganisms: Toward a New Synthesis 50 Years After Stebbins (ISBN 0-309-07099-6) was released to mark the occasion.
**Hadamard product (matrices)** Hadamard product (matrices): In mathematics, the Hadamard product (also known as the element-wise product, entrywise product or Schur product) is a binary operation that takes two matrices of the same dimensions and returns a matrix of the multiplied corresponding elements. This operation can be thought of as a "naive matrix multiplication" and is different from the matrix product. It is attributed to, and named after, either French-Jewish mathematician Jacques Hadamard or German-Jewish mathematician Issai Schur. Hadamard product (matrices): The Hadamard product is associative and distributive. Unlike the matrix product, it is also commutative. Definition: For two matrices A and B of the same dimension m × n, the Hadamard product A∘B (or A⊙B) is a matrix of the same dimension as the operands, with elements given by (A∘B)ij = (A⊙B)ij = (A)ij(B)ij. For matrices of different dimensions (m × n and p × q, where m ≠ p or n ≠ q), the Hadamard product is undefined. For example, the Hadamard product of the 2 × 3 matrices [[2 3 1], [0 8 −2]] and [[3 1 4], [7 9 5]] is the 2 × 3 matrix [[6 3 4], [0 72 −10]], each entry being the product of the corresponding entries of the operands. Properties: The Hadamard product is commutative (when working with a commutative ring), associative and distributive over addition. That is, if A, B, and C are matrices of the same size, and k is a scalar: A∘B = B∘A, A∘(B∘C) = (A∘B)∘C, A∘(B + C) = A∘B + A∘C, and (kA)∘B = k(A∘B). The identity matrix under Hadamard multiplication of two m × n matrices is an m × n matrix where all elements are equal to 1. This is different from the identity matrix under regular matrix multiplication, where only the elements of the main diagonal are equal to 1. Furthermore, a matrix has an inverse under Hadamard multiplication if and only if none of its elements are equal to zero. Properties: For vectors x and y, and corresponding diagonal matrices Dx and Dy with these vectors as their main diagonals, the following identity holds: x*(A∘B)y = tr(Dx* A Dy Bᵀ), where x* denotes the conjugate transpose of x.
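The entrywise definition can be checked directly in code; a minimal NumPy sketch (the matrices are arbitrary illustrative values, and NumPy's `*` operator is elementwise):

```python
import numpy as np

# Two matrices of the same dimension; the Hadamard product multiplies
# corresponding entries, unlike the ordinary matrix product.
A = np.array([[2, 3, 1],
              [0, 8, -2]])
B = np.array([[3, 1, 4],
              [7, 9, 5]])

H = A * B  # NumPy's * is the entrywise (Hadamard) product
# H has entries [[6, 3, 4], [0, 72, -10]]

# Commutativity, unlike the matrix product:
assert (A * B == B * A).all()
```

Note that NumPy will broadcast compatible shapes rather than reject them outright, so strictly enforcing equal dimensions is up to the caller.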
In particular, using vectors of ones, this shows that the sum of all elements in the Hadamard product is the trace of ABᵀ, where superscript ᵀ denotes the matrix transpose: tr(ABᵀ) = 1ᵀ(A∘B)1. A related result for square A and B is that the row sums of their Hadamard product are the diagonal elements of ABᵀ: (A∘B)1 = diag(ABᵀ). Similarly, the column sums are the diagonal elements of AᵀB: 1ᵀ(A∘B) = diag(AᵀB)ᵀ. Furthermore, a Hadamard matrix-vector product can be expressed as (A∘B)y = diag(A Dy Bᵀ), where diag(M) is the vector formed from the diagonal of matrix M and Dy is the diagonal matrix with y on its main diagonal. Properties: The Hadamard product is a principal submatrix of the Kronecker product. The Hadamard product satisfies the rank inequality rank(A∘B) ≤ rank(A) rank(B). If A and B are positive-definite matrices, then an inequality relating the eigenvalues of A∘B to those of A and B holds, where λi(A) denotes the ith largest eigenvalue of A. Properties: If D and E are diagonal matrices, then (DAE)∘B = D(A∘B)E. The Hadamard product of two vectors a and b is the same as matrix multiplication of one vector by the corresponding diagonal matrix of the other vector: a∘b = Da b = Db a. The vector-to-diagonal-matrix diag operator may be expressed using the Hadamard product as diag(a) = (a1ᵀ)∘I, where 1 is a constant vector with elements 1 and I is the identity matrix. The mixed-product property holds with the Kronecker product: (A⊗B)∘(C⊗D) = (A∘C)⊗(B∘D), assuming A has the same dimensions as C and B as D; analogous mixed-product identities hold for the face-splitting product ∙ and for the column-wise Khatri–Rao product ∗. Schur product theorem: The Hadamard product of two positive-semidefinite matrices is positive-semidefinite. This is known as the Schur product theorem, after Issai Schur. For two positive-semidefinite matrices A and B, it is also known that the determinant of their Hadamard product is greater than or equal to the product of their respective determinants: det(A∘B) ≥ det(A) det(B). In programming languages: Hadamard multiplication is built into certain programming languages under various names. In MATLAB, GNU Octave, GAUSS and HP Prime, it is known as array multiplication, or in Julia broadcast multiplication, with the symbol .*.
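Identities like these are easy to sanity-check numerically; a short NumPy sketch (random matrices, names illustrative) verifying the trace, row-sum and matrix-vector results stated above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
one = np.ones(4)

# Sum of all entries of A∘B equals tr(A Bᵀ)
assert np.isclose((A * B).sum(), np.trace(A @ B.T))

# Row sums of A∘B are the diagonal entries of A Bᵀ
assert np.allclose((A * B) @ one, np.diag(A @ B.T))

# Hadamard matrix-vector product: (A∘B)y equals diag(A Dy Bᵀ)
y = rng.standard_normal(4)
assert np.allclose((A * B) @ y, np.diag(A @ np.diag(y) @ B.T))
```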
In Fortran, R, APL, J and Wolfram Language (Mathematica), it is done through the simple multiplication operator * or ×, whereas the matrix product is done through the function matmul, the %*% operator, +.×, +/ .* and the . operator, respectively. In programming languages: In Python with the NumPy numerical library, multiplication of array objects as a*b produces the Hadamard product, and multiplication as a@b produces the matrix product. With the SymPy symbolic library, multiplication of array objects as both a*b and a@b produces the matrix product; the Hadamard product can be obtained with a.multiply_elementwise(b). In C++, the Eigen library provides a cwiseProduct member function for the Matrix class (a.cwiseProduct(b)), while the Armadillo library uses the operator % to make compact expressions (a % b; a * b is a matrix product). The R package matrixcalc provides the function hadamard.prod() for the Hadamard product of numeric matrices or vectors. Applications: The Hadamard product appears in lossy compression algorithms such as JPEG, whose decoding step involves an entry-for-entry product, in other words the Hadamard product. In image processing, the Hadamard operator can be used for enhancing, suppressing or masking image regions: one matrix represents the original image, the other acts as a weight or masking matrix. It is used in the machine learning literature, for example, to describe the architecture of recurrent neural networks such as GRUs or LSTMs. It is also used to study the statistical properties of random vectors and matrices. Analogous operations: Other Hadamard operations are also seen in the mathematical literature, namely the Hadamard root and Hadamard power (which are in effect the same thing because of fractional indices), defined entrywise: the Hadamard power A^∘s has entries (Aij)^s, the Hadamard inverse A^∘−1 has entries 1/Aij, and Hadamard division is defined entrywise by (A⊘B)ij = Aij/Bij. The penetrating face product: According to the definition of V.
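The NumPy behaviour described above is easily demonstrated: `*` gives the Hadamard product while `@` gives the matrix product.

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

hadamard = a * b   # entrywise: [[5, 12], [21, 32]]
matmul = a @ b     # matrix product: [[19, 22], [43, 50]]

assert (hadamard == np.array([[5, 12], [21, 32]])).all()
assert (matmul == np.array([[19, 22], [43, 50]])).all()
```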
Slyusar, the penetrating face product of the p×g matrix A and the n-dimensional matrix B (n > 1) with p×g blocks (B = [Bn]) is a matrix of the same size as B, in which each block Bn is multiplied entrywise by A. Main properties: A[∘]B = B[∘]A; M∙M = M[∘](M⊗1ᵀ), where ∙ denotes the face-splitting product of matrices; and c∙M = c[∘]M, where c is a vector. Applications: The penetrating face product is used in the tensor-matrix theory of digital antenna arrays. This operation can also be used in artificial neural network models, specifically convolutional layers.
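Reading the penetrating face product as an entrywise multiplication of A into each p×g block of B (an interpretation assumed for this sketch, not taken from a reference implementation), NumPy broadcasting expresses it in one line:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])                 # a p×g matrix (p = g = 2)
B = np.stack([np.full((2, 2), 1),
              np.full((2, 2), 10)])    # B as n = 2 stacked p×g blocks

# A multiplied entrywise into every block of B via broadcasting.
C = A * B
# C[0] has entries [[1, 2], [3, 4]]; C[1] has entries [[10, 20], [30, 40]]

# The commutativity property A[∘]B = B[∘]A holds trivially here:
assert (A * B == B * A).all()
```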
**Hyperbaric treatment schedules** Hyperbaric treatment schedules: Hyperbaric treatment schedules, or hyperbaric treatment tables, are planned sequences of events in chronological order for hyperbaric pressure exposures, specifying the pressure profile over time and the breathing gas to be used during specified periods, for medical treatment. Hyperbaric therapy is based on exposure to pressures greater than normal atmospheric pressure, and in many cases the use of breathing gases with an oxygen content greater than that of air. Hyperbaric treatment schedules: A large number of hyperbaric treatment schedules are intended primarily for treatment of underwater divers and hyperbaric workers who present symptoms of decompression illness during or after a dive or hyperbaric shift, but hyperbaric oxygen therapy may also be used for other conditions. Hyperbaric treatment schedules: Most hyperbaric treatment is done in hyperbaric chambers, where environmental hazards can be controlled, but occasionally treatment is done in the field by in-water recompression when a suitable chamber cannot be reached in time. The difficulties of in-water recompression include maintaining gas supplies for multiple divers and having personnel able to care for a sick patient in the water for an extended period. Background: Recompression of diving casualties presenting symptoms of decompression sickness has been the treatment of choice since the late 1800s. This acceptance was primarily based on clinical experience. John Scott Haldane's decompression procedures and the associated tables developed in the early 1900s greatly reduced the incidence of decompression sickness, but did not eliminate it entirely. It was, and remains, necessary to treat incidences of decompression sickness. Background: Hyperbaric chamber recompression During the building of the Brooklyn Bridge, workers with decompression sickness were recompressed in an iron chamber built for this purpose.
They were recompressed to the same pressure they had been exposed to while working, and when the pain was relieved, decompressed slowly to atmospheric pressure. Although recompression and slow decompression were the accepted treatment, there was not yet a standard for either the recompression pressure or the rate of decompression. This changed when the first standard table for recompression treatment with air was published in the US Navy Diving Manual in 1924. These tables were not entirely successful: there was a 50% relapse rate, and the treatment, though fairly effective for mild cases, was less effective in serious cases. Background: Field results showed that the 1944 oxygen treatment table was not yet satisfactory, so a series of tests were conducted by staff from the Navy Medical Research Institute and the Navy Experimental Diving Unit using human subjects to verify and modify the treatment tables. Tests were conducted using the 100-foot air-oxygen treatment table and the 100-foot air treatment table, which were found to be satisfactory. Other tables were extended until they produced satisfactory results. The resulting tables were used as the standard treatment for the next 20 years, and these tables and slight modifications were adopted by other navies and industry. Over time, evidence accumulated that the success of these tables for severe decompression sickness was not very good. These low success rates led to the development of the oxygen treatment table by Goodman and Workman in 1965, variations of which are still in general use as the definitive treatment for most cases of decompression sickness. Background: In water recompression Treatment of DCS utilizing the US Navy Treatment Table 6 with oxygen at 18 m is a standard of care. Significant delay to treatment, difficult transport, and facilities with limited experience may lead one to consider on-site treatment.
Surface oxygen for first aid has been shown to improve the efficacy of recompression and to decrease the number of recompression treatments required when administered within four hours post dive. IWR to 9 m breathing oxygen is one option that has shown success over the years. IWR is not without risk and should be undertaken with certain precautions. IWR would only be suitable for an organised and disciplined group of divers with suitable equipment and practical training in the procedure. Applications: Treatment of decompression sickness, arterial gas embolism, and other medical conditions. Equipment: Recompression chamber The type of chamber which can be used depends on the maximum pressure required for the schedule and on what gases are used for treatment. Most treatment protocols for diving injuries require an attendant in the chamber, and a medical lock to transfer medical supplies into the chamber while under pressure. Equipment: Monoplace chambers Outside of the diving industry, most chambers are intended for a single occupant, and not all of them are fitted with built-in breathing systems (BIBS). This limits the schedules which can be safely used in them. Some schedules have been developed specifically for hyperbaric oxygen treatment in monoplace chambers, and some hyperbaric treatment schedules nominally intended for chambers with BIBS have been shown to be acceptable for use without air breaks if the preferred facilities are not available. Equipment: Treatment gases Originally, therapeutic recompression was done using air as the only breathing gas, and this is reflected in several of the tables detailed below. However, work by Yarbrough and Behnke showed that use of oxygen as a treatment gas is usually beneficial, and this has become the standard of care for treatment of DCS.
Pure oxygen can be used at pressures up to 60 fsw (18 msw) with acceptable risk of CNS oxygen toxicity, which generally has acceptable consequences in the chamber environment when an inside tender is at hand. At greater pressures, treatment gas mixtures using nitrogen or helium as a diluent to limit the partial pressure of oxygen to 3 ata (3 bar) or less are preferred to air, as they are more effective than air both at eliminating inert gas and at oxygenating injured tissues. Nitrox and Heliox mixtures are recommended by the US Navy as treatment gases at pressures exceeding 60 fsw (18 msw), and Heliox is preferred at pressures exceeding 165 fsw (50 msw) to reduce nitrogen narcosis. High oxygen fraction gas mixtures may also be substituted for pure oxygen at pressures less than 60 fsw if the patient does not tolerate 100% oxygen. Equipment: Built-in breathing system Treatment gases are generally oxygen or oxygen-rich mixtures which would constitute an unacceptable fire hazard if used as the chamber gas. Chamber oxygen concentration is limited due to fire hazard and the high risk of fatality or severe injury in the event of a chamber fire. US Navy specifications for oxygen content of chamber air allow a range from 19% to 25%. If the oxygen fraction rises above this limit, the chamber must be ventilated with air to bring the concentration to an acceptable level. To minimize the requirement for venting, oxygen-rich treatment gases are usually provided to the patient by built-in breathing system (BIBS) masks, which vent exhaled gas outside the chamber. BIBS masks are provided with straps to hold them in place over the mouth and nose, but are often held in place manually, so that they fall away if the user has an oxygen toxicity convulsion. Equipment: BIBS masks provide gas on demand (inhalation), much like a diving regulator, and use a similar system to control outflow to the normobaric environment.
They are connected to supply lines plumbed through the pressure hull of the chamber, valved on both sides, and supplied from banks of storage cylinders, usually kept near the chamber. The BIBS system is normally used with medical oxygen, but can be connected to other breathing gases as required. Chamber gas oxygen content is usually monitored by bleeding chamber gas past an electro-galvanic oxygen sensor cell. Units of measurement used in hyperbaric treatment: The commonly used units of pressure for hyperbaric treatment are metres of sea water (msw) and feet of sea water (fsw) which indicate the pressure of treatment in terms of the height of water column that would be supported in a manometer. These units are also used for measuring the depth of a surface supplied diver using a pneumofathometer and directly relate the pressure to an equivalent depth. The pressure gauges used on diving chambers are often calibrated in both of these units. Elapsed time of treatment is usually recorded in minutes, or hours and minutes, and may be measured from the start of pressurisation, or from the time when treatment pressure is reached. Hyperbaric chamber treatment schedules: The schedules listed here include both historical procedures and schedules currently in use. As a general rule, more recent tables from the same source have a greater success rate than the superseded schedules. Some of the older procedures are now considered to be dangerous. US Navy 1943 100-foot Air Treatment Table Use: Treatment of decompression sickness where relief is obtained at or less than 66 fsw. Obsolete Oxygen is not used Maximum pressure 100 fsw (30 msw) Run time 3 hours 37 minutes US Navy 1943 150-foot Air Treatment Table Use: Treatment of decompression sickness where relief is obtained at or less than 116 fsw. 
Obsolete Oxygen is not used Maximum pressure 150 fsw (46 msw) Run time 4 hours 55 minutes US Navy 1943 200-foot Air Treatment Table Use: Treatment of decompression sickness where relief is obtained at or less than 166 fsw. Obsolete Oxygen is not used Maximum pressure 200 fsw (61 msw) Run time 5 hours 58 minutes US Navy 1943 250-foot Air Treatment Table Use: Treatment of decompression sickness where relief is obtained at or less than 216 fsw. Obsolete Oxygen is not used Maximum pressure 250 fsw (76 msw) Run time 6 hours 46 minutes US Navy 1943 300-foot Air Treatment Table Use: Treatment of decompression sickness where relief is obtained at or less than 266 fsw. Obsolete Oxygen is not used Maximum pressure 300 fsw (91 msw) Run time 7 hours 29 minutes US Navy 1944 Long Air Recompression Treatment Table Use: Treatment of moderate to severe decompression sickness when oxygen is not available or the patient cannot tolerate the elevated oxygen partial pressure. Oxygen is not used Maximum pressure 165 fsw (50 msw) Run time 5 hours 39 minutes US Navy 1944 Long Air Recompression Treatment Table with Oxygen Use: Treatment of moderate to severe decompression sickness when oxygen is available. Oxygen is used Maximum pressure 165 fsw (50 msw) Run time 3 hours 0 minutes US Navy 1944 Short Air Recompression Treatment Table Use: Treatment of mild decompression sickness when oxygen is not available or the patient cannot tolerate the elevated oxygen partial pressure. Oxygen is not used Maximum pressure 100 fsw (30 msw) Run time 5 hours 5 minutes US Navy 1944 Short Oxygen Recompression Treatment Table Use: Treatment of mild decompression sickness. Oxygen is used Maximum pressure 100 fsw (30 msw) Run time 2 hours 17 minutes US Navy Recompression Treatment Table 1 Use: Treatment of pain only decompression sickness. 
Hyperbaric chamber treatment schedules: Pain is relieved at less than 66 fsw (20 msw) Oxygen is available Maximum pressure 100 fsw (30 msw) Run time 2 hours 21 minutes Omitted from the US Navy Diving Manual since Revision 6 US Navy Air Treatment Table 1A Table 1A is included in the US Navy Diving Manual Revision 6 and is authorized for use as a last resort when oxygen is not available. This table has been revised by decreasing the ascent rate from 1 minute between stops to 1 fsw per minute since the original was published in 1958. Use: For treatment of pain only decompression sickness. Hyperbaric chamber treatment schedules: Pain is relieved at less than 66 fsw (20 msw) Air only, no oxygen. Maximum pressure 100 fsw (30 msw) Run time 7 hours 52 minutes US Navy Recompression Treatment Table 2 Use: Treatment of pain-only decompression sickness. Hyperbaric chamber treatment schedules: Pain is relieved at greater than 66 fsw (20 msw) Oxygen available Maximum pressure 165 fsw (50 msw) Run time 4 hours 1 minute US Navy Air Treatment Table 2A Table 2A is included in the US Navy Diving Manual Revision 6 and is authorized for use as a last resort when oxygen is not available. This table has been revised by decreasing the ascent rate from 1 minute between stops to 1 fsw per minute since the original was published in 1958. Use: Treatment of pain only decompression sickness when oxygen cannot be used. Hyperbaric chamber treatment schedules: Pain is relieved at a depth greater than 66 fsw (20 msw). Hyperbaric chamber treatment schedules: Oxygen not available Maximum pressure 165 fsw (50 msw) Run time 13 hours 33 minutes US Navy Air Treatment Table 3 Table 3 is included in the US Navy Diving Manual Revision 6 and is authorized for use as a last resort when oxygen is not available.
This table has been revised by decreasing the ascent rate from 1 minute between stops to 1 fsw per minute since the original was published in 1958. Use: Treatment of serious symptoms when oxygen cannot be used and symptoms are relieved within 30 minutes at 165 fsw. Hyperbaric chamber treatment schedules: Oxygen not available Maximum pressure 165 fsw (50 msw) Run time 21 hours 33 minutes US Navy Recompression Treatment Table 4 This table is in the US Navy Diving Manual Revision 6 and is currently authorized for use. Use: Treatment of serious symptoms when oxygen can be used and symptoms are not relieved within 30 minutes at 165 fsw (50 msw). Hyperbaric chamber treatment schedules: Oxygen-enriched treatment gases and oxygen may be used. Air may be used if nothing better is available. If oxygen breathing is interrupted, no compensation to the times is required. Oxygen partial pressure may not exceed 3 ata (3 bar). Maximum depth 165 fsw (50 msw) Time at 165 fsw optional from 30 minutes to 2 hours including compression Total run time 39 hours 6 minutes to 40 hours 36 minutes US Navy Recompression Treatment Table 5 Use: Treatment of pain-only decompression sickness when oxygen can be used and symptoms are relieved within 10 minutes at 60 fsw (18 msw). Treatment Table 5 is currently included in the US Navy Diving Manual and is approved for use. Oxygen treatment Maximum pressure 60 fsw (18 msw) Standard run time 2 hours 16 minutes The table may be extended by two oxygen-breathing periods at the 30 fsw (9 msw) stop US Navy Recompression Treatment Table 5A Use: Treatment of gas embolism when oxygen can be used and symptoms are relieved within 15 minutes at 165 fsw (50 msw). Treatment Table 5A is not currently included in the US Navy Diving Manual (Revision 6).
Oxygen treatment Maximum pressure 165 fsw (50 msw) Run time 2 hours 34 minutes US Navy Recompression Treatment Table 6 Use: Treatment of pain-only decompression sickness when oxygen can be used and symptoms are not relieved within 10 minutes at 60 fsw (18 msw). Oxygen treatment Maximum pressure 60 fsw (18 msw) Run time 4 hours 45 minutes Catalina modification The Catalina treatment table is a modification of Treatment Table 6. Oxygen cycles are 20 minutes, and air breaks 5 minutes. The full Catalina Table allows for up to 5 extensions at 60 fsw. Shorter versions include: 3 oxygen cycles at 60 fsw followed by a minimum of 6 oxygen cycles at 30 fsw (equivalent to USN Table 6); 4 oxygen cycles at 60 fsw followed by a minimum of 9 oxygen cycles at 30 fsw; 5 to 8 oxygen cycles at 60 fsw followed by a minimum of 12 oxygen cycles at 30 fsw. Tenders breathe oxygen for 60 minutes at 30 fsw. Further treatments may follow after at least 12 hours on air at the surface. US Navy Recompression Treatment Table 6A Use: Treatment of gas embolism when oxygen can be used and symptoms moderate to a major extent within 30 minutes at 165 fsw. This treatment table is included in the US Navy Diving Manual Revision 6 and is currently authorized for use. It has been updated since original publication. Hyperbaric chamber treatment schedules: Oxygen treatment Optional treatment with oxygen-enriched gases (Heliox or Nitrox) not exceeding 3.0 ata (3 bar) partial pressure of oxygen if available Maximum pressure 165 fsw (50 msw) Nominal run time 5 hours 50 minutes from reaching full pressure. At 50 msw (absolute pressure 6 bar) an oxygen fraction of 50% will produce a partial pressure of 3 bar; this could be a Nitrox, Heliox or Trimix blend with 50% oxygen. Hyperbaric chamber treatment schedules: US Navy Treatment Table 7 Use: Treatment of non-responding severe gas embolism or life-threatening decompression sickness. It is used when loss of life may result from decompression from 60 fsw.
It is not used to treat residual symptoms that do not improve at 60 fsw, or to treat residual pain. Treatment Table 7 is included in the US Navy Diving Manual Revision 6 and is currently authorized for use. Hyperbaric chamber treatment schedules: Oxygen is used if practicable Maximum pressure 60 fsw (18 msw) Minimum time at 60 fsw is 12 hours. Decompression following this length of exposure is generally considered decompression from saturation, so the decompression profile is not affected by longer exposure at 60 fsw. Use of this table may be preceded by initial treatment on Table 6, 6A or 4. Table 7 treatment begins on arrival at 60 fsw. Hyperbaric chamber treatment schedules: Duration of decompression is 36 hours Decompression comprises an approximated continuous ascent with stops every 2 fsw as shown in the graphic profile, with a stop at 4 fsw for 4 hours to avoid inadvertent loss of pressure due to seal failure at low pressure differences. US Navy Treatment Table 8 Use: Mainly for treating deep uncontrolled ascents when more than 60 minutes of decompression have been omitted. Treatment Table 8 is included in the US Navy Diving Manual Revision 6 and is currently authorized for use. Adapted from Royal Navy Treatment Table 65. The patient is recompressed to the pressure of symptomatic relief, but not exceeding 225 fsw, and treatment initiated. Once begun, decompression is continuous, but may be interrupted at 60 fsw or shallower. Heliox mixtures may be used at pressures exceeding 165 fsw to reduce nitrogen narcosis. Heliox 64/36 is the preferred treatment gas. Hyperbaric chamber treatment schedules: Heliox or Nitrox with partial pressure not exceeding 3 ata may be used as treatment gas at pressures less than 165 fsw 100% oxygen may be used as treatment gas at pressures less than 60 fsw Decompression is done by 2 fsw pressure decrements unless the start depth is an odd number, in which case the first stop is at a 3 fsw reduction in pressure.
Stop times vary according to the depth range of the stop. Shorter stops are done at greater pressures, and the stop time increases as the stops get shallower. Hyperbaric chamber treatment schedules: Nominal total ascent time from 225 fsw is 56 hours 29 minutes. US Navy Treatment Table 9 Use: Hyperbaric oxygen treatment as prescribed by a Diving Medical Officer for: Residual symptoms after treatment for AGE/DCS Cases of carbon monoxide or cyanide poisoning Smoke inhalation Initial treatment of patients urgently needing definitive medical care for severe injuries. Maximum pressure 45 fsw (13.5 msw) Nominal elapsed time excluding pressurization 102 minutes Treatment depth may be reduced to 30 fsw (9 msw) if the patient cannot tolerate oxygen at 45 fsw (13.5 msw). The table may be extended to a maximum of 4 hours oxygen breathing time. US Navy Treatment Table for decompression sickness occurring on saturation dives Use: For treatment of decompression sickness manifested as musculoskeletal pains only, during decompression from saturation. Maximum pressure specified is 1600 fsw Recompression in increments of 10 fsw at 5 fsw per minute until the diver reports improvement. It is not usually necessary or desirable to recompress by more than 30 fsw. Treatment gas with an oxygen partial pressure of up to 2.5 atm may be administered by BIBS mask for periods of 20 minutes, with breaks of 5 minutes on chamber gas during recompression and holding periods. Pure oxygen may be used at pressures less than 60 fsw. Use: For treatment of serious decompression sickness resulting from an upward excursion. Recompression is immediate at 30 fsw per minute to at least the depth from which the excursion started. If this does not provide complete relief, compression should continue until relief is reported.
Hold at relief depth for at least 2 hours for pain-only symptoms and at least 12 hours for serious symptoms. Decompress after treatment according to the normal saturation decompression schedule from the treatment depth. Tektite I and II Treatment and emergency decompression schedule for a 42 to 50-foot saturation dive Treatment of Tektite aquanauts after emergency surfacing. Saturation gas mixture Nitrox 9% Oxygen available Maximum pressure 60 fsw (18 msw) Run time 14 hours 40 minutes Tektite II Treatment and emergency decompression schedule for the 100-foot saturation dive Treatment of Tektite aquanauts after emergency surfacing. Oxygen available Maximum pressure up to 200 fsw Run time variable depending on circumstances Royal Navy 1943 Recompression Treatment Procedure Treatment of any decompression sickness symptoms. Hyperbaric chamber treatment schedules: Oxygen not used Maximum pressure variable up to 225 fsw (68 msw) Run time 4 hours 57 minutes to 5 hours 57 minutes Royal Navy Table 51 - Air Recompression Therapy Use: Treatment of pain-only decompression sickness when oxygen is not available and pain is relieved within 10 minutes at or less than 20 msw (66 fsw) Oxygen not used Maximum pressure 30 msw (98 fsw) Run time 7 hours 5 minutes Royal Navy Table 52 - Air Recompression Therapy Use: Treatment of pain-only decompression sickness when oxygen is not available and pain is not relieved within 10 minutes at or less than 20 msw (66 fsw) but is relieved within 10 minutes at 50 msw (165 fsw).
Hyperbaric chamber treatment schedules: Oxygen not used Maximum pressure 50 msw (164 fsw) Run time 9 hours 58 minutes Royal Navy Table 53 - Air Recompression Therapy Use: Treatment of joint pain plus a more serious symptom of decompression sickness when oxygen is not available and symptoms are relieved within 30 minutes at or less than 50 msw (164 fsw) Oxygen not used Maximum pressure 50 msw (164 fsw) Run time 19 hours 48 minutes Royal Navy Table 54 - Air Recompression Therapy Use: Treatment of joint pain plus a more serious symptom of decompression sickness when oxygen is available and symptoms are not relieved within 30 minutes at or less than 50 msw (164 fsw) Oxygen available Maximum pressure 50 msw (164 fsw) Run time 39 hours 0 minutes Royal Navy Table 55 - Air Recompression Therapy Use: Treatment of joint pain plus a more serious symptom of decompression sickness when oxygen is not available and symptoms are not relieved within 30 minutes at or less than 50 msw (164 fsw) Oxygen not available Maximum pressure 50 msw (164 fsw) Run time 43 hours 0 minutes Royal Navy Table 61 - Oxygen Recompression Therapy Use: Treatment of pain only decompression sickness when oxygen is available and pain is relieved within 10 minutes at or less than 18 msw (59 fsw), or for serious symptoms where a specialist medical officer is present. Hyperbaric chamber treatment schedules: Oxygen treatment Maximum pressure 18 msw (59 fsw) Run time 2 hours 17 minutes Royal Navy Table 62 - Oxygen Recompression Therapy Use: Treatment of pain only decompression sickness when oxygen is available and pain is not relieved within 10 minutes at 18 msw (59 fsw), or for serious symptoms where a specialist medical officer is present. Oxygen treatment Maximum pressure 18 msw (59 fsw) Run time 4 hours 47 minutes Royal Navy Table 71 - Modified Air Recompression Table Use: Treatment of any decompression symptom if a specialist medical officer is present.
Oxygen not available Maximum pressure 70 msw (230 fsw) Run time 47 hours 44 minutes Royal Navy Table 72 - Modified Air Recompression Therapy Use: Treatment of any decompression symptom if a specialist medical officer is present. Applicable for multiple recompression of submarine survivors. Oxygen not available Maximum pressure 50 msw (164 fsw) Run time 46 hours 45 minutes RNPL Therapeutic Decompression from a Helium-Oxygen Recompression Use: Treatment of decompression sickness occurring during decompression from a Heliox dive. Oxygen not used Maximum pressure variable. May be greater than 137 msw (450 fsw) Run time depends on treatment depth French Navy Recompression Treatment Table 1 (GERS 1962) Use: Treatment of mild decompression sickness. Oxygen is available Maximum pressure 30 msw (98 fsw) Run time 4 hours 12 minutes French Navy Recompression Treatment Table 2 (GERS 1962) Use: Treatment of mild to moderate decompression sickness. Oxygen is available Maximum pressure 50 msw (164 fsw) Run time 6 hours 44 minutes French Navy Recompression Treatment Table 3 (GERS 1962) Use: Treatment of moderate to severe decompression sickness. Oxygen is available Maximum pressure 50 msw (164 fsw) Run time 12 hours 44 minutes French Navy Recompression Treatment Table 4 (GERS 1962) Use: Treatment of severe decompression sickness. Oxygen is available Maximum pressure 50 msw (164 fsw) Run time 36 hours 14 minutes or 37 hours 44 minutes French Navy Recompression Treatment Table 4A (GERS 1962) Use: Treatment of severe decompression sickness. Oxygen is not available Maximum pressure 50 msw (164 fsw) Run time 38 hours 14 minutes or 39 hours 39 minutes French Navy Air Recompression Treatment Table (GERS 1964) Use: Treatment of decompression sickness. 
Oxygen is not available or the patient cannot tolerate high partial pressures of oxygen Maximum pressure 50 msw (164 fsw) Run time 73 hours 10 minutes French Navy Air Recompression Treatment Table (GERS 1964) Use: Treatment of decompression sickness. Oxygen is not available or the patient cannot tolerate high partial pressures of oxygen Maximum pressure 50 msw (164 fsw) Run time 76 hours 40 minutes French Navy High-Oxygen Recompression Treatment Table (GERS 1964) Use: Treatment of moderately severe decompression sickness. Oxygen is available Maximum pressure 30 msw (98 fsw) Run time between 20 hours 33 minutes and 36 hours 3 minutes French Navy Recompression Treatment Table A (GERS 1968) Use: Treatment of mild decompression sickness after dives to less than 40 m depth. Oxygen is available Maximum pressure 30 msw (98 fsw) Run time 5 hours 33 minutes French Navy Recompression Treatment Table B (GERS 1968) Use: Treatment of mild decompression sickness after dives to more than 40 m depth. Oxygen is available Maximum pressure 30 msw (98 fsw) Run time 8 hours 3 minutes French Navy Recompression Treatment Table C (GERS 1968) Use: Treatment of moderately severe decompression sickness after dives to more than 40 m depth, or severe decompression sickness after dives shallower than 40 m. Oxygen is available Maximum pressure 30 msw (98 fsw) Run time 14 hours 29 minutes to 36 hours 57 minutes French Navy Recompression Treatment Table D (GERS 1968) Use: Treatment of moderately severe and severe decompression sickness. Oxygen is not available or cannot be tolerated by the patient Maximum pressure 50 msw (164 fsw) Run time 69 hours 45 minutes or 77 hours 45 minutes French Navy Recompression Treatment Table 1A (GERS 1968) Use: Treatment of mild decompression sickness after dives to less than 40 m.
Oxygen is not available or cannot be tolerated by the patient Maximum pressure 30 msw (98 fsw) Run time 7 hours 18 minutes French Navy Recompression Treatment Table 2A (GERS 1968) Use: Treatment of mild decompression sickness after dives to more than 40 m. Oxygen is not available or cannot be tolerated by the patient Maximum pressure 50 msw (164 fsw) Run time 12 hours 45 minutes French Navy Recompression Treatment Table 3A (GERS 1968) Use: Treatment of moderate or severe decompression sickness. Oxygen is not available or cannot be tolerated by the patient Maximum pressure 50 msw (164 fsw) Run time 20 hours 45 minutes Comex Therapeutic Table CX 12 Use: Treatment of musculoskeletal decompression sickness following normal decompression if symptoms are relieved within 4 minutes at or less than 8 msw. Oxygen is available Maximum pressure 12 msw (40 fsw) Run time 2 hours 10 minutes Comex Therapeutic Table 18C Use: Treatment of musculoskeletal decompression sickness following normal or shortened decompression if symptoms are not relieved within 4 minutes at 8 msw, but are relieved within 15 minutes at or less than 18 msw. Oxygen is available Maximum pressure 18 msw (60 fsw) Run time 2 hours 54 minutes Comex Therapeutic Table 18L Use: Treatment of musculoskeletal decompression sickness following normal or shortened decompression if symptoms are not relieved within 15 minutes at 18 msw. Oxygen is available Maximum pressure 18 msw (60 fsw) Run time 4 hours 59 minutes Comex Therapeutic Table CX 30 Use: Treatment of vestibular and general neurological decompression sickness following normal or shortened decompression. Oxygen and Heliox 50 or Nitrox 50 are available Maximum pressure 30 msw (100 fsw) Run time 7 hours 2 minutes Comex Therapeutic Table CX 30A Use: Treatment of musculoskeletal decompression sickness when signs of oxygen toxicity are present.
Oxygen is available Maximum pressure 30 msw Run time 8 hours 44 minutes Comex Therapeutic Table CX 30AL Use: Treatment of vestibular and general neurological decompression sickness when signs of oxygen toxicity are present. Oxygen is available Maximum pressure 30 msw Run time 11 hours 8 minutes Russian Therapeutic Recompression Regimen I Use: Treatment of light forms of decompression sickness when the symptoms are completely resolved on reaching a pressure of 29 msw (96 fsw). Oxygen is not used Maximum pressure 49 msw (160 fsw) Run time 13 hours 9 minutes Russian Therapeutic Recompression Regimen II Use: Treatment of light forms of decompression sickness when the symptoms are completely resolved on reaching a pressure of 49 msw (160 fsw), or if there is a relapse after use of Regimen I. Oxygen is not used Maximum pressure 49 msw (160 fsw) Run time 26 hours 11 minutes Russian Therapeutic Recompression Regimen III Use: Treatment of moderately severe decompression sickness, or if there is a relapse after use of Regimen II. Oxygen is not used Maximum pressure 68 msw (224 fsw) Run time 31 hours 26 minutes Russian Therapeutic Recompression Regimen IV Use: Treatment of severe decompression sickness, or if there is a relapse after use of Regimen III. Oxygen is not used Maximum pressure 97 msw (320 fsw) Run time 39 hours 2 minutes Russian Therapeutic Recompression Regimen V Use: Treatment of very severe decompression sickness, or if there is a relapse after use of Regimen IV. Oxygen is not used. Helium may optionally be used for compression below 224 fsw in addition to the air used for initial compression.
Hyperbaric chamber treatment schedules: Maximum pressure 97 msw (320 fsw) Run time 87 hours 7 minutes (3 days 15 hours 7 minutes) German Short Air Recompression Treatment Table used during the Rendsburg pedestrian tunnel project Use: Treatment of mild decompression sickness where relief occurs within 30 minutes at 30 msw (98 fsw) Oxygen not used Maximum pressure 30 msw (98 fsw) Run time 2 hours 18 minutes German Recompression Treatment Table used during the Rendsburg pedestrian tunnel project Use: Treatment of mild decompression sickness where relief does not occur within 30 minutes at 30 msw (98 fsw) Oxygen is used Maximum pressure 30 msw (98 fsw) Run time 5 hours 24 minutes German Recompression Treatment Table used during the Rendsburg pedestrian tunnel project Use: Treatment of severe decompression sickness where relief does not occur within 30 minutes at 30 msw (98 fsw) Oxygen is used Maximum pressure 30 msw (98 fsw) Run time 36 hours 55 minutes or 38 hours 25 minutes Oxygen tables designed for monoplace chambers: (specifically for chambers without facility for air breaks) Hart monoplace table 100% oxygen for 30 minutes at 3.0 ATA followed by 60 minutes at 2.5 ATA. Oxygen tables designed for monoplace chambers: Kindwall's monoplace table Indication: Pain-only or skin bends for symptoms that resolve within 10 minutes of reaching treatment depth: 30 minutes at 2.8 bar (60 fsw) Continuous decompression to 1.9 bar over 15 minutes 60 minutes at 1.9 bar (30 fsw) Continuous decompression to surface over 15 minutes Neurological decompression sickness, arterial gas embolism or unresolved symptoms after 10 minutes at treatment pressure: 30 minutes at 2.8 bar (60 fsw) Continuous decompression to 1.9 bar over 30 minutes 60 minutes at 1.9 bar (30 fsw) Continuous decompression to surface over 30 minutes Repeat after 30 minutes on air at surface pressure if symptoms have not resolved.
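The unit conventions used throughout these schedules can be sketched with a few helper functions: depth in msw adds roughly 0.1 bar per metre to the 1 bar of atmosphere, and the partial pressure of oxygen is the oxygen fraction times the absolute pressure. The function names below are illustrative, and the fsw-to-msw factor uses the rounded equivalence the tables themselves use (60 fsw = 18 msw, 165 fsw ≈ 50 msw):

```python
def fsw_to_msw(depth_fsw: float) -> float:
    """Convert feet of sea water to metres of sea water.

    Uses the rounded equivalence the treatment tables use: 60 fsw = 18 msw."""
    return depth_fsw * 0.3

def msw_to_bar_abs(depth_msw: float) -> float:
    """Absolute pressure in bar at depth: 1 bar of atmosphere + 0.1 bar per msw."""
    return 1.0 + depth_msw / 10.0

def ppo2(depth_msw: float, oxygen_fraction: float) -> float:
    """Partial pressure of oxygen in bar for a given gas fraction at depth."""
    return oxygen_fraction * msw_to_bar_abs(depth_msw)

# Figures quoted in the text:
print(fsw_to_msw(60))        # 18.0 msw
print(msw_to_bar_abs(50))    # 6.0 bar absolute at 50 msw
print(ppo2(50, 0.50))        # 3.0 bar -- the 50% mix at 50 msw (Table 6A)
print(ppo2(18, 1.00))        # 2.8 bar -- pure oxygen at 18 msw (60 fsw)
```

These reproduce the figures quoted above: the 3 ata limit reached by a 50% oxygen blend at 50 msw, and the 2.8 bar of Kindwall's monoplace table at 60 fsw on pure oxygen.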
In-water recompression schedules: In-water recompression (IWR) or underwater oxygen treatment is the emergency treatment of decompression sickness (DCS) by sending the diver back underwater to allow the gas bubbles in the tissues, which are causing the symptoms, to resolve. It is a risky procedure that should only ever be used when the time to travel to the nearest recompression chamber is too long to save the victim's life. Carrying out in-water recompression when there is a nearby recompression chamber, or without special equipment and training, is never a favoured option. The risk of the procedure comes from the fact that a diver with DCS is seriously ill and may become paralysed, unconscious or stop breathing whilst under water. Any one of these events is likely to result in the diver drowning or further injury to the diver during a subsequent rescue to the surface. In-water recompression schedules: Six IWR treatment tables have been published in the scientific literature. Each of these methods has several commonalities, including the use of a full face mask, a tender to supervise the diver during treatment, a weighted recompression line and a means of communication. The history of the three older methods for providing oxygen at 9 m (30 fsw) was described in great detail by Drs. Richard Pyle and Youngblood. The fourth method for providing oxygen at 7.5 m (25 fsw) was described by Pyle at the 48th Annual UHMS Workshop on In-water Recompression in 1999.
The Clipperton method involves recompression to 9 m (30 fsw), while the Clipperton(a) rebreather method involves recompression to 30 m (98 fsw). Recommended equipment common to these tables includes: a means of securely holding the casualty at a measured depth, such as a harness and a 20 metre lazy shot line with a 20 kg lead weight at the bottom and a buoy of at least 40 litres buoyancy at the top; a means of allowing the casualty to ascend slowly, such as loops in the line to which the harness can be clipped; full face diving masks for the casualty and for an in-water attendant diver, with two-way communication to the surface and an umbilical gas supply system; and surface supplied breathing gases, including pure oxygen and air, delivered to the casualty by umbilical. Australian In-water Recompression Table The Australian IWR Tables were developed by the Royal Australian Navy in the 1960s in response to their need for treatment in remote locations far away from recompression chambers. It was the shallow portion of a table developed for recompression chamber use. Oxygen is breathed for the entire duration of the treatment without any air breaks and is followed by alternating periods (12 hours) of oxygen and air breathing on the surface. In-water recompression schedules: Clipperton In-water Recompression Tables The Clipperton and Clipperton(a) methods were developed for use on a scientific mission to the atoll of Clipperton, 1,300 km from the Mexican coast. The two versions are based on the equipment available for treatment, with the Clipperton(a) table being designed for use with rebreathers. In-water recompression schedules: Both methods begin with 10 minutes of surface oxygen. For the Clipperton IWR table, oxygen is then breathed for the entire duration of the treatment without any air breaks. For the Clipperton(a) IWR table, descent is made to the initial treatment depth maintaining a partial pressure of 1.4 ATA.
Oxygen breathing on the surface for 6 hours post treatment and intravenous fluids are also administered following both treatment tables. In-water recompression schedules: Hawaiian In-water Recompression Table The Hawaiian IWR table was first described by Farm et al. while studying the diving habits of Hawaii's diving fishermen. The initial portion of the treatment involves descent on air to the depth of relief plus 30 fsw, or a maximum of 165 fsw, for ten minutes. Ascent from the initial treatment depth to 30 fsw occurs over 10 minutes. The diver then completes the treatment breathing oxygen, followed by oxygen breathing on the surface for 30 minutes post treatment. In-water recompression schedules: The Hawaiian IWR Table with Pyle modifications can be found in the proceedings of the DAN 2008 Technical Diving Conference (In Press) or by download from DAN. In-water recompression schedules: Pyle In-water Recompression Table The Pyle IWR table was developed by Dr. Richard Pyle as a method for treating DCS in the field following scientific dives. This method begins with a 10-minute surface oxygen evaluation period, followed by compression to 25 fsw on oxygen for another 10-minute evaluation period. The table is best described by its treatment algorithm (the Pyle IWR algorithm). This table does include alternating air breathing periods or "air breaks". In-water recompression schedules: US Navy In-water Recompression Tables The US Navy developed two IWR treatment tables. The table used depends on the symptoms diagnosed by the medical officer (§ 20‑4.4.2.2). Oxygen is breathed for the entire duration of the treatment without any air breaks and is followed by 3 hours of oxygen breathing on the surface.
In-water recompression schedules: The diver descends to 30 feet accompanied by a standby diver and remains there for 60 minutes for Type I symptoms or 90 minutes for Type II symptoms; after this the diver ascends to 20 feet even if symptoms have not resolved, and decompresses for 60 minutes at 20 feet and 60 minutes at 10 feet. Oxygen is breathed for another 3 hours after surfacing (§ 20‑4.4.2.2). Royal Navy Table 81 - Emergency therapy in the water Use: Emergency in-water recompression when no chamber is available. In-water recompression schedules: Oxygen is not used Maximum depth 30 m (98 ft) for 5 minutes Continuous ascent to 20 m at 4.5 minutes per metre Continuous ascent to 10 m at 8 minutes per metre Continuous ascent to surface at 15 minutes per metre Run time 4 hours 41 minutes "Informal" in-water recompression: Although in-water recompression is regarded as risky, and to be avoided, there is increasing evidence that technical divers who surface and demonstrate mild DCS symptoms may often get back into the water and breathe pure oxygen at a depth of 20 feet (6.1 meters) for a period of time to try to alleviate the symptoms. This trend is noted in paragraph 3.6.5 of DAN's 2008 accident report. The report also notes that whilst the reported incidents showed very little success, "[w]e must recognize that these calls were mostly because the attempted IWR failed. In case the IWR were successful, [the] diver would not have called to report the event. Thus we do not know how often IWR may have been used successfully." Other tables to be fitted in later: Lambertsen/Solus Ocean Systems Table 7A Used in commercial diving for: symptoms that develop at pressure; recompression deeper than 165 fsw (50 msw); or where extended decompression is necessary. Depth limit 200 fsw for air. IANTD in-water recompression schedules Used for emergency recompression of technical divers in remote areas.
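The Royal Navy Table 81 run time can be checked from the profile itself. A sketch, assuming all three ascent rates are in minutes per metre (reading the final rate as 15 minutes per metre, since that is what reproduces the stated run time of about 4 hours 41 minutes):

```python
# Royal Navy Table 81 profile: 5 min at 30 m, then continuous ascents
# at 4.5 min/m from 30 m to 20 m, 8 min/m from 20 m to 10 m, and
# (assumed) 15 min/m from 10 m to the surface.
segments = [
    ("hold at 30 m",                 5.0),
    ("30 m -> 20 m at 4.5 min/m",    10 * 4.5),
    ("20 m -> 10 m at 8 min/m",      10 * 8.0),
    ("10 m -> surface at 15 min/m",  10 * 15.0),
]
total = sum(minutes for _, minutes in segments)
print(f"{int(total // 60)} h {int(total % 60)} min")  # 4 h 40 min
```

The sum is 280 minutes (4 h 40 min); the stated 4 h 41 min presumably includes roughly a minute of initial descent.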
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Drugs in Cambodia** Drugs in Cambodia: In Cambodia, drugs, including illegal substances, are readily available and easy to access. Illegal drugs: The Cambodian black market trade of illicit drugs includes cannabis, methamphetamine, ketamine, MDMA and heroin. Cambodia remains a major supplier of cannabis to countries in East and Southeast Asia and other parts of the world. Large amounts of heroin are also smuggled throughout the Golden Triangle. Drug abuse is increasing among street children and rates of HIV/AIDS are increasing due to intravenous drug usage. Illegal drugs: Golden Triangle The Golden Triangle is one of Asia's main opium-producing areas, the other being the Golden Crescent. The triangle is an area of roughly 350,000 square kilometers that overlaps the mountainous regions of the Southeast Asian nations of Myanmar (also known as Burma), Laos, and Thailand. The Golden Crescent is located at the intersection of Central, South, and Western Asia. This area overlaps the borders of Afghanistan, Iran, and Pakistan, whose mountainous regions define the location of the crescent. In both of these areas illicit drugs are produced, and traffickers export these drugs out of the country or ship them throughout neighboring nations. The Ruak and Mekong rivers of these areas also influence the ease with which drugs are transported through the region. Illegal drugs: The drugs are transported through the Triangle by horse and donkey caravans to refineries where the drugs are further processed and refined to become more pure. They are then most frequently brought to the United States and other countries by couriers traveling on commercial airlines, or they are smuggled into the country in shipping containers. Illegal drugs: Teng Bunma Teng Bunma was one of the wealthiest people in Cambodia with connections to leading politicians, military officials, and businessmen. He is believed to have been one of the main Cambodian drug lords.
Bunma was the owner of the luxury hotel "Intercontinental" in Phnom Penh and of "Rasmei Kampuchea," the largest daily newspaper in Cambodia with a circulation of about 18,000. For years Bunma was denied entry into the USA because of his appearance on the list of suspected drug dealers. A 1996 article, "Medellin on the Mekong" in the Hong Kong-based Far Eastern Economic Review, by United States journalist Nate Thayer, described Teng Bunma as a significant figure in Cambodia's international drug-smuggling trade. Illegal drugs: Recent developments Concerning the online drug market and trade which exists in Cambodia, international intervention recently enabled officials to prevent 10,000 tablets of Codeine and Valium from being sent out of Cambodia to the United States and United Kingdom. Illegal drugs: Johanne Vinther Axelsen, 55, of Denmark, was arrested and convicted of drug trafficking and was sentenced to 15 years in prison. It is alleged she tried to mail illegal drugs, which she claims she did not know were illegal, out of Cambodia to the United States and United Kingdom on behalf of her drug dealing son, Niels Eikeland. Many human rights activists in Denmark advocated for the release of Axelsen due to the inhumane conditions of the prison, and the Danish government worked with Cambodian officials to appeal the court ruling. She was later released through diplomacy and a private payment of $8,000 to officials so they would bend their policies. In mid-2008 three foreign nationals were arrested in Cambodia on charges of possessing, consuming, and intending to sell drugs. A Taiwanese-American, a Pakistani, and a British national, along with a Cambodian citizen, were among the arrested. They were allegedly caught in possession of nearly half a kilo of methamphetamines and cocaine.
The men potentially face life sentences if they are convicted. These recent developments and arrests further emphasize that Cambodia is cracking down on the drug trade which exists in the country. The secretary-general of the National Authority to Combat Drugs emphasized: "There will be further raids. We are cracking down hard on people who provide illegal drugs." On April 10, 2009, four drug traders and traffickers, including two Cambodian nationals, were prosecuted and arrested. The four had established a ring to transport synthetic drugs from Cambodia across the border to Vietnam, for the drugs to be distributed in cafes, bars, and discos. According to initial investigators, the ring had been operating for nearly a decade.
**Mercedes Benz sign** Mercedes Benz sign: The Mercedes Benz sign is a radiological sign seen due to the presence of gallstones. It is a triradiate shadow, characteristic of the Mercedes-Benz automobile trademark. The sign occurs due to gas fissuring within the gallstone.
**1 Persei** 1 Persei: 1 Persei (1 Per) is an eclipsing binary star in the constellation Perseus. Its uneclipsed apparent magnitude is 5.49. The binary star consists of two B2 type main-sequence stars in a 25.9 day eccentric orbit. The stars are surrounded by a faint cloud of gas visible in mid-infrared, although whether they are the origin of the gas or simply passing through it is unclear. Observational history: The possible eclipsing binary nature of 1 Persei was first noticed by Donald Kurtz in 1977 when it was used as a comparison star to test for photometric variability of HD 11408. In 1979 French amateur observers succeeded in determining an orbital period of 25.9 days. During the primary eclipse, the brightness drops to magnitude 5.85. In the secondary eclipses, the brightness drops to magnitude 5.74. The eclipses each last for approximately 25 hours.
**Riser card** Riser card: A riser card is a printed circuit board that gives a computer motherboard the option for additional expansion cards to be added to the computer. Usage: A riser is usually connected to the mainboard's slot through an edge connector, though some, such as NLX and Next Unit of Computing Extreme, instead are plugged into an edge connector on the mainboard itself. In general, the main purpose is to change the orientation of the expansion cards such that they fit a limited space within the casing. Usage: Riser cables Riser cables are an evolution of riser cards utilizing improved specifications (specifically the use of PCI Express) and better materials, which allow greater data transmission distances and greater orientation flexibility than traditional riser cards. These cables use a riser card PCB and an edge connector on each side of the cable, with a copper alloy surrounded by a plastic insulator that allows for the longer data transmission distances. Such cables are now commonly used in modern household gaming PCs to allow for different positioning of PCI Express cards and GPUs in a computer case. This allows for customization and the addition of extra parts to suit the builder's needs. They can additionally be installed into vertical brackets to function similarly to a riser card, but with further flexibility. They are also used in small-form-factor PCs to allow a GPU to be positioned behind a computer motherboard. Usage: Specifications There are only a few specified standards in regard to riser designs. Most use PCI Express edge connectors for data transfer. This allows for maximum data transfer speeds of 32 GB/s when using PCIe 4.0, along with 75 W of power to be delivered from the host device. Other specifications used for these cards include ExpressCard and PCI-X. Applications: Riser cards have applications in both industrial and consumer spaces.
Applications: Industrial In servers, height for expansion cards is limited by rack units. A unit (U) is the traditional measurement used for server height. One server unit is equal to 1.75", 2U servers are 3.5", and so forth. Traditional 1U riser cards each fit 1 PCI slot, and 2U riser cards can fit 2 or 3 PCI slots, depending on whether they obstruct access to any PCI-E slots. Applications: Consumer In small-form-factor (SFF) computers built by computer enthusiasts, PCI-E riser cards are used in a similar sense to a server application. They are used to sandwich a graphics card closer to a computer motherboard and are made to the same heights as server units for most applications. The additional flexibility afforded by PCI Express can allow for a GPU to be placed "behind" the mainboard, allowing space-efficient orientation without limiting the GPU's airflow.
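The rack-unit arithmetic above is simple enough to sketch in code, assuming the standard definition of 1U = 1.75 inches:

```python
# Rack unit (U) conversions: 1U = 1.75 inches; 1 inch = 25.4 mm.
INCHES_PER_U = 1.75
MM_PER_INCH = 25.4

def rack_units_to_inches(u: int) -> float:
    return u * INCHES_PER_U

def rack_units_to_mm(u: int) -> float:
    return rack_units_to_inches(u) * MM_PER_INCH

print(rack_units_to_inches(1))           # 1.75
print(rack_units_to_inches(2))           # 3.5  ("2U servers are 3.5 inches")
print(round(rack_units_to_mm(1), 2))     # 44.45
```

A riser card intended for a 1U chassis must therefore keep the expansion card's bracket-to-bracket height under roughly 44 mm, which is why 1U risers hold a single horizontal slot.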
**Hierarchy problem** Hierarchy problem: In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity. There is no scientific consensus on why, for example, the weak force is 10^24 times stronger than gravity. Technical definition: A hierarchy problem occurs when the fundamental value of some physical parameter, such as a coupling constant or a mass, in some Lagrangian is vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known as renormalization, which applies corrections to it. Typically the renormalized values of parameters are close to their fundamental values, but in some cases, it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related to fine-tuning problems and problems of naturalness. Over the past decade many scientists argued that the hierarchy problem is a specific application of Bayesian statistics. Technical definition: Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics are most important. Because we do not know the precise details of the shortest-distance theory of physics, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning. Overview: Suppose a physics model requires four parameters which allow it to produce a very high-quality working model, generating predictions of some aspect of our physical universe. Suppose we find through experiments that the parameters have values: 1.2, 1.31, 0.9 and 404,331,557,902,116,024,553,602,703,216.58 (roughly 4×10^29).
Overview: Scientists might wonder how such figures arise. In particular, they might be especially curious about a theory where three values are close to one and the fourth is so different; in other words, about the huge disproportion we seem to find between the first three parameters and the fourth. We might also wonder: if one force is so much weaker than the others that it needs a factor of 4×10^29 to allow it to be related to them in terms of effects, how did our universe come to be so exactly balanced when its forces emerged? In current particle physics, the differences between some parameters are much larger than this, so the question is even more noteworthy. Overview: One answer given by philosophers is the anthropic principle. If the universe came to exist by chance, and perhaps vast numbers of other universes exist or have existed, then life capable of physics experiments only arose in universes that by chance had very balanced forces. All of the universes where the forces were not balanced didn't develop life capable of asking this question. So if lifeforms like human beings are aware and capable of asking such a question, humans must have arisen in a universe having balanced forces, however rare that might be. Overview: A second possible answer is that there is a deeper understanding of physics that we currently do not possess. There might be parameters that we can derive physical constants from that have less unbalanced values, or there might be a model with fewer parameters. Examples in particle physics: The Higgs mass In particle physics, the most important hierarchy problem is the question that asks why the weak force is 10^24 times as strong as gravity. Both of these forces involve constants of nature, the Fermi constant for the weak force and the Newtonian constant of gravitation for gravity.
Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and is expected to be closer to Newton's constant unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it. Examples in particle physics: More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass. Examples in particle physics: The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings. Theoretical solutions There have been many proposed solutions by many physicists. UV/IR mixing In 2019, a pair of researchers proposed that IR/UV mixing resulting in the breakdown of the effective quantum field theory could resolve the hierarchy problem. In 2021, another group of researchers showed that UV/IR mixing could resolve the hierarchy problem in string theory. Examples in particle physics: Supersymmetry Some physicists believe that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion. 
This still leaves open the mu problem, however. The tenets of supersymmetry are being tested at the LHC, although no evidence has been found so far for supersymmetry. Examples in particle physics: Each particle that couples to the Higgs field has an associated Yukawa coupling λ_f. The coupling with the Higgs field for fermions gives an interaction term L_Yukawa = −λ_f ψ̄Hψ, with ψ being the Dirac field and H the Higgs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson will couple most to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying the Feynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be: Δm_H² = −(|λ_f|²/(8π²)) [Λ_UV² + ...]. Examples in particle physics: The Λ_UV is called the ultraviolet cutoff and is the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then we have the quadratically diverging Lagrangian. However, suppose there existed two complex scalars (taken to be spin 0) such that: λ_S = |λ_f|² (the couplings to the Higgs are exactly the same). Then by the Feynman rules, the correction (from both scalars) is: Δm_H² = 2 × (λ_S/(16π²)) [Λ_UV² + ...]. Examples in particle physics: (Note that the contribution here is positive. This is because of the spin-statistics theorem, which means that fermions will have a negative contribution and bosons a positive contribution. This fact is exploited.) This gives a total contribution to the Higgs mass of zero if we include both the fermionic and bosonic particles. Supersymmetry is an extension of this that creates 'superpartners' for all Standard Model particles. Examples in particle physics: Conformal Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model.
The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. If the Higgs field had no mass term, then no hierarchy problem arises. But by missing a quadratic term in the Higgs field, one must find a way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Weinberg–Coleman mechanism with terms in the Higgs potential arising from quantum corrections. Mass obtained in this way is far too small with respect to what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai and is currently under scrutiny. But if no further excitation is observed beyond the one seen so far at LHC, this model would have to be abandoned. Examples in particle physics: Extra dimensions No experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions. However, extra dimensions could explain why the gravity force is so weak, and why the expansion of the universe is faster than expected. If we live in a 3+1 dimensional world, then we calculate the gravitational force via Gauss's law for gravity: g(r) = −G m e_r / r²  (1), which is simply Newton's law of gravitation. Note that Newton's constant G can be rewritten in terms of the Planck mass: Examples in particle physics: G = ℏc / M_Pl². If we extend this idea to δ extra dimensions, then we get: g(r) = −m e_r / (M_Pl(3+1+δ)^(2+δ) r^(2+δ))  (2), where M_Pl(3+1+δ) is the (3+1+δ)-dimensional Planck mass. However, we are assuming that these extra dimensions are the same size as the normal 3+1 dimensions. Let us say instead that the extra dimensions are of size n, much smaller than the normal dimensions. If we let r ≪ n, then we get (2).
However, if we let r ≫ n, then we get our usual Newton's law. When r ≫ n, the flux in the extra dimensions becomes a constant, because there is no extra room for gravitational flux to flow through. Thus the flux will be proportional to n^δ, because this is the flux in the extra dimensions. The formula is: g(r) = −m e_r / (M_Pl(3+1+δ)^(2+δ) r² n^δ). Equating this with the usual four-dimensional law, −m e_r / (M_Pl² r²) = −m e_r / (M_Pl(3+1+δ)^(2+δ) r² n^δ), which gives: 1/(M_Pl² r²) = 1/(M_Pl(3+1+δ)^(2+δ) r² n^δ) ⇒ M_Pl² = M_Pl(3+1+δ)^(2+δ) n^δ. Examples in particle physics: Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong, but this must be compensated by the number of the extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions. This section is adapted from "Quantum Field Theory in a Nutshell" by A. Zee. Examples in particle physics: Braneworld models In 1998 Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali proposed the ADD model, also known as the model with large extra dimensions, an alternative scenario to explain the weakness of gravity relative to the other forces. This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale. In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles where he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem.
It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability. Examples in particle physics: Subsequently, the closely related Randall–Sundrum scenarios were proposed, offering their own solution to the hierarchy problem. Examples in particle physics: The cosmological constant In physical cosmology, current observations in favor of an accelerating universe imply the existence of a tiny, but nonzero, cosmological constant. This problem, called the cosmological constant problem, is a hierarchy problem very similar to that of the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections, but it is complicated by the necessary involvement of general relativity in the problem. Proposed solutions to the cosmological constant problem include modifying and/or extending gravity, adding matter with unvanishing pressure, and UV/IR mixing in the Standard Model and gravity. Some physicists have resorted to anthropic reasoning to solve the cosmological constant problem, but it is disputed whether anthropic reasoning is scientific.
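The extra-dimensional relation derived earlier, M_Pl² = M_Pl(3+1+δ)^(2+δ) n^δ, can be turned around to estimate how large the extra dimensions must be if the fundamental scale is near the electroweak scale. A rough numerical sketch in natural units; the ~1 TeV fundamental scale and the constants are illustrative assumptions, not values from the source:

```python
# ADD-style estimate: M_Pl^2 = M_star^(2+delta) * n^delta (natural units),
# so n = (M_Pl^2 / M_star^(2+delta))^(1/delta), measured in GeV^-1.
HBAR_C_M = 1.9733e-16   # metres per GeV^-1 (hbar*c)
M_PL = 1.22e19          # four-dimensional Planck mass, GeV
M_STAR = 1e3            # assumed fundamental scale ~1 TeV, GeV

def extra_dim_size_m(delta: int) -> float:
    """Required size n of each extra dimension, in metres."""
    n_inv_gev = (M_PL**2 / M_STAR**(2 + delta)) ** (1 / delta)
    return n_inv_gev * HBAR_C_M

for delta in (1, 2, 3):
    print(delta, extra_dim_size_m(delta))
```

With these numbers, δ = 1 would require a dimension of astronomical size (clearly excluded), while δ = 2 comes out at roughly the millimetre scale, reproducing the well-known order-of-magnitude result that motivated sub-millimetre tests of Newtonian gravity.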
**Spike-triggered average** Spike-triggered average: The spike-triggered average (STA) is a tool for characterizing the response properties of a neuron using the spikes emitted in response to a time-varying stimulus. The STA provides an estimate of a neuron's linear receptive field. It is a useful technique for the analysis of electrophysiological data. Spike-triggered average: Mathematically, the STA is the average stimulus preceding a spike. To compute the STA, the stimulus in the time window preceding each spike is extracted, and the resulting (spike-triggered) stimuli are averaged (see diagram). The STA provides an unbiased estimate of a neuron's receptive field only if the stimulus distribution is spherically symmetric (e.g., Gaussian white noise). The STA has been used to characterize retinal ganglion cells, neurons in the lateral geniculate nucleus and simple cells in the striate cortex (V1). It can be used to estimate the linear stage of the linear-nonlinear-Poisson (LNP) cascade model. The approach has also been used to analyze how transcription factor dynamics control gene regulation within individual cells. Spike-triggered averaging is also commonly referred to as "reverse correlation" or "white-noise analysis". The STA is well known as the first term in the Volterra kernel or Wiener kernel series expansion. It is closely related to linear regression, and identical to it in common circumstances. Mathematical definition: Standard STA Let x_i denote the spatio-temporal stimulus vector preceding the i'th time bin, and y_i the spike count in that bin. The stimuli can be assumed to have zero mean (i.e., E[x] = 0). If not, they can be transformed to have zero mean by subtracting the mean stimulus from each vector. The STA is given by STA = (1/n_sp) Σ_{i=1..T} y_i x_i, where n_sp = Σ_i y_i is the total number of spikes.
Mathematical definition: This equation is more easily expressed in matrix notation: let X denote a matrix whose i'th row is the stimulus vector x_iᵀ and let y denote a column vector whose i'th element is y_i. Then the STA can be written STA = (1/n_sp) Xᵀy. Mathematical definition: Whitened STA If the stimulus is not white noise, but instead has non-zero correlation across space or time, the standard STA provides a biased estimate of the linear receptive field. It may therefore be appropriate to whiten the STA by the inverse of the stimulus covariance matrix. This resolves the spatial dependency issue; however, we still assume the stimulus is temporally independent. The resulting estimator is known as the whitened STA, which is given by STA_w = ((1/T) Σ_{i=1..T} x_i x_iᵀ)⁻¹ ((1/n_sp) Σ_{i=1..T} y_i x_i), where the first term is the inverse covariance matrix of the raw stimuli and the second is the standard STA. In matrix notation, this can be written STA_w = (T/n_sp) (XᵀX)⁻¹ Xᵀy. Mathematical definition: The whitened STA is unbiased only if the stimulus distribution can be described by a correlated Gaussian distribution (correlated Gaussian distributions are elliptically symmetric, i.e. can be made spherically symmetric by a linear transformation, but not all elliptically symmetric distributions are Gaussian). This is a weaker condition than spherical symmetry. The whitened STA is equivalent to linear least-squares regression of the stimulus against the spike train. Mathematical definition: Regularized STA In practice, it may be necessary to regularize the whitened STA, since whitening amplifies noise along stimulus dimensions that are poorly explored by the stimulus (i.e., axes along which the stimulus has low variance). A common approach to this problem is ridge regression. The regularized STA, computed using ridge regression, can be written STA_ridge = (T/n_sp) (XᵀX + λI)⁻¹ Xᵀy, where I denotes the identity matrix and λ is the ridge parameter controlling the amount of regularization.
This procedure has a simple Bayesian interpretation: ridge regression is equivalent to placing a prior on the STA elements that says they are drawn i.i.d. from a zero-mean Gaussian prior with covariance proportional to the identity matrix. The ridge parameter sets the inverse variance of this prior, and is usually fit by cross-validation or empirical Bayes. Statistical properties: For responses generated according to an LNP model, the whitened STA provides an estimate of the subspace spanned by the linear receptive field. The properties of this estimate are as follows. Consistency: The whitened STA is a consistent estimator, i.e., it converges to the true linear subspace, if (1) the stimulus distribution P(x) is elliptically symmetric, e.g., Gaussian (Bussgang's theorem), and (2) the expected STA is non-zero, i.e., the nonlinearity induces a shift in the spike-triggered stimuli. Statistical properties: Optimality: The whitened STA is an asymptotically efficient estimator if (1) the stimulus distribution P(x) is Gaussian, and (2) the neuron's nonlinear response function is the exponential, exp(x). For arbitrary stimuli, the STA is generally not consistent or efficient. For such cases, maximum likelihood and information-based estimators have been developed that are both consistent and efficient.
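The cross-validation fit for the ridge parameter mentioned above can be sketched as a grid search over held-out prediction error. Everything here is an illustrative assumption: the single train/test split, the squared-error score on a linear prediction, and all names are demonstration choices, not the empirical-Bayes procedure from the literature.

```python
import numpy as np

def choose_ridge_lambda(X, y, lambdas, train_frac=0.8, seed=0):
    """Pick a ridge parameter by held-out prediction error.

    Fits w = (X^T X + lam*I)^{-1} X^T y on a random training split and
    scores the linear prediction X @ w on the held-out split (mean
    squared error), returning the lambda with the lowest error.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    T, d = X.shape
    rng = np.random.default_rng(seed)
    idx = rng.permutation(T)
    n_tr = int(train_frac * T)
    tr, te = idx[:n_tr], idx[n_tr:]
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        w = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(d), X[tr].T @ y[tr])
        err = np.mean((X[te] @ w - y[te]) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```

In practice one would average the error over several folds rather than a single split; the single split keeps the sketch short.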
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cefacetrile** Cefacetrile: Cefacetrile (INN, also spelled cephacetrile) is a broad-spectrum first-generation cephalosporin antibiotic effective against gram-positive and gram-negative bacterial infections. It is a bacteriostatic antibiotic. Cefacetrile is marketed under the trade names Celospor, Celtol, and Cristacef, and as Vetimast for the treatment of mammary infections in lactating cows. Synthesis: It was made by reacting 7-ACA (7-aminocephalosporanic acid) with cyanoacetyl chloride in the presence of tributylamine.
**ISIRI 13139** ISIRI 13139: ISIRI 13139 is a standard published by the Institute of Standards and Industrial Research of Iran (ISIRI) in 2011 based on Directive 2009/61/EC. It defines "Installation of lighting and light-signalling devices on wheeled agricultural and forestry tractors". Related sources: Other related sources are as follows: Directive 2003/37/EC of 26 May 2003 on type approval of agricultural or forestry tractors, their trailers and interchangeable towed machinery, together with their systems, components and separate technical units, and repealing Directive 74/150/EEC. ISO R 1724:1970 - Electrical connections for vehicles with 6 or 12 volt electrical systems, applying more specifically to private motor cars and lightweight trailers or caravans. ISO R 1185:1970 - Electrical connections between towing and towed vehicles having 24 volt electrical systems used for international commercial transport purposes.
**Life with PlayStation** Life with PlayStation: Life with PlayStation was an online multimedia application for the PlayStation 3 video game console on the PlayStation Network. The application had five channels, all of which revolved around a virtual globe that displayed information according to the channel. The application also included a client for Folding@home, a distributed computing project aimed at disease research. The service was discontinued in November 2012. History: In August 2006, Stanford University in Silicon Valley announced that a protein folding client would be available to run on the PS3. On December 19, 2007, Sony updated the Folding@home client to version 1.3. The update allowed users to play music stored on their hard drives while contributing to Folding@home, and to have their console shut down automatically after existing simulation work was done. The software update also added the Generalized Born implicit solvent model, which broadened the PS3 client's computing capabilities. On September 18, 2008, the PS3 version of the Folding@home client became Life With PlayStation. The Folding@home client had been available for the PS3 since March 2007 and became a channel of Life with PlayStation when the latter was released. This update also provided more advanced simulation of protein folding and a new ranking system. Following the release of system software version 4.30, the Folding@home PS3 client and all other services under Life with PlayStation were discontinued on November 6, 2012. Life with PlayStation was then removed from the XrossMediaBar for new users. Channels: Life with PlayStation featured five channels which were updated frequently with new information. The application provided the user with access to information "channels", the first of which was the Live Channel, which offered news headlines and weather through a 3D globe.
The user could rotate and zoom into any part of the world to access information provided by Google News and The Weather Channel, among other sources. Channels: Live Channel Live Channel was a news, time zone and weather feed which provided users with information from Google News and The Weather Channel, organized by city. The content included live camera feeds and cloud data, similar to Google Earth. Live Cameras was provided by earthTV and the Webcams.travel website. The application only supported certain cities of the world, with limited coverage; the continent of Africa, for example, had only four cities covered by the Live Channel. Channels: Folding@Home Life with PlayStation also hosted an application for Folding@home, a distributed computing project for disease research that simulated protein folding and other molecular dynamics. Users were able to contribute to the project by leaving their client to run Folding@home while not playing games. The application displayed a live rendering of the protein being folded and some statistical information in front of a virtual globe background. Channels: PlayStation Network Game Trailers Channel For users in the United States, the PSN game trailers channel allowed direct access to streaming of the PlayStation Store's game trailers. It also allowed titles to be purchased from the store without having to leave the application. Channels: United Village United Village, provided by its respective website and hosted by Frontier International Inc., was a cultural documentary-like project that gathered stories, interviews and articles worldwide. It targeted rural stories, largely from developing countries, along with some from rural parts of other countries. The contents of the channel included culture, development, education, social issues and tourism. The United Village channel was discontinued on March 30, 2011.
Channels: World Heritage World Heritage, by α Clock, showed UNESCO-selected locations of special cultural or physical significance around the world. These World Heritage sites linked to their respective articles on Wikipedia, and each location included the introduction taken directly from the Wikipedia article. Features: In addition to the channels, the application also featured photo slideshow viewing and music and video playback. Life with PlayStation also had a virtual globe that was periodically updated from servers to ensure that cloud, weather, live camera feed and news update data were all up to date. All channels except the Live Channel included a static representation of Earth, while the Live Channel itself showed day and night time effects. Additionally, the Folding@home protein folding could be disabled.
**Why Things Bite Back** Why Things Bite Back: Why Things Bite Back: Technology and the Revenge of Unintended Consequences is a 1997 book by Edward Tenner, a former executive editor for physical science and history at Princeton University Press. The book is an account and geography of modern technology, describing how technology has had unintended effects on society.
**PELO** PELO: Protein pelota homolog is a protein that in humans is encoded by the PELO gene. This gene encodes a protein which contains a conserved nuclear localization signal. The encoded protein may have a role in spermatogenesis, cell cycle control, and meiotic cell division. In yeasts, the Dom34-Hbs1 complex that it forms (together with ABCE1) is responsible for reactivating ribosomes and for rescuing those stuck on mRNAs. It is a paralog of the release factor eRF1. PELO: The Drosophila homolog was first discovered in 1993. Mutants exhibit G2/M arrest in meiosis, and large nebenkern form in late spermatocytes. Human, yeast (Dom34), plant, and worm homologs were reported in 1995, followed by one found in archaea.
**Desmoglein** Desmoglein: The desmogleins are a family of desmosomal cadherins consisting of proteins DSG1, DSG2, DSG3, and DSG4. They play a role in the formation of desmosomes that join cells to one another. Pathology: Desmogleins are targeted in the autoimmune disease pemphigus. Desmoglein proteins are a type of cadherin, a transmembrane protein that binds with other cadherins to form junctions known as desmosomes between cells. These desmoglein proteins thus hold cells together, but when the body starts producing antibodies against desmoglein, these junctions break down, resulting in subsequent blister or vesicle formation.
**V2 word order** V2 word order: In syntax, verb-second (V2) word order is a sentence structure in which the finite verb of a sentence or a clause is placed in the clause's second position, so that the verb is preceded by a single word or group of words (a single constituent). V2 word order: Examples of V2 in English include (brackets indicating a single constituent): "Neither do I", "[Never in my life] have I seen such things". If English used V2 in all situations, then it would feature sentences like: "*[In school] learned I about animals", "*[When she comes home from work] takes she a nap". V2 word order is common in the Germanic languages and is also found in Northeast Caucasian Ingush, Uto-Aztecan O'odham, and fragmentarily in Romance Sursilvan (a Rhaeto-Romansh variety) and Finno-Ugric Estonian. Of the Germanic family, English is exceptional in having predominantly SVO order instead of V2, although there are vestiges of the V2 phenomenon. V2 word order: Most Germanic languages do not normally use V2 order in embedded clauses, with a few exceptions. In particular, German, Dutch, and Afrikaans revert to VF (verb-final) word order after a complementizer; Yiddish and Icelandic do, however, allow V2 in all declarative clauses: main, embedded, and subordinate. Kashmiri (an Indo-Aryan language) has V2 in 'declarative content clauses' but VF order in relative clauses. Examples of verb second (V2): The example sentences in (1) from German illustrate the V2 principle, which allows any constituent to occupy the first position as long as the second position is occupied by the finite verb. Sentences (1a) through (1d) have the finite verb spielten 'played' in second position, with various constituents occupying the first position: in (1a) the subject is in first position; in (1b) the object is; in (1c) the temporal modifier is in first position; and in (1d) the locative modifier is in first position. (1) (a) Die Kinder spielten vor der Schule im Park Fußball.
Examples of verb second (V2): The children played before school in the park football/soccer (b) Fußball spielten die Kinder vor der Schule im Park. Soccer played the children before school in the park (c) Vor der Schule spielten die Kinder im Park Fußball. Before school played the children in the park football/soccer. (d) Im Park spielten die Kinder vor der Schule Fußball. In the park played the children before school football/soccer. Classical accounts of verb second (V2): In major theoretical research on V2 properties, researchers have argued that the verb-final orders found in German and Dutch embedded clauses suggest an underlying SOV order, with specific syntactic movement rules that change the underlying SOV order and derive a surface form in which the finite verb is in the second position of the clause. First, a "verb preposing" rule moves the finite verb to the left-most position in the sentence; then a "constituent preposing" rule moves a constituent in front of the finite verb. Following these two rules always results in the finite verb being in second position. "I like the man" (a) Ich den Mann mag --> Underlying form in Modern German I the man like (b) mag ich den Mann --> Verb movement to left edge like I the man (c) den Mann mag ich --> Constituent moved to left edge the man like I Non-finite verbs and embedded clauses: Non-finite verbs The V2 principle regulates the position of finite verbs only; its influence on non-finite verbs (infinitives, participles, etc.) is indirect. Non-finite verbs in V2 languages appear in varying positions depending on the language. In German and Dutch, for instance, non-finite verbs appear after the object (if one is present) in clause-final position in main clauses (OV order). Swedish and Icelandic, in contrast, position non-finite verbs after the finite verb but before the object (if one is present) (VO order). That is, V2 operates on only the finite verb.
Non-finite verbs and embedded clauses: V2 in embedded clauses (In the following examples, finite verb forms are in bold, non-finite verb forms are in italics and subjects are underlined.) Germanic languages vary in the application of V2 order in embedded clauses. They fall into three groups. V2 in Swedish, Danish, Norwegian, Faroese In these languages, the word order of clauses is generally fixed in two patterns of conventionally numbered positions. Both end with positions for (5) non-finite verb forms, (6) objects, and (7) adverbials. In main clauses, the V2 constraint holds. The finite verb must be in position (2) and sentence adverbs in position (4). The latter include words with meanings such as 'not' and 'always'. The subject may be in position (1), but when a topical expression occupies the position, the subject is in position (3). Non-finite verbs and embedded clauses: In embedded clauses, the V2 constraint is absent. After the conjunction, the subject must immediately follow; it cannot be replaced by a topical expression. Thus, the first four positions are in the fixed order (1) conjunction, (2) subject, (3) sentence adverb, (4) finite verb. The position of the sentence adverbs is important to those theorists who see them as marking the start of a large constituent within the clause. Thus the finite verb is seen as inside that constituent in embedded clauses, but outside that constituent in V2 main clauses. Non-finite verbs and embedded clauses: Swedish Main clause order: (1) front position, (2) finite verb, (3) subject, (4) sentence adverb, (5) non-finite verb, (6) object, (7) adverbial. Embedded clause order: (1) conjunction, (2) subject, (3) sentence adverb, (4) finite verb, (5) non-finite verb, (6) object, (7) adverbial. Main clause (a) I dag ville Lotte inte läsa tidningen today wanted Lotte not read the newspaper "Lotte didn't want to read the paper today." Embedded clause (b) att Lotte inte ville koka kaffe i dag that Lotte not wanted brew coffee today "that Lotte didn't want to make coffee today."
Danish So-called Perkerdansk is an example of a variety that does not follow the above. Non-finite verbs and embedded clauses: Norwegian (with multiple adverbials and multiple non-finite forms, in two varieties of the language) Faroese Unlike the continental Scandinavian languages, the sentence adverb may either precede or follow the finite verb in embedded clauses. A (3a) slot is inserted here for the following sentence adverb alternative. V2 in German In main clauses, the V2 constraint holds. As with other Germanic languages, the finite verb must be in the second position. However, any non-finite forms must be in final position. The subject may be in the first position, but when a topical expression occupies the position, the subject follows the finite verb. In embedded clauses, the V2 constraint does not hold. The finite verb form must be adjacent to any non-finite one at the end of the clause. German grammarians traditionally divide sentences into fields. Subordinate clauses preceding the main clause are said to be in the first field (Vorfeld), clauses following the main clause in the final field (Nachfeld). The central field (Mittelfeld) contains most or all of a clause, and is bounded by left bracket (Linke Satzklammer) and right bracket (Rechte Satzklammer) positions. Non-finite verbs and embedded clauses: In main clauses, the initial element (subject or topical expression) is said to be located in the first field, the V2 finite verb form in the left bracket, and any non-finite verb forms in the right bracket. In embedded clauses, the conjunction is said to be located in the left bracket, and the verb forms in the right bracket. In German embedded clauses, a finite verb form follows any non-finite forms. Non-finite verbs and embedded clauses: German V2 in Dutch and Afrikaans V2 word order is used in main clauses: the finite verb must be in the second position. However, in subordinate clauses two word orders are possible for the verb clusters.
Non-finite verbs and embedded clauses: Main clauses: Dutch This analysis suggests a close parallel between the V2 finite form in main clauses and the conjunctions in embedded clauses. Each is seen as an introduction to its clause type, a function which some modern scholars have equated with the notion of specifier. The analysis is supported in spoken Dutch by the placement of clitic pronoun subjects. Forms such as ze cannot stand alone, unlike the full-form equivalent zij. The words to which they may be attached are those same introduction words: the V2 form in a main clause, or the conjunction in an embedded clause. Non-finite verbs and embedded clauses: Subordinate clauses: In Dutch subordinate clauses, two word orders are possible for the verb clusters: the "red" order, omdat ik heb gewerkt, "because I have worked", as in English, where the auxiliary verb precedes the past participle; and the "green" order, omdat ik gewerkt heb, where the past participle precedes the auxiliary verb, "because I worked have", as in German. In Dutch, the green word order is the most used in speech, and the red is the most used in writing, particularly in journalistic texts, but the green is also used in writing, as is the red in speech. Unlike in English, however, adjectives and adverbs must precede the verb: dat het boek groen is, "that the book green is". Non-finite verbs and embedded clauses: V2 in Icelandic and Yiddish These languages freely allow V2 order in embedded clauses. Icelandic Two word-order patterns are largely similar to continental Scandinavian. However, in main clauses an extra slot is needed when the front position is occupied by Það. In these clauses the subject follows any sentence adverbs. In embedded clauses, sentence adverbs follow the finite verb (an optional order in Faroese).
In more radical contrast with other Germanic languages, a third pattern exists for embedded clauses, with the conjunction followed by V2 order: front-finite verb-subject. Yiddish Unlike Standard German, Yiddish normally has verb forms before objects (SVO order), and in embedded clauses has the conjunction followed by V2 order. Non-finite verbs and embedded clauses: V2 in root clauses One type of embedded clause with V2 following the conjunction is found throughout the Germanic languages, although it is more common in some than in others. These are termed root clauses. They are declarative content clauses, the direct objects of so-called bridge verbs, which are understood to quote a statement. For that reason, they exhibit the V2 word order of the equivalent direct quotation. Non-finite verbs and embedded clauses: Danish Items other than the subject are allowed to appear in front position. Swedish Items other than the subject are occasionally allowed to appear in front position. Generally, the statement must be one with which the speaker agrees; this order is not possible with a statement with which the speaker does not agree. Norwegian German Root clause V2 order is possible only when the conjunction dass is omitted. In such cases, formal usage also places the finite verb form into the present subjunctive (German Konjunktiv I) if the verb form is clearly distinguishable from the indicative; if not, the past subjunctive (German Konjunktiv II) is used. Non-finite verbs and embedded clauses: By contrast, a form with an embedded first-person subject would usually use the past subjunctive here, since the present indicative and subjunctive appear identical: Er behauptet, ich hätte (instead of habe) es zur Post gebracht. Compare the normal embedded-clause order after dass. Perspective effects on embedded V2 There are a limited number of V2 languages that can allow for embedded verb movement for a specific pragmatic effect similar to that of English.
This is due to the perspective of the speaker. Languages such as German and Swedish have embedded verb second. The embedded verb second in these kinds of languages usually occurs after 'bridge verbs'. (Bridge verbs are common verbs of speech and thought such as "say", "think", and "know", and the word "that" is not needed after these verbs. For example: I think he is coming.) Based on an assertion theory, the perspective of a speaker is reaffirmed in embedded V2 clauses. A speaker's sense of commitment to, or responsibility for, an embedded clause is greater with V2 than without it. This is the result of V2 characteristics. As shown in the examples below, there is a greater commitment to the truth in the embedded clause when V2 is in place. Variations of V2: Variations of V2 order such as V1 (verb-initial word order), V3 and V4 orders are widely attested in many Early Germanic and Medieval Romance languages. These variations are possible in the languages; however, they are severely restricted to specific contexts. V1 word order V1 (verb-initial word order) is a type of structure that contains the finite verb as the initial clause element. In other words, the verb appears before the subject and the object of the sentence. Variations of V2: (a) Max y-il [s no' tx;i;] [o naq Lwin]. (Mayan) PFV A3-see CLF dog CLF Pedro 'The dog saw Pedro.' V3 word order V3 (verb-third word order) is a variation of V2 in which the finite verb is in third position with two constituents preceding it. In V3, as in V2 word order, the constituents preceding the finite verb are not categorically restricted, as the constituents can be a DP, a PP, a CP and so on. V2 and left edge filling trigger (LEFT): V2 is fundamentally derived from a morphological obligatory exponence effect at sentence level. The left edge filling trigger (LEFT) effects are usually seen in classical V2 languages such as Germanic languages and Old Romance languages.
The left edge filling trigger is independently active in morphology, as EPP effects are found at word-internal levels. The obligatory exponence derives from absolute displacement, ergative displacement and ergative doubling in inflectional morphology. In addition, second-position rules in clitic-second languages demonstrate post-syntactic rules of LEFT movement. Using the language Breton as an example, the absence of a pre-tense expletive will allow LEFT to occur to avoid tense-first order. The LEFT movement is free from syntactic rules, which is evidence for a post-syntactic phenomenon. With the LEFT movement, V2 word order can be obtained, as seen in the example below. V2 and left edge filling trigger (LEFT): In this Breton example, the finite head is phonetically realized and agrees with the category of the preceding element. The pre-tense "Bez" is used in front of the finite verb to obtain the V2 word order. (The finite verb "nevo" is bolded.) Syntactic verb second: It is said that V2 patterns are a syntactic phenomenon and therefore have certain environments where they can and cannot be tolerated. Syntactically, V2 requires a left-peripheral head (usually C) with an occupied specifier, paired with raising of the highest verb or auxiliary to that head. V2 is usually analyzed as the co-occurrence of these requirements, which can also be referred to as "triggers". The left-peripheral head, which is a requirement that causes the effect of V2, sets further requirements on the phrase XP that occupies the initial position, so that this phrase XP may always have specific featural characteristics. V2 in English: Modern English differs greatly in word order from other modern Germanic languages, but earlier English shared many similarities. For this reason, some scholars propose a description of Old English with a V2 constraint as the norm. The history of English syntax is thus seen as a process of losing the constraint.
Old English In these examples, finite verb forms are in green, non-finite verb forms are in orange and subjects are blue. V2 in English: Main clauses Position of object In examples b, c and d, the object of the clause precedes a non-finite verb form. Superficially, the structure is verb-subject-object-verb. To capture generalities, scholars of syntax and linguistic typology treat them as basically subject-object-verb (SOV) structure, modified by the V2 constraint. Thus Old English is classified, to some extent, as an SOV language. However, example a represents a number of Old English clauses with the object following a non-finite verb form, with the superficial structure verb-subject-verb-object. A more substantial number of clauses contain a single finite verb form followed by an object, superficially verb-subject-object. Again, a generalisation is captured by describing these as subject–verb–object (SVO) modified by V2. Thus Old English can be described as intermediate between SOV languages (like German and Dutch) and SVO languages (like Swedish and Icelandic). V2 in English: Effect of subject pronouns When the subject of a clause was a personal pronoun, V2 did not always operate. However, V2 verb-subject inversion occurred without exception after a question word or the negative ne, and with few exceptions after þa even with pronominal subjects. Inversion of a subject pronoun also occurred regularly after a direct quotation. Embedded clauses Embedded clauses with pronoun subjects were not subject to V2. Even with noun subjects, V2 inversion did not occur. Yes–no questions In a similar clause pattern, the finite verb form of a yes–no question occupied the first position. Middle English Continuity Early Middle English generally preserved V2 structure in clauses with nominal subjects. As in Old English, V2 inversion did not apply to clauses with pronoun subjects.
Change Late Middle English texts of the fourteenth and fifteenth centuries show an increasing incidence of clauses without the inversion associated with V2. Negative clauses were no longer formed with ne (or na) as the first element; inversion in negative clauses was attributable to other causes. Vestiges in Modern English As in earlier periods, Modern English normally has subject-verb order in declarative clauses and inverted verb-subject order in interrogative clauses. However, these norms are observed irrespective of the number of clause elements preceding the verb. V2 in English: Classes of verbs in Modern English: auxiliary and lexical Inversion in Old English sentences with a combination of two verbs could be described in terms of their finite and non-finite forms. The word which participated in inversion was the finite verb; the verb which retained its position relative to the object was the non-finite verb. In most types of Modern English clause, there are two verb forms, but the verbs are considered to belong to different syntactic classes. The verbs which participated in inversion have evolved to form a class of auxiliary verbs which may mark tense, aspect and mood; the remaining majority of verbs with full semantic value are said to constitute the class of lexical verbs. The exceptional type of clause is the declarative clause with a lexical verb in a present simple or past simple form. V2 in English: Questions Like yes/no questions, interrogative wh-questions are regularly formed with inversion of subject and auxiliary. Present simple and past simple questions are formed with the auxiliary do, a process known as do-support. (See subject-auxiliary inversion in questions.) With topic adverbs and adverbial phrases In certain patterns similar to Old and Middle English, inversion is possible. However, this is a matter of stylistic choice, unlike the constraint on interrogative clauses.
V2 in English: negative or restrictive adverbial first (see negative inversion); comparative adverb or adjective first. After the preceding classes of adverbial, only auxiliary verbs, not lexical verbs, participate in inversion. Locative or temporal adverb first; prepositional phrase first (see locative inversion, directive inversion). After the two latter types of adverbial, only one-word lexical verb forms (present simple or past simple), not auxiliary verbs, participate in inversion, and only with noun-phrase subjects, not pronominal subjects. V2 in English: Direct quotations When the object of a verb is a verbatim quotation, it may precede the verb, with a result similar to Old English V2. Such clauses are found in storytelling and in news reports. (See quotative inversion.) Declarative clauses without inversion Corresponding to the above examples, the following clauses show the normal Modern English subject-verb order. Declarative equivalents Equivalents without topic fronting French: Modern French is a subject-verb-object (SVO) language like other Romance languages (though Latin was a subject-object-verb language). However, V2 constructions existed in Old French and were more common there than in other early Romance language texts. It has been suggested that this may be due to influence from the Germanic Frankish language. Modern French has vestiges of the V2 system similar to those found in Modern English. French: The following sentences have been identified as possible examples of V2 syntax in Old French: Old French Similarly to Modern French, Old French allows a range of constituents to precede the finite verb in the V2 position. French: Old Occitan A language that is compared to Old French is Old Occitan, which is said to be the sister of Old French. Although the two languages are thought to be sister languages, Old Occitan exhibits a relaxed V2 whereas Old French has a much stricter V2.
However, the differences between the two languages extend past V2 and also differ in a variation of V2, which is V3. In both language varieties, occurrence of V3 can be triggered by the presence of an initial frame-setting clause or adverbial (1). Other languages: Kotgarhi and Kochi In his 1976 three-volume study of two languages of Himachal Pradesh, Hendriksen reports on two intermediate cases: Kotgarhi and Kochi. Although neither language shows a regular V-2 pattern, they have evolved to the point that main and subordinate clauses differ in word order and auxiliaries may separate from other parts of the verb: Hendriksen reports that relative clauses in Kochi show a greater tendency to have the finite verbal element in clause-final position than matrix clauses do (III:188). Other languages: Ingush In Ingush, "for main clauses, other than episode-initial and other all-new ones, verb-second order is most common. The verb, or the finite part of a compound verb or analytic tense form (i.e. the light verb or the auxiliary), follows the first word or phrase in the clause." 
O'odham O'odham has relatively free V2 word order within clauses; for example, all of the following sentences mean "the boy brands the pig": ceoj ʼo g ko:jĭ ceposid ko:jĭ ʼo g ceoj ceposid ceoj ʼo ceposid g ko:jĭ ko:jĭ ʼo ceposid g ceoj ceposid ʼo g ceoj g ko:jĭ ceposid ʼo g ko:jĭ g ceoj The finite verb is "ʼo", which appears after a constituent, in second position. Despite the general freedom of sentence word order, O'odham is fairly strictly verb-second in its placement of the auxiliary verb (in the above sentences, it is ʼo; in the following it is ʼañ): Affirmative: cipkan ʼañ = "I am working" Negative: pi ʼañ cipkan = "I am not working" [not *pi cipkan ʼañ] Sursilvan Among the dialects of Romansh, V2 word order is limited to Sursilvan, in which the insertion of entire phrases between auxiliary verbs and participles occurs, as in 'Cun Mariano Tschuor ha Augustin Beeli discurriu' ('Mariano Tschuor has spoken with Augustin Beeli'), as compared to Engadinese 'Cun Rudolf Gasser ha discurrü Gion Peider Mischol' ('Rudolf Gasser has spoken with Gion Peider Mischol'). The constituent that is bounded by the auxiliary, ha, and the participle, discurriu, is known as a Satzklammer or 'verbal bracket'. Other languages: Estonian In Estonian, V2 word order is very frequent in the literate register, but less frequent in the spoken register. When V2 order does occur, it is found in main clauses, as illustrated in (1). Unlike Germanic V2 languages, Estonian has several instances where V2 word order is not attested in embedded clauses, such as wh-interrogatives (2), exclamatives (3), and non-subject-initial clauses (4). Other languages: Welsh In Welsh, V2 word order is found in Middle Welsh, but not in Old and Modern Welsh, which have only verb-initial order. Middle Welsh displays three characteristics of V2 grammar: (1) A finite verb in the C-domain (2) The constituent preceding the verb can be any constituent (often driven by pragmatic features).
(3) Only one constituent preceding the verb in subject position As we can see in the examples of V2 in Welsh below, there is only one constituent preceding the finite verb, but any kind of constituent (such as a noun phrase NP, adverb phrase AP and preposition phrase PP) can occur in this position. Other languages: Middle Welsh can also exhibit variations of V2, such as cases of V1 (verb-initial word order) and V3 orders. However, these variations are restricted to specific contexts, such as sentences that have impersonal verbs, imperatives, answers or direct responses to questions or commands, and idiomatic sayings. It is also possible to have a preverbal particle preceding the verb in V2, though such sentences are limited as well. Other languages: Wymysorys Wymysorys is classified as a West Germanic language; however, it exhibits various Slavonic characteristics. It is argued that Wymysorys enables its speakers to operate between two word-order systems that represent the two forces driving the grammar of this language: Germanic and Slavonic. The Germanic system is not as flexible and allows V2 order to exist within it, while the Slavonic system is relatively free. Due to the rigid word order in the Germanic system, the placement of the verb is determined by syntactic rules in which V2 word order is commonly respected. In Wymysorys, as in other languages that exhibit V2 word order, the finite verb is in second position, with a constituent of any category preceding the verb, such as DP, PP, AP and so on. Other languages: Classical Portuguese Compared to other Romance languages, V2 word order persisted in Classical Portuguese considerably longer. Although Classical Portuguese is a V2 language, V1 occurred more frequently, and as a result it has been debated whether Classical Portuguese really is a V2-like language. However, Classical Portuguese is a relaxed V2 language, meaning that V2 patterns coexist with their variations, V1 and/or V3.
In the case of Classical Portuguese, there is a strong relationship between V1 and V2, since V2 clauses are derived from V1 clauses. In languages such as Classical Portuguese, where both V1 and V2 exist, both patterns depend on the movement of the verb to a high position in the CP layer, the difference being whether or not a phrase is moved to a preverbal position. Although V1 occurred more frequently in Classical Portuguese, V2 is the more frequent order found in matrix clauses. Post-verbal subjects may also occupy a high position in the clause and can precede VP adverbs. In (1) and (2), we can see that the adverb 'bem' can precede or follow the post-verbal subject. In (2), the post-verbal subject is understood as an informational focus, but the same cannot be said for (1), because the difference in position determines how the subject is interpreted. Structural analysis of V2: Various structural analyses of V2 have been developed, including within the models of dependency grammar and generative grammar. Structural analysis of V2: Structural analysis in dependency grammar Dependency grammar (DG) can accommodate the V2 phenomenon simply by stipulating that one and only one constituent can be a predependent of the finite verb (i.e. a dependent which precedes its head) in declarative (matrix) clauses (in this, dependency grammar assumes only one clausal level and one position of the verb, instead of a distinction between a VP-internal and a higher clausal position of the verb as in generative grammar, cf. the next section). On this account, the V2 principle is violated if the finite verb has more than one predependent or no predependent at all. The following DG structures of the first four German sentences above illustrate the analysis (the sentence means 'The kids play soccer in the park before school'): The finite verb spielen is the root of all clause structure.
The V2 principle requires that this root have a single predependent, which it does in each of the four sentences. Structural analysis of V2: The four English sentences above involving the V2 phenomenon receive the following analyses: Structural analysis in generative grammar In the theory of generative grammar, the verb-second phenomenon has been described as an application of X-bar theory. The combination of a first position for a phrase and a second position for a single verb has been identified as the combination of specifier and head of a phrase. The part after the finite verb is then the complement. While the sentence structure of English is usually analysed in terms of three levels, CP, IP, and VP, in German linguistics a consensus has emerged that there is no IP in German. Structural analysis of V2: The VP (verb phrase) structure assigns position and functions to the arguments of the verb. Hence, this structure is shaped by the grammatical properties of the V (verb) which heads the structure. The CP (complementizer phrase) structure incorporates the grammatical information which identifies the clause as declarative or interrogative, main or embedded. The structure is shaped by the abstract C (complementizer) which is considered the head of the structure. In embedded clauses the C position accommodates complementizers. In German declarative main clauses, C hosts the finite verb. Structural analysis of V2: Thus the V2 structure is analysed as
1. Topic element (specifier of CP)
2. Finite-verb form (C = head of CP), i.e. verb-second
3. Remainder of the clause
In embedded clauses, the C position is occupied by a complementizer. In most Germanic languages (but not in Icelandic or Yiddish), this generally prevents the finite verb from moving to C. The structure is analysed as
1. Complementizer (C = head of CP)
2. Bulk of clause (VP), including, in German, the subject.
Structural analysis of V2: 3. Finite verb (V position)
This analysis does not provide a structure for the instances in some languages of root clauses after bridge verbs. Example: Danish Vi ved at denne bog har Bo ikke læst, with the object of the embedded clause fronted. (Literally: 'We know that this book has Bo not read.') The solution is to allow verbs such as ved to accept a clause with a second (recursive) CP. The complementizer occupies the C position in the upper CP. The finite verb moves to the C position in the lower CP. Literature: Adger, D. 2003. Core syntax: A minimalist approach. Oxford, UK: Oxford University Press. Ágel, V., L. Eichinger, H.-W. Eroms, P. Hellwig, H. Heringer, and H. Lobin (eds.) 2003/6. Dependency and valency: An international handbook of contemporary research. Berlin: Walter de Gruyter. Andrason, A. 2020. Verb second in Wymysorys. Oxford University Press. Borsley, R. 1996. Modern phrase structure grammar. Cambridge, MA: Blackwell Publishers. Carnie, A. 2007. Syntax: A generative introduction, 2nd edition. Malden, MA: Blackwell Publishing. Emonds, J. 1976. A transformational approach to English syntax: Root, structure-preserving, and local transformations. New York: Academic Press. Fagan, S. M. B. 2009. German: A linguistic introduction. Cambridge: Cambridge University Press. Fischer, O., A. van Kemenade, W. Koopman, and W. van der Wurff. 2000. The syntax of early English. Cambridge: Cambridge University Press. Fromkin, V. et al. 2000. Linguistics: An introduction to linguistic theory. Malden, MA: Blackwell Publishers. Harbert, Wayne. 2007. The Germanic languages. Cambridge: Cambridge University Press. Hook, P. E. 1976. Is Kashmiri an SVO language? Indian Linguistics 37: 133–142. Jouitteau, M. 2020. Verb second and the left edge filling trigger. Oxford University Press. König, E. and J.
van der Auwera (eds.). 1994. The Germanic languages. London and New York: Routledge. Liver, Ricarda. 2009. Deutsche Einflüsse im Bündnerromanischen. In Elmentaler, Michael (Hrsg.) Deutsch und seine Nachbarn. Frankfurt am Main: Peter Lang. ISBN 978-3-631-58885-7. Meelen, M. 2020. Reconstructing the rise of verb second in Welsh. Oxford University Press. Nichols, Johanna. 2011. Ingush grammar. Berkeley: University of California Press. Osborne, T. 2005. Coherence: A dependency grammar analysis. SKY Journal of Linguistics 18, 223–286. Ouhalla, J. 1994. Transformational grammar: From rules to principles and parameters. London: Edward Arnold. Peters, P. 2013. The Cambridge dictionary of English grammar. Cambridge: Cambridge University Press. Posner, R. 1996. The Romance languages. Cambridge: Cambridge University Press. Rowlett, P. 2007. The syntax of French. Cambridge: Cambridge University Press. van Riemsdijk, H. and E. Williams. 1986. Introduction to the theory of grammar. Cambridge, MA: The MIT Press. Tesnière, L. 1959. Éléments de syntaxe structurale. Paris: Klincksieck. Thráinsson, H. 2007. The syntax of Icelandic. Cambridge: Cambridge University Press. Walkden, G. 2017. Language contact and V3 in Germanic varieties new and old. The Journal of Comparative Germanic Linguistics 20(1), 49–81. Woods, R. 2020. A different perspective on embedded verb second. Oxford University Press. Woods, R. and S. Wolfe (eds.). 2020. Rethinking verb second, first edition. Oxford University Press. Zwart, J-W. 2011. The syntax of Dutch. Cambridge: Cambridge University Press.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Entropic uncertainty** Entropic uncertainty: In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is stronger than the usual statement of the uncertainty principle in terms of the product of standard deviations. Entropic uncertainty: In 1957, Hirschman considered a function f and its Fourier transform g such that f(x) ≈ ∫−∞∞ exp(2πixy) g(y) dy, where the "≈" indicates convergence in L², normalized so that (by Plancherel's theorem) ∫−∞∞ |f(x)|² dx = ∫−∞∞ |g(y)|² dy = 1. He showed that for any such functions the sum of the Shannon entropies is non-negative, H(|f|²) + H(|g|²) = −∫−∞∞ |f(x)|² log |f(x)|² dx − ∫−∞∞ |g(y)|² log |g(y)|² dy ≥ 0. A tighter bound, H(|f|²) + H(|g|²) ≥ log(e/2), was conjectured by Hirschman and Everett, proven in 1975 by W. Beckner, and in the same year interpreted as a generalized quantum mechanical uncertainty principle by Białynicki-Birula and Mycielski. The equality holds in the case of Gaussian distributions. Note, however, that the above entropic uncertainty function is distinctly different from the quantum von Neumann entropy represented in phase space. Sketch of proof: The proof of this tight inequality depends on the so-called (q, p)-norm of the Fourier transformation. (Establishing this norm is the most difficult part of the proof.) From this norm, one is able to establish a lower bound on the sum of the (differential) Rényi entropies, Hα(|f|²) + Hβ(|g|²), where 1/α + 1/β = 2, which generalize the Shannon entropies. For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited. Sketch of proof: Babenko–Beckner inequality The (q, p)-norm of the Fourier transform is defined to be ‖F‖q,p = sup f∈Lp(R) ‖Ff‖q / ‖f‖p, where 1 < p ≤ 2 and 1/p + 1/q = 1. In 1961, Babenko found this norm for even integer values of q.
Finally, in 1975, using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all q ≥ 2 is ‖F‖q,p = (p^(1/p) / q^(1/q))^(1/2). Thus we have the Babenko–Beckner inequality, ‖Ff‖q ≤ (p^(1/p) / q^(1/q))^(1/2) ‖f‖p. Rényi entropy bound From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived. Letting g = Ff, 2α = p, and 2β = q, so that 1/α + 1/β = 2 and 1/2 < α < 1 < β, we have (∫R |g(y)|^(2β) dy)^(1/(2β)) ≤ ((2α)^(1/(4α)) / (2β)^(1/(4β))) (∫R |f(x)|^(2α) dx)^(1/(2α)). Squaring both sides and taking the logarithm, we get (1/β) log ∫R |g(y)|^(2β) dy ≤ (1/(2α)) log(2α) − (1/(2β)) log(2β) + (1/α) log ∫R |f(x)|^(2α) dx. Multiplying both sides by β/(1−β) = −α/(1−α), which is negative, reverses the sense of the inequality. Rearranging terms finally yields an inequality in terms of the sum of the Rényi entropies Hα(|f|²) = (1/(1−α)) log ∫R |f(x)|^(2α) dx and Hβ(|g|²): Hα(|f|²) + Hβ(|g|²) ≥ (1/2) (log α/(α−1) + log β/(β−1)) − log 2. Note that this inequality is symmetric with respect to α and β: one no longer needs to assume that α < β, only that they are positive and not both one, and that 1/α + 1/β = 2. To see this symmetry, simply exchange the rôles of i and −i in the Fourier transform. Shannon entropy bound Taking the limit of this last inequality as α, β → 1 yields the less general Shannon entropy inequality, H(|f|²) + H(|g|²) ≥ log(e/2), where g(y) ≈ ∫R e^(−2πixy) f(x) dx, valid for any base of logarithm, as long as we choose an appropriate unit of information: bit, nat, etc. The constant will be different, though, for a different normalization of the Fourier transform (such as is usually used in physics, with normalizations chosen so that ħ = 1), i.e., H(|f|²) + H(|g|²) ≥ log(eπ) for g(y) ≈ (1/√(2π)) ∫R e^(−ixy) f(x) dx. In this case, the dilation of the Fourier transform absolute squared by a factor of 2π simply adds log(2π) to its entropy.
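The Shannon-entropy form of the bound can be checked numerically. The sketch below (grid sizes, window, and test functions are illustrative choices of mine, not from the text) discretizes the Fourier integral by direct quadrature and verifies that a Gaussian saturates H(|f|²) + H(|g|²) ≥ log(e/2), while a non-Gaussian two-bump mixture lies strictly above the bound:

```python
# Numerical sanity check of the Hirschman inequality
#   H(|f|^2) + H(|g|^2) >= log(e/2)
# with the convention g(y) = \int exp(-2*pi*i*x*y) f(x) dx.
# Grids, windows, and test functions are illustrative assumptions.
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
y = np.linspace(-8.0, 8.0, 801)
dx, dy = x[1] - x[0], y[1] - y[0]

def entropy(p, step):
    """Differential Shannon entropy -integral p log p on a uniform grid."""
    q = np.maximum(p, 1e-300)          # avoid log(0); p*log(p) -> 0 there anyway
    return -np.sum(p * np.log(q)) * step

def fourier(f):
    """Direct quadrature of the Fourier integral, evaluated on the y grid."""
    return np.exp(-2j * np.pi * np.outer(y, x)) @ f * dx

def entropy_sum(f):
    f = f / np.sqrt(np.sum(f**2) * dx)  # normalize so that ||f||_2 = 1
    g = fourier(f)
    return entropy(np.abs(f)**2, dx) + entropy(np.abs(g)**2, dy)

gauss = 2**0.25 * np.exp(-np.pi * x**2)                 # the equality case
mixture = np.exp(-np.pi*(x-2)**2) + np.exp(-np.pi*(x+2)**2)

print(entropy_sum(gauss), np.log(np.e/2))   # Gaussian: sum ~ log(e/2)
print(entropy_sum(mixture))                 # non-Gaussian: strictly above
```

The Gaussian f(x) = 2^(1/4) exp(−πx²) is its own Fourier transform under this convention, which is why it attains the bound exactly.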
Entropy versus variance bounds: The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy. In fact, for any probability density function ϕ on the real line, Shannon's entropy inequality specifies: H(ϕ) ≤ (1/2) log(2πe V(ϕ)), where H is the Shannon entropy and V is the variance, an inequality that is saturated only in the case of a normal distribution. Entropy versus variance bounds: Moreover, the Fourier transform of a Gaussian probability amplitude function is also Gaussian—and the absolute squares of both of these are Gaussian, too. This can then be used to derive the usual Robertson variance uncertainty inequality from the above entropic inequality, showing that the entropic inequality is tighter than the variance one. That is (for ħ = 1), exponentiating the Hirschman inequality and using Shannon's expression above, 1/2 ≤ (1/(2eπ)) exp(H(|f|²) + H(|g|²)) ≤ √(V(|f|²) V(|g|²)). Entropy versus variance bounds: Hirschman explained that entropy—his version of entropy was the negative of Shannon's—is a "measure of the concentration of [a probability distribution] in a set of small measure." Thus a low or large negative Shannon entropy means that a considerable mass of the probability distribution is confined to a set of small measure. Entropy versus variance bounds: Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are. This is not the case with the variance: variance measures the concentration of mass about the mean of the distribution, and a low variance means that a considerable mass of the probability distribution is concentrated in a contiguous interval of small measure.
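The entropy/variance inequality above lends itself to the same kind of numerical check. In this sketch (the grid and the choice of a Laplace density as the non-Gaussian comparison are illustrative assumptions), the standard normal saturates H(ϕ) ≤ (1/2) log(2πe V(ϕ)) while the Laplace density falls strictly below it:

```python
# Check of H(phi) <= (1/2) log(2*pi*e*V(phi)) for probability densities,
# saturated only by the normal distribution.  Grid choices are illustrative.
import numpy as np

x = np.linspace(-20.0, 20.0, 20001)
dx = x[1] - x[0]

def H(p):
    """Differential Shannon entropy on a uniform grid."""
    q = np.maximum(p, 1e-300)
    return -np.sum(p * np.log(q)) * dx

def V(p):
    """Variance of the density p."""
    mean = np.sum(x * p) * dx
    return np.sum((x - mean)**2 * p) * dx

normal = np.exp(-x**2 / 2) / np.sqrt(2*np.pi)   # standard normal: equality
laplace = 0.5 * np.exp(-np.abs(x))              # Laplace: strict inequality

for p in (normal, laplace):
    print(H(p), 0.5 * np.log(2*np.pi*np.e*V(p)))
```

For the standard normal both sides equal (1/2) log(2πe) ≈ 1.419; for the unit Laplace density H = 1 + log 2 ≈ 1.693, strictly below its bound.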
Entropy versus variance bounds: To formalize this distinction, we say that two probability density functions ϕ1 and ϕ2 are equimeasurable if ∀δ > 0, μ{x ∈ R : ϕ1(x) ≥ δ} = μ{x ∈ R : ϕ2(x) ≥ δ}, where μ is the Lebesgue measure. Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy of any order. The same is not true of variance, however. Any probability density function has a radially decreasing equimeasurable "rearrangement" whose variance is less (up to translation) than that of any other rearrangement of the function; and there exist rearrangements of arbitrarily high variance (all having the same entropy).
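The entropy/variance contrast can be made concrete. The sketch below (grid and densities are illustrative choices of mine) builds two equimeasurable uniform densities, one contiguous and one split into two distant pieces, and confirms that they share the same Shannon entropy while their variances differ greatly:

```python
# Demo: equimeasurable densities share Shannon entropy but not variance.
# phi1 is uniform on [0, 1]; phi2 spreads the same density values over
# [0, 0.5] and [10, 10.5].  Both take the value 1 on a set of measure 1,
# so mu{phi >= delta} agrees for every delta.  Grid is an illustrative choice.
import numpy as np

x = np.linspace(-1.0, 12.0, 130001)
dx = x[1] - x[0]

phi1 = ((x >= 0) & (x <= 1)).astype(float)
phi2 = (((x >= 0) & (x <= 0.5)) | ((x >= 10) & (x <= 10.5))).astype(float)

def H(p):
    """Differential Shannon entropy on a uniform grid."""
    q = np.maximum(p, 1e-300)
    return -np.sum(p * np.log(q)) * dx

def V(p):
    """Variance of the density p."""
    m = np.sum(x * p) * dx
    return np.sum((x - m)**2 * p) * dx

print(H(phi1), H(phi2))   # entropies agree (both 0, since p is 0 or 1)
print(V(phi1), V(phi2))   # variances differ greatly (~1/12 vs ~25)
```

Splitting one of the pieces off to an arbitrarily distant location raises the variance without bound while the entropy never changes, which is exactly the rearrangement argument in the text.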
**Web of Spies** Web of Spies: Web of Spies is the eleventh novel in the long-running Nick Carter-Killmaster series of spy novels. Carter is a US secret agent, code-named N-3, with the rank of Killmaster. He works for AXE – a secret arm of the US intelligence services. Publishing history: The book was first published in January 1966 (Number A163F) by Award Books part of the Beacon-Signal division of Universal Publishing and Distributing Corporation (New York, USA), part of the Conde Nast Publications Inc. The novel was written by Manning Lee Stokes. Plot summary: The story is set in September 1965. Carter is assigned to Mission Sappho – to kidnap British scientist Alicia Todd – holidaying on the Costa Brava with her Russian spy lover – or kill her if she resists. Todd has developed a secret formula known as the Paradise Pill which has the ability to greatly enhance a soldier's morale and stamina. However, Todd has not left any written records of the formula and has committed the details to memory. Plot summary: First, Carter contacts AXE agent Gay Lord in Tangier, Morocco. She knows where Todd is staying from her dealings with die Spinne (The Spider) – a Spanish underground group who smuggle Nazis out of Europe. Gay Lord kept tabs on the current whereabouts of the smuggled Nazis and reported back to AXE. The Spider has recently fragmented into two factions – the largest led by Judas – Carter's adversary in Run, Spy, Run and The China Doll. Gay Lord was a double agent; doctoring her reports to AXE about the location of the Nazis in exchange for cash to fund her extravagant lifestyle. The smaller faction of The Spider (led by El Lobo) discovered her involvement and killed her. Plot summary: Carter escapes from Tangier and travels to a villa near L'Estartit on the Costa Brava where Russian agent Tasia Loften is seducing Alicia Todd. Carter discovers that Judas' men will raid the villa and attempt to kidnap Todd. 
Carter arrives at the villa just as it is besieged by Judas' men. Carter arranges a truce with the Russian spy in exchange for help in escaping the villa. Under the influence of narcotics, Todd panics and bolts and is captured by Judas. Plot summary: Carter and Tasia join forces to evade capture and rescue Todd. They are summoned to meet Judas at the bullfighting arena in Girona where he intends to sell Todd to the highest bidder. Tasia knows that Russia will be outbid by the Americans so she plants heroin on Carter causing him to be arrested on suspicion of drug trafficking before he can complete a deal with Judas. Plot summary: Tasia follows Judas to a monastery near La Jonquera / Prats-de-Mollo-la-Preste on the France-Spain border where Todd is imprisoned. Carter is rescued from the police cells by the smaller Spider group led by Carmena Santos – El Lobo's granddaughter. They want Carter's help to kill Judas and reunite the two Spider factions. Plot summary: Carter leads the assault on the monastery; he is to disable the electric fence, machine gun posts and searchlights to allow El Lobo's men to enter. Inside the monastery, Tasia secretly contacts a Russian commando outpost in nearby Andorra to come to her assistance. Carter betrays El Lobo's men by signaling that all is clear when in fact he has not disabled the monastery's security systems. El Lobo and his men attack and a vicious firefight with Judas' forces ensues during which Carmena Santos is killed. Plot summary: The Russian commandos arrive and join the assault on the monastery. Sensing that the end is near, Judas imprisons Carter and Tasia in a sealed coffin perched on the monastery walls ready to be tipped into the moat and makes his escape with Alicia Todd by river. Carter and Tasia escape from the coffin and follow Judas. Judas and Todd are tipped into the water approaching some rapids and scramble to the river bank. Judas bargains with Carter – his freedom in exchange for Todd. 
Carter agrees but finds that Todd is already dead. Judas escapes in Carter's car, which is attacked by El Lobo's men seeking revenge for the death of Carmena. Judas is presumed to be dead. Plot summary: After escaping a manhunt by the Spanish police and military, Carter and Tasia take refuge in Barcelona. Carter discovers that Tasia succeeded in extracting some information from Todd before her death and takes it from her. He leaves her some money and tells her she must choose to defect or return to Russia. Back in the US, Carter learns that Tasia has returned to Russia but her fate is unknown. The information she extracted from Todd was examined by experts and found to be worthless. Main characters: Nick Carter (agent N-3, AXE; posing as author Kenneth Ludwell Hughes) Mr Hawk (Carter's boss, head of AXE) Tasia Loften (real name: Anastasia Zaloff; Russian agent posing as Todd's lover) Alicia Todd (English pharmacologist) Judas (leader of die Spinne, Carter's foe) Skull (Judas' henchman) El Lobo (leader of smaller faction of die Spinne) Carmena Santos (die Spinne member, granddaughter of El Lobo) Gay Lord (AXE agent based in Morocco)
**Muscles of mastication** Muscles of mastication: There are four classical muscles of mastication. During mastication, three muscles of mastication (musculi masticatorii) are responsible for adduction of the jaw, and one (the lateral pterygoid) helps to abduct it. All four move the jaw laterally. Other muscles, usually associated with the hyoid, such as the mylohyoid muscle, are responsible for opening the jaw in addition to the lateral pterygoid. Structure: The muscles are: The masseter (composed of the superficial and deep head) The temporalis (the sphenomandibularis is considered a part of the temporalis by some sources, and a distinct muscle by others) The medial pterygoid The lateral pterygoid In humans, the mandible, or lower jaw, is connected to the temporal bone of the skull via the temporomandibular joint. This is an extremely complex joint which permits movement in all planes. The muscles of mastication originate on the skull and insert into the mandible, thereby allowing for jaw movements during contraction. Structure: Each of these primary muscles of mastication is paired, with each side of the mandible possessing one of the four. Innervation Unlike most of the other facial muscles, which are innervated by the facial nerve (CN VII), the muscles of mastication are innervated by the trigeminal nerve (CN V). More specifically, they are innervated by the mandibular branch, or V3. The mandibular nerve is both sensory and motor. Development This innervation pattern is a testament to the muscles' shared embryological origin from the first pharyngeal arch. The muscles of facial expression, on the other hand, derive from the second pharyngeal arch. Function: The mandible is the only bone that moves during mastication and other activities, such as talking. While these four muscles are the primary participants in mastication, other muscles are usually if not always helping the process, such as those of the tongue and the cheeks.
**Rising Card** Rising Card: The Rising Card is a popular category of magical illusion in which the magician causes randomly selected playing cards to spontaneously rise from the center of a deck. Many variations of this trick exist and are performed widely. The effect can be accomplished using a variety of methods and techniques, ranging from pure sleight of hand to complex electronic and mechanical solutions. Variations: Magician Howard Thurston is credited with creating a unique take on the Rising Card. As described by Smithsonian Magazine: One, called the "Rising Card," started with an audience member choosing certain cards, as if for a regular card trick. But expectations turned upside down when Thurston put the deck into a glass goblet. He would then call up certain cards—the king of spades, the ten of clubs—and they would rise two feet in the air, into his hands. The dazzling end was when all 52 cards were thrown, serially, into the audience. One reporter wrote that they fluttered to audience members "like beautiful butterflies." A similar variation is attributed to magician and inventor Samuel Cox Hooker. This version includes cards rising from the deck and floating in air beneath a glass bell jar. This complex, multi-stage iteration of the Rising Card effect was reenacted by John Gaughan in 2007 and has inspired curiosity and speculation as to the methods behind it. In his Complete Encyclopedia of Magic, Joseph Dunninger shares a number of variations of the Rising Card effect, including ones where the deck of cards is held in the magician's hand, or placed in a wine glass on a table. Magician Jeff McBride developed a version of the Rising Card effect where the card rises while the deck is held by a spectator; entitled "Kundalini Rising," McBride's variation links the Rising Card effect to mythology- and religion-themed storytelling.
Methods: Magicians accomplish the Rising Card effect using a variety of methodologies that include both sleight of hand techniques and mechanical solutions involving threads, weights, rubber rollers, elastics, adhesives, electronics, motors, and more. Historic versions of the Rising Card in particular often involved complex mechanics and automation, similar to clock and watch-making technology, to accomplish the effect.While some versions of the Rising Card involve complex equipment and carefully prepared decks, other variations can be accomplished using only special hand positions and an unaltered deck of cards.
**Flat-panel display** Flat-panel display: A flat-panel display (FPD) is an electronic display used to display visual content such as text or images. It is present in consumer, medical, transportation, and industrial equipment. Flat-panel displays are thin, lightweight, provide better linearity and are capable of higher resolution than typical consumer-grade TVs from earlier eras. They are usually less than 10 centimetres (3.9 in) thick. While the highest resolution for consumer-grade CRT televisions was 1080i, many flat-panel displays in the 2020s are capable of 1080p and 4K resolution. In the 2010s, portable consumer electronics such as laptops, mobile phones, and portable cameras have used flat-panel displays since they consume less power and are lightweight. As of 2016, flat-panel displays have almost completely replaced CRT displays. Flat-panel display: Most 2010s-era flat-panel displays use LCD or light-emitting diode (LED) technologies, sometimes combined. Most LCD screens are back-lit with color filters used to display colors. In many cases, flat-panel displays are combined with touch screen technology, which allows the user to interact with the display in a natural manner. For example, modern smartphone displays often use OLED panels, with capacitive touch screens. Flat-panel display: Flat-panel displays can be divided into two display device categories: volatile and static. The former requires that pixels be periodically electronically refreshed to retain their state (e.g. liquid-crystal displays (LCD)), and can only show an image when it has power. On the other hand, static flat-panel displays rely on materials whose color states are bistable, such as displays that make use of e-ink technology, and as such retain content even when power is removed. History: The first engineering proposal for a flat-panel TV was by General Electric in 1954 as a result of its work on radar monitors. 
The publication of their findings gave all the basics of future flat-panel TVs and monitors. But GE did not continue with the R&D required and never built a working flat panel at that time. The first production flat-panel display was the Aiken tube, developed in the early 1950s and produced in limited numbers in 1958. This saw some use in military systems as a head-up display and as an oscilloscope monitor, but conventional technologies overtook its development. Attempts to commercialize the system for home television use ran into continued problems and the system was never released commercially. The Philco Predicta featured a relatively flat (for its day) cathode ray tube setup and would be the first commercially released "flat panel" upon its launch in 1958; the Predicta was a commercial failure. The plasma display panel was invented in 1964 at the University of Illinois, according to The History of Plasma Display Panels. History: Liquid-crystal displays (LCDs) The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, and presented in 1960. Building on their work, Paul K. Weimer at RCA developed the thin-film transistor (TFT) in 1962. It was a type of MOSFET distinct from the standard bulk MOSFET. The idea of a TFT-based LCD was conceived by Bernard J. Lechner of RCA Laboratories in 1968. B.J. Lechner, F.J. Marlowe, E.O. Nester and J. Tults demonstrated the concept in 1968 with a dynamic scattering LCD that used standard discrete MOSFETs. The first active-matrix addressed electroluminescent display (ELD) was made using TFTs by T. Peter Brody's Thin-Film Devices department at Westinghouse Electric Corporation in 1968. In 1973, Brody, J. A. Asars and G. D. Dixon at Westinghouse Research Laboratories demonstrated the first thin-film-transistor liquid-crystal display (TFT LCD).
Brody and Fang-Chen Luo demonstrated the first flat active-matrix liquid-crystal display (AM LCD) using TFTs in 1974. By 1982, pocket LCD TVs based on LCD technology were developed in Japan. The 2.1-inch Epson ET-10 (Epson Elf) was the first color LCD pocket TV, released in 1984. In 1988, a Sharp research team led by engineer T. Nagayasu demonstrated a 14-inch full-color LCD display, which convinced the electronics industry that LCD would eventually replace CRTs as the standard television display technology. As of 2013, all modern high-resolution and high-quality electronic visual display devices use TFT-based active-matrix displays. History: LED displays The first usable LED display was developed by Hewlett-Packard (HP) and introduced in 1968. It was the result of research and development (R&D) on practical LED technology between 1962 and 1968 by a research team under Howard C. Borden, Gerald P. Pighini, and Mohamed M. Atalla, at HP Associates and HP Labs. In February 1969, they introduced the HP Model 5082-7000 Numeric Indicator. It was the first alphanumeric LED display, and was a revolution in digital display technology, replacing the Nixie tube for numeric displays and becoming the basis for later LED displays. In 1977, James P. Mitchell prototyped and later demonstrated what was perhaps the earliest monochromatic flat-panel LED television display. History: Ching W. Tang and Steven Van Slyke at Eastman Kodak built the first practical organic LED (OLED) device in 1987. In 2003, Hynix produced an organic EL driver capable of lighting in 4,096 colors. In 2004, the Sony Qualia 005 was the first LED-backlit LCD display. The Sony XEL-1, released in 2007, was the first OLED television. Common types: Liquid-crystal display (LCD) Field-effect LCDs are lightweight, compact, portable, cheap, more reliable, and easier on the eyes than CRT screens. LCD screens use a thin layer of liquid crystal, a liquid that exhibits crystalline properties.
It is sandwiched between two glass plates carrying transparent electrodes. Two polarizing films are placed at each side of the LCD. By generating a controlled electric field between electrodes, various segments or pixels of the liquid crystal can be activated, causing changes in their polarizing properties. These polarizing properties depend on the alignment of the liquid-crystal layer and the specific field effect used, being either twisted nematic (TN), in-plane switching (IPS) or vertical alignment (VA). Color is produced by applying appropriate color filters (red, green and blue) to the individual subpixels. LCD displays are used in various electronics such as watches, calculators, mobile phones, TVs, computer monitors and laptop screens. Common types: LED-LCD Most earlier large LCD screens were back-lit using a number of CCFLs (cold-cathode fluorescent lamps). However, small pocket-size devices almost always used LEDs as their illumination source. With the improvement of LEDs, almost all new displays are now equipped with LED backlight technology. The image is still generated by the LCD layer. Common types: Plasma panel A plasma display consists of two glass plates separated by a thin gap filled with a gas such as neon. Each of these plates has several parallel electrodes running across it. The electrodes on the two plates are at right angles to each other. A voltage applied between the two electrodes, one on each plate, causes a small segment of gas at the two electrodes to glow. The glow of gas segments is maintained by a lower voltage that is continuously applied to all electrodes. By 2010, consumer plasma displays had been discontinued by numerous manufacturers. Common types: Electroluminescent panel In an electroluminescent display (ELD), the image is created by applying electrical signals to the plates, which make the phosphor glow.
Common types: Organic light-emitting diode An OLED (organic light-emitting diode) is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound which emits light in response to an electric current. This layer of organic semiconductor is situated between two electrodes; typically, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens, computer monitors, and portable systems such as mobile phones, handheld game consoles, and PDAs. Common types: Quantum-dot light-emitting diode QLED, or quantum dot LED, is a flat-panel display technology introduced by Samsung under this trademark. Other television set manufacturers, such as Sony, had used the same technology to enhance the backlighting of LCD TVs as early as 2013. Quantum dots create their own unique light when illuminated by a light source of shorter wavelength, such as blue LEDs. This type of LED TV enhances the colour gamut of LCD panels, where the image is still generated by the LCD. In the view of Samsung, quantum dot displays for large-screen TVs are expected to become more popular than OLED displays in the coming years; firms like Nanoco and Nanosys compete to provide the QD materials. In the meantime, Samsung Galaxy devices such as smartphones are still equipped with OLED displays, likewise manufactured by Samsung. Samsung explains on its website that the QLED TVs it produces can determine what part of the display needs more or less contrast. Samsung also announced a partnership with Microsoft to promote the new Samsung QLED TV. Volatile: Volatile displays require that pixels be periodically refreshed to retain their state, even for a static image. As such, a volatile screen needs electrical power, either from mains electricity (being plugged into a wall socket) or a battery, to maintain an image on the display or change the image. This refresh typically occurs many times a second.
If this is not done, for example, if there is a power outage, the pixels will gradually lose their coherent state, and the image will "fade" from the screen. Volatile: Examples The following flat-display technologies were commercialized from the 1990s to the 2010s: plasma display panel (PDP); active-matrix liquid-crystal display (AMLCD); rear projection (Digital Light Processing (DLP), LCD, LCOS); electronic paper (E Ink, Gyricon); light-emitting diode display (LED); active-matrix organic light-emitting diode (AMOLED); quantum dot display (QLED). Technologies that were extensively researched but whose commercialization was limited or ultimately abandoned include: active-matrix electroluminescent display (ELD); interferometric modulator display (IMOD); field emission display (FED); surface-conduction electron-emitter display (SED, SED-TV). Static: Static flat-panel displays rely on materials whose color states are bistable. This means that the image they hold requires no energy to maintain, but instead requires energy to change. This results in a much more energy-efficient display, but with a tendency toward slow refresh rates, which are undesirable in an interactive display. Bistable flat-panel displays are beginning deployment in limited applications (cholesteric liquid-crystal displays, manufactured by Magink, in outdoor advertising; electrophoretic displays in e-book reader devices from Sony and iRex, and in labels; and interferometric modulator displays in a smartwatch).
**Chu space** Chu space: Chu spaces generalize the notion of topological space by dropping the requirements that the set of open sets be closed under union and finite intersection, that the open sets be extensional, and that the membership predicate (of points in open sets) be two-valued. The definition of continuous function remains unchanged other than having to be worded carefully to continue to make sense after these generalizations. Chu space: The name is due to Po-Hsiang Chu, who originally gave the construction, as a graduate student under the direction of Michael Barr in 1979, in the study of *-autonomous categories. Definition: Understood statically, a Chu space (A, r, X) over a set K consists of a set A of points, a set X of states, and a function r : A × X → K. This makes it an A × X matrix with entries drawn from K, or equivalently a K-valued binary relation between A and X (ordinary binary relations being 2-valued). Definition: Understood dynamically, Chu spaces transform in the manner of topological spaces, with A as the set of points, X as the set of open sets, and r as the membership relation between them, where K is the set of all possible degrees of membership of a point in an open set. The counterpart of a continuous function from (A, r, X) to (B, s, Y) is a pair (f, g) of functions f : A → B, g : Y → X satisfying the adjointness condition s(f(a), y) = r(a, g(y)) for all a ∈ A and y ∈ Y. That is, f maps points forwards at the same time as g maps states backwards. The adjointness condition makes g the inverse image function f−1, while the choice of X for the codomain of g corresponds to the requirement for continuous functions that the inverse image of open sets be open. Such a pair is called a Chu transform or morphism of Chu spaces. Definition: A topological space (X, T), where X is the set of points and T the set of open sets, can be understood as a Chu space (X, ∈, T) over {0, 1}.
That is, the points of the topological space become those of the Chu space while the open sets become states, and the membership relation " ∈ " between points and open sets is made explicit in the Chu space. The condition that the set of open sets be closed under arbitrary (including empty) union and finite (including empty) intersection becomes the corresponding condition on the columns of the matrix. A continuous function f : X → X' between two topological spaces becomes an adjoint pair (f, g) in which f is paired with an explicit witness function g that realizes the continuity condition by exhibiting the requisite open sets in the domain of f. Categorical structure: The category of Chu spaces over K and their maps is denoted by Chu(Set, K). As is clear from the symmetry of the definitions, it is a self-dual category: it is equivalent (in fact isomorphic) to its dual, the category obtained by reversing all the maps. It is furthermore a *-autonomous category with dualizing object (K, λ, {*}), where λ : K × {*} → K is defined by λ(k, *) = k (Barr 1979). As such it is a model of Jean-Yves Girard's linear logic (Girard 1987). Variants: The more general enriched category Chu(V, k) originally appeared in an appendix to Barr (1979). The Chu space concept originated with Michael Barr and the details were developed by his student Po-Hsiang Chu, whose master's thesis formed the appendix. Ordinary Chu spaces arise as the case V = Set, that is, when the monoidal category V is specialized to the cartesian closed category Set of sets and their functions, but were not studied in their own right until more than a decade after the appearance of the more general enriched notion. A variant of Chu spaces, called dialectica spaces, due to de Paiva (1989), replaces the equality in the map condition, s(f(a), y) = r(a, g(y)), with an inequality, requiring only that r(a, g(y)) ≤ s(f(a), y).
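The adjointness condition is easy to check mechanically once a Chu space is stored as its A × X matrix. The following is a minimal sketch in Python; the toy spaces, the maps f and g, and the helper name is_chu_transform are illustrative assumptions, not from the source:

```python
# A Chu space (A, r, X) over K stored as its A-by-X matrix: r[(a, x)] in K.

def is_chu_transform(r, s, f, g, A, Y):
    """Check the adjointness condition s(f(a), y) == r(a, g(y))
    for a candidate Chu transform (f, g) : (A, r, X) -> (B, s, Y)."""
    return all(s[(f[a], y)] == r[(a, g[y])] for a in A for y in Y)

# A 2x2 Chu space over K = {0, 1} ...
A, X = ["a1", "a2"], ["x1", "x2"]
r = {("a1", "x1"): 1, ("a1", "x2"): 0,
     ("a2", "x1"): 1, ("a2", "x2"): 1}

# ... and a 1x1 Chu space over the same K.
B, Y = ["b"], ["y"]
s = {("b", "y"): 1}

f = {"a1": "b", "a2": "b"}   # f maps points forwards
g = {"y": "x1"}              # g maps states backwards

print(is_chu_transform(r, s, f, g, A, Y))      # True: adjointness holds

# Redirecting g to the state x2 breaks adjointness at the point a1,
# since s(f(a1), y) = 1 but r(a1, x2) = 0.
g_bad = {"y": "x2"}
print(is_chu_transform(r, s, f, g_bad, A, Y))  # False
```

Swapping the roles of points and states (transposing the matrix) in the same representation yields the dual Chu space, reflecting the self-duality of Chu(Set, K) noted above.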
Universality: The category Top of topological spaces and their continuous functions embeds in Chu(Set, 2) in the sense that there exists a full and faithful functor F : Top → Chu(Set, 2) providing for each topological space (X, T) its representation F((X, T)) = (X, ∈, T) as noted above. This representation is moreover a realization in the sense of Pultr and Trnková (1980), namely that the representing Chu space has the same set of points as the represented topological space and transforms in the same way via the same functions. Universality: Chu spaces are remarkable for the wide variety of familiar structures they realize. Lafont and Streicher (1991) point out that Chu spaces over 2 realize both topological spaces and coherent spaces (introduced by J.-Y. Girard (1987) to model linear logic), while Chu spaces over K realize any category of vector spaces over a field whose cardinality is at most that of K. This was extended by Vaughan Pratt (1995) to the realization of k-ary relational structures by Chu spaces over 2^k. For example, the category Grp of groups and their homomorphisms is realized by Chu(Set, 8), that is, by Chu spaces over 2^3, since the group multiplication can be organized as a ternary relation. Chu(Set, 2) realizes a wide range of "logical" structures such as semilattices, distributive lattices, complete and completely distributive lattices, Boolean algebras, complete atomic Boolean algebras, etc. Further information on this and other aspects of Chu spaces, including their application to the modeling of concurrent behavior, may be found at Chu Spaces. Applications: Automata Chu spaces can serve as a model of concurrent computation in automata theory to express branching time and true concurrency. Chu spaces exhibit the quantum mechanical phenomena of complementarity and uncertainty. The complementarity arises as the duality of information and time, automata and schedules, and states and events.
Uncertainty arises when a measurement is defined to be a morphism such that increasing structure in the observed object reduces the clarity of observation. This uncertainty can be calculated numerically from its form factor to yield the usual Heisenberg uncertainty relation. Chu spaces correspond to wavefunctions as vectors of Hilbert space.
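The embedding F : Top → Chu(Set, 2) described under Universality can be implemented directly for finite spaces. A small sketch (the finite example, the helper chu_of_top, and the choice of the Sierpiński space are illustrative assumptions, not from the source):

```python
from itertools import product

def chu_of_top(points, opens):
    """The Chu space (X, member-of, T) over {0, 1} of a finite
    topological space (X, T): entry (x, U) is 1 iff x is in U."""
    return {(x, U): int(x in U) for x, U in product(points, opens)}

X = [0, 1]
discrete = [frozenset(s) for s in ([], [0], [1], [0, 1])]   # finer topology
sierpinski = [frozenset(s) for s in ([], [1], [0, 1])]      # coarser topology

r = chu_of_top(X, discrete)    # domain Chu space
s = chu_of_top(X, sierpinski)  # codomain Chu space

# The identity on points is continuous from the discrete space to the
# Sierpinski space; its backward map g sends each codomain open set to
# its inverse image, which here is the same subset (open in the domain,
# since every subset is open in the discrete topology).
f = {x: x for x in X}
g = {U: U for U in sierpinski}

# Adjointness, i.e. continuity stated matrix-theoretically:
print(all(s[(f[x], U)] == r[(x, g[U])] for x in X for U in sierpinski))  # True
```

Here continuity of f is exactly the requirement that each g(U) be one of the domain's states (open sets); the adjointness equation then holds because both sides reduce to the same membership fact.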
**Subject–verb–object word order** Subject–verb–object word order: In linguistic typology, subject–verb–object (SVO) is a sentence structure where the subject comes first, the verb second, and the object third. Languages may be classified according to the dominant sequence of these elements in unmarked sentences (i.e., sentences in which an unusual word order is not used for emphasis). English is included in this group. An example is "Sam ate yogurt." SVO is the second-most common order by number of known languages, after SOV. Together, SVO and SOV account for more than 87% of the world's languages. Subject–verb–object word order: The label SVO often includes ergative languages, although they do not have nominative subjects. Properties: Subject–verb–object languages almost always place relative clauses after the nouns which they modify and adverbial subordinators before the clause modified, with varieties of Chinese being notable exceptions. Properties: Although some subject–verb–object languages in West Africa, the best known being Ewe, use postpositions in noun phrases, the vast majority of them, such as English, have prepositions. Most subject–verb–object languages place genitives after the noun, but a significant minority, including the postpositional SVO languages of West Africa, the Hmong–Mien languages, some Sino-Tibetan languages, and European languages like Swedish, Danish, Lithuanian and Latvian, have prenominal genitives (as would be expected in an SOV language). Properties: Non-European SVO languages usually have a strong tendency to place adjectives, demonstratives and numerals after the nouns that they modify, but Chinese, Vietnamese, Malaysian and Indonesian place numerals before nouns, as in English. Some linguists have come to view the numeral as the head in the relationship to fit the rigid right-branching of these languages. There is a strong tendency, as in English, for main verbs to be preceded by auxiliaries: I am thinking. He should reconsider.
Language differences and variation: An example of SVO order in English is: Andy ate cereal. In an analytic language such as English, subject–verb–object order is relatively inflexible because it identifies which part of the sentence is the subject and which one is the object. ("The dog bit Andy" and "Andy bit the dog" mean two completely different things, while, in the case of "Bit Andy the dog", it may be difficult to determine whether it is a complete sentence or a fragment, with "Andy the dog" the object and an omitted/implied subject.) The situation is more complex in languages that have no strict order of V and O imposed by their grammar, e.g. Russian, Finnish, Ukrainian, or Hungarian; there, the ordering is instead governed by emphasis. Russian allows the use of subject, verb, and object in any order and "shuffles" parts to bring up a slightly different contextual meaning each time. E.g. "любит она его" (loves she him) may be used to point out "she acts this way because she LOVES him", "его она любит" (him she loves) is used in the context "if you pay attention, you'll see that HE is the one she truly loves", and "его любит она" (him loves she) may appear along the lines of "I agree that cat is a disaster, but since my wife adores it and I adore her...". Regardless of order, it is clear that "его" is the object because it is in the accusative case. In Polish, SVO order is basic in an affirmative sentence, and a different order is used to either emphasize some part of it or to adapt it to a broader context logic. For example, "Roweru ci nie kupię" (I won't buy you a bicycle), "Od piątej czekam" (I've been waiting since five). In Turkish, it is normal to use SOV, but SVO may be used sometimes to emphasize the verb. For example, "John terketti Mary'yi" (lit. John/left/Mary: John left Mary) is the answer to the question "What did John do with Mary?" instead of the regular [SOV] sentence "John Mary'yi terketti" (lit. John/Mary/left).
Language differences and variation: German, Dutch, and Kashmiri display subject-verb-object order in some clauses, especially main clauses, but are really verb-second languages, not SVO languages in the sense of a word-order type. They have SOV in subordinate clauses, as given in Example 1 below. Example 2 shows the effect of verb-second order: the first element in the clause that comes before the verb need not be the subject. In Kashmiri, the word order in embedded clauses is conditioned by the category of the subordinating conjunction, as in Example 3. "Er weiß, dass ich jeden Sonntag das Auto wasche."/"Hij weet dat ik elke zondag de auto was." (German & Dutch respectively: "He knows that I wash the car each Sunday", lit. "He knows that I each Sunday the car wash".) Cf. the simple sentence "Ich wasche das Auto jeden Sonntag."/"Ik was de auto elke zondag.", "I wash the car each Sunday." "Jeden Sonntag wasche ich das Auto."/"Elke zondag was ik de auto." (German & Dutch respectively: "Each Sunday I wash the car.", lit. "Each Sunday wash I the car."). "Ich wasche das Auto jeden Sonntag"/"Ik was de auto elke zondag" translates perfectly into English "I wash the car each Sunday", but preposing the adverbial results in a structure that is different from the English one. Language differences and variation: Kashmiri: If the embedded clause is introduced by the transparent conjunction zyi, the SOV order changes to SVO. "mye ees phyikyir (zyi) tsi maa dyikh temyis ciThy". English developed from such a reordering language and still bears traces of this word order, for example in locative inversion ("In the garden sat a cat.") and some clauses beginning with negative expressions: "only" ("Only then do we find X."), "not only" ("Not only did he storm away but also slammed the door."), "under no circumstances" ("Under no circumstances are the students allowed to use a mobile phone."), "never" ("Never have I done that."), "on no account" and the like.
In such cases, do-support is sometimes required, depending on the construction.
**Butyl rubber** Butyl rubber: Butyl rubber, sometimes just called "butyl", is a synthetic rubber, a copolymer of isobutylene with isoprene. The abbreviation IIR stands for isobutylene isoprene rubber. Polyisobutylene, also known as "PIB" or polyisobutene, (C4H8)n, is the homopolymer of isobutylene, or 2-methyl-1-propene, on which butyl rubber is based. Butyl rubber is produced by polymerization of about 98% isobutylene with about 2% isoprene. Structurally, polyisobutylene resembles polypropylene, but has two methyl groups substituted on every other carbon atom, rather than one. Polyisobutylene is a colorless to light yellow viscoelastic material. It is generally odorless and tasteless, though it may exhibit a slight characteristic odor. Properties: Butyl rubber has excellent impermeability to gas diffusion, and the long polyisobutylene segments of its polymer chains give it good flex properties. The formula for PIB is –(–CH2–C(CH3)2–)n–; the IIR chain has the same backbone with occasional isoprene units incorporated along it. PIB can be made from the monomer isobutylene (CH2=C(CH3)2) only via cationic addition polymerization. Properties: A synthetic rubber, or elastomer, butyl rubber is impermeable to air and used in many applications requiring an airtight rubber. Polyisobutylene and butyl rubber are used in the manufacture of adhesives, agricultural chemicals, fiber optic compounds, ball bladders, O-rings, caulks and sealants, cling film, electrical fluids, lubricants (two-stroke engine oil), paper and pulp, personal care products, pigment concentrates, for rubber and polymer modification, for protecting and sealing certain equipment for use in areas where chemical weapons are present, as a gasoline/diesel fuel additive, and chewing gum. The first major application of butyl rubber was tire inner tubes. This remains an important segment of its market even today. History: Isobutylene was discovered by Michael Faraday in 1825.
Polyisobutylene (PIB) was first developed by the BASF unit of IG Farben in 1931 using a boron trifluoride catalyst at low temperatures and sold under the trade name Oppanol B. PIB remains a core business for BASF to this day. History: It was later developed into butyl rubber in 1937, by researchers William J. Sparks and Robert M. Thomas, at Standard Oil of New Jersey's Linden, N.J., laboratory. Today, the majority of the global supply of butyl rubber is produced by two companies, ExxonMobil (one of the descendants of Standard Oil) and Polymer Corporation, a Canadian federal crown corporation established in 1942 to produce artificial rubber to substitute for overseas supply cut off by World War II. It was renamed Polysar in 1976 and the rubber component became a subsidiary, Polysar Rubber Corp. The company was privatized in 1988 with its sale to NOVA Corp which, in turn, sold Polysar Rubber in 1990 to Bayer AG of Germany. In 2005 Bayer AG spun off chemical divisions, including most of the Sarnia site, creating LANXESS AG, also of Germany. PIB homopolymers of high molecular weight (100,000–400,000 or more) are polyolefin elastomers: tough extensible rubber-like materials over a wide temperature range, with low density (0.913–0.920), low permeability and excellent electrical properties. History: In the 1950s and 1960s, halogenated butyl rubber (halobutyl) was developed, in its chlorinated (chlorobutyl) and brominated (bromobutyl) variants, providing significantly higher curing rates and allowing covulcanization with other rubbers such as natural rubber and styrene-butadiene rubber. Halobutyl is today the most important material for the inner linings of tubeless tires. Francis P. Baldwin received the 1979 Charles Goodyear Medal for the many patents he held for these developments. In the spring of 2013, two incidents of PIB contamination in the English Channel, believed to be connected, were described as the worst UK marine pollution 'for decades'.
The RSPB estimated over 2,600 seabirds were killed by the chemical, and hundreds more were rescued and decontaminated. Uses: Fuel and lubricant additive Polyisobutylene can be reacted with maleic anhydride to make polyisobutenylsuccinic anhydride (PIBSA), which can then be converted into polyisobutenylsuccinimides (PIBSI) by reacting it with various ethyleneamines. When used as additives in lubricating oils and motor fuels, these can have a substantial effect on the properties of the oil or fuel. Polyisobutylene added in small amounts to the lubricating oils used in machining results in a significant reduction in the generation of oil mist and thus reduces the operator's inhalation of oil mist. It is also used to clean up waterborne oil spills as part of the commercial product Elastol. When added to crude oil it increases the oil's viscoelasticity when pulled, causing the oil to resist breakup when it is vacuumed from the surface of the water. Uses: As a fuel additive, polyisobutylene has detergent properties. When added to diesel fuel, it resists fouling of fuel injectors, leading to reduced hydrocarbon and particulate emissions. It is blended with other detergents and additives to make a "detergent package" that is added to gasoline and diesel fuel to resist buildup of deposits and engine knock. Polyisobutylene is used in some formulations as a thickening agent. Uses: Explosives Polyisobutylene is often used by the explosives industry as a binding agent in plastic explosives such as C-4. Polyisobutylene binder is used because it makes the explosive more insensitive to premature detonation as well as making it easier to handle and mold. Speakers and audio equipment Butyl rubber is generally used in speakers, specifically the surrounds. It was used as a replacement for foam surrounds because the foam would deteriorate. The majority of modern speakers use butyl rubber, while most vintage speakers use foam.
Sporting equipment Butyl rubber is used for the bladders in sporting balls (e.g. Rugby balls, footballs, basketballs, netballs) and to make bicycle inner tubes to provide a tough, airtight inner compartment. Damp proofing and roof repair Butyl rubber sealant is used for damp proofing, rubber roof repair and for maintenance of roof membranes (especially around the edges). It is important to have the roof membrane fixed, as a lot of fixtures (e.g., air conditioner vents, plumbing, and other pipes) can considerably loosen it. Rubber roofing typically refers to a specific type of roofing materials that are made of ethylene propylene diene monomers (EPDM rubber). It is crucial to the integrity of such roofs to avoid using harsh abrasive materials and petroleum-based solvents for their maintenance. Polyester fabric laminated to butyl rubber binder provides a single-sided waterproof tape that can be used on metal, PVC, and cement joints. It is used for repairing and waterproofing metal roofs. Uses: Gas masks and chemical agent protection Butyl rubber is one of the most robust elastomers when subjected to chemical warfare agents and decontamination materials. It is a harder and less porous material than other elastomers, such as natural rubber or silicone, but still has enough elasticity to form an airtight seal. While butyl rubber will break down when exposed to agents such as NH3 (ammonia) or certain solvents, it breaks down more slowly than comparable elastomers. It is therefore used to create seals in gas masks and other protective clothing. Uses: Pharmaceutical stoppers Butyl and bromobutyl rubber are commonly used for manufacturing rubber stoppers used for sealing medicine vials and bottles. 
Uses: Chewing gum Most modern chewing gum uses food-grade butyl rubber as the central gum base, which not only contributes the gum's elasticity but also gives it a stubborn, sticky quality that has led some municipalities to propose taxation to cover the costs of its removal. Recycled chewing gum has also been used as a source of recovered polyisobutylene. Amongst other products, this base rubber has been manufactured into coffee cups and 'Gumdrop' gum-collecting bins. When filled, the collecting bins and their contents are shredded together and recycled again. Uses: Tires Because of their superior resistance to gas diffusion, butyl rubber and halogenated butyl rubber are used for the innerliner inside pneumatic tubeless tires, and for the inner tube in older tires. Insulating windows Polyisobutylene is used as the primary seal in an insulating glass unit for commercial and residential construction, providing the air and moisture seal for the unit.
**Water slide** Water slide: A water slide (also referred to as a flume, or water chute) is a type of slide designed for warm-weather or indoor recreational use at water parks. Water slides differ in their riding method and, accordingly, in size. Some slides require riders to sit directly on the slide, or on a raft or tube designed to be used with the slide. Water slide: A typical water slide uses a pump system to pump water to the top, which is then allowed to flow freely down its surface. The water reduces friction so sliders travel down the slide very quickly. Water slides run into a swimming pool (often called a plunge pool) or a long run-out chute. A lifeguard is usually stationed at the top and the bottom of the slide, so that if a rider gets hurt they will be treated immediately. Traditional water slides: Body slides Body slides feature no mat or tube, instead having riders sit or lie directly on the surface of the slide. The simplest resemble wet playground slides. There are a variety of types of body slides, including flumes, speed slides, bowls and AquaLoops; the latter three are explained below. Inline tube slides Some slides are designed to be ridden with a tube which typically seats either 2 or 3 riders inline. Similar to a traditional body slide, these slides include many twists and turns and come in a variety of types, including bowls, funnels and half-pipes. Traditional water slides: Longest The world's longest water slide was a temporary installation in Waimauku, New Zealand, in February 2013. It was constructed with a length of 650 metres (2,130 ft), of which 550 metres (1,800 ft) functioned properly. Its creators claimed the previous record holder had a length of ~350 metres (1,150 ft).
The slide is being moved to Action Park in Vernon, New Jersey. The "Waterslide" at Buena Vista Lodge in Costa Rica is a 400 metres (1,300 ft) long water slide where the rider sits directly on the slide, with an inner-tube around their upper body for safety. The longest multi-person water-coaster (see below) is the 1,763 foot (537 m) long Mammoth at Holiday World in Santa Claus, Indiana. "The Longest" is a permanent single-passenger tube waterslide located in Selangor, Malaysia, at the ESCAPE family theme park. Visitors access the attraction via a cable car system and ride down the slide for approximately 4 minutes whilst navigating through 1,111 metres (3,645 ft) of scenic jungle. 21st century water slides: AquaLoop The first known existence of a looping water slide was at Action Park in Vernon Township, New Jersey in the mid-1980s, named Cannonball Loop. This slide featured a vertical loop but was repeatedly closed due to safety concerns. In the late 2000s, Austrian manufacturer Aquarena developed the world's first safe looping water slide, known as the AquaLoop. The company engineered a slide with an inclined loop rather than a standard vertical one. The slide is currently licensed and distributed by Canadian water slide manufacturer WhiteWater West. There are nearly 20 AquaLoop installations around the world. The first installation was in Slovenia in 2008. The largest collections are located at Wet'n'Wild Gold Coast and Raging Waters Sydney in Australia, which both house 4 AquaLoops that opened in 2010 and 2013, respectively. Wet'n'Wild Gold Coast was also the first to install more than one AquaLoop at a single location. The AquaLoop uses a trap-door to release riders down a 17-metre (56 ft) near-vertical descent at a speed of up to 60 kilometres per hour (37 mph). Riders experience 2.5 Gs in less than 2 seconds. The whole ride is over within 7 seconds.
21st century water slides: Bowl A bowl is a type of water slide where riders descend a steep drop into a round bowl. Under the effects of centrifugal force, the riders circle the outer area of the bowl before exiting down through the middle, often into a pool underneath but sometimes into an additional slide section. This style of water slide comes in various styles and is manufactured by ProSlide, WhiteWater West and Waterfun Products. The different variations can be ridden on a 4-person cloverleaf tube, 2 person inline tube, single person tube or as a body slide. 21st century water slides: Family rafting Family rafting water slides have the largest capacity of all the different types of tubing water slides averaging between 4 and 6 riders per dispatch. Riders hop in a circular raft and travel down long, twisted 4.5-metre (15 ft) channels to the ground. This type of water slide is manufactured by Australian Waterslides and Leisure, ProSlide, Waterfun Products and WhiteWater West. All of these companies manufacture open-air slides while ProSlide also manufactures an enclosed version. 21st century water slides: Funnel A funnel water slide requires riders to sit in a 2 or 4 seater round tube. Riders drop from inside a tunnel out into the ride's main element shaped like a funnel on its side. Riders oscillate from one side to the other until they exit through the back of the funnel and into a splash pool. The most common type of funnel is the ProSlide Tornado which is installed at almost 60 locations around the world dating back to 2003. In 2010, WhiteWater West began developing a competing product known as the Abyss, utilizing a raft that holds up to six riders. 21st century water slides: The Half-Pipe Similar to a funnel, a half-pipe features a slide in which riders oscillate back and forth. However, this style of ride does not feature any enclosed sections. 
On a Waterfun Product Sidewinder or Sidewinder Mini, riders oscillate several times before coming to a rest at the base of the slide. Riders then walk off the slide, returning their tube to the next riders. A variation of the half-pipe called a boomerang slide typically has a steep enclosed section that exits to a wider upward-rising section that the rider then slides back down the other direction to the end of the slide. 21st century water slides: Multi-lane racer A multi-lane racer is a ride where between 4 and 8 riders dive head-first onto a mat and down a slide with several dips. As an additional component of this ride, some offer an additional enclosed helix at the top of the ride. ProSlide offer ProRacers, Octopus Racers, Kraken Racers and Rally Racers, while WhiteWater West have designed the Mat Racers and Whizzards. In 2016, WhiteWater West introduced the Mat Blaster, which combines the Whizzard model with elements of their MasterBlaster water coaster. Australian Waterslides and Leisure have also manufactured a standard multi-lane racer. 21st century water slides: Speed slide A speed slide is a type of body slide where riders are sent down steep, free-fall plunges to the ground. Almost all water slide manufacturers offer a variation of this type of slide. ProSlide and WhiteWater West both offer a speed slide with a trap door, the same trap door found on the AquaLoop. 21st century water slides: Water coaster A water coaster is a water slide that mimics a roller coaster by providing not only descents, but also ascents. There are three different ways water coasters operate: water jets, conveyor belts, and linear induction motors. High-powered water jets power the first type of water coaster, generically known as "Master Blasters". Originally manufactured by New Braunfels General Store (NBGS), the rights were sold in December 2006 to WhiteWater West of Canada.
The first installations of this type of ride were Dragon Blaster and Family Blaster, installed in 1994, at Schlitterbahn in New Braunfels, Texas. The following month, a third Master Blaster opened at Adventure Bay in Houston, Texas. This type of ride features over 70 installations worldwide. The largest collection of Master Blasters is at Wild Wadi Water Park in Dubai, where 9 of the park's 16 water slides utilize this technology, propelling riders to the top of a mountain. In 2021, WhiteWater West opened their tallest Master Blaster, and the tallest water coaster in the world, Tsunami Surge at Six Flags Hurricane Harbor Chicago. The first conveyor-belt water coaster was installed at Kalahari Resort in Sandusky, Ohio. Known as the Zip Coaster, the ride carries guests quickly uphill and over steep slides using high-speed conveyor belts. The third incarnation of the water coaster utilizes linear induction motors (LIM technology) and specially-designed rafts. The first installation to use LIM technology was Deluge, opening in 2006 at what was (at the time) Splash Kingdom at Six Flags Kentucky Kingdom. The longest water coaster utilizing this magnetic system is Mammoth, at Splashin' Safari in Santa Claus, Indiana. This technology has been adapted to other ProSlide products, and is collectively known as the ProSlide HydroMAGNETIC. In 2010, ProSlide announced that they would be combining the family rafting and water coaster technologies to create a Hydromagnetic Mammoth. The first installation of this variation is aptly titled Mammoth, which premiered in 2012 at Splashin' Safari in Indiana. It replaced the park's Wildebeest as the longest water coaster in the world. 21st century water slides: Drop-Launch Capsule A drop-launch capsule is a device that is placed at the start of a body slide. Riders step into a capsule, usually with a clear front. Once the capsule is closed, a hatch opens underneath the riders, dropping them into a near-vertical portion of the slide.
The feature is known by different names from various manufacturers: ProSlide calls it a SkyBox, while WhiteWater West refers to it as an AquaDrop. 21st century water slides: River stream slide A river slide, also commonly referred to as a "crazy river", resembles a brook (small stream), and may feature buffer pools throughout the way down. Because multiple people can slide safely at the same time, its queue area clears at a faster rate. Inflatable water slides: Inflatable water slides are designed for the home user. They are typically made of a thick, strong PVC or vinyl and nylon, and are inflated using an electric or gasoline-powered blower. The water slide is attached to a water hose in order to generate the supply of water. There are small inflatable water slides for private home use and larger ones for school, picnic, corporate, or carnival-style use. Inflatable water slides: There are also swimming pool water slides which users can set up to slide straight into a pool. Most parks avoid this due to safety concerns and will have swimming sections in a separate pool.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Comps (casino)** Comps (casino): Comps are complimentary items and services given out by casinos to encourage players to gamble. The amount and quality of comps that a player is given usually depends on what games they play, how much they bet, and how long they play. Comps (casino): Most casinos have casino hosts responsible for awarding comps and contacting players to bring them back to the casino. Pit bosses can also award comps at table games. Despite the strategic importance of comps in the casinos' structure, there is no separate profession for awarding them; in different gambling houses, different specialists may be responsible for it. Most casinos ask players to get a player's club card so their play can be tracked and comps awarded accordingly. Levels: The lowest level of comp is free alcohol. Many casinos provide free drinks to anyone gambling. The second level of comp is free self-parking, lounge access, or free meals. Many casinos have several players' lounges and restaurants and might require more play to earn a comp to higher-end restaurants. Often the player is given a certain amount to spend, but sometimes high rollers might get to order as much food as they want and bring guests. The next level of comps is free lodging, free valet parking, and free access to more exclusive high roller lounges. Many casinos have attached hotels, but those that do not can sometimes comp rooms at a hotel nearby. Many casino hotels have better rooms, such as suites, villas, and presidential suites, for bigger bettors. Many players who get hotel rooms also get a package called "RFB" (for "room, food, and beverage") or "RF" (for "room and food"). Many casinos also offer other comps, especially to high rollers. These can include airfare, limo rides, show tickets, golf, concierge services, cash back, loss rebates, private gaming areas, and private jet service. Levels: Casinos also often offer players comps by mail, email, or app.
These can include free bet offers, free meals, discounted or free rooms, tournament entries, or prize drawings. These offers often come with terms and conditions for rollover and wagering requirements. Some casinos contract with bus companies to bring players. Riders often enjoy free slot play and dining coupons, often worth as much as the bus fare itself. Calculation: Technically, every player may be offered comps, but most casinos require players to have played for a given period of time and at a certain level; the duration of play and amount wagered are directly proportional to the level of expected comps. Which games are played is also a factor. Casinos award comps based on a player's average daily theoretical loss (known as ADT, theoretical loss, or "theo"). Theoretical loss is the amount of money a player is expected to lose based on the long-run statistical advantage the casino has in the particular game being played. Theoretical loss algorithms differ by casino, but the logic behind the calculation generally works like this: Theoretical Loss = (Casino Advantage) × (Total Wager) Comp hustling: "Comp counters", "comp hustlers", or "comp wizards" try to maximize comps while minimizing expected losses. Comp hustlers play games with a low house edge, such as blackjack or video poker, or games with small bet sizes, such as penny slots. Comp hustlers use tactics such as placing large bets when a pit boss is checking their bet size to rate them for comps and then moving to a smaller bet size when the boss is not watching. They also take frequent breaks from playing, play at full tables to be dealt fewer hands per hour, and play more slowly. Online comps: Online casinos, poker rooms, and sportsbooks offer bonuses similar to brick-and-mortar casino comps. Comp hustlers and advantage players can use these bonuses to turn a profit via bonus hunting or can convert these comps to a guaranteed profit using matched betting.
Online casinos know of the potential for losing money while giving out bonuses and have minimum wagering requirements as a result. Some casinos limit the payout in case of a win. They also sometimes restrict players from playing games with a low house edge.
**Effective mass (spring–mass system)** Effective mass (spring–mass system): In a real spring–mass system, the spring has a non-negligible mass $m$. Since not all of the spring's length moves at the same velocity $v$ as the suspended mass $M$, its kinetic energy is not equal to $\tfrac{1}{2}mv^2$. As such, $m$ cannot be simply added to $M$ to determine the frequency of oscillation, and the effective mass of the spring is defined as the mass that needs to be added to $M$ to correctly predict the behavior of the system. Ideal uniform spring: The effective mass of the spring in a spring–mass system when using an ideal spring of uniform linear density is 1/3 of the mass of the spring and is independent of the direction of the spring–mass system (i.e., horizontal, vertical, and oblique systems all have the same effective mass). This is because external acceleration does not affect the period of motion around the equilibrium point. Ideal uniform spring: The effective mass of the spring can be determined by finding its kinetic energy. This requires adding the kinetic energies of all the mass elements, via the following integral, where $u$ is the velocity of the mass element $dm$: $$K = \int_m \tfrac{1}{2}u^2\,dm$$ Since the spring is uniform, $dm = \left(\frac{dy}{L}\right)m$, where $L$ is the length of the spring at the time of measuring the speed. Hence, $$K = \int_0^L \tfrac{1}{2}u^2\left(\frac{dy}{L}\right)m = \frac{m}{2L}\int_0^L u^2\,dy$$ The velocity of each mass element of the spring is directly proportional to its distance from the position where the spring is attached (if near to the block then more velocity and if near to the ceiling then less velocity), i.e. $u = \frac{vy}{L}$, from which it follows: $$K = \frac{m}{2L}\int_0^L \left(\frac{vy}{L}\right)^2 dy = \frac{m}{2L^3}v^2\int_0^L y^2\,dy = \frac{m}{2L^3}v^2\left[\frac{y^3}{3}\right]_0^L = \tfrac{1}{2}\frac{m}{3}v^2$$ Comparing to the expected original kinetic energy formula $\tfrac{1}{2}mv^2$, the effective mass of the spring in this case is $m/3$.
Using this result, the total energy of the system can be written in terms of the displacement $x$ from the spring's unstretched position (ignoring constant potential terms and taking the upwards direction as positive): $$T = \tfrac{1}{2}\left(\frac{m}{3}\right)v^2 + \tfrac{1}{2}Mv^2 + \tfrac{1}{2}kx^2 - \tfrac{1}{2}mgx - Mgx$$ Note that $g$ here is the acceleration of gravity along the spring. By differentiation of the equation with respect to time, the equation of motion is: $$\left(-\frac{m}{3} - M\right)a = kx - \tfrac{1}{2}mg - Mg$$ The equilibrium point $x_{\mathrm{eq}}$ can be found by letting the acceleration be zero: $$x_{\mathrm{eq}} = \frac{1}{k}\left(\tfrac{1}{2}mg + Mg\right)$$ Defining $\bar{x} = x - x_{\mathrm{eq}}$, the equation of motion becomes: $$\left(\frac{m}{3} + M\right)a = -k\bar{x}$$ This is the equation for a simple harmonic oscillator with period: $$\tau = 2\pi\left(\frac{M + m/3}{k}\right)^{1/2}$$ So the effective mass of the spring added to the mass of the load gives us the "effective total mass" of the system that must be used in the standard formula $2\pi\sqrt{\frac{m}{k}}$ to determine the period of oscillation. General case: As seen above, the effective mass of a spring does not depend upon "external" factors such as the acceleration of gravity along it. In fact, for a non-uniform spring, the effective mass solely depends on its linear density $\rho(x)$ along its length: $$K = \int_m \tfrac{1}{2}u^2\,dm = \int_0^L \tfrac{1}{2}u^2\rho(x)\,dx = \int_0^L \tfrac{1}{2}\left(\frac{vx}{L}\right)^2\rho(x)\,dx = \tfrac{1}{2}\left[\int_0^L \frac{x^2}{L^2}\rho(x)\,dx\right]v^2$$ So the effective mass of a spring is: $$m_{\mathrm{eff}} = \int_0^L \frac{x^2}{L^2}\rho(x)\,dx$$ This result also shows that $m_{\mathrm{eff}} \le m$, with $m_{\mathrm{eff}} = m$ occurring in the case of an unphysical spring whose mass is located purely at the end farthest from the support. Real spring: The above calculations assume that the stiffness coefficient of the spring does not depend on its length. However, this is not the case for real springs. For small values of $M/m$, the displacement is not so large as to cause elastic deformation. Jun-ichi Ueda and Yoshiro Sadamoto have found that as $M/m$ increases beyond 7, the effective mass of a spring in a vertical spring–mass system becomes smaller than Rayleigh's value $m/3$ and eventually reaches negative values.
This unexpected behavior of the effective mass can be explained in terms of the elastic after-effect (which is the spring's not returning to its original length after the load is removed).
**Twimight** Twimight: Twimight was an open source Android client for the social networking site Twitter. The client let users view in real time "tweets" or micro-blog posts on the Twitter website as well as publish their own. Added value: In addition to being a fully functional, ad-free and open-source Twitter client, Twimight allowed communication when the cellular network was unavailable (for example, in case of a natural disaster). Twimight was also equipped with a feature called the "disaster mode", which users could enable or disable at will. When the disaster mode was enabled and the cellular network was down, Twimight used peer-to-peer communication to let users tweet in any circumstance. Enabling the disaster mode turned on the phone's Bluetooth transceiver and connected the user to other nearby phones. This created a mobile ad hoc network, or MANET, which could be used, for example, to locate missing persons even when the communication infrastructure had failed. History: Twimight started out as a project for a Master's thesis at ETH Zurich in the spring of 2011.
**Vitellogenesis** Vitellogenesis: Vitellogenesis is the process of yolk protein formation in the oocytes of non-mammalian vertebrates during sexual maturation. The term vitellogenesis comes from the Latin vitellus ("egg yolk"). Yolk proteins, such as lipovitellin and phosvitin, provide maturing oocytes with the metabolic energy required for development. Vitellogenins are the precursor proteins that lead to yolk protein accumulation in the oocyte. Estrogen and vitellogenin production have a positive correlation: when estrogen production in the ovary is increased via activation of the hypothalamo-pituitary axis, it leads to heightened vitellogenin production in the liver. Vitellogenin production in the liver is the first step of vitellogenesis. Vitellogenins are released into the bloodstream and transported to the growing oocyte, where they lead to yolk protein production. The transport of vitellogenins into the maturing oocyte occurs via receptor-mediated endocytosis through a low-density lipoprotein receptor (LDLR). Yolk is a lipoprotein composed of proteins, phospholipids and neutral fats along with a small amount of glycogen. The yolk is synthesised in the liver of the female parent in soluble form. Through circulation it is transported to the follicle cells that surround the maturing ovum, and is deposited in the form of yolk platelets and granules in the ooplasm. The mitochondria and Golgi complex are said to bring about the conversion of the soluble form of yolk into insoluble granules or platelets. Vitellogenesis: The two hormones responsible for vitellogenesis stimulation in insects are the sesquiterpenoid juvenile hormone (JH) and the ecdysteroid 20-hydroxyecdysone (E20). More recent studies are showing the importance of miRNA in vitellogenesis stimulation as well. The pathways that these hormones regulate are largely dependent on the evolutionary history of the insect species.
Together, JH, E20, and miRNA help synthesize vitellogenins within the fat body. JH acts through a Methoprene-tolerant/Taiman receptor complex to stimulate vitellogenin synthesis in the fat body. In cockroaches, for example, vitellogenesis can be stimulated by injection of juvenile hormone into immature females and mature males. In mosquitoes infected with Plasmodium, vitellogenesis may be manipulated by the parasites to reduce fecundity.
**Norpsilocin** Norpsilocin: Norpsilocin (4-HO-NMT) is a tryptamine alkaloid discovered in 2017 in the psychedelic mushroom Psilocybe cubensis. It is hypothesized to be a dephosphorylated metabolite of baeocystin. Norpsilocin was found to be a near-full agonist of the 5-HT2A receptor. It is also more potent than psilocin.
**Comparison of programming languages (basic instructions)** Comparison of programming languages (basic instructions): This article compares a large number of programming languages by tabulating their data types, their expression, statement, and declaration syntax, and some common operating-system interfaces. Conventions of this article: Generally, var, var, or var is how variable names or other non-literal values to be interpreted by the reader are represented. The rest is literal code. Guillemets (« and ») enclose optional sections. Tab ↹ indicates a necessary (whitespace) indentation. The tables are not sorted lexicographically ascending by programming language name by default, and some languages have entries in some tables but not others. Type identifiers: Integers ^a The standard constants int shorts and int lengths can be used to determine how many shorts and longs can be usefully prefixed to short int and long int. The actual sizes of short int, int, and long int are available as the constants short max int, max int, and long max int etc. ^b Commonly used for characters. Type identifiers: ^c The ALGOL 68, C and C++ languages do not specify the exact width of the integer types short, int, long, and (C99, C++11) long long, so they are implementation-dependent. In C and C++ short, long, and long long types are required to be at least 16, 32, and 64 bits wide, respectively, but can be more. The int type is required to be at least as wide as short and at most as wide as long, and is typically the width of the word size on the processor of the machine (i.e. on a 32-bit machine it is often 32 bits wide; on 64-bit machines it is sometimes 64 bits wide). C99 and C++11 also define the [u]intN_t exact-width types in the stdint.h header. See C syntax#Integral types for more information.
In addition the types size_t and ptrdiff_t are defined in relation to the address size to hold unsigned and signed integers sufficiently large to handle array indices and the difference between pointers. Type identifiers: ^d Perl 5 does not have distinct types. Integers, floating point numbers, strings, etc. are all considered "scalars". ^e PHP has two arbitrary-precision libraries. The BCMath library just uses strings as datatype. The GMP library uses an internal "resource" type. ^f The value of n is provided by the SELECTED_INT_KIND intrinsic function. ^g ALGOL 68G's runtime option --precision "number" can set precision for long long ints to the required "number" significant digits. The standard constants long long int width and long long max int can be used to determine actual precision. ^h COBOL allows the specification of a required precision and will automatically select an available type capable of representing the specified precision. "PIC S9999", for example, would require a signed variable of four decimal digits precision. If specified as a binary field, this would select a 16-bit signed type on most platforms. ^i Smalltalk automatically chooses an appropriate representation for integral numbers. Typically, two representations are present, one for integers fitting the native word size minus any tag bit (SmallInteger) and one supporting arbitrary sized integers (LargeInteger). Arithmetic operations support polymorphic arguments and return the result in the most appropriate compact representation. Type identifiers: ^j Ada range types are checked for boundary violations at run-time (as well as at compile-time for static expressions). Run-time boundary violations raise a "constraint error" exception. Ranges are not restricted to powers of two. Commonly predefined Integer subtypes are: Positive (range 1 .. Integer'Last) and Natural (range 0 .. Integer'Last). 
Short_Short_Integer (8 bits), Short_Integer (16 bits) and Long_Integer (64 bits) are also commonly predefined, but not required by the Ada standard. Runtime checks can be disabled if performance is more important than integrity checks. Type identifiers: ^k Ada modulo types implement modulo arithmetic in all operations, i.e. no range violations are possible. Modulos are not restricted to powers of two. ^l Commonly used for characters like Java's char. ^m int in PHP has the same width as long type in C has on that system.[c] ^n Erlang is dynamically typed. The type identifiers are usually used to specify types of record fields and the argument and return types of functions. ^o When it exceeds one word. Type identifiers: Floating point ^a The standard constants real shorts and real lengths can be used to determine how many shorts and longs can be usefully prefixed to short real and long real. The actual sizes of short real, real, and long real are available as the constants short max real, max real and long max real etc. With the constants short small real, small real and long small real available for each type's machine epsilon. Type identifiers: ^b declarations of single precision often are not honored ^c The value of n is provided by the SELECTED_REAL_KIND intrinsic function. ^d ALGOL 68G's runtime option --precision "number" can set precision for long long reals to the required "number" significant digits. The standard constants long long real width and long long max real can be used to determine actual precision. ^e These IEEE floating-point types will be introduced in the next COBOL standard. ^f Same size as double on many implementations. ^g Swift supports 80-bit extended precision floating point type, equivalent to long double in C languages. Complex numbers ^a The value of n is provided by the SELECTED_REAL_KIND intrinsic function. ^b Generic type which can be instantiated with any base floating point type. 
Other variable types ^a specifically, strings of arbitrary length and automatically managed. ^b This language represents a boolean as an integer where false is represented as a value of zero and true by a non-zero value. ^c All values evaluate to either true or false. Everything in TrueClass evaluates to true and everything in FalseClass evaluates to false. ^d This language does not have a separate character type. Characters are represented as strings of length 1. ^e Enumerations in this language are algebraic types with only nullary constructors ^f The value of n is provided by the SELECTED_INT_KIND intrinsic function. Derived types: Array ^a In most expressions (except the sizeof and & operators), values of array types in C are automatically converted to a pointer of its first argument. See C syntax#Arrays for further details of syntax and pointer operations. ^b The C-like type x[] works in Java, however type[] x is the preferred form of array declaration. ^c Subranges are used to define the bounds of the array. ^d JavaScript's array are a special kind of object. ^e The DEPENDING ON clause in COBOL does not create a true variable length array and will always allocate the maximum size of the array. Other types ^a Only classes are supported. ^b structs in C++ are actually classes, but have default public visibility and are also POD objects. C++11 extended this further, to make classes act identically to POD objects in many more cases. ^c pair only ^d Although Perl doesn't have records, because Perl's type system allows different data types to be in an array, "hashes" (associative arrays) that don't have a variable index would effectively be the same as records. ^e Enumerations in this language are algebraic types with only nullary constructors Variable and constant declarations: ^a Pascal has declaration blocks. See functions. ^b Types are just regular objects, so you can just assign them. ^c In Perl, the "my" keyword scopes the variable into the block. 
^d Technically, this does not declare name to be a mutable variable—in ML, all names can only be bound once; rather, it declares name to point to a "reference" data structure, which is a simple mutable cell. The data structure can then be read and written to using the ! and := operators, respectively. Variable and constant declarations: ^e If no initial value is given, an invalid value is automatically assigned (which will trigger a run-time exception if it is used before a valid value has been assigned). While this behaviour can be suppressed, it is recommended in the interest of predictability. If no invalid value can be found for a type (for example in the case of an unconstrained integer type), a valid, yet predictable value is chosen instead. Variable and constant declarations: ^f In Rust, if no initial value is given to a let or let mut variable and it is never assigned to later, there is an "unused variable" warning. If no value is provided for a const or static or static mut variable, there is an error. There is a "non-upper-case globals" warning for non-uppercase const variables. After it is defined, a static mut variable can only be assigned to in an unsafe block or function. Control flow: Conditional statements ^a A single instruction can be written on the same line following the colon. Multiple instructions are grouped together in a block which starts on a newline (the indentation is required). The conditional expression syntax does not follow this rule. ^b This is pattern matching and is similar to select case but not the same. It is usually used to deconstruct algebraic data types. ^c In languages of the Pascal family, the semicolon is not part of the statement. It is a separator between statements, not a terminator. ^d END-IF may be used instead of the period at the end.
^e In Rust, the comma (,) at the end of a match arm can be omitted after the last match arm, or after any match arm in which the expression is a block (ends in possibly empty matching brackets {}). Loop statements ^a "step n" is used to change the loop interval. If "step" is omitted, then the loop interval is 1. ^b This implements the universal quantifier ("for all" or " ∀ ") as well as the existential quantifier ("there exists" or " ∃ "). ^c THRU may be used instead of THROUGH. ^d «IS» GREATER «THAN» may be used instead of >. ^e Type of set expression must implement trait std::iter::IntoIterator. Exceptions ^a Common Lisp allows with-simple-restart, restart-case and restart-bind to define restarts for use with invoke-restart. Unhandled conditions may cause the implementation to show a restarts menu to the user before unwinding the stack. ^b Uncaught exceptions are propagated to the innermost dynamically enclosing execution. Exceptions are not propagated across tasks (unless these tasks are currently synchronised in a rendezvous). Other control flow statements ^a Pascal has declaration blocks. See functions. ^b label must be a number between 1 and 99999. Functions: See reflection for calling and declaring functions by strings. ^a Pascal requires "forward;" for forward declarations. ^b Eiffel allows the specification of an application's root class and feature. ^c In Fortran, function/subroutine parameters are called arguments (since PARAMETER is a language keyword); the CALL keyword is required for subroutines. ^d Instead of using "foo", a string variable may be used instead containing the same value. Type conversions: Where string is a signed decimal number: ^a JavaScript only uses floating point numbers so there are some technicalities. ^b Perl doesn't have separate types. Strings and numbers are interchangeable. ^c NUMVAL-C or NUMVAL-F may be used instead of NUMVAL. 
^d str::parse is available to convert any type that has an implementation of the std::str::FromStr trait. Both str::parse and FromStr::from_str return a Result that contains the specified type if there is no error. The turbofish (::<_>) on str::parse can be omitted if the type can be inferred from context. Standard stream I/O: ^a ALGOL 68 additionally has the "unformatted" transput routines: read, write, get, and put. ^b gets(x) and fgets(x, length, stdin) read unformatted text from stdin. Use of gets is not recommended. ^c puts(x) and fputs(x, stdout) write unformatted text to stdout. ^d fputs(x, stderr) writes unformatted text to stderr. ^e INPUT_UNIT, OUTPUT_UNIT, ERROR_UNIT are defined in the ISO_FORTRAN_ENV module. Reading command-line arguments: ^a In Rust, std::env::args and std::env::args_os return iterators, std::env::Args and std::env::ArgsOs respectively. Args converts each argument to a String and it panics if it reaches an argument that cannot be converted to UTF-8. ArgsOs returns a non-lossy representation of the raw strings from the operating system (std::ffi::OsString), which can be invalid UTF-8. ^b In Visual Basic, command-line arguments are not separated. Separating them requires a split function Split(string). ^c The COBOL standard includes no means to access command-line arguments, but common compiler extensions to access them include defining parameters for the main program or using ACCEPT statements.
**Urotensin II–related peptide** Urotensin II–related peptide: Urotensin II-related peptide (URP) is a cyclic neuropeptide that is found in all vertebrates that have been genome-sequenced so far. It has a long-lasting hypotensive effect and may also regulate reproduction. It is part of the urotensin II system and is one of the two endogenous ligands of the urotensin II receptor in rats, mice, and possibly humans. Discovery: URP was discovered in rats when researchers were trying to locate urotensin II (UII), a neuropeptide that is a potent vasoconstrictor and increases REM cycles in the brain. The researchers designed antibodies using goby UII as an antigen that would target the specific peptide sequence CFWKYC. When the peptide was observed using a mass spectrometer, the scientists discovered that this peptide was smaller than UII but had similar characteristics, which is why it was called urotensin II-related peptide. Structure: The URP gene is located on chromosome 3q28 in humans. The mature URP peptide is only 8 amino acids long, making it smaller than UII. URP is also the same across all vertebrates because it has the same cleaving site, unlike UII, whose cleaving sites vary among species, making its sequence different for all species. URP has the same cysteine-bridged hexapeptide ring with the sequence CFWKYC as UII. This is known as the core and is the major site of action on the peptide. Destruction of the core leads to immediate loss of biological activity. On the other hand, the amino terminus of URP doesn’t seem to contain any relevant information because it can be modified without any loss in pharmacological activity. Unlike UII, URP doesn’t have an acidic amino acid (either glutamic acid or aspartic acid) preceding its core. It is still a potent agonist for the UII receptor, which suggests that this acidic amino acid is not required for activation of the receptor.
Structure: The peptide sequence for URP is: Alanine-Cysteine-Phenylalanine-Tryptophan-Lysine-Tyrosine-Cysteine-Valine Receptor: URP is an agonist for the UII receptor, which is a G protein-coupled receptor with the alpha subunit Gαq11. This activates PLC, which then activates PKC and increases the intracellular calcium concentration. It is found in many peripheral tissues, blood vessels, and also the brainstem cholinergic neurons of the laterodorsal tegmental (LDT) and the pedunculopontine tegmental (PPT) nuclei. Tissue Localization: Prepro-URP, the precursor to the mature URP peptide, is found in various tissues including specific parts of the brain, such as the frontal lobe and hypothalamus, and other peripheral tissues such as the heart, kidneys, lungs, placenta, ovaries, and testes. In humans the amounts of UII and URP gene expression are comparable except in the spinal cord, where UII gene expression is much higher. In rats the UII gene expression is higher than the URP gene expression throughout the entire body. However, when the brains of the rats were tested, only the URP peptide was found, making it the primary endogenous ligand in the brain. Unlike in humans and rats, URP gene expression is found in mouse spinal cords. Function: Cardiovascular When URP is injected into rats a long hypotensive response is observed. UII is known as a vasoconstrictor, meaning that even though both are agonists for the same receptor they can produce opposite effects. CNS Axons that react to URP are primarily found in the organum vasculosum laminae terminalis (OVLT) and in the median eminence (ME). These axons are located near the hypothalamus and almost always contain gonadotropin-releasing hormone (GnRH), which was found through in situ hybridization, a technique that provides information on the anatomical location of URP mRNA. This means that URP might have an effect on reproduction that has not yet been characterized.
Binding between UII and URP: Since they are both ligands for the same receptor, an experiment was done to determine which ligand had a higher affinity. When the binding of the two were compared and tested, URP actually had higher affinity.
**Arithmaurel** Arithmaurel: The Arithmaurel was a mechanical calculator with a very intuitive user interface, especially for multiplying and dividing numbers, because the result was displayed as soon as the operands were entered. It was first patented in France by Timoleon Maurel in 1842. It received a gold medal at the French national show in Paris in 1849. Its complexity and the fragility of its design prevented it from being manufactured in large numbers. Its name came from the concatenation of Arithmometer, the machine that inspired its design, and of Maurel, the name of its inventor. The heart of the machine uses one Leibniz stepped cylinder driven by a set of differential gears. History: Timoleon Maurel patented an early version of his machine in 1842; he then improved its design with the help of Jean Jayet and patented it in 1846. This is the design that won a gold medal at the Exposition nationale de Paris in 1849. History: Winnerl, a French clockmaker, was asked to manufacture the device in 1850, but only thirty machines were built because the machine was too complex for the manufacturing capabilities of the time. During the first four years, Winnerl was not able to build any of the 8-digit machines (a minimum for any professional usage) that had been ordered, while Thomas de Colmar delivered, during the same period, two hundred 10-digit Arithmometers and fifty 16-digit ones. None of the machines that were built, and none of the machines described in the patents, could be used at full capacity because the capacity of the result display register was equal to the capacity of the operand register (for a multiplication, the capacity of the result register should be equal to the capacity of the operand register augmented by the capacity of the operator register). Description: Following is a description of one of the two machines introduced in the 1846 patent. It has a capacity of five digits for the operator and ten digits for the operand and the result registers.
All the registers are located on the front panel; the reset mechanism is on the side. 10 numbered stems, arranged horizontally at the top of the front panel, can be pulled out to different lengths to enter the operands, with the rightmost stem representing units. A 10-digit display register located in the middle is used to display the results. Description: 5 dials, each coupled with an input key, are used to enter the operators, with the rightmost dial representing units. Turning the units key one division clockwise will add the content of the operand register to the total. Turning the units key one division counterclockwise will subtract the content of the operand register from the current total. Turning the tens key one division clockwise will add 10 times the content of the operand register to the total, and so on.
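The dial mechanism described above can be sketched as a short simulation. This is an illustrative model, not a record of the actual gearing; the register widths follow the 1846 patent machine described in the text. It also shows why a result register only as wide as the operand register could not hold a full-capacity product:

```python
# Illustrative model of the Arithmaurel's dial mechanism (assumed, not
# documented gearing). Register widths follow the 1846 patent machine:
# 10-digit operand and result registers, 5-digit operator register.

OPERAND_DIGITS = 10
OPERATOR_DIGITS = 5

def multiply(operand, operator):
    """Multiply by key turns: one turn of dial d adds operand * 10**d."""
    result = 0
    for d in range((OPERATOR_DIGITS)):      # rightmost dial is units
        digit = (operator // 10**d) % 10
        for _ in range(digit):              # one key turn per division
            result += operand * 10**d
    return result

# At full capacity the true product needs operand + operator digits,
# which is why the 10-digit result register could not be used fully:
product = multiply(10**OPERAND_DIGITS - 1, 10**OPERATOR_DIGITS - 1)
print(len(str(product)))  # 15 digits needed, but the register holds only 10
```

The overflow is exactly the flaw noted above: a 10-digit operand times a 5-digit operator can require up to 15 result digits.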
**Allomone** Allomone: An allomone (from Ancient Greek ἄλλος allos "other" and pheromone) is a type of semiochemical produced and released by an individual of one species that affects the behaviour of a member of another species to the benefit of the originator but not the receiver. Production of allomones is a common form of defense against predators, particularly by plant species against insect herbivores. In addition to defense, allomones are also used by organisms to obtain their prey or to hinder any surrounding competitors. Many insects have developed ways to defend against these plant defenses (in an evolutionary arms race). One method of adapting to allomones is to develop a positive reaction to them; the allomone then becomes a kairomone. Others alter the allomones to form pheromones or other hormones, and yet others adopt them into their own defensive strategies, for example by regurgitating them when attacked by an insectivorous insect. Allomone: A third class of allelochemical (chemical used in interspecific communication), synomones, benefit both the sender and receiver. Allomone: "Allomone was proposed by Brown and Eisner (Brown, 1968) to denote those substances which convey an advantage upon the emitter. Because Brown and Eisner did not specify whether or not the receiver would benefit, the original definition of allomone includes both substances that benefit the receiver and the emitter, and substances that only benefit the emitter. An example of the first relationship would be a mutualistic relationship, and the latter would be a repellent secretion." Examples: Antibiotics: disrupt growth and development and reduce the longevity of adults, e.g. toxins or digestibility-reducing factors. Antixenotics: disrupt normal host-selection behaviour, e.g. repellents, suppressants, locomotory excitants.
Examples: Plants producing allomones include Desmodium (tick-trefoils). Insects producing allomones: the larvae of the berothid lacewing Lomamyia latipennis feed on termites, which they subdue with an aggressive allomone. The first instar approaches a termite and waves the tip of its abdomen near the termite's head. The termite becomes immobile after 1 to 3 minutes, and completely paralysed very soon after this, although it may live for up to 3 hours. The berothid then feeds on the paralysed prey. The third instar feeds in a similar manner and may kill up to six termites at a time. Contact between the termite and the berothid is not necessary for subduing, and other insects present are not affected by the allomone. Examples: Bark beetles communicate via pheromones to announce a new food resource (i.e. dead trees, roots, living trees, etc.), ultimately resulting in the accumulation of a large concentration of bark beetles. Certain species of bark beetles have the capability to emit pheromones that can negatively affect the behavioral response of another competing species of bark beetles when both species are attempting to inhabit the loblolly pine tree. A certain molecular compound within the released pheromone of one species can interfere with a competing species' ability to respond to its own species' pheromone in the environment. This interaction aids the emitter by decreasing its local bark beetle competition. A competitive interaction occurring between two species of bark beetles is seen when the pheromones of G. sulcatus interfere with the behavioral feedback of G. retusus. The impact of interactions between competing species fighting for food and space within an environment is seen when observing the California I. pini. The I. pini have two receptors: one receives the pheromone of their own species and the other receives the pheromone of their competing species.
The presence of these two receptors ensures that pheromone signals from their own species, I. pini, are not disrupted by those of the competing species, I. paraconfusus. Examples: Arthropods that travel alone, like beetles and cockroaches, have evolved to emit, when in the presence of ants, a pheromone identical to the ants' alarm pheromone. The alarm pheromone of the worker ants causes the ants to stop what they are doing and return to their nest until the alarm pheromone ceases in their environment. This release of an ant alarm pheromone by an arthropod sends the ants into alarm and allows the arthropod to escape its predators before the ants are able to recruit more workers.
**Rice paper** Rice paper: "Rice paper" covers several varieties: paper made from tree bark for drawing and writing, and an edible sheet made from rice flour and tapioca flour mixed with salt and water, formed into a thin rice cake and dried until firm and paper-like. The edible variety is used to wrap many ingredients when eating. Vietnam is the only country that creates edible rice paper as part of the process of making rice noodles and pho noodles. Rice paper: Rice paper is a product constructed of paper-like materials made from different plants. These include: Thin peeled dried pith of Tetrapanax papyrifer: a sheet-like "paper" material used extensively in late 19th-century Guangdong, China as a common support medium for gouache paintings sold to Western clients of the era. The term was first defined in the Chinese–English Dictionary of Robert Morrison, who referred to the use of the Chinese medicinal plant as material for painting, as well as for making artificial flowers and shoe soles. Rice paper: Xuan paper made from paper mulberry: a traditional paper that originated in ancient China and has been used for centuries in China, Japan, Korea, and Vietnam for writing, artwork, and architecture. Various pulp-based papers: may be made from rice straw or other plants, such as hemp and bamboo. Dried starch sheets of various thicknesses or textures: these edible paper sheets have some properties of pulp paper and can be made from rice starch. They are known as bánh tráng, used in Vietnamese cuisine. Rice paper plant: In Europe, around the 1900s, a paperlike substance was originally known as rice paper, due to the mistaken notion that it was made from rice. In fact, it consists of the pith of a small tree, Tetrapanax papyrifer, the rice paper plant (蓪草). The plant grows in the swampy forests of Taiwan, and is also cultivated as an ornamental plant for its large, exotic leaves. In order to produce the paper, the boughs are boiled and freed from bark.
The cylindrical core of pith is rolled on a hard flat surface against a knife, by which it is cut into thin sheets of a fine ivory-like texture. Rice paper plant: Dyed in various colours, this rice paper is extensively used for the preparation of artificial flowers, while the white sheets are employed for watercolor drawings. Due to its texture, this paper is not suited for writing. Mulberry paper: This "rice paper", smooth, thin, crackly, and strong, takes its name from its use as a wrapper for rice, and is made from bark fibres of the paper mulberry tree. It is used for origami, calligraphy, paper screens and clothing. It is stronger than commercially made wood-pulp paper. Less commonly, the paper is made from rice straw. Depending on the type of mulberry used, it is named kozo (Broussonetia papyrifera, the paper mulberry), gampi (Wikstroemia diplomorpha), or mitsumata (Edgeworthia chrysantha). The fiber comes from the bark of the paper mulberry, not the inner wood or pith, and traditionally the paper is made by hand. Mulberry paper: The branches of the paper mulberry shrubs are harvested in the autumn, so the fibre can be processed and the paper formed during the cold winter months, because the fibre spoils easily in the heat. The branches are cut into sections two to three feet long and steamed in a large kettle, which makes the bark shrink back from the inner wood, allowing it to be pulled off like a banana peel. The bark can then be dried and stored, or used immediately. There are three layers to the bark at this stage: black bark, the outermost layer; green bark, the middle layer; and white bark, the innermost layer. All three can be made into paper, but the finest paper is made of white bark only. Mulberry paper: If the bark strips have been dried, they are soaked in water overnight before being processed further. To clean the black and green bark from the white bark, the bark strip is spread on a board and scraped with a flat knife.
Any knots or tough spots in the fibre are cut out and discarded at this stage. Mulberry paper: The scraped bark strips are then cooked for two or three hours in a mixture of water and soda ash. The fibre is cooked enough when it can easily be pulled apart lengthwise. The strips are then rinsed several times in clean water to wash off the soda ash. Rinsing also makes the fibre brighter and whiter—fine kozo paper is not bleached, but is naturally pure white. Mulberry paper: Each bark strip is then inspected by hand, against a white background or lit from behind by a light box. Any tiny pieces of black bark and other debris are removed with tweezers, and any knots or tough patches of fibre missed during scraping are cut out of the strips. The ultimate goal is to have completely pure white bark. Mulberry paper: The scraped, cooked, and cleaned strips are then laid out on a table and beaten by hand. The beating tool is a wooden bat that looks like a thicker version of a cricket bat. The fibres are beaten for about half an hour, or until all the fibres have been separated and no longer resemble strips of bark. Mulberry paper: The prepared fibre can now be made into sheets of paper. A viscous substance called formation aid is added to the vat with the fibre and water. Formation aid is polyethylene oxide, and it helps slow the flow of water, which gives the paper-maker more time to form sheets. Sheets are formed with multiple thin layers of fibre, one on top of another. Vietnamese rice paper: "Rice paper" is created as part of the process of making rice noodles and pho, rice specialties of the Vietnamese people. Rice paper originates from the southern provinces. The Northern and Central provinces also created many other types of rice paper with different names. Rice paper comes in many types: traditional, coconut milk, coconut, mango, pandan leaf, and more.
Vietnamese rice paper: Edible rice paper is used for making fresh spring rolls (salad rolls) or fried spring rolls in Vietnamese cuisine, where the rice paper is called bánh tráng or bánh đa nem. Ingredients of the food rice paper include white rice flour, tapioca flour, salt, and water. The tapioca powder makes the rice paper glutinous and smooth. It is usually sold dried in thin, crisp, translucent round sheets that are wrapped in cellophane. The sheets are individually dipped for a few seconds in warm or cool water to soften, then wrapped around savoury or sweet ingredients. Vietnamese rice paper: Edible paper is used in the home baking of foods such as macaroons and is often sold separately as colored sheets that are either plain or printed with images, such as bank notes. In the media: In the pilot episode of the television series Kung Fu, Kwai Chang Caine undergoes training to become a Shaolin priest. One of the challenges he faces is to walk on a long sheet of rice paper without tearing it or leaving any marks of his passage. His successful completion of the test is incorporated into the series' opening title sequence, with the narration, "When you can walk its length and leave no trace, you will have learned."
**Lectin pathway** Lectin pathway: The lectin pathway or lectin complement pathway is a type of cascade reaction in the complement system, similar in structure to the classical complement pathway, in that, after activation, it proceeds through the action of C4 and C2 to produce activated complement proteins further down the cascade. In contrast to the classical complement pathway, the lectin pathway does not recognize an antibody bound to its target. The lectin pathway starts with mannose-binding lectin (MBL) or ficolin binding to certain sugars. Lectin pathway: In this pathway, mannose-binding lectin binds to mannose, glucose, or other sugars with 3- and 4-OH groups placed in the equatorial plane, in terminal positions on carbohydrate or glycoprotein components of microorganisms including bacteria such as Salmonella, Listeria, and Neisseria strains. Fungal pathogens such as Candida albicans and Cryptococcus neoformans as well as some viruses such as HIV-1 and Respiratory syncytial virus (RSV) are bound by MBL. Lectin pathway: Mannan-binding lectin, also called mannose-binding protein, is a protein belonging to the collectin family that is produced by the liver and can initiate the complement cascade by binding to pathogen surfaces. MBL: MBL forms oligomers of subunits, which are trimers (such that 6- and 18-subunit oligomers correspond to a dimer and a hexamer, respectively). Multimers of MBL form a complex with MASP1 (Mannose-binding lectin-Associated Serine Protease), MASP2 and MASP3, which are protease zymogens. MASP-1 and MASP-2 are very similar to the C1r and C1s molecules of the classical complement pathway, respectively. When the carbohydrate-recognising heads of MBL bind to specifically arranged mannose residues on the surface of a pathogen, MASP-1 and MASP-2 are activated to cleave complement components C4 and C2 into C4a, C4b, C2a, and C2b. In addition, two smaller MBL-associated proteins (MAps) are found in complex with MBL.
MBL-associated protein of 19 kDa (MAp19) and MBL-associated protein of 44 kDa (MAp44). MASP-1, MASP-3 and MAp44 are alternative splice products of the MASP1 gene, while MASP-2 and MAp19 are alternative splice products of the MASP2 gene. MAp44 has been suggested to act as a competitive inhibitor of lectin pathway activation, by displacing MASP-2 from MBL, hence preventing cleavage of C4 and C2. C3 convertase: C4b tends to bind to bacterial cell membranes. If it is not then inactivated, it will combine with C2a to form the classical C3 convertase (C4bC2a) on the surface of the pathogen, as opposed to the alternative C3 convertase (C3bBb) involved in the alternative pathway. C4a and C2b act as potent cytokines, with C4a causing degranulation of mast cells and basophils and C2b acting to increase vascular permeability. Historically, the larger fragment of C2 was called C2a but some publications now refer to it as C2b in keeping with the convention of assigning 'b' to the larger fragment. Clinical significance: Mannose-binding lectin deficiency: individuals with this deficiency are prone to recurrent infections, including infections of the upper respiratory tract and other body systems. People with this condition may also contract more serious infections such as pneumonia and meningitis. Depending on the type of infection, the symptoms caused by the infections vary in frequency and severity. The clinical significance of MBL deficiency is debated, however. Infants and young children with mannose-binding lectin deficiency seem to be more susceptible to infections, but adults can also develop recurrent infections. In addition, affected individuals undergoing chemotherapy or taking drugs that suppress the immune system are especially prone to infections.
**Reliability-centered maintenance** Reliability-centered maintenance: Reliability-centered maintenance (RCM) is a concept of maintenance planning to ensure that systems continue to do what their users require in their present operating context. Successful implementation of RCM will lead to increases in cost-effectiveness, reliability, machine uptime, and a greater understanding of the level of risk that the organization is managing. Context: It is generally used to achieve improvements in fields such as the establishment of safe minimum levels of maintenance, changes to operating procedures and strategies, and the establishment of capital maintenance regimes and plans. Context: John Moubray characterized RCM as a process to establish the safe minimum levels of maintenance. This description echoed statements in the Nowlan and Heap report from United Airlines. Context: It is defined by the technical standard SAE JA1011, Evaluation Criteria for RCM Processes, which sets out the minimum criteria that any process should meet before it can be called RCM. This starts with the seven questions below, worked through in the order that they are listed: 1. What is the item supposed to do, and what are its associated performance standards? 2. In what ways can it fail to provide the required functions? 3. What are the events that cause each failure? 4. What happens when each failure occurs? 5. In what way does each failure matter? 6. What systematic task can be performed proactively to prevent, or to diminish to a satisfactory degree, the consequences of the failure? 7. What must be done if a suitable preventive task cannot be found? Reliability-centered maintenance is an engineering framework that enables the definition of a complete maintenance regimen.
It regards maintenance as the means to maintain the functions a user may require of machinery in a defined operating context. As a discipline it enables machinery stakeholders to monitor, assess, predict and generally understand the working of their physical assets. This is embodied in the initial part of the RCM process which is to identify the operating context of the machinery, and write a Failure Mode Effects and Criticality Analysis (FMECA). The second part of the analysis is to apply the "RCM logic", which helps determine the appropriate maintenance tasks for the identified failure modes in the FMECA. Once the logic is complete for all elements in the FMECA, the resulting list of maintenance is "packaged", so that the periodicities of the tasks are rationalised to be called up in work packages; it is important not to destroy the applicability of maintenance in this phase. Lastly, RCM is kept live throughout the "in-service" life of machinery, where the effectiveness of the maintenance is kept under constant review and adjusted in light of the experience gained. Context: RCM can be used to create a cost-effective maintenance strategy to address dominant causes of equipment failure. It is a systematic approach to defining a routine maintenance program composed of cost-effective tasks that preserve important functions. Context: The important functions (of a piece of equipment) to preserve with routine maintenance are identified, their dominant failure modes and causes determined and the consequences of failure ascertained. Levels of criticality are assigned to the consequences of failure. Some functions are not critical and are left to "run to failure" while other functions must be preserved at all cost. Maintenance tasks are selected that address the dominant failure causes. This process directly addresses maintenance preventable failures. Failures caused by unlikely events, non-predictable acts of nature, etc. 
will usually receive no action provided their risk (combination of severity and frequency) is trivial (or at least tolerable). When the risk of such failures is very high, RCM encourages (and sometimes mandates) the user to consider changing something which will reduce the risk to a tolerable level. Context: The result is a maintenance program that focuses scarce economic resources on those items that would cause the most disruption if they were to fail. RCM emphasizes the use of predictive maintenance (PdM) techniques in addition to traditional preventive measures. Background: The term "reliability-centered maintenance" was coined by Tom Matteson, Stanley Nowlan, and Howard Heap of United Airlines (UAL) to describe a process used to determine the optimum maintenance requirements for aircraft (having left United Airlines to pursue a consulting career a few months before the publication of the final Nowlan-Heap report, Matteson received no authorial credit for the work). The US Department of Defense (DOD) sponsored the authoring of both a textbook (by UAL) and an evaluation report (by Rand Corporation) on Reliability-Centered Maintenance, both published in 1978. These brought RCM concepts to the attention of a wider audience. Background: The first generation of jet aircraft had a crash rate that would be considered highly alarming today, and both the Federal Aviation Administration (FAA) and the airlines' senior management felt strong pressure to improve matters. In the early 1960s, with FAA approval, the airlines began to conduct a series of intensive engineering studies on in-service aircraft. The studies proved that the fundamental assumption of design engineers and maintenance planners—that every aircraft and every major component thereof (such as its engines) had a specific "lifetime" of reliable service, after which it had to be replaced (or overhauled) in order to prevent failures—was wrong in nearly every specific example in a complex modern jet airliner.
Background: This was one of many astounding discoveries that have revolutionized the managerial discipline of physical asset management and have been at the base of many developments since this seminal work was published. Among the paradigm shifts inspired by RCM were: an understanding that the vast majority of failures are not necessarily linked to the age of the asset; a change from efforts to predict life expectancies to trying to manage the process of failure; an understanding of the difference between the requirements of assets from a user perspective and the design reliability of the asset; an understanding of the importance of managing assets on condition (often referred to as condition monitoring, condition-based maintenance and predictive maintenance); an understanding of four basic routine maintenance tasks; and the linking of levels of tolerable risk to maintenance strategy development. Later, RCM was defined in the standard SAE JA1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. This sets out the minimum criteria for what is, and for what is not, able to be defined as RCM. The standard is a watershed event in the ongoing evolution of the discipline of physical asset management. Prior to the development of the standard, many processes were labeled as RCM even though they were not true to the intentions and the principles in the original report that defined the term publicly. Basic features: The RCM process described in the DOD/UAL report recognized three principal risks from equipment failures: threats to safety, to operations, and to the maintenance budget. Modern RCM gives threats to the environment a separate classification, though most forms manage them in the same way as threats to safety.
Basic features: RCM offers five principal options among the risk management strategies: Predictive maintenance tasks, Preventive Restoration or Preventive Replacement maintenance tasks, Detective maintenance tasks, Run-to-Failure, and One-time changes to the "system" (changes to hardware design, to operations, or to other things). RCM also offers specific criteria to use when selecting a risk management strategy for a system that presents a specific risk when it fails. Some are technical in nature (can the proposed task detect the condition it needs to detect? does the equipment actually wear out with use?). Others are goal-oriented (is it reasonably likely that the proposed task-and-task-frequency will reduce the risk to a tolerable level?). The criteria are often presented in the form of a decision-logic diagram, though this is not intrinsic to the nature of the process. In use: After being created by the commercial aviation industry, RCM was adopted by the U.S. military (beginning in the mid-1970s) and by the U.S. commercial nuclear power industry (in the 1980s). In use: Starting in the late 1980s, an independent initiative led by John Moubray corrected some early flaws in the process, and adapted it for use in the wider industry. Moubray was also responsible for popularizing the method and for introducing it to much of the industrial community outside of the aviation industry. In the two decades since this approach (called by the author RCM2) was first released, industry has undergone massive change with advances in lean thinking and efficiency methods. Around this time, many methods sprang up that reduced the rigour of the RCM approach. The result was the propagation of methods that called themselves RCM, yet had little in common with the original concepts. In some cases these were misleading and inefficient, while in other cases they were even dangerous.
Since each initiative is sponsored by one or more consulting firms eager to help clients use it, there is still considerable disagreement about their relative dangers (or merits).The RCM standard (SAE JA1011, available from http://www.sae.org) provides the minimum criteria that processes must comply with if they are to be called RCM. In use: Although a voluntary standard, it provides a reference for companies looking to implement RCM to ensure they are getting a process, software package or service that is in line with the original report. The Walt Disney Company introduced RCM to its parks in 1997, led by Paul Pressler and consultants McKinsey & Company, laying off a large number of maintenance workers and saving large amounts of money. Some people blamed the new cost-conscious maintenance culture for some of the Incidents at Disneyland Resort that occurred in the following years.
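The decision-logic diagrams mentioned above lend themselves to a compact code sketch. The following is a simplified, hypothetical rendering of how such logic might select among the five strategy options listed earlier; the questions and their ordering are illustrative assumptions, not the actual SAE JA1011 logic:

```python
# Hedged sketch of RCM-style decision logic (assumed ordering and
# question set, not the SAE JA1011 standard itself). Each failure mode
# is described by yes/no answers to the technical criteria in the text.

def select_strategy(failure):
    """Pick one of the five principal risk management options."""
    if failure["condition_detectable"]:
        # A predictive task can detect the condition it needs to detect.
        return "predictive maintenance"
    if failure["wears_out_with_use"]:
        # The equipment actually wears out, so scheduled work applies.
        return "preventive restoration or replacement"
    if failure["hidden_failure"]:
        # Failure is not evident in operation; test for it periodically.
        return "detective maintenance"
    if failure["risk_tolerable"]:
        return "run to failure"
    # No suitable task reduces the risk: change the system itself.
    return "one-time change (design, operations, or other)"

print(select_strategy({
    "condition_detectable": False,
    "wears_out_with_use": True,
    "hidden_failure": False,
    "risk_tolerable": False,
}))  # -> preventive restoration or replacement
```

The goal-oriented criteria in the text (is the risk reduced to a tolerable level?) would, in a fuller model, gate each branch rather than appear only as the run-to-failure fallback.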
**Family tree** Family tree: A family tree, also called a genealogy or a pedigree chart, is a chart representing family relationships in a conventional tree structure. More detailed family trees, used in medicine and social work, are known as genograms. Representations of family history: Genealogical data can be represented in several formats, for example, as a pedigree or ancestry chart. Family trees are often presented with the oldest generations at the top of the tree and the younger generations at the bottom. An ancestry chart, which is a tree showing the ancestors of an individual and not all members of a family, will more closely resemble a tree in shape, being wider at the top than at the bottom. In some ancestry charts, an individual appears on the left and his or her ancestors appear to the right. Conversely, a descendant chart, which depicts all the descendants of an individual, will be narrowest at the top. Beyond these formats, some family trees might include all members of a particular surname (e.g., male-line descendants). Yet another approach is to include all holders of a certain office, such as the Kings of Germany, which represents the reliance on marriage to link dynasties together. Representations of family history: The passage of time can also be included to illustrate ancestry and descent. A time scale is often used, expanding radially across the center, divided into decades. Children of the parent form branches around the center and their names are plotted in their birth year on the time scale. Spouses' names join children's names and nuclear families of parents and children branch off to grandchildren, and so on. Great-grandparents are often in the center to portray four or five generations, which reflect the natural growth pattern of a tree as seen from the top. In a descendant tree, living relatives are common on the outer branches and contemporary cousins appear adjacent to each other. 
Privacy should be considered when preparing a living family tree. The image of the tree probably originated with that of the Tree of Jesse in medieval art, used to illustrate the Genealogy of Christ in terms of a prophecy of Isaiah (Isaiah 11:1). Possibly the first non-biblical use, and the first to show full family relationships rather than a purely patrilineal scheme, was that involving family trees of the classical gods in Boccaccio's Genealogia Deorum Gentilium ("On the Genealogy of the Gods of the Gentiles"), whose first version dates to 1360. Common formats: In addition to familiar representations of family history and genealogy as a tree structure, there are other notable systems used to illustrate and document ancestry and descent. Common formats: Ahnentafel: An Ahnentafel (German for "ancestor table") is a genealogical numbering system for listing a person's direct ancestors in a fixed sequence of ascent: 1. subject (or proband); 2. father; 3. mother; 4. paternal grandfather; 5. paternal grandmother; 6. maternal grandfather; 7. maternal grandmother; and so on, back through the generations. Apart from the subject or proband, who can be male or female, all even-numbered persons are male, and all odd-numbered persons are female. In this scheme, the number of any person's father is double the person's number, and a person's mother is double the person's number plus one. This system can also be displayed as a tree. Fan chart: A fan chart features a half circle chart with concentric rings: the subject is the inner circle, the second circle is divided in two (each side is one parent), the third circle is divided in four, and so forth. Fan charts depict paternal and maternal ancestors. Graph theory: While family trees are depicted as trees, family relations do not in general form a tree in the strict sense used in graph theory, since distant relatives can mate. Therefore, a person can have a common ancestor on both their mother's and father's side.
However, because a parent must be born before their child, an individual cannot be their own ancestor, and thus there are no loops. In this regard, ancestry forms a directed acyclic graph. Nevertheless, graphs depicting matrilineal descent (mother-daughter relationships) and patrilineal descent (father-son relationships) do form trees. Assuming no common ancestor, an ancestry chart is a perfect binary tree, as each person has exactly one mother and one father; these thus have a regular structure. A descendant chart, on the other hand, does not, in general, have a regular structure, as a person can have any number of children or none at all. Notable examples: Family trees have been used to document family histories across time and cultures throughout the world. Africa In Africa, the ruling dynasty of Ethiopia claimed descent from King Solomon via the Queen of Sheba. Through this claim, the family traced their descent back to the House of David. The genealogy of Ancient Egyptian ruling dynasties was recorded from the beginnings of the Pharaonic era circa 3000 BC to the end of the Ptolemaic Kingdom; although this is not a record of one continuously linked family lineage, and surviving records are incomplete. Notable examples: Elsewhere in Africa, oral traditions of genealogical recording predominate. Members of the Keita dynasty of Mali, for example, have had their pedigrees sung by griots during annual ceremonies since the 14th century. Meanwhile, in Nigeria, many ruling clans—most notably those descended from Oduduwa—claim descent from the legendary King Kisra. Here too, pedigrees are recited by griots attached to the royal courts. Notable examples: The Americas In some pre-contact Native American civilizations, genealogical records of ruling and priestly families were kept, some of which extended over several centuries or longer. East Asia There are extensive genealogies for the ruling dynasties of China, but these do not form a single, unified family tree.
Additionally, it is unclear at which point(s) the most ancient historical figures named become mythological. In Japan, the ancestry of the Imperial Family is traced back to the mythological origins of Japan. The connection to persons from the established historical record only begins in the mid-first millennium AD. Notable examples: The longest family tree in the world is that of the Chinese philosopher and educator Confucius (551–479 BC), who is descended from King Tang (1675–1646 BC). The tree spans more than 80 generations from him and includes more than 2 million members. An international effort involving more than 450 branches around the world was started in 1998 to retrace and revise this family tree. A new edition of the Confucius genealogy was printed in September 2009 by the Confucius Genealogy Compilation Committee, to coincide with the 2560th anniversary of the birth of the Chinese thinker. This latest edition was expected to include some 1.3 million living members who are scattered around the world today. Notable examples: Europe and West Asia Before the Dark Ages, in the Greco-Roman world, some reliable pedigrees dated back perhaps at least as far as the first half of the first millennium BC; with claimed or mythological origins reaching back further. Roman clan and family lineages played an important part in the structure of their society and were the basis of their intricate system of personal names. However, there was a break in the continuity of record-keeping at the end of Classical Antiquity. Records of the lines of succession of the Popes and the Eastern Roman Emperors through this transitional period have survived, but these are not continuous genealogical histories of single families. Refer to descent from antiquity. 
Notable examples: Many noble and aristocratic families of European and West Asian origin can reliably trace their ancestry back as far as the mid to late first millennium AD; some claiming undocumented descent from Classical Antiquity or mythological ancestors. In Europe, for example, the pedigree of Niall Noígíallach would be a contender for the longest, through Conn of the Hundred Battles (fl. 123 AD); in the legendary history of Ireland, he is further descended from Breogán, and ultimately from Adam, through the sons of Noah. Notable examples: Another very old and extensive tree is that of the Lurie lineage—which includes Sigmund Freud and Martin Buber—and traces back to Lurie, a 13th-century rabbi in Brest-Litovsk, and from there to Rashi and purportedly back to the legendary King David, as documented by Neil Rosenstein in his book The Lurie Legacy. The 1999 edition of the Guinness Book of Records recorded the Lurie family in the "longest lineage" category as one of the oldest-known living families in the world today. Family trees and representations of lineages are also important in religious traditions. The biblical genealogies of Jesus also claim descent from the House of David, covering a period of approximately 1000 years. In the Torah and Old Testament, genealogies are provided for many biblical persons, including a record of the descendants of Adam. Also according to the Torah, the Kohanim are descended from Aaron. Genetic testing performed at the Technion has shown that most modern Kohanim share common Y-chromosome origins, although there is no complete family tree of the Kohanim. In the Islamic world, claimed descent from the prophet Muhammad greatly enhanced the status of political and religious leaders; new dynasties often used claims of such descent to help establish their legitimacy.
Notable examples: Elsewhere Elsewhere, in many human cultures, clan and tribal associations are based on claims of common ancestry, although detailed documentation of those origins is often very limited. Notable examples: Global Forms of family trees are also used in genetic genealogy. In 2022, scientists reported the largest detailed human genetic genealogy, which unifies human genomes from many sources to provide insights into human history, ancestry and evolution. It demonstrates a novel computational method for estimating how human DNA is related via a series of 13 million linked trees along the genome (a tree sequence), and has been described as the largest "human family tree". Other uses: The author Pete Frame is notable for having produced "family trees" of rock bands. In this instance, the entries represent membership of certain groups, and personnel changes within them, rather than family relationships. Several books have been produced with his family trees, which in turn have led to a BBC television series about them, including interviews with the bands depicted in the trees. Another common use is in the creation of episcopal trees in Christian traditions that believe in apostolic succession. In this case, the connection is not made through blood, but through the order of succession of bishops.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thiothionyl fluoride** Thiothionyl fluoride: Thiothionyl fluoride is a chemical compound of fluorine and sulfur, with the chemical formula S=SF2. It is an isomer of disulfur difluoride (difluorodisulfane), F−S−S−F. Preparation: Thiothionyl fluoride can be obtained from the reaction of disulfur dichloride with potassium fluoride at about 150 °C, or with mercury(II) fluoride at 20 °C. S2Cl2 + 2 KF → S=SF2 + 2 KCl Another possible preparation is the reaction of nitrogen trifluoride with sulfur. NF3 + 3 S → S=SF2 + NSF It also forms from disulfur difluoride in contact with alkali metal fluorides. S=SF2 can also be synthesized by the reaction of potassium fluorosulfite with disulfur dichloride: 2 KSO2F + S2Cl2 → S=SF2 + 2 KCl + 2 SO2 Properties: Thiothionyl fluoride is a colorless gas. At high temperatures and pressures, it decomposes into sulfur tetrafluoride and sulfur. 2 S=SF2 → SF4 + 3 S With hydrogen fluoride, it forms sulfur tetrafluoride and hydrogen sulfide. S=SF2 + 2 HF → SF4 + H2S It condenses with sulfur difluoride at low temperatures to yield 1,3-difluorotrisulfane-1,1-difluoride. S=SF2 + SF2 → FS−S−SF3
**Population momentum** Population momentum: Population momentum is a consequence of the demographic transition. Population momentum explains why a population will continue to grow even if the fertility rate declines. Population momentum occurs because it is not only the number of children per woman that determines population growth, but also the number of women of reproductive age. Eventually, when the fertility rate reaches the replacement rate and the population size of women in the reproductive age bracket stabilizes, the population achieves equilibrium and population momentum comes to an end. Population momentum is defined as the ratio of the size of the population at that new equilibrium level to the size of the initial population. Population momentum usually occurs in populations that are growing. Example: Assume that a population has three generations: First (oldest), Second (child-bearing), and Third (children). Further assume that this population has a fertility rate equal to four (4). That is, each generation is twice the size of the previous one, since every two parents have four children. If the population of the first generation is arbitrarily set at 100, the second is then 200, and the third is 400. The spreadsheet below shows the initial population in the first row. Example: First note that the second and third generations of the initial population are each twice the size of the previous. The total of the initial population is 700 = 100 + 200 + 400. Example: Then assume that at the end of the third generation, fertility falls to replacement (for simplicity assume that to be two). Now take the population forward in time to the next generation, line two of the spreadsheet. The first generation dies, and the new generation, the fourth, is equal to the third (because fertility is now at replacement). Repeat the process again to reach the fifth generation (line 3 in the spreadsheet).
The fifth generation is again equal to the fourth and now the population’s three generations are equal, and the population has reached equilibrium. Example: The initial population has grown from 700 to 1,200 even though fertility dropped from four to replacement (two) at the end of the third generation. Population momentum carried the population to higher levels over the next two generations. Further steps to zero population growth: Population momentum impacts the immediate birth and death rates in the population that determine the natural rate of growth. However, for a population to have an absolute zero amount of natural growth, the US National Library of Medicine National Institutes of Health suggests that three things must occur. 1. Fertility rates must level off to the replacement rate (the net reproduction rate should be 1). If the fertility rate remains higher than the replacement rate, the population would continue to grow. 2. Mortality rate must stop declining, that is, it must remain constant. 3. Lastly, the age structure must adjust to the new rates of fertility and mortality. This last step takes the longest to complete. Implications: Population momentum has implications for population policy for a number of reasons. 1. With respect to high-fertility countries (for example in the developing world), a positive population momentum, meaning that the population is increasing, states that these countries will continue to grow despite large and rapid declines in fertility. Implications: 2. With respect to lowest-low fertility countries (for example in Europe), a negative population momentum implies that these countries may experience population decline even if they try to increase their rate of fertility to the replacement rate of 2.1. For example, some Eastern European countries show a population shrinkage even if their birth rates recovered to replacement level. 
Population momentum can become negative if the fertility rate is under replacement level for a long period of time. Implications: 3. Population momentum shows that replacement level fertility is a long-term concept rather than an indication of current population growth rates. Depending on the extant age structure, a fertility rate of two children per woman may correspond to short-term growth or decline. Calculation: To calculate population momentum for population A, a theoretical population is constructed in which the birth rate for population A immediately becomes replacement level. Under such conditions, the population will eventually stabilize into a stationary population, with no year-to-year changes in age-specific rates or in total population. The population momentum is calculated by dividing this final total population number by the starting population. Momentum, Ω, can be expressed as: Ω = b e₀ Q. In this equation, b is the crude birth rate while e₀ is the life expectancy at birth. Q is the total number of births per initial birth. Calculation: Q = (1 / (r μ)) × (R₀ − 1) / R₀. This equation is used to derive Q (total births per initial birth); r is the growth rate and μ is the unchanging population mean age at childbearing. R₀ is the net reproduction rate of the non-changing population. Causes: Population momentum is typically caused by a shift in the country's demographic transition. When mortality rates drop, the young survive childhood and the aging population lives longer. Fertility rates remain high, causing the overall population size to grow. According to population momentum, even if high fertility rates were immediately replaced with replacement level fertility rates, the population would continue to grow due to the pre-childbearing population entering childbearing years.
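As a rough sketch, both the generational example and the momentum formula above can be evaluated in a few lines of Python. The generational sizes come from the article's example; the numeric inputs to the momentum formula below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Part 1: the generational example. Three generations of sizes 100, 200
# and 400 (a fertility rate of four doubles each generation); fertility
# then drops to replacement, so each new generation equals the youngest.
def advance(generations):
    """One step at replacement fertility: the oldest generation dies
    and a new generation equal in size to the youngest is born."""
    oldest, middle, youngest = generations
    return [middle, youngest, youngest]

population = [100, 200, 400]            # total 700
for _ in range(2):                      # two steps to equilibrium
    population = advance(population)
print(population, sum(population))      # [400, 400, 400] 1200

# Part 2: the momentum formulas, omega = b * e0 * Q with
# Q = (1 / (r * mu)) * (R0 - 1) / R0. All numeric inputs below are
# hypothetical: b (crude birth rate), e0 (life expectancy at birth),
# r (growth rate), mu (mean age at childbearing), R0 (net reproduction rate).
def momentum(b, e0, r, mu, R0):
    Q = (1.0 / (r * mu)) * (R0 - 1.0) / R0   # births per initial birth
    return b * e0 * Q

omega = momentum(b=0.03, e0=65.0, r=0.02, mu=27.0, R0=1.8)
print(round(omega, 2))
```

Note that when R₀ = 1 (fertility exactly at replacement), Q and hence Ω are zero, matching the idea that momentum ends once the population reaches its stationary equilibrium.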
**Carminite** Carminite: Carminite (PbFe3+2(AsO4)2(OH)2) is an anhydrous arsenate mineral containing hydroxyl. It is a rare secondary mineral that is structurally related to palermoite (Li2SrAl4(PO4)4(OH)4). Sewardite (CaFe3+2(AsO4)2(OH)2) is an analogue of carminite, with calcium in sewardite in place of the lead in carminite. Mawbyite is a dimorph (same formula, different structure) of carminite; mawbyite is monoclinic and carminite is orthorhombic. It has a molar mass of 639.87 g/mol. It was discovered in 1850 and named for the characteristic carmine colour. Structure: Carminite belongs to the orthorhombic crystal class (2/m 2/m 2/m) and has space group C ccm or C cc2. The structure consists of linked octahedra of iron surrounded by oxygen and hydroxyl which are aligned parallel to the c axis. They are connected together in the direction of the a axis by arsenate tetrahedra (arsenic surrounded by 4 oxygen). Coordination about the lead atoms is eight-fold. The edges of the unit cell have lengths a = 16.59 Å, b = 7.58 Å and c = 12.295 Å. There are 8 formula units in each unit cell (Z = 8). Appearance: Crystals have been found up to 2 cm long, though most are smaller. They are typically bladed, elongated along the c axis and flattened perpendicular to the b axis. They also occur as acicular crystals, in spherical or tufted aggregates and as fibrous or drusy masses. The crystals are a characteristic carmine red colour, hence the name, and they are also red in transmitted light. They are translucent with a vitreous lustre and a reddish yellow streak. Physical properties: Carminite is fairly soft, with a Mohs hardness of 3+1⁄2, between that of calcite and fluorite. Because of the lead content it is heavy, with specific gravity of 5.03 - 5.18, although specimens from Mapimi are less dense at 4.10. Cleavage is distinct in one direction parallel to the c axis.
The mineral is slowly soluble in hydrochloric acid (HCl) with the separation of lead(II) chloride (PbCl2) and totally soluble in nitric acid (HNO3). Carminite is not radioactive and no piezoelectric effect has been detected. Optical properties: Orthorhombic crystals (and monoclinic and triclinic crystals) have two directions in which light travels with zero birefringence; these directions are called the optic axes, and the crystal is said to be biaxial. The speed of a ray of light travelling through the crystal differs with direction. The direction of the fastest ray is called the X direction and the direction of the slowest ray is called the Z direction. X and Z are perpendicular to each other and a third direction Y is defined as perpendicular to both X and Z; light travelling along Y has an intermediate speed. Refractive index is inversely proportional to speed, so the refractive indices for the X, Y and Z directions increase from X to Z. Carminite is orthorhombic and for an orthorhombic crystal the optical directions correspond to the crystal axes a, b and c, but not necessarily in that order. For carminite the orientation is X = c, Y = a and Z = b and the refractive indices are high, with nα = 2.070, nβ = 2.070, nγ = 2.080, only a little less than diamond at 2.4. The maximum birefringence δ is the difference between the highest and lowest refractive index; for carminite δ = 0.010. Optical properties: The angle between the two optic axes is called the optic angle, 2V, and it is always acute, and bisected either by X or by Z. If Z is the bisector then the crystal is said to be positive, and if X is the bisector it is said to be negative. Carminite is biaxial (+) and 2V is moderate to large. 2V depends on the refractive indices, but refractive index varies with wavelength, and hence with colour. So 2V also depends on the colour, and is different for red and for violet light.
This effect is called dispersion of the optic axes, or just dispersion (not to be confused with chromatic dispersion). If 2V is greater for red light than for violet light the dispersion is designated r > v, and vice versa. For carminite the dispersion is strong, with r < v. Optical properties: The mineral exhibits strong pleochroism; when viewed along the X direction it appears pale yellowish red and dark carmine red along the Y and Z directions. Absorption is equal along the Y and Z optic directions, but less along the X optic direction. When a birefringent crystal is rotated between crossed polarizers it will turn dark every 90° of rotation. This effect is known as extinction. Carminite exhibits the parallel extinction that is characteristic of orthorhombic crystals. Occurrence: Carminite is formed as an uncommon alteration product of arsenopyrite (FeAsS) in the oxidized zones of some lead-bearing deposits. Common associates are wulfenite, scorodite, plumbojarosite, mimetite, dussertite, cerussite, beudantite, bayldonite, arseniosiderite and anglesite.The type locality is the Louise Mine, Bürdenbach, Altenkirchen, Wied Iron Spar District, Westerwald, Rhineland-Palatinate, Germany where it is associated with beudantite. At the Hingston Down Consols mine in Cornwall, England, carminite occurs with scorodite, mimetite and pharmacosiderite.The ores of the Ojuela Mine, Mexico, are replacement deposits in limestone and consist of galena, sphalerite, pyrite, and arsenopyrite in a matrix of quartz, dolomite and fluorite. Arsenopyrite is abundant. On a dump near the north shaft blocks of massive scorodite containing seams and pockets of arseniosiderite and small areas of dussertite and carminite have been found. Carminite also occurs as masses mixed with cerussite, anglesite and plumbojarosite. It is almost always intimately associated with arseniosiderite and dussertite.
**Alternating multilinear map** Alternating multilinear map: In mathematics, more specifically in multilinear algebra, an alternating multilinear map is a multilinear map with all arguments belonging to the same vector space (for example, a bilinear form or a multilinear form) that is zero whenever any pair of arguments is equal. More generally, the vector space may be a module over a commutative ring. The notion of alternatization (or alternatisation) is used to derive an alternating multilinear map from any multilinear map with all arguments belonging to the same space. Definition: Let R be a commutative ring and V, W be modules over R. A multilinear map of the form f : Vn → W is said to be alternating if it satisfies the following equivalent conditions: whenever there exists 1 ≤ i ≤ n−1 such that xi = xi+1, then f(x1,…,xn) = 0; whenever there exist 1 ≤ i ≠ j ≤ n such that xi = xj, then f(x1,…,xn) = 0. Vector spaces: Let V, W be vector spaces over the same field. Then a multilinear map of the form f : Vn → W is alternating if and only if it satisfies the following condition: if x1,…,xn are linearly dependent then f(x1,…,xn) = 0. Example: In a Lie algebra, the Lie bracket is an alternating bilinear map. The determinant of a matrix is a multilinear alternating map of the rows or columns of the matrix. Properties: If any component xi of an alternating multilinear map is replaced by xi + c·xj for any j ≠ i and c in the base ring R, then the value of that map is not changed. Every alternating multilinear map is antisymmetric, meaning that exchanging any two arguments negates the value: f(…, xi, …, xj, …) = −f(…, xj, …, xi, …), or equivalently, f(xσ(1),…,xσ(n)) = sgn(σ)·f(x1,…,xn) for every σ in Sn, where Sn denotes the group of permutations of n elements and sgn σ is the sign of σ. If n! is a unit in the base ring R, then every antisymmetric n-multilinear form is alternating. Alternatization: Given a multilinear map of the form f : Vn → W, the alternating multilinear map g : Vn → W defined by g(x1,…,xn) = Σσ∈Sn sgn(σ)·f(xσ(1),…,xσ(n)) is said to be the alternatization of f. Properties The alternatization of an n-multilinear alternating map is n! times itself.
The alternatization of a symmetric map is zero. The alternatization of a bilinear map is bilinear. Most notably, the alternatization of any cocycle is bilinear. This fact plays a crucial role in identifying the second cohomology group of a lattice with the group of alternating bilinear forms on a lattice.
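As a concrete sketch of the alternatization construction (a self-contained illustration; the bilinear map f below is a hypothetical example, not taken from the article), the signed sum over permutations can be written directly:

```python
from itertools import permutations

# Alternatization of an n-multilinear map f:
#   g(x1,…,xn) = sum over permutations sigma of sgn(sigma) * f(x_sigma(1),…)
def sign(perm):
    """Sign of a permutation given as a tuple of indices (inversion count)."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def alternatize(f, n):
    def g(*args):
        return sum(sign(p) * f(*(args[i] for i in p))
                   for p in permutations(range(n)))
    return g

# A hypothetical bilinear (but not alternating) map on R^2:
f = lambda x, y: x[0] * y[0] + 2 * x[0] * y[1]
g = alternatize(f, 2)          # for n = 2: g(x, y) = f(x, y) - f(y, x)

x, y = (1.0, 2.0), (3.0, 5.0)
print(g(x, x))                 # an alternating map vanishes on equal arguments
print(g(x, y) + g(y, x))       # and is antisymmetric
```

For this f, the alternatization works out to g(x, y) = 2(x₁y₂ − x₂y₁), a multiple of the 2×2 determinant, illustrating the example above that the determinant is an alternating multilinear map of the rows.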
**WinUSB** WinUSB: WinUSB is a generic USB driver provided by Microsoft for their operating systems starting with Windows Vista, but which is also available for Windows XP. It is aimed at simple devices that are accessed by only one application at a time (for example, instruments such as weather stations, or devices that only need a connection for diagnostics or firmware upgrades). It enables the application to directly access the device through a simple software library. The library provides access to the pipes of the device. WinUSB exposes a client API that enables developers to work with USB devices from user mode. Starting with Windows 7, USB MTP devices use WinUSB instead of the kernel-mode filter driver. Advantages and disadvantages: Advantages: it does not require the knowledge needed to write a driver, and it speeds up development. Disadvantages: only one application can access the device at a time; isochronous transfers are not supported prior to Windows 8.1; USB reset is not supported (as required by the DFU protocol, for example); and on other operating systems the device still needs a custom driver. WCID: A WCID device, where WCID stands for "Windows Compatible ID", is a USB device that provides extra information to a Windows system in order to facilitate automated driver installation and, in most circumstances, allow immediate access. WCID allows a device to be used by a Windows application almost as soon as it is plugged in, as opposed to the usual scenario where a USB device that is neither HID nor Mass Storage requires end users to perform a manual driver installation. As such, WCID can bring the 'Plug-and-Play' functionality of HID and Mass Storage to any USB device (that sports WCID-aware firmware). WCID is an extension of the WinUSB device functionality. Other solutions: One solution is the use of a predefined USB device class. Operating systems provide built-in drivers for some of them. The most widely used device class for embedded devices is the USB communications device class (CDC).
A CDC device can appear as a virtual serial port to simplify the use of a new device with older applications. Another solution is UsbDk. UsbDk supports all device types, including isochronous ones, and provides a simpler way to acquire device access that does not involve creating and installing INF files. UsbDk is open source, community-supported, and works on all Windows versions starting from Windows XP. If the previous solutions are inappropriate, one can write a custom driver. For newer versions of Microsoft Windows, this can be done using the Windows Driver Foundation.
**Gastrolith** Gastrolith: A gastrolith, also called a stomach stone or gizzard stone, is a rock held inside a gastrointestinal tract. Gastroliths in some species are retained in the muscular gizzard and used to grind food in animals lacking suitable grinding teeth. In other species the rocks are ingested and pass through the digestive system and are frequently replaced. The grain size depends upon the size of the animal and the gastrolith's role in digestion. Other species use gastroliths as ballast. Particles ranging in size from sand to cobble have been documented. Etymology: Gastrolith comes from the Greek γαστήρ (gastēr), meaning "stomach", and λίθος (lithos), meaning "stone". Occurrence: Among living vertebrates, gastroliths are common among crocodiles, alligators, herbivorous birds, seals and sea lions. Domestic fowl require access to grit. Stones swallowed by ostriches can exceed a length of 10 centimetres (3.9 in). Apparent microgastroliths have also been found in frog tadpoles. Ingestion of silt and gravel by tadpoles of various anuran (frog) species has been observed to improve buoyancy control. Some extinct animals such as sauropod dinosaurs appear to have used stones to grind tough plant matter. A rare example of this is the Early Cretaceous theropod Caudipteryx zoui from northeastern China, which was discovered with a series of small stones, interpreted as gastroliths, in the area of its skeleton that would have corresponded with its abdominal region. Aquatic animals, such as plesiosaurs, may have used them as ballast, to help balance themselves or to decrease their buoyancy, as crocodiles do. While some fossil gastroliths are rounded and polished, many stones in living birds are not polished at all. Gastroliths associated with dinosaur fossils can weigh several kilograms. Occurrence: Certain crayfish store gastroliths in their stomachs.
Crayfish living in freshwater especially store these gastroliths, as the availability of calcium is limited in freshwater. These gastroliths serve as a calcium source for molting. Paleontology: History of discovery In 1906, George Reber Weiland reported the presence of worn and polished quartz pebbles associated with the remains of plesiosaurs and sauropod dinosaurs and interpreted these stones as gastroliths. In 1907, Barnum Brown found gravel in close association with the fossil remains of the duck-billed hadrosaur Claosaurus and interpreted it as gastroliths. Brown was among the first paleontologists to recognize that dinosaurs used gastroliths in their digestive systems to aid in the grinding of food. Other paleontologists over the years were unconvinced. In 1932, Friedrich von Huene found stones in Late Triassic sediments, in association with the fossil remains of the prosauropod Sellosaurus, and interpreted them as gastroliths. In 1934, the Howe Quarry, a fossil location in northwestern Wyoming, also yielded dinosaur bones with their associated gastroliths. In 1942, William Lee Stokes recognized the presence of gastroliths in the remains of sauropod dinosaurs recovered from Late Jurassic strata. Paleontology: Identification Geologists usually require several pieces of evidence before they will accept that a rock was used by a dinosaur to aid its digestion. First, it should be rounded on all edges (and some are polished), because inside a dinosaur's gizzard any genuine gastrolith would have been acted upon by other stones and fibrous materials in a process similar to the action of a rock tumbler. Second, the stone must be unlike the rock found in its geological vicinity, i.e., its geologic context. Many gastroliths have been found in fine-grained lake, mud, and swamp deposits. These environments are calm-water deposits that could not carry pebbles and cobbles (unlike a river or beach).
Oliver Wings also argues that the stone must be found with the fossils of the dinosaur which ingested it. It is this last criterion that causes trouble in identification, as smooth stones found without context can (possibly erroneously in some cases) be dismissed as having been polished by water or wind. Christopher H. Whittle (1988, 1989) pioneered scanning electron microscope analysis of wear patterns on gastroliths. Wings (2003) found that ostrich gastroliths would be deposited outside the skeleton if the carcass was deposited in an aquatic environment for as little as a few days following death. He concludes that this is likely to hold true for all birds (with the possible exception of moa) due to their air-filled bones, which would cause a carcass deposited in water to float for the time it needs to rot sufficiently to allow the gastroliths to escape. Paleontology: Gastroliths can be distinguished from stream- or beach-rounded rocks by several criteria: gastroliths are highly polished on the higher surfaces, with little or no polish in depressions or crevices, often strongly resembling the surface of worn animal teeth. Stream- or beach-worn rocks, particularly in a high-impact environment, show less polishing on higher surfaces, often with many small pits or cracks on these higher surfaces. Finally, highly polished gastroliths often show long microscopic rilles, presumably caused by contact with stomach acid. Since most gastroliths were scattered when the animal died and many entered a stream or beach environment, some gastroliths show a mixture of these wear features. Others were undoubtedly swallowed by other dinosaurs, and highly polished gastroliths may have been swallowed repeatedly. Paleontology: None of the gastroliths examined in a 2001 study of Cedarosaurus gastroliths had the "soapy" texture popularly used to distinguish gastroliths from other types of clast. The researchers dismissed using a soapy texture to identify gastroliths as "unreliable".
Gastroliths tended to be universally dull, although the colors represented were varied including black, dark brown, purplish red and grey-blue. Reflectance values greater than 50% are very diagnostic for identifying gastroliths. Clasts from beaches and streams tended to have reflectance values of less than 35%. Less than ten percent of beach clasts have reflectance values lying between 50 and 80%. Paleontology: The American Museum of Natural History Photograph # 311488 demonstrates an articulated skeleton of a Psittacosaurus mongoliensis, from the Ondai Sair Formation, Lower Cretaceous Period of Mongolia, showing a collection of about 40 gastroliths inside the rib cage, about midway between shoulder and pelvis. Geologic distribution Jurassic Gastroliths have sometimes been called Morrison stones because they are often found in the Morrison Formation (named after the town of Morrison, west of Denver, Colorado), a late Jurassic formation roughly 150 million years old. Some gastroliths are made of petrified wood. Most known instances of preserved sauropod gastroliths are from Jurassic animals. Paleontology: Cretaceous The Early Cretaceous Cedar Mountain Formation of Central Utah is full of highly polished red and black cherts, and other rounded quartzose clasts, which may partly represent gastroliths. The cherts may themselves contain fossils of ancient animals, such as corals. These stones do not appear to be associated with stream deposits and are rarely more than fist-sized, which is consistent with the idea that they are gastroliths. Paleontology: Sauropods Most known instances of preserved sauropod gastroliths are from Jurassic animals. The largest known gastroliths found in association with sauropod skeletons are approximately ten centimeters in length. Paleontology: Cedarosaurus weiskopfae In 2001 Frank Sanders, Kim Manley, and Kenneth Carpenter published a study on 115 gastroliths discovered in association with a Cedarosaurus specimen. 
The stones were identified as gastroliths on the basis of their tight spatial distribution, partial matrix support, and an edge-on orientation indicative of their being deposited while the carcass still had soft tissue. Their high surface reflectance values are consistent with other known dinosaur gastroliths. Nearly all of the Cedarosaurus gastroliths were found within a 0.06 m³ volume of space in the gut region of the skeleton. The total mass of the gastroliths themselves was 7 kilograms (15 lb). Most were less than 10 millilitres (0.35 imp fl oz; 0.34 US fl oz) in volume. The least massive clast was 0.1 grams (0.0035 oz) and the most massive was 715 grams (25.2 oz), with most of them being toward the smaller end of that range. The clasts tended to be close to spherical in shape, although the largest specimens were also the most irregular. The largest gastroliths contributed the most to the total surface area of the set. Some gastroliths were so large and irregularly shaped that they may have been difficult to swallow. The gastroliths were mostly composed of chert, with some sandstone, siltstone, and quartzite clasts also included. Since some of the most irregular gastroliths are also the largest, it is unlikely that they were ingested by accident. Cedarosaurus may have found irregular clasts to be attractive potential gastroliths or was not selective about shape. The clasts were generally of dull coloration, suggesting that color was not a major factor in the sauropod's choice of stones. The high surface area to volume ratio of the largest clasts suggests that the gastroliths may have broken down ingested plant material by grinding or crushing it. The sandstone clasts tended to be fragile and some broke in the process of collection. The sandstone gastroliths may have been rendered fragile after deposition by loss of cement caused by the external chemical environment.
If the clasts had been that fragile while the animal was alive, they probably rolled and tumbled in the digestive tract. If they were more robust, they could have served as part of a ball-mill system. Paleontology: Migration Paleontologists and geologists are researching new methods of identifying gastroliths that have been found disassociated from animal remains, because of the important information they can provide, if indeed they are trace fossils. If the validity of such gastroliths can be verified, it may be possible to trace gastrolithic rocks back to their original source area where the dinosaur first swallowed the rock. This may provide important information on how dinosaurs migrated. Because the number of suspected gastroliths is substantial, they might provide significant new information and insights into the lives and behaviour of dinosaurs.
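The reflectance criteria quoted above can be sketched as a simple screening function; the thresholds come from the text, while the function name and the three-way labels are hypothetical choices for illustration:

```python
def classify_clast(reflectance_pct: float) -> str:
    """Classify a polished clast by surface reflectance percentage.

    Thresholds follow the criteria summarized above: values above 50%
    are highly diagnostic of gastroliths, while beach and stream clasts
    typically reflect less than 35%.
    """
    if reflectance_pct > 50:
        return "likely gastrolith"
    if reflectance_pct < 35:
        return "likely beach/stream clast"
    return "indeterminate"

print(classify_clast(62))  # highly polished, in the diagnostic range
print(classify_clast(20))  # typical beach or stream clast
```

As the text notes, fewer than ten percent of beach clasts fall in the 50–80% band, so a high reflectance reading is strong but not conclusive evidence on its own.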
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Moist static energy** Moist static energy: The moist static energy is a thermodynamic variable that describes the state of an air parcel, and is similar to the equivalent potential temperature. The moist static energy is a combination of a parcel's enthalpy (arising from its internal energy and the energy required to make room for it), its potential energy due to its height above the surface, and the latent energy due to water vapor present in the air parcel. It is a useful variable for researching the atmosphere because, like several other similar variables, it is approximately conserved during adiabatic ascent and descent. The moist static energy, S, can be described mathematically as: S = Cp⋅T + g⋅z + Lv⋅q where Cp is the specific heat at constant pressure, T is the absolute air temperature, g is the acceleration due to gravity, z is the geopotential height above sea level, Lv is the latent heat of vaporization, and q is the water vapor specific humidity. Note that many texts use mixing ratio r in place of specific humidity q because these values tend to be close (within a few percent) under normal atmospheric conditions, but this is an approximation and not strictly correct. Moist static energy: Through the study of moist static energy profiles, Herbert Riehl and Joanne Malkus determined in 1958 that hot towers, small cores of convection approximately 5 kilometres (3.1 mi) wide that extend from the planetary boundary layer to the tropopause, were the primary mechanism that transported energy out of the tropics to the middle latitudes. More recently, idealized model simulations of the tropics indicate that the moist static energy budget is dominated by advection, with shallow inflow in the lowest 2 kilometres (6,600 ft) of the atmosphere and outflow concentrated about 10 kilometres (33,000 ft) above the surface. Moist static energy has also been used to study the Madden–Julian oscillation (MJO).
As with the tropics as a whole, the budget of moist static energy in the MJO is dominated by advection, but also is influenced by the wind-driven component of the surface latent heat flux. The relationship between the advection component and the latent heat component influences the timing of the MJO.
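As a minimal sketch of the formula above, moist static energy can be computed directly from T, z, and q; the constant values below are typical reference values assumed for illustration, not taken from this article:

```python
def moist_static_energy(T, z, q, cp=1004.0, g=9.81, Lv=2.501e6):
    """Moist static energy S = cp*T + g*z + Lv*q, in J/kg.

    T  : absolute air temperature (K)
    z  : geopotential height above sea level (m)
    q  : water vapor specific humidity (kg/kg)
    cp : specific heat of air at constant pressure, J/(kg K) (assumed value)
    g  : acceleration due to gravity, m/s^2
    Lv : latent heat of vaporization of water, J/kg (assumed value)
    """
    return cp * T + g * z + Lv * q

# A warm, moist near-surface parcel: 300 K at 50 m with q = 18 g/kg
S = moist_static_energy(300.0, 50.0, 0.018)
print(f"S = {S / 1000:.1f} kJ/kg")
```

Because each term is an energy per unit mass, S stays roughly constant during adiabatic ascent: sensible heat (Cp⋅T) is traded against potential energy (g⋅z) and latent energy (Lv⋅q).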
**Transmembrane protein 217** Transmembrane protein 217: Transmembrane protein 217 is a protein encoded by the gene TMEM217. TMEM217 has been found to have expression correlated with the lymphatic system and endothelial tissues and has been predicted to have a function linked to the cytoskeleton. Gene: TMEM217 is located on the chromosome 6 minus strand at 6p21.2. The gene consists of 46,857 base pairs and is flanked by TBC1D22B (TBC1 Domain Family Member 22B) and PIM1. It was previously known as C6orf128 (Chromosome 6 open reading frame 128). mRNA: TMEM217 has three common isoforms formed from the alternative splicing of three exons. Isoform 1, consisting of 1,590 nucleotides, encodes the longest polypeptide. The 5′ untranslated region of isoform 1 is relatively short and is predicted to fold into several stem loop domains within conserved areas of the untranslated region. Protein: Primary Protein Sequence The longest polypeptide of transmembrane protein 217 consists of 229 amino acids. This protein isoform has a predicted weight of 26.6 kDa and an isoelectric point at a pH of 9.3. It is notably rich in isoleucine and phenylalanine, and deficient in alanine, aspartate, and proline compared to other proteins. Transmembrane protein 217 contains the domain of unknown function, DUF4534, between amino acids 11-171. Protein: Secondary Structure Transmembrane protein 217 is predicted to have four transmembrane domains. These transmembrane domains consist primarily of uncharged amino acids in predicted alpha helices. The N-terminus and C-terminus of the protein are predicted to face the cytosol, with the C-terminus containing a long predicted coiled tail extending from the final transmembrane domain. Protein: Post-Translational Modifications There are several predicted phosphorylation and glycosylation sites on transmembrane protein 217 in highly conserved parts of the protein; the phosphorylation sites are located primarily on the C-terminal tail.
There are also two highly conserved cysteine residues, which have the potential to form a disulfide bond in the extracellular space between the first and second transmembrane domains. Expression: TMEM217 is not ubiquitously expressed. Its expression correlates with the lymphatic system and vascular/arterial endothelial tissue, with notable expression in the bladder, based on expression profiles and microarray analysis. Other tissues that have been shown to express TMEM217 include: connective tissues, the liver, mammary glands, the testis, and the cervix. Co-expression analyses have found that TMEM217 was up-regulated in response to mechanical stretch in dermal fibroblast cells and in response to the resveratrol derivative, DMU-212, in vascular endothelial tissues. Function: No known function has been attributed to TMEM217; however, a co-expression analysis in dermal fibroblasts has predicted the protein to have a potential association with the cytoskeleton. Clinical Significance: Single nucleotide polymorphisms in TMEM217 have been linked to Alzheimer’s disease and diabetic retinopathy. TMEM217 was also found to have similar expression patterns as TRPM2, a biomarker linked to breast carcinoma. Expression profiles have also linked elevated TMEM217 expression to bladder cancer and lymphoma. Homology: TMEM217 was found to have orthologs in organisms as early as the scaled fish, which diverged 420 million years ago. Although found in organisms as early as fish and reptiles, TMEM217 has no known orthologs in any bird species. TMEM217 has no known paralogs.
**Privateer (motorsport)** Privateer (motorsport): In motorsport, a privateer is usually an entrant into a racing event that is not directly supported by an automobile or motorcycle manufacturer. Privateer teams are often found competing in rally, circuit racing and motorcycle racing events, and often include competitors who build and maintain their own vehicles and motorcycles. In previous Formula One seasons, privately owned teams would race using the chassis of another team or constructor in preference to building their own car; the Concorde Agreement now prohibits this practice. Increasingly, the term is used in an F1 context to refer to teams that are not at least part-owned by large corporations, such as Williams F1 and McLaren F1 Team. Privateer (motorsport): Many privateer entrants compete for the enjoyment of the sport, and are not paid to be racing drivers.
**Bytownite** Bytownite: Bytownite is a calcium-rich member of the plagioclase solid solution series of feldspar minerals, with composition between anorthite and labradorite. It is usually defined as having between 70 and 90% An (formula: (Ca0.7−0.9Na0.3−0.1)[Al(Al,Si)Si2O8]). Like others of the series, bytownite forms grey to white triclinic crystals commonly exhibiting the typical plagioclase twinning and associated fine striations. The specific gravity of bytownite varies between 2.74 and 2.75. The refractive index ranges are nα=1.563 – 1.572, nβ=1.568 – 1.578, and nγ=1.573 – 1.583. Precise determination of these two properties by chemical, X-ray diffraction, or petrographic analysis is required for identification. Occurrence: Bytownite is a rock-forming mineral occurring in mafic igneous rocks such as gabbros and anorthosites. It also occurs as phenocrysts in mafic volcanic rocks. It is rare in metamorphic rocks. It is typically associated with pyroxenes and olivine. The mineral was first described in 1836 and named for an occurrence at Bytown (now Ottawa), Canada. Other noted occurrences in Canada include the Shawmere anorthosite in Foleyet Township, Ontario, and on Yamaska Mountain, near Abbotsford, Quebec. It occurs on Rùm island, Scotland and Eycott Hill, near Keswick, Cumberland, England. It is reported from Naaraodal, Norway and in the Bushveld complex of South Africa. It is also found in Isa Valley, Western Australia. In the US it is found in the Stillwater igneous complex of Montana, and near Lakeview, Lake County, Oregon. It occurs in the Lucky Cuss mine, Tombstone, Arizona; and in the Grants district, McKinley County, New Mexico. In the eastern US it occurs at Cornwall, Lebanon County, Pennsylvania and Phoenixville, Chester County, Pennsylvania.
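The compositional definition above (An70–An90) can be placed in context with a small lookup over the conventional plagioclase series divisions; the boundary values other than bytownite's are the standard textbook divisions, assumed here rather than stated in this article:

```python
# Conventional plagioclase series divisions by anorthite (An) content.
# Only the bytownite range (An70-90) is stated in the text above; the
# other boundaries are the standard textbook ones (assumed).
PLAGIOCLASE_SERIES = [
    (10, "albite"),
    (30, "oligoclase"),
    (50, "andesine"),
    (70, "labradorite"),
    (90, "bytownite"),
    (100, "anorthite"),
]

def plagioclase_member(an_percent: float) -> str:
    """Return the plagioclase member name for a given An percentage."""
    for upper, name in PLAGIOCLASE_SERIES:
        if an_percent <= upper:
            return name
    raise ValueError("An content must be between 0 and 100")

print(plagioclase_member(80))  # a composition in the bytownite range
```

As the text notes, composition alone is not measured by eye: specific gravity, refractive indices, and chemical or X-ray analysis are what pin down where a sample sits in the series.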
**DRIP-seq** DRIP-seq: DRIP-seq (DRIP-sequencing) is a technology for genome-wide profiling of a type of DNA-RNA hybrid called an "R-loop". DRIP-seq utilizes a sequence-independent but structure-specific antibody for DNA-RNA immunoprecipitation (DRIP) to capture R-loops for massively parallel DNA sequencing. Introduction: An R-loop is a three-stranded nucleic acid structure, which consists of a DNA-RNA hybrid duplex and a displaced single stranded DNA (ssDNA). R-loops are predominantly formed in cytosine-rich genomic regions during transcription and are known to be involved with gene expression and immunoglobulin class switching. They have been found in a variety of species, ranging from bacteria to mammals. They are preferentially localized at CpG island promoters in human cells and highly transcribed regions in yeast. Under abnormal conditions, namely elevated production of DNA-RNA hybrids, R-loops can cause genome instability by exposing single-stranded DNA to endogenous damage from enzymes such as AID and APOBEC, or to overexposure to chemically reactive species. Therefore, understanding where and in what circumstances R-loops are formed across the genome is crucial for a better understanding of genome instability. R-loop characterization was initially limited to locus-specific approaches. However, with the arrival of massively parallel sequencing technologies and, subsequently, derivatives like DRIP-seq, it has become possible to investigate entire genomes for R-loops. Introduction: DRIP-seq relies on the high specificity and affinity of the S9.6 monoclonal antibody (mAb) towards DNA-RNA hybrids of various lengths. The S9.6 mAb was first created and characterized in 1986 and is currently used for the selective immunoprecipitation of R-loops. Since then, it has been used in diverse immunoprecipitation methods for R-loop characterization.
The concept behind DRIP-seq is similar to ChIP-sequencing; R-loop fragments are the main immunoprecipitated material in DRIP-seq. Uses and Current Research: DRIP-seq is mainly used for genome-wide mapping of R-loops. Identifying R-loop formation sites allows the study of diverse cellular events, such as the function of R-loop formation at specific regions, the characterization of these regions, and the impact on gene expression. It can also be used to study the influence of R-loops in other processes like DNA replication and synthesis. Indirectly, DRIP-seq can be performed on mutant cell lines deficient in genes involved in R-loop resolution. These types of studies provide information about the roles of the mutated gene in suppressing DNA-RNA formation and potentially about the significance of R-loops in genome instability. Uses and Current Research: DRIP-seq was first used for genome-wide profiling of R-loops in humans, which showed widespread R-loop formation at CpG island promoters. Particularly, the researchers found that R-loop formation is associated with the unmethylated state of CpG islands. DRIP-seq was later used to profile R-loop formation at transcription start and termination sites in human pluripotent Ntera2 cells. In this study, the researchers revealed that R-loops on 3' ends of genes may be correlated with transcription termination. Workflow of DRIP-seq: Genomic DNA extraction First, genomic DNA (gDNA) is extracted from cells of interest by proteinase K treatment followed by phenol-chloroform extraction and ethanol precipitation. Additional zymolyase digestion is necessary for yeast cells to remove the cell wall prior to proteinase K treatment. gDNA can also be extracted with a variety of other methods, such as column-based methods. Genomic DNA fragmentation gDNA is treated with S1 nuclease to remove undesired ssDNA and RNA, followed by ethanol precipitation to remove the S1 nuclease. 
Then, gDNA is fragmented with restriction endonuclease, yielding double-stranded DNA (dsDNA) fragments of different sizes. Alternatively, gDNA fragments can be generated by sonication. Workflow of DRIP-seq: Immunoprecipitation Fragmented gDNA is incubated with the DNA-RNA structure-specific S9.6 mAb. This step is unique for the DRIP-seq protocol, since it entirely relies on the high specificity and affinity of the S9.6 mAb for DNA-RNA hybrids. The antibody will recognize and bind these regions dispersed across the genome and will be used for immunoprecipitation. The S9.6 antibodies are bound to magnetic beads by interacting with specific ligands (i.e. protein A or protein G) on the surface of the beads. Thus, the DNA-RNA containing fragments will bind to the beads by means of the antibody. Workflow of DRIP-seq: Elution The magnetic beads are washed to remove any gDNA not bound to the beads by a series of washes and DNA-RNA hybrids are recovered by elution. To remove the antibody bound to the nucleic acid hybrids, proteinase K treatment is performed followed by phenol-chloroform extraction and ethanol precipitation. This results in the isolation of purified DNA-RNA hybrids of different sizes. Workflow of DRIP-seq: Sequencing For massive parallel sequencing of these fragments, the immunoprecipitated material is sonicated, size selected and ligated to barcoded oligonucleotide adaptors for cluster enrichment and sequencing. Computational Analysis: To detect sites of R-loop formation, the hundreds of millions of sequencing reads from DRIP-seq are first aligned to a reference genome with a short-read sequence aligner, then peak calling methods designed for ChIP-seq can be used to evaluate DRIP signals. If different cocktails of restriction enzymes were used for different DRIP-seq experiments of the same sample, consensus DRIP-seq peaks are called. Typically, peaks are compared against those from a corresponding RNase H1-treated sample, which serves as an input control. 
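When DRIP-seq experiments on the same sample use different restriction-enzyme cocktails, consensus peak calling amounts to intersecting the peak intervals from each experiment. A minimal sketch of that intersection step (the interval representation and function name are hypothetical; production pipelines typically use dedicated tools such as bedtools):

```python
def consensus_peaks(peaks_a, peaks_b):
    """Intersect two sorted, non-overlapping lists of (start, end) peaks.

    Returns the overlapping regions, i.e. candidate consensus peaks
    supported by both experiments.
    """
    out = []
    i = j = 0
    while i < len(peaks_a) and j < len(peaks_b):
        a_start, a_end = peaks_a[i]
        b_start, b_end = peaks_b[j]
        start, end = max(a_start, b_start), min(a_end, b_end)
        if start < end:
            out.append((start, end))
        # Advance whichever interval ends first.
        if a_end < b_end:
            i += 1
        else:
            j += 1
    return out

rep1 = [(100, 500), (900, 1200)]   # peaks from enzyme cocktail A
rep2 = [(300, 700), (1100, 1300)]  # peaks from enzyme cocktail B
print(consensus_peaks(rep1, rep2))
```

In a real analysis the peaks would first be called per experiment (against the RNase H1-treated control), then intersected per chromosome.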
Limitations: Due to the absence of another antibody-based method for R-loop immunoprecipitation, validation of DRIP-seq results is difficult. However, results of other R-loop profiling methods, such as DRIVE-seq, may be used to measure consensus. Limitations: On the other hand, DRIP-seq relies on existing short-read sequencing platforms for the sequencing of R-loops. In other words, all inherent limitations of these platforms also apply to DRIP-seq. In particular, typical short-read sequencing platforms produce uneven read coverage in GC-rich regions. Sequencing long R-loops might pose a challenge because R-loops are predominantly formed in cytosine-rich DNA regions. Moreover, GC-rich regions tend to have low complexity by nature, which makes it difficult for short-read aligners to produce unique alignments. Other R-loop Profiling Methods: Although there are several other methods for analysis and profiling of R-loop formation, only a few provide coverage and robustness at the genome-wide scale. Non-denaturing bisulfite modification and sequencing: This method consists of bisulfite treatment followed by sequencing and relies on the mutagenic effect of sodium bisulfite on ssDNA. Although this method is primarily used to localize specific CpG island promoters, it has been used to detect R-loops at a minor scale and at other ssDNA fragile sites. DNA:RNA In Vitro Enrichment (DRIVE-seq): This method shares very similar principles with DRIP-seq except for the use of the MBP-RNASEH1 endonuclease instead of the S9.6 mAb for R-loop recovery. MBP-RNASEH1 provides an alternative to the S9.6 mAb when an additional capture assay is needed; however, over-expression of this endonuclease may introduce cytotoxic risks in vivo. Other R-loop Profiling Methods: DNA:RNA immunoprecipitation followed by hybridization on tiling microarray (DRIP-chip): This method also relies on the use of the S9.6 mAb.
However, instead of entering into a sequencing pipeline, the immunoprecipitated material in DRIP-chip is hybridized to a microarray. An advantage of DRIP-chip over DRIP-seq is that data can be obtained rapidly. The limiting factors of this technique are the number of probes on the microarray chips and the absence of DNA sequence information.
**RJ TextEd** RJ TextEd: RJ TextEd is a freeware Unicode text and source code editor for Windows that can also be used as a simple web development tool. The editor uses a variety of techniques for syntax highlighting in the source. It can use auto completion and hints to assist in editing source code. Previews of HTML/ASP/PHP code are supported. A syntax file editor is included. The interface is based on the MDI, with tabs for editing multiple files and open document manipulation. RJ TextEd includes a web browser, a file manager, and a CSS editor, as well as various tools for web developers.
**Experimental and Applied Acarology** Experimental and Applied Acarology: Experimental and Applied Acarology is a monthly peer-reviewed scientific journal covering all aspects of acarology. It was established in 1985 and is published by Springer Science+Business Media. The editor-in-chief is Maurice W. Sabelis (University of Amsterdam). Abstracting and indexing: The journal is abstracted and indexed in:
**Pascal Costanza** Pascal Costanza: Pascal Costanza is a research scientist at the ExaScience Lab at Intel Belgium. He is known in the field of functional programming in LISP as well as in the aspect-oriented programming (AOP) community for contributions to this field by applying AOP through Lisp1. More recently, he has developed Context-oriented programming, with Robert Hirschfeld. Pascal Costanza: His past involvements include specification and implementation of the languages Gilgul and Lava, and the design and application of the JMangler framework for load-time transformation of Java class files. He has also implemented ContextL, the first programming language extension for Context-oriented Programming based on CLOS, and aspect-oriented extensions for CLOS. He is furthermore the initiator and lead of Closer, an open source project that provides a compatibility layer for the CLOS MOP across multiple Common Lisp implementations. He has also co-organized numerous workshops on Unanticipated Software Evolution, Aspect-Oriented Programming, Object Technology for Ambient Intelligence, Lisp, and redefinition of computing. He has a Ph.D. degree from the University of Bonn, Germany. Notes: Dynamically Scoped Functions as the Essence of AOP OOP 2003 Workshop on Object-Oriented Language Engineering for the Post-Java Era, Darmstadt, Germany, July 22, 2003; published in ACM SIGPLAN Notices Volume 38, Issue 8 (August 2003), ACM Press
**Urea transporter 2** Urea transporter 2: Urea transporter 2 is a protein that in humans is encoded by the SLC14A2 gene. Function: In mammalian cells, urea is the chief end-product of nitrogen catabolism and plays an important role in the urinary concentration mechanism. Thus, the plasma membrane of erythrocytes and some renal epithelial cells exhibit an elevated urea permeability that is mediated by highly selective urea transporters. In mammals, two urea transporters have been identified: the renal tubular urea transporter, UT2 (UT-A), and the erythrocyte urea transporter, UT11 (also called UT-B, coded for by the SLC14A1 gene). SLC14A2 and SLC14A1 constitute solute carrier family 14.
**One size fits all** One size fits all: "One size fits all" is a description for a product that would fit in all instances. The term has been extended to mean one style or procedure would fit in all related applications. It is an alternative for "Not everyone fits the mold." It has been in use for over five decades. There are both positive and negative uses of the phrase. History of the phrase: The term "one size fits all" has been used as a common, cliché phrase for over five decades. Positive views of the phrase: There are several positive views of the phrase "one size fits all": A wristwatch could be considered as fitting all people. In women's clothing, a flexible or open garment can be labeled as one size fits all; however, the size is typically a medium size (able to expand), rather than actually fitting petite or extra-large (XL) sizes. A neck chain could be designed to be worn by a person of any size. Bicycle helmets with ring fit systems allow for a single size, also known as universal fit. In military gear, some items have just one size (but smaller or larger people have already been excluded from military service). Many baseball hats available for commercial purchase are labeled "one-size-fits-all." Negative views of the phrase: There are many negative views of the phrase "one size fits all", including: Many customers prefer to have custom-tailored clothing. Men's suits typically have specific sizes for chest and waist measurements. Shoes are an example where sizes (and widths) vary depending on the specific person. For the U.S. G.I. Bill in education, options and coverage will vary depending on each person. Politically, the phrase has come to mean that methods of administration or political beliefs in one country should not necessarily be applied to another.
**Selectivity factor** Selectivity factor: Selectivity factor is a quantifiable measure of how efficient an antibiotic is during the process of gene selection. It measures the capacity of an antibiotic to select for transfected (resistant) cells that contain a selectable marker, while killing untransfected (sensitive) cells that do not contain a selectable marker. A selectivity factor higher than 10 is optimal. This means the concentration of antibiotic is sufficient to kill untransfected cells but not toxic enough to kill transfected cells. A selectivity factor lower than 10 means the concentration of antibiotic needed for selection is too close to the toxic concentration for the transfected cells. As a result, fewer transfected cells survive and more untransfected cells survive. In this case an alternative antibiotic should be considered. Calculating the selectivity factor: The method uses a modified MTT assay. The MTT assay is a colorimetric assay used to assess cell metabolic activity. The assay is based on the reduction of yellow tetrazolium salt (MTT) by active cells to produce purple formazan crystals which accumulate in living cells. Cells are lysed, the crystals are dissolved, and the absorbance of the solution is analysed on a spectrophotometer as a measure of cell viability. In situations where the use of MTT is problematic, PI or Sytox Green screening in a fluorescence plate reader can be considered. Calculating the selectivity factor: The next step is to generate a kill curve, which defines the ideal concentration of a selection antibiotic to kill untransfected cells (Fig 1A). Curves are generated for both sensitive cells and resistant cells. The half-maximal inhibitory concentration (IC50) can then be calculated, which measures the potency of the antibiotic.
The selectivity factor is calculated as follows: SF = IC50R/IC50S where SF = selectivity factor; IC50 = half-maximal inhibitory concentration; R = resistant cells; S = sensitive cells. Advantages of the selectivity factor: The selectivity factor has the following advantages: it is quantitative and thus can be reported numerically using a microplate reader; it streamlines the process of generating stable cell lines (the assay can be completed in 3 days); it considers both sensitive and resistant cells; and it allows comparison of the consistency and quality of antibiotics from different batches, vendors, and manufacturing methods. Practical uses of the selectivity factor: The selectivity factor can be used for the creation of stably transfected cell lines, an important tool in drug discovery, biomedical research, and biological pathway investigation. Cell line creation involves transfection (transferring the gene into the cell line) and selection (applying selective pressure in the form of an antibiotic). Transfection efficiency is dependent on cell type, cell density, vector, and transfection method. Selection efficiency depends on the capacity of the antibiotic to kill the parental cells but not the transfected cells.
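The calculation above is a single ratio; a short sketch, using hypothetical IC50 values for illustration:

```python
def selectivity_factor(ic50_resistant, ic50_sensitive):
    """SF = IC50 of resistant (transfected) cells / IC50 of sensitive cells."""
    return ic50_resistant / ic50_sensitive

def interpret(sf, threshold=10.0):
    """Per the text, an SF of at least 10 is optimal for selection."""
    if sf >= threshold:
        return "suitable for selection"
    return "consider an alternative antibiotic"

# Hypothetical IC50 values (e.g. in ug/mL) read off the two kill curves
sf = selectivity_factor(ic50_resistant=600.0, ic50_sensitive=40.0)
print(f"SF = {sf:.1f}: {interpret(sf)}")
```

In practice the two IC50 values would come from fitted dose-response curves for the resistant and sensitive populations, as described above.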
**QED vacuum** QED vacuum: The QED vacuum or quantum electrodynamic vacuum is the field-theoretic vacuum of quantum electrodynamics. It is the lowest energy state (the ground state) of the electromagnetic field when the fields are quantized. When Planck's constant is hypothetically allowed to approach zero, QED vacuum is converted to classical vacuum, which is to say, the vacuum of classical electromagnetism. Another field-theoretic vacuum is the QCD vacuum of the Standard Model. Fluctuations: The QED vacuum is subject to fluctuations about a dormant zero average-field condition. Here is a description of the quantum vacuum: The quantum theory asserts that a vacuum, even the most perfect vacuum devoid of any matter, is not really empty. Rather the quantum vacuum can be depicted as a sea of continuously appearing and disappearing [pairs of] particles that manifest themselves in the apparent jostling of particles that is quite distinct from their thermal motions. These particles are ‘virtual’, as opposed to real, particles. ...At any given instant, the vacuum is full of such virtual pairs, which leave their signature behind, by affecting the energy levels of atoms. Fluctuations: Virtual particles It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle, ΔE⋅Δt ≥ ħ/2 (where ΔE and Δt are energy and time variations, and ħ the Planck constant divided by 2π), arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times. This interpretation of the energy-time uncertainty relation is not universally accepted, however. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE.
Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation (such as [q, p] = iħ). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle are a continuing subject of study. Fluctuations: Quantization of the fields The Heisenberg uncertainty principle does not allow a particle to exist in a state in which the particle is simultaneously at a fixed location, say the origin of coordinates, and has also zero momentum. Instead the particle has a range of momentum and spread in location attributable to quantum fluctuations; if confined, it has a zero-point energy. An uncertainty principle applies to all quantum mechanical operators that do not commute. In particular, it applies also to the electromagnetic field. A digression follows to flesh out the role of commutators for the electromagnetic field. Fluctuations: The standard approach to the quantization of the electromagnetic field begins by introducing a vector potential A and a scalar potential V to represent the basic electromagnetic electric field E and magnetic field B using the relations E = −∇V − ∂A/∂t and B = ∇ × A. The vector potential is not completely determined by these relations, leaving open a so-called gauge freedom. Resolving this ambiguity using the Coulomb gauge leads to a description of the electromagnetic fields in the absence of charges in terms of the vector potential and the momentum field Π, given by Π = ε0 ∂A/∂t, where ε0 is the electric constant of the SI units. Quantization is achieved by insisting that the momentum field and the vector potential do not commute. That is, the equal-time commutator is [Ai(r), Πj(r′)] = iħ δij δ(r − r′), where r, r′ are spatial locations, ħ is Planck's constant over 2π, δij is the Kronecker delta and δ(r − r′) is the Dirac delta function.
The notation [ , ] denotes the commutator. Fluctuations: Quantization can be achieved without introducing the vector potential, in terms of the underlying fields themselves: where the circumflex denotes a Schrödinger time-independent field operator, and εijk is the antisymmetric Levi-Civita tensor. Because of the non-commutation of field variables, the variances of the fields cannot be zero, although their averages are zero. The electromagnetic field therefore has a zero-point energy, and a lowest quantum state. The interaction of an excited atom with this lowest quantum state of the electromagnetic field is what leads to spontaneous emission, the transition of an excited atom to a state of lower energy by emission of a photon even when no external perturbation of the atom is present. Electromagnetic properties: As a result of quantization, the quantum electrodynamic vacuum can be considered as a material medium. It is capable of vacuum polarization. In particular, the force law between charged particles is affected. The electrical permittivity of the quantum electrodynamic vacuum can be calculated, and it differs slightly from the simple ε0 of the classical vacuum. Likewise, its permeability can be calculated and differs slightly from μ0. This medium is a dielectric with relative dielectric constant > 1, and is diamagnetic, with relative magnetic permeability < 1. Under some extreme circumstances in which the field exceeds the Schwinger limit (for example, in the very high fields found in the exterior regions of pulsars), the quantum electrodynamic vacuum is thought to exhibit nonlinearity in the fields. Calculations also indicate birefringence and dichroism at high fields. Many of the electromagnetic effects of the vacuum are small, and only recently have experiments been designed to enable the observation of nonlinear effects. PVLAS and other teams are working towards the needed sensitivity to detect QED effects.
Attainability: A perfect vacuum is itself only attainable in principle. It is an idealization, like absolute zero for temperature, that can be approached, but never actually realized: One reason [a vacuum is not empty] is that the walls of a vacuum chamber emit light in the form of black-body radiation...If this soup of photons is in thermodynamic equilibrium with the walls, it can be said to have a particular temperature, as well as a pressure. Another reason that perfect vacuum is impossible is the Heisenberg uncertainty principle which states that no particles can ever have an exact position ...Each atom exists as a probability function of space, which has a certain nonzero value everywhere in a given volume. ...More fundamentally, quantum mechanics predicts ...a correction to the energy called the zero-point energy [that] consists of energies of virtual particles that have a brief existence. This is called vacuum fluctuation. Attainability: Virtual particles make a perfect vacuum unrealizable, but leave open the question of attainability of a quantum electrodynamic vacuum or QED vacuum. Predictions of QED vacuum such as spontaneous emission, the Casimir effect and the Lamb shift have been experimentally verified, suggesting QED vacuum is a good model for a high quality realizable vacuum. There are competing theoretical models for vacuum, however. For example, quantum chromodynamic vacuum includes many virtual particles not treated in quantum electrodynamics. The vacuum of quantum gravity treats gravitational effects not included in the Standard Model. It remains an open question whether further refinements in experimental technique ultimately will support another model for realizable vacuum.
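The zero-point energy invoked above can be written explicitly. In the standard mode expansion of the quantized electromagnetic field (a textbook result assumed here, not derived in this article), the Hamiltonian and its vacuum expectation value are:

```latex
H = \sum_{\mathbf{k},\lambda} \hbar\omega_{k}
    \left( \hat{a}^{\dagger}_{\mathbf{k}\lambda}\hat{a}_{\mathbf{k}\lambda}
           + \tfrac{1}{2} \right),
\qquad
\langle 0 \rvert H \lvert 0 \rangle
  = \sum_{\mathbf{k},\lambda} \tfrac{1}{2}\hbar\omega_{k}.
```

Each mode of wavevector k and polarization λ contributes ħωk/2 even in the ground state, which is why the field variances discussed above cannot vanish and why a perfect, fluctuation-free vacuum is unattainable.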