**Hybrid event** Hybrid event: A hybrid event is a tradeshow, conference, unconference, seminar, workshop or other meeting that combines a "live" in-person event with a "virtual" online component. Hybrid event: With the growing popularity and cost-effectiveness of virtual events, hybrid events have become a popular way of increasing participation in traditional events at a relatively low cost. They also enable participation by people who might be unable to attend physically due to travel or time zone constraints, or through a wish to reduce the carbon footprint of the event. The open, participatory nature of unconferences (e.g., Barcamp) and their focus on sharing content make them hybrid events too. Hybrid event: Generally, the virtual component involves an online representation of the live event. For example, online participants might have access to: live audio or video streaming of keynote speakers or workshops, alongside their presentation material; online presentations (ranging from webcasts to sharing of content via online slide-sharing websites); a hybrid event webcast with synchronized slides alongside the live and archived webcast video presentation; creation of a live commentary or transcript of proceedings; online chat or discussion forum facilities, including audience polls or question submission; live blogs; event photographs and video; and integration of other social media tools. Provision of internet access, usually via free Wi-Fi, is normal at hybrid events. As well as allowing a physical event to reach a wider audience, these online tools also provide a means for physical attendees to interact with each other, with the event organisers and with online participants, and for online participants to interact with each other. Some events have featured 'TwitterWalls' where Twitter comments about the event are shared with physical attendees. Hybrid event: Event content can also be recorded and made available online to foster further discussions after the event has ended, build out a knowledge portal for event participants, and help market the next year's event by sharing highlights from the current year. Examples of hybrid events: One of the first university-level hybrid events was held in 1992, between the University of Helsinki in Finland and Williams College in the US, directed by the philosophers Esa Saarinen and Mark C. Taylor, though it was described at the time as the first global seminar using teleconferencing technology. The book Imagologies: Media Philosophy (1994) grew out of the seminar. At Barcamp events, all attendees are encouraged to share information and experiences of the event via public web channels including blogs, photo sharing, social bookmarking, Twitter, wikis, and IRC. This is a conscious change from the "off-the-record by default" and "no recordings" rules at conventional conferences. Run by an online community advocating use of social media or Web 2.0 to improve the built environment, Be2camp unconference events are also a practical demonstration of how the tools can be used to combine face-to-face and online participation. Examples of hybrid events: BASF has complemented a global employee summit with a virtual component. The physical event brought together IT professionals from all over the world to the BASF headquarters. Shortly after the event, keynotes, workshop results, street interviews, and other materials were available for virtual participants.
This virtual event lasted several months, which particularly enabled people whose schedules did not allow them to attend in person to participate virtually. Examples of hybrid events: Cisco Live on-site conferences run concurrently with a virtual component, called Cisco Live and Networkers Virtual. Cisco Live was awarded Best Hybrid Live+Virtual Program at the 2010 Ex Awards. In addition, it was awarded the 2010 Grand Ex Award. Examples of hybrid events: The Centers for Disease Control and Prevention (CDC) held the government's first hybrid event on August 21–24, 2011, with the introduction of an immersive virtual version of the Public Health Informatics conference (an in-person event held at the Hyatt Regency Atlanta hotel). Remotely located state and local partners and public health IT colleagues were able to experience all of the plenary sessions and many other concurrent activities simultaneously with the in-person event. The event was co-sponsored by the National Association of County and City Health Officials (NACCHO) so that employees, state and local groups, and partners could experience activities remotely during the traditional conference, without everyone having to spend money on travel and lodging or negatively affecting the environment. The hybrid event was well received by attendees: there were 1,865 online registrations (after only one week of advertising), while the traditional conference averaged around 1,500. The hybrid event was named one of BizBash.com's 15 Most Innovative Meetings of 2012. Other Press: CDC Looks To Virtual Conferences Over Costlier Onsite Events; A Federal Case. Examples of hybrid events: At the ACTE (Association of Corporate Travel Executives) conference in Rome, the first virtual panel was introduced on Monday, October 15, with four panelists live at the Waldorf Astoria and one panelist, a subject-matter expert from British Telecom, attending virtually from Brussels. The session was titled "Finding the Balance Between Physical and Virtual Travel", and the setup was organized by GVN, The Virtual Airline (Global Videoconferencing Network). Another example was Visual Collab 2018, held at the RSA House in London.
**Threonine ammonia-lyase** Threonine ammonia-lyase: Threonine ammonia-lyase (EC 4.3.1.19, systematic name L-threonine ammonia-lyase (2-oxobutanoate-forming)), also commonly referred to as threonine deaminase or threonine dehydratase, is an enzyme responsible for catalyzing the conversion of L-threonine into α-ketobutyrate and ammonia:
L-threonine = 2-oxobutanoate + NH3 (overall reaction)
(1a) L-threonine = 2-aminobut-2-enoate + H2O
(1b) 2-aminobut-2-enoate = 2-iminobutanoate (spontaneous)
(1c) 2-iminobutanoate + H2O = 2-oxobutanoate + NH3 (spontaneous)
α-Ketobutyrate can be converted into L-isoleucine, so threonine ammonia-lyase functions as a key enzyme in BCAA synthesis. It employs a pyridoxal-5'-phosphate cofactor, similar to many enzymes involved in amino acid metabolism. It is found in bacteria, yeast, and plants, though most research to date has focused on forms of the enzyme in bacteria. This enzyme was one of the first in which negative feedback inhibition by the end product of a metabolic pathway was directly observed and studied. The enzyme serves as an excellent example of the regulatory strategies used in amino acid homeostasis. Structure: Threonine ammonia-lyase is a tetramer of identical subunits, and is arranged as a dimer of dimers. Each subunit has two domains: a domain containing the catalytic active site and a domain with allosteric regulatory sites. The two have been shown to be distinct regions, but the regulatory site of one subunit actually interacts with the catalytic site of another subunit. Both domains contain the repeating structural motif of beta sheets surrounded by alpha helices. While the threonine binding site is not perfectly understood, structural studies do reveal how the pyridoxal phosphate cofactor is bound. The PLP cofactor is bonded to a lysine residue by means of a Schiff base, and the phosphate group of PLP is held in place by amine groups derived from a repeating sequence of glycine residues. The aromatic ring is bound to phenylalanine, and the nitrogen on the ring is hydrogen-bonded to hydroxyl group-containing residues. Mechanism: The mechanism of threonine ammonia-lyase is analogous to other deaminating PLP enzymes in its use of Schiff base intermediates. Initially, the amine group of threonine attacks the lysine/PLP Schiff base, displacing lysine. After deprotonation of the amino acid alpha carbon and subsequent dehydration (hence the common name threonine dehydratase), a new Schiff base is formed. This Schiff base is replaced by lysine attack, reforming the catalytically active PLP and releasing an initial alkene-containing product. This product tautomerizes, and after hydrolysis of the Schiff base, the final products are generated. After the final alpha-ketobutyrate product is generated, isoleucine is synthesized by progressing through the intermediates alpha-acetohydroxybutyrate to alpha-beta-dihydroxy-beta-methylvalerate, then to alpha-keto-beta-methylvalerate. Regulation: Threonine ammonia-lyase has been shown not to follow Michaelis-Menten kinetics; rather, it is subject to complex allosteric control. The enzyme is inhibited by isoleucine, the product of the pathway it participates in, and is activated by valine, the product of a parallel pathway. Thus, an increase in isoleucine concentration shuts off its production, and an increase in valine concentration diverts starting material (hydroxyethyl-TPP) away from valine production.
The enzyme has two binding sites for isoleucine; one has a high affinity for isoleucine and the other has a low affinity. The binding of isoleucine to the high affinity site increases the binding affinity of the low affinity site, and enzyme deactivation occurs when isoleucine binds to the low affinity site. Valine promotes enzyme activity by competitively binding to the high affinity site, preventing isoleucine from having an inhibitory effect. The combination of these two feedback methods balances the concentration of BCAAs. Isoforms and other functions: Multiple forms of threonine ammonia-lyase have been observed in a variety of organisms. In Escherichia coli, a system in which the enzyme has been studied extensively, two different forms of the enzyme are found. One is biosynthetic and resembles the enzyme characteristics presented here, while the other is degradative and functions to generate carbon fragments for energy production. The pair of isoforms has also been observed in other bacteria. In many bacteria, the biodegradative isoform of the enzyme is expressed in anaerobic conditions and is promoted by cAMP and threonine, while the biosynthetic isoform is expressed in aerobic conditions. This allows the bacterium to balance energy stores and inhibit energy-consuming synthetic pathways when energy is not abundant. Isoforms and other functions: In plants, threonine ammonia-lyase is important in defense mechanisms against herbivores and is upregulated in response to abiotic stress. An adapted isoform of the enzyme with unique properties that deter herbivores is expressed in plant leaves. The catalytic domain of this isoform is extremely resistant to proteolysis, while the regulatory domain degrades readily, so upon ingestion by another organism, the threonine deamination capabilities of the enzyme go unchecked. This degrades threonine before the herbivore can absorb it, starving the herbivore of an essential amino acid. Studies of threonine ammonia-lyase in plants have also offered new strategies in the development of GMOs with increased nutritional value by increasing essential amino acid content. Other, more exotic forms of the enzyme have been found that are extremely small in size but still retain all catalytic and regulatory functions. Evolution: There are five major fold types for PLP-dependent enzymes. Threonine ammonia-lyase is a member of the Fold Type II family, also known as the tryptophan synthase family. Though threonine ammonia-lyase does not possess substrate tunneling like tryptophan synthase does, it shares substantial conserved homology with that family. Threonine ammonia-lyase is most closely related to serine dehydratase, and both possess the same general catalytic mechanism. In fact, threonine ammonia-lyase has been shown to exhibit some specificity towards serine and can convert serine into pyruvate. The regulatory domain of threonine ammonia-lyase is very similar to the regulatory domain of phosphoglycerate dehydrogenase. All of these relationships demonstrate that threonine ammonia-lyase has close evolutionary ties to these enzymes. Due to the degree of conserved structure and sequence in enzymes that recognize amino acids, it is likely that the evolutionary diversity of these enzymes came about by the matching together of individual regulatory and catalytic domains in various ways. Relevance to humans: Threonine ammonia-lyase is not found in humans.
Thus, this is one example of why humans cannot synthesize all 20 proteinogenic amino acids; in this specific case, humans cannot convert threonine into isoleucine and must consume isoleucine in the diet. The enzyme has also been studied as a possible tumor-suppressing agent, for the reason described above: by degrading threonine it could deprive tumor cells of an essential amino acid and kill them. This treatment has not, however, been put into use.
**DSP-2230** DSP-2230: DSP-2230 is a selective small-molecule Nav1.7 and Nav1.8 voltage-gated sodium channel blocker which is under development by Dainippon Sumitomo Pharma for the treatment of neuropathic pain. As of June 2014, it is in phase I/phase II clinical trials.
**Humanized antibody** Humanized antibody: Humanized antibodies are antibodies from non-human species whose protein sequences have been modified to increase their similarity to antibody variants produced naturally in humans. The process of "humanization" is usually applied to monoclonal antibodies developed for administration to humans (for example, antibodies developed as anti-cancer drugs). Humanization can be necessary when the process of developing a specific antibody involves generation in a non-human immune system (such as that in mice). The protein sequences of antibodies produced in this way are partially distinct from homologous antibodies occurring naturally in humans, and are therefore potentially immunogenic when administered to human patients (see also Human anti-mouse antibody). The International Nonproprietary Names of humanized antibodies end in -zumab, as in omalizumab (see Nomenclature of monoclonal antibodies). Humanized antibodies are distinct from chimeric antibodies. The latter also have their protein sequences made more similar to human antibodies, but carry a larger stretch of non-human protein. Humanized antibody: There are other ways to develop monoclonal antibodies. This list covers many of the monoclonals developed for use in humans. Use of recombinant DNA in humanization process: The humanization process takes advantage of the fact that production of monoclonal antibodies can be accomplished using recombinant DNA to create constructs capable of expression in mammalian cell culture. That is, gene segments capable of producing antibodies are isolated and cloned into cells that can be grown in a bioreactor such that antibody proteins produced from the DNA of the cloned genes can be harvested en masse. The step involving recombinant DNA provides an intervention point that can be readily exploited to alter the protein sequence of the expressed antibody. The alterations to antibody structure that are achieved in the humanization process are therefore all effectuated through techniques at the DNA level. Not all methods for deriving antibodies intended for human therapy require a humanization step (e.g. phage display) but essentially all are dependent on techniques that similarly allow the "insertion" or "swapping-out" of portions of the antibody molecule. Distinction from "chimeric antibody": Humanization is usually seen as distinct from the creation of a mouse-human antibody chimera. So, although the creation of an antibody chimera is normally undertaken to achieve a more human-like antibody (by replacing constant region of the mouse antibody with that from human) simple chimeras of this type are not usually referred to as humanized. Rather, the protein sequence of a humanized antibody is essentially identical to that of a human variant, despite the non-human origin of some of its complementarity-determining region (CDR) segments responsible for the ability of the antibody to bind to its target antigen. Distinction from "chimeric antibody": Chimeric antibody names contain a -xi- stem. Examples of chimeric antibodies approved for human therapy include abciximab (ReoPro), basiliximab (Simulect), cetuximab (Erbitux), infliximab (Remicade) and rituximab (MabThera). There are also several examples of chimerics currently in clinical trials (e.g. bavituximab, see sortable list for additional examples). Humanizing via a chimeric intermediate: The humanization process may also include the creation of a mouse-human chimera as an initial step. 
In this case, a mouse variable region is spliced to a human constant region. The chimera can then be further humanized by selectively altering the sequence of amino acids in the variable region of the molecule. The alteration process must be "selective" to retain the specificity for which the antibody was originally developed. That is, since the CDR portions of the variable region are essential to the ability of the antibody to bind to its intended target, the amino acids in these portions cannot be altered without the risk of undermining the purpose of the development. Aside from the CDR segments, the portions of the variable regions that differ from those in humans can be corrected by exchanging the appropriate individual amino acids. This is accomplished at the DNA level through mutagenesis. Humanizing via a chimeric intermediate: Naming of humanized chimeras includes the stem for both designations (-xi- + -zu-). Otelixizumab is an example of a humanized chimera currently in clinical trials for treatment of rheumatoid arthritis and diabetes mellitus. Humanization by insertion of relevant CDRs into human antibody "scaffold": It is possible to produce a humanized antibody without creating a chimeric intermediate. "Direct" creation of a humanized antibody can be accomplished by inserting the appropriate CDR coding segments (so-called 'donor', responsible for the desired binding properties) into a human antibody "scaffold" (so-called 'acceptor'). As discussed above, this is achieved through recombinant DNA methods using an appropriate vector and expression in mammalian cells. That is, after an antibody is developed to have the desired properties in a mouse (or other non-human), the DNA coding for that antibody can be isolated, cloned into a vector and sequenced. The DNA sequence corresponding to the antibody CDRs can then be determined. Once the precise sequence of the desired CDRs are known, a strategy can be devised for inserting these sequences appropriately into a construct containing the DNA for a human antibody variant. The strategy may also employ synthesis of linear DNA fragments based on the reading of CDR sequences. The process requires computer-modelling software to determine which of the antibody's amino acids can be changed from murine-sequence to human-sequence without the changes compromising the conformation of the binding site. In the United States, this software was developed, patented, and demonstrated, by Protein Design Labs, Inc. in Mountain View, California, in the 1980s and 1990s.Alemtuzumab is an early example of an antibody whose humanization did not include a chimeric intermediate. In this case, a monoclonal dubbed "Campath-1" was developed to bind CD52 using a mouse system. The hypervariable loops of Campath-1 (that contain its CDRs and thereby impart its ability to bind CD52) were then extracted and inserted into a human antibody framework. Alemtuzumab is approved for treatment of B-cell chronic lymphocytic leukemia and is currently in clinical trials for a variety of other conditions including multiple sclerosis. Derivation from sources other than mice: There are technologies that completely avoid the use of mice or other non-human mammals in the process of discovering antibodies for human therapy. Examples of such systems include various "display" methods (primarily phage display) as well as methods that exploit the elevated B-cell levels that occur during a human immune response. 
Derivation from sources other than mice: Display methods These employ the selective principles of specific antibody production but exploit micro-organisms (as in phage display) or even cell free extracts (as in ribosome display). These systems rely on the creation of antibody gene "libraries" which can be wholly derived from human RNA isolated from peripheral blood. The immediate products of these systems are antibody fragments, normally Fab or scFv. Derivation from sources other than mice: This means that, although antibody fragments created using display methods are of fully human sequence, they are not full antibodies. Therefore, processes in essence identical to humanization are used to incorporate and express the derived affinities within a full antibody. Adalimumab (Humira) is an example of an antibody approved for human therapy that was created through phage display. Derivation from sources other than mice: Antibodies from human patients or vaccine recipients It is possible to exploit human immune reaction in the discovery of monoclonal antibodies. Simply put, human immune response works in the same way as that in a mouse or other non-human mammal. Therefore, persons experiencing a challenge to their immune system, such as an infectious disease, cancer or a vaccination are a potential source of monoclonal antibodies directed at that challenge. This approach seems especially apt for the development of anti-viral therapies that exploit the principles of passive immunity. Variants of this approach have been demonstrated in principle and some are finding their way into commercial development.
**Malaysian Journal of Nutrition** Malaysian Journal of Nutrition: The Malaysian Journal of Nutrition is a triannual peer-reviewed medical journal published by the Nutrition Society of Malaysia. It was established in 1995 and covers nutrition science. The editor-in-chief is Khor Geok Lin. Abstracting and indexing: The journal is abstracted and indexed in Index Medicus/PubMed/MEDLINE and Scopus.
**P6 (microarchitecture)** P6 (microarchitecture): The P6 microarchitecture is the sixth-generation Intel x86 microarchitecture, implemented by the Pentium Pro microprocessor that was introduced in November 1995. It is frequently referred to as i686. It was planned to be succeeded by the NetBurst microarchitecture used by the Pentium 4 in 2000, but was revived for the Pentium M line of microprocessors. The successor to the Pentium M variant of the P6 microarchitecture is the Core microarchitecture, which in turn is also derived from P6. P6 (microarchitecture): P6 was used within Intel's mainstream offerings from the Pentium Pro to the Pentium III, and was widely known for low power consumption, excellent integer performance, and relatively high instructions per cycle (IPC). Features: The P6 core was the sixth-generation Intel microprocessor in the x86 line. The first implementation of the P6 core was the Pentium Pro CPU in 1995, the immediate successor to the original Pentium design (P5). P6 processors dynamically translate IA-32 instructions into sequences of buffered RISC-like micro-operations, then analyze and reorder the micro-operations to detect parallelizable operations that may be issued to more than one execution unit at once. The Pentium Pro was the first x86 microprocessor designed by Intel to use this technique, though the NexGen Nx586, introduced in 1994, did so earlier. Other features first implemented in the x86 space in the P6 core include: Speculative execution and out-of-order completion (called "dynamic execution" by Intel), which required new retire units in the execution core. This lessened pipeline stalls, and in part enabled greater speed-scaling of the Pentium Pro and successive generations of CPUs. Features: Superpipelining: the pipeline grew from the Pentium's 5 stages to 14 stages in the Pentium Pro and early models of the Pentium III (Coppermine); it was later shortened to fewer than 10 stages in the Pentium M for the embedded and mobile market, because of the energy inefficiency and higher voltages encountered with the deeper pipeline of its predecessors; it was then lengthened again to a 10- to 12-stage pipeline in the Core 2, since improvements in the fabrication process could partly offset the higher power consumption of a deeper pipeline design while raising clock speed otherwise remained difficult. Features: A front-side bus using a variant of Gunning transceiver logic to enable four discrete processors to share system resources. Physical Address Extension (PAE) and a wider 36-bit address bus to support 64 GB of physical memory. Register renaming, which enabled more efficient execution of multiple instructions in the pipeline. CMOV instructions, which are heavily used in compiler optimization. Other new instructions: FCMOV, FCOMI/FCOMIP/FUCOMI/FUCOMIP, RDPMC, UD2. New instructions in the Pentium II Deschutes core: MMX, FXSAVE, FXRSTOR. New instructions in the Pentium III: Streaming SIMD Extensions. P6-based chips include the Celeron (Covington/Mendocino/Coppermine/Tualatin variants), Pentium Pro, Pentium II Overdrive (a Pentium II chip in the 387-pin Socket 8), Pentium II, Pentium II Xeon, Pentium III, and Pentium III Xeon. P6 Variant Pentium M: Upon release of the Pentium 4-M and Mobile Pentium 4, it was quickly realized that the new mobile NetBurst processors were not ideal for mobile computing. NetBurst-based processors were simply not as efficient per clock or per watt compared to their P6 predecessors.
Mobile Pentium 4 processors ran much hotter than Pentium III-M processors without significant performance advantages. Their inefficiency affected not only the cooling system complexity, but also the all-important battery life. Intel went back to the drawing board for a design that would be optimally suited for this market segment. The result was a modernized P6 design called the Pentium M. P6 Variant Pentium M: Design overview: Quad-pumped front-side bus. With the initial Banias core, Intel adopted the 400 MT/s FSB first used in the Pentium 4; the Dothan core moved to the 533 MT/s FSB, following the Pentium 4's evolution. Larger L1/L2 caches: the L1 cache increased from the predecessor's 32 KB to 64 KB in all models, while the L2 cache was initially 1 MB in the Banias core, then 2 MB in the Dothan core. Dynamic cache activation by quadrant selector from sleep states. SSE2 (Streaming SIMD Extensions 2) support. An enhanced 10- or 12-stage instruction pipeline, reduced from the 14 stages of the Pentium Pro/II/III, that allows for higher clock speeds without lengthening the individual pipeline stages. Dedicated register stack management. Addition of global history, indirect prediction, and loop prediction to the branch prediction table, and removal of local prediction. P6 Variant Pentium M: Micro-op fusion of certain sub-instructions, mediated by the decoding units, so that x86 instructions result in fewer micro-operations and thus require fewer processor cycles to complete. The Pentium M was the most power-efficient x86 processor for notebooks for several years, consuming a maximum of 27 watts at maximum load and 4-5 watts while idle. The processing efficiency gains brought about by its modernization allowed it to rival a Mobile Pentium 4 clocked over 1 GHz higher (the fastest-clocked Mobile Pentium 4 compared to the fastest-clocked Pentium M) and equipped with much more memory and bus bandwidth. The first Pentium M family processors ("Banias") internally support PAE but do not show the PAE support flag in their CPUID information; this causes some operating systems (primarily Linux distributions) to refuse to boot on such processors since PAE support is required in their kernels (see the short detection sketch at the end of this article). Windows 8 and later also refuse to boot on these CPUs for the same reason, as they specifically require PAE support to run properly. P6 Variant Pentium M: Banias/Dothan-variant chips include the Celeron M (Banias/Shelton/Dothan variants), Pentium M, A100/A110, EP80579, and CE 3100. P6 Variant Enhanced Pentium M: The Yonah CPU was launched in January 2006 under the Core brand. Single- and dual-core mobile versions were sold under the Core Solo, Core Duo, and Pentium Dual-Core brands, and a server version was released as Xeon LV. These processors provided partial solutions to some of the Pentium M's shortcomings by adding: SSE3 support; single- and dual-core technology with 2 MB of shared L2 cache (restructuring the processor organization); and increased FSB speed, with the FSB running at 533 MT/s or 667 MT/s. P6 Variant Enhanced Pentium M: A 12-stage instruction pipeline. This resulted in an interim microarchitecture for low-voltage-only CPUs, part way between P6 and the following Core microarchitecture. Yonah-variant chips include the Celeron M 400 series, Core Solo/Duo, Pentium Dual-Core T2060/T2080/T2130, and Xeon LV/ULV (Sossaman). Successor: On July 27, 2006, the Core microarchitecture, a derivative of P6, was launched in the form of the Core 2 processor. Subsequently, more processors were released with the Core microarchitecture under the Core 2, Xeon, Pentium and Celeron brand names.
The Core microarchitecture is Intel's final mainstream processor line to use a front-side bus, with all later Intel processors based on Nehalem and later Intel microarchitectures featuring an integrated memory controller and a QPI or DMI bus for communication with the rest of the system. Improvements relative to the Intel Core processors were: A 14-stage instruction pipeline that allows for higher clock speeds. Successor: SSE4.1 support for all Core 2 models manufactured on 45 nm lithography. Support for the 64-bit x86-64 architecture, which was previously only offered by Prescott processors, the last architectural installment of the Pentium 4. Increased FSB speed, ranging from 533 MT/s to 1600 MT/s. Increased L2 cache size, ranging from 1 MB to 12 MB (Core 2 Duo processors use a shared L2 cache, while in Core 2 Quad processors half of the total cache is shared by each core pair). Dynamic Front Side Bus Throttling (some mobile models), where the FSB speed is reduced by half, which by extension halves the processor's speed; the processor then goes into a low-power-consumption mode called Super Low Frequency Mode that helps extend battery life. Successor: Dynamic Acceleration Technology for some mobile Core 2 Duo processors, and Dual Dynamic Acceleration Technology for mobile Core 2 Quad processors. Dynamic Acceleration Technology allows the CPU to overclock one processor core while turning off the other. In Dual Dynamic Acceleration Technology two cores are deactivated and two cores are overclocked. This feature is triggered when an application uses only a single core (for Core 2 Duo) or up to two cores (for Core 2 Quad). The overclocking is performed by increasing the clock multiplier by 1. While all these chips are technically derivatives of the Pentium Pro, the architecture has gone through several radical changes since its inception.
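The Banias PAE quirk mentioned above comes down to a simple flag check. The following is a minimal illustrative sketch, not from the article: a Python, Linux-only check that reads the kernel-decoded CPU flags from /proc/cpuinfo, roughly the way an installer-style PAE test decides whether the flag is advertised (the function name and messages are assumptions for illustration).

```python
# Illustrative sketch (not from the article): how software on Linux typically
# decides whether a CPU advertises PAE, the check that trips up Banias-core
# Pentium M chips, which implement PAE but omit the flag from CPUID.
# Assumes a Linux system exposing /proc/cpuinfo.

def cpu_advertises_pae(cpuinfo_path="/proc/cpuinfo"):
    """Return True if a 'flags' line in /proc/cpuinfo lists 'pae'."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "pae" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    if cpu_advertises_pae():
        print("CPU advertises PAE; a PAE-requiring kernel would accept it.")
    else:
        print("No PAE flag advertised; installers that require PAE would refuse "
              "to boot, even if the silicon (as on Banias) actually supports it.")
```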
**Ramanujan prime** Ramanujan prime: In mathematics, a Ramanujan prime is a prime number that satisfies a result proven by Srinivasa Ramanujan relating to the prime-counting function. Origins and definition: In 1919, Ramanujan published a new proof of Bertrand's postulate which, as he notes, was first proved by Chebyshev. At the end of the two-page published paper, Ramanujan derived a generalized result, namely: π(x) − π(x/2) ≥ 1, 2, 3, 4, 5, ... for all x ≥ 2, 11, 17, 29, 41, ... respectively (OEIS: A104272), where π(x) is the prime-counting function, equal to the number of primes less than or equal to x. Origins and definition: The converse of this result is the definition of Ramanujan primes: The nth Ramanujan prime is the least integer Rn for which π(x) − π(x/2) ≥ n for all x ≥ Rn. In other words: Ramanujan primes are the least integers Rn for which there are at least n primes between x/2 and x for all x ≥ Rn. The first five Ramanujan primes are thus 2, 11, 17, 29, and 41. Note that the integer Rn is necessarily a prime number: π(x) − π(x/2) and, hence, π(x) must increase by obtaining another prime at x = Rn. Since π(x) − π(x/2) can increase by at most 1, π(Rn) − π(Rn/2) = n. Bounds and an asymptotic formula: For all n ≥ 1, the bounds 2n ln 2n < Rn < 4n ln 4n hold. If n > 1, then also p2n < Rn < p3n, where pn is the nth prime number. Bounds and an asymptotic formula: As n tends to infinity, Rn is asymptotic to the 2nth prime, i.e., Rn ~ p2n (n → ∞). All these results were proved by Sondow (2009), except for the upper bound Rn < p3n, which was conjectured by him and proved by Laishram (2010). The bound was improved by Sondow, Nicholson, and Noe (2011) to Rn ≤ (41/47) p3n, which is the optimal form of Rn ≤ c·p3n since it is an equality for n = 5.
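As an illustrative sketch (not part of the article), the first few Ramanujan primes can be computed directly from the definition above; the quoted bound Rn < 4n ln 4n is used only to choose a safe sieve limit, and all names in the code are hypothetical.

```python
# Illustrative sketch: computing the first few Ramanujan primes from the
# definition R_n = least integer such that pi(x) - pi(x/2) >= n for all x >= R_n.
# For integer x, the worst case of pi(t) - pi(t/2) over t in [x, x+1) is
# pi(x) - pi(floor(x/2)), so checking integer x suffices.
from math import log

def ramanujan_primes(count):
    # The bound R_n < 4n ln(4n) quoted in the text tells us how far to sieve.
    limit = max(100, int(4 * count * log(4 * count)) + 10)

    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])

    # prime-counting function pi(x) for 0 <= x <= limit
    pi = [0] * (limit + 1)
    for x in range(1, limit + 1):
        pi[x] = pi[x - 1] + (1 if sieve[x] else 0)

    f = [pi[x] - pi[x // 2] for x in range(limit + 1)]

    # suffix minimum: g[x] = min of f(y) for x <= y <= limit
    g = f[:]
    for x in range(limit - 1, -1, -1):
        g[x] = min(g[x], g[x + 1])

    result = []
    n, x = 1, 2
    while len(result) < count:
        while g[x] < n:
            x += 1
        result.append(x)  # this x is R_n
        n += 1
    return result

print(ramanujan_primes(5))  # expected: [2, 11, 17, 29, 41]
```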
**Neurocan** Neurocan: Neurocan core protein is a protein that in humans is encoded by the NCAN gene. Neurocan is a member of the lectican / chondroitin sulfate proteoglycan protein families and consists of neurocan core protein and chondroitin sulfate. It is thought to be involved in the modulation of cell adhesion and migration. Role in bipolar disorder: Neurocan is a significant component of the extracellular matrix, and its levels are modulated by a variety of factors, but mice in which the NCAN gene has been knocked out show no easily observable defects in brain development or behavior. However, a genome-wide association study published in 2011 identified Neurocan as a susceptibility factor for bipolar disorder. A more comprehensive study published in 2012 confirmed that association. The 2012 study examined correlations between NCAN alleles and various symptoms of bipolar disorder, and also examined the behavior of NCAN knockout mice. In the human subjects, it was found that NCAN genotype was strongly associated with manic symptoms but not with depressive symptoms. In the mice, the absence of functional Neurocan resulted in a variety of manic-like behaviors, which could be normalized by administering lithium.
**Digital intermediate** Digital intermediate: Digital intermediate (DI) is a motion picture finishing process which classically involves digitizing a motion picture and manipulating the color and other image characteristics. Definition and overview: A digital intermediate often replaces or augments the photochemical timing process and is usually the final creative adjustment to a movie before distribution in theaters. It is distinguished from the telecine process in which film is scanned and color is manipulated early in the process to facilitate editing. However, the lines between telecine and DI are continually blurred and are often executed on the same hardware by colorists of the same background. These two steps are typically part of the overall color management process in a motion picture at different points in time. A digital intermediate is also customarily done at higher resolution and with greater color fidelity than telecine transfers. Although originally used to describe a process that started with film scanning and ended with film recording, digital intermediate is also used to describe color correction and color grading and even final mastering when a digital camera is used as the image source and/or when the final movie is not output to film. This is due to recent advances in digital cinematography and digital projection technologies that strive to match film origination and film projection. Definition and overview: In traditional photochemical film finishing, an intermediate is produced by exposing film to the original camera negative. The intermediate is then used to mass-produce the films that get distributed to theaters. Color grading is done by varying the amount of red, green, and blue light used to expose the intermediate. Definition and overview: The digital intermediate process uses digital tools to color grade, which allows for much finer control of individual colors and areas of the image, and allows for the adjustment of image structure (grain, sharpness, etc.). The intermediate for film reproduction can then be produced by means of a film recorder. The physical intermediate film that is a result of the recording process is sometimes also called a digital intermediate, and is usually recorded to internegative (IN) stock, which is inherently finer-grain than original camera negative (OCN). One of the key technical achievements that made the transition to DI possible was the use of 3D look-up tables, which could be used to mimic how the digital image would look once it was printed onto release print stock. This removed a large amount of guesswork from the film-making process, and allowed greater freedom in the colour grading process while reducing risk. The digital master is often used as a source for a DCI-compliant distribution of the motion picture for digital projection. For archival purposes, the digital master created during the digital intermediate process can be recorded to very stable high dynamic range yellow-cyan-magenta (YCM) separations on black-and-white film with an expected 100-year or longer life.
This archival format, long used in the industry prior to the invention of DI, still provides an archival medium that is independent of changes in digital data recording technologies and file formats that might otherwise render digitally archived material unreadable in the long term. A "film intermediate" is an analog variation of a digital intermediate, where a project shot on digital video is printed onto film stock and transferred back to digital video to emulate film. The term was coined after it was used on the Oscar-winning 2012 short film "Curfew". The process was also used on the films Dune (2021) and The Batman (2022). History: Telecine tools to electronically capture film images are nearly as old as broadcast television, but the resulting images were widely considered unsuitable for exposing back onto film for theatrical distribution. Film scanners and recorders with quality sufficient to produce images that could be inter-cut with regular film began appearing in the 1970s, with significant improvements in the late 1980s and early 1990s. During this time, digitally processing an entire feature-length film was impractical because the scanners and recorders were extremely slow and the image files were too large relative to the computing power available. Instead, individual shots or short sequences were processed for special visual effects. History: In 1992, Visual Effects Supervisor/Producer Chris F. Woods broke through several "techno-barriers" in creating a digital studio to produce the visual effects for the 1993 release Super Mario Bros. It was the first feature film project to digitally scan a large number of VFX plates (over 700) at 2K resolution. It was also the first film scanned and recorded at Kodak's just-launched Cinesite facility in Hollywood. This project-based studio was the first on a feature film to use Discreet Logic's (now Autodesk) Flame and Inferno systems, which enjoyed early dominance as high-resolution / high-performance digital compositing systems.
1993: Snow White and the Seven Dwarfs – First film to be entirely scanned to digital files, manipulated, and recorded back to film at 4K resolution. The restoration project was done entirely at 4K resolution and 10-bit color depth using the Cineon system to digitally remove dirt and scratches and restore faded colors. Milestones: 1998: Pleasantville – The first time the majority of a new feature film was scanned, processed, and recorded digitally. The black-and-white-meets-color world portrayed in the movie was filmed entirely in color and selectively desaturated and contrast-adjusted digitally. The work was done in Los Angeles by Cinesite utilizing a Spirit DataCine for scanning at 2K resolution and a MegaDef color correction system from the UK company Pandora International. 1998: Zingo – The first feature film to use digital color correction via digital intermediate in its entirety. The work was performed at the Digital Film Lab in Copenhagen, using a Spirit DataCine to transfer the entire film to digital files at 2K resolution. The digital intermediate process was also used to perform a digital blowup of the film's original Super 16 source format to a 35mm output. Milestones: 1999: At Pacific Ocean Post Film, a team led by John McCunn and Greg Kimble used Kodak film scanners and a laser film printer, Cineon software, and proprietary tools to rebuild and repair the first two reels of the 1968 Beatles film Yellow Submarine for re-release. Milestones: 1999: Star Wars: Episode I – The Phantom Menace – Industrial Light & Magic (ILM) scanned the entirety of the visual-effects-laden film for the purposes of digital enhancement and the integration of thousands of separately filmed elements with computer-generated characters and environments. Outside of the approximately 2000 effects shots that were digitally manipulated, the remaining 170 non-effects shots were also scanned for continuity. However, after the digital shots were manipulated at ILM, they were filmed out individually and sent to Deluxe Labs where they were processed and color-timed photochemically. Milestones: 2000: Sorted – The first feature-length, color 35mm motion picture to fully utilize the digital intermediate process in its entirety from inception to completion. The film was produced at Wave Pictures' digital intermediate film facility in London, England. It was scanned at 2K resolution with 8 bits of color depth per channel, per pixel, using a pin-registered, liquid-gate Oxberry 6400 Motion Picture Film Scanner and recorded onto Kodak 5242 color intermediate stock using MGI Celco Cine V Film Recorders. Digital visual effects and color correction were done using a Discreet Logic Inferno. Sorted premiered at the Cannes Film Festival in May 2000. Milestones: 2000: O Brother, Where Art Thou? – The first time a digital intermediate was used on the entirety of a first-run Hollywood film which otherwise had very few visual effects. The work was done in Los Angeles by Cinesite utilizing a Spirit DataCine for scanning at 2K resolution, a Pandora International MegaDef system to adjust the color and a Kodak Lightning II recorder to output to film. Milestones: 2000: Chicken Run was the first wide-release feature film in Europe to use the digital intermediate process, digitally storing and manipulating every frame of the film before recording back to film.
Milestones: 2001: Honolulu Baby by Maurizio Nichetti – the first live-action feature film post-produced in Europe to use the 2K digital intermediate process from a production filmed in Super 35mm. It was made by Rumblefish with Massimo Germoglio as DI supervisor and film editor; the film was edited on Avid, scanned with a Spirit, with CGI in Maya, graphics in AE, and finishing and VFX in Inferno, and the entire film was recorded to internegative and printed on film. Milestones: 2004: Spider-Man 2 – The first digital intermediate on a new Hollywood film to be done entirely at 4K resolution. Although scanning, recording, and color correction were done at 4K by EFILM, most of the visual effects were created at 2K and were upscaled to 4K. 2005: Serenity – The first film to fully conform to Digital Cinema Initiatives specifications. 2008: Baraka – The first 8K-resolution digital intermediate, by FotoKem, of a 65mm negative source for the October 2008 remastered DVD and Blu-ray Disc release. The scan produced 30 terabytes of data and took 12–13 seconds to scan each frame, for a total scan time of over three weeks.
**Trimethoprim/sulfadiazine** Trimethoprim/sulfadiazine: Trimethoprim/sulfadiazine (TMP/SDZ) is a combination drug composed of trimethoprim and sulfadiazine used in the treatment of bacterial infections of animals, particularly horses.
**Dactylitis** Dactylitis: Dactylitis or sausage digit is inflammation of an entire digit (a finger or toe), and can be painful. The word dactyl comes from the Greek word "daktylos" meaning "finger". As a medical term, it refers to both the fingers and the toes. Associated conditions: Dactylitis can occur in seronegative arthropathies, such as psoriatic arthritis and ankylosing spondylitis, in sickle-cell disease as a result of a vaso-occlusive crisis with bone infarcts, and in infectious conditions including tuberculosis, syphilis, and leprosy. In reactive arthritis, sausage fingers occur due to synovitis. Dactylitis may also be seen with sarcoidosis. In sickle-cell disease it first manifests in infants between 6 and 9 months of age (as their protective fetal hemoglobin, HbF, is replaced with adult hemoglobin and the disease becomes apparent) and is very often the presenting sign of the disorder.
**Barber and Calverley** Barber and Calverley: Theodore Xenophon Barber (1927–2005) and David Smith Calverley (1937–2008) were American psychologists who studied "hypnotic behaviour". They measured how susceptible patients were to hypnotic induction. One result of their research was the demonstration that hypnotic induction was not superior to motivational instructions in producing a heightened state of suggestibility. The Barber Suggestibility Scale, a product of their research, measures hypnotic susceptibility with or without the use of a hypnotic induction.
**Fruit hat (pudding)** Fruit hat (pudding): Fruit hat is the generic name for a British steamed pudding, originally made with a pastry of suet and flour, and filled with fruit. Later, the term came to refer to a category of steamed puddings "made with butter, flour and eggs produce a sponge-like pud topped with sauce that can be put into the base of the bowl and cooked with the pudding... so the sauce melds with the sponge to produce a deliciously gunky top: the 'fruit hat'."
**Sup45p** Sup45p: Sup45p is the Saccharomyces cerevisiae (a yeast) eukaryotic translation termination factor. More specifically, it is the yeast eukaryotic release factor 1 (eRF1). Its job is to recognize stop codons in the mRNA and bind to them. It binds to the Sup35p protein and then takes on the shape of a tRNA molecule so that it can safely incorporate itself into the A site of the ribosome to disrupt its flow, "release" the protein, and end translation.
**Fracture mechanics** Fracture mechanics: Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of analytical solid mechanics to calculate the driving force on a crack and those of experimental solid mechanics to characterize the material's resistance to fracture. Fracture mechanics: Theoretically, the stress ahead of a sharp crack tip becomes infinite and cannot be used to describe the state around a crack. Fracture mechanics is used to characterise the loads on a crack, typically using a single parameter to describe the complete loading state at the crack tip. A number of different parameters have been developed. When the plastic zone at the tip of the crack is small relative to the crack length, the stress state at the crack tip is the result of elastic forces within the material; this regime is termed linear elastic fracture mechanics (LEFM) and can be characterised using the stress intensity factor K. Although the load on a crack can be arbitrary, in 1957 G. Irwin found any state could be reduced to a combination of three independent stress intensity factors: Mode I – Opening mode (a tensile stress normal to the plane of the crack), Mode II – Sliding mode (a shear stress acting parallel to the plane of the crack and perpendicular to the crack front), and Mode III – Tearing mode (a shear stress acting parallel to the plane of the crack and parallel to the crack front). When the size of the plastic zone at the crack tip is too large, elastic-plastic fracture mechanics can be used with parameters such as the J-integral or the crack tip opening displacement. Fracture mechanics: The characterising parameter describes the state of the crack tip, which can then be related to experimental conditions to ensure similitude. Crack growth occurs when the parameters typically exceed certain critical values. Corrosion may cause a crack to slowly grow when the stress corrosion stress intensity threshold is exceeded. Similarly, small flaws may result in crack growth when subjected to cyclic loading. Known as fatigue, it was found that for long cracks, the rate of growth is largely governed by the range of the stress intensity ΔK experienced by the crack due to the applied loading. Fast fracture will occur when the stress intensity exceeds the fracture toughness of the material. The prediction of crack growth is at the heart of the damage tolerance mechanical design discipline. Motivation: The processes of material manufacture, processing, machining, and forming may introduce flaws in a finished mechanical component. Arising from the manufacturing process, interior and surface flaws are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe (that is, do not grow) and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible to achieve through damage tolerance analysis the safe operation of a structure. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new. Fracture mechanics should attempt to provide quantitative answers to the following questions: What is the strength of the component as a function of crack size? What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size?
How long does it take for a crack to grow from a certain initial size, for example the minimum detectable crack size, to the maximum permissible crack size? What is the service life of a structure when a certain pre-existing flaw size (e.g. a manufacturing defect) is assumed to exist? During the period available for crack detection how often should the structure be inspected for cracks? Linear elastic fracture mechanics: Griffith's criterion: Fracture mechanics was developed during World War I by English aeronautical engineer A. A. Griffith – thus the term Griffith crack – to explain the failure of brittle materials. Griffith's work was motivated by two contradictory facts: The stress needed to fracture bulk glass is around 100 MPa (15,000 psi). Linear elastic fracture mechanics: The theoretical stress needed for breaking atomic bonds of glass is approximately 10,000 MPa (1,500,000 psi). A theory was needed to reconcile these conflicting observations. Also, experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size-dependence of strength, was due to the presence of microscopic flaws in the bulk material. Linear elastic fracture mechanics: To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens. The artificial flaw was in the form of a surface crack which was much larger than other flaws in a specimen. The experiments showed that the product of the square root of the flaw length (a) and the stress at fracture (σf) was nearly constant, which is expressed by the equation: σf√a ≈ C. An explanation of this relation in terms of linear elasticity theory is problematic. Linear elasticity theory predicts that stress (and hence the strain) at the tip of a sharp flaw in a linear elastic material is infinite. To avoid that problem, Griffith developed a thermodynamic approach to explain the relation that he observed. Linear elastic fracture mechanics: The growth of a crack, the extension of the surfaces on either side of the crack, requires an increase in the surface energy. Griffith found an expression for the constant C in terms of the surface energy of the crack by solving the elasticity problem of a finite crack in an elastic plate. Briefly, the approach was: Compute the potential energy stored in a perfect specimen under a uniaxial tensile load. Linear elastic fracture mechanics: Fix the boundary so that the applied load does no work and then introduce a crack into the specimen. The crack relaxes the stress and hence reduces the elastic energy near the crack faces. On the other hand, the crack increases the total surface energy of the specimen. Linear elastic fracture mechanics: Compute the change in the free energy (surface energy − elastic energy) as a function of the crack length. Failure occurs when the free energy attains a peak value at a critical crack length, beyond which the free energy decreases as the crack length increases, i.e. by causing fracture. Using this procedure, Griffith found that C = √(2Eγ/π), where E is the Young's modulus of the material and γ is the surface energy density of the material.
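A small numerical sketch of Griffith's relation may help; it is illustrative only, using the glass values E = 62 GPa and γ = 1 J/m² quoted in the following sentence, while the flaw size is an assumed example value.

```python
# Illustrative sketch: Griffith's criterion sigma_f * sqrt(a) = C with
# C = sqrt(2*E*gamma/pi). The material values are the glass figures quoted
# in the text; the crack half-length is an assumed example value.
from math import pi, sqrt

E = 62e9        # Young's modulus of glass, Pa
gamma = 1.0     # surface energy density, J/m^2
a = 1e-6        # assumed flaw half-length, m (1 micrometre)

C = sqrt(2 * E * gamma / pi)   # Griffith constant, Pa*sqrt(m)
sigma_f = C / sqrt(a)          # predicted fracture stress, Pa

print(f"C       = {C:.3e} Pa*m^0.5")
print(f"sigma_f = {sigma_f / 1e6:.1f} MPa for a {a * 1e6:.1f} um flaw")
```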
Assuming E = 62 GPa and γ = 1 J/m² gives excellent agreement of Griffith's predicted fracture stress with experimental results for glass. Linear elastic fracture mechanics: For the simple case of a thin rectangular plate with a crack perpendicular to the load, the energy release rate, G, becomes: G = πσ²a/E, where σ is the applied stress, a is half the crack length, and E is the Young's modulus, which for the case of plane strain should be divided by the plate stiffness factor (1 − ν²). The strain energy release rate can physically be understood as: the rate at which energy is absorbed by growth of the crack. Linear elastic fracture mechanics: However, we also have that: Gc = πσf²a/E. If G ≥ Gc, this is the criterion for which the crack will begin to propagate. For materials highly deformed before crack propagation, the linear elastic fracture mechanics formulation is no longer applicable and an adapted model is necessary to describe the stress and displacement field close to the crack tip, such as on fracture of soft materials. Linear elastic fracture mechanics: Irwin's modification: Griffith's work was largely ignored by the engineering community until the early 1950s. The reasons for this appear to be (a) in the actual structural materials the level of energy needed to cause fracture is orders of magnitude higher than the corresponding surface energy, and (b) in structural materials there are always some inelastic deformations around the crack front that would make the assumption of a linear elastic medium with infinite stresses at the crack tip highly unrealistic. Griffith's theory provides excellent agreement with experimental data for brittle materials such as glass. For ductile materials such as steel, although the relation σf√a = C still holds, the surface energy (γ) predicted by Griffith's theory is usually unrealistically high. A group working under G. R. Irwin at the U.S. Naval Research Laboratory (NRL) during World War II realized that plasticity must play a significant role in the fracture of ductile materials. Linear elastic fracture mechanics: In ductile materials (and even in materials that appear to be brittle), a plastic zone develops at the tip of the crack. As the applied load increases, the plastic zone increases in size until the crack grows and the elastically strained material behind the crack tip unloads. The plastic loading and unloading cycle near the crack tip leads to the dissipation of energy as heat. Hence, a dissipative term has to be added to the energy balance relation devised by Griffith for brittle materials. In physical terms, additional energy is needed for crack growth in ductile materials as compared to brittle materials. Linear elastic fracture mechanics: Irwin's strategy was to partition the energy into two parts: the stored elastic strain energy, which is released as a crack grows and is the thermodynamic driving force for fracture; and the dissipated energy, which includes plastic dissipation and the surface energy (and any other dissipative forces that may be at work) and provides the thermodynamic resistance to fracture. Then the total energy is: G = 2γ + Gp, where γ is the surface energy and Gp is the plastic dissipation (and dissipation from other sources) per unit area of crack growth. The modified version of Griffith's energy criterion can then be written as σf√a = √(EG/π). Linear elastic fracture mechanics: For brittle materials such as glass, the surface energy term dominates and G ≈ 2γ = 2 J/m².
For ductile materials such as steel, the plastic dissipation term dominates and $G \approx G_p = 1000$ J/m². For polymers close to the glass transition temperature, we have intermediate values of G between 2 and 1000 J/m². Stress intensity factor: Another significant achievement of Irwin and his colleagues was to find a method of calculating the amount of energy available for fracture in terms of the asymptotic stress and displacement fields around a crack front in a linear elastic solid. This asymptotic expression for the stress field in mode I loading is related to the stress intensity factor $K_I$ as follows: $\sigma_{ij} = \frac{K_I}{\sqrt{2\pi r}} f_{ij}(\theta)$ where $\sigma_{ij}$ are the Cauchy stresses, r is the distance from the crack tip, θ is the angle with respect to the plane of the crack, and $f_{ij}$ are functions that depend on the crack geometry and loading conditions. Irwin called the quantity K the stress intensity factor. Since the quantity $f_{ij}$ is dimensionless, the stress intensity factor can be expressed in units of MPa√m. Stress intensity replaced strain energy release rate and a term called fracture toughness replaced surface weakness energy. Both of these terms are simply related to the energy terms that Griffith used: $K_I = \sigma\sqrt{\pi a}$ and $K_c = \sqrt{E G_c}$ for plane stress, $K_c = \sqrt{\frac{E G_c}{1-\nu^2}}$ for plane strain, where $K_I$ is the mode I stress intensity, $K_c$ the fracture toughness, and ν is Poisson's ratio. Linear elastic fracture mechanics: Fracture occurs when $K_I \geq K_c$. For the special case of plane strain deformation, $K_c$ becomes $K_{Ic}$ and is considered a material property. The subscript I arises because of the different ways of loading a material to enable a crack to propagate. It refers to so-called "mode I" loading as opposed to mode II or III. The expression for $K_I$ will be different for geometries other than the center-cracked infinite plate, as discussed in the article on the stress intensity factor. Consequently, it is necessary to introduce a dimensionless correction factor, Y, in order to characterize the geometry. This correction factor, also often referred to as the geometric shape factor, is given by empirically determined series and accounts for the type and geometry of the crack or notch. We thus have: $K_I = Y \sigma \sqrt{\pi a}$ where Y is a function of the crack length and width of sheet given, for a sheet of finite width W containing a through-thickness crack of length 2a, by: $Y = \sqrt{\sec\left(\frac{\pi a}{W}\right)}$ Strain energy release: Irwin was the first to observe that if the size of the plastic zone around a crack is small compared to the size of the crack, the energy required to grow the crack will not be critically dependent on the state of stress (the plastic zone) at the crack tip. In other words, a purely elastic solution may be used to calculate the amount of energy available for fracture. Linear elastic fracture mechanics: The energy release rate for crack growth or strain energy release rate may then be calculated as the change in elastic strain energy per unit area of crack growth, i.e., $G := \left[\frac{\partial U}{\partial a}\right]_P = -\left[\frac{\partial U}{\partial a}\right]_u$ where U is the elastic energy of the system and a is the crack length. Either the load P or the displacement u is held constant while evaluating the above expressions. Linear elastic fracture mechanics: Irwin showed that for a mode I crack (opening mode) the strain energy release rate and the stress intensity factor are related by: $G = \frac{K_I^2}{E}$ for plane stress and $G = \frac{(1-\nu^2) K_I^2}{E}$ for plane strain, where E is the Young's modulus, ν is Poisson's ratio, and $K_I$ is the stress intensity factor in mode I.
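A short sketch of the stress intensity calculation just described, using the finite-width correction factor $Y = \sqrt{\sec(\pi a / W)}$. The stress, crack size, sheet width and toughness below are assumed, purely illustrative numbers, and the function names are hypothetical.

```python
import math

def geometry_factor_finite_width(a, W):
    """Y = sqrt(sec(pi*a/W)) for a centre through-crack of length 2a in a sheet of width W."""
    return math.sqrt(1.0 / math.cos(math.pi * a / W))

def stress_intensity(sigma, a, Y=1.0):
    """Mode I stress intensity factor K_I = Y * sigma * sqrt(pi * a)."""
    return Y * sigma * math.sqrt(math.pi * a)

# Assumed, illustrative values only:
sigma = 150e6   # applied stress, Pa
a = 10e-3       # half crack length, m
W = 100e-3      # sheet width, m
KIc = 50e6      # fracture toughness, Pa*sqrt(m)

Y = geometry_factor_finite_width(a, W)
KI = stress_intensity(sigma, a, Y)
print(f"Y = {Y:.3f}, K_I = {KI / 1e6:.1f} MPa*sqrt(m)")
print("fracture predicted" if KI >= KIc else "no fracture predicted")
```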
Irwin also showed that the strain energy release rate of a planar crack in a linear elastic body can be expressed in terms of the mode I, mode II (sliding mode), and mode III (tearing mode) stress intensity factors for the most general loading conditions. Linear elastic fracture mechanics: Next, Irwin adopted the additional assumption that the size and shape of the energy dissipation zone remains approximately constant during brittle fracture. This assumption suggests that the energy needed to create a unit fracture surface is a constant that depends only on the material. This new material property was given the name fracture toughness and designated $G_{Ic}$. Today, it is the critical stress intensity factor $K_{Ic}$, found in the plane strain condition, which is accepted as the defining property in linear elastic fracture mechanics. Linear elastic fracture mechanics: Crack tip plastic zone: In theory the stress at the crack tip, where the radius is nearly zero, would tend to infinity. This would be considered a stress singularity, which is not possible in real-world applications. For this reason, in numerical studies in the field of fracture mechanics, it is often appropriate to represent cracks as round tipped notches, with a geometry dependent region of stress concentration replacing the crack-tip singularity. In actuality, the stress concentration at the tip of a crack within real materials has been found to have a finite value but larger than the nominal stress applied to the specimen. Linear elastic fracture mechanics: Nevertheless, there must be some sort of mechanism or property of the material that prevents such a crack from propagating spontaneously. The assumption is that the plastic deformation at the crack tip effectively blunts the crack tip. This deformation depends primarily on the applied stress in the applicable direction (in most cases, this is the y-direction of a regular Cartesian coordinate system), the crack length, and the geometry of the specimen. To estimate how this plastic deformation zone extends from the crack tip, Irwin equated the yield strength of the material to the far-field stresses of the y-direction along the crack (x direction) and solved for the effective radius. From this relationship, and assuming that the crack is loaded to the critical stress intensity factor, Irwin developed the following expression for the idealized radius of the zone of plastic deformation at the crack tip: $r_p = \frac{K_C^2}{2 \pi \sigma_Y^2}$ Models of ideal materials have shown that this zone of plasticity is centered at the crack tip. This equation gives the approximate ideal radius of the plastic zone deformation beyond the crack tip, which is useful to many structural scientists because it gives a good estimate of how the material behaves when subjected to stress. In the above equation, the parameters of the stress intensity factor and indicator of material toughness, $K_C$, and the yield stress, $\sigma_Y$, are of importance because they illustrate many things about the material and its properties, as well as about the plastic zone size. For example, if $K_C$ is high, then it can be deduced that the material is tough, and if $\sigma_Y$ is low, one knows that the material is more ductile. The ratio of these two parameters is important to the radius of the plastic zone. For instance, if $\sigma_Y$ is small, then the squared ratio of $K_C$ to $\sigma_Y$ is large, which results in a larger plastic radius. This implies that the material can plastically deform, and, therefore, is tough.
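The Irwin plastic-zone estimate can be evaluated directly. The toughness and yield strength values in the sketch below are assumed, purely illustrative numbers chosen to show how a lower yield strength enlarges the plastic zone, in line with the discussion above.

```python
import math

def irwin_plastic_zone_radius(Kc, sigma_y):
    """Idealized plastic zone radius at the crack tip: r_p = Kc^2 / (2*pi*sigma_Y^2)."""
    return Kc**2 / (2.0 * math.pi * sigma_y**2)

# Assumed illustrative values (same toughness, two different yield strengths):
for Kc, sigma_y in [(100e6, 300e6), (100e6, 900e6)]:
    rp = irwin_plastic_zone_radius(Kc, sigma_y)
    print(f"Kc = {Kc / 1e6:.0f} MPa*sqrt(m), sigma_Y = {sigma_y / 1e6:.0f} MPa "
          f"-> r_p = {rp * 1e3:.2f} mm")
```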
This estimate of the size of the plastic zone beyond the crack tip can then be used to more accurately analyze how a material will behave in the presence of a crack. Linear elastic fracture mechanics: The same process as described above for a single loading event also applies to cyclic loading. If a crack is present in a specimen that undergoes cyclic loading, the specimen will plastically deform at the crack tip and delay the crack growth. In the event of an overload or excursion, this model changes slightly to accommodate the sudden increase in stress from that which the material previously experienced. At a sufficiently high load (overload), the crack grows out of the plastic zone that contained it and leaves behind the pocket of the original plastic deformation. Now, assuming that the overload stress is not sufficiently high as to completely fracture the specimen, the crack will undergo further plastic deformation around the new crack tip, enlarging the zone of residual plastic stresses. This process further toughens and prolongs the life of the material because the new plastic zone is larger than what it would be under the usual stress conditions. This allows the material to undergo more cycles of loading. This idea can be illustrated further by the graph of aluminum with a center crack undergoing overloading events. Linear elastic fracture mechanics: Limitations: A problem arose for the NRL researchers because naval materials, e.g., ship-plate steel, are not perfectly elastic but undergo significant plastic deformation at the tip of a crack. One basic assumption in Irwin's linear elastic fracture mechanics is small scale yielding, the condition that the size of the plastic zone is small compared to the crack length. However, this assumption is quite restrictive for certain types of failure in structural steels, though such steels can be prone to brittle fracture, which has led to a number of catastrophic failures. Linear elastic fracture mechanics: Linear-elastic fracture mechanics is of limited practical use for structural steels, and fracture toughness testing can be expensive. Elastic–plastic fracture mechanics: Most engineering materials show some nonlinear elastic and inelastic behavior under operating conditions that involve large loads. In such materials the assumptions of linear elastic fracture mechanics may not hold, that is, the plastic zone at a crack tip may have a size of the same order of magnitude as the crack size, and the size and shape of the plastic zone may change as the applied load is increased and also as the crack length increases. Therefore, a more general theory of crack growth is needed for elastic-plastic materials that can account for: the local conditions for initial crack growth, which include the nucleation, growth, and coalescence of voids (decohesion) at a crack tip. Elastic–plastic fracture mechanics: a global energy balance criterion for further crack growth and unstable fracture. Elastic–plastic fracture mechanics: CTOD: Historically, the first parameter for the determination of fracture toughness in the elasto-plastic region was the crack tip opening displacement (CTOD), or the "opening at the apex of the crack". This parameter was determined by Wells during the studies of structural steels which, due to their high toughness, could not be characterized with the linear elastic fracture mechanics model.
He noted that, before the fracture happened, the walls of the crack were moving apart, and that the crack tip, after fracture, ranged from acute to rounded off due to plastic deformation. In addition, the rounding of the crack tip was more pronounced in steels with superior toughness. Elastic–plastic fracture mechanics: There are a number of alternative definitions of CTOD. In the two most common definitions, CTOD is the displacement at the original crack tip and the 90 degree intercept. The latter definition was suggested by Rice and is commonly used to infer CTOD in finite element models. Note that these two definitions are equivalent if the crack tip blunts in a semicircle. Elastic–plastic fracture mechanics: Most laboratory measurements of CTOD have been made on edge-cracked specimens loaded in three-point bending. Early experiments used a flat paddle-shaped gage that was inserted into the crack; as the crack opened, the paddle gage rotated, and an electronic signal was sent to an x-y plotter. This method was inaccurate, however, because it was difficult to reach the crack tip with the paddle gage. Today, the displacement V at the crack mouth is measured, and the CTOD is inferred by assuming the specimen halves are rigid and rotate about a hinge point (the crack tip). Elastic–plastic fracture mechanics: R-curve: An early attempt in the direction of elastic-plastic fracture mechanics was Irwin's crack extension resistance curve, also called the crack growth resistance curve or R-curve. This curve acknowledges the fact that the resistance to fracture increases with growing crack size in elastic-plastic materials. The R-curve is a plot of the total energy dissipation rate as a function of the crack size and can be used to examine the processes of slow stable crack growth and unstable fracture. However, the R-curve was not widely used in applications until the early 1970s. The main reasons appear to be that the R-curve depends on the geometry of the specimen and the crack driving force may be difficult to calculate. Elastic–plastic fracture mechanics: J-integral: In the mid-1960s James R. Rice (then at Brown University) and G. P. Cherepanov independently developed a new toughness measure to describe the case where there is sufficient crack-tip deformation that the part no longer obeys the linear-elastic approximation. Rice's analysis, which assumes non-linear elastic (or monotonic deformation theory plastic) deformation ahead of the crack tip, is designated the J-integral. This analysis is limited to situations where plastic deformation at the crack tip does not extend to the furthest edge of the loaded part. It also demands that the assumed non-linear elastic behavior of the material is a reasonable approximation in shape and magnitude to the real material's load response. The elastic-plastic failure parameter is designated $J_{Ic}$ and is conventionally converted to $K_{Ic}$ using the equation below. Also note that the J-integral approach reduces to the Griffith theory for linear-elastic behavior.
Elastic–plastic fracture mechanics: The mathematical definition of the J-integral is as follows: $J = \int_{\Gamma} \left( w\,\mathrm{d}y - T_i \frac{\partial u_i}{\partial x}\,\mathrm{d}s \right)$ with $w = \int_0^{\varepsilon_{ij}} \sigma_{ij}\,\mathrm{d}\varepsilon_{ij}$ where Γ is an arbitrary path clockwise around the apex of the crack, w is the density of strain energy, $T_i$ are the components of the traction vector, $u_i$ are the components of the displacement vector, ds is an incremental length along the path Γ, and $\sigma_{ij}$ and $\varepsilon_{ij}$ are the stress and strain tensors. Since engineers became accustomed to using $K_{Ic}$ to characterise fracture toughness, a relation has been used to reduce $J_{Ic}$ to it: $K_{Ic} = \sqrt{E^{*} J_{Ic}}$ where $E^{*} = E$ for plane stress and $E^{*} = \frac{E}{1-\nu^2}$ for plane strain. Elastic–plastic fracture mechanics: Cohesive zone model: When a significant region around a crack tip has undergone plastic deformation, other approaches can be used to determine the possibility of further crack extension and the direction of crack growth and branching. A simple technique that is easily incorporated into numerical calculations is the cohesive zone model method, which is based on concepts proposed independently by Barenblatt and Dugdale in the early 1960s. The relationship between the Dugdale-Barenblatt models and Griffith's theory was first discussed by Willis in 1967. The equivalence of the two approaches in the context of brittle fracture was shown by Rice in 1968. Elastic–plastic fracture mechanics: Transition flaw size: Let a material have a yield strength $\sigma_Y$ and a mode I fracture toughness $K_{Ic}$. Based on fracture mechanics, the material will fail at a stress $\sigma_{fail} = K_{Ic}/\sqrt{\pi a}$. Based on plasticity, the material will yield when $\sigma_{fail} = \sigma_Y$. These curves intersect when $a = K_{Ic}^2/(\pi \sigma_Y^2)$. This value of a is called the transition flaw size $a_t$, and depends on the material properties of the structure. When $a < a_t$, the failure is governed by plastic yielding, and when $a > a_t$ the failure is governed by fracture mechanics. The value of $a_t$ for engineering alloys is 100 mm and for ceramics is 0.001 mm. If we assume that manufacturing processes can give rise to flaws on the order of micrometers, then it can be seen that ceramics are more likely to fail by fracture, whereas engineering alloys would fail by plastic deformation.
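As a numerical illustration of the transition flaw size idea, the sketch below uses assumed, order-of-magnitude material properties (invented for illustration, not values from the text) to decide which mechanism governs failure for a micrometre-scale flaw.

```python
import math

def transition_flaw_size(KIc, sigma_y):
    """a_t = KIc^2 / (pi * sigma_Y^2): crossover between yield- and fracture-governed failure."""
    return KIc**2 / (math.pi * sigma_y**2)

def failure_mode(a, KIc, sigma_y):
    """Return which mechanism governs failure for a flaw of size a."""
    return "plastic yielding" if a < transition_flaw_size(KIc, sigma_y) else "fracture"

# Assumed illustrative properties (roughly alloy-like vs. ceramic-like):
alloy   = dict(KIc=100e6, sigma_y=500e6)    # Pa*sqrt(m), Pa
ceramic = dict(KIc=3e6,   sigma_y=2000e6)

flaw = 1e-6  # a flaw on the order of a micrometre, as assumed in the text
for name, props in [("alloy", alloy), ("ceramic", ceramic)]:
    at = transition_flaw_size(props["KIc"], props["sigma_y"])
    print(f"{name}: a_t = {at * 1e3:.4f} mm, 1 um flaw fails by {failure_mode(flaw, **props)}")
```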
**Carbon monoxide-releasing molecules** Carbon monoxide-releasing molecules: Carbon monoxide-releasing molecules (CORMs) are chemical compounds designed to release controlled amounts of carbon monoxide (CO). CORMs are being developed as potential therapeutic agents to locally deliver CO to cells and tissues, thus overcoming limitations of CO gas inhalation protocols. CO is best known for its toxicity in carbon monoxide poisoning at high doses. However, CO is a gasotransmitter and supplemental low dosage of CO has been linked to therapeutic benefits. Pre-clinical research has focused on CO's anti-inflammatory activity with significant applications in cardiovascular disease, oncology, transplant surgery, and neuroprotection. History: The simplest source of CO is from a combustion reaction via burning sources such as fossil fuels or fire wood. Sources releasing CO upon thermal decomposition or combustion are generally not considered CORMs. History: Therapeutic interest in CO dates back to the study of factitious airs (hydrocarbonate) in the 1790s by Thomas Beddoes, James Watt, James Lind, Humphry Davy, Tiberius Cavallo and many others. Nickel tetracarbonyl was the first carbonyl complex used to achieve local delivery of CO and was the first CO delivery molecule suggested to have therapeutic potential in 1891. The acronym CORM was coined in 2002, which marks the first modern biomedical and pharmaceutical initiative. The enzymatic reaction of heme oxygenase inspired the development of synthetic CORMs. History: The first synthetic CORMs were typically metal carbonyl complexes. A representative CORM that has been extensively characterized both from a biochemical and pharmacological viewpoint is the ruthenium(II) complex Ru(glycinate)Cl(CO)3, commonly known as CORM-3. Therapeutic data pertaining to metallic CORMs are being reappraised to elucidate if observed effects are actually due to CO, or if metal reactivity mediates physiological effects via thiol depletion, facilitating reduction, ion channel blockage, or redox catalysis. Despite questions pertaining to transition metals, pure CO gas and alternative non-metallic CO prodrugs and drug delivery devices have confirmed CO's therapeutic potential. CORM classifications: Transition metal CORMs The majority of therapeutically relevant CORMs are transition metal complexes primarily based on iron, molybdenum, ruthenium, manganese, cobalt, rhenium and others. PhotoCORMs The release of CO from carrier agents can be induced photochemically. These carriers are called photoCORMs and include both metal complexes and metal-free (organic) compounds of various structural motifs which could be regarded as a special type of photolabile protecting group. ET-CORMs Enzyme-triggered CORMs (ET-CORMs) have been developed to improve selective local delivery of CO. Some ET-CORM prodrugs are activated by esterase enzymes for site-specific liberation of CO. CO prodrugs Organic CORMs are being developed to overcome reactivity and certain toxicity limitations of inorganic CORMs. CORM classifications: Methylene chloride was the first organic CORM orally administered, based on previous reports of carboxyhemoglobin formation via metabolism.
The second organic CORM, CORM-A1 (sodium boranocarbonate), was developed based on a 1960s report of CO release from potassium boranocarbonate. In 2003, cyclic oxocarbons were suggested as a source for therapeutic CO, including deltic acid, squaric acid, croconic acid, and rhodizonic acid and their salts. Recent years have seen increasing interest in organic CO prodrugs because of the need to consider drug developability issues in developing CO-based therapeutics. These CO prodrugs have tunable release rates, triggered release, and the ability to release more than one payload from a single prodrug. CORM classifications: Enzyme hybrids Based on the synergism of the heme oxygenase system and CO delivery, a new molecular hybrid-CORM (HYCO) class emerged consisting of a conjoined HO-1 inducer and CORM species. One such HYCO includes a dimethyl fumarate moiety which activates NRF2 to thereby induce HO-1, whilst the CORM moiety also liberates CO. CORM classifications: Carbon monoxide releasing materials Carbon monoxide releasing materials (CORMAs) are essentially novel drug formulations and drug delivery platforms which have emerged to overcome the pharmaceutical limitations of most CORM species. An exemplary CORMA developed by Hubbell consists of a formulation of micelles prepared from triblock copolymers with a CORM entity, which is triggered for release via addition of cysteine. Other CO-releasing scaffolds include polymers, peptides, silica nanoparticles, nanodiamond, magnetic nanoparticles, nanofiber gel, metallodendrimers, and CORM-protein (macromolecule) conjugates. Other advanced drug delivery devices, such as encapsulated CORMs and extracorporeal membrane-inspired technologies, have been developed. CORM classifications: Carboxyhemoglobin infusion Carboxyhemoglobin can be infused to deliver CO. The most common approaches are based on polyethylene glycol (PEG)-conjugated bovine carboxyhemoglobin and maleimide-PEG conjugated human carboxyhemoglobin. Porphyrins Porphyrin structures such as heme, hemin, and metallic protoporphyrin IX (PPIX) analogs (such as cobalt PPIX) have been deployed to induce heme oxygenase and subsequently undergo biotransformation to liberate CO, the inorganic ion, and biliverdin/bilirubin. Some PPIX analogs, such as tin PPIX, tin mesoporphyrin, and zinc PPIX, are heme oxygenase inhibitors. Endogenous CO: HMOX is regarded as the main source of endogenous CO production, though other minor contributors have been identified in recent years. CO is formed at a rate of 16.4 μmol/hour in the human body, ~86% originating from heme via heme oxygenase and ~14% from non-heme sources including: photooxidation, lipid peroxidation, and xenobiotics. The average carboxyhemoglobin (CO-Hb) level in a non-smoker is under 3% CO-Hb (whereas a smoker may reach levels near 10% CO-Hb), though geographic location, occupation, health and behavior are contributing variables. Endogenous CO: Heme oxygenase In the late 1960s Rudi Schmid characterized the enzyme that facilitates the reaction for heme catabolism, thereby identifying the heme oxygenase (HMOX) enzyme. Endogenous CO: HMOX is a heme-containing member of the heat shock protein (HSP) family identified as HSP32. Three isoforms of HMOX have been identified to date, including the stress-induced HMOX-1 and the constitutive HMOX-2. HMOX-1 is considered a cell rescue protein which is induced in response to oxidative stress and numerous disease states.
Furthermore, HMOX-1 is induced by countless molecules including statins, hemin, and natural products. HMOX catalyzes the degradation of heme to biliverdin/bilirubin, ferrous ion, and CO. Though present throughout the body, HMOX has significant activity in the spleen in the degradation of hemoglobin during erythrocyte recycling (0.8% of the erythrocyte pool per day), which accounts for ~80% of heme-derived endogenous CO production. The majority of the remaining 20% of heme-derived CO production is attributed to hepatic catabolism of hemoproteins (myoglobin, cytochromes, catalase, peroxidases, soluble guanylate cyclase, nitric oxide synthase) and ineffective erythropoiesis in bone marrow. The enzymatic velocity and catalytic activity of HMOX can be enhanced by a plethora of dietary substances and xenobiotics to increase CO production. Endogenous CO: Minor CO sources The formation of CO from lipid peroxidation was first reported in the late 1960s and is regarded as a minor contributor to endogenous CO production. Other contributing sources include: the microbiome, cytochrome P450 reductase, human acireductone dioxygenase, tyrosinase, lipid peroxidation, alpha-keto acids, and other oxidative and redox mechanisms. CO pharmacology: Carbon monoxide is one of three gaseous signaling molecules alongside nitric oxide and hydrogen sulfide. These gases are collectively referred to as gasotransmitters. CO is a classical example of hormesis, such that a low dose is essential and beneficial, whereas an absence of or excessive exposure to CO can be toxic. CO pharmacology: Signaling The first evidence of CO as a signaling molecule occurred upon observation of CO stimulating soluble guanylate cyclase and subsequent cyclic guanosine monophosphate (cGMP) production to serve as a vasodilator in vascular smooth muscle cells. The anti-inflammatory effects of CO are attributed to activation of the p38 mitogen-activated protein kinase (MAPK) pathway. While CO commonly interacts with the ferrous iron atom of heme in a hemoprotein, it has been demonstrated that CO activates calcium-dependent potassium channels by engaging in hydrogen-bonding with surface histidine residues. CO may have an inhibitory effect on numerous proteins including cytochrome P450 and cytochrome c oxidase. CO pharmacology: Pharmacokinetics CO has approximately 210x greater affinity for hemoglobin than oxygen. The equilibrium dissociation constant for the reaction Hb-CO ⇌ Hb + CO strongly favours the CO complex, thus the release of CO for pulmonary excretion generally takes some time. Based on this binding affinity, blood is essentially an irreversible sink for CO and presents a therapeutic challenge for the delivery of CO to cells and tissues. CO is considered non-reactive in the body and primarily undergoes pulmonary excretion.
**Medial root of median nerve** Medial root of median nerve: The medial root of median nerve is one of the two sources of the median nerve, the other being the lateral root of median nerve.
**Automatic image annotation** Automatic image annotation: Automatic image annotation (also known as automatic image tagging or linguistic indexing) is the process by which a computer system automatically assigns metadata in the form of captioning or keywords to a digital image. This application of computer vision techniques is used in image retrieval systems to organize and locate images of interest from a database. Automatic image annotation: This method can be regarded as a type of multi-class image classification with a very large number of classes - as large as the vocabulary size. Typically, image analysis in the form of extracted feature vectors and the training annotation words are used by machine learning techniques to attempt to automatically apply annotations to new images. The first methods learned the correlations between image features and training annotations; then techniques were developed using machine translation to try to translate between the textual vocabulary and the 'visual vocabulary', or clustered regions known as blobs. Work following these efforts has included classification approaches, relevance models and so on. Automatic image annotation: The advantages of automatic image annotation versus content-based image retrieval (CBIR) are that queries can be more naturally specified by the user. CBIR generally (at present) requires users to search by image concepts such as color and texture, or by finding example queries. Certain image features in example images may override the concept that the user is really focusing on. The traditional methods of image retrieval such as those used by libraries have relied on manually annotated images, which is expensive and time-consuming, especially given the large and constantly growing image databases in existence.
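A minimal sketch of the idea of learning annotations from extracted feature vectors, here done as a toy nearest-neighbour tag propagation. The vocabulary, feature values and tags below are invented solely for illustration; real annotation systems use far richer features and models.

```python
import numpy as np

# Toy setup: each image is represented by an extracted feature vector, and
# training images carry keyword annotations drawn from a fixed vocabulary.
vocabulary = ["sky", "water", "grass", "building"]

# Hypothetical training data: 4 images x 3 features, with binary tag vectors.
train_features = np.array([[0.9, 0.1, 0.0],
                           [0.8, 0.7, 0.1],
                           [0.1, 0.2, 0.9],
                           [0.2, 0.1, 0.8]])
train_tags = np.array([[1, 0, 0, 0],    # sky
                       [1, 1, 0, 0],    # sky, water
                       [0, 0, 1, 0],    # grass
                       [0, 0, 1, 1]])   # grass, building

def annotate(query, k=2, threshold=0.5):
    """Propagate tags from the k nearest training images to a new feature vector."""
    distances = np.linalg.norm(train_features - query, axis=1)
    nearest = np.argsort(distances)[:k]
    scores = train_tags[nearest].mean(axis=0)   # fraction of neighbours carrying each tag
    return [word for word, s in zip(vocabulary, scores) if s >= threshold]

print(annotate(np.array([0.85, 0.4, 0.05])))    # e.g. ['sky', 'water'] for this toy data
```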
**German volume training** German volume training: German Volume Training (GVT), commonly referred to as the "10x10 workout", is a form of weight training. It employs high set counts and moderate repetitions. GVT workouts typically involve 10 sets of 10 repetitions focused on a specific muscle group. Muscle building: GVT training programs emphasize different muscle groups each day in order to work the targeted muscle groups close to their breaking points, causing the body to build muscle mass quickly. GVT is a mainstream bodybuilding program and can be done at a frequency suitable to the trainee. Muscle building: Guidelines ensure trainee safety - rest is important between sets, and should last between 60 and 90 seconds. During the set the lifter must also consider the amount of weight. For any given exercise, only about 60% of the lifter's one rep max should be used. For example, if a lifter's maximum bench press amount is 100 pounds (45 kg) then they should only lift 60 pounds (27 kg) for each rep. Another important consideration when using this type of training program is recovery time. Most training programs involve daily training, whereas GVT recommends 5 days of workouts each week. Since each day a different muscle group is targeted and worked to near exhaustion, the soreness of that muscle may feel more intense and recovery may take longer than normal. On the 6th day, the regimen starts over, giving the lifter 2 days off for muscle recovery. The GVT program typically helps put on mass, and does not necessarily help improve the lifter's one rep max. German volume training could also involve a set of muscles targeted in a day. For example, in one day, a lifter can train their back and legs. Usually complementary muscle groups are chosen to reduce the strain on muscles.
**Schottky junction solar cell** Schottky junction solar cell: In a basic Schottky-junction (Schottky-barrier) solar cell, an interface between a metal and a semiconductor provides the band bending necessary for charge separation. Traditional solar cells are composed of p-type and n-type semiconductor layers sandwiched together, forming the source of built-in voltage (a p-n junction). Due to differing energy levels between the Fermi level of the metal and the conduction band of the semiconductor, an abrupt potential difference is created, instead of the smooth band transition observed across a p-n junction in a standard solar cell; this is the Schottky barrier height. Although vulnerable to higher rates of thermionic emission, manufacturing of Schottky barrier solar cells proves to be cost-effective and industrially scalable. However, research has shown thin insulating layers between metal and semiconductors improve solar cell performance, generating interest in metal-insulator-semiconductor Schottky junction solar cells. A thin insulating layer, such as silicon dioxide, can reduce rates of electron-hole pair recombination and dark current by allowing the possibility of minority carriers to tunnel through this layer. The Schottky-junction is an attempt to increase the efficiency of solar cells by introducing an impurity energy level in the band gap. This impurity can absorb more lower energy photons, which improves the power conversion efficiency of the cell. This type of solar cell allows enhanced light trapping and faster carrier transport compared to more conventional photovoltaic cells. Material types: Schottky junction solar cells can be constructed using many different material types. Material types: Cadmium selenide One material is cadmium selenide. As a direct bandgap semiconductor, CdSe has many applications in modern technology. Previous experiments using CdSe in solar cells resulted in a power-conversion efficiency of approximately 0.72%. Liang Li et al. propose using single cadmium selenide nanobelts-on-electrodes. This method uses electron-beam lithography, or EBL, which provides a more efficient synthesis route for developing Schottky junction solar cells. Although this material does not provide a large power-conversion efficiency as of yet, the advent of simpler fabrication methods shows promise in nano-electronic applications. Further research is being conducted to increase the efficiency of cadmium selenide cells. Material types: Nickel oxide When constructing bulk-heterojunction solar cells, p-type nickel oxide is an effective anode layer. Its function as a wide band-gap semiconductor helps planarize the anode surface, and helps the maximum photon flux reach the active layer. In this case, NiO thickness was also measured, and increasing the thickness decreases cell efficiency. In these cells, nickel oxide replaces poly(3,4-ethylenedioxythiophene) polystyrene sulfonate, or PEDOT:PSS, resulting in a dramatic increase in performance while still maintaining stability of the cell. Compared to the cadmium selenide cell, nickel oxide cells provide a power-conversion efficiency of 5.2%. Material types: Gallium arsenide Under the right conditions, a gallium arsenide cell can produce an efficiency of around 22%. This is considered an MIS, or metal-insulator-semiconductor, cell and requires a thin oxide layer to prevent photo-current suppression. Sheng S. Li et al.
showed for the first time that an effective barrier height equal to the band gap energy can be realized if the thickness and dopant density of the p-layer, as well as the dopant density in the n substrate, are properly chosen.
**AIDS-related lymphoma** AIDS-related lymphoma: AIDS-related lymphoma describes lymphomas occurring in patients with acquired immunodeficiency syndrome (AIDS). A lymphoma is a type of cancer arising from lymphoid cells. In AIDS, the incidences of non-Hodgkin's lymphoma, primary cerebral lymphoma and Hodgkin's disease are all increased. There are three different varieties of AIDS-related lymphoma: Diffuse large B-cell lymphoma, B-cell immunoblastic lymphoma, and Burkitt's lymphoma (small non-cleaved cell lymphoma). Symptoms and signs: The symptoms of AIDS-related lymphoma can include: weight loss, fever, and night sweats. Non-Hodgkin's lymphoma: Non-Hodgkin's lymphoma (NHL) is present in about 1%–3% of HIV seropositive people at the time of the initial diagnosis of HIV. However, it is believed that such patients have been seropositive for a prolonged period, but have simply not had their infections recognized previously. This is so because immunodysregulation must exist for an extended interval of time, in order for a lymphoproliferative process to evolve in that context. Primary cerebral lymphoma: Primary cerebral lymphoma (or primary central nervous system lymphoma) is a form of NHL. It is very rare in immunocompetent people, with an incidence of 5–30 cases per million person-years. However, the incidence in immunocompromised individuals is greatly increased, up to 100 per million person-years. Primary cerebral lymphoma is strongly associated with Epstein–Barr virus (EBV). The presence of EBV DNA in cerebrospinal fluid is highly suggestive of primary cerebral lymphoma. Treatment of AIDS patients with antiretroviral drugs reduces the incidence of primary cerebral lymphoma. Hodgkin's disease: The incidence of Hodgkin's disease in the general population is about 10–30 per million person-years. This increases to 170 per million person-years in HIV positive patients.
**Cardea (DRM)** Cardea (DRM): Cardea is the codename for the portable version of Windows Media DRM for network devices, marketed by Microsoft as Windows Media DRM for Network Devices (WMDRM-ND for short). It is used for streaming protected digital media across a network for immediate playback. Janus (DRM) is a similar system for portable devices, but is used for synchronization.
**Chemical burn** Chemical burn: A chemical burn occurs when living tissue is exposed to a corrosive substance (such as a strong acid, base or oxidizer) or a cytotoxic agent (such as mustard gas, lewisite or arsine). Chemical burns follow standard burn classification and may cause extensive tissue damage. The main types of irritant and/or corrosive products are: acids, bases, oxidizers / reducing agents, solvents, and alkylants. Additionally, chemical burns can be caused by biological toxins (such as anthrax toxin) and by some types of cytotoxic chemical weapons, e.g., vesicants such as mustard gas and Lewisite, or urticants such as phosgene oxime. Chemical burn: Chemical burns may: need no source of heat occur immediately on contact not be immediately evident or noticeable be extremely painful diffuse into tissue and damage cellular structures under skin without immediately apparent damage to skin surface Presentation: The exact symptoms of a chemical burn depend on the chemical involved. Symptoms include itching, bleaching or darkening of skin, burning sensations, trouble breathing, coughing blood and/or tissue necrosis. Common sources of chemical burns include sulfuric acid (H2SO4), hydrochloric acid (HCl), sodium hydroxide (NaOH), lime (CaO), silver nitrate (AgNO3), and hydrogen peroxide (H2O2). Effects depend on the substance; hydrogen peroxide removes a bleached layer of skin, while nitric acid causes a characteristic color change to yellow in the skin, and silver nitrate produces noticeable black stains. Chemical burns may occur through direct contact on body surfaces, including skin and eyes, via inhalation, and/or by ingestion. Substances that diffuse efficiently in human tissue, e.g., hydrofluoric acid, sulfur mustard, and dimethyl sulfate, may not react immediately, but instead produce the burns and inflammation hours after the contact. Chemical fabrication, mining, medicine, and related professional fields are examples of occupations where chemical burns may occur. Hydrofluoric acid leaches into the bloodstream, reacts with calcium and magnesium, and the resulting salts can cause cardiac arrest after eating through skin. Prevention: In Belgium, the Conseil Supérieur de la Santé gives a scientific advisory report on public health policy. The Superior Health Council of Belgium provides an overview of products that are authorized in Belgium for consumer use and that contain caustic substances, as well as of the risks linked to exposure to these products. This report aims at suggesting protection measures for the consumers, and formulates recommendations that apply to the different stages of the chain, which begins with the formulation of the product, followed by its regulation, marketing, application, post-application and ends with its monitoring.
**Ethinylestradiol/levonorgestrel** Ethinylestradiol/levonorgestrel: Ethinylestradiol/levonorgestrel (EE/LNG) is a combined birth control pill made up of ethinylestradiol (an estrogen) and levonorgestrel (a progestin). It is used for birth control, symptoms of menstruation, endometriosis, and as emergency contraception. It is taken by mouth. Some preparations of EE/LNG additionally contain an iron supplement in the form of ferrous bisglycinate or ferrous fumarate. Side effects can include nausea, headache, blood clots, breast pain, depression, and liver problems. Use is not recommended during pregnancy, the initial three weeks after childbirth, and in those at high risk of blood clots. However, it may be started immediately after a miscarriage or abortion. Smoking while using combined birth control pills is not recommended. It works by stopping ovulation, making the mucus at the opening to the cervix thick, and making the uterus not suitable for implantation. Ethinylestradiol/levonorgestrel has been approved for medical use in the United States at least since 1982. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. It is marketed under a large number of brand names. In 2020, it was the 159th most commonly prescribed medication in the United States, with more than 3 million prescriptions.
**VoIP vulnerabilities** VoIP vulnerabilities: VoIP is vulnerable to similar types of attacks as Web connections and email. VoIP's attractiveness, owing to its low fixed cost and numerous features, comes with some risks that are well known to developers and are constantly being addressed. But these risks are usually not mentioned to businesses, which are the most common target. VoIP also allows the use of fraud and illicit practices that most people are not aware of. Whilst these practices are restricted by most providers, the possibility remains that someone is using them for their own gain. Vulnerabilities: Remote eavesdropping Unencrypted connections lead to communication and security breaches. Hackers/trackers can eavesdrop on important or private conversations and extract valuable data. The overheard conversations might be sold to or used by competing businesses. The gathered intelligence can also be used as blackmail for personal gain. Vulnerabilities: Network attacks Attacks on the user's network or internet provider can disrupt or even cut the connection. Since VoIP is highly dependent on the internet connection, direct attacks on the internet connection or provider are a highly effective way of attack. These kinds of attacks target office telephony, since mobile internet is harder to interrupt. Also, mobile applications that do not rely on an internet connection to make VoIP calls are immune to such attacks. Vulnerabilities: Default security settings Hardphones (a.k.a. VoIP phones) are smart devices. They are more of a computer than a phone, and as such they need to be well configured. In some cases, Chinese manufacturers are using default passwords for each of the manufactured devices, which leads to vulnerabilities. VoIP over Wi-Fi While VoIP is relatively secure, it still needs a source of internet, which in most cases is a Wi-Fi network. And while a home/office Wi-Fi network can be relatively secure, using public or shared networks will further compromise the connection. VOIP exploits: VoIP spam VoIP has its own form of spam, called SPIT (Spam over Internet Telephony). Using the unlimited extensions provided by VoIP PBX capabilities, the spammer can constantly harass their target from different numbers. The process is not hard to automate and can fill the target's voice mail with notifications. The caller can make calls often enough to block the target from getting important incoming calls. This practice can be costly to the caller and is rarely used other than for marketing needs. VOIP exploits: VoIP phishing VoIP users can change their Caller ID (a.k.a. Caller ID spoofing), allowing the caller to pose as a relative, colleague, or family member in order to extract information, money or benefits from the target. Secure SIP Port FreePBX has an inbuilt firewall, which can be used to secure the SIP port.
**Game tree** Game tree: In the context of combinatorial game theory, which typically studies sequential games with perfect information, a game tree is a graph representing all possible game states within such a game. Such games include well-known ones such as chess, checkers, Go, and tic-tac-toe. This can be used to measure the complexity of a game, as it represents all the possible ways a game can pan out. Due to the large game trees of complex games such as chess, algorithms that are designed to play this class of games will use partial game trees, which makes computation feasible on modern computers. Various methods exist to solve game trees. If a complete game tree can be generated, a deterministic algorithm, such as backward induction or retrograde analysis, can be used. Randomized algorithms and minimax algorithms such as MCTS can be used in cases where a complete game tree is not feasible. Understanding the game tree: To better understand the game tree, it can be thought of as a technique for analyzing adversarial games, which determine the actions that a player takes to win the game. In game theory, a game tree is a directed graph whose nodes are positions in a game (e.g., the arrangement of the pieces in a board game) and whose edges are moves (e.g., to move pieces from one position on a board to another). The complete game tree for a game is the game tree starting at the initial position and containing all possible moves from each position; the complete tree is the same tree as that obtained from the extensive-form game representation. To be more specific, the complete game tree is a standard representation of the game in game theory, one that can clearly express many important aspects: for example, the sequence of actions that the players may take, their choices at each decision point, the information available to each player about the other players' actions when making a decision, and the payoffs of all possible outcomes of the game. Understanding the game tree: The diagram shows the first two levels, or plies, in the game tree for tic-tac-toe. The rotations and reflections of positions are equivalent, so the first player has three choices of move: in the center, at the edge, or in the corner. The second player has two choices for the reply if the first player played in the center, otherwise five choices. And so on. Understanding the game tree: The number of leaf nodes in the complete game tree is the number of possible different ways the game can be played. For example, the game tree for tic-tac-toe has 255,168 leaf nodes. Understanding the game tree: Game trees are important in artificial intelligence because one way to pick the best move in a game is to search the game tree using any of numerous tree search algorithms, combined with minimax-like rules to prune the tree. The game tree for tic-tac-toe is easily searchable, but the complete game trees for larger games like chess are much too large to search. Instead, a chess-playing program searches a partial game tree: typically as many plies from the current position as it can search in the time available. Except for the case of "pathological" game trees (which seem to be quite rare in practice), increasing the search depth (i.e., the number of plies searched) generally improves the chance of picking the best move. Understanding the game tree: Two-person games can also be represented as and-or trees. For the first player to win a game, there must exist a winning move for all moves of the second player.
This is represented in the and-or tree by using disjunction to represent the first player's alternative moves and using conjunction to represent all of the second player's moves. Solving game trees: Deterministic algorithm version With a complete game tree, it is possible to "solve" the game – that is to say, find a sequence of moves that either the first or second player can follow that will guarantee the best possible outcome for that player (usually a win or a tie). The deterministic algorithm (which is generally called backward induction or retrograde analysis) can be described recursively as follows. Solving game trees: Color the final ply of the game tree so that all wins for player 1 are colored one way (Blue in the diagram), all wins for player 2 are colored another way (Red in the diagram), and all ties are colored a third way (Grey in the diagram). Look at the next ply up. If there exists a node colored opposite as the current player, color this node for that player as well. If all immediately lower nodes are colored for the same player, color this node for the same player as well. Otherwise, color this node a tie. Repeat for each ply, moving upwards, until all nodes are colored. The color of the root node will determine the nature of the game. The diagram shows a game tree for an arbitrary game, colored using the above algorithm. It is usually possible to solve a game (in this technical sense of "solve") using only a subset of the game tree, since in many games a move need not be analyzed if there is another move that is better for the same player (for example alpha-beta pruning can be used in many deterministic games). Any subtree that can be used to solve the game is known as a decision tree, and the sizes of decision trees of various shapes are used as measures of game complexity. Solving game trees: Randomized algorithms version Randomized algorithms can be used in solving game trees. There are two main advantages in this type of implementation: speed and practicality. Whereas a deterministic version of solving game trees can be done in O(n), the following randomized algorithm has an expected run time of Θ(n^0.792) if every node in the game tree has degree 2. Moreover, it is practical because randomized algorithms are capable of "foiling an enemy", meaning an opponent cannot beat the system of game trees by knowing the algorithm used to solve the game tree, because the order of solving is random. A randomized game tree solution algorithm makes use of the idea of "short-circuiting": if the root node is considered an "OR" operator, then once one True is found, the root is classified as True; conversely, if the root node is considered an "AND" operator, then once one False is found, the root is classified as False. A sketch of such an algorithm follows.
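The following is a minimal, hedged Python sketch of a randomized, short-circuiting evaluation of an AND-OR game tree, under the simplifying assumption that leaves are plain booleans (True meaning a win for the first player) and that internal nodes are lists of child nodes, with levels alternating between OR and AND.

```python
import random

def evaluate(node, is_or=True):
    """Randomized evaluation of an AND-OR game tree.

    Children are visited in random order so an opponent cannot predict the
    evaluation order, and evaluation short-circuits: an OR node returns True
    on the first True child, an AND node returns False on the first False child.
    """
    if isinstance(node, bool):          # leaf: a win (True) or loss (False) for player 1
        return node
    children = list(node)
    random.shuffle(children)            # randomize the order in which subtrees are solved
    for child in children:
        value = evaluate(child, is_or=not is_or)  # levels alternate between OR and AND
        if is_or and value:
            return True                 # short-circuit: one winning option suffices
        if not is_or and not value:
            return False                # short-circuit: one losing reply suffices
    return not is_or                    # no short-circuit: OR -> False, AND -> True

# Tiny example tree: the root is an OR node whose children are AND nodes over leaves.
tree = [[True, False], [True, True]]
print(evaluate(tree))                   # True: the second alternative is a forced win
```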
**Ball spline** Ball spline: Ball splines (Ball Spline bearings) are a special type of linear motion bearing that are used to provide nearly frictionless linear motion while allowing the member to transmit torque simultaneously. There are grooves ground along the length of the shaft (thus forming splines) for the ball bearings to run inside. The outer shell that houses the balls is called a nut rather than a bushing, but is not a nut in the traditional sense—it is not free to rotate about the shaft, but is free to travel up and down the shaft. For a shaft travel of any significant length the nut will have channels that recirculate the balls, operating in the same way as a ball screw. By increasing the contact area of the ball bearings on the shaft to approximately 45 degrees, the side load and direct load carrying capabilities are greatly increased. Each nut can be individually preloaded at the factory to decrease the available radial play to ensure rigidity. This process not only increases the contact area, increasing direct loading capabilities, but it also restricts any radial movement, increasing the overhung moment capabilities. This creates a sturdier structure that can handle a very strenuous working environment.
**Cache-oblivious algorithm** Cache-oblivious algorithm: In computing, a cache-oblivious algorithm (or cache-transcendent algorithm) is an algorithm designed to take advantage of a processor cache without having the size of the cache (or the length of the cache lines, etc.) as an explicit parameter. An optimal cache-oblivious algorithm is a cache-oblivious algorithm that uses the cache optimally (in an asymptotic sense, ignoring constant factors). Thus, a cache-oblivious algorithm is designed to perform well, without modification, on multiple machines with different cache sizes, or for a memory hierarchy with different levels of cache having different sizes. Cache-oblivious algorithms are contrasted with explicit loop tiling, which explicitly breaks a problem into blocks that are optimally sized for a given cache. Cache-oblivious algorithm: Optimal cache-oblivious algorithms are known for matrix multiplication, matrix transposition, sorting, and several other problems. Some more general algorithms, such as Cooley–Tukey FFT, are optimally cache-oblivious under certain choices of parameters. As these algorithms are only optimal in an asymptotic sense (ignoring constant factors), further machine-specific tuning may be required to obtain nearly optimal performance in an absolute sense. The goal of cache-oblivious algorithms is to reduce the amount of such tuning that is required. Cache-oblivious algorithm: Typically, a cache-oblivious algorithm works by a recursive divide-and-conquer algorithm, where the problem is divided into smaller and smaller subproblems. Eventually, one reaches a subproblem size that fits into the cache, regardless of the cache size. For example, an optimal cache-oblivious matrix multiplication is obtained by recursively dividing each matrix into four sub-matrices to be multiplied, multiplying the submatrices in a depth-first fashion. In tuning for a specific machine, one may use a hybrid algorithm which uses loop tiling tuned for the specific cache sizes at the bottom level but otherwise uses the cache-oblivious algorithm. History: The idea (and name) for cache-oblivious algorithms was conceived by Charles E. Leiserson as early as 1996 and first published by Harald Prokop in his master's thesis at the Massachusetts Institute of Technology in 1999. There were many predecessors, typically analyzing specific problems; these are discussed in detail in Frigo et al. 1999. Early examples cited include Singleton 1969 for a recursive Fast Fourier Transform, similar ideas in Aggarwal et al. 1987, Frigo 1996 for matrix multiplication and LU decomposition, and Todd Veldhuizen 1996 for matrix algorithms in the Blitz++ library. Idealized cache model: In general, a program can be made more cache-conscious: Temporal locality, where the algorithm fetches the same pieces of memory multiple times; Spatial locality, where the subsequent memory accesses are adjacent or nearby memory addresses.Cache-oblivious algorithms are typically analyzed using an idealized model of the cache, sometimes called the cache-oblivious model. This model is much easier to analyze than a real cache's characteristics (which have complicated associativity, replacement policies, etc.), but in many cases is provably within a constant factor of a more realistic cache's performance. It is different than the external memory model because cache-oblivious algorithms do not know the block size or the cache size. 
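The recursive matrix multiplication mentioned above can be sketched compactly. The following is a minimal, illustrative Python/NumPy version; square, power-of-two matrices and a small fixed base case are assumed for brevity, and it is not a tuned implementation.

```python
import numpy as np

def co_matmul(A, B, C, base=16):
    """Cache-oblivious style matrix multiply: accumulate A @ B into C by recursive quartering.

    The recursion keeps splitting the (assumed square, power-of-two) matrices into
    four submatrices until a small base case is reached, so at some recursion depth
    the working set fits in cache whatever the cache size happens to be.
    """
    n = A.shape[0]
    if n <= base:                       # base case: plain multiplication on a small block
        C += A @ B
        return
    h = n // 2
    # Quadrant views (no copies): A = [[A11, A12], [A21, A22]], similarly B and C.
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    C11, C12, C21, C22 = C[:h, :h], C[:h, h:], C[h:, :h], C[h:, h:]
    # Depth-first recursion over the eight sub-products.
    co_matmul(A11, B11, C11, base)
    co_matmul(A12, B21, C11, base)
    co_matmul(A11, B12, C12, base)
    co_matmul(A12, B22, C12, base)
    co_matmul(A21, B11, C21, base)
    co_matmul(A22, B21, C21, base)
    co_matmul(A21, B12, C22, base)
    co_matmul(A22, B22, C22, base)

n = 64
A, B = np.random.rand(n, n), np.random.rand(n, n)
C = np.zeros((n, n))
co_matmul(A, B, C)
assert np.allclose(C, A @ B)
```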
Idealized cache model: In particular, the cache-oblivious model is an abstract machine (i.e., a theoretical model of computation). It is similar to the RAM machine model which replaces the Turing machine's infinite tape with an infinite array. Each location within the array can be accessed in O(1) time, similar to the random-access memory on a real computer. Unlike the RAM machine model, it also introduces a cache: the second level of storage between the RAM and the CPU. The other differences between the two models are listed below. In the cache-oblivious model: Memory is broken into blocks of B objects each. Idealized cache model: A load or a store between main memory and a CPU register may now be serviced from the cache. If a load or a store cannot be serviced from the cache, it is called a cache miss. A cache miss results in one block being loaded from the main memory into the cache. Namely, if the CPU tries to access word w and x is the line containing w, then x is loaded into the cache. If the cache was previously full, then a line will be evicted as well (see replacement policy below). The cache holds M objects, where M = Ω(B²). This is also known as the tall cache assumption. The cache is fully associative: each line can be loaded into any location in the cache. Idealized cache model: The replacement policy is optimal. In other words, the cache is assumed to be given the entire sequence of memory accesses during algorithm execution. If it needs to evict a line at time t, it will look into its sequence of future requests and evict the line whose first access is furthest in the future. This can be emulated in practice with the Least Recently Used policy, which is shown to be within a small constant factor of the offline optimal replacement strategy. To measure the complexity of an algorithm that executes within the cache-oblivious model, we measure the number of cache misses that the algorithm experiences. Because the model captures the fact that accessing elements in the cache is much faster than accessing things in main memory, the running time of the algorithm is defined only by the number of memory transfers between the cache and main memory. This is similar to the external memory model, which shares all of the features above, but cache-oblivious algorithms are independent of cache parameters (B and M). The benefit of such an algorithm is that what is efficient on a cache-oblivious machine is likely to be efficient across many real machines without fine-tuning for particular real machine parameters. For many problems, an optimal cache-oblivious algorithm will also be optimal for a machine with more than two memory hierarchy levels. Examples: The simplest cache-oblivious algorithm presented in Frigo et al. is an out-of-place matrix transpose operation (in-place algorithms have also been devised for transposition, but are much more complicated for non-square matrices). Given an m×n array A and an n×m array B, we would like to store the transpose of A in B. The naive solution traverses one array in row-major order and another in column-major. The result is that when the matrices are large, we get a cache miss on every step of the column-wise traversal. The total number of cache misses is Θ(mn). The cache-oblivious algorithm has optimal work complexity O(mn) and optimal cache complexity O(1+mn/B). The basic idea is to reduce the transpose of two large matrices into the transpose of small (sub)matrices.
We do this by dividing the matrices in half along their larger dimension until we just have to perform the transpose of a matrix that will fit into the cache. Because the cache size is not known to the algorithm, the matrices will continue to be divided recursively even after this point, but these further subdivisions will be in cache. Once the dimensions m and n are small enough that an input array of size m×n and an output array of size n×m fit into the cache, both row-major and column-major traversals result in O(mn) work and O(mn/B) cache misses. By using this divide-and-conquer approach we can achieve the same level of complexity for the overall matrix. Examples: (In principle, one could continue dividing the matrices until a base case of size 1×1 is reached, but in practice one uses a larger base case (e.g. 16×16) in order to amortize the overhead of the recursive subroutine calls.) Most cache-oblivious algorithms rely on a divide-and-conquer approach. They reduce the problem so that it eventually fits in cache no matter how small the cache is, end the recursion at some small size determined by the function-call overhead and similar cache-unrelated optimizations, and then use some cache-efficient access pattern to merge the results of these small, solved problems. Examples: Like external sorting in the external memory model, cache-oblivious sorting is possible in two variants: funnelsort, which resembles mergesort, and cache-oblivious distribution sort, which resembles quicksort. Like their external memory counterparts, both achieve a running time of O((N/B) log_{M/B}(N/B)), which matches a lower bound and is thus asymptotically optimal. Practicality: An empirical comparison of two RAM-based, one cache-aware, and two cache-oblivious algorithms implementing priority queues found that: Cache-oblivious algorithms performed worse than RAM-based and cache-aware algorithms when data fits into main memory. The cache-aware algorithm did not seem significantly more complex to implement than the cache-oblivious algorithms, and offered the best performance in all cases tested in the study. Practicality: Cache-oblivious algorithms outperformed RAM-based algorithms when the data size exceeded the size of main memory. Another study compared hash tables (as RAM-based or cache-unaware), B-trees (as cache-aware), and a cache-oblivious data structure referred to as a "Bender set". For both execution time and memory usage, the hash table was best, followed by the B-tree, with the Bender set the worst in all cases. The memory usage for all tests did not exceed main memory. The hash tables were described as easy to implement, while the Bender set "required a greater amount of effort to implement correctly".
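To make the recursion concrete, here is a minimal sketch in Python of the out-of-place transpose just described. It is an illustration rather than Frigo et al.'s reference code: the list-of-lists matrix layout, the pre-allocated output array, and the stopping threshold of 32 are assumptions made for the example, not details from the original text.

```python
# Hedged sketch of the cache-oblivious out-of-place transpose: halve the
# matrix along its larger dimension until the block is small, then copy it
# element by element. The layout and the threshold of 32 are illustrative.

def transpose(A, out, ra=0, ca=0, m=None, n=None):
    """Write the transpose of the m x n block of A whose top-left corner
    is (ra, ca) into the corresponding n x m block of out."""
    if m is None:
        m, n = len(A), len(A[0])
    if m + n <= 32:
        # Base case: the block is small, so copy it directly.
        for i in range(m):
            for j in range(n):
                out[ca + j][ra + i] = A[ra + i][ca + j]
    elif m >= n:
        # Split the rows in half and recurse on each half.
        h = m // 2
        transpose(A, out, ra, ca, h, n)
        transpose(A, out, ra + h, ca, m - h, n)
    else:
        # Split the columns in half and recurse on each half.
        h = n // 2
        transpose(A, out, ra, ca, m, h)
        transpose(A, out, ra, ca + h, m, n - h)

# Example: transpose a 3 x 4 matrix into a pre-allocated 4 x 3 output.
A = [[4 * i + j for j in range(4)] for i in range(3)]
out = [[0] * 3 for _ in range(4)]
transpose(A, out)
assert out == [[A[i][j] for i in range(3)] for j in range(4)]
```

Note that neither B (the block size) nor M (the cache size) appears anywhere in the code; they enter only the analysis, which is what yields the O(1 + mn/B) miss bound quoted above. The recursive matrix multiplication mentioned earlier in the article follows the same pattern. The sketch below again assumes a list-of-lists layout, a power-of-two matrix size, and an arbitrary base case of 16, so it should be read as one possible illustration of the idea rather than the canonical algorithm.

```python
# Hedged sketch of the recursive, depth-first matrix multiplication described
# earlier: each matrix is divided into four sub-matrices and the eight
# half-sized sub-products are computed depth-first. The power-of-two size and
# the base case of 16 are illustrative assumptions.

BASE = 16

def _multiply_add(A, B, C, ri, ci, ki, n):
    """Add the product of the n x n block of A at (ri, ki) and the n x n
    block of B at (ki, ci) into the n x n block of C at (ri, ci)."""
    if n <= BASE:
        # Base case: an ordinary triple loop on a block that fits in cache.
        for i in range(ri, ri + n):
            for k in range(ki, ki + n):
                a = A[i][k]
                for j in range(ci, ci + n):
                    C[i][j] += a * B[k][j]
        return
    h = n // 2
    for di in (0, h):          # quadrant of C (rows)
        for dj in (0, h):      # quadrant of C (columns)
            for dk in (0, h):  # half of the shared inner dimension
                _multiply_add(A, B, C, ri + di, ci + dj, ki + dk, h)

def matmul(A, B):
    """Multiply two n x n matrices, n assumed to be a power of two."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    _multiply_add(A, B, C, 0, 0, 0, n)
    return C
```

Once a sub-block is small enough to fit in whatever cache the machine happens to have, all further recursion happens in cache, which is exactly the property the article describes; tuning the base case, or replacing it with machine-specific loop tiling, is the hybrid approach mentioned above.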
**Terrestrial analogue sites** Terrestrial analogue sites: Terrestrial analogue sites (also called "space analogues" or terrestrial analog sites) are places on Earth with assumed past or present geological, environmental or biological conditions of a celestial body such as the Moon or Mars. Analogue sites are used in the frame of space exploration to either study geological or biological processes observed on other planets, or to prepare astronauts for surface extra-vehicular activity. Definition: Analogue sites are places on Earth with assumed, past or present, geological, environmental or biological conditions of a celestial body. Analogue site studies are necessary because they help to understand geological processes (on Earth) which can be extrapolated to other Solar System bodies in order to interpret and validate the data received from orbiters or planetary rovers. Analogue sites are also important for optimizing scientific and technological needs and exploration strategies in robotic or crewed missions to the Moon or Mars. The definition of space analogues is therefore rather vast, reaching from places on Earth that exhibit geologic or atmospheric characteristics which are close to those observed on other celestial bodies, to sites that are used for space mission simulations to test sampling or drilling equipment, space suits, or the performance of astronauts in reduced gravity. Definition: Some sites are therefore suited to test instruments for exobiological research or to train sampling procedures for field explorations. Other sites offer an extreme environment that can be used by astronauts to prepare for the difficulties in future space missions. Definition: Fidelity An important notion in the evaluation of analogue sites is that of "fidelity", which describes the resemblance of the analogue to its extraterrestrial correspondent. Fidelity is used in comparative planetary science to express the analogy of a terrestrial site to a target extraterrestrial surface. This classification is possible based on various criteria such as geomorphology, geochemistry, exobiology or exploration conditions. Definition: Geomorphology Geomorphology is the scientific study of landforms and the processes that shape them. In terms of analogue sites, scientists search for locations on Earth that exhibit similar landforms such as can be found on exploration targets like the Moon, Mars or even asteroids and comets. The idea is to confront astronauts, robots or scientific equipment with sites that resemble in their geologic appearance those extraterrestrial surfaces. Examples are volcanic sites which resemble lunar terrain (regolith), polar locations and glaciers that can be compared to the poles of Mars or of Jupiter moon Europa, or terrestrial lava tubes which can also be found on the Moon or Mars. Definition: Geochemistry Geochemistry is the science that uses the principles of chemistry to explain the mechanisms behind major geological systems. The aspect of geochemistry is of importance for analogue sites when locations offer the possibility to test analysis instruments for future space missions (crewed or robotic). Geochemical fidelity is also of importance for the development and test of equipment used for in-situ resource utilization. Examples for such analogue sites are terrestrial volcanoes that offer rocks similar to those found on the Moon or hematite concretions which can be found in Earth deserts and also on Mars (so-called "Blueberries"). 
Definition: Exobiology Exobiology or astrobiology is the study of the origin and evolution of extraterrestrial life. In terrestrial analogues, effort is put into identifying so-called extremophile organisms, which are life forms that live and survive in extreme conditions such as those found on other planets or moons. The objective of this research is to understand how such organisms survive and how they, or their remnants, can be identified. Definition: Examples of exobiology analogue sites are the Rio Tinto in Spain, which hosts bacteria that can survive high temperatures and harsh chemical conditions, or black smokers in the deep sea that host colonies of life forms in high-pressure and high-temperature conditions. The cold, dry, hyperarid core of the Atacama desert is one of the closest analogues for Martian surface conditions and is often used for testing rovers and life-detection equipment that may one day be sent to Mars. Other extreme environments, such as the polar regions, high-altitude mountainous areas, or remote islands, are also used in studies to better understand life under such conditions. At such analogue sites, scientists can test sampling equipment designed to search for and identify lifeforms. Definition: Exploration conditions Another criterion in the search for analogue sites is locations where the exploration conditions of future astronauts can be simulated. Future explorers of the Moon or Mars will have to handle various conditions, such as reduced gravity, radiation, work in pressurized space suits and extreme temperatures. Preparing astronauts for these conditions calls for training on sites that exhibit some of those conditions. The operations that can be simulated range from living in isolation, to extra-vehicular activity (EVA) in reduced gravity, to the construction of habitats. Examples of analogue sites that offer such exploration conditions are research stations at the poles or underwater EVA training as it is done at NEEMO by NASA or at the Marseilles subsea analogue by COMEX, or the use of parabolic flights to simulate lower gravity for shorter durations. Underwater analogue sites allow for the training of astronauts in neutral buoyancy conditions (such as is done in test pools at NASA, ESA or Star City in Russia) while operating on natural terrain. Potential targets for such training are missions to the Moon and Mars, to test sampling, drilling and field exploration in 1/6th or 1/3rd of Earth's gravity, or to asteroids, to test anchoring systems in microgravity. History of the space analogues: The notion of space analogues is not new. NASA has used such sites for a long time to train its astronauts for space missions. The following data are taken from the official website of NASA. The first analog mission was undertaken in 1997 in Arizona. Since then, NASA has led annual missions there to evaluate and test EVAs and outpost systems and operations. This site was chosen to test materials in a desolate environment with rugged terrain, dust storms and extreme temperatures. History of the space analogues: In the same year, the Haughton-Mars Project (HMP) was started on Devon Island in the Arctic. Since then, 14 missions have been conducted there to test technology and operations in a remote, extreme environment and conduct science research on the Mars-like terrain. In 2001, NASA conducted the mission named NEEMO near Florida, 62 feet (19 m) underwater, which simulated six aquanauts living in a confined space.
It was also a way to test exploration equipment in an extreme and isolated environment. Since 2001, 14 missions have been undertaken there in a multi-organizational environment. Since 2004, two-week missions have been conducted every summer at Pavilion Lake in Canada. This analogue site allows astronauts to train in searching for evidence of life in an extreme environment under reduced-gravity conditions. It is an international and multi-organizational project conducted underwater. History of the space analogues: The most recent analogue site used by NASA is at Mauna Kea on the Big Island of Hawaii, after which an analogue base named HI-SEAS was founded on Mauna Loa. In total, six NASA missions started at this base between 2013 and 2018, until the sixth HI-SEAS mission was halted due to a medical emergency. The project was conducted to test technologies for sustaining human exploration on desolate planetary surfaces like the Moon or Mars and to explore social wellbeing and crew dynamics on long-duration missions. HI-SEAS is currently under the management of the International Moonbase Alliance, founded by Henk Rogers. History of the space analogues: Keen interest in space analogues has also emerged within the student community; the 2017 NASA Ames Grand Prize winning entry, Anastasi, explores the possibility of an underwater settlement as a precursor to space settlement infrastructure. Objectives: The history of the use of terrestrial analogues shows the importance attached to using analogue sites in validating space technologies and scientific instruments. But analogue sites also have other uses: Training Space analogues can help to train personnel in using technologies and instruments, and in knowing how to move and work in a spacesuit. Thus two types of analogue sites exist: underwater sites and surface sites. Objectives: Underwater sites simulate a reduced-gravity environment by compensating weight through Archimedes' principle, thus simulating zero gravity or reduced gravity (lunar gravity, for example). Surface sites serve to train astronauts to walk and move within a spacesuit, and to test equipment such as the Mars Exploration Rover. Expeditions to surface sites also help teach geology to astronauts, who have mostly trained as pilots. Exobiology Space analogues may have potential similarities to environments relevant to exobiology. In some places on Earth the conditions allow only certain types of organisms - extremophile organisms - to live. Currently used space analogues: The following table lists currently used space analogues on Earth.
**American Osteopathic Board of Neurology and Psychiatry** American Osteopathic Board of Neurology and Psychiatry: The American Osteopathic Board of Neurology and Psychiatry (AOBNP) is an organization that provides board certification to qualified Doctors of Osteopathic Medicine (D.O.) and physicians who specialize in disorders of the nervous system (neurologists) and to qualified Doctors of Osteopathic Medicine and physicians who specialize in the diagnosis and treatment of mental disorders (psychiatrists). American Osteopathic Board of Neurology and Psychiatry: The board is one of 16 medical specialty certifying boards of the American Osteopathic Association Bureau of Osteopathic Specialists (AOABOS) of the American Osteopathic Association (AOA). Established in 1941, the AOBNP is responsible for examining physicians who have completed an ACGME-accredited residency in neurology and/or psychiatry. Since its inception, over 630 physicians have achieved primary certification in psychiatry and 400 in neurology, along with physicians holding subspecialty certifications. The purpose of the certification examination is to ensure that physicians who have completed the required training have a high level of competency and can therefore safely provide services to their patients that meet a well-established standard of care. Physicians who successfully pass the examination are recommended by the AOBNP to the AOABOS for certification. The AOABOS holds the ultimate authority in conferring board certification. The AOBNP is one of two certifying boards for neurologists and psychiatrists in the United States. The other certifying authority is the American Board of Psychiatry and Neurology, Inc. (ABPN), a member board of the American Board of Medical Specialties. Organization: There are eight elected members of the AOBNP. Each member is an AOA board-certified physician, certified through the AOBNP. Membership includes representatives from neurology (4) and psychiatry (4), as well as representation from the subspecialties of the board and a representative from each of the time divisions of the United States whenever possible. Board certification: Initial certification is available to osteopathic and other neurologists and psychiatrists who have successfully completed an ACGME-accredited residency in neurology or psychiatry and have passed the written exam. Board-certified neurologists and psychiatrists (diplomates of the AOBNP) must participate in Osteopathic Continuous Certification on an ongoing basis to avoid expiration of their board-certified status. Effective June 1, 2019, all AOA specialty certifying boards implemented an updated continuous certification process for osteopathic physicians, called Osteopathic Continuous Certification (OCC), and are required to publish the requirements for OCC in their basic documents. The following components comprise the updated OCC process: Component 1: Licensure. AOA board-certified physicians must hold a valid, active license to practice medicine in one of the 50 states or Canada. Board certification: Component 2: Lifelong Learning/Continuing Medical Education. A minimum of 75 CME credits in the specialty area of certification during each 3-year cycle. Of these 75 specialty CME credits, 18 must be AOA Category 1-A; the remaining 57 hours have broad acceptance of specialty CME.
Component 3: Cognitive Assessment. AOA board-certified physicians must complete the online cognitive assessment annually after entry into the Longitudinal Assessment process to maintain compliance with OCC. Board certification: Component 4: Practice Performance Assessment and Improvement. Attestation of participation in quality improvement activities. Physicians may view the Attestation Form by logging in with their AOA credentials to the AOA Physician Portal on the AOA website. Diplomates of the AOBNP may also receive Subspecialty Certification or Certification of Special Qualifications in the following areas: Addiction Medicine, Neurophysiology, Geriatric Psychiatry, Hospice and Palliative Medicine, and Sleep Medicine. Effective July 1, 2020, allopathic (MD) physicians may apply for certification by the AOBNP.
**Evans Gambit** Evans Gambit: The Evans Gambit is a chess opening characterised by the moves: 1. e4 e5 2. Nf3 Nc6 3. Bc4 Bc5 4. b4. The Evans Gambit is an aggressive line of the Giuoco Piano. White offers a pawn to divert the black bishop on c5. If Black accepts, White can follow up with c3 and d4, ripping open the centre, while also opening diagonals to play Ba3 or Qb3 at some point, preventing Black from castling kingside and threatening the f7-pawn, respectively. If Black declines, the b4-pawn stakes out space on the queenside, and White can follow up with a4 later in the game, potentially gaining a tempo by threatening to trap Black's dark-square bishop. According to Reuben Fine, the Evans Gambit poses a challenge for Black since the usual defences (playing ...d6 and/or returning the gambit pawn) are more difficult to achieve than with other gambits. (Fine was once beaten by this gambit in a friendly game against Bobby Fischer, in just 17 moves.) The Encyclopaedia of Chess Openings has two codes for the Evans Gambit, C51 and C52. History: The gambit is named after the Welsh sea captain William Davies Evans, the first player known to have played it. The first game with the opening is considered to be Evans–McDonnell, London 1827, although in that game a slightly different move order was tried (1.e4 e5 2.Nf3 Nc6 3.Bc4 Bc5 4.0-0 d6 and only now 5.b4). In his monthly Chess Life column, Andrew Soltis commented that Evans was "the first player to be widely honored for an opening we know he played". The first analysis of the gambit was published in the Second Series of Progressive Lessons (1832) by William Lewis. The gambit became very popular and was played several times in the series of games between McDonnell and Louis de la Bourdonnais in 1834. Players including Adolf Anderssen, Paul Morphy and Mikhail Chigorin later took it up. The Evergreen Game won by Adolf Anderssen against Jean Dufresne opened with the Evans Gambit. Eventually, however, the second world chess champion Emanuel Lasker dealt a heavy blow to the opening with a modern defensive idea: returning the pawn under favourable circumstances. The opening was out of favour for much of the 20th century, although John Nunn and Jan Timman played it in some games in the late 1970s and early 1980s, and in the 1990s, Garry Kasparov used it in a few games (notably a famous 25-move win against Viswanathan Anand in Riga, 1995), which prompted a brief revival of interest in it. General remarks: Accepting the gambit The most obvious and most usual way for Black to meet the gambit is to accept it with 4...Bxb4, after which White plays 5.c3 and Black usually follows up with 5...Ba5 (5...Be7 and the less common 5...Bc5 and 5...Bd6, the Stone–Ware Defence, are also played). White usually follows up with 6.d4. Emanuel Lasker's line is 4...Bxb4 5.c3 Ba5 6.d4 d6 7.0-0 Bb6 8.dxe5 dxe5 9.Qxd8+ Nxd8 10.Nxe5 Be6. This variation takes the sting out of White's attack by returning the gambit pawn and exchanging queens, and according to Fine, the resulting simplified position "is psychologically depressing for the gambit player", whose intent is usually an aggressive attack. Chigorin did a lot of analysis on the alternative 9.Qb3 Qf6 10.Bg5 Qg6 11.Bd5 Nge7 12.Bxe7 Kxe7 13.Bxc6 Qxc6 14.Nxe5 Qe6, which avoids the exchange of queens, but reached no clear verdict. Instead, White often avoids this line with 7.Qb3 Qd7 8.dxe5, when Black can return the pawn with 8...Bb6 or hold onto it with 8...dxe5, though White obtains sufficient compensation in this line.
General remarks: Alternatively, Black can meet 6.d4 with 6...exd4, when White can try 7.Qb3, a move often favoured by Nigel Short. 7.0-0 is traditionally met by 7...Nge7, intending to meet 8.Ng5 or 8.cxd4 with 8...d5 returning the pawn in many lines, rather than the materialistic 7...dxc3, which is well met by 8.Qb3 with a very dangerous initiative for the sacrificed pawns. Alternatively, 7...d6 8.cxd4 Bb6 is known as the Normal Position, in which Black is content to settle for a one-pawn advantage and White seeks compensation in the form of open lines and a strong centre. General remarks: Declining the gambit Alternatively, the gambit can be declined with 4...Bb6, when 5.a4 a6 is the normal continuation. But due to the loss of tempo involved, most commentators consider declining the Evans Gambit to be weaker than accepting it, then returning the pawn at a later stage. Black can also play the rare Countergambit Variation (4...d5), but this is thought to be dubious. General remarks: Aron Nimzowitsch states in the book My System, however, that by declining the gambit Black has not lost a tempo, since the move b4 was, in the sense of development, unproductive, as is every pawn move, if it does not bear a logical connection with the centre. For suppose after 4...Bb6 5.b5 (to make a virtue of necessity and attempt something of a demobilizing effect with the ill-moved b-pawn move), 5...Nd4 and now if 6.Nxe5, then 6...Qg5 with a strong attack. Bishop retreats after accepting the gambit: After 4.b4 Bxb4 5.c3, the bishop must move or be captured. The common retreats are listed here, with the good and bad sides of each: 5...Ba5 Black's most popular retreat according to Chessgames.com. It gets out of the way of White's centre pawns, and pins the c3-pawn if White plays 6.d4, but it has the drawback of removing the a5-square from the black queen's knight. Black usually subsequently retreats the bishop to b6 to facilitate ...Na5, which is particularly strong when White opts for the Bc4, Qb3 approach. Bishop retreats after accepting the gambit: 5...Bc5 The second most popular retreat according to Chessgames.com, with White scoring better than after 5...Ba5. This is often played by those unfamiliar with the Evans Gambit, and is arguably inferior to 5...Ba5, because 6.d4 attacks the bishop again and limits Black's options as compared with 5...Ba5 6.d4. 5...Be7 Lasker's Defence has often been considered one of the "safer" retreats and has been played by Viswanathan Anand. After 6.d4 Na5 White can attempt to maintain an initiative with 7.Be2 as played by Kasparov, or immediately recapture the pawn with 7.Nxe5. 5...Bd6 The Stone–Ware Defence, named after Henry Nathan Stone and Preston Ware, reinforces the e5-pawn and has been played by several grandmasters such as Andrei Volokitin, Alexander Grischuk and Loek van Wely. 5...Bf8 The Mayet Defence, named after Carl Mayet, is played very rarely. In popular culture: The Evans Gambit is referenced in episode 15 of Season 3 of The West Wing "Hartsfield's Landing". It is the favourite opening of agadmator, a popular chess Youtuber.
**Vegetarian bacon** Vegetarian bacon: Vegetarian bacon, also referred to as veggie bacon, vegan bacon, vegan rashers, vacon, or facon, is a plant-based version of bacon. Nutrition: It is high in protein and fiber, yet low in fat, and has no cholesterol. Many vegan bacon products are lower in salt than pork back bacon, and some have less than 10% of the fat. Two slices of one particular brand average 310 kilojoules (75 kilocalories) of food energy. Range: Brands include Morningstar Farms, LightLife, Quorn, Tofurky, Soy Boy, Sweet Earth, Upton's Naturals, and Hooray Foods. In 2015, Media Wales reported that the vegan restaurant Anna Loka in Cardiff served vegan rashers. In 2021, Aldi supermarkets in the United Kingdom added No Pork Streaky Bacon Rashers. Sainsbury's sold vegan sausages wrapped in vegan rashers during Christmas 2021. In 2023, Burger King added vegan bacon, made by La Vie Bakon, to its UK menus. Homemade recipes: Welsh chef Gaz Oakley makes a vegan version of bacon bits from coconut flakes. The Bangor Daily News reported that vegan expert Avery Yale Kamila said homemade vegan bacon can be made from shiitake mushrooms, rice paper, coconut, eggplant or banana peels. American musician Lizzo uses maple syrup to cook vegan bacon. Homemade recipes: Vegetarian bacon can also be made at home by marinating strips of tempeh or tofu in various flavorings, such as soy sauce or liquid smoke, and then either frying or baking. Aficionados of raw food also use coconut meat as a bacon substitute. Seitan can also be formed into vegetarian bacon. Food writer David Goldbeck suggests frying provolone cheese in a skillet to produce a bacon substitute he calls "cheeson". Plant-based recipes for vegetarian bacon often utilise seitan or rice paper. Flavourings include liquid smoke, nutritional yeast, smoked paprika and barbecue sauce.
**Angina bullosa haemorrhagica** Angina bullosa haemorrhagica: Angina bullosa haemorrhagica is a condition of the mucous membranes characterized by the sudden appearance of one or more blood blisters within the oral cavity. The lesions, which may be caused by mild trauma to the mouth tissues such as hot foods, typically rupture quickly and heal without scarring or further discomfort. The condition is not serious except in rare cases where a large bulla that does not rupture spontaneously may cause airway obstruction. Angina bullosa haemorrhagica: The blisters usually affect the palate or oropharynx, and are often long-lived, to the extent that patients burst them for symptomatic relief. Diagnosis: The condition is diagnosed on the basis of exclusion of other conditions and the typical presentation, particularly the constant presence of blood as the blister fluid. Angina bullosa haemorrhagica does not cause desquamative gingivitis. Treatment: Treatment includes steroid gel and analgesics (anesthetic suspension).
**Formula One tyres** Formula One tyres: Formula One tyres play a significant role in the performance of a Formula One car. The tyres have undergone major changes throughout the history of Formula One, with different manufacturers and specifications used in the sport. Design and usage: Formula One tyres bear only a superficial resemblance to a normal road tyre. Whereas the latter has a useful life of up to 80,000 km (50,000 miles), the tyres used in Formula One are built to last less than one race distance. The purpose of the tyre determines the compound of the rubber to be used. In extremely wet weather, such as that seen in the 2007 European Grand Prix, the F1 cars are unable to keep up with the safety car in deep standing water due to the risk of aquaplaning. In very wet races, such as the 2011 Canadian Grand Prix, the tyres are unable to provide a safe race due to the amount of water, and so the race can be red flagged. The race is then either stopped permanently or suspended for up to a 3-hour period until the cars can race safely again. Both situations, the latter followed by the former, occurred at the 2021 Belgian Grand Prix. History: During the 1950s and 1960s, Formula One tyres were supplied by Dunlop, Englebert, Firestone, Continental and Goodyear. In 1958, Dunlop introduced its R5 racing tyre, replacing the cotton fabric of the earlier R1 to R4 tyres with nylon fabric, allowing for a reported 12 lb reduction in tyre weight. During the 1960s, Dunlop introduced improved nylon casings, reduced aspect ratio, significantly increased tyre width, and the use of synthetic rubber. Slick tyres were introduced to Formula One by Firestone at the 1971 Spanish Grand Prix. 1975's Ferrari 312T used a Goodyear 26.0"×16.2"-13" slick tyre (overall diameter × width) in the rear on a 13"×18" rim, with a Goodyear 20.0"×9.2"-13" slick tyre in the front on a 13"×10" rim. For the 1981 season, the maximum diameter of the rear tyre was limited to 26.0", while the diameter of the front tyres was increased. Therefore, from 1981 until 1992, Goodyear supplied Eagle tyres with white sidewall markings in the sizes 25.0"×10.0"-13" at the front and 26.0"×15.0"-13" at the rear. For the 1993 season, the complete wheel width of the rear was reduced from 18" to 15". This prompted Goodyear to change to yellow sidewall markings to correspond to the new, narrower rear tyres, which were approximately 12.8" wide, down from the previous 15.0". For the 1997 F1 season, Bridgestone joined Goodyear in supplying tyres to F1 competitors, creating a tyre war between the two manufacturers. Goodyear would leave the sport following the 1998 season, leaving Bridgestone as the sole tyre provider for the next two seasons. History: In 1998, grooved tyres were introduced with three groove lines in the front tyres and four groove lines in the rear tyres. Between 1999 and 2008, regulations required the tyres to feature a minimum of four 14 mm (0.55 in) grooves in them, with the intention of slowing the cars down. This is because a slick tyre, with no indentations, provides the most grip in dry conditions.
They could be no wider than 355 mm (14 in) at the front and 380 mm (15 in) at the rear, and the maximum diameter was 660 mm (26 in), or 670 mm (26.4 in) for wet tyres. In 2001, Michelin entered Formula One, once again creating a tyre war after Bridgestone had been the sole tyre provider for the preceding two seasons. In 2005, tyre changes were disallowed in Formula One; the compounds were therefore harder, as the tyres had to last the full race distance of around 300 km (200 miles). Tyre changes were re-instated in 2006, following the dramatic and highly political 2005 United States Grand Prix, which saw Michelin tyres fail on two separate cars at the same turn, resulting in all Michelin runners pulling out of the Grand Prix, leaving just the three teams using Bridgestone tyres to race. History: For 2007, Bridgestone again became the sole tyre partner and supplier in Formula One with the withdrawal of Michelin, and introduced four compounds of tyre, two of which are made available at each race. The harder tyre (referred to as the "prime" tyre) is more durable but gives less grip, and the softer tyre (referred to as the "option" tyre) gives more grip but is less durable. Both compounds have to be used by each car during a race, and the softer tyre had a painted white stripe in the second groove to distinguish between compounds. This was introduced after the first race of the season, when confusion occurred because a small dot had been put on the sidewall of the tyre instead of the white stripe. Upon the reintroduction of slicks in 2009, the sidewalls of the softer tyres were painted green to indicate the difference in compound, as there were no longer any grooves in the tyres. Each team must use each specification during the race, unless wet or intermediate tyres are used during the race, in which case this rule no longer applies. History: Slick tyres were reintroduced at the beginning of 2009, along with aerodynamic changes intended to shift the balance towards mechanical grip in an attempt to increase overtaking. History: On 2 November 2009, Bridgestone announced their withdrawal from Formula One at the end of the 2010 season. Michelin, Cooper Avon and Pirelli showed interest in taking over the role of tyre partner and supplier. On 24 June 2010, it was announced that Pirelli would be the sole tyre partner and supplier for 2011 and would receive a three-year contract. They thus ended their programmes for both the Grand-Am Rolex Sports Car Series and FIA World Rally Championship after spending three years as an official tyre partner and supplier (as the Grand-Am Rolex Sports Car Series switched to Continental and the FIA World Rally Championship switched to Michelin tyres in 2011). During August 2010, Pirelli commenced its test programme with the Toyota TF109 at the Mugello Circuit with Nick Heidfeld as the test driver. From 2011, the feeder GP2 Series used identical Pirelli tyres to those in F1. In 2009, with the removal of the four 14 mm (0.55 in) grooves, the front tyres gained a proportionally larger contact patch. In 2010, the front tyres were narrowed from 270 mm (11 in) to 245 mm (9.6 in), in order to improve the balance of grip between the front and rear. In 2011, with the sole tyre supplier having been changed from Bridgestone to Pirelli, the rules were the same as the 2010 season rules concerning the tyres.
All teams were still required to use each type of dry tyre compound supplied at the race, and drivers who made it through to Q3 still had to start the race on the tyres with which they had set their fastest qualifying time. However, the way of denoting different tyre specifications was changed. Rather than a green stripe denoting a softer compound, for each tyre specification, the lettering on the tyre would have a specific colour. The hard compound would have silver lettering, the medium compound would have white lettering, the soft tyres would have yellow lettering and the super-soft tyres would have red lettering. For the wet tyres, the intermediate tyres would have light blue lettering and the full wet tyres would have orange lettering. At the 2011 Malaysian Grand Prix, Pirelli introduced a coloured band around the outside of the tyre on the softer of the two dry compounds. This was due to confusion during the first round of the season. This measure was said to be a stopgap, with a permanent solution due to be implemented at the first European race of the season. The coloured line featured at the Chinese Grand Prix too. From the Turkish Grand Prix, the permanent solution was implemented; the option compound had a new marking. The option tyre had two thick coloured lines between the Pirelli and P Zero logos of each tyre, which made it easier to see the colour of the marking when the tyre rotated. The prime tyre retained the same markings as previously, though later in the season its sidewall was updated with the new markings. History: In 2016, new tyre rules were introduced. Pirelli nominated three different compounds of slick tyres to bring to each race. Each team had 13 sets of dry tyres for the race weekend. Of the 13 sets, two sets of tyres were chosen by Pirelli to be reserved for the race. Additionally, one set of the softest compound was set aside for Q3. Teams were free to choose what they liked for their ten remaining sets from the three chosen compounds. Each driver had to use at least two different dry weather compounds during the race (including one set of the mandatory race tyres), and drivers who made it to Q3 had to start the race on the tyres on which they set their fastest Q2 lap. Teams were mandated to inform the FIA about their tyre choices eight weeks before the start of a European event and 14 weeks before a non-European race. For the 2017 F1 season, significantly wider Pirelli tyres were introduced at both the front and rear axles, while the overall diameter of the tyres was increased by 10 mm, from 660 to 670 mm (26.0 to 26.4 in). Front tyre size increased to 305/670-R13, up from the previous 245/660-R13, while rear tyre size increased to 405/670-R13, up from the previous 325/660-R13. In 2017 and 2018, the FIA Formula 2 Championship continued to use the pre-2017 size Pirelli F1 tyres. History: Pirelli introduced two new tyre compounds for the 2018 F1 season: hypersoft (pink) and superhard (orange). The hard tyre became ice blue. Heading into the 2019 season, Pirelli reduced the tyre range from seven to five dry weather compounds. They also scrapped the tyre naming system such that the tyres were denoted at each Grand Prix independently as hard, medium and soft with white, yellow and red sidewalls respectively, rather than having a separate name and colour for each of the five tyres. The change was implemented so that casual fans could better understand the tyre system.
History: As the Formula One wheel rim diameter switches from 13 to 18 in (330 to 457 mm), the diameter of 2022-spec Pirelli Formula One tyres will also be altered, from 670 to 720 mm (26.4 to 28.3 in), while their tread width is expected to be unchanged. History: 2005 United States Grand Prix controversy On Friday, 17 June 2005, during the afternoon's practice session, Ralf Schumacher, who was driving for Toyota, crashed heavily in turn 13 of the Indianapolis Motor Speedway road course, as a result of a left-rear tyre failure. Turn 13 on the Indianapolis Motor Speedway road course is a high-speed banked turn, unique in Formula One racing, that causes a greater-than-usual lateral load. This load can cause the sidewalls of the tyre to bow and wear in abnormal places. History: The following day, Michelin reported that the tyres it had provided for its seven customer teams (BAR, McLaren, Red Bull, Renault, Toyota, Sauber, and Williams) were unsafe for extended high-speed use on this turn, and announced its intention to fly in another set of tyres from its Clermont-Ferrand headquarters. However, the replacement tyres flown in, which were of the type used in the Spanish Grand Prix earlier that year, turned out to have the same problem when tested. In a letter to FIA Race Director Charlie Whiting, Michelin representatives Pierre Dupasquier and Nick Shorrock revealed that they did not know the cause of Schumacher's tyre failure, and that unless the cars could be slowed down in turn 13, Michelin's tyres would be unsafe and unsuitable for use during the race. Whiting replied, expressing his surprise that Michelin had not brought along a second set of tyres. Instead, he suggested that the teams be informed of the maximum safe speed in turn 13, and offered to monitor the turn by penalising any excess speed on the Michelin cars. He also addressed several solutions which had been proposed by the teams, insisting that use of the tyres flown in overnight would result in penalties, and that the placement of a chicane in the turn was "out of the question": the race would not be sanctioned by the FIA (making it a non-championship race) if the track layout was changed. He deemed the Michelin teams' proposals to be "grossly unfair" to the Bridgestone teams. In a second letter, Dupasquier and Shorrock announced that they would not permit their teams to race on Michelin's tyres. The race then took place with only the three Bridgestone teams (Ferrari, Jordan and Minardi) taking part. The race was won by Michael Schumacher. History: Make Cars Green campaign At the 2008 Japanese Grand Prix, the tyres had the grooves painted green, as part of a promotion by the FIA to reduce the impact of motoring on the environment, called Make Cars Green. The softer of the two types of tyre still had the second innermost groove painted white, as per normal. Upon the return of slicks at the beginning of the 2009 season, the white stripe used to indicate differences between the tyres was no longer possible due to the lack of grooves on the tyres. Subsequently, in a continuation of the Make Cars Green tyres in Japan, Bridgestone painted the sidewalls of the option tyre green instead. Tyre summary: There are eight tyre compounds available for the 2023 season. Two of these are for wet weather driving, the intermediate (indicated by a green sidewall) for light standing water conditions, and the full wet (indicated by a blue sidewall) for heavy standing water.
These are available to all the teams at every Grand Prix. Pirelli announced a change to the available tyre compounds for 2023, with a compound to be inserted between the old C1 and C2 compounds. This change is supposed to provide teams with more flexible strategy options after criticism of the original C1 compound for a large drop in grip compared to the other tyres. The remaining six tyre compounds are for dry running and are denoted C0 to C5, with C0 being the hardest tyre, meaning it provides the least grip but is the most durable, and C5 being the softest, having the most grip but being the least durable. The six tyre compounds form a sliding scale of durability and grip levels. Tyre summary: Pirelli nominates three of the compounds to be run at each race. Of these three, the hardest compound is named the hard tyre for the weekend and is denoted by a white sidewall, while the softest compound is named the soft and is denoted by a red sidewall, with the third of the nominated tyres named the medium tyre, which is denoted by a yellow sidewall. Drivers have to use at least two of the dry weather compound tyres during a race, unless the race is affected by wet weather. Tyre summary: With the intention of making tyre usage more sustainable in the future, Formula One will trial a reduction in allocated tyre sets from 13 to 11 at two races in 2023. At these races the use of tyres in qualifying will be mandated as hard in Q1, medium in Q2 and soft in Q3, assuming that the weather is dry. Teams are usually free to choose which tyre compound they run during qualifying. Manufacturers: From 2011 onwards, the Italian manufacturer Pirelli has been the sole tyre supplier. The deal is currently set to last until the 2024 season. Past manufacturers include Avon, Bridgestone, Continental, Dunlop, Englebert, Firestone, Goodyear and Michelin. Tyre manufacturers by season: The manufacturer that is competing in 2023 is shown in bold. These results are correct as of the 2023 Belgian Grand Prix. Records: Ordered by number of races won. The manufacturer that is competing in 2023 is shown in bold. These results are correct as of the 2023 Belgian Grand Prix.
**Birthing center** Birthing center: A birthing center is a healthcare facility, staffed by nurse midwives, midwives and/or obstetricians, for mothers in labor, who may be assisted by doulas and coaches. The midwives monitor the labor and the well-being of the mother and the baby during birth. Doulas can assist the midwives and make the birth easier. Should additional medical assistance be required, the mother can be transferred to a hospital. This transfer is more likely if an epidural is needed, there is meconium staining, it is a prolonged labor, or the newborn needs intensive care. Some hospitals have birth centers as an alternative to the usual high-tech maternity wards. Birthing center: A birth center presents a more home-like environment than a hospital labor ward, typically with more options during labor: food and drink, music, and the attendance of family and friends if desired. Other characteristics can also include non-institutional furniture such as queen-sized beds, large enough for both mother and father, and perhaps birthing tubs or showers for water births, an option that can help to reduce birthing pains. These centers also offer opioid injections (pethidine) and Entonox gas as a way to help alleviate pain. The decor is meant to emphasize the normality of birth. In a birth center, women are free to act more spontaneously during their birth, such as squatting, walking or performing other postures that assist in labor. Active birth is encouraged. The length of stay after a birth is shorter at a birth center; sometimes just six hours after birth the mother and infant can go home. Comparison of traditional vs. alternative: A 2012 Cochrane review compared traditional hospital births with alternative, home-like settings in or near conventional hospital labor wards. In comparison with traditional hospital wards, home-like settings had a trend towards an increase in spontaneous vaginal birth, continued breastfeeding at six to eight weeks, and a positive view of care. The review also found that having a birth at an alternative birth center decreased the likelihood of medical intervention during labor, without increasing risk to mother or child. The likelihood of risks during a pregnancy or a mother's preexisting medical conditions may affect that mother's ability to use a birthing center. Around the world: United States Like clinics, birth centers arose on the East and West Coasts in the 1970s, as alternatives to heavily institutionalized health care. Today, use of birthing centers is generally covered by health insurance. Several of the practices which were innovated in birth centers are beginning to enter mainstream hospital labor and delivery floors, including: bathtubs or whirlpools for labor and/or birthing; showers for mothers to labor in; hospital acceptance of the mother choosing to walk during labor, use a labor/birthing ball, or not use pain medication during labor; rooming-in of the infant after birth; and beds for family members to stay with the mother during labor and birth. There are certain requirements that a woman needs to meet in order to be able to birth at a birth center. First, the mother must have an uncomplicated, low-risk pregnancy, such as a singleton pregnancy (no twins) with the baby positioned head down (cephalic presentation). Free-standing birth centers require hospital backup in case complications arise during labor that require more complex care.
However, even if a delivery cannot happen at the birth center due to a high-risk pregnancy, birth center midwives might provide prenatal care up to a certain week of gestation. Around the world: Accreditation The nationwide organization supporting and promoting birth centers is the American Association of Birth Centers (AABC). Many birth centers nationwide, like hospitals, chose to become accredited through the Commission for the Accreditation of Birth Centers (CABC). Since 1985, CABC has provided this accreditation service, as well as education and support to birth centers and alongside maternity centers. Some birth centers are required to obtain accreditation in order to apply for state licensure, or to become in-network with certain insurance plans. Many birth centers chose voluntarily to undergo accreditation to demonstrate their commitment to safety and continuous quality improvement. Around the world: Accreditation depends on a set of measures, called indicators, against which birth centers are evaluated during site visits by CABC Accreditation Specialists. Adherence to these indicators ensures both safety of mother and infant as well as protects the integrity of the birth center model of care as distinct from hospital care. For example, while continuous fetal monitoring is typical in hospital labor and delivery units, intermittent monitoring with a handheld electronic device is used in birth centers to protect the birthing woman's freedom of mobility during her labor and birth. CABC Indicators also require a birth center to have a written plan for how to proceed with transfer to a hospital in the event of an emergency that cannot be managed at the birth center. Around the world: Birth center applications for accreditation are reviewed by Commissioners on the board of trustees of the CABC. These Commissioners are Certified Nurse‐Midwives, certified professional midwives, physician specialists in obstetrics and neonatology, nurses, and birth center consumers. The Commissioners meet quarterly to review items of concern relevant to birth center education and development and publish a monthly newsletter for CABC-accredited birth centers for continuing education. The CABC works with policy advocacy organizations to advance and promote birth centers and the midwifery model of care. While CABC works closely alongside the AABC, the organizations are separate and have distinctly separate roles regarding national standards and accreditation of birth centers. CABC is the only accrediting body devoted exclusively to birth centers, whose site visitors are specifically trained to conduct a site visit in a birth center; and whose review panels have first‐hand knowledge of the philosophy, clinical care and operation of birth centers. AABC on the other hand, is a membership and trade organization for established and developing birth centers and other individuals, agencies and institutions that support the birth center model of care and the national AABC Standards for Birth Centers. Around the world: There has been much research in recent years to support out of hospital birth—especially birth center birth—as not just safe but at times safer than hospital birth because of its judicious use of technology, licensed professionals and connection to the health care system. Around the world: Amish centers The Amish, known for their great respect for tradition, usually have homebirths or give birth at birthing centers. 
Most Amish women only go to a hospital to give birth when there is a known medical risk for her or the child, but some Amish women choose to go the hospital during labor for peace of mind. Two books have been written about Amish medical issues including their birthing practices: Dr. Frau: A Woman Doctor among the Amish by Grace Kaiser and House calls and hitching posts: stories from Dr. Elton Lehman's career among the Amish by Elton Lehman. Lehman is known for his work in founding a freestanding Amish birthing center. The Mount Eaton Care Center, Ohio's first such center, was established in 1984. In her book, Kaiser recounts the private nature of birthing among the Amish. She points out the practice of Amish women keeping labor a secret to all except their own husbands and midwife or obstetrician, as well as the practice of women waiting until active labor before summoning a midwife or OB. Due to the latter practice, fathers occasionally end up delivering their own children before the midwife or OB can arrive if a homebirth is selected. Amish women who choose a home birth often continue with household duties until they are no longer physically able to continue. If birthing in a birth center, they are free to labor similar to that of home births: eating, drinking, visiting with their family members, etc. Around the world: Australia In a response to the National Maternity Action Plan, State and Territory Governments in 2002 started to respond to consumer demand for an increased number of birth centers to be made available to women. Whilst most birth centers are attached to hospitals, some are being established as free-standing centers much further away from hospital back-up. As long as they are within 90 minutes of a hospital, they are considered 'safe'. Most birth centers are now being run solely by midwives, with obstetric back-up only used when there are complications. Around the world: Some birth centers in Australia are moving away from the 'low-risk' model and are moving to an All-risk model where women with medical complications are accepted into the birth center but extra care is provided to them where necessary. Canada Birthing centers remain controversial. Hospitals do offer this option, and it is available at special clinics. Around the world: Netherlands The Netherlands has seen a growth in the number of locations for giving birth, other than homebirth or hospital maternity wards. In these facilities the birth is overseen by a midwife, typically in a homelike environment. Most community midwives work in group practices and only refer patients to hospital obstetric units for labor complications. Certification requires a four-year education at a midwifery academy. Around the world: Nepal A three-year study focusing on the remote Solukhumbu District explored access to perinatal care and found that 36% of deliveries took place in a health facility. Results from this study noted that access to timely transportation options were a major factor in the lack of accessibility for maternal delivery support.
**Hip hip hooray** Hip hip hooray: Hip hip hooray (also hippity hip hooray; hooray may also be spelled and pronounced hoorah, hurrah, hurray etc.) is a cheer called out to express congratulation toward someone or something, in the English-speaking world and elsewhere. Hip hip hooray: By a sole speaker, it is a form of interjection. In a group, it takes the form of call and response: the cheer is initiated by one person exclaiming "Three cheers for...[someone or something]" (or, more archaically, "Three times three"), then calling out "hip hip" (archaically, "hip hip hip") three times, each time being responded by "hooray" or "hurrah". The cheer continues to be used to express congratulations. In Australia and the United Kingdom, the cheer is usually expressed after the singing of "Happy Birthday to You". In Canada and the United Kingdom, the cheer has been used to greet and salute the monarch at public events. History: The call was recorded in England in the beginning of the 19th century in connection with making a toast. Eighteenth century dictionaries list "Hip" as an attention-getting interjection, and in an example from 1790 it is repeated. "Hip-hip" was added as a preparatory call before making a toast or cheer in the early 19th century, probably after 1806. By 1813, it had reached its modern form, hip-hip-hurrah.It has been suggested that the word "hip" stems from a medieval Latin acronym, "Hierosolyma Est Perdita", meaning "Jerusalem is lost", a term that gained notoriety in the German Hep hep riots of August to October 1819. Cornell's Michael Fontaine disputes this etymology, tracing it to a single letter in an English newspaper published August 28, 1819, some weeks after the riots. He concludes that the "acrostic interpretation ... has no basis in fact." Ritchie Robertson also disputes the "folk etymology" of the acronym interpretation, citing Jacob Katz.One theory about the origin of "hurrah" is that the Europeans picked up the Mongol exclamation "hooray" as an enthusiastic cry of bravado and mutual encouragement. See Jack Weatherford's book Genghis Khan and the Making of the Modern World.
**Hard seat** Hard seat: The Hard seat (Chinese: 硬座; pinyin: yìng zuò) or Semi-cushioned seat, abbreviated YZ, is the cheapest class of seating in China Railway. It is available on non-high-speed trains. The name "hard seat" derives from the hard, wooden seats used on regular passenger trains in the Mao era. Modern "hard seats", however, are upholstered. Several different ticket types and prices can be obtained, and the prices have changed over time. Each carriage provides the most basic services common to all Chinese trains, namely toilets, wash basins and a boiling water dispenser. Hard seat: Compared to soft seat, hard seat carriages have more seats per row (2+3 vs. 2+2) and are usually more crowded, and people without seats may stand in hard seat carriages. Coaches: The coaches currently in use include: YZ-21 (no air conditioner), YZ-22 (no air conditioner), YZ-22B (no air conditioner), YZ-25B (no air conditioner), YZ-25G, YZ-25K, YZ-25T, YZ-30, YZ-32, SYZ-25B (double-decked) and SYZ-25K (double-decked). Price: As of 2006, the following ticket prices were used on most regular lines, but some special lines have different prices.
**AMD mobile platform** AMD mobile platform: The AMD mobile platform is an open platform for laptops from AMD. Though little marketing was done for this platform, it has been competing with the Centrino platform in the segment to gain more market share. Each platform generation has its own specification, keeping up with the latest technology developments. Since the acquisition of ATI, AMD began to include Mobility Radeon GPUs and AMD chipsets as part of the requirements of the mobile platform; the first such platform was the Puma platform. Open platform approach: In February 2007, AMD announced the "Better by Design" initiative to continue the success of the open platform approach it had taken for desktops in early 2003 after the launch of Athlon 64 processors, when, lacking a chipset of its own, AMD opened the platform to chipset vendors such as VIA, SiS, NVIDIA and AMD subsidiary ATI. The initiative also includes platforms succeeding the Kite Refresh mobile platform. Open platform approach: Under the "Better by Design" initiative, AMD introduced a three-cell arrow sticker to identify mobile platform products, with the top cell being the processor (such as Turion 64 X2). The middle cell is for graphics accelerators from NVIDIA or ATI (as a result of retaining the use of "ATI Radeon" branding for graphics), including onboard graphics (IGP), while the last cell represents the wireless (Wi-Fi, IEEE 802.11 standard) or LAN solution, provided by one of the following companies: Airgo, Atheros, Broadcom, Marvell, Qualcomm, and Realtek. Open platform approach: The stickers are further classified by system performance, according to the processor, into five classes, each having different colours as well as different logos for each component, listed as follows: Market analysis: According to AMD figures in December 2007, the AMD mobile platform gained 19% unit share in the market and accounted for about 23% of the firm's revenue during Q3 2007 while competing with the Intel Centrino platform. Figures for Q1 and Q2 2007 were 15% and 17% unit share, accounting for 14% and 16% of the company's revenue respectively. AMD's mobile platform, even as recently as the Turion 64 X2 platform, has been criticized as consistently performing worse than Intel's Centrino in all areas: system speed, heat dissipation, and battery life. Implementations: Initial platform (2003) Launched in 2003, the initial platform for mobile AMD processors consists of: Kite platform (2006) Introduced in 2006, the Kite platform consists of: Kite Refresh platform (2007) AMD used Kite Refresh as the codename for the second-generation AMD mobile platform introduced in February 2007. Puma platform (2008) The Puma platform, introduced in 2008 with June 2008 availability as the third-generation AMD mobile platform, consists of: Yukon platform (2009) The Yukon platform was introduced on January 8, 2009, with expected April availability, as the first AMD Ultrathin Platform targeting the ultra-portable notebook market. Congo platform (2009) The Congo platform was introduced in September 2009, as the second AMD Ultrathin Platform targeting the ultra-portable notebook market.
Implementations: Tigris platform (2009) The Tigris platform introduced in September 2009 for the AMD Mainstream Notebook Platform consists of: Nile platform (2010) The Nile platform introduced on May 12, 2010, for the third AMD Ultrathin Platform consists of: Danube platform (2010) The Danube platform introduced on May 12, 2010, for the AMD Mainstream Notebook Platform consists of: Brazos (Fusion) platform (2011) The AMD low-power platform introduced on January 4, 2011, is designed for HD netbooks and other emerging form factors. It features the 40 nm C-Series (formerly codenamed Ontario, a 9-watt APU for netbooks and small form factor desktops and devices) and E-Series (formerly codenamed Zacate, an 18-watt TDP APU for ultrathin, mainstream, and value notebooks as well as desktops and all-in-ones) APUs. Implementations: Both low-power APU versions feature two Bobcat x86 cores and fully support DirectX11, DirectCompute (Microsoft programming interface for GPU computing) and OpenCL (cross-platform programming interface standard for multi-core x86 and accelerated GPU computing). Both also include UVD 3 dedicated hardware acceleration for HD video including 1080p resolutions. This platform consists of: Sabine (Fusion) platform (2011) The Sabine platform introduced on June 30, 2011, for the AMD Mainstream Notebook Platform consists of: Comal (Fusion) platform (2012) The Comal platform introduced on May 15, 2012, for the AMD Mainstream Notebook Platform consists of:
**GURPS Dinosaurs** GURPS Dinosaurs: GURPS Dinosaurs is a supplement by Stephen Dedman, published by Steve Jackson Games in 1996 for GURPS (Generic Universal Role-Playing System). Description: GURPS Dinosaurs describes dinosaurs and Earth through prehistoric geological ages. Chapters include descriptions of the climatic conditions and creatures of each geological age: "Timeline" : an overview. "Paleozoic" "Triassic" "Jurassic" "Cretaceous" "Rise of The Mammals" : Beginning of the Tertiary era (from Paleocene to Miocene). "Pliocene and Pleistocene" : "The First Humans" : the predecessors of homo sapiens, as well as the first humans and their civilizations. "Ice Age Characters" : Suggestions for the creation of prehistoric characters, as well as Shamanism and prehistoric equipment. "Prehistoric Campaigns" Publication history: GURPS Dinosaurs is a 128-page softcover book designed by Stephen Dedman, with additional material by Kirk Tate, interior art by Scott Cooper, Russel Hawley, and Pat Ortega, and cover art by Paul Koroshetz. It was published by Steve Jackson Games in 1996. In the 2014 book Designers & Dragons: The '80s, game historian Shannon Appelcline noted that Steve Jackson Games decided in the early 1990s to stop publishing adventures, and as a result "SJG was now putting out standalone GURPS books rather than the more complex tiered book lines. This included more historical subgenre books. Some, such as GURPS Camelot (1991) and GURPS China (1991), were clearly sub-subgenres, while others like GURPS Old West (1991) and GURPS Middle Ages I (1992) covered genres notably missing before this point.": 45 Reception: In the December 1997 edition of Dragon (Issue #242), Rick Swan was ambivalent about this book, noting "As a reality-based reference, GURPS Dinosaurs scores high, cataloging literally hundreds of prehistoric creatures in remarkable detail." But Swan didn't think the book went beyond basic descriptions, commenting, "when it comes to putting all this together in a campaign, GURPS Dinosaurs pretty much leaves you on your own. There’s a ton of hard data, but not much about roleplaying; that is, we’re told a lot about what dinosaurs look like, but darn little about how they behave." He also thought the included scenarios were "too skimpy", and the book lacked sufficient illustrations. Swan concluded by giving the book an average rating of 4 out of 6, saying, "GURPS Dinosaurs is good as far as it goes; it just doesn’t go far enough."
**K-regular sequence** K-regular sequence: In mathematics and theoretical computer science, a k-regular sequence is a sequence satisfying linear recurrence equations that reflect the base-k representations of the integers. The class of k-regular sequences generalizes the class of k-automatic sequences to alphabets of infinite size. Definition: There exist several characterizations of k-regular sequences, all of which are equivalent. Some common characterizations are as follows. For each, we take R′ to be a commutative Noetherian ring and we take R to be a ring containing R′. k-kernel Let k ≥ 2. The k-kernel of the sequence s(n)n≥0 is the set of subsequences Kk(s) = {s(k^e n + r)n≥0 : e ≥ 0 and 0 ≤ r ≤ k^e − 1}. Definition: The sequence s(n)n≥0 is (R′, k)-regular (often shortened to just "k-regular") if the R′-module generated by Kk(s) is a finitely-generated R′-module. In the special case when R′ = R = Q, the sequence s(n)n≥0 is k-regular if Kk(s) is contained in a finite-dimensional vector space over Q. Linear combinations A sequence s(n) is k-regular if there exists an integer E such that, for all e_j > E and 0 ≤ r_j ≤ k^{e_j} − 1, every subsequence of s of the form s(k^{e_j} n + r_j) is expressible as an R′-linear combination ∑_i c_{ij} s(k^{f_{ij}} n + b_{ij}), where c_{ij} is an integer, f_{ij} ≤ E, and 0 ≤ b_{ij} ≤ k^{f_{ij}} − 1. Alternatively, a sequence s(n) is k-regular if there exist an integer r and subsequences s1(n), ..., sr(n) such that, for all 1 ≤ i ≤ r and 0 ≤ a ≤ k − 1, every sequence si(kn + a) in the k-kernel Kk(s) is an R′-linear combination of the subsequences si(n). Definition: Formal series Let x0, ..., xk−1 be a set of k non-commuting variables and let τ be a map sending a natural number n to the string x_{a_0} ... x_{a_{e−1}}, where the base-k representation of n is the string a_{e−1} ... a_0. Then a sequence s(n) is k-regular if and only if the formal series ∑_{n≥0} s(n)τ(n) is Z-rational. Definition: Automata-theoretic The formal series definition of a k-regular sequence leads to an automaton characterization similar to Schützenberger's matrix machine. History: The notion of k-regular sequences was first investigated in a pair of papers by Allouche and Shallit. Prior to this, Berstel and Reutenauer studied the theory of rational series, which is closely related to k-regular sequences. Examples: Ruler sequence Let s(n) = ν2(n+1) be the 2-adic valuation of n+1. The ruler sequence s(n)n≥0 = 0, 1, 0, 2, 0, 1, 0, 3, … (OEIS: A007814) is 2-regular, and the 2-kernel {s(2^e n + r)n≥0 : e ≥ 0 and 0 ≤ r ≤ 2^e − 1} is contained in the two-dimensional vector space generated by s(n)n≥0 and the constant sequence 1, 1, 1, …. These basis elements lead to the recurrence relations s(2n) = 0, s(4n+1) = s(2n+1) − s(n), s(4n+3) = 2s(2n+1) − s(n), which, along with the initial conditions s(0) = 0 and s(1) = 1, uniquely determine the sequence. Examples: Thue–Morse sequence The Thue–Morse sequence t(n) (OEIS: A010060) is the fixed point of the morphism 0 → 01, 1 → 10. It is known that the Thue–Morse sequence is 2-automatic. Thus, it is also 2-regular, and its 2-kernel {t(2^e n + r)n≥0 : e ≥ 0 and 0 ≤ r ≤ 2^e − 1} consists of the subsequences t(n)n≥0 and t(2n+1)n≥0. Cantor numbers The sequence of Cantor numbers c(n) (OEIS: A005823) consists of numbers whose ternary expansions contain no 1s. It is straightforward to show that c(2n) = 3c(n), c(2n+1) = 3c(n) + 2, and therefore the sequence of Cantor numbers is 2-regular. Similarly the Stanley sequence 0, 1, 3, 4, 9, 10, 12, 13, 27, 28, 30, 31, 36, 37, 39, 40, ... (sequence A005836 in the OEIS) of numbers whose ternary expansions contain no 2s is also 2-regular.
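As a quick numerical check of the ruler-sequence recurrences above, the following Python sketch (the helper names nu2 and s are our own, purely for illustration) verifies s(2n) = 0, s(4n+1) = s(2n+1) − s(n) and s(4n+3) = 2s(2n+1) − s(n) for small n.

```python
# Minimal sketch: verify the stated 2-regular recurrences for the
# ruler sequence s(n) = nu_2(n + 1) over a small range of n.

def nu2(m: int) -> int:
    """2-adic valuation: exponent of the largest power of 2 dividing m."""
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

def s(n: int) -> int:
    return nu2(n + 1)

for n in range(200):
    assert s(2 * n) == 0
    assert s(4 * n + 1) == s(2 * n + 1) - s(n)
    assert s(4 * n + 3) == 2 * s(2 * n + 1) - s(n)

print([s(n) for n in range(16)])  # [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]
```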
Examples: Sorting numbers A somewhat interesting application of the notion of k-regularity to the broader study of algorithms is found in the analysis of the merge sort algorithm. Given a list of n values, the number of comparisons made by the merge sort algorithm is given by the sorting numbers, governed by the recurrence relation T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n − 1, with T(1) = 0. As a result, the sequence defined by the recurrence relation for merge sort, T(n), constitutes a 2-regular sequence. Other sequences If f(x) is an integer-valued polynomial, then f(n)n≥0 is k-regular for every k ≥ 2. The Glaisher–Gould sequence is 2-regular. The Stern–Brocot sequence is 2-regular. Allouche and Shallit give a number of additional examples of k-regular sequences in their papers. Properties: k-regular sequences exhibit a number of interesting properties. Every k-automatic sequence is k-regular. Every k-synchronized sequence is k-regular. A k-regular sequence takes on finitely many values if and only if it is k-automatic. This is an immediate consequence of the class of k-regular sequences being a generalization of the class of k-automatic sequences. The class of k-regular sequences is closed under termwise addition, termwise multiplication, and convolution. The class of k-regular sequences is also closed under scaling each term of the sequence by an integer λ. In particular, the set of k-regular power series forms a ring. If s(n)n≥0 is k-regular, then for all integers m ≥ 1, (s(n) mod m)n≥0 is k-automatic. However, the converse does not hold. For multiplicatively independent k, l ≥ 2, if a sequence is both k-regular and l-regular, then the sequence satisfies a linear recurrence. This is a generalization of a result due to Cobham regarding sequences that are both k-automatic and l-automatic. The nth term of a k-regular sequence of integers grows at most polynomially in n. If F is a field and x ∈ F, then the sequence of powers (x^n)n≥0 is k-regular if and only if x = 0 or x is a root of unity. Proving and disproving k-regularity: Given a candidate sequence s = s(n)n≥0 that is not known to be k-regular, k-regularity can typically be proved directly from the definition by calculating elements of the kernel of s and proving that all elements of the form (s(k^r n + e))n≥0 with r sufficiently large and 0 ≤ e < k^r can be written as linear combinations of kernel elements with smaller exponents in the place of r. This is usually computationally straightforward. Proving and disproving k-regularity: On the other hand, disproving k-regularity of the candidate sequence s usually requires one to produce a Z-linearly independent subset in the kernel of s, which is typically trickier. Here is one example of such a proof. Proving and disproving k-regularity: Let e0(n) denote the number of 0's in the binary expansion of n. Let e1(n) denote the number of 1's in the binary expansion of n. The sequence f(n) := e0(n) − e1(n) can be shown to be 2-regular. The sequence g(n) := |f(n)| is, however, not 2-regular, by the following argument. Suppose (g(n))n≥0 is 2-regular. We claim that the elements (g(2^k n))n≥1 for k ≥ 0 of the 2-kernel of g are linearly independent over Z. The function n ↦ e0(n) − e1(n) is surjective onto the integers, so let x_m be the least integer such that e0(x_m) − e1(x_m) = m. By 2-regularity of (g(n))n≥0, there exist b ≥ 0 and constants c_i such that for each n ≥ 0, ∑_{0 ≤ i ≤ b} c_i g(2^i n) = 0. Proving and disproving k-regularity: Let a be the least value for which c_a ≠ 0. Then for every n ≥ 0, g(2^a n) = ∑_{a+1 ≤ i ≤ b} −(c_i/c_a) g(2^i n).
Evaluating this expression at n = x_m, where m = 0, −1, 1, 2, −2 and so on in succession, we obtain, on the left-hand side, g(2^a x_m) = |e0(x_m) − e1(x_m) + a| = |m + a|, and on the right-hand side, ∑_{a+1 ≤ i ≤ b} −(c_i/c_a) |m + i|. It follows that for every integer m, |m + a| = ∑_{a+1 ≤ i ≤ b} −(c_i/c_a) |m + i|. But for m ≥ −a − 1, the right-hand side of the equation is monotone because it is of the form Am + B for some constants A, B, whereas the left-hand side is not, as can be checked by successively plugging in m = −a − 1, m = −a, and m = −a + 1. Therefore, (g(n))n≥0 is not 2-regular.
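To make the kernel calculations in this section concrete, here is a small numerical sketch (illustrative evidence only, not a proof; the function names, truncation length and depth are our own choices). It compares truncated 2-kernel subsequences of the ruler sequence, whose kernel spans a two-dimensional space, with those of g(n) = |e0(n) − e1(n)|, whose kernel keeps gaining linearly independent elements, consistent with the argument above.

```python
# Illustrative numerics: estimate the dimension spanned by truncated
# 2-kernel subsequences of a 2-regular sequence (ruler) and of g above.
import numpy as np

def nu2(m):
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

def bits(n):
    return bin(n)[2:]

s = lambda n: nu2(n + 1)                                     # ruler sequence
g = lambda n: abs(bits(n).count("0") - bits(n).count("1"))   # |e0(n) - e1(n)|

def kernel_rank(seq, depth, length=256):
    """Numerical rank of the matrix whose rows are length-term prefixes of
    the subsequences seq(2**e * n + r), for e <= depth and 0 <= r < 2**e."""
    rows = [[seq(2**e * n + r) for n in range(length)]
            for e in range(depth + 1) for r in range(2**e)]
    return np.linalg.matrix_rank(np.array(rows, dtype=float))

for depth in range(1, 5):
    # Observed: the rank stays at 2 for the ruler sequence, while for g it
    # keeps growing with depth, as the non-regularity argument suggests.
    print(depth, kernel_rank(s, depth), kernel_rank(g, depth))
```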
**Sabouraud agar** Sabouraud agar: Sabouraud agar or Sabouraud dextrose agar (SDA) is a type of agar growth medium containing peptones. It is used to cultivate dermatophytes and other types of fungi, and can also grow filamentous bacteria such as Nocardia. It has utility for research and clinical care. It was created by, and is named after, Raymond Sabouraud in 1892. In 1977 the formulation was adjusted by Chester W. Emmons when the pH level was brought closer to the neutral range and the dextrose concentration lowered to support the growth of other microorganisms. The acidic pH (5.6) of traditional Sabouraud agar inhibits bacterial growth. Typical composition: Sabouraud agar is commercially available and typically contains: 40 g/L dextrose 10 g/L peptone 20 g/L agar pH 5.6 Medical Use: Clinical laboratories can use this growth medium to diagnose and further speciate fungal infections, allowing medical professionals to provide appropriate treatment with antifungal medications. Histoplasma and other fungal causes of atypical pneumonia can be grown on this medium.
**Bromfenac** Bromfenac: Bromfenac is a nonsteroidal anti-inflammatory drug (NSAID) marketed in the US as an ophthalmic solution (brand names Prolensa and Bromday, prior formulation brand name Xibrom, which has since been discontinued) by ISTA Pharmaceuticals for short-term, local use. Prolensa and Bromday are the once-daily formulation of bromfenac, while Xibrom was approved for twice-daily administration. In the European Union, the brand name is Yellox. Bromfenac is indicated for the treatment of ocular inflammation and pain after cataract surgery. Medical uses: Bromfenac is indicated for the treatment of postoperative ocular inflammation following cataract extraction.The drug has been shown to reduce macular edema and thickness of the retina (an indicator for inflammation) and improve visual acuity after surgery. Contraindications: Bromfenac is contraindicated for people with adverse reactions to NSAIDs, such as asthma or rashes. Side effects: Bromfenac eye drops are generally well tolerated. Comparatively common side effects in clinical studies included abnormal sensations in eye (0.5% of people treated with bromfenac), mild to moderate erosion of the cornea (0.4%), eye pruritus (0.4%), eye pain (0.3%) and redness (0.3%). Serious side effects such as corneal perforation were not reported in studies but only during post-marketing in less than one patient in 1000. Interactions: No systematic interaction studies have been performed. There are no known cases of interactions with antibiotic eye drops. Blood plasma levels remain very low during bromfenac therapy, so interactions with drugs taken by mouth are unlikely. Pharmacology: Mechanism of action As an NSAID, bromfenac works by inhibiting prostaglandin synthesis by blocking the cyclooxygenase (COX) enzymes. It preferably acts on COX-2 and only has a low affinity for COX-1. Pharmacology: Pharmacokinetics Bromfenac is well absorbed through the cornea and reaches highest concentrations in the aqueous humour after 150 to 180 minutes, with a biological half-life of 1.4 hours and high drug levels being maintained for at least 12 hours. It is mainly concentrated in the aqueous humour and conjunctiva, and much less in the lens and vitreous body.Concentrations in the blood plasma are too low to be measured quantitatively. 99.8% of the substance are bound to plasma proteins. The enzyme mainly responsible for metabolization of bromfenac is CYP2C9, and metabolites include the lactam and several conjugated compounds. 82% are excreted via the urine, and 13% via the faeces.The addition of a bromine atom to bromfenac's chemical structure (Halogenation) increases the molecule’s lipophilicity, enhances penetration into ocular tissues, and lowers its IC50 (the drug concentration required to inhibit COX enzyme activity by 50%), thus increasing its potency. Chemistry: Along with indomethacin, diclofenac and others, bromfenac belongs to the acetic acid group of NSAIDs. It is used in form of bromfenac sodium · 1.5 H2O (CAS number: 120638-55-3 ), which is soluble in water, methanol and aqueous bases, insoluble in chloroform and aqueous acids, and melts at 284 to 286 °C (543 to 547 °F) under decomposition. History: For ophthalmic use, bromfenac has been prescribed more than 20,000,000 times across the world. As an eye drop, it has been available since 2000, starting in Japan where it was sold as Bronuck. It was first FDA approved for use in the United States in 2005, and it was marketed as Xibrom, twice-daily. 
In October 2010, Bromday received FDA approval as a new, once-daily formulation. More recently, in 2013, Prolensa was also approved by the FDA. Bromfenac eye drops have been marketed in the European Union since 2011, and are available on worldwide markets through agreements with Bausch & Lomb, Croma-Pharma, and other companies. Bromfenac was formerly marketed in the United States by Wyeth-Ayerst in an oral formulation called Duract for short-term relief of pain (less than 10 days at a time). It was brought to market in July 1997 and was withdrawn on 22 June 1998, following numerous reports of hepatotoxicity in patients who had taken the medication for longer than the recommended 10-day period.
**Environmental design in rail transportation** Environmental design in rail transportation: Environmental design is an emerging topic in railroad technology. From the 1980s to 2009, fuel efficiency in diesel locomotives in the USA increased 85%, allowing these trains to go farther and move more freight while using less fuel. New low-impact electric and hybrid trains reduce overall carbon emissions. Also, train manufacturers have started utilizing hydrogen technology for propulsion, with carbon emissions only coming from the manufacturing of the hydrogen itself. Increasing efficiency while lowering emissions: Diesel trains Diesel trains replaced the steam engine in the late 1920s as a cleaner, more efficient way of moving people and goods. Since 1980, the amount of freight being hauled by diesel trains has nearly doubled, yet the fuel consumption of trains has virtually stayed the same. Estimates have shown that about 170,000,000 cubic metres (45×10^9 US gal) of fuel have been saved by increasing the efficiency of diesel trains. The US Department of Energy reported that commercial airline energy intensity was 2,352 kilojoules per kilometre (3,587 BTU/mi), automobiles were 2,327 kilojoules per kilometre (3,549 BTU/mi), while commuter rail energy intensity was just 1,804 kilojoules per kilometre (2,751 BTU/mi), which indicates that rail transportation is the most energy efficient of the three. The Union Pacific Railroad has implemented a particle filter on its diesel engines. Silicon carbide blocks trap particles from the exhaust as they leave the engine, greatly reducing emissions. Increasing efficiency while lowering emissions: Electric trains Electric trains have no direct carbon emissions because they are driven entirely by electric motors. However, the electricity used to power these motors was predominantly generated by burning fossil fuels such as coal, which produces a large amount of carbon emissions. With the emergence of 'clean energy' generation, electric trains can run with very low environmental impact. For example, the proposal for the high-speed rail line between San Francisco and Los Angeles in California has the potential for zero greenhouse gas emissions, with the 3,350 GWh required each year being generated by California's extensive infrastructure of renewable energy sources. Increasing efficiency while lowering emissions: Hybrid trains Since 1986, engineers have been developing electric-diesel "hybrid" trains. One type of hybrid train uses battery power when the train is idling or moving at low speed, and a diesel engine at higher speeds. The batteries are recharged by power from the diesel motor, by charge captured through regenerative braking, or by a combination of both. According to the Institute of Electrical and Electronics Engineers, hybrid trains reduce the carbon emissions of diesel trains by 19%. Another type of hybrid train, such as the RailPower Technologies Green Goat, uses a large battery and a small set of generators ("genset") for power. The genset is run at a constant speed and is attached to a generator to replenish the battery. Increasing efficiency while lowering emissions: Hydrail Hydrogen propulsion is an emerging technology and is currently being implemented in locomotives. Hydrogen-powered trains, dubbed "hydrail", emit only water as a by-product of combustion and have zero direct greenhouse gas emissions.
However, the process used to generate hydrogen in a form useful for powering trains does produce a small amount of greenhouse gases. Using wind energy and electrolysis, 6.85 grams of greenhouse gases per MJ of LHV are produced, which is an insignificant amount compared to the 22 pounds of greenhouse gas emissions from one gallon of gasoline. Trains are prime targets for hydrogen propulsion due to their ability to store massive tanks of hydrogen. Emissions comparison: Rail transportation emits about 0.2 pounds of greenhouse gases per passenger mile (55 g/km) when each car is filled with 50 passengers. This figure increases to about 0.5 pounds per passenger mile (140 g/km) when cars are only filled to half that amount. These numbers are still much lower than those of jet transportation, about 1 pound per passenger mile (280 g/km), and that of a solo car driver, about 1.15 pounds per passenger mile (325 g/km). Even the fuel-efficient Prius emits more greenhouse gases per passenger mile. Estimates have shown that if just 10% of long-distance freight that currently moves by truck were instead moved by diesel trains, the resulting carbon emission reduction would be the equivalent of taking 2 million cars off the road. The results are more dramatic when the diesel train figures are replaced by hybrid and electric train figures.
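As a rough illustration of the occupancy arithmetic behind these per-passenger figures, the sketch below uses a simple proportional model: it assumes a rail car's emissions per mile are roughly fixed (a baseline implied by the 0.2 lb figure at 50 passengers, which is our assumption rather than a sourced value) and divides them among however many riders are aboard.

```python
# Toy occupancy arithmetic; the fixed per-car emissions baseline is an
# assumption inferred from the 0.2 lb/passenger-mile figure at 50 riders.
CAR_LB_PER_MILE = 0.2 * 50   # implied emissions of one rail car per mile

def lb_per_passenger_mile(passengers: int) -> float:
    return CAR_LB_PER_MILE / passengers

for riders in (50, 25, 10):
    print(riders, round(lb_per_passenger_mile(riders), 2))
# 50 -> 0.2, 25 -> 0.4 (the text cites ~0.5, reflecting different assumptions), 10 -> 1.0
```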
**Fake fur** Fake fur: Fake fur, also called faux fur, is a pile fabric engineered to have the appearance and warmth of animal fur. Fake fur can be made from a variety of materials including polyester, nylon, or acrylic. Fake fur: First introduced in 1929, fake furs were initially composed of hair from the South American alpaca. The ensuing decades saw substantial improvements in their quality, particularly in the 1940s, thanks to significant advances in textile manufacture. By the mid-1950s, a transformative development in fake furs occurred when alpaca hair was replaced with acrylic polymers, leading to the creation of the synthetic fur we recognize today.The promotion of fake furs by animal rights and animal welfare organizations has contributed to its increasing popularity as an animal-friendly alternative to traditional fur clothing. Uses: Fake fur is used in all applications where real fur would be used, including stuffed animals, fashion accessories, and home decorations like pillows, bedding and throws. It is also used for craft projects because it can be sewn on a standard sewing machine whereas real fur is generally thicker and requires a special machine, hand sewing, or an awl. Fake fur is increasingly used in mainstream teen fashion; the stores Abercrombie & Fitch and American Eagle use fake furs in their trapper hats and jackets. Ralph Lauren has promoted the use of fake fur in its collections.Fake fur is widely used in making fursuits in the furry community. Uses: In the Soviet, and now Russian Army, fish fur is a derogatory term for low-quality winter clothing and ushanka hats, from a proverb that "a poor man's fur coat is of fish fur". Comparison to real fur: Sewing Process and Storage Unlike genuine fur, faux fur is a type of fabric, which makes it relatively easy to sew. The synthetic nature of faux fur eliminates the need for cold storage to prevent deterioration and prevents infestation by moths, unlike real fur. However, like other fabrics, it should be stored away from humidity, heat, and sunlight in a garment bag or container to maintain its quality. Comparison to real fur: Durability and Energy Consumption Faux fur is perceived as less durable than real fur, and this attribute coupled with its lesser insulating properties forms part of the critique against its use. A study conducted in 1979 claimed that the energy consumption for the production of one coat made out of fake fur was 35 kilowatt-hours (120,000 British thermal units), compared to 127 kWh (433,000 Btu) for trapped animals and 2,340 kWh (7,970,000 Btu) for animals raised in fur farms. Despite these findings, the study has faced criticism for perceived bias and dated methodology. Comparison to real fur: Environmental Impact Fake fur is less biodegradable due to its composition of various synthetic materials, including blends of acrylic and modacrylic polymers derived from coal, air, water, petroleum, and limestone. These materials can potentially take between 500 to 1,000 years to break down. Also, unlike real fur, fake furs are not able to keep snow from melting and re-freezing on the fiber filaments; this is very important, especially in hiking, mountain climbing, skiing and other outdoor activities which are done in extreme conditions. Comparison to real fur: Pricing Fake fur is significantly less expensive than real fur. The price spectrum for luxury fake fur items spans from as low as $127 to as high as $8,900 in the mass market. 
In contrast, real fur luxury outerwear begins at a significantly higher price point, starting at $2,300. Use of actual fur: Some coats labeled as having faux-fur trim were found to use actual fur in a test conducted by the Humane Society of the United States. In the United States, up until 2012, a labeling loophole allowed any piece of clothing that contained less than $150 of fur to be labeled without mentioning that it included fur. This is the equivalent of thirty rabbits, three raccoons, three red foxes, two to five leopards, twenty ring-tailed lemurs, three domestic dogs, or one bear. Use by fashion designers: Faux fur has become increasingly popular for fashion designers to incorporate throughout their collections as today's technology has allowed it to closely imitate the qualities and applications of genuine fur. Hannah Weiland, founder of Shrimps, a London-based faux fur company, says, "I love working with faux fur because it doesn't molt and it feels just as soft. If the faux kind feels as good, why use the real kind?" Designer Stella McCartney also incorporates faux fur throughout her collections with tagged patches reading "Fur Free Fur." German company Hugo Boss made a public stance against animal fur by pledging to go completely fur-free, taking effect with their 2016 Fall/Winter collection. With the announcement, creative director of sportswear Bernd Keller stated the company's intention to prioritize animal protection and sustainability over convenience. However, ethics and sustainability are not the sole motives behind designers' decisions to use faux instead of real fur. Julie de Libran, the former artistic director of Sonia Rykiel, incorporated a combination of both real and faux fur in her collections. De Libran stated that she utilized faux fur for its ability to take on creative colors and forms, giving it a playfulness that natural fur alone could not create. Prada embraced synthetics in their Fall/Winter 2007 collection. Miuccia Prada, the brand's owner and designer, commented that she was bored with real fur, and as a result, she used only faux fur in her collection that year. However, today Prada continues using both real and faux fur throughout their garments. In addition, Dries Van Noten, Hussein Chalayan, Julien David, Julie de Libran for Sonia Rykiel, Kate Spade, and many others featured faux fur in their fall collections. Due to the controversy over fur garments, technology facilitating the production of fake furs has significantly improved since the early twentieth century. There are new tailoring and dyeing techniques to "disguise" fur and move it away from its conventional image associated with the elite fur-clad woman. Modacrylic is a high-quality 'fur' alternative that has gained attention for its convincing look as an alternative to real fur. Howard Strachman of Strachman Associates, a New York-based agent for faux fur, states that synthetic acrylic knitted fabrics have become a go-to resource for high-end faux fur, much of it coming from Asia. Prada put mohair faux fur in its Fall 2007 collection, whilst Max Mara and Dries Van Noten have also used this alternative. More authentic-looking methods of production are being researched. One technique combines coarse and fine fibers to simulate mink or beaver fur. The global artificial fur industry is projected to grow at a rate of over 15% by 2027.
**Spamware** Spamware: Spamware is software designed by or for spammers. Spamware varies widely, but may include the ability to import thousands of addresses, to generate random addresses, to insert fraudulent headers into messages, to use dozens or hundreds of mail servers simultaneously, and to make use of open relays. As automated software, it can act as an e-mail broadcasting hub, achieving very high sending volume and capability and causing significant disturbance to its targets. Such applications are commonly found in various online chat rooms such as Nimbuzz. The sale of spamware is illegal in eight U.S. states. Another type of spamware is software used to search for e-mail addresses, in order to build lists of e-mail addresses to be used either for spamming directly or to be sold to spammers.
**HA-tag** HA-tag: Human influenza hemagglutinin (HA) is a surface glycoprotein required for the infectivity of the human influenza virus. The HA-tag is derived from the HA molecule, corresponding to amino acids 98–106. The HA-tag has been extensively used as a general epitope tag in expression vectors. Many recombinant proteins have been engineered to express the HA-tag, which does not generally appear to interfere with the bioactivity or the biodistribution of the recombinant protein. This tag facilitates the detection, isolation and purification of the protein of interest. The HA-tag is not suitable for detection or purification of proteins from apoptotic cells, since it is cleaved by Caspase-3 and/or Caspase-7 after its sequence DVPD, causing it to lose its immunoreactivity. Labeling of endogenous proteins with the HA-tag using CRISPR was recently accomplished in vivo in differentiated neurons. Sequence: The DNA sequences for the HA-tag include: 5'-TAC-CCA-TAC-GAT-GTT-CCA-GAT-TAC-GCT-3' or 5'-TAT-CCA-TAT-GAT-GTT-CCA-GAT-TAT-GCT-3'. The resulting amino acid sequence is YPYDVPDYA (Tyr-Pro-Tyr-Asp-Val-Pro-Asp-Tyr-Ala).
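As a small worked example, the sketch below translates both coding sequences using the standard genetic code (only the codons that actually occur are included in this toy table; the helper names are illustrative) and confirms that both yield the peptide YPYDVPDYA.

```python
# Toy translation of the two HA-tag coding sequences listed above.
# Only the codons needed here are included; a real tool would use the
# full standard genetic code table.
CODON_TABLE = {
    "TAC": "Y", "TAT": "Y",  # Tyr
    "CCA": "P",              # Pro
    "GAT": "D",              # Asp
    "GTT": "V",              # Val
    "GCT": "A",              # Ala
}

def translate(dna: str) -> str:
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return "".join(CODON_TABLE[c] for c in codons)

for seq in ("TACCCATACGATGTTCCAGATTACGCT",
            "TATCCATATGATGTTCCAGATTATGCT"):
    print(translate(seq))  # YPYDVPDYA in both cases
```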
**Aplanatic lens** Aplanatic lens: An aplanatic lens is a lens that is free of both spherical and coma aberrations. Aplanatic lenses can be made by combining two or three lens elements. A single-element aplanatic lens is an aspheric lens whose surfaces are surfaces of revolution of a cartesian oval.
**TIA-942** TIA-942: The Telecommunications Industry Association (TIA) ANSI/TIA-942-B Telecommunications Infrastructure Standard for Data Centers is an American National Standard (ANS) that specifies the minimum requirements for data center infrastructure and is often cited by companies such as ADC Telecommunications and Cisco Systems. TIA-942: The standard was updated with an addendum, ANSI/TIA-942-B-1, in February 2022 from the TR-42.1 Engineering Committee. The Telecommunications Industry Association offers TIA-942 certification programs through TIA-licensed certification bodies that assess and certify compliance with the TIA-942 standard. In June 2021, the TIA TR-42.1 Engineering Committee voted to start the revision process of ANSI/TIA-942-B. The standard will undergo updates during 2022. The new version of ANSI/TIA-942, which will be labelled ANSI/TIA-942-C, is expected to be released in 2023. Specifications: The ANSI/TIA-942-B specification references private and public domain data center requirements for data center infrastructure elements such as: Network architecture Electrical design Mechanical systems System redundancy for electrical, mechanical and telecommunication Fire safety Physical security Efficiency Newer revisions: As of August 2021, TIA has released a licensing scheme for TIA-942 audits. TIA has laid down specific criteria for organizations that wish to conduct third-party external audits. Once these criteria are fulfilled, an organization is licensed as a Conformity Assessment Body (CAB). Data centers conforming to the TIA-942 standard are listed on the TIA website: https://tiaonline.org/942-datacenters/.
**Electron therapy** Electron therapy: Electron therapy or electron beam therapy (EBT) is a kind of external beam radiotherapy where electrons are directed to a tumor site for medical treatment of cancer. Equipment: Electron beam therapy is performed using a medical linear accelerator. The same device can also be used to produce high energy photon beams. When electrons are required, the x-ray target is retracted out of the beam and the electron beam is collimated with a piece of apparatus known as an applicator or an additional collimating insert, constructed from a low melting point alloy. Properties: Electron beams have a finite range, after which dose falls off rapidly. Therefore, they spare deeper healthy tissue. The depth of the treatment is selected by the appropriate energy. Unlike photon beams there is no surface sparing effect, so electron therapy is used when the target extends to the patient's skin. Indications: Electron beam therapy is used in the treatment of superficial tumors like cancer of skin regions, or total skin (e.g. mycosis fungoides), diseases of the limbs (e.g. melanoma and lymphoma), nodal irradiation, and it may also be used to boost the radiation dose to the surgical bed after mastectomy or lumpectomy. For deeper regions intraoperative electron radiation therapy might be applied.
**Journal of Propulsion and Power** Journal of Propulsion and Power: The Journal of Propulsion and Power is a bimonthly peer-reviewed scientific journal covering research on aerospace propulsion and power. The editor-in-chief is Joseph M. Powers (University of Notre Dame). It is published by the American Institute of Aeronautics and Astronautics and was established in 1985. Abstracting and indexing: The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2018 impact factor of 1.362.
**Smart meter** Smart meter: A smart meter is an electronic device that records information—such as consumption of electric energy, voltage levels, current, and power factor—and communicates the information to the consumer and electricity suppliers. Such an advanced metering infrastructure (AMI) differs from automatic meter reading (AMR) in that it enables two-way communication between the meter and the supplier. Description: The term smart meter often refers to an electricity meter, but it also may mean a device measuring natural gas, water or district heating consumption. More generally, a smart meter is an electronic device that records information such as consumption of electric energy, voltage levels, current, and power factor. Smart meters communicate the information to the consumer for greater clarity of consumption behavior, and to electricity suppliers for system monitoring and customer billing. Smart meters typically record energy use in near real-time and report at regular, short intervals throughout the day. Smart meters enable two-way communication between the meter and the central system. Smart meters may be part of a smart grid, but do not themselves constitute a smart grid. Such an advanced metering infrastructure (AMI) differs from automatic meter reading (AMR) in that it enables two-way communication between the meter and the supplier. Communications from the meter to the network may be wireless, or via fixed wired connections such as power line carrier (PLC). Wireless communication options in common use include cellular communications, Wi-Fi (readily available), wireless ad hoc networks over Wi-Fi, wireless mesh networks, low power long-range wireless (LoRa), Wize (high radio penetration rate, open, using the frequency 169 MHz), Zigbee (low power, low data rate wireless), and Wi-SUN (Smart Utility Networks). Description: Similar meters, usually referred to as interval or time-of-use meters, have existed for years, but smart meters usually involve real-time or near real-time sensors, power outage notification, and power quality monitoring. These additional features are more than simple automated meter reading (AMR). They are similar in many respects to Advanced Metering Infrastructure (AMI) meters. Interval and time-of-use meters historically have been installed to measure commercial and industrial customers, but may not have automatic reading. Research by the UK consumer group Which? showed that as many as one in three consumers confuse smart meters with energy monitors, also known as in-home display monitors. History: In 1972, Theodore Paraskevakos, while working with Boeing in Huntsville, Alabama, developed a sensor monitoring system that used digital transmission for security, fire, and medical alarm systems as well as meter reading capabilities. This technology was a spin-off from the automatic telephone line identification system, now known as Caller ID. History: In 1974, Paraskevakos was awarded a U.S. patent for this technology. In 1977, he launched Metretek, Inc., which developed and produced the first smart meters. Since this system was developed pre-Internet, Metretek utilized the IBM Series/1 minicomputer. For this approach, Paraskevakos and Metretek were awarded multiple patents. The installed base of smart meters in Europe at the end of 2008 was about 39 million units, according to analyst firm Berg Insight. Globally, Pike Research found that smart meter shipments were 17.4 million units for the first quarter of 2011.
Visiongain determined that the value of the global smart meter market would reach US$7 billion in 2012.As of January 2018, over 99 million electricity meters were deployed across the European Union, with an estimated 24 million more to be installed by the end of 2020. The European Commission DG Energy estimates the 2020 installed base to have required €18.8 billion in investment, growing to €40.7 billion by 2030, with a total deployment of 266 million smart meters.By the end of 2018, the U.S. had over 86 million smart meters installed. In 2017, there were 665 million smart meters installed globally. Revenue generation is expected to grow from $12.8 billion in 2017 to $20 billion by 2022. Purpose: Since the inception of electricity deregulation and market-driven pricing throughout the world, utilities have been looking for a means to match consumption with generation. Non-smart electrical and gas meters only measure total consumption, providing no information of when the energy was consumed. Smart meters provide a way of measuring electricity consumption in near real-time. This allows utility companies to charge different prices for consumption according to the time of day and the season. It also facilitates more accurate cash-flow models for utilities. Since smart meters can be read remotely, labor costs are reduced for utilities. Purpose: Smart metering offers potential benefits to customers. These include, a) an end to estimated bills, which are a major source of complaints for many customers b) a tool to help consumers better manage their energy purchases—smart meters with a display outside their homes could provide up-to-date information on gas and electricity consumption and in doing so help people to manage their energy use and reduce their energy bills. With regards to consumption reduction, this is critical for understanding the benefits of smart meters because the relatively small percentage benefits in terms of savings are multiplied by millions of users. Smart meters for water consumption can also provide detailed and timely information about customer water use and early notification of possible water leaks in their premises. Electricity pricing usually peaks at certain predictable times of the day and the season. In particular, if generation is constrained, prices can rise if power from other jurisdictions or more costly generation is brought online. Proponents assert that billing customers at a higher rate for peak times encourages consumers to adjust their consumption habits to be more responsive to market prices and assert further, that regulatory and market design agencies hope these "price signals" could delay the construction of additional generation or at least the purchase of energy from higher-priced sources, thereby controlling the steady and rapid increase of electricity prices.An academic study based on existing trials showed that homeowners' electricity consumption on average is reduced by approximately 3-5% when provided with real-time feedback.Another advantage of smart meters that benefits both customers and the utility is the monitoring capability they provide for the whole electrical system. As part of an AMI, utilities can use the real-time data from smart meters measurements related to current, voltage, and power factor to detect system disruptions more quickly, allowing immediate corrective action to minimize customer impact such as blackouts. Smart meters also help utilities understand the power grid needs with more granularity than legacy meters. 
This greater understanding facilitates system planning to meet customer energy needs while reducing the likelihood of additional infrastructure investments, which eliminates unnecessary spending or energy cost increases.Though the task of meeting national electricity demand with accurate supply is becoming ever more challenging as intermittent renewable generation sources make up a greater proportion of the energy mix, the real-time data provided by smart meters allow grid operators to integrate renewable energy onto the grid in order to balance the networks. As a result, smart meters are considered an essential technology to the decarbonisation of the energy system. Technology: Connectivity Communication is a critical technological requirement for smart meters. Each meter must be able to reliably and securely communicate the information collected to a central location. Considering the varying environments and places where meters are found, that problem can be daunting. Among the solutions proposed are: the use of cell and pager networks, satellite, licensed radio, combination licensed and unlicensed radio, and power line communication. Not only the medium used for communication purposes, but also the type of network used, is critical. As such, one would find: fixed wireless, wireless mesh network and wireless ad hoc networks, or a combination of the two. There are several other potential network configurations possible, including the use of Wi-Fi and other internet related networks. To date no one solution seems to be optimal for all applications. Rural utilities have very different communication problems from urban utilities or utilities located in difficult locations such as mountainous regions or areas ill-served by wireless and internet companies. Technology: In addition to communication with the head-end network, smart meters may need to be part of a home area network, which can include an in-premises display and a hub to interface one or more meters with the head end. Technologies for this network vary from country to country, but include power line communication, wireless ad hoc network, and Zigbee. Technology: Protocols ANSI C12.18 is an ANSI Standard that describes a protocol used for two-way communications with a meter, mostly used in North American markets. The C12.18 Standard is written specifically for meter communications via an ANSI Type 2 Optical Port, and specifies lower-level protocol details. ANSI C12.19 specifies the data tables that are used. ANSI C12.21 is an extension of C12.18 written for modem instead of optical communications, so it is better suited to automatic meter reading. ANSI C12.22 is the communication protocol for remote communications.IEC 61107 is a communication protocol for smart meters published by the IEC that is widely used for utility meters in the European Union. It is superseded by IEC 62056, but remains in wide use because it is simple and well-accepted. It sends ASCII data using a serial port. The physical media are either modulated light, sent with an LED and received with a photodiode, or a pair of wires, usually modulated by EIA-485. The protocol is half-duplex. IEC 61107 is related to, and sometimes wrongly confused with, the FLAG protocol. Ferranti and Landis+Gyr were early proponents of an interface standard that eventually became a sub-set of IEC1107. 
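To give a feel for the plain ASCII readout used by IEC 61107 / IEC 62056 meters, the sketch below parses data lines of the general address(value*unit) shape. The sample readout, register addresses, and regular expression are illustrative assumptions rather than text from the standard; real meters differ in framing and register codes.

```python
# Hedged sketch: parse IEC 62056-21-style ASCII data lines of the general
# form "address(value*unit)". The sample readout is invented for illustration.
import re

SAMPLE_READOUT = """\
1.8.0(012345.678*kWh)
2.8.0(000042.000*kWh)
0.9.1(21:34:56)
"""

LINE_RE = re.compile(r"^(?P<address>[^()]+)\((?P<value>[^()*]*)(\*(?P<unit>[^()]+))?\)$")

def parse_readout(text: str):
    """Return (address, value, unit) tuples; unit is None when absent."""
    records = []
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            records.append((m["address"], m["value"], m["unit"]))
    return records

for address, value, unit in parse_readout(SAMPLE_READOUT):
    print(address, value, unit)
```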
Technology: The Open Smart Grid Protocol (OSGP) is a family of specifications published by the European Telecommunications Standards Institute (ETSI) used in conjunction with the ISO/IEC 14908 control networking standard for smart metering and smart grid applications. Millions of smart meters based on OSGP are deployed worldwide. On July 15, 2015, the OSGP Alliance announced the release of a new security protocol (OSGP-AES-128-PSK) and its availability from OSGP vendors. This deprecated the original OSGP-RC4-PSK security protocol which had been identified to be vulnerable.There is a growing trend toward the use of TCP/IP technology as a common communication platform for Smart Meter applications, so that utilities can deploy multiple communication systems, while using IP technology as a common management platform. A universal metering interface would allow for development and mass production of smart meters and smart grid devices prior to the communication standards being set, and then for the relevant communication modules to be easily added or switched when they are. This would lower the risk of investing in the wrong standard as well as permit a single product to be used globally even if regional communication standards vary.Some smart meters may use a test IR LED to transmit non-encrypted usage data that bypasses meter security by transmitting lower level data in real-time. Technology: Smart Meter Equipment Technical Specifications (SMETS) In the UK, smart meters variants are classified as Smart Meter Equipment Technical Specifications (SMETS), with first generation smart meters commonly known as SMETS1 and second generation smart meters known as SMETS2. Data management The other critical technology for smart meter systems is the information technology at the utility that integrates the Smart Meter networks with utility applications, such as billing and CIS. This includes the Meter Data Management system. Technology: It also is essential for smart grid implementations that power line communication (PLC) technologies used within the home over a Home Area Network (HAN), are standardized and compatible. The HAN allows HVAC systems and other household appliances to communicate with the smart meter, and from there to the utility. Currently there are several broadband or narrowband standards in place, or being developed, that are not yet compatible. To address this issue, the National Institute for Standards and Technology (NIST) established the PAP15 group, which studies and recommends coexistence mechanisms with a focus on the harmonization of PLC Standards for the HAN. The objective of the group is to ensure that all PLC technologies selected for the HAN coexist as a minimum. The two leading broadband PLC technologies selected are the HomePlug AV / IEEE 1901 and ITU-T G.hn technologies. Technical working groups within these organizations are working to develop appropriate coexistence mechanisms. The HomePlug Powerline Alliance has developed a new standard for smart grid HAN communications called the HomePlug Green PHY specification. It is interoperable and coexistent with the widely deployed HomePlug AV technology and with the latest IEEE 1901 global Standard and is based on Broadband OFDM technology. ITU-T commissioned in 2010 a new project called G.hnem, to address the home networking aspects of energy management, built upon existing Low Frequency Narrowband OFDM technologies. 
Technology: The Google.org's PowerMeter, until its demise in 2011, was able to use a smart meter for tracking electricity usage, as can eMeter' Energy Engage as in, for example, the PowerCentsDC(TM) demand response program. Advanced metering infrastructure: Advanced metering infrastructure (AMI) refers to systems that measure, collect, and analyze energy usage, and communicate with metering devices such as electricity meters, gas meters, heat meters, and water meters, either on request or on a schedule. These systems include hardware, software, communications, consumer energy displays and controllers, customer associated systems, meter data management software, and supplier business systems. Advanced metering infrastructure: Government agencies and utilities are turning toward advanced metering infrastructure (AMI) systems as part of larger "smart grid" initiatives. AMI extends automatic meter reading (AMR) technology by providing two-way meter communications, allowing commands to be sent toward the home for multiple purposes, including time-based pricing information, demand-response actions, or remote service disconnects. Wireless technologies are critical elements of the neighborhood network, aggregating a mesh configuration of up to thousands of meters for back haul to the utility's IT headquarters. Advanced metering infrastructure: The network between the measurement devices and business systems allows the collection and distribution of information to customers, suppliers, utility companies, and service providers. This enables these businesses to participate in demand response services. Consumers can use the information provided by the system to change their normal consumption patterns to take advantage of lower prices. Pricing can be used to curb the growth of peak demand consumption. AMI differs from traditional automatic meter reading (AMR) in that it enables two-way communications with the meter. Systems only capable of meter readings do not qualify as AMI systems. Opposition and concerns: Some groups have expressed concerns regarding the cost, health, fire risk, security and privacy effects of smart meters and the remote controllable "kill switch" that is included with most of them. Many of these concerns regard wireless-only smart meters with no home energy monitoring or control or safety features. Metering-only solutions, while popular with utilities because they fit existing business models and have cheap up-front capital costs, often result in such "backlash". Often the entire smart grid and smart building concept is discredited in part by confusion about the difference between home control and home area network technology and AMI. The (now former) attorney general of Connecticut has stated that he does not believe smart meters provide any financial benefit to consumers, however, the cost of the installation of the new system is absorbed by those customers. Opposition and concerns: Security Smart meters expose the power grid to cyberattacks that could lead to power outages, both by cutting off people's electricity and by overloading the grid. However many cyber security experts state that smart meters of UK and Germany have relatively high cybersecurity and that any such attack there would thus require extraordinarily high efforts or financial resources. 
The EU Cyber security Act took effect in June 2019, which includes Directive on Security Network and Information Systems establishing notification and security requirements for operators of essential services.Through the Smartgrid Cybersecurity Committee, the U.S. Department of Energy published cybersecurity guidelines for grid operators in 2010 and updated them in 2014. The guidelines “...present an analytical framework that organizations can use to develop effective cybersecurity strategies...”Implementing security protocols that protect these devices from malicious attacks has been problematic, due to their limited computational resources and long operational life.The current version of IEC 62056 includes the possibility to encrypt, authenticate, or sign the meter data. Opposition and concerns: One proposed smart meter data verification method involves analyzing the network traffic in real-time to detect anomalies using an Intrusion Detection System (IDS). By identifying exploits as they are being leveraged by attackers, an IDS mitigates the suppliers' risks of energy theft by consumers and denial-of-service attacks by hackers. Energy utilities must choose between a centralized IDS, embedded IDS, or dedicated IDS depending on the individual needs of the utility. Researchers have found that for a typical advanced metering infrastructure, the centralized IDS architecture is superior in terms of cost efficiency and security gains.In the United Kingdom, the Data Communication Company, which transports the commands from the supplier to the smart meter, performs an additional anomaly check on commands issued (and signed) by the energy supplier. Opposition and concerns: As Smart Meter devices are Intelligent Measurement Devices which periodically record the measured values and send the data encrypted to the Service Provider, therefore in Switzerland these devices need to be evaluated by an evaluation Laboratory, and need to be certified by METAS from 01.01.2020 according to Prüfmethodologie (Test Methodology for Execution of Data Security Evaluation of Swiss Smart Metering Components). Opposition and concerns: According to a report published by Brian Krebs, in 2009 a Puerto Rico electricity supplier asked the FBI to investigate large-scale thefts of electricity related to its smart meters. The FBI found that former employees of the power company and the company that made the meters were being paid by consumers to reprogram the devices to show incorrect results, as well as teaching people how to do it themselves. Several hacking tools that allow security researchers and penetration testers verify the security of electric utility smart meters have been released so far. Opposition and concerns: Health Most health concerns about the meters arise from the pulsed radiofrequency (RF) radiation emitted by wireless smart meters.Members of the California State Assembly asked the California Council on Science and Technology (CCST) to study the issue of potential health impacts from smart meters, in particular whether current FCC standards are protective of public health. The CCST report in April 2011 found no health impacts, based both on lack of scientific evidence of harmful effects from radio frequency (RF) waves and that the RF exposure of people in their homes to smart meters is likely to be minuscule compared to RF exposure to cell phones and microwave ovens. 
Daniel Hirsch, retired director of the Program on Environmental and Nuclear Policy at UC Santa Cruz, criticized the CCST report on the grounds that it did not consider studies that suggest the potential for non-thermal health effects such as latent cancers from RF exposure. Hirsch also stated that the CCST report failed to correct errors in its comparison to cell phones and microwave ovens and that, when these errors are corrected, smart meters "may produce cumulative whole-body exposures far higher than that of cell phones or microwave ovens." The Federal Communications Commission (FCC) has adopted recommended Permissible Exposure Limits (PEL) for all RF transmitters (including smart meters) operating at frequencies of 300 kHz to 100 GHz. These limits, based on field strength and power density, are below the levels of RF radiation that are hazardous to human health. Other studies substantiate the finding of the California Council on Science and Technology (CCST). In 2011, the Electric Power Research Institute performed a study to gauge human exposure to smart meters as compared to the FCC PEL. The report found that most smart meters only transmit RF signals 1% of the time or less. At this rate, and at a distance of 1 foot from the meter, RF exposure would be at a rate of 0.14% of the FCC PEL. An indirect potential for harm to health by smart meters is that they enable energy companies to disconnect consumers remotely, typically in response to difficulties with payment. This can cause health problems for vulnerable people in financial difficulty; in addition to denial of heat, lighting, and use of appliances, there are people who depend on power to use medical equipment essential for life. While there may be legal protections in place to protect the vulnerable, many people in the UK were disconnected in violation of the rules. Opposition and concerns: Safety Issues surrounding smart meters causing fires have been reported, particularly involving the manufacturer Sensus. In 2012, PECO Energy Company replaced the Sensus meters it had deployed in the Philadelphia, US region after reports that a number of the units had overheated and caused fires. In July 2014, SaskPower, the province-run utility company of the Canadian province of Saskatchewan, halted its roll-out of Sensus meters after similar, isolated incidents were discovered. Shortly afterward, Portland General Electric announced that it would replace 70,000 smart meters that had been deployed in the state of Oregon after similar reports. The company noted that it had been aware of the issues since at least 2013, and that they were limited to specific models it had installed between 2010 and 2012. On July 30, 2014, after a total of eight recent fire incidents involving the meters, SaskPower was ordered by the Government of Saskatchewan to immediately end its smart meter program and remove the 105,000 smart meters it had installed. Opposition and concerns: Privacy concerns One technical reason for privacy concerns is that these meters send detailed information about how much electricity is being used at each reporting interval. More frequent reports provide more detailed information. Infrequent reports may be of little benefit to the provider, as they do not allow demand management that is as responsive to changing electricity needs. On the other hand, frequent reports would allow the utility company to infer behavioral patterns for the occupants of a house, such as when the members of the household are probably asleep or absent.
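As a toy illustration of this inference risk (the half-hourly readings and the thresholds below are invented for illustration and do not represent any utility's actual analytics), large step changes in interval data suggest appliance on/off events, and long runs at a low baseline suggest that occupants are asleep or away.

```python
# Invented half-hourly readings in kW and arbitrary thresholds; purely a
# toy demonstration of how interval data can expose household behaviour.
READINGS = [0.2, 0.2, 0.3, 2.1, 2.0, 0.3, 0.2, 0.2, 1.5, 1.6, 0.3, 0.2]

STEP_KW = 1.0   # jump suggesting a large appliance switching on or off
IDLE_KW = 0.4   # baseline suggesting nobody is actively using power

events = [(i, round(b - a, 2))
          for i, (a, b) in enumerate(zip(READINGS, READINGS[1:]), start=1)
          if abs(b - a) >= STEP_KW]
idle_share = sum(r < IDLE_KW for r in READINGS) / len(READINGS)

print("probable appliance events (interval, change in kW):", events)
print("share of intervals at idle baseline:", round(idle_share, 2))
```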
Furthermore, the fine-grained information collected by smart meters raises growing concerns of privacy invasion due to personal behavior exposure (private activity, daily routine, etc.). Current trends are to increase the frequency of reports. A solution that benefits both provider and user privacy would be to adapt the reporting interval dynamically. Another solution involves energy storage installed at the household, used to reshape the energy consumption profile. In British Columbia the electric utility is government-owned and as such must comply with privacy laws that prevent the sale of data collected by smart meters; many parts of the world are serviced by private companies that are able to sell their data. In Australia, debt collectors can use the data to determine when people are at home. In one court case in Austin, Texas, it emerged that police agencies had secretly collected smart meter power usage data from thousands of residences and used it as evidence to determine which residences used more power than "typical", in order to identify marijuana growing operations. Smart meter power usage data patterns can reveal much more than how much power is being used. Research has demonstrated that smart meters sampling power levels at two-second intervals can reliably identify when different electrical devices are in use. Ross Anderson, writing about privacy concerns, stated: "It is not necessary for my meter to tell the power company, let alone the government, how much I used in every half-hour period last month"; that meters can provide "targeting information for burglars"; that detailed energy usage history can help energy companies to sell users exploitative contracts; and that there may be "a temptation for policymakers to use smart metering data to target any needed power cuts." Opt-out options Reviews of smart meter programs, moratoriums, delays, and "opt-out" programs are some responses to the concerns of customers and government officials. In response to residents who did not want a smart meter, in June 2012 a utility in Hawaii changed its smart meter program to "opt out". The utility said that once the smart grid installation project is nearing completion, KIUC may convert the deferral policy to an opt-out policy or program and may charge a fee to those members to cover the costs of servicing the traditional meters. Any fee would require approval from the Hawaii Public Utilities Commission. Opposition and concerns: After receiving numerous complaints about health, hacking, and privacy concerns with the wireless digital devices, the Public Utility Commission of the US state of Maine voted to allow customers to opt out of the meter change at a cost of $12 a month. In Connecticut, another US state to consider smart metering, regulators declined a request by the state's largest utility, Connecticut Light & Power, to install 1.2 million of the devices, arguing that the potential savings in electric bills did not justify the cost. CL&P already offers its customers time-based rates. The state's Attorney General, George Jepsen, was quoted as saying the proposal would cause customers to spend upwards of $500 million on meters and get few benefits in return, a claim that Connecticut Light & Power disputed. Opposition and concerns: Abuse of dynamic pricing Smart meters allow dynamic pricing; it has been pointed out that, while this allows prices to be reduced at times of low demand, it can also be used to increase prices at peak times if all consumers have smart meters.
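As a rough illustration of the pricing mechanism being discussed, the sketch below compares a bill under a flat tariff with one under a time-of-use tariff. The rates and consumption figures are invented for the example and do not correspond to any real tariff.

```python
# Minimal sketch of flat versus time-of-use (dynamic) billing.
# All rates and usage figures below are invented for illustration.

FLAT_RATE = 0.20                                                 # per kWh (assumed)
TOU_RATES = {"off_peak": 0.10, "shoulder": 0.20, "peak": 0.45}   # assumed

def bill(usage_by_period, rates=None):
    """usage_by_period maps a period name to kWh used in that period."""
    if rates is None:                     # flat tariff: one price for all kWh
        return FLAT_RATE * sum(usage_by_period.values())
    return sum(rates[period] * kwh for period, kwh in usage_by_period.items())

usage = {"off_peak": 6.0, "shoulder": 4.0, "peak": 3.0}
print(f"flat tariff:        {bill(usage):.2f}")             # 2.60
print(f"time-of-use tariff: {bill(usage, TOU_RATES):.2f}")  # 2.75
# A household that cannot shift its peak-time usage pays more under the
# time-of-use tariff in this example, which is the concern raised above.
```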
Additionally, smart meters allow energy suppliers to switch customers to expensive prepay tariffs instantly in the case of difficulties with payment. In the UK during a period of very high energy prices from 2022, companies were remotely switching smart meters from a credit tariff to an expensive prepay tariff, which disconnects supplies unless credit has been purchased. While regulations do not permit this without appropriate precautions to help those in financial difficulties and to protect the vulnerable, the rules were often flouted. (Prepaid tariffs could also be levied without smart meters, but this required a dedicated prepay meter to be installed.) In 2022, 3.2 million people were left without power at some point after running out of prepay credit. Opposition and concerns: Limited benefits There are questions about whether electricity is, or should be, primarily a "when you need it" service for which the inconvenience/cost-benefit ratio of time-shifting loads is poor. In the Chicago area, Commonwealth Edison ran a test, installing smart meters on 8,000 randomly selected households together with variable rates and rebates to encourage cutting back during peak usage. In the Crain's Chicago Business article "Smart grid test underwhelms. In the pilot, few power down to save money.", it was reported that fewer than 9% exhibited any amount of peak usage reduction and that the overall amount of reduction was "statistically insignificant". This was from a report by the Electric Power Research Institute, a utility industry think tank that conducted the study and prepared the report. Susan Satter, senior assistant Illinois attorney general for public utilities, said "It's devastating to their plan... The report shows zero statistically different result compared to business as usual." By 2016, the 7 million smart meters in Texas had not persuaded many people to check their energy data, as the process was too complicated. A report from a parliamentary group in the UK suggests people who have smart meters installed are expected to save an average of £11 annually on their energy bills, much less than originally hoped. The 2016 cost-benefit analysis was updated in 2019 and estimated a similar average saving. The Australian Victorian Auditor-General found in 2015 that 'Victoria's electricity consumers will have paid an estimated $2.239 billion for metering services, including the rollout and connection of smart meters. In contrast, while a few benefits have accrued to consumers, benefits realisation is behind schedule and most benefits are yet to be realised'. Erratic demand Smart meters can allow real-time pricing, and in theory this could help smooth power consumption as consumers adjust their demand in response to price changes. However, modelling by researchers at the University of Bremen suggests that in certain circumstances, "power demand fluctuations are not dampened but amplified instead." In the media In 2013, Take Back Your Power, an independent Canadian documentary directed by Josh del Sol, was released describing "dirty electricity" and the aforementioned issues with smart meters. The film explores the various contexts of the health, legal, and economic concerns. It features narration from the mayor of Peterborough, Ontario, Daryl Bennett, as well as American researcher De-Kun Li, journalist Blake Levitt, and Dr. Sam Milham. It won a Leo Award for best feature-length documentary and the Annual Humanitarian Award from Indie Fest the following year.
Opposition and concerns: Criticism of smart meter roll-out in the UK In a 2011 submission to the Public Accounts Committee, Ross Anderson wrote that Ofgem was "making all the classic mistakes which have been known for years to lead to public-sector IT project failures" and that the "most critical part of the project—how smart meters will talk to domestic appliances to facilitate demand response—is essentially ignored." Citizens Advice said in August 2018 that 80% of people with smart meters were happy with them. Still, it had received 3,000 calls in 2017 about problems. These related to first-generation smart meters losing their functionality, aggressive sales practices, and customers still having to send meter readings. Ross Anderson of the Foundation for Information Policy Research has criticised the UK's program on the grounds that it is unlikely to lower energy consumption, is rushed and expensive, and does not promote metering competition. Anderson writes, "the proposed architecture ensures continued dominance of metering by energy industry incumbents whose financial interests are in selling more energy rather than less," and urged ministers "to kill the project and instead promote competition in domestic energy metering, as the Germans do – and as the UK already has in industrial metering. Every consumer should have the right to appoint the meter operator of their choice." The high number of SMETS1 meters installed has been criticized by Peter Earl, head of energy at the price comparison website comparethemarket.com. He said, "The Government expected there would only be a small number of the first-generation of smart meters before Smets II came in, but the reality is there are now at least five million and perhaps as many as 10 million Smets I meters." UK smart meters in southern England and the Midlands use the mobile phone network to communicate, so they do not work correctly when phone coverage is weak. A solution has been proposed, but was not operational as of March 2017. In March 2018 the National Audit Office (NAO), which watches over public spending, opened an investigation into the smart meter program, which had cost £11bn by then, paid for by electricity users through higher bills. The National Audit Office published the findings of its investigation in a report titled "Rolling out smart meters" in November 2018. The report, amongst other findings, indicated that the number of smart meters installed in the UK would fall materially short of the Department for Business, Energy & Industrial Strategy's (BEIS) original ambition of all UK consumers having a smart meter installed by 2020. In September 2019, the smart meter rollout in the UK was delayed for four years. Ross Anderson and Alex Henney wrote that "Ed Miliband cooked the books" to make the case for smart meters appear economically viable. They say that the first three cost-benefit analyses of residential smart meters found that it would cost more than it would save, but "ministers kept on trying until they got a positive result... To achieve 'profitability' the previous government stretched the assumptions shamelessly". A counter-fraud officer at Ofgem with oversight of the roll-out of the smart meter program, who raised concerns with his manager about many millions of pounds being misspent, was threatened with imprisonment under section 105 of the Utilities Act 2000, a provision intended to protect national security. The Employment Appeal Tribunal found that the law was in contravention of the European Convention on Human Rights.
**Evidence-based education** Evidence-based education: Evidence-based education (EBE) is the principle that education practices should be based on the best available scientific evidence, rather than tradition, personal judgement, or other influences. Evidence-based education is related to evidence-based teaching, evidence-based learning, and school effectiveness research. For example, research has shown that spaced repetition (also spaced training, the spacing effect, and spaced learning) "leads to more robust memory formation than massed training does, which involves short or no intervals". The evidence-based education movement has its roots in the larger movement towards evidence-based practices, and has been the subject of considerable debate since the late 1990s. However, research published in 2020 showed that there is still widespread belief amongst educators in ineffective teaching techniques, such as matching instruction to a few supposed learning styles and the cone of learning. History: The English author and academic David H. Hargreaves presented a lecture in 1996 in which he stated "Teaching is not at present a research-based profession. I have no doubt that if it were it would be more effective and satisfying". He compared the fields of medicine and teaching, saying that physicians are expected to keep up to date on medical research, whereas many teachers may not even be aware of the importance of research to their profession. In order for teaching to become more research-based, he suggested, educational research would require a "radical change" and teachers would have to become more involved in the creation and application of research. Following that lecture, English policy makers in education tried to bring theory and practice closer together. At the same time, existing education research faced criticism for its quality, reliability, impartiality and accessibility. In 2000 and 2001 two international evidence-based studies were created to analyze and report on the effectiveness of school education throughout the world: the Programme for International Student Assessment (PISA) in 2000 and the Progress in International Reading Literacy Study (PIRLS) in 2001. History: Around the same time, three major evidence-based studies about reading were released, highlighting the value of evidence in education: the US National Reading Panel in 2000, the Australian report on Teaching Reading in 2005, and the Independent Review of the Teaching of Early Reading (Rose Report, 2006) in England. Approximately a year before the Rose Report, the Scottish Executive Education Department (SEED) published the results of a study entitled A Seven Year Study of the Effects of Synthetic Phonics Teaching on Reading and Spelling Attainment (Clackmannanshire Report), comparing synthetic phonics with analytic phonics. Scientifically based research (SBR) (also evidence-based practice in education) first appeared in United States federal legislation in the Reading Excellence Act and subsequently in the Comprehensive School Reform program. However, it came into prominence in the U.S. under the No Child Left Behind Act of 2001 (NCLB), intended to help students in kindergarten through grade 3 who are reading below grade level. Federal funding was made available for education programs and teacher training that are "based on scientifically based reading research". NCLB was replaced in 2015 by the Every Student Succeeds Act (ESSA). In 2002 the U.S.
Department of Education founded the Institute of Education Sciences (IES) to provide scientific evidence to guide education practice and policy. History: The state-driven Common Core State Standards Initiative was developed in the United States in 2009 in an attempt to standardize education principles and practices. There appears to have been some attempt to incorporate evidence-based practices. For example, the core standards website has a comprehensive description of the specific details of the English Language Arts Standards, which include the areas of the alphabetic principle, print concepts, phonological awareness, phonics and word recognition, and fluency. However, it is up to the individual states and school districts to develop plans to implement the standards, and the National Governors Guide to Early Literacy appears to lack details. As of 2020, 41 states had adopted the standards, and in most cases it has taken three or more years to have them implemented. For example, Wisconsin adopted the standards in 2010 and implemented them in the 2014–2015 school year, yet in 2020 the state Department of Public Instruction was still in the process of developing materials to support the standards in teaching phonics. According to reports, the Common Core State Standards Initiative does not appear to have led to a significant national improvement in students' performance. The Center on Standards, Alignment, Instruction, and Learning (C-SAIL) conducted a study of how the Common Core is received in schools. It reported these findings: a) there is moderately high buy-in for the standards among teachers, principals, and superintendents, but buy-in was significantly lower among teachers than among principals and superintendents; b) there is wide variation in teachers' alignment to the standards by content area and grade level; c) specificity is desired by some educators, however states and districts are reluctant to provide too much specificity; d) state officials generally agree that accountability changes under ESSA have allowed them to adopt a "smart power" message that is less punitive and more supportive. Subsequently, in England the Education Endowment Foundation of London was established in 2011 by The Sutton Trust, as the lead charity of the government-designated What Works Centre for high quality evidence in UK education. In 2012 the Department for Education in England introduced an evidence-based "phonics reading check" to help support primary students with reading. (In 2016, the Minister for Education reported that the percentage of primary students not meeting reading expectations had fallen from 33% in 2010 to 20% in 2016.) Evidence-based education in England received a boost from the 2013 briefing paper by Dr. Ben Goldacre. It advocated for systemic change and more randomized controlled trials to assess the effects of educational interventions. He said this was not about telling teachers what to do, but rather "empowering teachers to make independent, informed decisions about what works". Following that, a UK-based non-profit, researchED, was founded to offer a forum for researchers and educationalists to discuss the role of evidence in education. Discussion and criticism ensued. Some said research methods that are useful in medicine can be entirely inappropriate in the sphere of education. In 2014 the National Foundation for Educational Research, Berkshire, England published a report entitled Using Evidence in the Classroom: What Works and Why.
History: The review synthesises effective approaches to school and teacher engagement with evidence and discusses challenges, areas for attention and action. It is intended to help the teaching profession make the best use of evidence about what works in improving educational outcomes. History: In 2014 the British Educational Research Association (BERA) and the Royal Society of Arts (RSA) conducted an inquiry into the role of research in teacher education in England, Northern Ireland, Scotland and Wales. The final report made it clear that research and teacher inquiry were of paramount importance in developing self-improving schools. It advocated for a closer working partnership between teacher-researchers and the wider academic research community. The 2015 Carter Review of Initial Teacher Training in the UK suggested that teacher trainees should have access to, and skills in using, research evidence to support their teaching; however, they do not generally receive training in utilizing research. History: NCLB in the US was replaced in 2015 by the Every Student Succeeds Act (ESSA), which replaced "scientifically based research" with "evidence-based interventions" (any "activity, strategy, or intervention that shows a statistically significant effect on improving student outcomes or other relevant outcomes"). ESSA has four tiers of evidence, which some say give schools and policy makers greater control because they can choose the desired tier of evidence. The evidence tiers are as follows: Tier 1 – Strong Evidence: supported by one or more well-designed and well-implemented randomized controlled experimental studies. History: Tier 2 – Moderate Evidence: supported by one or more well-designed and well-implemented quasi-experimental studies. Tier 3 – Promising Evidence: supported by one or more well-designed and well-implemented correlational studies (with statistical controls for selection bias). History: Tier 4 – Demonstrates a Rationale: practices that have a well-defined logic model or theory of action, are supported by research, and have some effort underway by state educational agencies (SEA), local educational agencies (LEA), or an outside research organization to determine their effectiveness. (A simplified illustration of these tiers appears at the end of this passage.) In 2016 the Department for Education in England published the White Paper Educational Excellence Everywhere. It states its intention to support an evidence-informed teaching profession by increasing teachers' access to and use of "high quality evidence". It will also establish a new British education journal and expand the Education Endowment Foundation. In addition, on October 4, 2016, the Government announced an investment of around £75 million in the Teaching and Leadership Innovation Fund, to support high-quality, evidence-informed professional development for teachers and school leaders. A research report in July 2017 entitled Evidence-informed teaching: an evaluation of progress in England concluded this was necessary, but not sufficient. It said that the main challenge for policy makers and researchers was the level of leadership capacity and commitment to make it happen. In other words, the attitudes and actions of school leaders influence how classroom teachers are supported and held accountable for using evidence-informed practices.
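The four ESSA evidence tiers described above lend themselves to a simple decision rule. The sketch below is a deliberately simplified paraphrase for illustration; the statutory definitions also require, among other things, that studies be well implemented, and the function name and fields are invented for this example.

```python
# Simplified, illustrative mapping of a supporting study to an ESSA evidence
# tier. This paraphrases the tiers described above; it is not the legal test.

def essa_tier(design, well_designed=True, statistical_controls=False,
              has_logic_model=False):
    """Return an approximate ESSA tier (1-4) for a single supporting study."""
    if well_designed and design == "randomized_controlled_trial":
        return 1          # Strong Evidence
    if well_designed and design == "quasi_experiment":
        return 2          # Moderate Evidence
    if well_designed and design == "correlational" and statistical_controls:
        return 3          # Promising Evidence
    if has_logic_model:
        return 4          # Demonstrates a Rationale
    return None           # does not qualify under any tier

print(essa_tier("randomized_controlled_trial"))                # 1
print(essa_tier("correlational", statistical_controls=True))   # 3
print(essa_tier("case_study", has_logic_model=True))           # 4
```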
History: In 2017 the British Educational Research Association (BERA) examined the role of universities in professional development, focusing especially on teacher education and medical education. Critics continue, saying "Education research is great but never forget teaching is a complex art form." In 2018, Dylan Wiliam, Emeritus Professor of Educational Assessment at University College London, speaking at researchED, stated that "Educational research will never tell teachers what to do; their classrooms are too complex for this ever to be possible." Instead, he suggests, teachers should become critical users of educational research and "aware of when even well-established research findings are likely to fail to apply in a particular setting". Reception: Acceptance Since many educators and policy makers are not experienced in evaluating scientific studies, and studies have found that "teachers' beliefs are often guided by subjective experience rather than by empirical data", several non-profit organizations have been created to critically evaluate research studies and provide their analysis in a user-friendly manner. They are outlined in research sources and information. Reception: EBP has not been readily adopted in all parts of the education field, leading some to suggest the K-12 teaching profession has suffered a loss of respect because of its science-aversive culture and failure to adopt empirical research as the major determinant of its practices. Speaking in 2017, Harvey Bischof of the Ontario Secondary School Teachers' Federation (OSSTF) said there is a need for teacher-centred education based upon what works in the classroom. He suggested that Ontario education "lacks a culture of empiricism" and is vulnerable to gurus, ideologues and advocates promoting unproven trends and fads. Neuroscientist Mark Seidenberg of the University of Wisconsin–Madison stated that "A stronger scientific ethos (in education) could have provided a much needed defense against bad science", particularly in the field of early reading instruction. Other influential researchers in psychopedagogy, cognitive science and neuroscience, such as Stanislas Dehaene and Michel Fayol, have also supported the view of incorporating science into educational practices. Reception: Critics and skeptics Skeptics point out that EBP in medicine often produces conflicting results, and ask why educators should accept EBP in education. Others feel that EBE "limits the opportunities for educational professionals to exert their judgment about what is educationally desirable in particular situations". Some suggest teachers should not pick up research findings and implement them directly into the classroom; instead, they advocate a modified approach, which some call evidence-informed teaching, that combines research with other types of evidence plus personal experience and good judgement.
(To be clear, some use the term evidence-informed teaching to mean "practice that is influenced by robust research evidence".) Still others say there is "a mutual interdependence between science and education", and teachers should become better trained in research science and "take science sufficiently seriously" to see how its methods might inform their practice. Straight Talk on Evidence has suggested that reports about evidence in education need to be scrutinized for accuracy or subjected to metascience (research on research). In a 2020 talk featured on researchED, Dylan Wiliam argues that, when looking at the cost, benefit and practicality of research, more impact on student achievement will come from a knowledge-rich curriculum and improving teachers' pedagogical skills. Reception: Philosophical concerns Some of the criticisms about evidence-based approaches to education relate to concerns about the generalisability of educational research, specifically that research findings are context-dependent and that it is difficult to generalise findings from one context to another using a positivist approach. Counter to this position is a view that education researchers have a responsibility to consider the practical value of their research. There has also been some discussion of a philosophical nature about the validity of scientific evidence. This led James M. Kauffman, University of Virginia, and Gary M. Sasso, University of Iowa, to respond in 2006, suggesting that problems arise with the extreme views of a) an "unbound faith in science" (i.e. scientism) or b) a "criticism of science" (which they label the "nonsense of postmodernism"). They go on to say that science is "the imperfect but best tool available for trying to reduce uncertainty about what we do as special educators". Reception: Meta-analysis A meta-analysis is a statistical analysis that combines the results of multiple scientific studies. A concern of some researchers is the unreliability of some of these reports due to methodological features. For example, it is suggested that some meta-analysis findings are not credible because they do not exclude or control for studies with small sample sizes or very short durations, and where the researchers themselves are doing the measurements. Such reports can yield "implausible" results. According to Robert Slavin, of the Center for Research and Reform in Education at Johns Hopkins University and Evidence for ESSA, "Meta-analyses are important, because they are widely read and widely cited, in comparison to individual studies. Yet until meta-analyses start consistently excluding, or at least controlling for studies with factors known to inflate mean effect sizes, then they will have little if any meaning for practice." Research sources and information: The following organizations evaluate research on educational programs, or help educators to understand the research. Research sources and information: Best Evidence Encyclopedia (BEE) Best Evidence Encyclopedia (BEE) is a free website created by the Johns Hopkins University School of Education's Center for Data-Driven Reform in Education (established in 2004) and is funded by the Institute of Education Sciences, U.S. Department of Education. It gives educators and researchers reviews about the strength of the evidence supporting a variety of English programs available for students in grades K–12.
The reviews cover programs in areas such as mathematics, reading, writing, science, comprehensive school reform, and early childhood education, and include such topics as the effectiveness of technology and struggling readers. Research sources and information: BEE selects reviews that meet consistent scientific standards and relate to programs that are available to educators. Educational programs in the reviews are rated according to the overall strength of the evidence supporting their effects on students, as determined by the combination of the quality of the research design and the effect size. The BEE website contains an explanation of its interpretation of effect size and how it might be viewed as a percentile score. It uses the following categories of ratings: Strong evidence of effectiveness Moderate evidence of effectiveness Limited evidence of effectiveness: Strong evidence of modest effects Limited evidence of effectiveness: Weak evidence with notable effect No qualifying studies Reading programs In 2021, BEE released a review of research on 61 studies of 51 different programs for struggling readers in elementary schools. 84% were randomized experiments and 16% quasi-experiments. The vast majority were done in the US, the programs are replicable, and the studies, done between 1990 and 2018, had a minimum duration of 12 weeks. Many of the programs used phonics-based teaching and/or one or more of the following: cooperative learning, technology-supported adaptive instruction (see Educational technology), metacognitive skills, phonemic awareness, word reading, fluency, vocabulary, multisensory learning, spelling, guided reading, reading comprehension, word analysis, structured curriculum, and balanced literacy (a non-phonetic approach). Significantly, Table 5 (p. 88) shows the mean weighted effect sizes of the programs by the manner in which they were conducted (i.e. by school, by classroom, by technology-supported adaptive instruction, by one-to-small-group tutoring, and by one-to-one tutoring). Table 8 (p. 91) lists the 22 programs meeting ESSA standards for strong and moderate ratings, and their effect sizes. Research sources and information: The review concludes that a) outcomes were positive for one-to-one tutoring, b) outcomes were positive but not as large for one-to-small-group tutoring, c) there were no differences in outcomes between teachers and teaching assistants as tutors, d) technology-supported adaptive instruction did not have positive outcomes, e) whole-class approaches (mostly cooperative learning) and whole-school approaches incorporating tutoring obtained outcomes for struggling readers as large as those found for one-to-one tutoring, and benefitted many more students, and f) approaches mixing classroom and school improvements, with tutoring for the most at-risk students, have the greatest potential for the largest numbers of struggling readers. Research sources and information: The site also offers a newsletter, originated by Robert Slavin, the former director of the Center for Research and Reform in Education, containing information on education around the world. The issue for January 28, 2021, has a chart showing that proven tutoring programs during the regular school year are significantly more effective than other approaches such as summer school (without tutoring), after school, extended-day, and technology.
The February 11, 2021, issue makes a case for using Federal Government COVID-19 funding (the Learning Recovery Act) to provide for the "implementation of proven tutoring programs during ordinary school times". Research sources and information: Blueprints for healthy youth development Blueprints for Healthy Youth Development, University of Colorado Boulder, offers a registry of evidence-based interventions with "the strongest scientific support" that are effective in promoting a healthy course of action for youth development. Education Endowment Foundation The Education Endowment Foundation of London, England was established in 2011 by The Sutton Trust, as a lead charity in partnership with Impetus Trust, the two together being the government-designated What Works Centre for UK Education. It offers an online, downloadable Teaching & Learning Toolkit evaluating and describing a variety of educational interventions according to cost, evidence and impact. As an example, it evaluates and describes a 2018 phonics reading program with low cost, extensive evidence and moderate impact. Research sources and information: Evidence for ESSA Evidence for ESSA began in 2017 and is produced by the Center for Research and Reform in Education (CRRE) at Johns Hopkins University School of Education, Baltimore, MD. It is reported to have received "widespread support", and offers free, up-to-date information on current PK-12 programs in reading, math, social-emotional learning, and attendance that meet the standards of the Every Student Succeeds Act (ESSA) (the United States K–12 public education policy signed by President Obama in 2015). It provides information both on programs that meet ESSA standards and on those that do not. Research sources and information: Evidence-based PK-12 programs There are three program categories: 1) whole class, 2) struggling readers, and 3) English learners. Programs can be filtered by a) ESSA evidence rating (strong, moderate, and promising), b) school grade, c) community (rural, suburban, urban), d) groups (African American, Asian American, Hispanic, White, free and reduced-price lunch, English learners, and special education), and e) a variety of features such as cooperative learning, technology, tutoring, etc. Research sources and information: For example, as of June 2020 there were 89 reading programs in the database. After filtering for strong results, grades 1–2, and free and reduced-price lunches, 23 programs remain. If the list is also filtered for struggling readers, it is narrowed to 14 programs. The resulting list is shown by the ESSA ratings: Strong, Moderate or Promising. Each program can then be evaluated according to the following: number of studies, number of students, average effect size, ESSA rating, cost, program description, outcomes, and requirements for implementation. Research sources and information: Social programs that work and Straight Talk on Evidence Social Programs That Work and Straight Talk on Evidence are administered by Arnold Ventures LLC's evidence-based policy team, with offices in Houston, Washington, D.C., and New York City. The team is composed of the former leadership of the Coalition for Evidence-Based Policy, a nonprofit, nonpartisan organization advocating the use of well-conducted randomized controlled trials (RCTs) in policy decisions. It offers information on twelve types of social programs including education.
Research sources and information: Social Programs That Work evaluates programs according to their RCTs and gives them one of three ratings: Top Tier: Programs with two or more replicable and well-conducted RCTs (or one multi-site RCT), carried out in typical community settings and producing sizable, sustained outcomes. Near Top Tier: Programs that meet almost all elements of the Top Tier standard but need another replication RCT to confirm the initial findings. Research sources and information: Suggestive Tier: Programs appearing to be strong candidates but with some shortcomings. They produce sizeable positive effects based on one or more well-conducted RCTs (or studies that almost meet this standard); however, the evidence is limited by factors such as short-term follow-up or effects that are not statistically significant. Education programs include K-12 and postsecondary. The programs are listed under each category according to their rating, and the update date is shown. For example, as of June 2020 there were 12 programs under K-12; two were Top Tier, five were Near Top Tier, and the remainder were Suggestive Tier. Each program contains information about the program, evaluation methods, key findings and other data such as the cost per student. Beyond the general category, there does not appear to be any way to filter for only the type of program of interest; however, the list may not be especially long. Research sources and information: Straight Talk on Evidence seeks to distinguish between programs that only claim to be effective and programs showing credible findings of being effective. It reports mostly on randomized controlled trial (RCT) evaluations, recognizing that RCTs offer no guarantee that a study was implemented well, or that its reported results represent the true findings. The lead author of a study is given an opportunity to respond to the report prior to its publication. Research sources and information: What Works Clearinghouse (WWC) What Works Clearinghouse (WWC) of Washington, DC, was established in 2002 and evaluates numerous educational programs in twelve categories by the quality and quantity of the evidence and by effectiveness. It is operated by the federal National Center for Education Evaluation and Regional Assistance (NCEE), part of the Institute of Education Sciences (IES). Publications WWC publications are available for a variety of topics (e.g. literacy, charter schools, science, early childhood, etc.) and types (i.e. practice guide or intervention report). Research sources and information: Practice guides, tutorials, videos and webinars Practice guides with recommendations are provided covering a wide variety of subjects, such as Using Technology to Support Postsecondary Student Learning and Assisting Students Struggling with Reading. Other resources such as tutorials, videos and webinars are also available. Research sources and information: Reviews of individual studies Individual studies are available that have been reviewed by WWC and categorized according to the evidence tiers of the United States Every Student Succeeds Act (ESSA). Search filters are available for the following: WWC ratings (e.g. meets WWC standards with or without reservations, meets WWC standards without reservations, etc.) Topic (e.g. behavior, charter schools, etc.) Studies meeting certain design standards (e.g. randomized controlled trial, quasi-experimental design, etc.) ESSA ratings (e.g. ESSA Tier 1, ESSA Tier 2, etc.)
Studies with one or more statistically positive findings Intervention reports, programs and search filters Intervention reports are provided for programs according to twelve topics (e.g. literacy, mathematics, science, behavior, etc.). The filters are helpful for finding programs that meet specific criteria. For example, as of July 2020 there were 231 literacy programs in the WWC database. (Note: these are literacy programs that may have several individual trials, and some of the trials were conducted as early as 2006.) If these programs are filtered for outcomes in Literacy–Alphabetics, the list is narrowed to 25 programs that met WWC standards for evidence and had at least one "potentially positive" effectiveness rating. If the list is further filtered to show only programs in grades one or two, with delivery methods of individual, small group, or whole class, the list is down to 14 programs, and five of those have an effectiveness rating of "strong evidence that intervention had a positive effect on outcomes" in alphabetics. Research sources and information: The resulting list of programs can then be sorted by a) evidence of effectiveness, b) alphabetically, or c) school grades examined. It is also possible to select individual programs to be compared with each other; however, it is advisable to recheck each individual program by searching on the Intervention Reports page. The resulting programs show data in the following areas: outcome domain (e.g. alphabetics, oral language, general mathematics achievement, etc.), effectiveness rating (e.g. positive, potentially positive, mixed, etc.), number of studies meeting WWC standards, grades examined (e.g. K-4), number of students in studies that met the WWC standards, and improvement index (i.e. the expected change in percentile rank). It is also possible to view the program's evidence snapshot, detailed intervention report and review protocols. For other independent "related reviews", go to the evidence snapshot and then the WWC Summary of Evidence. Research sources and information: The following chart, updated in July 2020, shows some programs that had "strong evidence" of a "positive effect on outcomes" in the areas specified. The results may have changed since that time; however, current information is available on the WWC website, including the outcome domains that did not have "strong evidence". Some of the concerns expressed about WWC are that it appears to have difficulty keeping up with the research, so it may not be current, and that when a program is not listed in its database, it may be that it did not meet the criteria or that it has not yet been reviewed, but it is not clear which. In addition, Straight Talk on Evidence, authored by Arnold Ventures LLC's evidence-based policy team, on January 16, 2018, expressed concerns about the validity of the ratings provided by WWC. It says WWC in some cases reported a "preliminary outcome when high-quality RCTs found no significant effects on more important and final educational outcomes". A summary of the January 2020 changes to the WWC procedures and standards is available on their site. Research sources and information: Other sources of information The British Educational Research Association (BERA) claims to be the home of educational research in the United Kingdom. It is a membership association that aims to improve the knowledge of education by advancing research quality, capacity and engagement. Its resources include a quarterly magazine, journals, articles, and conferences.
Campbell Collaboration is a nonprofit organization that promotes evidence-based decisions and policy through the production of systematic reviews and other types of evidence synthesis. It has widespread international support, and allows users to easily search by topic area (e.g. education) or key word (e.g. reading). Doing What Works is provided by WestEd, a San Francisco-based nonprofit organization, and offers an online library that includes interviews with researchers and educators, in addition to materials and tools for educators. WestEd was criticized in January 2020 for allegedly not interviewing all interested parties prior to releasing a report. Early Childhood Technical Assistance Center (ECTA), of Chapel Hill, NC, provides resources on evidence-based practices in areas specific to early childhood care and education, professional development, early intervention and early childhood special education. Florida Center for Reading Research is a research center at Florida State University that explores all aspects of reading research. Its Resource Database allows users to search for information based on a variety of criteria. Institute of Education Sciences (IES), Washington, DC, is the statistics, research, and evaluation arm of the U.S. Department of Education. It funds independent education research, evaluation and statistics. It published a Synthesis of its Research on Early Intervention and Early Childhood Education in 2013. Its publications and products can be searched by author, subject, etc. Research sources and information: The International Initiative for Impact Evaluation (3ie) is a non-governmental organisation, registered since 2008, with offices in New Delhi, London and Washington, DC. Its self-described vision is to improve lives through evidence-informed action in developing countries. In 2016 its researchers synthesised evidence from 238 impact evaluations and 121 qualitative research studies and process evaluations in 52 low- and middle-income countries (L&MICs). It looked at children's school enrolment, attendance, completion and learning. The results can be viewed in its report entitled The impact of education programmes on learning and school participation in low- and middle-income countries. Research sources and information: National Foundation for Educational Research (NFER) is a non-profit research and development organization based in Berkshire, England. It produces independent research and reports about issues across the education system, such as Using Evidence in the Classroom: What Works and Why. Office for Standards in Education (Ofsted), in England, conducts research on schools, early education, social care, further education and skills. The Ministry of Education, Ontario, Canada offers a site entitled What Works? Research Into Practice. It is a collection of research summaries of promising teaching practice written by experts at Ontario universities. RAND Corporation, with offices throughout the world, funds research on early childhood, K-12, and higher education. Research sources and information: ResearchED, a UK-based non-profit founded in 2013, has organized education conferences around the world (e.g. Africa, Australia, Asia, Canada, the E.U., the Middle East, New Zealand, the U.K. and the U.S.) featuring researchers and educators in order to "promote collaboration between research-users and research-creators". It has been described as a "grass-roots teacher-led project that aims to make teachers research-literate and pseudo-science proof".
It also publishes an online magazine featuring articles by practicing teachers and others, such as Professor Daniel T. Willingham (University of Virginia) and Professor Dylan Wiliam (Emeritus Professor, UCL Institute of Education). Finally, it offers frequent, free online video presentations on subjects such as curriculum design, simplifying your practice, unleashing teachers' expertise, the bridge over the reading gap, education post-corona, remote teaching, teaching critical thinking, etc. The free presentations are also available on its YouTube channel. ResearchED has been featured in online debates about so-called "teacher populism". Research sources and information: Research 4 Schools, University of Delaware, is supported by the Institute of Education Sciences, U.S. Department of Education and offers peer-reviewed research about education. Evidence-based learning techniques: The following are some examples of evidence-based learning techniques. Evidence-based learning techniques: Spaced repetition Spaced repetition is a theory that repetitive training that includes long intervals between training sessions helps to form long-term memory. It is also referred to as spaced training, the spacing effect, and spaced learning. Such training has been known since the seminal work of Hermann Ebbinghaus to be superior to training that includes short inter-trial intervals (massed training or massed learning) in terms of its ability to promote memory formation. As a learning technique it is often performed with flashcards. Newly introduced and more difficult flashcards are shown more frequently, while older and less difficult flashcards are shown less frequently, in order to exploit the psychological spacing effect. The use of spaced repetition has been proven to increase the rate of learning. Although the principle is useful in many contexts, spaced repetition is commonly applied in contexts in which a learner must acquire a large number of items and retain them indefinitely in memory. It is, therefore, well suited to the problem of vocabulary acquisition in the course of second-language learning. A number of spaced repetition software packages have been developed to aid the learning process. It is also possible to perform spaced repetition with flash cards using the Leitner system. Evidence-based learning techniques: Errorless learning Errorless learning was an instructional design introduced by psychologist Charles Ferster in the 1950s as part of his studies on what would make the most effective learning environment. B. F. Skinner was also influential in developing the technique, and noted: "errors are not necessary for learning to occur. Errors are not a function of learning or vice versa nor are they blamed on the learner. Errors are a function of poor analysis of behavior, a poorly designed shaping program, moving too fast from step to step in the program, and the lack of the prerequisite behavior necessary for success in the program." Errorless learning can also be understood at a synaptic level, using the principle of Hebbian learning ("Neurons that fire together wire together"). Evidence-based learning techniques: Interest from psychologists studying basic research on errorless learning declined after the 1970s. However, errorless learning attracted the interest of researchers in applied psychology, and studies have been conducted with both children (e.g., educational settings) and adults (e.g. Parkinson's patients).
Errorless learning continues to be of practical interest to animal trainers, particularly dog trainers. Errorless learning has been found to be effective in helping memory-impaired people learn more effectively. The reason for the method's effectiveness is that, while those with sufficient memory function can remember mistakes and learn from them, those with memory impairment may not only have difficulty remembering which methods work, but may also strengthen incorrect responses over correct ones, for example via emotional stimuli. See also the reference by Brown to its application in teaching mathematics to undergraduates. Evidence-based learning techniques: N-back training The n-back task is a continuous performance task that is commonly used as an assessment in cognitive neuroscience to measure a part of working memory and working memory capacity. The n-back was introduced by Wayne Kirchner in 1958. A 2008 research paper claimed that practicing a dual n-back task can increase fluid intelligence (Gf), as measured in several different standard tests. This finding received some attention from popular media, including an article in Wired. However, a subsequent criticism of the paper's methodology questioned the experiment's validity and took issue with the lack of uniformity in the tests used to evaluate the control and test groups. For example, the progressive nature of Raven's Advanced Progressive Matrices (APM) test may have been compromised by modifications of time restrictions (i.e., 10 minutes were allowed to complete a normally 45-minute test). The authors of the original paper later addressed this criticism by citing research indicating that scores in timed administrations of the APM are predictive of scores in untimed administrations. The 2008 study was replicated in 2010 with results indicating that practicing single n-back may be almost equal to dual n-back in increasing the score on tests measuring Gf (fluid intelligence). The single n-back test used was the visual test, leaving out the audio test. In 2011, the same authors showed a long-lasting transfer effect in some conditions. Two studies published in 2012 failed to reproduce the effect of dual n-back training on fluid intelligence. These studies found that the effects of training did not transfer to any other cognitive ability tests. In 2014, a meta-analysis of twenty studies showed that n-back training has a small but significant effect on Gf, improving it on average by the equivalent of 3–4 IQ points. In January 2015, this meta-analysis was the subject of a critical review due to small-study effects. The question of whether n-back training produces real-world improvements to working memory remains controversial.
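As an illustration of what the task itself involves, the following sketch generates a 2-back letter sequence and scores a set of "match" responses. It is a toy version for illustration only; real experiments control target frequency, stimulus timing, and modality far more carefully.

```python
# Toy n-back trial generator and scorer (illustrative only).
import random

def make_sequence(length=20, n=2, alphabet="BCDFG", target_rate=0.3):
    """Build a letter sequence in which roughly target_rate of positions
    repeat the letter shown n steps earlier (an 'n-back target')."""
    seq = [random.choice(alphabet) for _ in range(n)]
    for _ in range(length - n):
        if random.random() < target_rate:
            seq.append(seq[-n])            # deliberate n-back repeat
        else:
            seq.append(random.choice(alphabet))
    return seq

def score(seq, responses, n=2):
    """responses: set of positions where the participant signalled a match."""
    hits = misses = false_alarms = 0
    for i in range(n, len(seq)):
        is_target = seq[i] == seq[i - n]
        if is_target and i in responses:
            hits += 1
        elif is_target:
            misses += 1
        elif i in responses:
            false_alarms += 1
    return {"hits": hits, "misses": misses, "false_alarms": false_alarms}

seq = make_sequence()
perfect = {i for i in range(2, len(seq)) if seq[i] == seq[i - 2]}
print("".join(seq), score(seq, perfect))   # a perfect responder has no misses
```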
**Central giant-cell granuloma** Central giant-cell granuloma: Central giant-cell granuloma (CGCG) is a localised benign condition of the jaws. It is twice as common in females and is more likely to occur before age 30. Central giant-cell granulomas are more common in the anterior mandible, often crossing the midline and causing painless swellings. Signs and symptoms: CGCG is the most common giant-cell lesion of the jaws. These lesions are localised fibrous tissue tumours which contain osteoclasts and are usually several centimetres across. Frequently, a painless swelling that grows and expands rapidly is present. This growth can also erode through bone, including the alveolar ridge, resulting in a soft tissue swelling that is purple in colour. Paresthesia of the lip has also been observed. Resorption of tooth roots is seen in 37% of cases, compared to displacement of teeth in 50%. Two-thirds of lesions are found anterior to the molars in the mandible, where teeth have deciduous predecessors. CGCGs are twice as likely to affect females and are usually seen in those under 30 years of age; however, they can be seen across a broad age range. Signs and symptoms: Noonan syndrome Multiple CGCGs can be found in individuals with Noonan syndrome. Mutations in PTPN11 or RAS pathway genes are seen. Diagnosis: Radiographically, CGCGs have a rounded, cyst-like radiolucent area with a well-defined margin, with 53% showing scalloped margins. They can have a multilocular (honeycomb or soap bubble) appearance. Histologically, CGCG is similar to the brown tumour found in hyperparathyroidism; biochemical investigation of serum calcium is used to exclude hyperparathyroidism. Histology The pathogenesis is unknown. Diagnosis: Histology of CGCG shows a lobulated mass composed of vascular connective tissue and multinucleated giant cells (osteoclasts). The giant cells may be diffusely located throughout the lesion or focally aggregated in the lesion, often clustered around hemorrhagic areas and hemosiderin deposits. Lobules of the lesion can be separated by fibrous tissue or even a thin layer of bone or osteoid, which can be seen radiographically. Giant cells are thought to form in response to signals produced by fibroblasts and blood vessels, or as a response to cytokines. Diagnosis: Differential diagnosis Odontogenic keratocyst Ameloblastoma Odontogenic myxoma Hemangioma Central odontogenic fibroma Brown tumour of hyperparathyroidism Cherubism Aneurysmal bone cysts Treatment: The treatment for an enlarged CGCG is usually thorough curettage. Recurrence ranges from 15% to 20%; a second curettage is sufficient to prevent further recurrence. Rapidly growing tumours are more likely to recur and can sometimes require full excision with surrounding bone. Large lesions can require en bloc resection. Alternatives or adjuncts to surgery: corticosteroids, which convert lesions into fibrous tissue; calcitonin, which slows growth; interferon α-2a, which slows growth; and bisphosphonates, which slow growth. These therapeutic approaches provide possible alternatives for large lesions which cannot undergo immediate surgery, or for children in whom facial growth following surgery might be affected. However, no significant differences have been found between surgical and non-surgical methods for treating CGCGs. The long-term prognosis of giant-cell granulomas is good, and metastases do not develop.
**Diiminopyridine** Diiminopyridine: Diiminopyridines (DIP, also known as pyridine diimines, PDIs) are a class of diimine ligands. They feature a pyridine nucleus with imine sidearms appended to the 2,6-positions. The three nitrogen centres bind metals in a tridentate fashion, forming pincer complexes. Diiminopyridines are notable as non-innocent ligands that can assume more than one oxidation state. Complexes of DIPs participate in a range of chemical reactions, including ethylene polymerization, hydrosilylation, and hydrogenation. Synthesis and properties of DIP ligands: Many DIPs have been prepared. They are synthesized by Schiff base condensation of commercially available 2,6-diacetylpyridine or 2,6-diformylpyridine with two equivalents of substituted anilines. Using substituted anilines, one can obtain DIPs with diverse steric environments. Commonly used bulky anilines are 2,4,6-trimethylaniline and 2,6-diisopropylaniline. Unsymmetric variations have been established by successive condensation of different anilines. The dicarbonyl portion of the backbone can be further modified, as with 2,6-dipyridecarboxaldehyde and 2,6-dibenzoylpyridine. Most commonly, variations in the DIP arise from changes in the anilines. Synthesis and properties of DIP ligands: Effect of steric bulk Depending on their steric bulk, DIP ligands form complexes of 2:1 and 1:1 (ligand:metal) ratios, M(DIP)2 and M(DIP)Lx, respectively. The 2:1 complexes occur for unhindered DIP ligands. Although such complexes are coordinatively saturated, they have been studied for their electronic and structural properties. Formation of 2:1 complexes is suppressed with bulky DIP ligands. Complexes of the type M(DIP)Ln exhibit diverse reactivity. Fe and Co complexes: The reduction of Fe(II)(DIP)X2 with sodium amalgam under nitrogen yields a square-pyramidal bis(dinitrogen) complex, Fe(II)(DIP)(N2)2. This complex is a useful precursor to other derivatives by exchange of the dinitrogen ligands, e.g. with H2 and CO, to give the monohydrogen or dicarbonyl complexes. Aryl azides give imido complexes. Fe(DIP)(N2)2 is a precursor to highly active catalysts for hydrosilylation and hydrogenation reactions. Dissociation of N2 from Fe(DIP)(N2)2 results in binding of the anilino arene in an η6-fashion. This binding mode may play a role in the catalytic hydrogenation cycle. Fe and Co complexes: The reactivities of cobalt- and iron-DIP complexes are similar. Cobalt DIP complexes with azide ligands have been shown to lose N2 to give reactive nitrido complexes that undergo C-H activation of benzylic sites of the aryl substituents. The resulting cyclometalated amides adopt a roughly planar geometry. Noninnocence of DIP complexes: The highly conjugated ligand framework of bis(imino)pyridine stabilizes metals in unusual oxidation states. The ability of the neutral complex to accept up to three electrons leads to ambiguity about the oxidation state of the metal center. The Fe(DIP)(N2)2 complex is ostensibly an 18-electron complex, consisting of Fe(0) with five 2-electron ligands. Mössbauer spectroscopy indicates, however, that this complex is better described as a ferrous derivative of DIP2−. This assignment is corroborated by the high frequency of the νNN vibration in the infrared spectrum, which is more consistent with Fe(II). Thus, reduction of Fe(DIP)Br2 is ligand-centered, not Fe-centered.
Noninnocence of DIP complexes: This non-innocent behavior allows iron-DIP complexes to participate in 2e redox reactions, a pattern more usually seen for complexes of the platinum group metals. Catalytic reactions of M-DIP complexes: The catalytic properties of DIP complexes of Fe, Co, and Ni have attracted much attention. In principle, catalysts derived from "base metals" are preferred to noble transition metal catalysts due to their lower environmental impact and cost effectiveness. Furthermore, owing to its modular synthesis, the DIP ligand is easily modified, allowing diversity in ligand screening. Complexes of the type M(DIP)Xn serve as precatalysts for ethylene polymerization. The precatalysts are activated by treatment with methylaluminoxane (MAO), which serves as a co-catalyst. Activities for 2,6-bis(imino)pyridine iron complexes are often comparable to or greater than those of group 4 metallocenes. The aryl substituents greatly affect the products. Small aryl substituents allow for highly selective production of oligomeric α-olefins, whereas bulky groups provide strictly linear, high molecular weight polyethylene. Silica-supported and homogeneous catalysts have been reported. Traditionally catalyzed by Pt and other precious metals, hydrosilylation is also catalyzed by Fe-DIP complexes. Reactions proceed under mild conditions, show anti-Markovnikov selectivity, and tolerate diverse functional groups. Depending on the steric properties of the ligand, Fe-DIP complexes catalyze hydrogenation of terminal olefins. Variations of DIP ligands: In N-heterocyclic carbene variations of the diiminopyridine complex, the pyridine or the imine substituents are replaced with an NHC group. The aryl-substituted bis(imino)NHC complexes produce tridentate ligands, while the pyridine-exchanged NHC forms exclusively bidentate complexes. This is presumably due to the additional strain from the five-membered ring of the central carbene.
**Physical security information management** Physical security information management: Physical security information management (PSIM) is a category of software that provides a platform and applications created by middleware developers, designed to integrate multiple unconnected security applications and devices and control them through one comprehensive user interface. It collects and correlates events from existing disparate security devices and information systems (video, access control, sensors, analytics, networks, building systems, etc.) to empower personnel to identify and proactively resolve situations. PSIM integration enables numerous organizational benefits, including increased control, improved situation awareness and management reporting. Ultimately, these solutions allow organizations to reduce costs through improved efficiency and to improve security through increased intelligence. Physical security information management: A complete PSIM software system has six key capabilities: Collection: Device management independent software collects data from any number of disparate security devices or systems. Analysis: The system analyzes and correlates the data, events, and alarms to identify the real situations and their priority. Verification: PSIM software presents the relevant situation information in a quick and easily digestible format for an operator to verify the situation. Resolution: The system provides standard operating procedures (SOPs), step-by-step instructions based on best practices and an organization’s policies, and tools to resolve the situation. Reporting: The PSIM software tracks all the information and steps for compliance reporting, training and, potentially, in-depth investigative analysis. Audit trail: The PSIM also monitors how each operator interacts with the system, tracks any manual changes to security systems and calculates reaction times for each event. (A minimal illustrative code sketch of the collection and analysis stages appears at the end of this article.) PSIM-based integration: A key difference between PSIM-based integration and other forms of physical security system integration is the ability of a PSIM platform to connect systems at a data level, in contrast to other forms of integration which interface only a limited number of products. PSIM allows use of open technologies which are compatible with a large number of manufacturers. These PSIM products offer more opportunities for expansion and can reduce implementation costs through greater use of existing equipment. PSIM solutions in general are deployed to centralize information to single or multiple control hubs. These are referred to as control rooms or command and control centres (CCC, C4I, etc.). PSIM-based integration: The ability to connect with other technologies is an important feature of any basic PSIM, as is the capability to integrate with open industry standards (PSIA, ONVIF, ODBC, etc.). Security systems typically integrated into a PSIM solution include: Access control systems Automated barriers and bollards Building management systems such as heating, HVAC and lift/elevator control
CCTV (closed circuit TV) Computer Aided Dispatch systems Electronic article surveillance (EAS) Fire detection GIS mapping systems Intercom and IP phone Intrusion detection system Intrusion systems Lighting control system Perimeter intrusion detection systems Power monitoring system Radar-based detection and perimeter surveillance radar Security alarm Video content analysis Video wall Operator guidance: PSIM solutions manage all of the data produced by the various security applications (where the security application manufacturer's API or SDK allows), and aggregate them to produce meaningful intelligence. This in turn is converted to create graphical situation management content, combining relevant visual intelligence, workflow based on on-screen guidance and automated tasks (also referred to as a Common Operating Interface). This is used for both event management and for day-to-day security operations. Some of the more advanced PSIM products offer dynamic guidance, which can be changed according to the perceived threat level. This threat level is governed by both external intelligence, such as DHS advice, and internal intelligence, such as the number of attempted breaches. This level of dynamic guidance again relies on the level of integration achieved with any given manufacturer's API or SDK. Typical deployments: PSIM solutions can be found in a wide range of industry and government sectors across the globe. The following are industries where PSIM deployments can be found: Corporate enterprise Critical national infrastructure protection Education Energy, oil & gas Healthcare Homeland defense Industrial & manufacturing Law enforcement Retail & distribution Safe Cities Travel & transportation. Examples of PSIM deployments: Atlanta Police Foundation and the Atlanta Police Department: Operation Shield Video Integration Center British Transport Police City of Baltimore: CitiWatch video surveillance program Ventura Police Department: Video Camera Community Partnership Program Washington Metropolitan Area Transit Authority (WMATA) Industry bodies: Open Network Video Interface Forum (ONVIF): open industry forum for the development of a global standard for the interface of IP-based physical security products Physical Security Interoperability Alliance (PSIA): a global consortium of physical security manufacturers and systems integrators focused on promoting interoperability of IP-enabled security devices Security Industry Association: trade association for electronic and physical security solution providers OPC Foundation: interoperability standard for the secure and reliable exchange of data SIP Forum: advances the adoption of products and services based on the Session Initiation Protocol BACnet: data communication protocol for building automation and control networks
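The following sketch is purely illustrative of the collection and analysis capabilities described above: it is not based on any particular PSIM product, and every device name, event type and the 30-second correlation window is a hypothetical placeholder. It shows, in miniature, how events gathered from two unconnected subsystems (an access-control door alarm and a video-analytics motion event) might be correlated into a single prioritised situation for an operator to verify.

```c
/* Toy illustration of the PSIM collection/analysis stages described above.
 * All device names, event types and the 30-second window are hypothetical;
 * a real PSIM platform would ingest such events through vendor drivers or
 * open standards rather than hard-coded literals. */
#include <stdio.h>
#include <string.h>
#include <time.h>

struct event {
    const char *device;   /* source subsystem, e.g. access control or CCTV */
    const char *type;     /* normalised event type                         */
    time_t      when;     /* timestamp assigned at collection              */
};

/* Analysis: correlate a forced-door alarm with camera motion reported
 * shortly afterwards, and escalate the pair as one prioritised situation. */
static void correlate(const struct event *a, const struct event *b)
{
    double gap = difftime(b->when, a->when);   /* seconds between events */

    if (strcmp(a->type, "DOOR_FORCED") == 0 &&
        strcmp(b->type, "VIDEO_MOTION") == 0 &&
        gap >= 0.0 && gap <= 30.0) {
        /* Verification and resolution would follow here: show the camera
         * feed and a standard operating procedure to the operator. */
        printf("SITUATION (high priority): %s then %s within %.0f s\n",
               a->device, b->device, gap);
    }
}

int main(void)
{
    time_t now = time(NULL);
    struct event door = { "door-12 (access control)",  "DOOR_FORCED",  now     };
    struct event cam  = { "camera-7 (video analytics)", "VIDEO_MOTION", now + 8 };

    correlate(&door, &cam);   /* collection is simulated by these literals */
    return 0;
}
```

A real deployment would apply many such rules, ingest events through interfaces such as ONVIF or PSIA, and attach standard operating procedures and audit records to the resulting situation.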
**Timer coalescing** Timer coalescing: Timer coalescing is a computer system energy-saving technique that reduces central processing unit (CPU) power consumption by reducing the precision of software timers used for synchronization of process wake-ups, minimizing the number of times the CPU is forced to perform the relatively power-costly operation of entering and exiting idle states. Implementations of timer coalescing: The Linux kernel gained support for deferrable timers in 2.6.22 and controllable "timer slack" for threads in 2.6.28, allowing timer coalescing. Timer coalescing has been a feature of Microsoft Windows from Windows 7 onward. Apple's XNU-kernel-based OS X gained support as of OS X Mavericks. FreeBSD has supported it since September 2010.
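On Linux, the per-thread "timer slack" mentioned above can be adjusted through the prctl(2) interface introduced in kernel 2.6.28. The sketch below is a minimal, Linux-specific illustration; the 50-millisecond value is an arbitrary example rather than a recommended setting.

```c
/* Minimal sketch: widen the calling thread's timer slack on Linux so the
 * kernel may coalesce nearby timer expirations (prctl(2), Linux >= 2.6.28).
 * The 50 ms value is an arbitrary illustration, not a recommendation. */
#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
    unsigned long slack_ns = 50UL * 1000 * 1000;   /* 50 ms, in nanoseconds */

    /* PR_SET_TIMERSLACK applies only to the calling thread. */
    if (prctl(PR_SET_TIMERSLACK, slack_ns, 0, 0, 0) != 0) {
        perror("prctl(PR_SET_TIMERSLACK)");
        return 1;
    }

    /* PR_GET_TIMERSLACK returns the current slack as prctl's return value. */
    printf("timer slack is now %lu ns\n",
           (unsigned long)prctl(PR_GET_TIMERSLACK, 0, 0, 0, 0));
    return 0;
}
```

Widening the slack tells the kernel that timers set by this thread (for example via nanosleep, poll or epoll_wait) may expire up to that many nanoseconds late, giving it freedom to batch nearby expirations into a single wake-up and to remain in idle states longer.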
**Help & Manual** Help & Manual: Help & Manual is a Windows-based help authoring tool published by EC Software, a company based in Austria. Help & Manual: Like many help authoring tools, Help & Manual allows the writer to create a single source text which it then converts to a number of target formats. In this case, the author creates the source text using an editor built into the Help & Manual program. The text, along with the user's settings for the project, is stored in XML files. Version history: Version 7.0 of Help & Manual was released in June 2015. It supports the following output formats: PDF Compiled HTML HTML Web Page Visual Studio Help Microsoft Word eBook (Windows executable containing an embedded viewer) ePUB e-books Amazon Kindle e-books Printed manuals. Version 4.x of Help & Manual also supports Unicode for creating help in all international languages, including Asian languages like Chinese, Japanese and Thai, but excluding right-to-left languages. Version history: Version 5.x of Help & Manual contains a completely redesigned Ribbon interface, and permits multiple users to collaborate and work on a single help project at the same time. Version 6.x of Help & Manual has a slightly modified user interface and adds support for the ePUB publishing format. Version 7.x of Help & Manual adds support for Team Foundation Server version control and the Amazon Kindle publishing format. Version history: The program comes bundled with several tools, including a screenshot capture tool, a graphics program with functions for editing and enhancing screenshots, a help context tool for managing, importing and exporting help context numbers, a "print manual designer" for creating and editing layout templates for PDF files and printed manuals, and a reporting tool for generating and exporting project reports. Version history: 3.x - used RTF-based source files with a .HM3 extension. 4.x - uses XML-based source files with a .HMX extension 5.x - uses XML-based source files with a .HMXZ extension 6.x - project source files compatible with version 5 7.x - project source files compatible with versions 5 and 6
**Sirukumab** Sirukumab: Sirukumab (INN, USAN) (developmental code name CNTO-136, tentative brand name Plivensia) is a human monoclonal antibody designed for the treatment of rheumatoid arthritis. It acts against the proinflammatory cytokine Interleukin 6 (IL-6). Sirukumab is currently under development by Johnson & Johnson's subsidiary Centocor. Clinical trials: Rheumatoid arthritis It has entered clinical trials and reported some phase II results. In December 2015, three phase III trials (SIRROUND-D, -H and -T) were collecting data. By February 2017, SIRROUND-D was considered to have met both co-primary endpoints. Other The drug was previously under development for the treatment of depression.
**Audioprosthology** Audioprosthology: Audioprosthology is the profession of the fitting of a hearing aid, or auditory prosthesis. An audioprosthologist is defined as “an aid-fitting specialist who has completed a course in audioprosthology.” This term was adopted by a group of hearing instrument specialists and the International Hearing Society (IHS) in 1976. The American Conference on Audioprosthology (ACA) sponsored courses in audioprosthology until 2016. Currently, the American Conference of Audioprosthology program is being revamped for distance learning. Definition: The roots of the term make its definition self-explanatory: audio for hearing, prosthetic for device, and ology for science. “Audio,” as used in both “audiology” and “audioprosthology,” is derived from the Latin term “audire,” which means “to hear,” and is commonly used in numerous other English words that are related in varying ways to hearing and sound. Audioprosthology was originally formed by the International Hearing Society (IHS). Audioprosthologists, Audiologists, and Hearing Instrument Specialists provide services and testing for hearing aids and hearing loss. Each individual state requires them to meet certain requirements and regulations; in most states the same practical exam must be taken by Audioprosthologists, Audiologists and Hearing Instrument Specialists. Doctors of Audiology can diagnose but still cannot prescribe any type of medication, because their doctorate is not a medical degree. Audioprosthologists can also work in private practice or in doctors' offices. Origins: Founded in 1976 by Harold Williams, EdD, and Robert Briskey, the ACA program was developed in response to a need for advanced training for hearing instrument specialists. The name was believed to be an accurate description of the course of study – the application of prostheses, that is hearing aids, to ameliorate auditory impairments. The first graduates of the program in 1978 realized the benefits of the coursework immediately and recommended that the ACA program be offered nationwide. Origins: In 1993 the International Hearing Society (IHS) assumed control of the curriculum and applied to the American Council on Education (ACE) for an assessment of the program's university credit equivalence. ACE determined that completion of the ACA course of study was the equivalent of 15 upper-level baccalaureate semester credits. That equivalency meant that colleges and universities that recognized the ACE credit-equivalence paradigm would accept those 15 credits toward an undergraduate degree. The ACA program was launched at sites across the U.S. and was offered regularly until the program went on indefinite hiatus in 2016. Origins: Hearing aid specialists have typically learned their profession through an apprenticeship, in that they're trained and supervised by another licensed individual. When qualified they take their state's examination, and upon passing, they're granted a license to practice. This apprenticeship provides them with the skill set necessary for entry-level, safe practice. The original intent of the ACA Program was to provide current practitioners with the scientific foundation for their vocation, thus taking them to an advanced practice designation through formal coursework, laboratory exercises, and summative examinations.
Purpose: The ACA Program is an opportunity for adult learners to supplement their skill set with the knowledge and theoretical background that could move them to a higher level of proficiency and professionalism, that is, to an advanced practice status. The use of the term audioprosthologist is a privilege of successful completion of the course of study. An individual must complete and pass a 13-month course and a subsequent practicum prior to being granted this privilege. Purpose: The intent of the American Council on Education (ACE) process was to provide an opportunity for practitioners to gain access to a college degree through lifelong learning and workplace skills. The ACA Program has been determined to be equivalent to 15 semester hours of upper-level baccalaureate credit by the ACE College Credit Recommendation Service. ACE will only evaluate courses of study that are comparable to the learning offered at the college level in terms of course content, learning methods, and assessment procedures. Over 1800 academic institutions accept the ACE credit recommendations. In the field of hearing instrument sciences, Spokane Falls Community College has provided advanced standing for ACA graduates, accepting all ACA credits and granting an automatic one-third fulfillment of the requirements for the two-year associate degree in hearing instrument sciences. Purpose: The ACA Program embraces the concept that working adults should have access to academic credit for formal courses and examinations taken outside traditional degree programs with 100% relevance to their chosen careers. Institutions of higher learning embrace these non-traditional approaches because they facilitate adult learners in earning undergraduate degrees. The ACA Program has been embraced by hearing instrument specialists because it provides an opportunity for them to achieve an advanced practice status, and a few have gone on to use this experience for college credit. Course of Study: The ACA educational program contains five courses structured to conform to a semester-hour format common to universities. Each of the five courses is held over three two-day sessions (weekends) for a total of 42 classroom hours per course. The core faculty consists of individuals with extensive knowledge and experience in the academic and/or business world. It is the core faculty's responsibility to teach the courses in the ACA program, evaluate student performance and attainment of learning objectives, make suggestions about additional faculty, periodically review the curriculum, and make recommendations for curriculum revisions in light of new knowledge, methodologies, and advancements in hearing aid engineering. Course of Study: Students are required to attend all classes and complete all class assignments with a grade of 70% or better. Failure requires that the course be repeated. Official transcripts are available to each student who completes the ACA through the ACE Transcript Services in Washington, DC.
**Astrolinguistics** Astrolinguistics: Astrolinguistics is a field of linguistics connected with the search for extraterrestrial intelligence (SETI). Early Soviet experiments: Arguably the first attempt to construct a language for interplanetary communication was the AO language created by the anarchist philosopher Wolf Gordin (brother of Abba Gordin) in his books Grammar of the Language of the Mankind AO (1920) and Grammar of the Language AO (1924). It was presented as a language for interplanetary communication at the First International Exhibition of Interplanetary Machines and Mechanisms (dedicated to the 10th anniversary of the Russian Revolution and the 70th anniversary of the birth of Tsiolkovsky) in Moscow, 1927. The declared goal of Gordin was to construct a language which would be non-"fetishizing", non-"sociomorphic", non-gender based and non-classist. The design of the language was inspired by Russian Futurist poetry, the Gordin brothers' pan-anarchist philosophy, and Tsiolkovsky's early remarks on possible cosmic messaging (which were in accord with Hans Freudenthal's later insights). However, Sergei N. Kuznetsov notes that "Gordin nowhere defines his language as intended for space use," and that "in none of his works does he deal with problems of space communication, only mentioning 'Interplanetary Communication' in passing among other technical areas." Freudenthal's LINCOS: An integral part of the SETI project in general is research in the field of the construction of messages for extraterrestrial intelligence, possibly to be transmitted into space from Earth. As far as such messages are based on linguistic principles, the research can be considered to belong to astrolinguistics. The first proposal in this field was put forward by the mathematician Hans Freudenthal at the University of Utrecht in the Netherlands, in 1960 – around the time of the first SETI effort at Green Bank in the US. Freudenthal conceived a complete Lingua Cosmica. His book LINCOS: Design of a Language for Cosmic Intercourse seems at first sight non-linguistic, because mathematical concepts are the core of the language. The concepts are, however, introduced in conversations between persons (Homo sapiens), de facto by linguistic means. This is witnessed by the innovative examples presented. The book set a landmark in astrolinguistics. This was witnessed by Bruno Bassi's review years later. Bassi noted: “LINCOS is there. In spite of its somewhat ephemeral 'cosmic intercourse' purpose it remains a fascinating linguistic and educational construction, deserving existence as another Toy of Man's Designing”. Freudenthal eventually lost interest in developing the work further because of difficulties in applying LINCOS "for [anything] other than mathematical contents due to the potential different sociological aspects of alien receivers". Ollongren's LINCOS: The concept of astrolinguistics in scientific research was coined as such, also with a view towards message construction for ETI, in 2013 in the monograph Astrolinguistics: Design of a Linguistic System for Interstellar Communication Based on Logic, written by the astronomer and computer scientist Alexander Ollongren from the University of Leiden (the Netherlands). This book presents a new Lingua Cosmica totally different from Freudenthal's design. It describes the way the logic of situations in human societies can be formulated in the lingua, also named LINCOS.
This astrolinguistic system, also designed for use in interstellar communication, is based on modern constructive logic – which assures that all expressions are verifiable. At a deeper, more fundamental level, however, astrolinguistics is concerned with the question whether linguistic universals can be identified which are potentially useful in communication across interstellar distances between intelligent species. In the view of the new LINCOS these might be certain logic descriptions of specific situations and relations (possibly in an Aristotelian sense). Kadri Tinn's (Astronomy for Humans) review of Ollongren's book recognised that aspect – she wrote: Astrolinguistics is the study of interstellar languages and possibility of communication using an artificially created language that is self-contained and wouldn't include some of the aspects of natural languages. … new Lingua Cosmica is a language system based on applied logic, the understanding of which might be expected from a civilization that has developed technology advanced enough to receive radio emissions.
**Window dresser** Window dresser: Window dressers are retail workers who arrange displays of goods in shop windows or within a shop itself. Such displays are themselves known as "window dressing". They may work for design companies contracted to work for clients or for department stores, independent retailers, airport or hotel shops. Alone or in consultation with product manufacturers or shop managers they artistically design and arrange the displays and may put clothes on mannequins—or use the services of a mannequin dresser—and display the prices on the products. They may hire joiners and lighting engineers to augment their displays. When new displays are required they have to dismantle the existing ones, and they may have to maintain displays during their lifetimes. Some window dressers hold formal display design qualifications. Notable window dressers: Diane Arbus’s father David Nemerov was a window dresser at her mother Gertrude's Fifth Avenue department store, Russeks, before they married. Giorgio Armani, the fashion designer, once worked as a window dresser. David Bailey, British photographer. Roseanne Barr worked as a waitress and a window dresser in Denver prior to her showbiz career. L. Frank Baum, better known for his novel The Wonderful Wizard of Oz, published a treatise on the art of window dressing. Karl Bissinger, American mid-century photographer of notable artists, was a window-dresser at Lord & Taylor earlier in his career. Henry Clarke, a Vogue photographer, first worked in the 1940s as a window dresser for I. Magnin, a luxury department store in San Francisco, before becoming a background and accessorising assistant at the Vogue New York studio, where he learned to photograph by observing the different styles of Cecil Beaton, Irving Penn and Horst P. Horst. Salvador Dalí, the surrealist artist, was commissioned by Bonwit Teller in 1939 to do a store window installation, which made headlines. George Dureau, an American photographer and artist who inspired Robert Mapplethorpe, began his career at D. H. Holmes department store. Simon Doonan, columnist for Slate, dressed windows for Barneys department store. Lieutenant Hubert Gruber, a character from the sitcom 'Allo 'Allo!, was a window dresser before his spell in the army. This is frequently alluded to, mainly for comedic effect. Roy Halston Frowick, known simply as Halston, a 1970s American fashion designer, worked as a window dresser while taking a night course at the School of the Art Institute of Chicago. David Hoey is famed for his work at Bergdorf Goodman, most notably on their Christmas season spectaculars. Victor Hugo, a Venezuelan-born artist and one-time assistant to Andy Warhol, produced window dressings for Halston in the 1970s, becoming the first to transform windows and mannequins into Pop Art. Don Imus, American radio personality, once worked as a department store window dresser. Ellen Jose, an Australian indigenous artist and photographer. Alice Lex-Nerlinger, after graduation from art school, worked as a shop window decorator in the department store Tempelhof from 1916–18, an experience which brought her closer to sisters in the labour movement, the subjects of her early photography and montage. Peter Lindbergh, German fashion photographer and film director, worked as a window dresser for the Karstadt and Horten department stores in Duisburg. Raymond Loewy, early in his career, dressed windows for Macy's in New York. Christine McVie worked as a window dresser in London in the 1960s.
American stage and film director Vincente Minnelli's first job was at Marshall Field's department store in Chicago as a window dresser. Gene Moore was a leading 20th-century window dresser. Molina, a fictional character, one of the principals of Manuel Puig's novel Kiss of the Spider Woman, was a window dresser prior to his incarceration. Rhoda Morgenstern, a fictional character from The Mary Tyler Moore Show and its spinoff Rhoda, makes her living as a window dresser in Minneapolis and New York City. Walter Pfeiffer, Swiss photographer. Terry Richardson, American fashion and portrait photographer, was a Bloomingdale's window dresser in the 1950s. Henk Schiffmacher, Dutch tattoo artist, was a window dresser at De Bijenkorf. Joel Schumacher, the film director, was once a window dresser employed by the store Henri Bendel. E. C. Segar left his job as a projectionist and worked at decorating jobs including paper hanging, painting and window dressing, before deciding on a career as a cartoonist. Notable window dressers: Henry Talbot worked as a department store window-dresser in London in the 1930s before being shipped to Australia on the Dunera, where he became a fashion photographer and business partner of Helmut Newton. Hans Hermann Weyer, a German seller of fraudulent nobility and academic titles and flamboyant member of the international jet set who became an honorary consul of Bolivia in Luxembourg, was in his youth an apprentice window dresser. Johnny Lamberti, an American window dresser since 1971, worked mainly for the Syrian community retail and wholesale stores, including the Crazy Eddie electronics stores.
**RCA Spectra 70** RCA Spectra 70: The RCA Spectra 70 was a line of electronic data processing (EDP) equipment manufactured by the Radio Corporation of America’s computer division beginning in April 1965. The Spectra 70 line included several CPU models, various configurations of core memory, mass-storage devices, terminal equipment, and a variety of specialized interface equipment. The system architecture and instruction set were largely compatible with the non-privileged instruction set of the IBM System/360, including use of the EBCDIC character set. While this degree of compatibility made some interchange of programs and data possible, differences in the operating system software precluded transparent movement of programs between the two systems. RCA Spectra 70: Competition in the mainframe market was fierce, and in 1971 the company sold the computer division and the Spectra 70 line to Sperry Rand, taking a huge write-down in the process. System overview: Five models of the Spectra 70 CPU were announced around 1965, ranging from a small system (70/15) to a large-scale system (70/55). Some of the main features were: The systems were upward-compatible, allowing programs written for a smaller model to run on any larger machine in the series. Larger machines in the series were also faster, with memory access times ranging from two microseconds in the 70/15 to 0.84 microseconds in the 70/55. Memory capacities ranged from a minimum of 4,096 bytes (4 KB) in the 70/15 to a maximum of 524,288 bytes (512 KB) in the 70/55. All used the Extended Binary Coded Decimal Interchange Code (EBCDIC) of eight bits plus parity for internal data representation. The use of a standard electrical interface allowed the same peripherals to be used with any CPU model in the series. System overview: Simultaneous input and output was accomplished by the use of intelligent communication channels. Like the IBM 360, two types of channel were available (on all but the 70/15): selector channels, which could address up to 256 devices (one at a time), and multiplexer channels, which could address up to 256 devices concurrently by time-sharing the channel. The full instruction set comprised 144 instructions, including optional floating-point.: p.16  All machines supported decimal and binary fixed-point arithmetic. Floating-point instructions were not available on the 70/15 and 70/25.: p.4 These systems all ran RCA's real-memory operating systems, DOS and TDOS. The 70/45 could also run a time-sharing operating system, the RCA 70/45 Basic Time Sharing System (BTSS), supporting up to 16 users. The systems that supported virtual memory, the Spectra 70/46 and 70/61 and the later RCA 3 and 7, could also run RCA's Virtual Memory Operating System (VMOS). VMOS was originally named TSOS (Time Sharing Operating System), but was renamed to expand the market for the system beyond time-sharing. TSOS was the first mainframe, demand-paged, virtual memory operating system on the market. The Spectra series was later supplemented by the RCA Series (RCA 2, 3, 6, and 7, later renamed the 70/2, 70/3, 70/6, and 70/7), which competed against the IBM System/370. The RCA 2 and 6 ran the real-memory batch-oriented OS/70 operating system, while the RCA 3 and 7 ran VMOS. Some English Electric System 4 mainframes were rebadged Spectra 70 machines; others were English Electric-designed clones of the RCA Spectra 70, itself modelled on the IBM System/360 range.
Models: Model 70/15 The RCA Model 70/15 (1965) was a discrete small-scale processor that could still support a variety of applications. Memory limitations and relatively low processing speed made its use as a stand-alone computer system somewhat impractical. It implemented a small subset of 25 instructions of the full Spectra 70 architecture,: p.10  and was not downward compatible with the rest of the range. Also, the limited memory size available "obviates the need for a base address in that the displacement has the necessary addressing range by the addition of a high-order bit to permit addressing of up to 8,192 bytes.": p.3  In this respect it was similar to the IBM System/360 Model 20. Two memory configurations for the 70/15 were available: either 4,096 bytes or 8,192 bytes of core memory. The memory cycle time for a 70/15 was 2 microseconds per byte of information. Models: The 70/15 was often used as a satellite processor for larger systems or used as an intelligent terminal for remote job entry. Typical applications of a satellite processor would include card-to-tape conversion, card/tape-to-printer report generation, tape-to-card punching, input pre-processing and verification, or tab-shop tasks like file sorting, merging, and data selection. Software for this model did not include an operating system—the RCA 70/15 Programming System consisted of an "Assembly System, Loader Routines, Input-Output Control, Test Routines, Utility Routines, Communication Control, System Maintenance Routines, Report Program Generator, and Sort/Merge." Sort/Merge required a system with 8 KB of memory. The remainder could run in 4 KB. Programs could be run from punched cards or magnetic tape.: pp.43–44 The 70/15 weighed 600 pounds (270 kg). Models: Model 70/25 The RCA Model 70/25 (1965) was a discrete small-to-medium scale computer system that supported a wider variety of applications, including use as a free-standing system. In large installations, the 70/25 might also be used as a subsystem in a multi-processor complex. High throughput was facilitated by the use of fast memory and multiple simultaneous input/output streams. Equipped with selector channels and a multiplexer channel, the 70/25 could concurrently operate eight low-speed devices in addition to eight high-speed devices. Like the Model 15, it implemented a (slightly larger) subset of 31 instructions of the full range architecture.: p.12 Memory capacities for the 70/25 ranged from a minimum of 16,384 bytes to a maximum of 65,536 bytes. The memory cycle time was 1.5 microseconds to access one 8-bit byte. Models: Model 70/35 The RCA Model 70/35 was the fifth in the series of Spectra computers, announced in September 1965 (first delivery in 1966). It was a medium-scale computer combining third-generation technology (including integrated circuits) and speed in an efficient low-cost data system. The Spectra 70/35 handled a wide range of tasks at almost twice the speed of other general-purpose computers in its price range. Unlike the Model 70/45 and 70/55 it did not offer the option of a floating-point processor. The maximum memory was limited to 32,768 bytes from two 16,384-byte core memories. It was offered with both synchronous and asynchronous controllers that allowed it to communicate with other computers.
Models: It was used by the Oklahoma State-Wide Computer Science System, starting in 1966, to connect remote RCA 301 computers in 8 cities to host Vocational-Technical Education in computer science, which was the first state-sponsored program set up exclusively to train data processing personnel. The students were learning the fundamentals of programming and system operation with "hands-on" experience. The 70/35 weighed 1,500 pounds (680 kg). Model 70/45 The RCA Model 70/45 (1966) was a medium scale processor of relatively good performance for its time. A floating-point processor was available as an option and the 70/45 was considered suitable for commercial, scientific, communications, and real-time applications. Models: With a communications multiplexer, the 70/45 could accommodate up to 256 communication lines for interactive use as well as batch processing. Thus, the 70/45 was ideal as the core of a multi-system installation. The 70/45 was one of the first computer systems to use monolithic integrated circuits in its construction. This level of integration was to become the defining characteristic of third-generation computers. Models: Memory capacity for the 70/45 ranged from a minimum of 16,384 bytes (16 KB) to 262,144 bytes (256 KB). The memory cycle time was 1.44 microseconds to access two bytes (one half word) of information. The 70/45 weighed 1,900–2,700 pounds (860–1,220 kg). Models: Model 70/46 The RCA Model 70/46 (1967) is a modified version of the 70/45 with an added virtual memory capability. Advertisements for this computer as a timesharing machine referred to it as the Octoputer. Programs can run in either 70/45 mode—without virtual memory—or in 70/46 mode with virtual memory enabled. Virtual addresses are 24 bits in length. Pages can be specified to be either 2048 or 4096 bytes in length, depending on program requirements; however, 2048-byte pages occupy the lower half of a page frame in memory. The system allows a maximum of 512 pages. Virtual memory is divided into segments of 64 pages indicated by bits 1-5 of a virtual address. Although the instruction set architecture defines up to 32 segments, only eight are used in the 70/46. Incrementation of addresses wraps around on a segment boundary. With 4 KB pages, segments are 256 KB in length, and total virtual memory size is up to 2 MB. With 2 KB pages these numbers are halved. Models: Model 70/55 The RCA Model 70/55 (1966) was a medium-to-large scale processor with excellent processor characteristics well suited to both scientific and large-scale commercial processing. The 70/55 maintained a high-throughput capability by offering up to 14 simultaneous job streams. Like the 70/45, the Model 70/55 made extensive use of monolithic integrated circuits. Memory capacity for the 70/55 ranged from 65,536 bytes (64 KB) of core memory to 524,288 bytes (512 KB). The memory cycle time was 0.84 microseconds to access four bytes of information. The 70/55 weighed 3,000–5,100 pounds (1.5–2.6 short tons; 1.4–2.3 t). Model 70/60 The RCA Model 70/60 was a later addition to the Spectra 70 series, having been announced in 1969. Models: Model 70/61 The RCA Model 70/61 was the virtual memory model of the 70/60, and it was referred to as the Octoputer II in some advertisements. The 70/60 and 70/61 were the first RCA central computers to be capable of supporting 1 MB of core memory, which was housed in 4 standard racks that formed a "T" with the rest of the computer.
Each memory cabinet housed 256 KB of core memory with memory stacks and control logic and power supply in the bottom. These machines later became the RCA 6 and RCA 7 respectively when the company replaced the blue and white cabinets with a new, more modern scheme. Although these computers were fast and reliable, they came too late to impact the lead of the IBM 360 product line. Input-output devices: Input-output devices on the Spectra 70 series were specifically designed to interface with all models of the Spectra processor using the RCA Standard Interface. Initial product offerings in 1965 included: Card punches that were fully buffered and able to operate at 100 or 300 cards per minute, depending upon the specific model. Three models of printers were offered: a medium-speed printer running at 600 lines per minute, a high-speed printer running at 1,250 lines per minute, and a bill-printer running at 600 lines per minute on continuous forms and 800 lines per minute on card-stock. Like the card punches, the printers were fully buffered. The Spectra optical card reader was able to read at up to 1,435 cards per minute with optional mark-sense reading available. Paper-tape capability was offered with 5, 6, 7, or 8 channel tape punches and readers. The punched tape reader operated at 200 characters per second and the tape punch ran at 100 characters per second. Input-output devices: Three versions of magnetic tape were available, running at 30, 60, or 120 kilobytes per second. In purely numeric mode, the tape reading and writing was performed at 240,000 digits per second. All tape drives were “industry” (meaning IBM) compatible and contained automatic error-checking systems. Either 7 or 9 channel tape code could be used and tapes could be written in the forward direction and read in both forward and reverse directions. Input-output devices: Direct access storage was available in the form of a high-speed 70/565 Drum Memory Unit with a capacity of 1 MB and an average access time of 8.6 milliseconds, a 70/564 Disc Storage Unit with an interchangeable 7.25 MB disc-pack and a data interchange rate of 156 kbyte/s, and a 70/568-11 Mass Storage Unit with 8 interchangeable 67 MB magazines. Input-output devices: The Videoscan Document Reader was an optical character recognition scanner with a speed of 1,300 documents per minute. This was primarily used to scan checks and similar transaction documents.
**Cine 160** Cine 160: Cine 160 is a 35 mm film projection process proposed by Allan Silliphant whereby a single frame of film would occupy a length of six film perforations. This could then be used for either of two currently proposed applications: 3-D film projection from two images each occupying 3 perforations (thus attaining the 1.85 aspect ratio already in common use), or making anamorphically squeezed prints of 1.85 ratio films, which would use a greater amount of image area. The system is named Cine 160 because the six-perf frame uses 1.60 times the area of a conventional print. This system has not yet received any mainstream application, however, and it is unknown how receptive theater owners will be to the prospect, which will require significant expense to re-fit projectors to the format. Claimed benefits: The larger frame area can facilitate better and brighter 3D projection, or offer a low-cost means to approach 70 mm film image brightness and clarity using 35 mm film and an anamorphic lens. Allows more brightness and detail to reach the screen than conventional 35mm prints, and much greater detail in the camera image. Permits better brightness when divided into above and below split frames for 3D, or if used non-stereo with an anamorphic lens. Very easy conversion of projectors, which can be set up for "quick-change" in theaters. Will look much better and brighter than 2K digital at 1/10 of the conversion cost. Full 1.60 is better than even anamorphic 35. Will permit running of 35mm IMAX reduction prints in small theaters in remote locations. Allows most existing cameras to be modified to shoot in the format, or projectors to be easily modified. Can act as a "value added" marketing attraction, due to promotion of the trade name, as 70mm did in the past. There is no waste when fitting the image onto an existing 1.85 theater screen, just more brightness, gamma range, and detail. Digital conversion will be a hard sell in the poorer parts of the world. This will allow 3D and "faked 70mm" everywhere soon. Easy to shoot in digital, then make a "DI" (digital intermediate) to release at "near 4K quality".
**Tuberculoma** Tuberculoma: A tuberculoma is a clinical manifestation of tuberculosis which conglomerates tubercles into a firm lump, and so can mimic cancer tumors of many types in medical imaging studies. They often arise within individuals in whom a primary tuberculosis infection is not well controlled. When tuberculomas arise intracranially, they represent a manifestation of CNS tuberculosis. Since these lesions are evolutions of the primary complex, tuberculomas may contain caseum or calcifications. Tuberculoma: With the passage of time, the lesions can calcify as crystals of calcium form within them. Tuberculomas can affect any organ such as the brain, intestine, ovaries, breast, lungs, esophagus, pancreas, bones, and many others. Even with guideline-directed treatment they often persist for months to years. Mechanism: The exact mechanism of tuberculoma development has not been determined, although multiple theories have been proposed. It is possible that, following an initial tuberculosis infection resulting in bacteremia, foci of granulomatous inflammation may coalesce into a caseous tuberculoma. Pulmonary tuberculomas may arise due to repeated cycles of necrosis and re-encapsulation of foci, or, alternatively, the shrinkage and fusion of encapsulated densities. In regards to CNS tuberculoma, it is thought that Mycobacterium tuberculosis is capable of penetrating the blood-brain barrier after bacterial bacilli induce the release of cytokines by various immunologic cells, leading to an increase in barrier permeability. Similar to pulmonary tuberculomas, small lesions eventually coalesce and undergo both necrosis and enlargement. Signs and symptoms: Symptoms are based on the location of the tuberculoma. Small, scattered lesions may be asymptomatic. Intracranial tuberculomas in children are often infratentorial, occurring near the cerebellum and base of the brain. In this population, symptoms such as headache, fever, focal neurologic findings and seizures have been seen in addition to papilledema with or without meningitis. When the size of a brainstem tuberculoma grows to the point of narrowing the fourth ventricle, obstructive hydrocephalus and its related symptoms can arise. Rupture of tuberculomas adjacent to the arachnoid can lead to arachnoiditis, while rupture near the subarachnoid space or ventricular system can cause meningitis. Diagnosis: The diagnosis of tuberculoma can be challenging, as invasive testing may be required and, occasionally, concomitant malignancy may be present. In children with tuberculoma, CXR is often normal despite a positive TST/IGRA. Diagnosis of brain tuberculoma can be aided with PCR of cerebrospinal fluid, but this is of less utility for quickly diagnosing and treating lesions. When CSF is analyzed in patients with suspected tuberculoma, high protein concentrations and cell counts are often seen. Definitive diagnosis can be made through stereotactic, CT-guided biopsy, with excision required in rare cases. Biopsy is chosen when non-invasive testing has failed to produce a diagnosis, when patients fail to respond to a treatment regimen, in cases of drug-resistant tuberculosis, and in non-compliant patients. Diagnosis: Imaging The appearance of a tuberculoma on imaging can vary according to the composition and age of the mass. They may appear as either non-caseating or solidly caseating lesions. Initially, tuberculomas appear hypodense on computed tomography (CT) scans with significant surrounding edema.
The "target sign" is pathognomonic for tuberculoma on CT, with a nodular ring-enhancing mass and central calcification. The characteristic ring-enhanced appearance is due to the lack of blood supply in the central necrotic core that is visualized with injected contrast. Sometimes a hypodense central area is seen instead of calcification. When considering other potential intracranial masses in a differential diagnosis, such as cysticercosis, pyogenic abscess, and neoplastic lesions, tuberculoma can be identified by its larger size (>2 cm), edema, and irregular border. Diagnosis: Magnetic resonance imaging (MRI) is another useful imaging modality for diagnosing and characterizing tuberculomas, especially solid caseous necrosis, in which 3 zones of varying intensity are seen. Treatment: Tuberculoma is commonly treated with the HRZE drug combination (Isoniazid, Rifampin, Pyrazinamide, Ethambutol) followed by maintenance therapy. Per international guidelines, 9–12 months of medical management is standard. While the majority of tuberculomas resolve in 12–24 months, in patients with multiple or larger lesions prolonged treatment extending beyond two years may be required. In some patients, the release of inflammatory mediators during treatment can cause a paradoxical worsening of symptoms that is treated with anti-inflammatory medications in addition to the standard anti-tuberculosis regimen. Exceptionally large tuberculomas, those exerting a mass effect on the brain, and those which fail to respond to medical management require surgical excision. In some cases, surgical excision is necessary for diagnosis as well as treatment. When intracranial pressure rises in the setting of tuberculoma, removal is considered a surgical emergency. Prognosis: Of patients with a brain tuberculoma treated with an appropriate medication regimen, almost half recover completely. Approximately 10% of those treated fail to recover and succumb to the tuberculoma. Reports issued before the advent of effective anti-tuberculosis therapy showed that, when untreated, 30–50% of tuberculomas enter and remain in a stationary course. Epidemiology: Tuberculomas are most commonly seen in areas where tuberculosis is endemic. In these areas, tuberculomas can account for between 30% and 50% of intracranial masses. India and parts of Asia are two areas where tuberculomas have been noted to be particularly prevalent. They occur most often as solitary, infratentorial lesions in young children. In contrast, lesions are most often supratentorial in adults. Pulmonary tuberculomas are among the most common benign nodules, with 5%–24% of all resected nodules being of tuberculous origin. In areas of lower prevalence, such as the United States, they are most commonly seen in the setting of an acquired immunodeficiency. Intracerebral tuberculomas, specifically, are more frequently observed in patients with an HIV infection.
**Microfoam** Microfoam: Microfoam is finely textured milk used for making espresso-based coffee drinks, particularly those with latte art. It is typically made with the steam wand of an espresso machine, which pumps steam into a pitcher of milk. The opposite of microfoam is macrofoam (also called dry foam, in contrast to the wet foam of microfoam), which has visibly large bubbles, a style of milk commonly used for cappuccinos. Characteristics: Microfoam is shiny, slightly thickened, and should have microscopic, uniform bubbles. It is not as viscous or "foamy" as macrofoam – it is better described as "gooey" and resembles melted marshmallows or wet paint. There have been a variety of names used for this ideal standard, such as "microfoam", "velvet milk", "microbubbles", and so forth. Applications: The decorative application of microfoam is called latte art, which involves making patterns in espresso-based drinks. Microfoam is essential for this, as the microscopic bubbles give definition and stability to the patterns, which are harder to achieve with macrofoam, which disperses more readily. Latte art is traditionally associated with lattes, as the name suggests, but can also be used in cappuccinos and other drinks. Applications: A cappuccino made with microfoam is sometimes called a "wet" cappuccino. However, cappuccinos typically use thicker macrofoam, with a layer of dry foam floating on the top of the drink. Latte macchiato is another drink which generally has separate layers of dry foam and liquid milk, but microfoam is occasionally used instead. Microfoam may also be added to brewed coffee in a café au lait, and faint latte art can be produced. Microfoam may also be used in a steamer (a "coffee-free" cappuccino), though this can instead be made with dry foam. Applications: As it requires a skilled barista to produce microfoam (especially when used for latte art), it is a sign of attention to quality, and a defining characteristic of the third wave of coffee. Procedure: Microfoam is usually created with the steam wand of an espresso machine. This is the quickest method and provides precise control over the timing and depth of air injection. Alternative methods are rarely as effective for producing microfoam, but some are acceptable for macrofoam. These include whisking, shaking, and hand pumps. Dedicated electric milk frothers may also be used, usually consisting of a motorized whisk. When using a steam wand, the volume and type of foam is controlled by the barista during the steaming process, and loosely follows these steps: Air is introduced from the steam wand by immersing only the tip of the wand in the milk. This process is sometimes known as frothing, stretching, or surfing, and usually lasts less than 10 seconds. After the creation of small bubbles, the milk is covered with a soft foam phase which separates from the liquid and floats on top of the milk. Procedure: The second stage involves mixing the incorporated air throughout the milk (mixing or texturing), which is achieved by immersing the steam wand more deeply (typically 20–30 mm). This creates a turbulent vortex or "whirlpool" in the vessel. This step is necessary to integrate the foam which naturally separates from the liquid phase. During this stage, the milk is also heated to about 70 °C (158 °F), at which point the steaming is finished. Procedure: Lastly, the milk is poured from the pitcher into a cup, usually already containing espresso.
Methods for pouring vary widely depending on the type of drink and personal technique (see Latte art § Styles). Notable variations The details of the above method vary between baristas, and are influenced by the machine and the desired outcome. It is common to briefly switch on the steam wand before using it, in order to flush any condensed water from the plumbing and preheat the steam wand itself. The same is often done after steaming milk, to remove milk residue. On machines with pivoting steam wands, the wand should be between 10° and 30° from vertical. However, some baristas tilt the jug relative to the steam wand, whilst keeping the wand almost vertical. If milk has been over-aerated (i.e. the froth is too thick), it may be groomed by running the tip of a spoon through it. As a supplementary method of mixing, a barista may swirl the pitcher just before pouring it. This method is also used to assess whether grooming is necessary (see above), and is intended to delay separation of the milk. Procedure: In order to remove any large bubbles from the surface, some baristas tap the jug on a bench before pouring. Occasionally a barista may use less-than-full pressure from the steam wand if they are steaming a very small amount of milk (variable pressure is usually only a feature on professional machines). It is also possible to create microfoam for latte art by using a french press, moving the plunger rapidly to aerate the milk. This method can, with practice, yield close to the same consistency as using a steam wand. Procedure: A portable stovetop steamer is also a viable option for generating microfoam. Chemical and physical properties: The basic requirements for formation of foam are an abundance of gas, water, a surfactant, and energy. The steam wand of an espresso machine supplies energy, in the form of heat, and gas, in the form of steam. The other two components, water and surfactants, are naturally occurring ingredients of milk. Varying the balance of these factors affects the size of bubbles, the foam dissipation rate, and the volume of foam. Microfoam may be represented simply as a metastable liquid-gas colloid of milk and air, consisting of gaseous bubbles suspended in the liquid milk. In reality, the suspension is more complex because milk consists of two different colloids itself - an emulsion of fat and a sol of protein. In fact, these two colloids are what enable milk to form such a mechanically strong foam which does not collapse under its own weight. The interaction between fat and air creates a structure of microscopic bubbles strong enough to support itself, and even be submerged (i.e. suspended within the liquid milk). Chemical and physical properties: Interaction of fat and protein Like in whipped cream, air bubbles are initially stabilized by the protein β-casein, prior to their adsorption of fat. This adsorption causes destabilization of the bubbles, because the fat molecules are amphiphilic (i.e. they have polar and non-polar ends), competing with protein molecules which are more conducive to bubbles. The melting of milk fat occurs around 40 °C (104 °F), so milk at higher temperatures is not significantly affected by this problem. At higher temperatures, the protein β-lactoglobulin enables the foam to maintain its structure and is the prime factor in the formation of foam. This can be shown trivially by adding various quantities of skim milk powder, which contains a high concentration of β-lactoglobulin.
Chemical and physical properties: Since fat reduces the likelihood of bonding at the surface of bubbles, it follows that fat content in milk is inversely proportional to its frothing potential. Whilst this is true, an excessive fat constituent also enables larger bubbles, leading to macrofoam rather than microfoam. As a result, most baristas prefer to use whole milk rather than skim milk, due to its tendency to form smaller, more homogeneous bubbles. Chemical and physical properties: Effect of temperature Several studies have confirmed that the foamability of pasteurized whole milk, measured by the volume of foam produced, reaches a minimum at 25 °C (77 °F). This value is higher for raw milk - around 35 °C (95 °F). The dip in foamability occurs due to fat globules consisting of both solid and liquid phases at this temperature. Solid fat crystals in a globule may penetrate the film which separates them from the surrounding air, causing spreading of the membrane material which is then adsorbed onto air bubbles. At temperatures above the minimum foamability temperature, the volume of foam steadily increases, which has been attributed to the trends of decreasing viscosity and surface tension with temperature. If milk is heated above 82 °C (180 °F), it becomes scalded and its texture is compromised. Microfoam cannot exist in overheated milk due to the missing tertiary structure in the protein. When milk is scalded, the suspended protein casein becomes denatured and cannot maintain the intermolecular bonds necessary for microfoam. The stability of milk foam, measured by the half-life of its volume, is also greatly influenced by temperature. For pasteurized whole milk, stability increases with temperature up to about 40 °C (104 °F), then rises steeply until 60 °C (140 °F), where it starts steadily decreasing. Skim milk generally produces more stable foam, owing to its lower concentration of micellar casein. For regular pasteurized, homogenized whole milk, steamed at 70 °C (158 °F), the half-life is roughly 150 minutes. However, microfoam tends to separate into layers more quickly than it reduces in volume, so baristas usually steam milk immediately before serving it. This is especially important when serving latte art which may degrade within minutes. Chemical and physical properties: Sound When using a steam wand, a slight but audible hissing sound occurs when the air enters the milk, mainly due to microscopic cavitation. A louder screaming sound may be heard if the steam orifice becomes blocked or the machine cannot pump enough air.
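To put the quoted half-life in concrete terms, if the decay of foam volume is idealised as exponential (an assumption for illustration; as noted above, real microfoam also separates into layers rather than simply shrinking), the volume remaining after time t is:

```latex
% Idealised exponential-decay reading of the quoted ~150 min half-life.
V(t) = V_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad t_{1/2} \approx 150\ \text{min}
```

Under this idealisation, roughly three-quarters of the original foam volume would remain after 60 minutes, and about one quarter after 300 minutes.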
**ISO 9897** ISO 9897: ISO 9897 is an ISO international standard for electronic interchange relating to freight containers. It is also known as CEDEX as an acronym of Container Equipment Data Exchange, and "is intended for business entities for use in communications relating to freight container transactions, in particular container Maintenance & Repair estimates and approvals and repair status messages".
**Artificial antigen presenting cells** Artificial antigen presenting cells: Artificial antigen presenting cells (aAPCs) are engineered platforms for T-cell activation. aAPCs are used as a new technology and approach to cancer immunotherapy. Immunotherapy aims to utilize the body's own defense mechanism—the immune system—to recognize mutated cancer cells and to kill them the way the immune system would recognize and kill a virus or other micro-organisms causing infectious diseases. Antigen presenting cells are the sentinels of the immune system and patrol the body for pathogens. When they encounter foreign pathogens, the antigen presenting cells activate the T cells ("the soldiers of the immune system") by presenting specific cell surface molecules (epitopes) and delivering stimulatory signals that alert the T cells that foreign material is present in the body. aAPCs are synthetic versions of these sentinel cells and are made by attaching the specific T-cell stimulating signals to various macro- and micro-scale biocompatible surfaces such as micron-sized beads. This can potentially reduce the cost while allowing control over generating large numbers of functional pathogen-specific T cells for therapy. Activated and stimulated T cells can be studied in this biomimetic context and used for adoptive transfer as an immunotherapy. Essential components of an aAPC: Signal 1: Modeled after APCs, aAPCs need to have at least two signals to stimulate antigen-specific T cells. The first signal is the major histocompatibility complex (MHC), which in humans is also called the human leukocyte antigen (HLA). This is the molecule which is loaded with the specific antigen. MHC class I molecules are found on all cells and stimulate cytotoxic T cells (CD8 cells), and MHC class II molecules are found on APCs and stimulate helper T cells (CD4 cells). It is the specific antigen or epitope loaded into the MHC that determines the antigen specificity. The peptide-loaded MHC engages with the cognate T cell receptor (TCR) found on the T cells. Essential components of an aAPC: Signal 2: In addition to Signal 1, T cells need another signal to become activated; this is provided by co-stimulatory molecules such as the proteins CD80 (B7.1) or CD86 (B7.2), although other additional co-stimulation molecules have been identified. When Signal 2 is not expressed but T cells receive Signal 1, the antigen-specific T cells become anergic and do not perform effector functions. Essential components of an aAPC: Signal 3: Signal 3 is the aAPC secretion of stimulatory cytokines such as IL-2, which enhances T cell stimulation, though this is not required for T cell activation. Types of aAPCs: Cell-based aAPCs have been produced by transfecting murine fibroblasts to express specific peptide-loaded HLA molecules with the co-stimulatory signal B7.1 and the cell adhesion molecules ICAM-1 and LFA-3. Many microparticle systems have been developed, as microparticles are physiologically similar in size to cells. Microparticle curvature and shape have also been shown to play an important role in effective T cell stimulation. Nanoparticles have also been used. Nanoparticles have the additional advantage of enhanced transport once injected into the body as compared to microparticles. Nanoparticles are able to move through the porous extracellular matrix much more easily and reach the lymph nodes where the T cells reside.
Also, iron oxide nanoparticles have been used to take advantage of their superparamagnetic properties and to cluster both signals to enhance T cell stimulation. Materials which have been used include poly(glycolic acid), poly(lactic-co-glycolic acid), iron oxide, liposomes, lipid bilayers, sepharose, polystyrene and polyisocyanopeptides. Types of aAPCs: Lipid-based aAPC: In natural systems, the dynamic lipid bilayer is crucial for molecular interactions. Lipid bilayer-based particles with a fluid membrane have been developed as aAPCs to replicate the interactions between natural APCs and T cells. For instance, it has been observed that in vitro CD4+ T cell activation by MHC-containing liposomes results in T cell proliferation and IL-2 release. This showed how the lipid membrane functions as a support structure for antigen presentation. Even in the absence of T cells, natural APCs have been found to precluster antigens. Researchers have created reconstituted liposomes with membrane microdomains enriched with epitope/MHC complexes to promote T cell proliferation. A higher level of T cell activation is induced by the preclustering of MHC molecules. Types of aAPCs: Researchers have also used solid particles as a core for the lipid bilayer to increase the stability of the liposomes. These are known as supported lipid bilayers (SLBs); nanoporous silica cores are one example. Types of aAPCs: Polymeric aAPC: A variety of polymers have been added into aAPC systems, including biodegradable PLGA (poly(lactic-co-glycolic acid)) and non-biodegradable sepharose or polystyrene beads. While IL-2 or other soluble molecules can be progressively released from within the aAPC, immunomodulatory substances (recognition and co-stimulatory ligands) can be attached to the surface of polymeric particles. The size and shape of microbeads are important parameters for T cell activation. The optimal size is 4 to 5 µm and the optimal shape is non-spherical or ellipsoid, like natural APCs, to increase the contact area of the particles with the T cells. Types of aAPCs: Inorganic aAPC: Superparamagnetic particles can be used as aAPCs for ex vivo T cell expansion. These particles can be covalently bound to stimulatory ligands. Another type of aAPC is the high-surface-area carbon nanotube coated with ligands; these nanotubes showed higher T cell activation and IL-2 secretion than other high-surface-area particles. Uses: aAPCs remove the need to harvest patient-specific APCs such as dendritic cells (DCs) and the process of activating the DCs in the stimulation of antigen-specific T cells. As specific cancer antigens have been discovered, these antigens can be loaded onto aAPCs to stimulate and expand tumor-specific cytotoxic T cells. These T cells can then be re-infused or adoptively transferred into the patient for effective cancer therapy. This technology is currently being tested within laboratories for potential use in cancer therapy and to study the mechanisms of endogenous APC signaling.
**ThinkPad P series** ThinkPad P series: The ThinkPad P series line of laptops is produced by Lenovo and was introduced by the company as a successor to the previous ThinkPad W series. With 15.6" and 17.3" screens, the ThinkPad P series saw the reintroduction of physically large laptops into the ThinkPad line. Marketed largely as portable workstations, many P series laptops can be configured with high-end quad-core, hexa-core or octo-core Intel processors as well as ECC memory (only with Xeon Processors) and a discrete Nvidia Quadro GPU. The P series offers ISV certifications from software vendors such as Adobe and Autodesk for various CAD software. The P52 and P72 models are the last current Lenovo laptops with a dedicated magnesium structural frame. ThinkPad P series: All 15" and 17" models have a standard 6-row ThinkPad Precision Keyboard (with Numeric Keypad and optional backlight), TrackPoint and touchpad, and optional fingerprint reader. The HQ processor-based models (such as a P50/P70 and above) use only 170 W AC power adapters, or optional 230 W adapters for the larger 17" configurations. Models: Battery configuration First Generation The first generation comes with a variety of "high-end" options such as Intel Xeon processors, 4K IPS screens and DDR4 RAM up to 64 GB. 1080p screens and Intel Core Series CPUs come standard along with PCIe SSDs. The P Series introduced a cooling system known as FLEX that features two fans connected by a heat pipe and located near the CPU and GPU. A three-button touchpad is included. Models: P40 Yoga P40 Yoga is a version of the ThinkPad Yoga 460 with Nvidia Quadro graphics. P50s ThinkPad P50s is an update of ThinkPad W550s, focused on mobility. Its chassis is based on that of the T560. Models: P50 The ThinkPad P50, while having a 15-inch display, shared little in design with the W541 which it replaced. Its ports had been re-arranged, it was slightly thinner than its predecessor, and it reintroduced indicator lights for hard drive activity. It supports up to three internal storage devices and has a single USB Type-C Thunderbolt 3 port while also featuring Mini DisplayPort and HDMI connections. Weighing 2.5 kilograms and having thickness of 2.6 centimeters, the P50 was lighter than previous W series laptops. Models: P70 The ThinkPad P70 is the successor to the W701. It has a 17-inch display, weighs 3.4 kilograms and is 3.1 centimeters thick. It supports up to four internal storage devices and includes two USB Type-C Thunderbolt 3 ports. Second Generation P51 A minor update to the P50, which included the updated CM238 chipset. P51s The P51s chassis was based on the T570 chassis. The CPU and integrated chipset was updated to Kaby Lake-U. P71 A minor update to the P70, including the updated CM238 chipset. Third generation P1 The ThinkPad P1 was based on the first generation of ThinkPad X1 Extreme. It features Intel Xeon CPUs and Nvidia Quadro graphics. P52 The P52 was a redesign of the P51, which introduced Coffee Lake CPUs, all with 6 cores and 12 threads, the CM246 chipset, and Nvidia Pascal GPUs. It removed the mechanical docking port and ExpressCard slot, and features a narrower keyboard which is present on other ThinkPads. P52s Based on the ThinkPad T580, the P52s features a camera shutter, lateral instead of bottom-mounted docking port, and 8th Gen Core i5/i7 low power CPUs. The P52s includes quad-core Kaby Lake-R 15W CPUs and Nvidia Quadro P500 GPUs. P72 The P72 was a redesign of the P71, with features similar to those of the P52. 
It also features the narrower keyboard of the P52, and is the first 17" ThinkPad with a soldered GPU. Fourth generation P1 Gen 2 The P1 Gen 2 was an update to the P1 which features Intel 9th Gen Coffee Lake Refresh Core i5/i7/i9 and Xeon E mobile Processors and Nvidia Quadro T series GPUs. P43s The ThinkPad P43s has a new 14 Inch design. It features Intel 8th Gen Coffee Lake U-Series Processors and Nvidia Quadro P Series graphics. P53 Based on the ThinkPad P52, the P53 features a 4K UHD OLED Touch display, WiFi 6, Intel 9th Gen Coffee Lake Refresh Core i5/i7/i9 and Xeon E mobile Processors and Nvidia Quadro RTX Turing GPUs. P53s The P53s is an update to the P52s, which features Intel 8th Gen Coffee Lake U-Series Processors, Nvidia Quadro P520 Graphics and up to 4k UHD (3840 x 2160) display, available with Dolby Vision high dynamic range (HDR) technology. P73 The P73 was a redesign of the P72, with hardware upgrades similar to those of the P53. The P73 features new Intel 9th Gen Coffee Lake Refresh CPUs and Nvidia Quadro RTX Turing GPUs. 2020 - Fifth generation P14s Gen 1 (Intel or AMD) P1 Gen 3 Features NVidia graphics and OLED screen. Models: P15 Gen 1 P15s Gen 1 P15v Gen 1 P17 Gen 1 2021 - Sixth generation P14s Gen 2 (Intel or AMD) The second generation P14s is just an iterative update of the previous model with new AMD Ryzen 5xxx series and Intel 11th gen CPUs. The main difference between the Intel and AMD version is the Thunderbolt with USB 3.1 Gen 1 capabilities of the USB-C ports, which (including the USB-A) are USB 3.2 Gen 2 (no TB) on the AMD model, but also support DisplayPort alternate mode. The AMD and Intel model now also offer the same display panel options, while the aluminium case option is still exclusive to the Intel variant. Models: P15 Gen 2 P15s Gen 2 P15v Gen 2 P1 Gen 4 The OLED screen option was removed. P17 Gen 2 2022 - Seventh generation P14s Gen 3 (AMD or Intel) P15v Gen 3 (AMD or Intel) P1 Gen 5 Features NFC, liquid metal thermal paste, and 16:10 IPS screen options. P16 Gen 1 Replaces both P15 & P17 line models with a single 16:10, 16" screen size in 3 resolutions & a non-touch/multi-touch option for the UHD. P16s Gen 1 (AMD or Intel) Replaces the P15s line with a 16" screen.
**50 miles race walk** 50 miles race walk: The 50-mile race walk is a racewalking event. The event is competed as a road race. See Kennedy march for how the 50-mile walk started as a fitness challenge. Athletes must always keep in contact with the ground, and the supporting leg must remain straight until the raised leg passes it. 50 miles is 80.47 kilometers. U.S. records: In 1966, Israeli Shaul Ladany broke the United States record in the 50-mile walk, which had stood since 1878 and was at the time the oldest U.S. track record. World bests: The men's world best for the 50-mile race walk is held by Ladany, who walked 7:23:50 in 1972 in New Jersey, shattering the world mark that had stood since 1935.
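For reference, the quoted metric distance follows directly from the definition of the international mile (1 mi = 1.609344 km):

\[
50\ \text{mi} \times 1.609344\ \tfrac{\text{km}}{\text{mi}} = 80.467\ \text{km} \approx 80.47\ \text{km}
\]

At that distance, Ladany's 7:23:50 world best works out to an average speed of roughly 10.9 km/h, or about 5 minutes 31 seconds per kilometre.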
**A Mathematical Theory of Natural and Artificial Selection** A Mathematical Theory of Natural and Artificial Selection: A Mathematical Theory of Natural and Artificial Selection is the title of a series of scientific papers by the British population geneticist J.B.S. Haldane, published between 1924 and 1934. Haldane outlines the first mathematical models for many cases of evolution due to selection, an important concept in the modern synthesis of Darwin's theory with Mendelian genetics. Overview: The papers were published in ten parts over ten years in three different journals.
**Archaeal initiation factors** Archaeal initiation factors: Archaeal initiation factors are proteins that are used during the translation step of protein synthesis in archaea. The principal functions these proteins perform include ribosome RNA/mRNA recognition, delivery of the initiator Met-tRNAiMet (methionine-bound tRNAi) to the 40S ribosome, and proofreading of the initiation complex. Conservation of archaeal initiation factors: Of the three domains of life, archaea, eukaryotes, and bacteria, the number of archaeal TIFs lies between that of eukaryotes and bacteria; eukaryotes have the largest number of TIFs, and bacteria, having streamlined the process, have only three TIFs. Not only does the number of archaeal TIFs fall between the bacterial and eukaryotic counts, but archaeal initiation factors also show traits of both eukaryotic and prokaryotic initiation factors. Two core TIFs, IF1/IF1A and IF2/IF5B, are conserved across the three domains of life. There is also a semi-universal TIF, SUI1, found in all archaea and eukaryotes but in only certain bacterial species (as YciH). In archaea and eukaryotes, this factor helps ensure correct identification of the initiation codon, while its function in bacteria is unknown. Between archaea and eukaryotes alone, the archaeal a/eIF2 (trimer) and aIF6 are conserved in eukaryotes as the eIF2 (trimer) and eIF6 TIFs. List of initiation factors: aIF1: SUI1 (eIF1) homolog. aIF1A: IF1/eIF1A homolog. Plays a role in occupying the ribosomal A site, helping the unambiguous placement of tRNAi in the P site of the large ribosomal subunit. aIF2: Trimeric, eIF2 homolog. Binds to the 40S small subunit of the ribosome to help guide the start of translation of mRNA into proteins. Can substitute for eIF2. aIF5A: EF-P/eIF5A homolog. Contains hypusine, just like the eukaryotic one. Actually an elongation factor. aIF5B: IF2/eIF5B homolog. Joins the ribosomal subunits (small and large) to form the complete single (monomeric) mRNA-bound ribosome in the late stages of initiation. aIF6: eIF6 homolog. Keeps the two ribosomal subunits apart.
**Chloroethylclonidine** Chloroethylclonidine: Chloroethylclonidine is an irreversible agonist for adrenergic receptors, in particular alpha1B, D, C and alpha2A/D-subtypes.
**Josephson vortex** Josephson vortex: In superconductivity, a Josephson vortex (after Brian Josephson from Cambridge University) is a quantum vortex of supercurrents in a Josephson junction (see Josephson effect). The supercurrents circulate around the vortex center, which is situated inside the Josephson barrier, unlike Abrikosov vortices in type-II superconductors, which are located in the superconducting condensate. Josephson vortex: Abrikosov vortices (after Alexei Abrikosov) in superconductors are characterized by normal cores where the superconducting condensate is destroyed on a scale of the superconducting coherence length ξ (typically 5-100 nm). The cores of Josephson vortices are more complex and depend on the physical nature of the barrier. In Superconductor-Normal Metal-Superconductor (SNS) Josephson junctions there exist measurable superconducting correlations induced in the N-barrier by the proximity effect from the two neighbouring superconducting electrodes. Similarly to Abrikosov vortices in superconductors, Josephson vortices in SNS Josephson junctions are characterized by cores in which the correlations are suppressed by destructive quantum interference and the normal state is recovered. However, unlike Abrikosov cores, which have a size ~ξ, the size of the Josephson ones is not defined by microscopic parameters only. Rather, it depends on the supercurrents circulating in the superconducting electrodes, the applied magnetic field, etc. In Superconductor-Insulator-Superconductor (SIS) Josephson tunnel junctions the cores are not expected to have a specific spectral signature, and they have not been observed. Josephson vortex: Usually the Josephson vortex's supercurrent loops create a magnetic flux which, in long enough Josephson junctions, equals Φ0, a single flux quantum. Yet fractional vortices may also exist in Superconductor-Ferromagnet-Superconductor Josephson junctions or in junctions in which superconducting phase discontinuities are present. It was demonstrated by Hilgenkamp et al. that Josephson vortices in the so-called 0-π long Josephson junctions can also carry half of the flux quantum, and are called semifluxons. It has been shown that under certain conditions a propagating Josephson vortex can initiate another Josephson vortex. This effect is called flux cloning (or fluxon cloning). Although a second vortex appears, this does not violate the conservation of the single flux quantum.
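For reference, the single flux quantum Φ0 carried by such a vortex is set by fundamental constants, Planck's constant h and the elementary charge e:

\[
\Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15}\ \text{Wb},
\]

so a semifluxon in a 0-π junction carries a flux of Φ0/2, about 1.03 × 10⁻¹⁵ Wb.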
**Peloton (supercomputer)** Peloton (supercomputer): The Peloton supercomputer procurement is a program at the Lawrence Livermore National Laboratory intended to provide tera-FLOP computing capability using commodity Scalable Units (SUs). The Peloton RFP defines the system configurations. Peloton (supercomputer): Appro was awarded the contract for Peloton, which comprises several machines. All of the machines run the CHAOS variant of Red Hat Enterprise Linux and the Moab resource management system. Under the project management of John Lee, the team, working with Synnex, Voltaire, Supermicro and other suppliers, was able to dramatically reduce the time it took to go from starting the cluster build to having hardware in production at Livermore. In particular, the team went from having four SUs on the floor on a Thursday, to bringing in two more SUs for the final cluster, to having all of them wired up, burned in, and running Linpack by Saturday.
**Linea terminalis** Linea terminalis: The linea terminalis or innominate line consists of the pubic crest, the pectineal line (pecten pubis), the arcuate line, the sacral ala, and the sacral promontory. It is the pelvic brim, which is the edge of the pelvic inlet. The pelvic inlet is typically used to divide the abdominopelvic cavity into an abdominal cavity (above the inlet) and a pelvic cavity (below the inlet). Sometimes, the pelvic cavity is considered to extend above the pelvic inlet, and in this case the pelvic inlet is used to divide the pelvic cavity into a false pelvis (above the inlet) and a true pelvis (below the inlet).
**Disaster Girl** Disaster Girl: "Disaster Girl" is a name given to a photograph of a young girl staring at the camera with a structure fire behind her. The girl in the photo, Zoë Roth, was four years old when the photo was taken in 2005. A non-fungible token (NFT) based on the photo sold for $470,000 at auction on April 29, 2021. Photograph: The photograph depicts a four-year-old Zoë Roth overlooking a structure fire while facing the camera. Roth's expression, described by The New York Times as "a devilish smirk" and "a knowing look in her eyes", jokingly implies that she was responsible for the fire. History: Conception When Zoë Roth was four years old, her family went to view a burning house that had been subject to a controlled burn in Mebane, North Carolina, United States. The Roth family lived near a fire station in Mebane, North Carolina, and as they watched a house being burned for training, Roth's father, an amateur photographer, took her picture. Her father entered it into a photo contest in 2007 and it won. The photo became famous in 2008 when it won an Emotion Capture contest in JPG magazine. Roth had given permission to use the image in educational material, but the photo had been used hundreds of times for various purposes without the Roth family being in control. History: Use as an internet meme Disaster Girl spread as an internet meme, with many editing the photo to depict Roth overlooking historic disasters, such as the extinction of the dinosaurs or the sinking of the Titanic. Roth appreciated the spread of the meme, saying that she loves "seeing how creative people are", and that she is "super grateful for the entire experience" of being the subject of a viral meme. History: Non-fungible token auction After receiving an email in February 2021 suggesting she sell the meme as a non-fungible token (NFT) for as much as "six figures", Roth decided to sell an NFT of the photo. On April 17, 2021, Roth sold the NFT for 180 Ether, or US$486,716, to a collector identified only as @3FMusic. The Roth family retained copyright over the work, as well as an entitlement to 10 percent of proceeds when the NFT is resold. According to Roth, she sold the photograph to take control over its spread, after consulting Kyle Craven / Bad Luck Brian and Laney Griner, the mother of the child depicted in the Success Kid meme. Reception: Marie Fazio of The New York Times described Disaster Girl as "a vital part of [internet] meme canon", considering it to be part of the internet meme "hall of fame", alongside the likes of Bad Luck Brian and Success Kid.
**Japanese sound symbolism** Japanese sound symbolism: The Japanese language has a large inventory of sound symbolic or mimetic words, known in linguistics as ideophones. Such words are found in written as well as spoken Japanese. Known popularly as onomatopoeia, these words do not just imitate sounds but also cover a much wider range of meanings; indeed, many sound-symbolic words in Japanese are for things that make no noise originally, most clearly demonstrated by 'silently' (しーんと, shīnto), not to be confused with the religion Shintō. Categories: The sound-symbolic words of Japanese can be classified into four main categories: Animate phonomime (擬声語, giseigo) words that mimic sounds made by living things, like a dog's bark (wan-wan). Inanimate phonomime (擬音語, giongo) words that mimic sounds made by inanimate objects, like wind blowing or rain falling (zā-zā). Phenomime (擬態語, gitaigo) words that depict states, conditions, or manners of the external world (non-auditory senses), such as "damp" or "stealthily". Psychomime (擬情語, gijōgo) words that depict psychological states or bodily feelings.These divisions are not always drawn: sound-symbolism may be referred to generally as onomatopoeia (though strictly this refers to imitative sounds, phonomimes); phonomimes may not be distinguished as animate/inanimate, both being referred to as giseigo; and both phenomimes and psychomimes may be referred to as gitaigo. Categories: In Japanese grammar, sound-symbolic words primarily function as adverbs, though they can also function as verbs (verbal adverbs) with the auxiliary verb suru (する, "do"), often in the continuous/progressive form shiteiru (している, "doing"), and as adjectives (participle) with the perfective form of this verb shita (した, "done"). Just like ideophones in many other languages, they are often introduced by a quotative complementizer to (と). Most sound symbolic words can be applied to only a handful of verbs or adjectives. In the examples below, the classified verb or adjective is placed in square brackets. Categories: * Unlike the other examples, doki doki is an onomatopoeic word and mimics the sound of two heartbeats. Other types: In their Dictionary of Basic Japanese Grammar, Seiichi Makino and Michio Tsutsui point out several other types of sound symbolism in Japanese, that relate phonemes and psychological states. For example, the nasal sound [n] gives a more personal and speaker-oriented impression than the velars [k] and [ɡ]; this contrast can be easily noticed in pairs of synonyms such as node (ので) and kara (から) which both mean because, but with the first being perceived as more subjective. This relationship can be correlated with phenomimes containing nasal and velar sounds: While phenomimes containing nasals give the feeling of tactuality and warmth, those containing velars tend to represent hardness, sharpness, and suddenness. Other types: Similarly, i-type adjectives that contain the fricative [ɕ] in the group shi tend to represent human emotive states, such as in the words kanashii (悲しい, "sad"), sabishii (寂しい, "lonely"), ureshii (嬉しい, "happy"), and tanoshii (楽しい, "enjoyable"). This too is correlated with those phenomimes and psychomimes containing the same fricative sound, for example shitoshito to furu (しとしとと降る, "to rain / snow quietly") and shun to suru (しゅんとする, "to be dispirited"). 
Other types: The use of gemination can create a more emphatic or emotive version of a word, as in the following pairs of words: pitari / pittari (ぴたり / ぴったり, "tightly"), yahari / yappari (やはり / やっぱり, "as expected"), hanashi / ppanashi (放し / っ放し, "leaving, having left [something] in a particular state"), and many others.
**Vespa GTS** Vespa GTS: The Vespa GTS is a range of scooters currently manufactured by Piaggio under the Vespa brand. GTS stands for Granturismo Sport, while 250ie denotes the displacement and electronic fuel injection. History: The GTS 250ie is a development and restyling of the Vespa GT200. Along with the engine changes are digital gauges, a tachometer, chrome fender crest, chrome horn grill, restyled rear fairings/taillight, chrome foldable rack, redesigned centerstand, non-painted front suspension cover, and a restyled seat. Where the GT200 used the Piaggio 'Leader' engine, the GTS models use variations of the new architecture of the 'Quasar' engine. The engine has a displacement of 244 cc. Along with electronic fuel injection, the water-cooled motor also has four valves and a single overhead camshaft. The increased engine capacity, combined with reduced emissions, gives a top speed of 140 km/h (87 mph). In 2008, Piaggio released a 278 cc special edition of the GTS, named the Vespa GTS300ie Super. Its larger engine gave 22 bhp (16 kW) and 16.5 lb⋅ft (22.4 N⋅m) of torque, enhancing the performance of the Vespa GTS. Comparison of the specifications and performance, on paper, between the GTS250ie and the GTS300ie Super does not show significant differences in either power or top speed. The GTS300ie does have an advantage in acceleration and in torque, which peaks lower down the rev range. The GTS300ie Super was restyled over the original GT and GTS models to give a mildly more sporty look, and was originally available only in either black or white. History: The restyling of the GTS300ie saw the replacement of the GTS250's fold-down tail rack with a simple grab handle/loop. However, the fold-down tail rack is still available as an optional extra, to allow the fitment of the tail-mounted storage box. Additional changes included: The small front running light was deleted, with cosmetic detail changes to the 'horn' panel. History: The headlight received black detailing. The instrumentation on the dash returned to analogue dials (as opposed to the part-digital display of the GTS250). The front (suspension) spring is coloured red. The right-hand body panel features (cosmetic) vents, to reflect earlier (air-cooled) Vespa designs. It is considered to have been designed in reply to the Honda SH300i, which was designed and built by Honda as an upmarket, high-powered scooter aimed at outselling the original Vespa GTS in Europe. History: In 2019, Vespa released an updated version of the GTS, the Vespa 300i GTS HPE. The main change for this update was a new High Performance Engine (HPE) with 24 HP (17.5 kW) at 8,250 rpm and 26 Nm of torque at 5,250 rpm. Engine displacement remained the same at 278 cc. The new engine can be recognized by the smooth cover on the left and the oil dipstick on the right. The gearbox oil dipstick remains on the left rear. In terms of styling, the front of the scooter has significantly changed. The bodywork has a more noticeable line below the daytime running lights. Also, the 'tie' has now been extended to the full length of the front. Rear styling remains the same. The scooter now has full LED lights, daytime running lights and indicators in the front, and a revised rear light.
**IBM System/390** IBM System/390: The IBM System/390 is a discontinued mainframe product family implementing ESA/390, the fifth generation of the System/360 instruction set architecture. The first computers to use the ESA/390 were the Enterprise System/9000 (ES/9000) family, which were introduced in 1990. These were followed by the 9672, Multiprise, and Integrated Server families of System/390 in 1994–1999, using CMOS microprocessors. The ESA/390 succeeded ESA/370, used in the Enhanced 3090 and 4381 "E" models, and the System/370 architecture last used in the IBM 9370 low-end mainframe. ESA/390 was succeeded by the 64-bit z/Architecture in 2000. History: On September 5, 1990, IBM published a group of hardware and software announcements, two of which included overviews of three announcements: System/390 (S/390), as in 360 for the 1960s and 370 for the 1970s. Enterprise System/9000 (ES/9000). History: Enterprise Systems Architecture/390 (ESA/390) was IBM's last 31-bit-address/32-bit-data mainframe computing design, copied by Amdahl, Hitachi, and Fujitsu among other competitors. It was the successor of ESA/370 and, in turn, was succeeded by the 64-bit z/Architecture in 2000. Among other things, ESA/390 added fiber optic channels, known as Enterprise Systems Connection (ESCON) channels, to the parallel (Bus and Tag) channels of ESA/370. Despite the fact that IBM mentioned the 9000 family first in some of the day's announcements, it was clear "by the end of the day" that it was "for System/390," although it was a shortened name, S/390, that was placed on some of the actual "boxes" later shipped. The ES/9000 line includes rack-mounted models, free-standing air-cooled models and water-cooled models. The low-end models were substantially less expensive than the 3090 or 4381 previously needed to run MVS/ESA, and could also run VM/ESA and VSE/ESA, which IBM announced at the same time. History: IBM periodically added named features to ESA/390 in conjunction with new processors; the ESA/390 Principles of Operation manual identifies them only by name, not by the processors supporting them. Machines supporting the architecture were sold under the brand System/390 (S/390) from September 1990. The 9672 implementations of System/390 were the first high-end IBM mainframe architecture implemented with CMOS CPU electronics rather than the traditional bipolar logic. The IBM z13 was the last z Systems server to support running an operating system in ESA/390 architecture mode. However, all 24-bit and 31-bit problem-state application programs originally written to run on the ESA/390 architecture readily run unaffected by this change. S/390 computers: ES/9000 Eighteen models were announced September 5, 1990 for the ES/9000 in three form factors: the water-cooled 9021 to succeed the IBM 3090, and the air-cooled standalone 9121 and rack-mounted 9221 to succeed the IBM 4381 and 9370 respectively. The largest announced model had a 100-fold performance advantage over the smallest model, and the clock frequency ranged from 67-111 MHz (15-9 ns) in the 9021 and 67 MHz in the 9121 to 26-33 MHz (38-30 ns) in the 9221. The 9221 models 120, 130 and 150 were initially available only with the "System/370 Base Option"; the "ESA Option" shipped in July 1991. The 9221 processors were made of VLSI CMOS chips designed in Böblingen, Germany, whence the 9672 line later originated.
S/390 computers: The lower 6 of the 8 water-cooled models (codenamed H0) were immediately available, but used the same processor as the 3090-J, still at the 69 MHz (14.5 ns) maximum frequency and thus with unchanged performance. Those models' main difference from the 3090-J was the optional addition of ESCON, Sysplex and the Integrated Cryptographic Feature. Only the models 900 and 820 had an all-new design (codenamed H2), featuring private split I+D 128+128 KB L1 caches and a shared 4 MB L2 cache (2 MB per side) with 11-cycle latency, more direct interconnects between the processors, multi-level TLBs, a branch target buffer and a 111 MHz (9 ns) clock frequency. These were the first models with out-of-order execution since the System/370-195 of 1973. However, unlike the old S/360-91-derived systems, the models 900 and 820 had full out-of-order execution for both integer and floating-point units, with precise exception handling, and a fully superscalar pipeline. Models 820 and 900 shipped to customers in September 1991, a year later than the models with older technology. Later these new technologies were used in models 520, 640, 660, 740 and 860. All three lines received additions and upgrades until 1993–1994. In February 1993 an 8-processor 141 MHz (7.1 ns) model 982 became available, with models 972, 962, 952, 942, 941, 831, 822, 821 and 711 following in March. These models, codenamed H5, had double the L2 cache and 30% higher per-processor performance than the H2 line, and added hardware data compression. The compression was also included in the new, 50% faster models of the 9121. In April 1994, alongside the CMOS-based new 9672 series and improved 9221 models, IBM also announced their ultimate bipolar model, the 10-processor model 9X2 rated at 468 MIPS, to become available in October. S/390 computers: Models: ES/9000 features: ESCON fiber optic channels; Sysplex for synchronizing the systems to ease management; Vector Facility: up to one vector processor per Central Processor, available on the 9021 and 9121, first used on the 3090 to replace the IBM 3838 array processor announced in 1976 for System/370; up to one Integrated Cryptographic Feature (ICRF) per side, available on the 9021 for accelerating encryption, succeeding the 3848 Cryptographic Unit. (Each Central Processor accommodates one coprocessor at a time; the combined number of installed Vector Facilities and ICRFs cannot exceed the number of Central Processors.) The new models of the 9021 and 9121 from 1993 feature data compression hardware. S/390 computers: Logical partitioning Previously available only on the IBM 3090, Logical Partitions (LPARs) are a standard feature of the ES/9000 processors, whereby IBM's Processor Resource/Systems Manager (PR/SM) hypervisor allows different operating systems to run concurrently in separate logical partitions (LPARs), with a high degree of isolation. Initially, 7 partitions per disconnected side were supported. In December 1992 the LPAR capacity of the H2 (520-based) models was increased to 10 per disconnected side. For example, a two-processor model 660 could now support up to 20 partitions instead of 14, if the two sides (each with one processor) are electrically isolated. This was introduced as part of IBM's moving towards "lights-out" operation and increased control of multiple system configurations.
S/390 computers: 9672 Launched in 1994, first as the "Parallel Transaction Server" (alongside the 9673 "Parallel Query Server") and subsumed by the "Parallel Enterprise Server" launched later in the year, the six generations of the IBM 9672 machines transitioned IBM's mainframes fully to CMOS microprocessors, as, by a strategic decision, no more ES/9000 models (bipolar-based except the 9221) would be released after 1994. The initial generations of the 9672 were slower than the largest ES/9000 sold in parallel, but the fifth and sixth generations were the most powerful and capable ESA/390 machines built by IBM. S/390 computers: In the course of the generations, CPUs added more instructions and increased performance. The first three generations (G1 to G3) focused on low cost. The 4th generation was aimed at matching the performance of the last bipolar model, the 9021-9X2. This was to be accomplished by pursuing high clock frequencies. The G4 could reach a 70% higher frequency than the G3 at silicon process parity, but it suffered a 23% IPC reduction from the G3. The initial G4-based models became available in June 1997, but it wasn't until the 370 MHz model RY5 (with a "Modular Cooling Unit") became available at the end of the year that a 9672 would almost match the 141 MHz model 9X2's performance. At 370 MHz it was the second-highest clocked microprocessor at the time, after the Alpha 21164 of DEC. The execution units in each G4 processor are duplicated for the purpose of error detection and correction. Arriving in late September 1998, the G5 more than doubled the performance over any previous IBM mainframe, and restored IBM's performance lead that had been lost to Hitachi's Skyline mainframes in 1995. The G5 operated at up to 500 MHz, again second only to the DEC Alphas into early 1999. The G5 also added support for the IEEE 754 floating-point formats. The thousandth G5 system shipped less than 100 days after manufacturing began; the greatest ramping of production in S/390's history. In late May 1999 the G6 arrived, featuring copper interconnects and raising the frequency to 637 MHz, higher than the fastest DEC machines at the time. S/390 computers: Other In September 1996 IBM launched the S/390 Multiprise 2000, positioned below the 9672. It used the same technology as the 9672 G3, but it fit half as many processors (up to five) and its off-chip caches were smaller. The 9672 G3 and the Multiprise 2000 were the last versions to support pre-XA System/370 mode. In October 1997, models of the Multiprise 2000 with an 11% higher performance were launched. The Multiprise 3000, based on the 9672 G5, became available in September 1999, featuring PCI buses. The S/390 Integrated Server, an even lower-end S/390 system than the Multiprise, shipped by the end of 1998. It emerged from a line of S/390-compatibility/coprocessor cards for PCs, but is a true S/390 system capable of server duties, having relegated the Pentium II to the role of an I/O coprocessor. It was the first S/390 server to support PCI. It had the same performance and 256 MB maximum memory capacity as the seven-year-older low-end 9221 model 170. From 1997 IBM also offered a "S/390 Application StarterPak", intended as a devkit for developing and testing mainframe software.
**Composite (graphics)** Composite (graphics): The Composite Extension of the X Window System renders the graphical output of clients "...to an off-screen buffer. Applications can then take the contents of that buffer and do whatever they like. The off-screen buffer can be automatically merged into the parent window or merged by external programs, called compositing managers."This enabled the creation of compositing managers for X, capable of effects like transparency, 3D rotation, and jiggly windows. Composite (graphics): The composite extension was added to X.org in version X11R6.8 in September 2004.
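To make the off-screen redirection concrete, the following is a minimal, illustrative C sketch of how a compositing manager might use libXcomposite to ask the X server to redirect all top-level windows into off-screen buffers. It is a sketch under the assumption that the Composite extension and the libXcomposite development headers are available on the system; it stops short of doing any actual compositing, and the file name used in the build comment is only an example.

```c
/* compositor_sketch.c - illustrative only.
 * Build (assuming libXcomposite headers are installed):
 *   cc compositor_sketch.c -lX11 -lXcomposite -o compositor_sketch
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open X display\n");
        return EXIT_FAILURE;
    }

    /* Check that the server actually offers the Composite extension. */
    int event_base, error_base;
    if (!XCompositeQueryExtension(dpy, &event_base, &error_base)) {
        fprintf(stderr, "Composite extension not available\n");
        XCloseDisplay(dpy);
        return EXIT_FAILURE;
    }

    /* Redirect every child of the root window to an off-screen buffer.
     * CompositeRedirectAutomatic lets the server keep merging the buffers
     * back into the parent window; a full compositing manager would use
     * CompositeRedirectManual and draw the final screen image itself. */
    Window root = DefaultRootWindow(dpy);
    XCompositeRedirectSubwindows(dpy, root, CompositeRedirectAutomatic);
    XSync(dpy, False);

    printf("top-level windows are now rendered to off-screen buffers\n");

    /* Keep the redirection alive briefly; it is removed when this client exits. */
    sleep(10);

    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}
```

A real compositing manager would typically use CompositeRedirectManual, fetch each window's off-screen contents with XCompositeNameWindowPixmap, and repaint the screen itself; the automatic mode shown here simply lets the server keep merging the buffers back into the parent window, as described above.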
**Anne Bennett Prize** Anne Bennett Prize: The Anne Bennett Prize and Senior Anne Bennett Prize are awards given by the London Mathematical Society.In every third year, the society offers the Senior Anne Bennett prize to a mathematician normally based in the United Kingdom for work in, influence on or service to mathematics, particularly in relation to advancing the careers of women in mathematics.In the two years out of three in which the Senior Anne Bennett Prize is not awarded, the society offers the Anne Bennett Prize to a mathematician within ten years of their doctorate for work in and influence on mathematics, particularly acting as an inspiration for women mathematicians.Both prizes are awarded in memory of Anne Bennett, an administrator for the London Mathematical Society who died in 2012.The Anne Bennett Prizes should be distinguished from the Anne Bennett Memorial Award for Distinguished Service of the Royal Society of Chemistry, for which Anne Bennett also worked. Winners: The winners of the Anne Bennett Prize have been: 2015 Apala Majumdar, in recognition of her outstanding contributions to the mathematics of liquid crystals and to the liquid crystal community. 2016 Julia Wolf, in recognition of her outstanding contributions to additive number theory, combinatorics and harmonic analysis and to the mathematical community. 2018 Lotte Hollands, in recognition of her outstanding research at the interface between quantum theory and geometry and of her leadership in mathematical outreach activities. 2019 Eva-Maria Graefe, in recognition of her outstanding research in quantum theory and the inspirational role she has played among female students and early career researchers in mathematics and physics. 2021 Viveka Erlandsson, "for her outstanding achievements in geometry and topology and her inspirational active role in promoting women mathematicians". Winners: 2022 Asma Hassannezhad, in recognition of her "work in spectral geometry and her substantial contributions toward the advancement of women in mathematics".The winners of the Senior Anne Bennett Prize have been: 2014 Caroline Series, in recognition of her leading contributions to hyperbolic geometry and symbolic dynamics, and of the major impact of her numerous initiatives towards the advancement of women in mathematics. Winners: 2017 Alison Etheridge, in recognition of her outstanding research on measure-valued stochastic processes and applications to population biology; and for her impressive leadership and service to the profession. 2020 Peter Clarkson, "in recognition of his tireless work to support gender equality in UK mathematics, and particularly for his leadership in developing good practice among departments of mathematical sciences".
**Strela candy** Strela candy: Strela or strila (Russian: Стрела, Ukrainian: Стріла - arrow) is a popular type of candy sold in the Commonwealth of Independent States (CIS), primarily in Ukraine. It gets its name from its distinctive foiled cone shape, which resembles an arrowhead. Form: Strela has a very distinctive shape, taste and structure. The base is a cone made from a round piece of metal foil, coated with a thin layer of chocolate. The candy is filled with a brandy-flavoured cream filling (although some manufacturers use fruit flavourings), which is extremely soft at room temperature. The wide end is sealed with a thick piece of chocolate and the confection is decorated according to the manufacturer's taste. Form: The foil cone is the distinguishing feature of this type of candy. It allows for ease of manufacture and safe storage and transportation of an otherwise too-soft candy body. Before eating, the candy is chilled so that the thin chocolate layer does not melt and become sticky; 20 °C is sufficient, but some people prefer a temperature of around 0 °C so that the candy will slowly melt in the mouth. The candy is held by the thick cone cap (which is deliberately left outside the foil cone and thick enough to hold) and the foil is gently unrolled immediately before consumption. Form: Because of their delicious taste, exquisite look and quality, this type of candy has a steady consumer base in the CIS, but the relatively high price (~30 US cents per piece) and small amount of chocolate restrict it from more widespread popularity. Manufacturers: Since the trademark belonged to the former USSR and the candy was produced in more than one factory, the name is permitted for use by any manufacturer as long as the product satisfies certain technical requirements. Nestle-owned Svitoch, which manufactures Strela in Ukraine, has re-branded the product as Stozhary (Ukrainian: Стожари - torches).
**The Invisible Man Appears** The Invisible Man Appears: The Invisible Man Appears (Japanese: 透明人間現わる, Hepburn: Tōmei Ningen Arawaru) is a 1949 Japanese science fiction tokusatsu film directed by Nobuo Adachi, with special effects by Eiji Tsuburaya. Loosely based on H. G. Wells' 1897 novel The Invisible Man and produced by Daiei Film, the film stars Kanji Koshiba, Chizuru Kitagawa, Takiko Mizunoe, Daijirō Natsukawa, Ryūnosuke Tsukigata, and Kichijiro Ueda. Plot: A gang of thugs intends to use an invisibility formula created by Professor Nakazato to steal a priceless jewel necklace, "The Tears of Amour." Cast: Kanji Koshiba as Shunji Kurokawa, the Invisible Man Chizuru Kitagawa as Machiko Nakazato Takiko Mizunoe as Ryuko Mizuki Ryūnosuke Tsukigata as Kenzo Nakazato Daijirō Natsukawa as Kyosuke Segi Kichijiro Ueda as Otoharu Sugimoto Shosaku Sugiyama as Ichiro Kawabe Mitsusaburo Ramon as Matsubara, lead investigator Themes: The Invisible Man Appears was influenced by exposure to American films during the Allied Occupation of Japan following World War II. Production: The film was initially tentatively titled Invisible Demon by Hisashi Okuda. According to Okuda, when he showed the plan to Eiji Tsuburaya, who had just been expelled from public office, Tsuburaya promised, "I am willing to cooperate because I believe this is worth considering." Release: The Invisible Man Appears was released in Japan in 1949. The film and its follow-up were not released outside of Japan until Arrow Video released the film on Blu-ray on March 15, 2021. Follow-up: Daiei produced a second film inspired by H. G. Wells' The Invisible Man novel, titled The Invisible Man vs. The Human Fly (透明人間と蝿男), which was released to Japanese theaters on August 25, 1957.
**Tricalcium phosphate** Tricalcium phosphate: Tricalcium phosphate (sometimes abbreviated TCP), more commonly known as calcium phosphate, is a calcium salt of phosphoric acid with the chemical formula Ca3(PO4)2. It is also known as tribasic calcium phosphate and bone phosphate of lime (BPL). It is a white solid of low solubility. Most commercial samples of "tricalcium phosphate" are in fact hydroxyapatite. It exists as three crystalline polymorphs: α, α′, and β. The α and α′ states are stable at high temperatures. Nomenclature: Calcium phosphate refers to numerous materials consisting of calcium ions (Ca2+) together with orthophosphates (PO43−), metaphosphates or pyrophosphates (P2O74−) and occasionally oxide and hydroxide ions. In particular, the common mineral apatite has the formula Ca5(PO4)3X, where X is F, Cl, OH, or a mixture; it is hydroxyapatite if the extra ion is mainly hydroxide. Much of the "tricalcium phosphate" on the market is actually powdered hydroxyapatite. Preparation: Tricalcium phosphate is produced commercially by treating hydroxyapatite with phosphoric acid and slaked lime. It cannot be precipitated directly from aqueous solution. Typically, double decomposition reactions are employed, involving a soluble phosphate and calcium salts, e.g. (NH4)2HPO4 + Ca(NO3)2, performed under carefully controlled pH conditions. The precipitate will either be "amorphous tricalcium phosphate", ATCP, or calcium-deficient hydroxyapatite, CDHA, Ca9(HPO4)(PO4)5(OH) (note that CDHA is sometimes termed apatitic calcium triphosphate). Crystalline tricalcium phosphate can be obtained by calcining the precipitate. β-Ca3(PO4)2 is generally formed; higher temperatures are required to produce α-Ca3(PO4)2. Preparation: An alternative to the wet procedure entails heating a mixture of a calcium pyrophosphate and calcium carbonate: CaCO3 + Ca2P2O7 → Ca3(PO4)2 + CO2 Structure of β-, α- and α′- Ca3(PO4)2 polymorphs: Tricalcium phosphate has three recognised polymorphs: the rhombohedral β form, and two high-temperature forms, monoclinic α and hexagonal α′. β-Tricalcium phosphate has a crystallographic density of 3.066 g cm−3, while the high-temperature forms are less dense: α-tricalcium phosphate has a density of 2.866 g cm−3 and α′-tricalcium phosphate has a density of 2.702 g cm−3. All forms have complex structures consisting of tetrahedral phosphate centers linked through oxygen to the calcium ions. The high-temperature forms each have two types of columns, one containing only calcium ions and the other both calcium and phosphate. There are differences in chemical and biological properties between the β and α forms; the α form is more soluble and biodegradable. Both forms are available commercially and are present in formulations used in medical and dental applications. Occurrence: Calcium phosphate is one of the main combustion products of bone (see bone ash). Calcium phosphate is also commonly derived from inorganic sources such as mineral rock. Occurrence: Tricalcium phosphate occurs naturally in several forms, including: as a rock in Morocco, Israel, the Philippines, Egypt, and Kola (Russia) and in smaller quantities in some other countries. The natural form is not completely pure, and there are some other components like sand and lime which can change the composition. The content of P2O5 in most calcium phosphate rocks is 30% to 40% by weight. Occurrence: in the skeletons and teeth of vertebrate animals, and in milk.
Biphasic calcium phosphate, BCP: Biphasic calcium phosphate, BCP, was originally reported as tricalcium phosphate, but X-ray diffraction techniques showed that the material was an intimate mixture of two phases, hydroxyapatite (HA) and β-tricalcium phosphate. It is a ceramic. Preparation involves sintering, causing irreversible decomposition of calcium-deficient apatites, alternatively termed non-stoichiometric apatites or basic calcium phosphate. An example is: Ca10−δ(PO4)6−δ(HPO4)δ(OH)2−δ → (1−δ) Ca10(PO4)6(OH)2 + 3δ Ca3(PO4)2. β-TCP can contain impurities, for example calcium pyrophosphate, Ca2P2O7, and apatite. β-TCP is bioresorbable. The biodegradation of BCP involves faster dissolution of the β-TCP phase followed by elimination of HA crystals. β-TCP does not dissolve in body fluids at physiological pH levels; dissolution requires cell activity producing an acidic pH. Uses: Food additive: Tricalcium phosphate is used in powdered spices as an anticaking agent, e.g. to prevent table salt from caking. The calcium phosphates have been assigned European food additive number E341. Health and beauty products: It is also found in baby powder, antacids and toothpaste. Biomedical: It is also used as a nutritional supplement and occurs naturally in cow milk, although the most common and economical forms for supplementation are calcium carbonate (which should be taken with food) and calcium citrate (which can be taken without food). There is some debate about the different bioavailabilities of the different calcium salts. Uses: It can be used as a tissue replacement for repairing bony defects when autogenous bone graft is not feasible or possible. It may be used alone or in combination with a biodegradable, resorbable polymer such as polyglycolic acid. It may also be combined with autologous materials for a bone graft. Porous beta-tricalcium phosphate scaffolds are employed as drug carrier systems for local drug delivery in bone. Natural occurrence: Tuite, a natural analogue of tricalcium orthophosphate(V), is a rare component of some meteorites. Its formation is related to shock metamorphism.
**Caffeine dependence** Caffeine dependence: Caffeine dependence is the condition of having a substance dependence on caffeine, a commonplace central nervous system stimulant drug which occurs naturally in coffee, tea, yerba mate, cocoa, and other plants. Caffeine is one of the most common additives in many consumer products, including pills and beverages such as caffeinated alcoholic beverages, energy drinks, and colas. Studies have found that 89 percent of adults in the U.S. consume on average 200 mg of caffeine daily. Cultural influence is a large factor in how and in what form caffeine is used. For example, in African, Asian and Pacific countries, tea is the most popular form of caffeine, whilst in Europe and North America, coffee is the mainstream choice. The Diagnostic and Statistical Manual of Mental Disorders describes four caffeine-related disorders: intoxication, withdrawal, anxiety, and sleep. Dependence: Mild physical dependence can result from long-term caffeine use. In the human body, caffeine blocks adenosine receptors A1 and A2A. Adenosine is a by-product of cellular activity, and stimulation of adenosine receptors produces feelings of tiredness and the need to sleep. Caffeine's ability to block these receptors means that the body's natural stimulants, dopamine and norepinephrine, remain at higher levels. Dependence: Continued exposure to caffeine leads the body to create more adenosine receptors in the central nervous system, which makes it more sensitive to the effects of adenosine. This reduces the stimulatory effects of caffeine by increasing tolerance. It also causes the body to suffer withdrawal symptoms (such as headaches, fatigue, and irritability) if caffeine intake decreases. Dependence: Addiction vs. dependence Caffeine use is classified as a dependence, not an addiction. For a drug to be considered addictive, it must activate the brain's reward circuit. Caffeine, like addictive drugs, enhances dopamine signaling in the brain (it is eugeroic), but not enough to activate the brain's reward circuit like addictive substances such as cocaine, morphine, and nicotine. Caffeine dependence forms because caffeine antagonizes the adenosine A2A receptor, effectively blocking adenosine from the adenosine receptor site. This delays the onset of drowsiness and releases dopamine. Currently, caffeine withdrawal is recognized as a psychiatric condition by the American Psychiatric Association, but caffeine use disorder is not. Professor Roland R. Griffiths, a professor of neurology at Johns Hopkins in Baltimore, strongly believes that caffeine withdrawal should be classified as a psychological disorder. His research suggests that withdrawal affects 50% of habitual coffee drinkers, beginning within 12–24 hours after cessation of caffeine intake, peaking in 20–48 hours, and lasting as long as 9 days. In another study, he concluded that people who take in a minimum of 100 mg of caffeine per day (about the amount in one cup of coffee) can acquire a physical dependence that would trigger withdrawal symptoms, including muscle pain and stiffness, nausea, vomiting, and depressed mood. Physiological effects: Caffeine dependence can cause a person to suffer various physiological effects if caffeine consumption is not maintained.
Commonly known caffeine withdrawal symptoms include headaches, fatigue, loss of focus, lack of motivation, mood swings, nausea, insomnia, dizziness, cardiac issues, hypertension, anxiety, and backache and joint pain; these can range in severity from mild to severe. These symptoms may occur within 12–24 hours and can last from two to nine days. Tests are still being done to get a better understanding of the effects that occur when people become dependent on different forms of caffeine to make it through the day. There are research findings suggesting that the circadian cycle is not significantly changed under popular practices of caffeine consumption in the morning and during the afternoon. Physiological effects: Pregnancy During pregnancy, it is recommended not to consume over 200 mg of caffeine a day (though this varies with the physical characteristics of the person). If a pregnant female consumes high levels of caffeine, it can result in low birth weight due to reduced blood flow to the placenta, which could lead to an increase in health problems later in that child's life. It can also result in premature labor, reduced fertility, and other reproductive issues. The American Pregnancy Association suggests "avoiding caffeine as much as possible" before and during pregnancy, or discussing how to curtail dependency with a healthcare provider. Physiological effects: Children and teenagers According to the American Academy of Pediatrics (AAP), it is not recommended for individuals under the age of 18 to consume several caffeinated drinks in one day. If they do consume caffeine, it is recommended to follow some guidelines so they do not consume too much throughout the day. Such guidelines are commonly lacking in actual strategies to incorporate into daily life. If they do not follow them, they can become dependent on caffeine and, without it, can suffer many different side effects. These include increased heart rate and blood pressure, sleep disturbances, mood swings, and acid-related problems. Long-lasting effects on children's nervous and cardiovascular systems are currently unknown, and studies are still being conducted. Some research has suggested that caffeinated drinks should not target children as their audience or be consumed by children.
**Morantel** Morantel: Morantel is an anthelmintic drug used for the removal of parasitic worms in livestock. It affects the nervous system of worms, as the drug is an inhibitor of acetylcholinesterase. It is derived in part from 3-methylthiophene. Morantel is closely related to pyrantel.
**1β-Methylseleno-N-acetyl-D-galactosamine** 1β-Methylseleno-N-acetyl-D-galactosamine: In organic chemistry, 1β-Methylseleno-N-acetyl-D-galactosamine is an amino sugar containing selenium. It is found in urine, as a disposal metabolite for selenium.
**Undergarment** Undergarment: Underwear, underclothing, or undergarments are items of clothing worn beneath outer clothes, usually in direct contact with the skin, although they may comprise more than a single layer. They serve to keep outer clothing from being soiled or damaged by bodily excretions, to lessen the friction of outerwear against the skin, to shape the body, and to provide concealment or support for parts of it. In cold weather, long underwear is sometimes worn to provide additional warmth. Special types of undergarments have religious significance. Some items of clothing are designed as undergarments, while others, such as T-shirts and certain types of shorts, are appropriate both as underwear and outerwear. If made of suitable material or textile, some underwear can serve as nightwear or swimwear, and some undergarments are intended for sexual attraction or visual appeal. Undergarment: Undergarments are generally of two types, those that are worn to cover the torso and those that are worn to cover the waist and legs, although there are also underclothes which cover both. Different styles of underwear are generally worn by females and males. Undergarments commonly worn by females today include bras and panties (knickers in British English), while males often wear classic briefs, boxer briefs, or boxer shorts. Items worn by both sexes include T-shirts, sleeveless shirts (also called singlets, tank tops, A-shirts, or vests), bikini underpants, thongs, G-strings and T-fronts. Terminology: Undergarments are known by a number of terms. Underclothes, underclothing and underwear are formal terms, while undergarments may be more casually called, in Australia, Reg Grundys (rhyming slang for undies) and Reginalds, and, in the United Kingdom, smalls (from the earlier smallclothes) and (historically) unmentionables. In the United States, women's underwear may be known as delicates due to the recommended washing machine cycle or because they are, simply put, delicate.Women's undergarments collectively are also called lingerie. They also are called intimate clothing and intimates. Terminology: An undershirt (vest in the United Kingdom) is a piece of underwear covering the torso, while underpants (often called pants in the United Kingdom), drawers, and undershorts cover the genitals and buttocks. Terms for specific undergarments are shown in the table below. Function: Underwear is worn for a variety of reasons. They keep outer garments from being soiled by perspiration, urine, semen, pre-seminal fluid, feces, vaginal discharge, and menstrual blood. Women's brassieres provide support for the breasts, and men's briefs serve the same function for the male genitalia. A corset may be worn as a foundation garment to provide support for the breasts and torso, as well as to alter a woman's body shape. For additional support and protection when playing sports, men often wear more tightly fitting underwear, including jockstraps and jockstraps with cup pocket and protective cup. Women may wear sports bras which provide greater support, thus increasing comfort and reducing the chance of damage to the ligaments of the chest during high-impact exercises such as jogging.In cold climates, underwear may constitute an additional layer of clothing helping to keep the wearer warm. Underwear may also be used to preserve the wearer's modesty – for instance, some women wear camisoles and slips (petticoats) under clothes that are sheer. 
Conversely, some types of underwear can be worn for sexual titillation, such as edible underwear or crotchless panties.Undergarments are worn for insulation under space suits and dry suits. In the case of dry suits, the insulation value of the undergarments is selected to match the expected water temperature and the level of activity for the planned dive or water activity.Some items of clothing are designed exclusively as underwear, while others such as T-shirts and certain types of shorts are suitable both as underwear and as outer clothing. The suitability of underwear as outer clothing is, apart from the indoor or outdoor climate, largely dependent on societal norms, fashion, and the requirements of the law. If made of suitable material, some underwear can serve as nightwear or swimsuits. Function: Religious functions Undergarments can also have religious significance: Judaism. To conform with societal dress codes, the tallit katan is often worn beneath the shirt. Mormonism. Following their endowment in a temple, Mormons wear special temple garments which help them to remember the teachings of the temple. Sikhism. One of the five articles of faith (panj kakaar) worn by Sikh men and women is a certain style of underpants similar to boxer shorts and known as the kacchera. Zoroastrianism. Zoroastrians wear an undershirt called a Sedreh that is fastened with a sacred girdle around the waist known as a Kushti. History: Ancient history The loincloth is the simplest form of underwear; it was probably the first undergarment worn by human beings. In warmer climates, the loincloth was often the only clothing worn (effectively making it an outer garment rather than an undergarment), as was doubtless its origin, but in colder regions, the loincloth often formed the basis of a person's clothing and was covered by other garments. In most ancient civilizations, this was the only undergarment available. History: A loincloth may take three major forms. The first, and simplest, is simply a long strip of material that is passed between the legs and then around the waist. Archaeologists have found the remains of such loincloths made of leather dating back 7,000 years. The ancient Hawaiian malo was of this form, as are several styles of the Japanese fundoshi. Another form is usually called a cache-sexe: a triangle of cloth is provided with strings or loops, which are used to fasten the triangle between the legs and over the genitals. Egyptian king Tutankhamun (1341 BC – 1323 BC) was found buried with numerous linen loincloths of this style. An alternate form is more skirt-like: a cloth is wrapped around the hips several times and then fastened with a girdle. History: Men are said to have worn loincloths in ancient Greece and Rome, though it is unclear whether Greek women wore undergarments. There is some speculation that only slaves wore loincloths and that citizens did not wear undergarments beneath their chitons. Mosaics of the Roman period indicate that women (primarily in an athletic context, whilst wearing nothing else) sometimes wore strophiae (breastcloths) or brassieres made of soft leather, along with subligacula which were either in the form of shorts or loincloths. Subligacula were also worn by men.The fabric used for loincloths may have been wool, linen or a linsey-woolsey blend. Only the upper classes could have afforded imported silk. History: The loincloth continues to be worn by people around the world – it is the traditional form of undergarment in many Asian societies, for example. 
In various, mainly tropical, cultures, the traditional male dress may still consist of only a single garment below the waist or even none at all, with underwear as optional, including the Indian dhoti and lungi, or the Scottish kilt. History: Middle Ages and Renaissance In the Middle Ages, western men's underwear became looser fitting. The loincloth was replaced by loose, trouser-like clothing called braies, which the wearer stepped into and then laced or tied around the waist and legs at about mid-calf. Wealthier men often wore chausses as well, which only covered the legs. Braies (or rather braccae) were a type of trouser worn by Celtic and Germanic tribes in antiquity and by Europeans subsequently into the Middle Ages. In the later Middle Ages they were used exclusively as undergarments. History: By the time of the Renaissance, braies had become shorter to accommodate longer styles of chausses. Chausses were also giving way to form-fitting hose, which covered the legs and feet. Fifteenth-century hose were often particolored, with each leg in a different-colored fabric or even more than one color on a leg. However, many types of braies, chausses and hose were not intended to be covered up by other clothing, so they were not actually underwear in the strict sense. History: Braies were usually fitted with a front flap that was buttoned or tied closed. This codpiece allowed men to urinate without having to remove the braies completely. Codpieces were also worn with hose when very short doublets – vest- (UK: waistcoat-) like garments tied together in the front and worn under other clothing – were in fashion, as early forms of hose were open at the crotch. Henry VIII of England began padding his codpiece, which caused a spiralling trend of larger and larger codpieces that only ended by the end of the 16th century. It has been speculated that the King may have had the sexually transmitted disease syphilis, and his large codpiece may have included a bandage soaked in medication to relieve its symptoms. Henry VIII also wanted a healthy son and may have thought that projecting himself in this way would portray fertility. Codpieces were sometimes used as a pocket for holding small items. History: Over the upper part of their bodies, both medieval men and women usually wore a close-fitting shirt-like garment called a chemise in France, or a smock or shift in England. The forerunner of the modern-day shirt, the chemise was tucked into a man's braies, under his outer clothing. Women wore a chemise underneath their gowns or robes, sometimes with petticoats over the chemise. Elaborately quilted petticoats might be displayed by a cut-away dress, in which case they served a skirt rather than an undergarment. During the 16th century, the farthingale was popular. This was a petticoat stiffened with reed or willow rods so that it stood out from a woman's body like a cone extending from the waist. History: Corsets also began to be worn about this time. At first they were called pairs of bodies, which refers to a stiffened decorative bodice worn on top of another bodice stiffened with buckram, reeds, canes, whalebone or other materials. These were not the small-waisted, curved corsets familiar from the Victorian era, but straight-lined stays that flattened the bust. 
History: Men's braies and hose were eventually replaced by simple cotton, silk, or linen drawers, which were usually knee-length trousers with a button flap in the front.Medieval people wearing only tunics, without underpants, can be seen on works like The Ass in the School by Pieter Bruegel the Elder, in the Très Riches Heures du duc de Berry by Limbourg Brothers, or in the Grimani Breviary: The Month of February by Gerard Horenbout. History: In 2012, findings in Lengberg Castle, in Austria, showed that lace and linen brassiere-like garments, one of which greatly resembled the modern bra, date back to hundreds of years before it was thought to exist. Enlightenment and Industrial Age The invention of the spinning jenny machines and the cotton gin in the second half of the 18th century made cotton fabrics widely available. This allowed factories to mass-produce underwear, and for the first time, large numbers of people began buying undergarments in stores rather than making them at home. History: Women's stays of the 18th century were laced behind and drew the shoulders back to form a high, round bosom and erect posture. Colored stays were popular. With the relaxed country styles of the end of the century, stays became shorter and were unboned or only lightly boned, and were now called corsets. As tight waists became fashionable in the 1820s, the corset was again boned and laced to form the figure. By the 1860s, a tiny ("wasp") waist came to be seen as a symbol of beauty, and the corsets were stiffened with whalebone or steel to accomplish this. While "tight lacing" of corsets was not a common practice except among a minority of women, which sometimes led to a woman needing to retire to the fainting room, the primary use of a corset was to create a smooth line for the garments to effect the fashionable shape of the day, using the optical illusion created by the corset and garments together to achieve the look of a smaller waist. By the 1880s, the dress reform movement was campaigning against the alleged pain and damage to internal organs and bones caused by tight lacing. Inez Gaches-Sarraute invented the "health corset", with a straight-fronted busk made to help support the wearer's muscles. History: The corset was usually worn over a thin shirt-like shift of linen or cotton or muslin. Skirt styles became shorter and long drawers called pantalettes or pantaloons kept the legs covered. Pantalettes originated in France in the early 19th century, and quickly spread to Britain and America. Pantalettes were a form of leggings or long drawers. They could be one-piece or two separate garments, one for each leg, attached at the waist with buttons or laces. The crotch was left open for hygiene reasons. History: As skirts became fuller from the 1830s, women wore many petticoats to achieve a fashionable bell shape. By the 1850s, stiffened crinolines and later hoop skirts allowed ever wider skirts to be worn. The bustle, a frame or pad worn over the buttocks to enhance their shape, had been used off and on by women for two centuries, but reached the height of its popularity in the later 1880s, and went out of fashion for good in the 1890s. History: Women dressed in crinolines often wore drawers under them for modesty and warmth. History: Another common undergarment of the late 19th century for men, women, and children was the union suit. 
Invented in Utica, New York and patented in 1868, this was a one-piece front-buttoning garment usually made of knitted material with sleeves extending to the wrists and legs down to the ankles. It had a buttoned flap (known colloquially as the "access hatch", "drop seat", or "fireman's flap") in the back to ease visits to the toilet. The union suit was the precursor of long johns, a two-piece garment consisting of a long-sleeved top and long pants possibly named after American boxer John L. Sullivan who wore a similar garment in the ring.The jockstrap was invented in 1874, by C.F. Bennett of a Chicago sporting goods company, Sharp & Smith, to provide comfort and support for bicycle jockeys riding the cobblestone streets of Boston, Massachusetts. In 1897 Bennett's newly formed Bike Web Company patented and began mass-producing the Bike Jockey Strap. History: 1900s to 1920s By the early 20th century, the mass-produced undergarment industry was booming, and competition forced producers to come up with all sorts of innovative and gimmicky designs to compete. The Hanes company emerged from this boom and quickly established itself as a top manufacturer of union suits, which were common until the 1930s. Textile technology continued to improve, and the time to make a single union suit dropped from days to minutes. History: Meanwhile, designers of women's undergarments relaxed the corset. The invention of new, flexible but supportive materials allowed whalebone and steel bones to be removed. The emancipation or liberty bodice offered an alternative to constricting corsets, and in Australia and the UK the liberty bodice became a standard item for girls as well as women. History: Men's underwear was also on the rise. Benjamin Joseph Clark, a migrant to Louisiana from New Jersey, opened a venture capitalist firm named Bossier in Bossier Parish. One product manufactured by his firm was tightly fitting boxer shorts that resembled modern underwear. Though the company was bankrupt by the early 20th century, it had some impact on men's underwear design. History: Underwear advertising first made an appearance in the 1910s. The first underwear print advertisement in the US appeared in The Saturday Evening Post in 1911 and featured oil paintings by J. C. Leyendecker of the "Kenosha Klosed Krotch". Early underwear advertisements emphasized durability and comfort, and fashion was not regarded as a selling point. By the end of the 1910s, Chalmers Knitting Company split the union suit into upper and lower sections, effectively inventing the modern undershirt and drawers. Women wore lacier versions of this basic duo known as the camisole and tap pants. History: In 1912, the US had its first professional underwear designer. Lindsay "Layneau" Boudreaux, a French immigrant, established the short-lived panty company Layneau. Though her company closed within one year, it had a significant impact on many levels. Boudreaux showed the world that an American woman could establish and run a company, and she also caused a revolution in the underwear industry. History: In 1913, a New York socialite named Mary Phelps Jacob created the first modern brassiere by tying two handkerchiefs together with ribbon. Jacob's original intention was to cover the whalebone sticking out of her corset, which was visible through her sheer dress. Jacob began making brassieres for her family and friends, and news of the garment soon spread by word of mouth. By 1914, Jacob had a patent for her design and was marketing it throughout the US. 
Although women had worn brassiere-like garments in years past, Jacob's was the first to be successfully marketed and widely adopted. History: By the end of the decade, trouser-like "bloomers", which were popularized by Amelia Jenks Bloomer (1818–1894) but invented by Elizabeth Smith Miller, gained popularity with the so-called Gibson Girls who enjoyed pursuits such as cycling and tennis. This new female athleticism helped push the corset out of style. The other major factor in the corset's demise was the fact that metal was globally in short supply during the First World War. Steel-laced corsets were dropped in favor of the brassiere. History: Meanwhile, World War I soldiers were issued button-front shorts as underwear. The buttons attached to a separate piece of cloth, or "yoke", sewn to the front of the garment, and tightness of fit was adjusted by means of ties on the sides. This design proved so popular that it began to supplant the union suit in popularity by the end of the war. Rayon garments also became widely available in the post-war period. History: In the 1920s, manufacturers shifted emphasis from durability to comfort. Union suit advertisements raved about patented new designs that reduced the number of buttons and increased accessibility. Most of these experimental designs had to do with new ways to hold closed the crotch flap common on most union suits and drawers. A new woven cotton fabric called nainsook gained popularity in the 1920s for its durability. Retailers also began selling preshrunk undergarments. History: Also in the 1920s, as hemlines of women's dresses rose, women began to wear stockings to cover the exposed legs. Women's bloomers also became much shorter. The shorter bloomers became looser and less supportive as the boyish flapper look came into fashion. By the end of the decade, they came to be known as "step-ins", very much like modern panties but with wider legs. They were worn for the increased flexibility they afforded. History: The garter belt was invented to keep stockings from falling. In 1928, Maidenform, a company operated by Ida Rosenthal, a Jewish immigrant from Russia, developed the brassiere and introduced modern cup sizes for bras. History: 1930s and 1940s Modern men's underwear was largely an invention of the 1930s. On 19 January 1935, Coopers Inc. sold the world's first briefs in Chicago. Designed by an "apparel engineer" named Arthur Kneibler, briefs dispensed with leg sections and had a Y-shaped overlapping fly. The company dubbed the design the "Jockey" since it offered a degree of support that had previously only been available from the jockstrap. Jockey briefs proved so popular that over 30,000 pairs were sold within three months of their introduction. Coopers, renaming their company Jockey decades later, sent its "Mascul-line" plane to make special deliveries of "masculine support" briefs to retailers across the US. In 1938, when jockeys were introduced in the UK, they sold at the rate of 3,000 a week.In this decade, companies also began selling buttonless drawers fitted with an elastic waistband. These were the first true boxer shorts, which were named for their resemblance to the shorts worn by professional fighters. Scovil Manufacturing introduced the snap fastener at this time, which became a popular addition to various kinds of undergarments. History: Women of the 1930s brought the corset back, now called the "girdle". 
The garment lacked the whalebone and metal supports and usually came with a brassiere (now usually called a "bra") and attached garters. History: During World War II, elastic waistbands and metal snaps gave way once again to button fasteners due to rubber and metal shortages. Undergarments were harder to find as well, since soldiers abroad had priority to obtain them. By the end of the war, Jockey and Hanes remained the industry leaders in the US, but Cluett, Peabody and Company made a name for itself when it introduced a preshrinking process called "Sanforization", invented by Sanford Cluett in 1933, which came to be licensed by most major manufacturers. History: Meanwhile, some women adopted the corset once again, now called the "waspie" for the wasp-shaped waistline it gave the wearer. Many women began wearing the strapless bra as well, which gained popularity for its ability to push the breasts up and enhance cleavage. History: 1950s and 1960s Before the 1950s, underwear consisted of simple, functional, white pieces of clothing which were not to be shown in public. In the 1950s, underwear came to be promoted as a fashion item in its own right, and came to be made in prints and colors. Manufacturers also experimented with rayon and newer fabrics like Dacron, nylon, and Spandex. By the 1960s, men's underwear was regularly printed in loud patterns, or with messages or images such as cartoon characters. By the 1960s, department stores also began offering men's double-seat briefs, an optional feature that would double the wear and add greater comfort. Advertisements from this period for the double-thickness seat, and for manufacturing brands such as Hanes and BVD, can be viewed on Newspapers.com. History: Women's undergarments began to emphasize the breasts instead of the waist. The decade saw the introduction of the bullet bra with its pointed bust, inspired by Christian Dior's "New Look", which featured pointed cups. The original Wonderbra and push-up bra by Frederick's of Hollywood finally hit it big. Women's panties became more colorful and decorative, and by the mid-1960s were available in two abbreviated styles called the hip-hugger and the bikini (named after the Pacific Ocean island of that name), frequently in sheer nylon fabric. History: Pantyhose, also called tights in British English, which combined panties and hose into one garment, made their first appearance in 1959, invented by Glen Raven Mills of North Carolina. The company later introduced seamless pantyhose in 1965, spurred by the popularity of the miniskirt. By the end of the decade, the girdle had fallen out of favor as women chose sexier, lighter, and more comfortable alternatives. With the emergence of the women's movement in the United States, sales of pantyhose, having soared initially, dropped off during the latter half of the 1960s. History: 1970s to the present day Underwear as fashion reached its peak in the 1970s and 1980s, and underwear advertisers forgot about comfort and durability, at least in advertising. Sex appeal became a main selling point, in swimwear as well, bringing to fruition a trend that had been building since at least the flapper era. The tank top, an undershirt named after the type of swimwear dating from the 1920s known as a tank suit or maillot, became popular warm-weather casual outerwear in the US in the 1980s. Performers such as Madonna and Cyndi Lauper were also often seen wearing their undergarments on top of other clothes.
History: Although worn for decades by exotic dancers, in the 1980s the G-string first gained popularity in South America, particularly in Brazil. Originally a style of swimsuit, the back of the garment is so narrow that it disappears between the buttocks. By the 1990s the design had made its way to most of the Western world, and thong underwear became popular. Today, the thong is one of the fastest-selling styles of underwear among women, and is also worn by men. History: While health and practicality had previously been emphasized, in the 1970s retailers of men's underpants began focusing on fashion and sex appeal. Designers such as Calvin Klein began featuring near-naked models in their advertisements for briefs. The increased wealth of the gay community helped to promote a diversity of undergarment choices. In his book The Philosophy of Andy Warhol (1975), Andy Warhol wrote: I told B I needed some socks too and at least 30 pairs of Jockey shorts. He suggested I switch to Italian-style briefs, the ones with the T-shaped crotch that tends to build you up. I told him I'd tried them once, in Rome, the day I was walking through a Liz Taylor movie – and I didn't like them because they made me too self-aware. It gave me the feeling girls must have when they wear uplift bras. History: Warhol liked his Jockey briefs so much that he used a pair as a canvas for one of his dollar-sign paintings.In the UK in the 1970s, tight jeans gave briefs a continued edge over boxer shorts among young men, but a decade later boxers were given a boost by Nick Kamen's performance in Levi's "Laundrette" TV commercial for its 501 jeans, during which he stripped down to a pair of white boxers in a public laundromat. Briefs however remained popular in America amongst young men from the 1950s until the mid 1990s; while in Australia the brief remains popular today and has become iconic. History: The 1990s saw the introduction of boxer briefs, which take the longer shape of boxer shorts but maintain the tightness of briefs. Hip hop stars popularized "sagging", in which loosely fitting pants or shorts were allowed to droop below the waist thusly exposing the waistband or a greater portion of the underpants worn underneath; typically boxer shorts or boxer briefs. The chiseled muscularity of Mark Wahlberg (then known as Marky Mark) in a series of 1990s underwear advertisements for Calvin Klein boxer briefs led to his success as a hip hop star and a Hollywood actor. Trends: Some people choose not to wear any underpants, a practice sometimes referred to as "going commando", for comfort, to enable their outer garments (particularly those which are form-fitting) to look more flattering, to avoid creating a panty line, because they find it sexually exciting, to increase ventilation and reduce moisture or because they do not see any need for them. Certain types of clothes, such as cycling shorts and kilts (See True Scotsman), are designed to be worn or are traditionally worn without underpants. This also applies for most clothes worn as nightwear and as swimwear. Some analysts have encouraged people with a higher than average libido to change their underpants more frequently than average due to hygiene-related issues of by-products such as cowper's fluid and vaginal lubrication. Trends: Underwear is sometimes partly exposed for fashion reasons or to titillate. A woman may, for instance, allow the top of her brassiere to be visible from under her collar, or wear a see-through blouse over it. 
Some men wear T-shirts or A-shirts underneath partly or fully unbuttoned shirts. A common style among young men (2018) is to allow the trousers to sag below the waist, thus revealing the waistband or a greater portion of their underpants. This is commonly referred to (in North America) as "hang-low style". A woman wearing low-rise trousers who exposes the upper rear portion of her thong underwear is said to display a "whale tail". Trends: Used underwear The sale of used female underwear for sexual purposes began in Japan, in stores called burusera, and it was even sold in vending machines. In the 21st century, when the Internet made anonymous mail-order sales possible for individuals, some women in the U.S. and UK, in response to male demand, began selling their dirty panties, and sometimes other underwear. Some men find the odor of a woman's bodily secretions sexually arousing, and will use the dirty panties as a masturbation aid. The sale of dirty panties, sometimes worn for several days, and sometimes customized with requested stains, is a significant niche in the sex work field. A far smaller market sells used male underwear to gay men. Celebrity underwear is sometimes sold. A framed pair of Elvis Presley's dirty underwear sold for $8,000 in 2012. Undergarments of Marilyn Monroe, Queen Elizabeth, and former Austrian Emperor Franz Joseph have been sold at auction. The celebrities Jarvis Cocker, Alison Goldfrapp, Nick Cave, Sacha Baron Cohen, Ricky Gervais, Jah Wobble, Fergie, and Helen Mirren donated underwear to be sold for charity. Types and styles: Common contemporary types and styles of undergarments are listed in the table below. Industry: Market In January 2008 it was reported that, according to market research firm Mintel, the men's underwear market in the UK was worth £674 million, and volume sales of men's underpants rose by 24% between 2000 and 2005. British manufacturers and retailers claim that most British men prefer "trunks", or short boxer briefs. The director of menswear of major British retailer Marks & Spencer (M&S), which sells 40 million pairs of men's underpants a year, was quoted as saying that while boxer shorts were still the most popular at M&S, demand was easing off in favor of hipster trunks similar in design to the swimming trunks worn by actor Daniel Craig in the James Bond film Casino Royale (2006). In 1985, Fruit of the Loom, Hanes, and Jockey International had the largest shares of the U.S. men's underwear market; these companies had about 35%, 15%, and 10% of the market, respectively. Gregory Woods, author of "We're Here, We're Queer and We're not Going Catalogue Shopping", stated that companies often do not market men's underwear to straight men on the assumption that they are not interested in buying underwear for themselves; therefore many such advertisements are aimed at women, to convince them to buy underwear for their husbands, as well as at gay or bisexual men. In 1985 Jockey International president Howard Cooley stated that women often shop more than men do, and men ask women to buy underwear for them. According to multiple studies conducted c. 1985, 60–80% of men's undergarments for sale had been purchased by women. Industry: Designers and retailers A number of major designer labels are renowned for their underwear collections, including Calvin Klein, Dolce & Gabbana, and La Perla. Likewise, specialist underwear brands are constantly emerging, such as Andrew Christian, 2(x)ist, Leonisa, and Papi.
Specialist retailers of underwear include high street stores La Senza (Canada), Agent Provocateur (UK), Victoria's Secret (U.S.), and GapBody, the lingerie division of the Gap established in 1998 (U.S.). In 2000, the online retailer Freshpair started in New York, and in 2008 Abercrombie & Fitch opened a new chain of stores, Gilly Hicks, to compete with other underwear retailers. The 2014 Stockholm Skateathon was sponsored by Björn Borg, and the advertising campaign encouraged participants, whether skateboarding or longboarding, to wear undergarments; whilst the campaign received criticism from skateboarders, some participants did end up dressing in the undergarments. Not wearing undergarments: Going without lower body undergarments has come to be known by the slang term going commando, as well as sometimes free-balling or free-buffing (referencing testicles and vulva respectively). The origins of the phrase go commando are uncertain, with some speculating that it may refer to being "out in the open" or "ready for action". The modern usage may be traced in the United States to university students c. 1974, where it was perhaps associated with soldiers in the Vietnam War, who were reputed to go without underwear to "increase ventilation and reduce moisture". The phrase was in use in the UK before then, referring mainly to women, from the late 1960s. The connection to the UK and women has been suggested to link to a World War II euphemism for prostitutes working in London's West End, who were termed "Piccadilly Commandos". The term was re-popularized after it appeared in a 1996 episode of Friends, where Joey Tribbiani wears everything Chandler Bing owns in an act of revenge, while also going "commando". In a 2014 open-access internet-based poll, 60 Minutes and Vanity Fair asked visitors to their websites the question "How often do you 'go commando'?" A quarter of participants said that they did this at least occasionally, while 39% said they never did, and 35% said that they did not know the meaning of the term.
**HeroQuest (video game)** HeroQuest (video game): HeroQuest is a video game based on the HeroQuest board game. A sequel, HeroQuest II: Legacy of Sorasil, was released in 1994 for the Amiga 1200 and Amiga CD32. Reception: The One gave the Amiga version of Hero Quest an overall score of 91%, expressing that it "for the most part" faithfully recreates the tabletop version, but is 'oversimplified' in some areas, and stating that "this over-simplifying is mainly apparent in [combat]: a larger feeling of involvement would have been generated by even the simplest of additions such as the rolling of a dice [sic]. As it stands, the fights are pretty bland and act more as a temporary obstacle than as a major part of the excitement". The One also criticises Hero Quest's 'minimal' animation, but expresses that aside from these grievances, Hero Quest has succeeded in "taking all the elements from the board game and convincingly turning them into a highly playable computer game", furthermore calling it "an excellent conversion of an already enjoyable table-top".The reviewer from Amiga Computing wrote that "Hero Quest represents great value for the money". The reviewer from Amiga Action considered the game "worth buying whether you are a fan of the boardgame or not. Excellent!". The reviewer from Amiga Format said: "Gremlin have managed to produce the computer doppleganger of the original board-game bestseller and 300,000 people can't be wrong: can they?" The reviewer from CU Amiga stated that "Gremlin must be congratulated for a job well done". The reviewer from Amiga Power wrote that "Hero Quest is an enjoyable piece of software indeed, and one of the best multiplayer experiences available for the Amiga". The reviewer from ACAR called the game "technically superb".
**Wildspace (module)** Wildspace (module): Wildspace is an adventure module published in 1990 for the Advanced Dungeons & Dragons fantasy role-playing game. Plot summary: Wildspace is a Spelljammer adventure scenario and an introduction to campaigning in space, in which the player characters board a ship from the skies and are taken into space where they fight a monster that wants to eat their world. Publication history: SJA1 Wildspace was written by Allen Varney, with a cover by Brom, and was published by TSR in 1990 as a 64-page booklet with a large color map and an outer folder.
**Colour centre** Colour centre: The colour centre is a region in the brain primarily responsible for visual perception and cortical processing of colour signals received by the eye, which ultimately results in colour vision. The colour centre in humans is thought to be located in the ventral occipital lobe as part of the visual system, in addition to other areas responsible for recognizing and processing specific visual stimuli, such as faces, words, and objects. Many functional magnetic resonance imaging (fMRI) studies in both humans and macaque monkeys have shown colour stimuli to activate multiple areas in the brain, including the fusiform gyrus and the lingual gyrus. These areas, as well as others identified as having a role in colour vision processing, are collectively labelled visual area 4 (V4). The exact mechanisms, location, and function of V4 are still being investigated. Primary visual cortex: The primary part of the visual cortex (V1) is located in the calcarine sulcus and is the first cortical area involved in visual processing. It receives visual input from the lateral geniculate nucleus (LGN), which is located in the thalamus. V1 sends the visual information received from the LGN to other extrastriate cortex areas for higher order processing. This higher order processing includes the recognition of shapes, motion, and colour. V1 has multiple areas that are colour-sensitive, which indicates that colour processing is not limited to one area. According to a paper by Dr Robert Shapley, V1 has an important role in colour perception. fMRI experimental results showed that V1 has two kinds of colour-sensitive neurons: single-opponent and double-opponent cells. These cells are integral to the opponent process of interpreting colour signals. Single-opponent neurons respond to large areas of colour. This is advantageous for recognizing large colour scenes and atmospheres. In comparison, double-opponent cells respond to patterns, textures, and colour boundaries. This is more important for perceiving the colour of objects and pictures. The double-opponent cells are receptive to opposing inputs from different cone cells in the retina. This is ideal for identifying contrasting colours, such as red and green. [1] Double-opponent cells are particularly important in computing local cone ratios from visual information in their receptive fields. Single-opponent colour-sensitive neurons can be divided into two categories depending on the signals they receive from the cone cells: L-M neurons and S/(L+M) neurons. The three types of cone cells, short (S), medium (M), and long (L), detect different wavelengths across the visible spectrum. S cone cells detect short-wavelength colours, which correspond to violet and blue. Similarly, M cells detect medium-wavelength colours, such as green and yellow, and L cells detect long-wavelength colours, like red. L-M neurons, also called red-green opponent cells, receive input from long-wavelength cones opposed by input from medium-wavelength cones. S/(L+M) neurons receive input from S cells, opposed by the sum of the L- and M-cell inputs; they are also called blue-yellow opponent cells (a numerical sketch of this opponent computation appears at the end of this article). The opposition between the colours allows the visual system to interpret differences in colour, which is ultimately more efficient than processing colours separately. Higher order visual processing: The primary visual cortex V1 sends visual information to the extrastriate cortical areas for higher order visual processing.
These extrastriate cortical areas are located anterior to the occipital lobe. The main ones are designated as visual areas V2, V3, V4, and V5/MT. Each area can have multiple functions. Recent findings have shown that the colour centre is neither isolated nor traceable to a single area in the visual cortex. Rather, there are multiple areas that possibly have different roles in the ability to process colour stimuli. Higher order visual processing: Visual area V4 Anatomical and physiological studies have established that colour processing begins in V1, which sends signals to extrastriate areas V2 and V4 for further processing. V4 in particular is an area of interest because of the strength of the colour receptive fields in its neurons. V4 was initially identified in macaque monkey visual cortex experiments. Originally, it was proposed that colour was selectively processed in V4. However, this hypothesis was later rejected in favour of another hypothesis which suggested that V4 and other areas around V4 work together to process colour in the form of multiple colour-selective regions. After identification of V4 as the colour-selective region in macaque monkeys, scientists began searching for a homologous structure in the human cortex. Using fMRI brain imaging, scientists found three main areas stimulated by colour: V1; an area in the ventral occipital lobe, specifically the lingual gyrus, which was designated as human V4, or hV4; and another area located anteriorly in the fusiform gyrus, designated as V4α. The understanding of V4's purpose has changed as new studies have been performed. Since V4 responds strongly to colour in both macaque monkeys and humans, it has become an area of interest to scientists. The V4 area was originally thought to be dedicated to colour selectivity, but new evidence has shown that V4, as well as other areas of the visual cortex, is receptive to various inputs. V4 neurons are receptive to a number of properties, such as colour, brightness, and texture. V4 is also involved in processing shape, orientation, curvature, motion, and depth. The actual organization of hV4 in the cortex is still being investigated. In the macaque monkey, V4 spans the dorsal and ventral occipital lobe. Human experiments have shown that V4 only spans the ventral portion. This led to distinguishing hV4 from the macaque V4. A recent study from Winawer et al., analysing fMRI measurements used to map hV4 and the ventral occipital areas, showed variance between the subjects used for hV4 mapping; this variance was at first attributed to instrumentation error, but Winawer argued that the sinuses in the brain interfered with the fMRI measurements. Two models for hV4 were tested: one model had hV4 completely on the ventral side, and the second model had hV4 split into dorsal and ventral sections. It was concluded that it was still difficult to map the activity of hV4, and that further investigation was required. However, other evidence, such as lesions in the ventral occipital lobe causing achromatopsia, suggested that the ventral occipital area plays an important role in colour vision. Higher order visual processing: V4α The search for the human equivalent of V4 led to the discovery of other areas that were stimulated by colour. The most significant was an area anterior in the ventral occipital lobe, subsequently named V4α. Further fMRI experiments found that V4α had a different function than V4, but worked cooperatively with it.
V4α is involved in a number of processes, and is active during tasks requiring colour ordering, imagery, knowledge about colour, colour illusions, and object colour. Higher order visual processing: V4-V4α complex The V4 and V4α areas are separate entities, but because of their close proximity in the fusiform gyrus, these two areas are often collectively called the V4-complex. Research into the V4-complex discovered that different chromatic stimulations activated either the V4 or the V4α area, and some stimulation parameters activated both. For example, naturally coloured images activated V4α more powerfully than V4. Unnaturally coloured images activated both V4α and V4 equally. It was concluded that the two sub-divisions co-operate with each other in order to generate colour images, but they are also functionally separate. A study by Nunn et al. on the activation of the V4-complex in people with visual synaesthesia from hearing spoken words was used to predict the location of the colour centre. Synaesthesia is the phenomenon where a sensory stimulus produces an automatic and involuntary reaction in a different sensation. In this study, people who would see colours upon hearing words were studied to see if the colour reaction could be traced to a specific cortical area. fMRI results showed that the left fusiform gyrus, an area consistent with V4, was activated when the subjects heard spoken words. They also found a simultaneous activation of V4α. There was little activity in areas V1 and V2. These results validated the existence of the V4-complex in humans as an area specialized for colour vision. Higher order visual processing: V2 prestriate cortex V2, also called the prestriate cortex, is believed to have a small role in colour processing by projecting signals from V1 to the V4-complex. Whether or not colour-selective cells are present in V2 is still being investigated. Some optical imaging studies have found small clusters of red-green colour-selective cells in V1 and V2, but no blue-yellow colour-selective cells. Other studies have shown that V2 is activated by colour stimuli, but not by colour after-images.[8] V4 also has feedback on V2, suggesting that there is a defined network of communication between the multiple areas of the visual cortex. When GABA, an inhibitory neurotransmitter, was injected into V4 cells, V2 cells experienced a significant decrease in excitability. Research methods: Functional magnetic resonance imaging, or fMRI for short, has been key in determining the colour-selective regions in the visual cortex. fMRI is able to track brain activity by measuring blood flow throughout the brain. Areas with increased blood flow indicate neuronal activity. This change in blood flow is called the haemodynamic response. Among the benefits of fMRI is dynamic, real-time mapping of cortical processes. However, fMRI cannot track the actual firing of neurons, which happens on a millisecond timescale, but it can track the haemodynamic response, which happens on a timescale of seconds. This method is well suited to tracking colour-selective neurons because colour perception produces a visual after-image, observable in the neurons, that lasts about 15 seconds. Sakai et al. used fMRI to observe whether activation of the fusiform gyrus correlated with the perception of colour and the after-image. The subjects in the Sakai study were placed in the fMRI machine and were subsequently subjected to various visual stimuli.
A series of three images were shown to subjects while fMRI was used to focus on the haemodynamics of the fusiform gyrus. The first image was a pattern of six coloured circles. The next two images were achromatic. One of the images had a grey cross, and the other image had the same six circles as the first image, except they were six shades of grey that correlated with the coloured circles. The subjects were cycled between the circle and cross images. During the cross images, the subjects perceived an after-image. The results of the experiment showed that there was a significant increase in activity in the fusiform gyrus when the subject viewed the colour image. This provided more evidence for the existence of the colour centre outside of the primary visual cortex. Cerebral achromatopsia: Cerebral achromatopsia is a chronic condition where a person is unable to see colour, but they are still able to recognize shape and form. Cerebral achromatopsia differs from congenital achromatopsia in that it is caused by damage to the cerebral cortex as opposed to abnormalities in the retinal cells. The search for the colour centre was motivated by the discovery that lesions in the ventral occipital lobe led to colour blindness, as well as the idea that there are area specializations in the cortex. Many studies have shown that lesions in the areas commonly identified as the colour centre, such as V1, V2, and the V4-complex, lead to achromatopsia. Cerebral achromatopsia occurs after injury to the lingual or fusiform gyrus, the areas associated with hV4. These injuries include physical trauma, stroke, and tumour growth. One of the primary motivations for locating the colour centre in the visual cortex is to discover the cause of, and a possible treatment for, cerebral achromatopsia. Cerebral achromatopsia: The extent of the symptoms and the damage differs from person to person. If a person has complete achromatopsia, then their entire visual field is devoid of colour. A person with dyschromatopsia, or incomplete achromatopsia, has similar symptoms to complete achromatopsia, but to a lesser degree. This can occur in people who had achromatopsia but whose brain recovered from the injury, restoring some colour vision. The person may be able to see certain colours. However, there are many cases where there is no recovery. Finally, a person with hemiachromatopsia sees half of their field of vision in colour, and the other half in grey. The visual hemifield contralateral to a lesion in the lingual or fusiform gyrus is the one that appears grey, while the ipsilateral visual hemifield appears in colour. The variance in symptoms emphasizes the need to understand the architecture of the colour centre in order to better diagnose and possibly treat cerebral achromatopsia.
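The single-opponent scheme described under Primary visual cortex above can be made concrete with a small numerical sketch. The cone activation values, the simple subtractions, and the function name opponent_signals below are illustrative assumptions rather than measured physiology; the sketch only shows how a red-green (L-M) signal and a blue-yellow (S opposed by L+M) signal could be computed from three cone responses.

```python
# Minimal sketch of single-opponent colour coding (hypothetical values only).
# Cone activations are normalized to 0..1; the weighting and the function name
# `opponent_signals` are assumptions made for this illustration.

def opponent_signals(L, M, S):
    """Return (red_green, blue_yellow) opponent signals from cone activations."""
    red_green = L - M          # L-M ("red-green") opponent channel
    blue_yellow = S - (L + M)  # S input opposed by the summed L and M inputs
    return red_green, blue_yellow

# A reddish stimulus drives L cones more than M cones and barely drives S cones.
print(opponent_signals(L=0.8, M=0.3, S=0.1))   # red-green channel is clearly positive
# A bluish stimulus drives S cones strongly relative to L and M.
print(opponent_signals(L=0.2, M=0.2, S=0.9))   # blue-yellow channel is clearly positive
```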
**Eicosanoid** Eicosanoid: Eicosanoids are signaling molecules made by the enzymatic or non-enzymatic oxidation of arachidonic acid or other polyunsaturated fatty acids (PUFAs) that are, similar to arachidonic acid, around 20 carbon units in length. Eicosanoids are a sub-category of oxylipins, i.e. oxidized fatty acids of diverse carbon chain lengths, and are distinguished from other oxylipins by their overwhelming importance as cell signaling molecules. Eicosanoids function in diverse physiological systems and pathological processes such as: mounting or inhibiting inflammation, allergy, fever and other immune responses; regulating the abortion of pregnancy and normal childbirth; contributing to the perception of pain; regulating cell growth; controlling blood pressure; and modulating the regional flow of blood to tissues. In performing these roles, eicosanoids most often act as autocrine signaling agents to impact their cells of origin or as paracrine signaling agents to impact cells in the proximity of their cells of origin. Eicosanoids may also act as endocrine agents to control the function of distant cells. Eicosanoid: There are multiple subfamilies of eicosanoids, including most prominently the prostaglandins, thromboxanes, leukotrienes, lipoxins, resolvins, and eoxins. For each subfamily, there is the potential to have at least four separate series of metabolites: two series derived from ω-6 PUFAs (arachidonic and dihomo-gamma-linolenic acids), one series derived from the ω-3 PUFA eicosapentaenoic acid, and one series derived from the ω-9 PUFA mead acid. This subfamily distinction is important. Mammals, including humans, are unable to convert ω-6 into ω-3 PUFA. In consequence, tissue levels of the ω-6 and ω-3 PUFAs and their corresponding eicosanoid metabolites link directly to the amount of dietary ω-6 versus ω-3 PUFAs consumed. Since certain of the ω-6 and ω-3 PUFA series of metabolites have almost diametrically opposing physiological and pathological activities, it has often been suggested that the deleterious consequences associated with the consumption of ω-6 PUFA-rich diets reflect excessive production and activities of ω-6 PUFA-derived eicosanoids, while the beneficial effects associated with the consumption of ω-3 PUFA-rich diets reflect the increased production and activities of ω-3 PUFA-derived eicosanoids. In this view, the opposing effects of ω-6 PUFA-derived and ω-3 PUFA-derived eicosanoids on key target cells underlie the detrimental and beneficial effects of ω-6 and ω-3 PUFA-rich diets on inflammation and allergy reactions, atherosclerosis, hypertension, cancer growth, and a host of other processes. Nomenclature: Fatty acid sources "Eicosanoid" (eicosa-, Greek for "twenty"; see icosahedron) is the collective term for straight-chain polyunsaturated fatty acids (PUFAs) of 20 carbon units in length that have been metabolized or otherwise converted to oxygen-containing products. The PUFA precursors to the eicosanoids include: Arachidonic acid (AA), i.e. 5Z,8Z,11Z,14Z-eicosatetraenoic acid, is an ω-6 fatty acid with four double bonds in the cis configuration (see Cis–trans isomerism), each located between carbons 5-6, 8-9, 11-12, and 14-15. Nomenclature: Adrenic acid (AdA), 7,10,13,16-docosatetraenoic acid, is an ω-6 fatty acid with four cis double bonds, each located between carbons 7-8, 10-11, 13-14, and 16-17. Eicosapentaenoic acid (EPA), i.e.
5Z,8Z,11Z,14Z,17Z-eicosapentaenoic acid, is an ω-3 fatty acid with five cis double bonds, each located between carbons 5-6, 8-9, 11-12, 14-15, and 17-18. Dihomo-gamma-linolenic acid (DGLA), 8Z,11Z,14Z-eicosatrienoic acid, is an ω-6 fatty acid with three cis double bonds, each located between carbons 8-9, 11-12, and 14-15. Mead acid, i.e. 5Z,8Z,11Z-eicosatrienoic acid, is an ω-9 fatty acid containing three cis double bonds, each located between carbons 5-6, 8-9, and 11-12. Nomenclature: Abbreviation A particular eicosanoid is denoted by a four-character abbreviation, composed of its two-letter family abbreviation (LT, EX or PG, as described above), one A-B-C sequence letter, and a subscript or plain-script number, following the designated eicosanoid's trivial name, that indicates the number of its double bonds (a short worked example of this pattern appears after the biosynthesis discussion below). Examples are: the EPA-derived prostanoids have three double bonds (e.g. PGG3), while leukotrienes derived from EPA have five double bonds (e.g. LTB5). Nomenclature: The AA-derived prostanoids have two double bonds (e.g. PGG2), while AA-derived leukotrienes have four double bonds (e.g. LTB4). Nomenclature: Hydroperoxy-, hydroxy-, and oxo-eicosanoids possess a hydroperoxy (-OOH), hydroxy (-OH), or oxo (=O) substituent linked to a PUFA carbon by a single (-) or double (=) bond. Their trivial names indicate the substituent as: Hp or HP for a hydroperoxy residue (e.g. 5-hydroperoxy-eicosatetraenoic acid or 5-HpETE or 5-HPETE); H for a hydroxy residue (e.g. 5-hydroxy-eicosatetraenoic acid or 5-HETE); and oxo- for an oxo residue (e.g. 5-oxo-eicosatetraenoic acid or 5-oxo-ETE or 5-oxoETE). The number of their double bonds is indicated by their full and trivial names: AA-derived hydroxy metabolites have four (i.e. 'tetra' or 'T') double bonds (e.g. 5-hydroxy-eicosatetraenoic acid or 5-HETE); EPA-derived hydroxy metabolites have five ('penta' or 'P') double bonds (e.g. 5-hydroxy-eicosapentaenoic acid or 5-HEPE); and DGLA-derived hydroxy metabolites have three ('tri' or 'Tr') double bonds (e.g. 5-hydroxy-eicosatrienoic acid or 5-HETrE). The stereochemistry of the eicosanoid products formed may differ among the pathways. For prostaglandins, this is often indicated by Greek letters (e.g. PGF2α versus PGF2β). For hydroperoxy and hydroxy eicosanoids, an S or R designates the chirality of their substituents (e.g. 5S-hydroxy-eicosatetraenoic acid [also termed 5(S)-hydroxy-eicosatetraenoic acid] is given the trivial names 5S-HETE or 5(S)-HETE). Since eicosanoid-forming enzymes commonly make S isomer products either with marked preference or essentially exclusively, the use of S/R designations has often been dropped (e.g. 5S-HETE is 5-HETE). Nonetheless, certain eicosanoid-forming pathways do form R isomers, and their S versus R isomeric products can exhibit dramatically different biological activities. Failing to specify S/R isomers can be misleading. Here, all hydroperoxy and hydroxy substituents have the S configuration unless noted otherwise. Nomenclature: Classic eicosanoids Current usage limits the term eicosanoid to: ω-6 Series eicosanoids derived from arachidonic acid: Hydroxyeicosatetraenoic acids (HETE) include the following metabolites of arachidonic acid: 5-HETE, 12-HETE, 15-Hydroxyeicosatetraenoic acid (i.e. 15-HETE), 20-Hydroxyeicosatetraenoic acid (i.e. 20-HETE), and 19-HETE (see 20-Hydroxyeicosatetraenoic acid).
Leukotrienes (LT) include the following metabolites of arachidonic acid: LTA4, LTB4, LTC4, LTD4, and LTE4. Eoxins (EX) include the following metabolites of arachidonic acid: EXA4, EXC4, EXD4, and EXE4. Prostanoids, consisting of several different types: Prostaglandins (PG) include the following metabolites of arachidonic acid: PGG2, PGH2, PGE2, PGD2, PGF2α, PGA2, PGB2 (see Prostanoid and Specialized pro-resolving mediators#Prostaglandins and Isoprostanes). Prostacyclins include: PGI2 (see prostacyclin). Thromboxanes (TX) include the following metabolites of arachidonic acid: TXA2 and TXB2. Cyclopentenone prostaglandins include the following metabolites of arachidonic acid: PGA1, PGA2 (see prostanoid), PGJ2, Δ12-PGJ2, and 15-deoxy-Δ12,14-PGJ2. ω-6 Series eicosanoids derived from dihomo-gamma-linolenic acid. These metabolites are analogs of arachidonic acid-derived eicosanoids but lack a double bond between carbons 5 and 6 and therefore have one fewer double bond than their arachidonic acid-derived analogs. They include the following: PGA1, PGE1, and TXA1. ω-3 Series eicosanoids: Resolvins of the E series (RvE) (D-series resolvins (RvDs) are metabolites of the 22-carbon ω-3 fatty acid docosahexaenoic acid; see Specialized pro-resolving mediators#DHA-derived Resolvins). RvEs include the following metabolites of eicosapentaenoic acid: RvE1, 18S-RvE1, RvE2, and RvE3. Nomenclature: Other ω-3 series eicosapentaenoic acid-derived eicosanoids are analogs of ω-6 fatty acid-derived metabolites but contain a double bond between carbons 17 and 18 and therefore have one more double bond than their arachidonic acid-derived analogs. They include (HEPE is hydroxy-eicosapentaenoic acid): 5-HEPE (see Arachidonate 5-lipoxygenase#Eicosapentaenoic acid), 12-HEPE, 15-HEPE, and 20-HEPE; LTA5, LTB5 (see Essential fatty acid interactions#counteractions), LTC5, LTD5, and LTE5 (see Arachidonate 5-lipoxygenase#Eicosapentaenoic acid); PGE3, PGD3, PGF3α, and Δ(17)-6-keto PGF1α; PGI3 (see Essential fatty acid interactions#Counteraction); and TXA3 and TXB3 (see Essential fatty acid interactions#nomenclature). Nomenclature: ω-9 Series eicosanoids are derived from mead acid, which is metabolized to the three double bond-containing analog of 5-HETE, viz. 5-HETrE (see arachidonate 5-lipoxygenase#Mead acid). Hydroxyeicosatetraenoic acids, leukotrienes, eoxins and prostanoids are sometimes termed "classic eicosanoids". Nonclassic eicosanoids In contrast to the classic eicosanoids, several other classes of PUFA metabolites have been termed 'novel', 'eicosanoid-like' or 'nonclassic eicosanoids'. These include the following classes: Oxoeicosanoids (oxo-ETE) include the following metabolites: 5-oxo-eicosatetraenoic acid (5-oxo-ETE), 12-oxo-ETE (see 12-HETE#Further metabolism), and 15-oxo-ETE, which are metabolites of arachidonic acid (see 15-Hydroxyeicosatetraenoic acid), and 5-oxo-ETrE, which is a metabolite of mead acid (see arachidonate 5-lipoxygenase#Mead acid). Nomenclature: Hepoxilins (Hx) include the following arachidonic acid metabolites: HxA3 and HxB3 (see Hepoxilins). Lipoxins (Lx) include the following metabolites of arachidonic acid: LxA4 and LxB4 (see Specialized pro-resolving mediators). Epi-lipoxins (epi-Lx) include the following metabolites of arachidonic acid: 15-epi-LxA4 (also termed AT-LxA4) and 15-epi-LxB4 (also termed AT-LxB4) (see Specialized pro-resolving mediators).
Epoxyeicosatrienoic acids (EET) include the following metabolites of arachidonic acid: 5,6-EET, 8,9-EET, 11,12-EET, and 14,15-EET (see epoxyeicosatrienoic acid). Epoxyeicosatetraenoic acids (EEQ) include the following metabolites of eicosapentaenoic acid: 5,6-EEQ, 8,9-EEQ, 11,12-EEQ, 14,15-EEQ, and 17,18-EEQ (see epoxyeicosatetraenoic acid).
Nomenclature: Isoprostanes (isoP) are non-enzymatically formed derivatives of polyunsaturated fatty acids studied as markers of oxidative stress; they include the following arachidonic acid-derived isoPs, which are named based on their structural similarities to PGs: D2-isoPs, E2-isoPs, A2-isoPs, and J2-isoPs; and two epoxide-containing isoPs, 5,6-epoxyisoprostane E2 and 5,6-epoxyisoprostane A2. Some of these isoPs have been shown to possess anti-inflammatory activity (see Specialized pro-resolving mediators#Prostaglandins and Isoprostanes).
Nomenclature: Isofurans are non-enzymatically formed derivatives of polyunsaturated fatty acids that possess a furan ring structure; they are studied as markers of oxidative stress. There are 256 potentially different furan ring-containing isomers that can be derived from arachidonic acid.
Nomenclature: Endocannabinoids are glycerol-, ethanolamine-, or dopamine-based derivatives of polyunsaturated fatty acids that activate cannabinoid receptors. They include the following arachidonic acid-esterified agents: Arachidonoylethanolamine, 2-Arachidonoylglycerol, 2-Arachidonyl glyceryl ether, O-arachidonoyl-ethanolamine, and N-Arachidonoyl dopamine. Metabolism of eicosapentaenoic acid to HEPEs, leukotrienes, prostanoids, and epoxyeicosatetraenoic acids, as well as the metabolism of dihomo-gamma-linolenic acid to prostanoids and of mead acid to 5(S)-hydroxy-6E,8Z,11Z-eicosatrienoic acid (5-HETrE), 5-oxo-6,8,11-eicosatrienoic acid (5-oxo-ETrE), LTA3, and LTC3, involves the same enzymatic pathways that make their arachidonic acid-derived analogs.
Biosynthesis: Eicosanoids typically are not stored within cells but rather synthesized as required. They derive from the fatty acids that make up the cell membrane and nuclear membrane. These fatty acids must be released from their membrane sites and then metabolized, initially to products which most often are further metabolized through various pathways to make the large array of products we recognize as bioactive eicosanoids.
Biosynthesis: Fatty acid mobilization Eicosanoid biosynthesis begins when a cell is activated by mechanical trauma, ischemia, other physical perturbations, attack by pathogens, or stimuli made by nearby cells, tissues, or pathogens, such as chemotactic factors, cytokines, growth factors, and even certain eicosanoids. The activated cells then mobilize enzymes, termed phospholipases A2 (PLA2s), capable of releasing ω-6 and ω-3 fatty acids from membrane storage. These fatty acids are bound in ester linkage to the sn-2 position of membrane phospholipids; PLA2s act as esterases to release the fatty acid. There are several classes of PLA2s, with type IV cytosolic PLA2s (cPLA2s) appearing to be responsible for releasing the fatty acids under many conditions of cell activation. The cPLA2s act specifically on phospholipids that contain AA, EPA or DGLA at their sn-2 position. cPLA2 may also release the lysophospholipid that becomes platelet-activating factor.
Biosynthesis: Peroxidation and reactive oxygen species Next, the free fatty acid is oxygenated along any of several pathways; see the Pathways table. The eicosanoid pathways (via lipoxygenase or COX) add molecular oxygen (O2).
Although the fatty acid substrate is achiral, the resulting eicosanoids are chiral; the oxidations proceed with high stereoselectivity (enzymatic oxidations are considered practically stereospecific).
Biosynthesis: Four families of enzymes initiate, or contribute to initiating, the metabolism of fatty acids to eicosanoids: Cyclooxygenases (COXs): COX-1 and COX-2 initiate the metabolism of arachidonic acid to prostanoids that contain two double bonds, i.e. the prostaglandins (e.g. PGE2), prostacyclin (i.e. PGI2), and thromboxanes (e.g. TXA2). The two COX enzymes likewise initiate the metabolism of: a) eicosapentaenoic acid, which has 5 double bonds compared to the 4 double bonds of arachidonic acid, to prostaglandin, prostacyclin, and thromboxane products that have three double bonds, e.g. PGE3, PGI3, and TXA3, and b) dihomo-γ-linolenic acid, which has three double bonds, to prostaglandin, prostacyclin, and thromboxane products that have only one double bond, e.g. PGE1, PGI1, and TXA1.
Biosynthesis: Lipoxygenases (LOXs): 5-Lipoxygenase (5-LOX or ALOX5) initiates the metabolism of arachidonic acid to 5-hydroperoxyeicosatetraenoic acid (5-HpETE), which then may be rapidly reduced to 5-hydroxyeicosatetraenoic acid (5-HETE) or further metabolized to the leukotrienes (e.g. LTB4 and LTC4); 5-HETE may be oxidized to 5-oxo-eicosatetraenoic acid (5-oxo-ETE). In similar fashion, 15-lipoxygenase (15-lipoxygenase 1, 15-LOX, 15-LOX1, or ALOX15) initiates the metabolism of arachidonic acid to 15-HpETE, 15-HETE, eoxins, 8,15-dihydroxyeicosatetraenoic acid (i.e. 8,15-DiHETE), and 15-oxo-ETE, and 12-lipoxygenase (12-LOX or ALOX12) initiates the metabolism of arachidonic acid to 12-HpETE, 12-HETE, hepoxilins, and 12-oxo-ETE. These enzymes also initiate the metabolism of: a) eicosapentaenoic acid to analogs of the arachidonic acid metabolites that contain five rather than four double bonds, e.g. 5-hydroxy-eicosapentaenoic acid (5-HEPE), LTB5, LTC5, 5-oxo-EPE, 15-HEPE, and 12-HEPE; b) the three double bond-containing dihomo-γ-linolenic acid to products that contain three double bonds, e.g. 8-hydroxy-eicosatrienoic acid (8-HETrE), 12-HETrE, and 15-HETrE (this fatty acid cannot be converted to leukotrienes); and c) the three double bond-containing mead acid (by ALOX5) to 5-hydroperoxy-eicosatrienoic acid (5-HpETrE), 5-HETrE, and 5-oxo-ETrE. In the most studied of these pathways, ALOX5 metabolizes eicosapentaenoic acid to 5-hydroperoxyeicosapentaenoic acid (5-HpEPE), 5-HEPE, LTB5, and 5-oxo-EPE, all of which are less active than their arachidonic acid analogs. Since eicosapentaenoic acid competes with arachidonic acid for ALOX5, production of the eicosapentaenoate metabolites leads to a reduction in the eicosatetraenoate metabolites and therefore a reduction in the latter metabolites' signaling. The initial mono-hydroperoxy and mono-hydroxy products made by the aforementioned lipoxygenases have their hydroperoxy and hydroxy residues positioned in the S chiral configuration and are more properly termed 5S-HpETE, 5S-HETE, 12S-HpETE, 12S-HETE, 15S-HpETE, and 15S-HETE. ALOX12B (i.e. arachidonate 12-lipoxygenase, 12R type) forms R chirality products, i.e. 12R-HpETE and 12R-HETE. Similarly, ALOXE3 (i.e. epidermis-type lipoxygenase 3 or eLOX3) metabolizes arachidonic acid to 12R-HpETE and 12R-HETE; however, these are minor products that this enzyme forms only under a limited set of conditions. ALOXE3 preferentially metabolizes arachidonic acid to hepoxilins.
Biosynthesis: Epoxygenases: these are cytochrome P450 enzymes which generate nonclassic eicosanoid epoxides derived from: a) arachidonic acid, viz. 5,6-epoxy-eicosatrienoic acid (5,6-EET), 8,9-EET, 11,12-EET, and 14,15-EET (see Epoxyeicosatrienoic acid); b) eicosapentaenoic acid, viz. 5,6-epoxy-eicosatetraenoic acid (5,6-EEQ), 8,9-EEQ, 11,12-EEQ, 14,15-EEQ, and 17,18-EEQ (see Epoxyeicosatetraenoic acid); c) dihomo-γ-linolenic acid, viz. 8,9-epoxy-eicosadienoic acid (8,9-EpEDE), 11,12-EpEDE, and 14,15-EpEDE; and d) adrenic acid, viz. 7,8-epoxy-eicosatrienoic acid (7,8-EpETrE), 10,11-EpETrE, 13,14-EpETrE, and 16,17-EpETrE. All of these epoxides are converted, sometimes rapidly, to their dihydroxy metabolites by various cells and tissues. For example, 5,6-EET is converted to 5,6-dihydroxy-eicosatrienoic acid (5,6-DiHETrE), 8,9-EEQ to 8,9-dihydroxy-eicosatetraenoic acid (8,9-DiHETE), 11,12-EpEDE to 11,12-dihydroxy-eicosadienoic acid (11,12-DiHEDE), and 16,17-EpETrE to 16,17-dihydroxy-eicosatrienoic acid (16,17-DiHETrE).
Biosynthesis: Cytochrome P450 microsome ω-hydroxylases: CYP4A11, CYP4A22, CYP4F2, and CYP4F3 metabolize arachidonic acid primarily to 20-Hydroxyeicosatetraenoic acid (20-HETE) but also to 16-HETE, 17-HETE, 18-HETE, and 19-HETE; they also metabolize eicosapentaenoic acid primarily to 20-hydroxy-eicosapentaenoic acid (20-HEPE) but also to 19-HEPE. Two different enzymes may act in series on a PUFA to form more complex metabolites. For example, ALOX5 acts with ALOX12 or aspirin-treated COX-2 to metabolize arachidonic acid to lipoxins, and with cytochrome P450 monooxygenase(s), bacterial cytochrome P450 (in infected tissues), or aspirin-treated COX2 to metabolize eicosapentaenoic acid to the E series resolvins (RvEs) (see Specialized pro-resolving mediators). When this occurs with enzymes located in different cell types and involves the transfer of one enzyme's product to a cell which uses the second enzyme to make the final product, it is referred to as transcellular metabolism or transcellular biosynthesis. The oxidation of lipids is hazardous to cells, particularly when close to the nucleus.
Biosynthesis: There are elaborate mechanisms to prevent unwanted oxidation. COX, the lipoxygenases, and the phospholipases are tightly controlled: there are at least eight proteins activated to coordinate generation of leukotrienes. Several of these exist in multiple isoforms. Oxidation by either COX or lipoxygenase releases reactive oxygen species (ROS), and the initial products in eicosanoid generation are themselves highly reactive peroxides. LTA4 can form adducts with tissue DNA. Other reactions of lipoxygenases generate cellular damage; murine models implicate 15-lipoxygenase in the pathogenesis of atherosclerosis.
Biosynthesis: The oxidation in eicosanoid generation is compartmentalized; this limits the peroxides' damage. The enzymes that are biosynthetic for eicosanoids (e.g., glutathione-S-transferases, epoxide hydrolases, and carrier proteins) belong to families whose functions are involved largely with cellular detoxification. This suggests that eicosanoid signaling might have evolved from the detoxification of ROS. The cell must realize some benefit from generating lipid hydroperoxides close by its nucleus. PGs and LTs may signal or regulate DNA transcription there; LTB4 is a ligand for PPARα (see diagram at PPAR).
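The double-bond bookkeeping running through the Nomenclature section and the enzyme-family descriptions above (prostanoids carry two fewer double bonds than their precursor fatty acid, while lipoxygenase-derived hydroxy acids and leukotrienes retain the precursor's count) is regular enough to express as a small lookup. The sketch below is purely illustrative: the precursor names and counts come from the text, but the function itself is not part of any standard nomenclature tool.

```python
# Illustrative sketch of the double-bond bookkeeping described above.
PRECURSOR_DOUBLE_BONDS = {
    "arachidonic acid": 4,              # AA, 20:4 omega-6
    "eicosapentaenoic acid": 5,         # EPA, 20:5 omega-3
    "dihomo-gamma-linolenic acid": 3,   # DGLA, 20:3 omega-6
    "mead acid": 3,                     # 20:3 omega-9
}

def product_double_bonds(precursor: str, family: str) -> int:
    """Expected double-bond count in the product's name.

    Prostanoids lose two double bonds to ring/endoperoxide formation;
    leukotrienes and the simple hydroxy/oxo metabolites keep the
    precursor's count.
    """
    n = PRECURSOR_DOUBLE_BONDS[precursor]
    return n - 2 if family == "prostanoid" else n

# Matches the examples in the text: PGE2 from AA, PGE3 from EPA,
# PGE1 from DGLA, LTB4 from AA, LTB5 from EPA, 5-HETrE from mead acid.
assert product_double_bonds("arachidonic acid", "prostanoid") == 2
assert product_double_bonds("eicosapentaenoic acid", "prostanoid") == 3
assert product_double_bonds("dihomo-gamma-linolenic acid", "prostanoid") == 1
assert product_double_bonds("eicosapentaenoic acid", "leukotriene") == 5
assert product_double_bonds("mead acid", "hydroxy") == 3
```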
Biosynthesis: Prostanoid pathways Both COX1 and COX2 (also termed prostaglandin-endoperoxide synthase-1 (PTGS1) and PTGS2, respectively) metabolize arachidonic acid by adding molecular O2 between carbons 9 and 11 to form an endoperoxide bridge between these two carbons, adding molecular O2 to carbon 15 to yield a 15-hydroperoxy product, and creating a carbon-carbon bond between carbons 8 and 12 to create a cyclopentane ring in the middle of the fatty acid, in the process making PGG2, a product that has two fewer double bonds than arachidonic acid. The 15-hydroperoxy residue of PGG2 is then reduced to a 15-hydroxyl residue, thereby forming PGH2. PGH2 is the parent prostanoid to all other prostanoids. It is metabolized by (see diagram in Prostanoid): a) the prostaglandin E synthase pathway, in which any one of three isozymes, PTGES, PTGES2, or PTGES3, converts PGH2 to PGE2 (subsequent products of this pathway include PGA2 and PGB2; see Prostanoid#Biosynthesis); b) PGF synthase, which converts PGH2 to PGF2α; c) prostaglandin D2 synthase, which converts PGH2 to PGD2 (subsequent products in this pathway include 15-dPGJ2; see Cyclopentenone prostaglandin); d) thromboxane synthase, which converts PGH2 to TXA2 (subsequent products in this pathway include TXB2); and e) prostacyclin synthase, which converts PGH2 to PGI2 (subsequent products in this pathway include 6-keto-PGF1α). These pathways have been shown, or in some cases presumed, to metabolize eicosapentaenoic acid to eicosanoid analogs of the cited products that have three rather than two double bonds and therefore carry the number 3 in place of 2 in their names (e.g. PGE3 instead of PGE2). The PGE2, PGE1, and PGD2 products formed in the pathways just cited can undergo a spontaneous dehydration reaction to form PGA2, PGA1, and PGJ2, respectively; PGJ2 may then undergo a spontaneous isomerization followed by a dehydration reaction to form, in series, Δ12-PGJ2 and 15-deoxy-Δ12,14-PGJ2. PGH2 has a 5-carbon ring bridged by molecular oxygen. Its derived PGs have lost this oxygen bridge and contain a single, unsaturated 5-carbon ring, with the exception of thromboxane A2, which possesses a 6-membered ring consisting of one oxygen and 5 carbon atoms. The 5-carbon ring of prostacyclin is conjoined to a second ring consisting of 4 carbon and one oxygen atom. The 5-membered ring of the cyclopentenone prostaglandins possesses an unsaturated bond in a conjugated system with a carbonyl group, which causes these PGs to form bonds with a diverse range of bioactive proteins (for more see the diagrams at Prostanoid).
Biosynthesis: Hydroxyeicosatetraenoate (HETE) and leukotriene (LT) pathways See Leukotriene#Synthesis, Hydroxyeicosatetraenoic acid, and Eoxin#Human biosynthesis.
Biosynthesis: The enzyme 5-lipoxygenase (5-LO or ALOX5) converts arachidonic acid into 5-hydroperoxyeicosatetraenoic acid (5-HPETE), which may be released and rapidly reduced to 5-hydroxyeicosatetraenoic acid (5-HETE) by ubiquitous cellular glutathione-dependent peroxidases. Alternatively, ALOX5 uses its LTA synthase activity to convert 5-HPETE to leukotriene A4 (LTA4). LTA4 is then metabolized either to LTB4 by leukotriene A4 hydrolase or to leukotriene C4 (LTC4) by either LTC4 synthase or microsomal glutathione S-transferase 2 (MGST2). Either of the latter two enzymes acts to attach the sulfur of cysteine's thiol (i.e. SH) group in the tripeptide glutamate-cysteine-glycine to carbon 6 of LTA4, thereby forming LTC4.
After release from its parent cell, the glutamate and glycine residues of LTC4 are removed step-wise by gamma-glutamyltransferase and a dipeptidase to form, sequentially, LTD4 and LTE4. The decision to form LTB4 versus LTC4 depends on the relative content of LTA4 hydrolase versus LTC4 synthase (or glutathione S-transferase) in cells; eosinophils, mast cells, and alveolar macrophages possess relatively high levels of LTC4 synthase and accordingly form LTC4 rather than, or to a far greater extent than, LTB4. 5-LOX may also work in series with cytochrome P450 oxygenases or aspirin-treated COX2 to form the resolvins RvE1, RvE2, and 18S-RvE1 (see Specialized pro-resolving mediators#EPA-derived resolvins).
Biosynthesis: The enzyme arachidonate 12-lipoxygenase (12-LO or ALOX12) metabolizes arachidonic acid to the S stereoisomer of 12-hydroperoxyeicosatetraenoic acid (12-HPETE), which is rapidly reduced by cellular peroxidases to the S stereoisomer of 12-hydroxyeicosatetraenoic acid (12-HETE) or further metabolized to hepoxilins (Hx) such as HxA3 and HxB3. The enzymes 15-lipoxygenase-1 (15-LO-1 or ALOX15) and 15-lipoxygenase-2 (15-LO-2, ALOX15B) metabolize arachidonic acid to the S stereoisomer of 15-hydroperoxyeicosatetraenoic acid (15(S)-HPETE), which is rapidly reduced by cellular peroxidases to the S stereoisomer of 15-hydroxyeicosatetraenoic acid (15(S)-HETE). The 15-lipoxygenases (particularly ALOX15) may also act in series with 5-lipoxygenase, 12-lipoxygenase, or aspirin-treated COX2 to form the lipoxins and epi-lipoxins, or with P450 oxygenases or aspirin-treated COX2 to form resolvin E3 (see Specialized pro-resolving mediators#EPA-derived resolvins).
Biosynthesis: A subset of cytochrome P450 (CYP450) microsome-bound ω-hydroxylases (see 20-Hydroxyeicosatetraenoic acid) metabolize arachidonic acid to 20-hydroxyeicosatetraenoic acid (20-HETE) and 19-hydroxyeicosatetraenoic acid by an omega oxidation reaction.
Biosynthesis: Epoxyeicosanoid pathway The human cytochrome P450 (CYP) epoxygenases CYP1A1, CYP1A2, CYP2C8, CYP2C9, CYP2C18, CYP2C19, CYP2E1, CYP2J2, and CYP2S1 metabolize arachidonic acid to the non-classic epoxyeicosatrienoic acids (EETs) by converting one of the fatty acid's double bonds to its epoxide, forming one or more of the following EETs: 14,15-EET, 11,12-EET, 8,9-EET, and 5,6-EET. 14,15-EET and 11,12-EET are the major EETs produced by mammalian, including human, tissues. The same CYPs, but also CYP4A1, CYP4F8, and CYP4F12, metabolize eicosapentaenoic acid to five epoxyeicosatetraenoic acids (EEQs), viz. 17,18-EEQ, 14,15-EEQ, 11,12-EEQ, 8,9-EEQ, and 5,6-EEQ (see epoxyeicosatetraenoic acid).
Function, pharmacology, and clinical significance: The following table lists a sampling of the major eicosanoids that possess clinically relevant biological activity, the cellular receptors (see Cell surface receptor) that they stimulate or, where noted, antagonize to attain this activity, some of the major functions which they regulate (either promote or inhibit) in humans and mouse models, and some of their relevance to human diseases.
Prostanoids Many of the prostanoids are known to mediate local symptoms of inflammation: vasoconstriction or vasodilation, coagulation, pain, and fever. Inhibition of COX-1 and/or the inducible COX-2 isoform is the hallmark of NSAIDs (non-steroidal anti-inflammatory drugs), such as aspirin. Prostanoids also activate the PPARγ members of the steroid/thyroid family of nuclear hormone receptors, and directly influence gene transcription.
Prostanoids have numerous other relevancies to clinical medicine, as evidenced by the use of the prostanoids themselves, of their more stable pharmacological analogs, or of their receptor antagonists. Cyclopentenone prostaglandins PGA1, PGA2, PGJ2, Δ12-PGJ2, and 15-deoxy-Δ12,14-PGJ2 exhibit a wide range of anti-inflammatory and inflammation-resolving actions in diverse animal models. They therefore appear to function in a manner similar to specialized pro-resolving mediators, although one of their mechanisms of action, forming covalent bonds with key signaling proteins, differs from those of the specialized pro-resolving mediators.
Function, pharmacology, and clinical significance: HETEs and oxo-ETEs As indicated in their individual Wikipedia pages, 5-hydroxyeicosatetraenoic acid (which, like 5-oxo-eicosatetraenoic acid, acts through the OXER1 receptor), 5-oxo-eicosatetraenoic acid, 12-Hydroxyeicosatetraenoic acid, 15-Hydroxyeicosatetraenoic acid, and 20-Hydroxyeicosatetraenoic acid show numerous activities in animal and human cells, as well as in animal models, that are related to, for example, inflammation, allergic reactions, cancer cell growth, blood flow to tissues, and/or blood pressure. However, their function and relevance to human physiology and pathology have not yet been established.
Function, pharmacology, and clinical significance: Leukotrienes The three cysteinyl leukotrienes, LTC4, LTD4, and LTE4, are potent bronchoconstrictors, increasers of vascular permeability in postcapillary venules, and stimulators of mucus secretion that are released from the lung tissue of asthmatic subjects exposed to specific allergens. They play a pathophysiological role in diverse types of immediate hypersensitivity reactions. Drugs that block their activation of the CYSLTR1 receptor, viz. montelukast, zafirlukast, and pranlukast, are used clinically as maintenance treatment for allergen-induced asthma and rhinitis; nonsteroidal anti-inflammatory drug-induced asthma and rhinitis (see aspirin-exacerbated respiratory disease); exercise- and cold-air-induced asthma (see Exercise-induced bronchoconstriction); and childhood sleep apnea due to adenotonsillar hypertrophy (see Acquired non-inflammatory myopathy#Diet and Trauma Induced Myopathy). When combined with antihistamine drug therapy, they also appear useful for treating urticarial diseases such as hives.
Function, pharmacology, and clinical significance: Lipoxins and epi-lipoxins LxA4, LxB4, 15-epi-LxA4, and 15-epi-LxB4, like other members of the specialized pro-resolving mediators class of eicosanoids, possess anti-inflammatory and inflammation-resolving activity. In a randomized controlled trial, AT-LxA4 and a comparatively stable analog of LxB4, 15R/S-methyl-LxB4, reduced the severity of eczema in a study of 60 infants, and, in another study, inhaled LxA4 decreased LTC4-initiated bronchoprovocation in patients with asthma.
Function, pharmacology, and clinical significance: Eoxins The eoxins (EXC4, EXD4, EXE4) are newly described. They stimulate vascular permeability in an ex vivo human vascular endothelial model system, and in a small study of 32 volunteers, EXC4 production by eosinophils isolated from severe and aspirin-intolerant asthmatics was greater than that from healthy volunteers and mild asthmatic patients; these findings have been suggested to indicate that the eoxins have pro-inflammatory actions and are therefore potentially involved in various allergic reactions.
Production of eoxins by Reed–Sternberg cells has also led to the suggestion that they are involved in Hodgkin's disease. However, the clinical significance of eoxins has not yet been demonstrated.
Function, pharmacology, and clinical significance: Resolvin metabolites of eicosapentaenoic acid RvE1, 18S-RvE1, RvE2, and RvE3, like other members of the specialized pro-resolving mediators class of eicosanoids, possess anti-inflammatory and inflammation-resolving activity. A synthetic analog of RvE1 is in clinical phase III testing (see Phases of clinical research) for the treatment of the inflammation-based dry eye syndrome; along with this study, other clinical trials (NCT01639846, NCT01675570, NCT00799552 and NCT02329743) using an RvE1 analogue to treat various ocular conditions are underway. RvE1 is also in clinical development studies for the treatment of neurodegenerative diseases and hearing loss.
Function, pharmacology, and clinical significance: Other metabolites of eicosapentaenoic acid The metabolites of eicosapentaenoic acid that are analogs of their arachidonic acid-derived prostanoid, HETE, and LT counterparts include: the 3-series prostanoids (e.g. PGE3, PGD3, PGF3α, PGI3, and TXA3), the hydroxyeicosapentaenoic acids (e.g. 5-HEPE, 12-HEPE, 15-HEPE, and 20-HEPE), and the 5-series LTs (e.g. LTB5, LTC5, LTD5, and LTE5). Many of the 3-series prostanoids, the hydroxyeicosapentaenoic acids, and the 5-series LTs have been shown, or are thought, to be weaker stimulators of their target cells and tissues than their arachidonic acid-derived analogs. They are proposed to reduce the actions of their arachidonate-derived analogs by replacing their production with weaker analogs. Eicosapentaenoic acid-derived counterparts of the eoxins have not been described.
Function, pharmacology, and clinical significance: Epoxyeicosanoids The epoxyeicosatrienoic acids (or EETs), and presumably the epoxyeicosatetraenoic acids, have vasodilating actions on the heart, kidney, and other blood vessels, act on the kidney's reabsorption of sodium and water, and act to reduce blood pressure and ischemic and other injuries to the heart, brain, and other tissues; they may also act to reduce inflammation, promote the growth and metastasis of certain tumors, promote the growth of new blood vessels, regulate the release of neuropeptide hormones in the central nervous system, and inhibit or reduce pain perception in the peripheral nervous system.
The ω-3 and ω-6 series: The reduction in AA-derived eicosanoids and the diminished activity of the alternative products generated from ω-3 fatty acids serve as the foundation for explaining some of the beneficial effects of greater ω-3 intake.
The ω-3 and ω-6 series: Arachidonic acid (AA; 20:4 ω-6) sits at the head of the "arachidonic acid cascade": more than twenty eicosanoid-mediated signaling paths controlling a wide array of cellular functions, especially those regulating inflammation, immunity, and the central nervous system. In the inflammatory response, two other groups of dietary fatty acids form cascades that parallel and compete with the arachidonic acid cascade. EPA (20:5 ω-3) provides the most important competing cascade. DGLA (20:3 ω-6) provides a third, less prominent cascade. These two parallel cascades soften the inflammatory effects of AA and its products. Low dietary intake of these less-inflammatory fatty acids, especially the ω-3s, has been linked to several inflammation-related diseases, and perhaps some mental illnesses.
The ω-3 and ω-6 series: The U.S.
National Institutes of Health and the National Library of Medicine state that there is 'A' level evidence that increased dietary ω-3 improves outcomes in hypertriglyceridemia, secondary cardiovascular disease prevention, and hypertension. There is 'B' level evidence ('good scientific evidence') for increased dietary ω-3 in primary prevention of cardiovascular disease, rheumatoid arthritis, and protection from ciclosporin toxicity in organ transplant patients.
The ω-3 and ω-6 series: They also note more preliminary evidence showing that dietary ω-3 can ease symptoms in several psychiatric disorders. Besides the influence on eicosanoids, dietary polyunsaturated fats modulate immune response through three other molecular mechanisms. They (a) alter membrane composition and function, including the composition of lipid rafts; (b) change cytokine biosynthesis; and (c) directly activate gene transcription. Of these, the action on eicosanoids is the best explored.
The ω-3 and ω-6 series: Mechanisms of ω-3 action In general, the eicosanoids derived from AA promote inflammation, and those from EPA and from GLA (via DGLA) are less inflammatory, inactive, or even anti-inflammatory and pro-resolving. The figure shows the ω-3 and ω-6 synthesis chains, along with the major eicosanoids from AA, EPA, and DGLA. Dietary ω-3 and GLA counter the inflammatory effects of AA's eicosanoids in three ways along the eicosanoid pathways: Displacement: dietary ω-3 decreases tissue concentrations of AA, so there is less to form ω-6 eicosanoids. Competitive inhibition: DGLA and EPA compete with AA for access to the cyclooxygenase and lipoxygenase enzymes, so the presence of DGLA and EPA in tissues lowers the output of AA's eicosanoids. Counteraction: some DGLA- and EPA-derived eicosanoids counteract their AA-derived counterparts.
Role in inflammation Since antiquity, the cardinal signs of inflammation have been known as calor (warmth), dolor (pain), tumor (swelling), and rubor (redness). The eicosanoids are involved with each of these signs.
The ω-3 and ω-6 series: Redness: an insect's sting will trigger the classic inflammatory response. Short-acting vasoconstrictors, such as TXA2, are released quickly after the injury. The site may momentarily turn pale. Then TXA2 mediates the release of the vasodilators PGE2 and LTB4. The blood vessels engorge and the injury reddens. Swelling: LTB4 makes the blood vessels more permeable. Plasma leaks out into the connective tissues, and they swell. The process also releases pro-inflammatory cytokines. Pain: the cytokines increase COX-2 activity. This elevates levels of PGE2, sensitizing pain neurons. Heat: PGE2 is also a potent pyretic agent. Aspirin and NSAIDs, drugs that block the COX pathways and stop prostanoid synthesis, limit fever or the heat of localized inflammation.
History: In 1930, gynecologist Raphael Kurzrok and pharmacologist Charles Leib characterized prostaglandin as a component of semen. Between 1929 and 1932, Burr and Burr showed that restricting fat from animals' diets led to a deficiency disease, and first described the essential fatty acids. In 1935, von Euler identified prostaglandin. In 1964, Bergström and Samuelsson linked these observations when they showed that the "classical" eicosanoids were derived from arachidonic acid, which had earlier been considered to be one of the essential fatty acids. In 1971, Vane showed that aspirin and similar drugs inhibit prostaglandin synthesis.
Von Euler received the Nobel Prize in medicine in 1970, which Samuelsson, Vane, and Bergström also received in 1982. E. J. Corey received it in chemistry in 1990 largely for his synthesis of prostaglandins.
**Parallel Redundancy Protocol** Parallel Redundancy Protocol: Parallel Redundancy Protocol (PRP) is a network protocol standard for Ethernet that provides seamless failover against failure of any network component. This redundancy is invisible to the application.
Parallel Redundancy Protocol: PRP nodes have two ports and are attached to two separated networks of similar topology. PRP can be implemented entirely in software, i.e. integrated in the network driver. Nodes with single attachment can be attached to one network only. This is in contrast to the companion standard HSR (IEC 62439-3 Clause 5), with which PRP shares the operating principle. PRP and HSR are independent of the application protocol and can be used by most Industrial Ethernet protocols in the IEC 61784 suite. PRP and HSR are standardized in IEC 62439-3:2016. They have been adopted for substation automation in the framework of IEC 61850.
Parallel Redundancy Protocol: PRP and HSR are suited for applications that request high availability and short switchover time, such as protection for electrical substations and synchronized drives, for instance in printing machines or high-power inverters. For such applications, the recovery time of commonly used protocols such as the Rapid Spanning Tree Protocol (RSTP) is too long. The cost of PRP is a duplication of all network elements that require it. The cost impact is low, since it makes little difference whether the spares lie on the shelf or are actually working in the plant. The maintenance interval is shortened since more components can fail in use, but such an outage will remain invisible to the application.
Parallel Redundancy Protocol: PRP does not cover end-node failures, but redundant nodes may be connected via a PRP network.
Topology: Each PRP network node (DANP) has two Ethernet ports attached to two separate local area networks of arbitrary, but similar, topology. The two LANs have no links connecting them and are assumed to be fail-independent, to avoid common-mode failures.
Topology: Nodes with single attachment (such as a printer) are either attached to one network only (and therefore can communicate only with other nodes attached to the same network), or are attached through a RedBox, a device that behaves like a doubly attached node. Since HSR and PRP use the same duplicate identification mechanism, PRP and HSR networks can be connected without a single point of failure, and the same nodes can be built to be used in both PRP and HSR networks.
Operation: A source node (DANP) sends two copies of a frame simultaneously, one over each port. The two frames travel through their respective LANs until they reach a destination node (DANP) with a certain time skew. The destination node accepts the first frame of a pair and discards the second (if it arrives). Therefore, as long as one LAN is operational, the destination application always receives one frame. PRP provides zero-time recovery and allows the redundancy to be checked continuously to detect lurking failures.
Frame format: To simplify the detection of duplicates, the frames are identified by their source address and a sequence number that is incremented for each frame sent according to the PRP protocol. The sequence number, the frame size, the path identifier and an Ethertype are appended just before the Ethernet checksum in a 6-octet PRP trailer. This trailer is ignored (considered as padding) by all nodes that are unaware of the PRP protocol, and therefore these singly attached nodes (SANs) can operate in the same network.
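The 6-octet trailer described above carries a sequence number, a path (LAN) identifier, the frame size and a PRP suffix. A rough sketch of how a sending node might build it is shown below; the exact bit layout (16-bit sequence number, a 4-bit LAN identifier packed with a 12-bit LSDU size, then a 16-bit suffix) and the suffix value 0x88FB are assumptions based on common descriptions of IEC 62439-3, not details stated in the text above.

```python
import struct

PRP_SUFFIX = 0x88FB  # assumed PRP suffix value; treat as illustrative

def append_prp_trailer(payload: bytes, seq: int, lan_id: int) -> bytes:
    """Append an assumed 6-octet Redundancy Control Trailer to a payload."""
    lsdu_size = (len(payload) + 6) & 0x0FFF          # size including the trailer
    path_and_size = ((lan_id & 0xF) << 12) | lsdu_size
    return payload + struct.pack("!HHH", seq & 0xFFFF, path_and_size, PRP_SUFFIX)

# A sending DANP transmits the same payload on both ports with the same
# sequence number but different LAN identifiers (commonly A = 0xA, B = 0xB).
payload = b"example upper-layer frame data"
frame_a = append_prp_trailer(payload, seq=42, lan_id=0xA)
frame_b = append_prp_trailer(payload, seq=42, lan_id=0xB)
```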
Frame format: NOTE: all legacy devices should accept Ethernet frames of up to 1528 octets; this is below the theoretical limit of 1535 octets.
Implementation: The two Ethernet interfaces of a node use the same MAC address. This is allowed since the two LANs have no connection. Therefore, PRP is a layer 2 redundancy, which allows higher-layer network protocols to operate without modification. A PRP node needs only one IP address. In particular, the ARP protocol will correctly relate the MAC address to the IP address.
Clock synchronization: IEC 62439-3 Annex C specifies the Precision Time Protocol Industry Profile, which supports clock synchronization over PRP with an accuracy of 1 μs after 15 network elements, as a profile of IEEE Std 1588 Precision Time Protocol. Clocks can be doubly attached according to PRP, but since the correction is different according to the path, the duplicate discard method of PRP is not applicable. Also, delay measurement messages (Pdelay_Req & Pdelay_Resp) are not duplicated since they are link-local. About every second, a master clock sends two copies of a Sync message, but not at exactly the same time since the ports are separate; therefore the original Syncs already have different time stamps. A slave receives the two Sync messages at different times and applies the Best Master Clock Algorithm (BMCA); when the two Syncs come from the same grandmaster, the clock quality is used as a tie-breaker. A slave will normally listen to one port and supervise the other, rather than switching back and forth or using both Syncs.
Clock synchronization: This method works for several options in 1588, with Layer 2 / Layer 3 operation, and with peer-to-peer / end-to-end delay measurement. IEC 62439-3 defines these two profiles as: L3E2E (Layer 3, end-to-end), which addresses the requirements of ODVA, and L2P2P (Layer 2, peer-to-peer), which addresses the requirements of power utilities in IEC 61850 and has been adopted by IEEE in IEC/IEEE 61850-9-3.
Legacy versions: The original standard IEC 62439:2010 incremented the sequence number of the Redundancy Control Trailer (RCT) in the PRP frames on a per-connection basis. This gave good error detection coverage but made the transition from PRP to the High-availability Seamless Redundancy (HSR) protocol, which uses a ring topology instead of parallel networks, difficult. The revised standard IEC 62439-3:2012 aligned PRP with HSR using the same duplicate discard algorithm. This allowed building transparent PRP-HSR connection bridges and nodes that can operate both as PRP (DANP) and HSR (DANH) nodes. The old IEC 62439:2010 protocol, which is still used in some control systems, is sometimes referred to as PRP-0, and the 2012 revision simply as PRP.
Applications: An interesting application of PRP was found in the area of wireless communication as a "Timing Combiner", yielding significant improvements in packet loss and timing behaviour over parallel redundant wireless links.
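As a simplified illustration of the duplicate-discard behaviour described under Operation and Legacy versions above, a receiver can remember which (source address, sequence number) pairs it has already delivered and drop the second copy of each pair. Real implementations use a bounded drop window per source rather than the naive cache sketched here, so this is only an approximation of the standardized algorithm.

```python
from collections import deque

class DuplicateDiscard:
    """Deliver the first copy of each (source MAC, sequence number) pair."""

    def __init__(self, window: int = 1024):
        self._order = deque()   # recently accepted keys, oldest first
        self._seen = set()
        self._window = window

    def accept(self, src_mac: str, seq: int) -> bool:
        key = (src_mac, seq)
        if key in self._seen:
            return False        # duplicate arriving from the other LAN
        self._seen.add(key)
        self._order.append(key)
        if len(self._order) > self._window:
            self._seen.discard(self._order.popleft())
        return True             # first copy: pass up to the application

dd = DuplicateDiscard()
assert dd.accept("00:11:22:33:44:55", 42) is True    # copy from LAN A
assert dd.accept("00:11:22:33:44:55", 42) is False   # copy from LAN B
```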
**Sync sound** Sync sound: Sync sound (synchronized sound recording) refers to sound recorded at the time of the filming of movies. It has been widely used in movies since the birth of sound film.
History: Even in the silent film era, films were shown with sounds, often with musical accompaniment by a pianist or an orchestra keeping time with the screen action. The first synchronization method used a rotating recording device marked with a white spot. As the white spot rotated, the cameraman hand-cranked the camera to keep it in sync with the recording. The method was then repeated for playback, but with the projectionist hand-cranking the film projector. "Single-system" sound recorded sound optically to part of the original camera film, or magnetically to a stripe of magnetic coating along the film edge. "Double-system" sound used independent cameras and sound recorders. The first sync sound standard used recorders and cameras both powered by AC (alternating current) motors, essentially clock motors. Later, a 50 Hz or 60 Hz sine wave, called a Pilottone, was recorded on a second parallel track of an audio recorder.
History: In double-system film, speed variations of camera and recorder, as well as the elasticity of the magnetic recording tape, require some positive means of keying the dialogue to its appropriate film frame. The inclusion on the sound recorder of a second, parallel sync or "Pilottone" track has been the most common method in use to this day. In video recording, synchronism is electronically generated and generally called dual-system sound. If, on location, a camera is driven by a DC motor with some sort of governor control to hold it fairly accurately at 24 fps, a sync pulse generator geared to the movement or motor shaft could be employed to provide the sync pulse output. A cable conducts the sync pulse from camera to sound recorder. The sync pulse is typically a sine wave of 50 or 60 Hz with an RMS amplitude of approximately 1 volt. This double-system audio recording could then be transferred or "resolved" to sprocketed magnetic film, with sprocket holes that match one-to-one with the original camera film. These two sprocketed media could be run through a "Moviola" or flat-bed editing table such as the Steenbeck for synchronous sound editing. With the introduction of the Bulova "Accutron" watch, which used a tuning fork as a time reference (watches later used an oscillating electronic crystal), the camera no longer needed to be connected to the sound recorder with a cable. The camera speed was controlled by one oscillator, and a second oscillator in the recorder generated the Pilottone.
History: This method was developed in the 1960s by pioneering filmmaker Richard Leacock. The resulting style of filmmaking was called Direct Cinema. Filmmakers abandoned the studio and went out on location to film, often with hand-held cameras. In 1972, Bell & Howell brought out a consumer version of a double-system Super-8 sound filmmaking system called "Filmosound". A compact cassette recorder was attached to the camera with a cable that transmitted a single pulse to the recorder every time a new frame of film was exposed in the camera. On playback, the cassette recorder pulse was used to control the projector speed.
History: At that time, Ricky Leacock, a professor in the MIT architecture department film section, developed a Super-8 film production system with a crystal-controlled camera, a crystal-generated Pilottone cassette recorder, a sprocketed magnetic film recorder, a flatbed editing table, and a projector.
The MIT/Leacock System was funded with a $300,000 grant from the founder of Polaroid, Edwin Land. In 1973, the one-pulse-per-frame technique was used to control recording directly onto sprocketed magnetic film in the Super8 Sound Recorder. The Super8 Sound Recorder could also "resolve" sound that had been recorded onto cassette tape with this new "digital" sync pulse. Today, digital video cameras and digital sound recorders synchronize electronically and are used for double-system video production.
Pioneering films: On the Bowery by Lionel Rogosin (1956), Chronicle of a Summer by Jean Rouch (1958), Les Raquetteurs by Michel Brault and Gilles Groulx (1958)
Sync sound in Asia: In Hong Kong, sync sound was not widely used until the 1990s, as the generally noisy environment and lower production budgets made such a method impractical. Indian films shot using sync sound include the first Indian talkie Alam Ara, released in 1931, and art-house films such as Satyajit Ray's Pather Panchali. The then-popular Mitchell camera, which could be operated silently, made it possible to shoot in sync sound. However, due to the change of shooting environments from studios to locations, as well as the surging popularity of the more portable but noisy Arri 2c camera, shooting with sync sound became less common during the mid-1960s. Thus, most Indian films, including Bollywood films, shot after the 1960s do not use sync sound, and for that very reason the 2001 films Lagaan and Dil Chahta Hai were noted for their use of it. The common practice in the Indian film industry, even today, is to dub the dialogue during post-production.
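The "resolving" step described in the History section is essentially frequency bookkeeping: at a nominal 60 Hz pilot tone and 24 fps there is a fixed number of pilot cycles per film frame, and any deviation of the measured pilot frequency on playback gives the speed correction to apply during transfer. The sketch below only illustrates that arithmetic; the particular numbers are assumptions, not figures from the text.

```python
def resolve_speed(nominal_pilot_hz: float, measured_pilot_hz: float,
                  frame_rate_fps: float = 24.0) -> tuple[float, float]:
    """Return (pilot cycles per film frame, playback speed correction).

    The correction is the factor by which transfer speed must change so the
    pilot tone plays back at its nominal frequency, keeping sound and
    picture in step frame for frame.
    """
    cycles_per_frame = nominal_pilot_hz / frame_rate_fps
    correction = nominal_pilot_hz / measured_pilot_hz
    return cycles_per_frame, correction

# Example: a 60 Hz pilot at 24 fps gives 2.5 cycles per frame; if the tape
# plays the pilot back at 59.4 Hz, the transfer must run about 1% faster.
print(resolve_speed(60.0, 59.4))
```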
**Automatic message exchange** Automatic message exchange: Automatic message exchange (AME): In an adaptive high-frequency (HF) radio network, an automated process allowing the transfer of a message from message injection to addressee reception, without human intervention. Through the use of machine-addressable transport guidance information, i.e., the message header, the message is automatically routed through an on-line direct connection through single or multiple transmission media. Source: from Federal Standard 1037C
**Aluminium gallium indium phosphide** Aluminium gallium indium phosphide: Aluminium gallium indium phosphide (AlGaInP, also AlInGaP, InGaAlP, GaInP, etc.) is a semiconductor material that provides a platform for the development of novel multi-junction photovoltaics and optoelectronic devices, as it spans a direct bandgap from deep ultraviolet to infrared. AlGaInP is used in the manufacture of light-emitting diodes of high-brightness red, orange, green, and yellow color, to form the heterostructure emitting light. It is also used to make diode lasers.
Formation: An AlGaInP layer is often grown by heteroepitaxy on gallium arsenide or gallium phosphide in order to form a quantum well structure. Heteroepitaxy is a kind of epitaxy performed with materials that are different from each other: a crystalline film grows on a crystalline substrate or film of a different material. This technology is often used to grow crystalline films of materials for which single crystals cannot otherwise be obtained. Another example of heteroepitaxy is gallium nitride (GaN) on sapphire.
Properties: AlGaInP is a semiconductor, which means that its valence band is completely full. The energy of the band gap between the valence band and the conduction band is small enough that the material is able to emit visible light (1.7 eV - 3.1 eV). The band gap of AlGaInP is between 1.81 eV and 2 eV. This corresponds to red, orange, or yellow light, and that is why the LEDs made from AlGaInP are those colors.
Zinc blende structure: AlGaInP's structure is categorized within a specific unit cell called the zinc blende structure. Zinc blende/sphalerite is based on a face-centered cubic lattice of anions. It has 4 asymmetric units in its unit cell. It is best thought of as a face-centered cubic array of anions, with cations occupying one half of the tetrahedral holes. Each ion is 4-coordinate and has local tetrahedral geometry. Zinc blende is its own antitype: you can switch the anion and cation positions in the cell and it has no effect (as in NaCl). In fact, replacement of both the zinc and sulfur with carbon gives the diamond structure.
Applications: AlGaInP can be applied to high-brightness light-emitting diodes, diode lasers, quantum well structures, and, potentially, solar cells. The use of aluminium gallium indium phosphide with high aluminium content, in a five-junction structure, can lead to solar cells with maximum theoretical efficiencies (solar cell efficiency) above 40%.
AlGaInP laser: A diode laser consists of a semiconductor material in which a p-n junction forms the active medium and optical feedback is typically provided by reflections at the device facets. AlGaInP diode lasers emit visible and near-infrared light with wavelengths of 0.63-0.76 μm. The primary applications of AlGaInP diode lasers are in optical disc readers, laser pointers, and gas sensors, as well as for optical pumping and machining.
LED: AlGaInP can be used in LEDs. An LED is composed of a p-n junction, which contains a p-type and an n-type region. The material used in the semiconducting element of an LED determines its color. AlGaInP is one type of material used for LED lighting systems. Another is indium gallium nitride (InGaN). Slight changes in the composition of these alloys change the color of the emitted light. AlGaInP alloys are used to make red, orange and yellow LEDs. InGaN alloys are used to make green, blue and white LEDs.
Safety and toxicity aspects: The toxicology of AlGaInP has not been fully investigated. The dust is an irritant to skin, eyes and lungs.
The environment, health and safety aspects of aluminium indium gallium phosphide sources (such as trimethylgallium, trimethylindium and phosphine) and industrial hygiene monitoring studies of standard MOVPE sources have been reported in a review. Illumination by an AlGaInP laser was associated in one study with slower healing of skin wounds in laboratory rats.
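As a quick check of the band-gap-to-colour relationship quoted in the Properties section, the emitted wavelength follows from λ = hc/E. The short script below uses standard physical constants; it is only an illustration of that conversion, not a statement about any particular device.

```python
# Convert a band-gap energy in electronvolts to the emitted photon wavelength.
PLANCK_EV_S = 4.135667e-15    # Planck constant, eV*s
LIGHT_M_S = 2.99792458e8      # speed of light, m/s

def bandgap_to_wavelength_nm(energy_ev: float) -> float:
    """lambda = h*c / E, returned in nanometres."""
    return PLANCK_EV_S * LIGHT_M_S / energy_ev * 1e9

# The 1.81-2.0 eV range quoted above corresponds to roughly 685-620 nm,
# i.e. the red/orange end of the visible spectrum.
for e in (1.81, 2.0, 3.1):
    print(f"{e:.2f} eV -> {bandgap_to_wavelength_nm(e):.0f} nm")
```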
**Urban rail transit in Canada** Urban rail transit in Canada: Urban rail transit in Canada encompasses a broad range of rail mass transit systems, including commuter rail, rapid transit, light rail, and streetcar systems.
Terminology: "Commuter rail" refers to urban passenger train service between a central city and its suburbs. Three such systems exist in Canada. "Airport rail link" refers to rail transport between a central city and a nearby international airport. The Union Pearson Express is the only dedicated airport rail link in Canada. The SkyTrain's Canada Line also serves as an airport rail link. "Subway" refers to a rapid transit system using heavy rail with steel wheels. The Toronto subway is the only such system in Canada. "Rubber-tired metro" refers to a rapid transit system using heavy rail with rubber tires. The Montreal Metro is the only such system in Canada. "Light metro" refers to a rapid transit system using intermediate or medium-capacity rail. The SkyTrain and the Réseau express métropolitain are the only full light metro systems in Canada. The Toronto subway system includes one light metro line. "Light rail" refers to a rail transit system using light rail vehicles in a dedicated right-of-way. Four such systems exist in Canada. "Streetcar" refers to a rail transit system using light rail vehicles entirely or mostly on streets providing local service in mixed traffic. The Toronto streetcar is the only such system in Canada. "People mover" refers to a small-scale automated guideway transit system. The Terminal Link is the only such system in Canada.
Existing systems: Italics indicate a line under construction.
Existing systems: Calgary Calgary Transit's CTrain network, which started operation in 1981, now has the second-highest weekday ridership of any light rail transit system in North America, surpassed only by the Guadalajara light rail system in Mexico. The CTrain carried over 312,000 passengers per weekday in the fourth quarter of 2018. There are 45 stations in operation in the 60-kilometre (37 mi) CTrain light rail system. After starting with a single leg in 1981, the system has expanded and now has four legs radiating out into Calgary's suburbs in different directions. The legs have been organized into two routes (identified as the Red Line and the Blue Line) that connect the four legs via shared tracks in a downtown transit mall. The existing four legs of the system, as built in chronological order, are the south leg (1981), the northeast leg (1985), the northwest leg (1987), and the west leg (2012).
Existing systems: The Downtown Transit Mall along 7th Avenue South is shared by the Red and Blue lines. The Red Line is a 32.2-kilometre (20.0 mi) line that connects the south and northwest legs via the downtown transit mall. The Blue Line is a 23-kilometre (14 mi) line that connects the northeast and west legs via the downtown transit mall. The Green Line is a planned line that would connect new southeast and north legs via a downtown tunnel.
Edmonton The Edmonton Transit System's LRT system consisted of only one line from its opening in 1978 to 2015. The current 24.3-kilometre (15.1 mi) system includes the original Capital Line and the new Metro Line, sharing part of their route. The first phase of the Valley Line is under construction. The Capital Line runs roughly north–south, between northeast Edmonton and the Century Park community, with a mix of tunnels and at-grade track. Six stations are underground, while the remaining nine are at-grade.
The Metro Line is interlined with the Capital Line from Health Sciences/Jubilee and through the underground portions before branching northwest towards NAIT. The Valley Line is currently under construction. The low-floor line will travel southeast from downtown towards Mill Woods. Extensions to the Capital, Metro, and Valley lines have been approved. The construction of two new lines, the Energy and Festival lines, has been proposed.
Existing systems: Montreal Exo operates five commuter rail lines in Greater Montreal, serving the Island of Montreal, Laval, and the South Shore. Each line terminates at Montreal Central Station or Lucien-L'Allier, both in downtown Montreal, with connections to the metro system. Most of the system is run on Canadian National or Canadian Pacific trackage. Exo formerly owned and operated the Mount Royal Tunnel and the Deux-Montagnes line until service was ended in 2020. The Réseau express métropolitain light metro system is set to take over the Mount Royal Tunnel and the Deux-Montagnes line.
Existing systems: The Montreal Metro is Canada's second-busiest rail transit system. Drawing inspiration from the Paris Métro, it uses rubber-tired metro technology, the only such system in Canada. The 69.2-kilometre (43.0 mi) system has 68 stations on four lines, which serve the north, east, and central portions of the Island of Montreal, as well as the suburbs of Laval and Longueuil. The metro began in 1966 with the east–west Green Line and the north–south Orange Line. A series of expansions since 1966 has extended the original lines and added the Yellow and Blue lines.
Existing systems: The Green Line is a 22.1-kilometre (13.7 mi) line that runs northeast to southwest between Angrignon and Honoré-Beaugrand. The two ends are connected through a central section that runs under De Maisonneuve Boulevard in downtown Montreal. The Orange Line is a 30.0-kilometre (18.6 mi) U-shaped line. The central section runs through downtown Montreal, south of the Green Line's alignment. The two legs connect to Côte-Vertu in the northwest and Montmorency in Laval, northeast of Montreal. The Yellow Line is a 4.25-kilometre (2.64 mi) line with three stations. It connects to the Green and Orange lines at Berri–UQAM station, the system's busiest station, and crosses under the Saint Lawrence River to connect Saint Helen's Island and Longueuil. The Blue Line is a 9.7-kilometre (6.0 mi) line. It runs in a northeast to southwest alignment north of the Green Line, connecting the east island with both legs of the Orange Line. An eastward extension of the Blue Line is planned to begin construction in 2021.
Existing systems: Ottawa The O-Train began in 2001 as a light rail pilot project to supplement Ottawa's Transitway bus rapid transit system. This original line, now known as the Trillium Line, was relatively inexpensive to construct ($21 million) due to its single-track route along a little-used freight-rail right-of-way, and used diesel multiple units (DMUs) to avoid the cost of building overhead lines along the tracks. The Confederation Line opened in September 2019, replacing portions of the Transitway with an underground tunnel through downtown.
Existing systems: The Confederation Line (Line 1) is a light rail line which runs east–west from Blair to Tunney's Pasture, connecting to the Transitway at each terminus and with the Trillium Line at Bayview. The line runs both underground and on the surface and is completely grade-separated. There is a tunnel downtown with three underground stations.
Existing systems: The Trillium Line (Line 2) is an 8-kilometre (5.0 mi) diesel light rail line running north to south from Bayview station to Greenboro station, connecting with the Confederation Line at its northern terminus and the Transitway at its southern terminus. There are three passing sidings along the single-track line. Stage 2 of Ottawa's O-Train expansion is currently under construction, which will expand the Confederation Line east and west and the Trillium Line south.
Existing systems: Toronto GO Transit operates commuter rail services in the Greater Golden Horseshoe, including the metropolitan areas of Toronto, Hamilton, Kitchener, Niagara, Oshawa, Barrie, and Guelph. Each of its seven lines terminates at Union Station in downtown Toronto. With 217,500 average weekday riders, it is Canada's busiest commuter rail service, and the fifth-busiest in North America. The GO Expansion project currently underway will bring electrification, new trackage, bridges, and tunnels to the system, allowing for two-way all-day service with 15-minute frequencies on sections of five of its lines.
Existing systems: GO Transit's parent agency, Metrolinx, also operates the Union Pearson Express, an airport rail link between Union Station and Toronto Pearson International Airport. It opened in advance of the 2015 Pan American Games, sharing most of its routing with GO's Kitchener line before travelling along a 3.3-kilometre (2.1 mi) rail spur to the airport. At the airport, the line connects with the Terminal Link, a free people mover transporting passengers between the airport's terminals and parking garage.
Existing systems: The Toronto Transit Commission's 76.9-kilometre (47.8 mi) subway is Canada's oldest rapid transit system, having opened as the "Yonge subway" in 1954. It is also Canada's busiest system, with 1,603,300 average weekday riders. It is an intermodal system, with three subway lines and one light metro line, and a total of 75 stations, the most of any Canadian system. The system connects each of Toronto's former municipalities, as well as the suburb of Vaughan.
Existing systems: Line 1 Yonge–University is Toronto's oldest, longest, and busiest line. It forms a U-shape, with Union station at its base, connecting to Toronto's intercity and commuter rail hub. The eastern leg travels north along Yonge Street to Finch. The western leg travels northwest, connecting to the University of Toronto and York University, before terminating at Vaughan Metropolitan Centre. Line 2 Bloor–Danforth is an east–west line, running primarily along its two namesakes, Bloor Street and Danforth Avenue. The line connects the east and western suburbs with Line 1 and downtown Toronto. Line 3 Scarborough is a light metro line. It connects to Line 2 at its terminus at Kennedy station and extends to Scarborough Civic Centre. The line is planned to be replaced with an extension of Line 2.
Existing systems: Line 4 Sheppard is the shortest line on the system, with five stations along Sheppard Avenue in North York. It connects to Line 1 at Sheppard–Yonge station. Line 5 Eglinton and Line 6 Finch West are both light rail lines under construction. The two lines will be fully integrated with the subway system upon their opening in 2023. Toronto also operates a streetcar system. Unlike light rail, the majority of the ten routes operate in mixed traffic and all make frequent stops. Three routes operate in a dedicated right-of-way: 510 Spadina running between Spadina station and Union station.
Existing systems: 509 Harbourfront running between Union station and Exhibition Place via Queens Quay station. 512 St. Clair running along St. Clair Avenue West between St. Clair station and Gunns Loop via St. Clair West station. The central section of the 504 King route runs along the King Street Transit Priority Corridor. The proposed East Bayfront LRT would be a fourth streetcar line operating in a dedicated right-of-way.
Existing systems: Vancouver The West Coast Express is a commuter rail line operated by TransLink. The 69-kilometre (43 mi) line runs from Waterfront station in downtown Vancouver to Mission, with six stations in between. The line only operates during peak hours on weekdays, with five trains heading west in the morning rush hour and five heading east in the afternoon rush hour. It is Canada's least-used urban rail transit system. The SkyTrain is TransLink's fully automated medium-capacity metro system. The system opened in 1985 for Expo 86. This original portion, now known as the Expo Line, has since been joined by the Millennium and Canada lines, making it Canada's longest rapid transit system by track length, at 79.6 kilometres (49.5 mi). The system serves Vancouver and many of its surrounding municipalities in the Metro Vancouver Regional District.
Existing systems: The Expo Line is named after Expo 86, for which it was originally constructed. It connects Waterfront station, an intermodal transit station in Downtown Vancouver, with Burnaby, New Westminster, and northwest Surrey. It roughly follows a northwest–southeast direction. Since 2016, a second branch of the line connects northward from Columbia station to the Millennium Line in Burnaby. A southeastward extension is planned to extend down the Fraser Highway to connect eastern Surrey and Langley.
Existing systems: The Millennium Line is named after the 3rd millennium, at the beginning of which it opened. Originally, it operated as a branch service of the Expo Line, following its alignment from Waterfront to Columbia station, before branching northeast back towards Vancouver through Burnaby. The opening of the Evergreen Extension in 2016 brought its current alignment, running roughly east–west from VCC–Clark station in Vancouver to Lafarge Lake–Douglas station in Coquitlam. An additional westward extension is under construction along Broadway to Arbutus station.
Existing systems: The Canada Line was built in advance of the 2010 Winter Olympics. It uses distinct technology from the rest of the system and runs roughly north–south from Waterfront station, splitting in Richmond to head west to the Vancouver International Airport and south to the Brighouse area of Richmond.
Existing systems: Waterloo Region The first phase of the 19-kilometre (12 mi) Ion LRT system runs from Conestoga station in the City of Waterloo to Fairway station in Kitchener. It opened to the public on June 21, 2019. The system operates in reserved lanes on public streets and on private rights-of-way. Waterloo Region, Ontario, has also approved plans for a light rail extension to the Ainslie St. Transit Terminal in Cambridge, as phase two of Ion.
In development: Gatineau Gatineau, Quebec, is proposing a 26-kilometre (16 mi) LRT system that would connect with Ottawa's O-Train system.
In development: Hamilton Hamilton's B-Line route, part of the region's BLAST rapid transit network, was a proposed light rail line to run east–west along King and Main streets, with McMaster University and Eastgate Square as its termini.
However, in announcing the financing for the line, the Government of Ontario changed the eastern terminus from Eastgate Square to Queenston Circle but added a branch to the new West Harbour GO Station. After uncertainty among Hamilton's city council and poor ridership projections in provincially funded studies, the provincial government announced that it would abandon the spur line down James North, and a previously announced BRT system along James, in favour of reinstating Eastgate Square as the terminal station of the B-Line. In December 2019, the Ontario government announced that the project would be abandoned, in part due to higher-than-anticipated costs. In February 2021, the province reversed its decision and announced its recommitment to the Hamilton light rail project, and in May 2021, federal funding was confirmed. In development: Longueuil In February 2020, the mayor of Longueuil, Quebec, proposed building a tramway in stages, running east to west from Hôpital Pierre-Boucher in Longueuil to La Prairie. The proposed line would mostly run along a reconfigured Taschereau Boulevard, passing Cégep Édouard-Montpetit, Longueuil station (terminus of the Yellow Line of the Montreal Metro), Hôpital Charles-LeMoyne, and the planned Panama station of the future Réseau express métropolitain in Brossard. In development: Montreal REM The Réseau express métropolitain is a light metro line under construction in Montreal. It is set to open in phases, beginning in 2023. When completed, it will consist of a central section connecting to the Green, Orange, and Blue metro lines, with four branches serving the North Shore, the West Island, the airport, and the South Shore. In development: Peel Region The Hurontario LRT is a 17.6-kilometre (10.9 mi) light rail line under construction, largely financed by the Ontario provincial government. It will run on the surface along Hurontario Street from Port Credit GO Station in Mississauga to Steeles Avenue in Brampton. On October 28, 2015, Brampton City Council cancelled the proposed 5.6-kilometre (3.5 mi) section of the line along Main Street in Brampton to Brampton GO Station. On March 21, 2019, Metrolinx announced that most of the downtown loop would be deferred to a later date due to financial restrictions, although a short spur to a stop at Square One Shopping Centre would remain. In development: Quebec City The Quebec City Tramway is a planned light rail transit line in Quebec City set to open in 2029. It will link Beauport to Cap-Rouge, passing through Quebec's Parliament Hill. The 19-kilometre (12 mi) line will include a 1.8-kilometre (1.1 mi) underground segment, with the rest of the line running on the surface. Cancelled: Surrey A 27-kilometre (17 mi) light rail network consisting of three lines radiating from SkyTrain stations had been proposed for construction in Surrey, British Columbia. The planned lines were: Surrey City Centre to Guildford Town Centre along 104 Avenue; Surrey City Centre to Newton Town Centre along King George Boulevard; and Surrey City Centre via Fleetwood Town Centre to Langley along the Fraser Highway. The lines on 104 Avenue and King George Boulevard were to be built within seven years, while the Surrey–Langley line on the Fraser Highway would be finished five years later.
A report on the economic benefits of the project was produced by a consulting firm in May 2015. This project (among other major transit infrastructure initiatives, including the extension of the Millennium Line under Broadway in Vancouver) was originally made contingent by the governing BC Liberal party on the approval, by plebiscite in 2015, of a sales tax increase to generate new funds for public transit. The electorate voted against the tax increase, leaving the project unfunded. Subsequently, the project was included in the second phase of TransLink's 10-Year Investment Plan, which was approved in late 2017. However, in 2018, more than 80 percent of the city's residents objected to the line and its potential problems, prompting several parties to adopt its cancellation as part of their platforms during that year's civic election. A mayor and council who objected to the LRT were elected, and their first order of business was to vote unanimously to cancel the LRT line in favour of extending the existing SkyTrain line to Langley, despite the lack of funding to do so. The LRT was "indefinitely suspended" by the regional Mayors' Council on November 15. Cancelled: Toronto LRT projects The Jane LRT was a proposed 16.5-kilometre (10.3 mi) light rail transit line that would have run along Jane Street from Jane station on Line 2 Bloor–Danforth to Pioneer Village station on Line 1 Yonge–University. It was cancelled by Mayor Rob Ford in December 2010. The Sheppard East LRT was a proposed 13-kilometre (8.1 mi) light rail transit line that would have run along the surface of Sheppard Avenue from Don Mills subway station to east of Morningside Avenue. It was cancelled in April 2019 by the Ontario provincial government under Premier Doug Ford in favour of a Line 4 Sheppard subway extension. Cancelled: Victoria region In August 2011, the Victoria Regional Transit System announced that light rail transit had been recommended as the preferred technology to connect Victoria to Saanich and the West Shore communities. In 2018, British Columbia Premier John Horgan rejected the idea of light rail service in the Victoria area, arguing that the area's low population would not justify light rail.
**Solar eclipse of May 2, 2087** Solar eclipse of May 2, 2087: A partial solar eclipse will occur on Friday, May 2, 2087. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. Related eclipses: Solar eclipses 2087–2090 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. Related eclipses: Saros 120 This eclipse is part of Saros cycle 120, repeating every 18 years, 11 days, and containing 71 events. The series started with a partial solar eclipse on May 27, 933 AD, and reached an annular eclipse on August 11, 1059. The series produced hybrid eclipses on three dates, from May 8, 1510, through May 29, 1546, and total eclipses from June 8, 1564, through March 30, 2033. The series ends at member 71 as a partial eclipse on July 7, 2195. The longest duration of totality was 2 minutes, 50 seconds, on March 9, 1997. All eclipses in this series occur at the Moon's descending node.
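To make the repeat intervals quoted above concrete, the following Python sketch simply steps a date forward by one saros. It is only an approximation based on the figures in this entry plus the conventional extra third of a day in a saros (about 6585.32 days in total); the function name and the example are illustrative and not taken from any eclipse catalogue.

```python
from datetime import datetime, timedelta

# Approximate repeat intervals described above (illustrative values only).
SAROS = timedelta(days=6585, hours=8)     # ~18 years, 11 days, plus roughly 8 hours
SEMESTER = timedelta(days=177, hours=4)   # ~177 days and 4 hours

def later_saros_member(eclipse: datetime, steps: int = 1) -> datetime:
    """Estimate when a later eclipse of the same saros series occurs."""
    return eclipse + steps * SAROS

# One saros after the partial eclipse of May 2, 2087:
print(later_saros_member(datetime(2087, 5, 2)))
# -> a date in mid-May 2105, roughly when the next Saros 120 member falls;
#    the extra ~8 hours shifts each successive eclipse about 120 degrees westward.
```

The same arithmetic with SEMESTER in place of SAROS spaces the eclipses of the 2087–2090 semester series roughly 177 days apart at alternating nodes.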
**UBA5** UBA5: Ubiquitin-like modifier-activating enzyme 5 is a protein that in humans is encoded by the UBA5 gene. This gene encodes a member of the E1-like activating enzyme family. Two alternatively spliced transcript variants encoding distinct isoforms have been found for this gene.
**Oxford knee score** Oxford knee score: The Oxford Knee Score (OKS) is a patient-reported outcome questionnaire that was developed specifically to assess the patient's perspective of outcome following Total Knee Arthroplasty. The OKS has subsequently been validated for use in assessing other non-surgical therapies applied to those suffering from knee problems. The OKS consists of twelve questions covering function and pain associated with the knee. It was designed and developed by researchers within the Department of Public Health and Primary Health Care at the University of Oxford in association with surgical colleagues at the Nuffield Orthopaedic Centre. The benefit of this questionnaire is that it is short, practical, reliable, valid, and sensitive to clinically important changes over time. The Oxford Knee Score is owned, managed and supported by Isis Outcomes, an activity within Isis Innovation Ltd, the Technology Transfer Company for the University of Oxford. Score Evaluation: The original evaluation of the Oxford Knee Score was as follows: First, each of the 12 answers is assigned a predefined number of points, ranging from 1 (least difficult) to 5 (most difficult). The 12 ratings are then added together to give a total score used to assess the patient. The possible total score ranges from 12 to 60 points; here, a low score (e.g. 12 points) indicates a good outcome and vice versa. Because of misunderstandings concerning this, the rights holders proposed a different system in which response points range from 0 to 4, giving a total score range of 0 to 48. Here, a high score (e.g. 48) indicates satisfactory joint function and vice versa. Score Evaluation: Both scoring systems remain valid. To avoid misinterpretation, one should always state which scoring system was used.
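The arithmetic of the two conventions is simple enough to show in a short sketch. The Python snippet below is a minimal illustration, not an official implementation; the function names and example responses are hypothetical, and only the rules stated above are assumed (12 items, scored 1–5 and summed to a 12–60 total in the original scheme, or scored 0–4 and summed to a 0–48 total in the revised scheme, with the direction of "better" reversed).

```python
def oks_original(responses):
    """Original scheme: each of the 12 items is scored 1 (least difficult) to 5
    (most difficult); totals range 12-60 and LOWER totals indicate better outcomes."""
    assert len(responses) == 12 and all(1 <= r <= 5 for r in responses)
    return sum(responses)

def oks_revised(responses):
    """Revised scheme: the same 12 items rescored 0-4; totals range 0-48 and
    HIGHER totals indicate better joint function."""
    assert len(responses) == 12 and all(1 <= r <= 5 for r in responses)
    # An answer of 1 (least difficult) maps to 4 points, an answer of 5 to 0.
    return sum(5 - r for r in responses)

# Hypothetical patient answering every item as "least difficult":
answers = [1] * 12
print(oks_original(answers))  # 12 (best possible under the original scheme)
print(oks_revised(answers))   # 48 (best possible under the revised scheme)
```

Under this item-level mapping the two totals are complementary (revised total = 60 minus original total), which is why a best-possible original score of 12 corresponds to a best-possible revised score of 48.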