**Solar eclipse of June 19, 1936** Solar eclipse of June 19, 1936: A total solar eclipse occurred at the Moon's descending node on June 19, 1936 (June 18, 1936 east of the International Date Line). A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Solar eclipse of June 19, 1936: The path of totality crossed Europe and Asia. The full phase could be seen in Greece, Turkey, the USSR, China and the Japanese island of Hokkaido. The maximum eclipse was near Bratsk and lasted about 2.5 minutes. The Sun was 57 degrees above the horizon, gamma had a value of 0.539, and the eclipse was part of Solar Saros 126. Related eclipses: Solar eclipses 1935–1938 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. Related eclipses: Saros 126 It is a part of Saros cycle 126, repeating every 18 years, 11 days, containing 72 events. The series started with a partial solar eclipse on March 10, 1179. It contains annular eclipses from June 4, 1323 through April 4, 1810, hybrid eclipses from April 14, 1828 through May 6, 1864 and total eclipses from May 17, 1882 through August 23, 2044. The series ends at member 72 as a partial eclipse on May 3, 2459. The longest duration of central eclipse (annular or total) was 6 minutes, 30 seconds of annularity on June 26, 1359. The longest duration of totality was 2 minutes, 36 seconds on July 10, 1972. All eclipses in this series occur at the Moon's descending node. Related eclipses: Inex series This eclipse is a part of the long period inex cycle, repeating at alternating nodes every 358 synodic months (≈ 10,571.95 days, or 29 years minus 20 days). Their appearance and longitude are irregular due to a lack of synchronization with the anomalistic month (period of perigee). However, groupings of 3 inex cycles (≈ 87 years minus 2 months) come close (≈ 1,151.02 anomalistic months), so eclipses are similar in these groupings. Related eclipses: Tritos series This eclipse is a part of a tritos cycle, repeating at alternating nodes every 135 synodic months (≈ 3986.63 days, or 11 years minus 1 month). Their appearance and longitude are irregular due to a lack of synchronization with the anomalistic month (period of perigee), but groupings of 3 tritos cycles (≈ 33 years minus 3 months) come close (≈ 434.044 anomalistic months), so eclipses are similar in these groupings. Related eclipses: Metonic series The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur on nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that, or every 3.8 years (1387.94 days).
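The cycle lengths quoted above all follow from whole-number counts of synodic months. A minimal Python sketch of that arithmetic, assuming the standard mean synodic month of about 29.530589 days and the usual month counts for each cycle (values assumed here for illustration, not taken from this text):

```python
# Rough arithmetic behind the eclipse cycles quoted above.
# The synodic month length is the standard mean value; the month counts per
# cycle are the usual definitions, assumed here purely for illustration.
SYNODIC_MONTH = 29.530589  # days, mean value

cycles = {
    "semester": 6,    # ~177 days 4 h
    "tritos":   135,  # ~3986.63 days (11 years minus 1 month)
    "saros":    223,  # ~6585.32 days (18 years, 11 days)
    "metonic":  235,  # ~6939.69 days (19 years)
    "inex":     358,  # ~10,571.95 days (29 years minus 20 days)
}

for name, months in cycles.items():
    days = months * SYNODIC_MONTH
    print(f"{name:8s}: {months:3d} synodic months = {days:9.2f} days "
          f"= {days / 365.2425:6.2f} years")
```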
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Proctosedyl** Proctosedyl: Proctosedyl is the brand name for a family of two products with identical active ingredients designed to treat a variety of proctological disorders. One is a topical ointment, the other a rectal suppository. In the United Kingdom both products are contract manufactured by Patheon Limited on behalf of the Sanofi-Aventis group. Manufacture and distribution is provided by Sanofi-Aventis subsidiaries Hoechst and Hoechst Marion Roussel in other territories worldwide. Application: Both the yellowish-white, translucent, greasy ointment and the smooth, off-white suppositories are formulated for the relief of chronic pruritus ani (otherwise known as anal itching or anusitis) and the treatment of pain, irritation, discharge and itching associated with haemorrhoids (otherwise known as piles). However, both products are also used to provide pain relief in the treatment of anal fissure, for patients undergoing haemorrhoidectomy (pre- and post-operative), in the relief of post-partum (otherwise known as post-natal) haemorrhoidal conditions, and in the treatment of non-infective proctitis. Active ingredients: Both preparations contain: Cinchocaine hydrochloride at a concentration of 5 mg/g to provide anaesthesia and analgesia and to act as a spasmolytic. Hydrocortisone at a concentration of 5 mg/g to provide antipruritic and anti-inflammatory relief. Preparations in some territories may also contain: Framycetin sulfate at a concentration of 10 mg/g as an antibacterial agent. Aesculin at a concentration of 10 mg/g for its retardant effect on Escherichia coli (otherwise known as E. coli).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Winner-take-all in action selection** Winner-take-all in action selection: Winner-take-all is a computer science concept that has been widely applied in behavior-based robotics as a method of action selection for intelligent agents. Winner-take-all systems work by connecting modules (task-designated areas) in such a way that when one action is performed it stops all other actions from being performed, so only one action is occurring at a time. The name comes from the idea that the "winner" action takes all of the motor system's power. History: In the 1980s and 1990s, many roboticists and cognitive scientists were attempting to find speedier and more efficient alternatives to the traditional world modeling method of action selection. In 1982, Jerome A. Feldman and D.H. Ballard published "Connectionist Models and Their Properties", referencing and explaining winner-take-all as a method of action selection. Feldman's architecture functioned on the simple rule that in a network of interconnected action modules, each module will set its own output to zero if it reads a higher input than its own in any other module. In 1986, Rodney Brooks introduced behavior-based artificial intelligence. Winner-take-all architectures for action selection soon became a common feature of behavior-based robots, because selection occurred at the level of the action modules (bottom-up) rather than at a separate cognitive level (top-down), producing a tight coupling of stimulus and reaction. Types of winner-take-all architectures: Hierarchy In the hierarchical architecture, actions or behaviors are programmed in a high-to-low priority list, with inhibitory connections between all the action modules. The agent performs low-priority behaviors until a higher-priority behavior is stimulated, at which point the higher behavior inhibits all other behaviors and takes over the motor system completely. Prioritized behaviors are usually key to the immediate survival of the agent, while behaviors of lower priority are less time-sensitive. For example, "run away from predator" would be ranked above "sleep." While this architecture allows for clear programming of goals, many roboticists have moved away from the hierarchy because of its inflexibility. Types of winner-take-all architectures: Heterarchy and fully distributed In the heterarchy and fully distributed architecture, each behavior has a set of pre-conditions to be met before it can be performed, and a set of post-conditions that will be true after the action has been performed. These pre- and post-conditions determine the order in which behaviors must be performed and are used to causally connect action modules. This enables each module to receive input from other modules as well as from the sensors, so modules can recruit each other. For example, if the agent's goal were to reduce thirst, the behavior "drink" would require the pre-condition of having water available, so the module would activate the module in charge of "find water". The activations organize the behaviors into a sequence, even though only one action is performed at a time. The distribution of larger behaviors across modules makes this system flexible and robust to noise. Some critics of this model hold that any existing set of division rules for the predecessor and conflictor connections between modules produces sub-par action selection. In addition, the feedback loop used in the model can in some circumstances lead to improper action selection.
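Feldman's rule as described above (a module drives its output to zero whenever any other module reports a higher activation) is straightforward to sketch in code. This is a minimal illustration, not code from the original paper; the module names and activation values are hypothetical:

```python
# Winner-take-all via mutual inhibition, following the rule described above:
# each module suppresses its output to zero if any other module's activation
# is higher, so only the single strongest module drives the motor system.
# Module names and activation numbers are hypothetical.

def winner_take_all(activations):
    """Return module outputs after one round of mutual inhibition."""
    strongest = max(activations.values())
    return {
        name: (value if value == strongest else 0.0)
        for name, value in activations.items()
    }

modules = {"flee_predator": 0.9, "find_water": 0.4, "sleep": 0.1}
print(winner_take_all(modules))
# {'flee_predator': 0.9, 'find_water': 0.0, 'sleep': 0.0}
```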
Types of winner-take-all architectures: Arbiter and centrally coordinated In the arbiter and centrally coordinated architecture, the action modules are not connected to each other but to a central arbiter. When behaviors are triggered, they begin "voting" by sending signals to the arbiter, and the behavior with the highest number of votes is selected. In these systems, bias is created through the "voting weight", or how often a module is allowed to vote. Some arbiter systems take a different spin on this type of winner-take-all by using a "compromise" feature in the arbiter. Each module is able to vote for or against each smaller action in a set of actions, and the arbiter selects the action with the most votes, meaning that it benefits the most behavior modules. This can be seen as violating the general rule against creating representations of the world in behavior-based AI, established by Brooks. By performing command fusion, the system is creating a larger composite pool of knowledge than is obtained from the sensors alone, forming a composite inner representation of the environment. Defenders of these systems argue that forbidding world-modeling puts unnecessary constraints on behavior-based robotics, and that agents benefit from forming representations and can still remain reactive.
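A rough sketch of the arbiter-with-compromise idea described above, in which each behavior casts weighted votes for or against candidate actions and the arbiter picks the highest total; the behavior names, weights, and votes below are hypothetical, chosen only for illustration:

```python
# Centrally coordinated (arbiter) action selection: each behavior casts weighted
# votes for or against candidate actions, and the arbiter picks the action with
# the highest total, i.e. the one that benefits the most behavior modules.
# Behavior names, voting weights, and votes are hypothetical.

candidate_actions = ["turn_left", "turn_right", "go_straight"]

# votes: behavior -> (voting weight, {action: +1 for, -1 against, 0 abstain})
votes = {
    "avoid_obstacle": (2.0, {"turn_left": +1, "turn_right": +1, "go_straight": -1}),
    "follow_target":  (1.0, {"turn_left": -1, "turn_right": +1, "go_straight": +1}),
    "stay_on_path":   (1.0, {"turn_left":  0, "turn_right": +1, "go_straight": +1}),
}

totals = {action: 0.0 for action in candidate_actions}
for weight, ballot in votes.values():
    for action, vote in ballot.items():
        totals[action] += weight * vote

winner = max(totals, key=totals.get)
print(totals, "->", winner)  # turn_right wins: it suits the most behaviors
```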
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GSM procedures** GSM procedures: GSM procedures are sets of steps performed by the GSM network and devices on it in order for the network to function. GSM (Global System for Mobile Communications) is a set of standards for cell phone networks established by the European Telecommunications Standards Institute and first used in 1991. Its procedures refer to the steps a GSM network takes to communicate with cell phones and other mobile devices on the network. GSM procedures: IMSI attach refers to the procedure used when a mobile device or mobile station joins a GSM network as it is turned on, and IMSI detach refers to the procedure used to leave or disconnect from a network when the device is turned off. IMSI attach: In a GSM network, when a Mobile Station (MS) is switched ON, the International Mobile Subscriber Identity (IMSI) attach procedure is executed. This procedure is required for the Mobile Switching Center (MSC) and Visitor Location Register (VLR) to register the MS in the network. If the MS has changed Location area (LA) while it was powered off, then the IMSI attach procedure will lead to a Location update. IMSI attach: When the MS is switched on, it searches for a mobile network to connect to. Once the MS identifies its desired network, it sends a message to the network to indicate that it has entered an idle state. The Visitor Location Register (VLR) checks its database to determine whether there is an existing record of the particular subscriber. If no record is found, the VLR communicates with the subscriber's Home Location Register (HLR) and obtains a copy of the subscription information. The obtained information is stored in the database of the VLR. Then an acknowledgement message is sent to the MS. IMSI attach: Steps for the IMSI attach procedure are as follows: The MS will send a Channel Request message to the BSS (base station subsystem) on the RACH (random access channel). The BSS responds on the AGCH (access grant channel) with an Immediate Assignment message and assigns an SDCCH to the MS. The MS immediately switches to the assigned SDCCH (stand-alone dedicated control channel) and sends a Location Update Request to the BSS. The MS will send either an IMSI or a TMSI (Temporary Mobile Subscriber Identity) to the BSS. The BSS will acknowledge the message. This acknowledgement only tells the MS that the BTS has received the message; it does not indicate that the location update has been processed. The BSS forwards the Location Update Request to the MSC/VLR. The MSC/VLR forwards the IMSI to the HLR and requests verification of the IMSI as well as Authentication Triplets (RAND, Kc, SRES). The HLR will forward the IMSI to the Authentication Center (AuC) and request authentication triplets. The AuC generates the triplets and sends them, along with the IMSI, back to the HLR. The HLR validates the IMSI by ensuring it is allowed on the network and is allowed subscriber services. It then forwards the IMSI and Triplets to the MSC/VLR. The MSC/VLR stores the SRES and the Kc and forwards the RAND to the BSS and orders the BSS to authenticate the MS. The BSS sends the MS an Authentication Request message. The only parameter sent in the message is the RAND. The MS uses the RAND to calculate the SRES and sends the SRES back to the BSS on the SDCCH in an Authentication Response. The BSS forwards the SRES up to the MSC/VLR. The MSC/VLR compares the SRES generated by the AuC with the SRES generated by the MS. If they match, then authentication is completed successfully.
The MSC/VLR forwards the Kc for the MS to the BSS. The Kc is NOT sent across the Air Interface to the MS. The BSS stores the Kc and forwards the Set Cipher Mode command to the MS. The CIPH_MOD_CMD only tells the MS which encryption to use (A5/X); no other information is included. The MS immediately switches to cipher mode using the A5 encryption algorithm. All transmissions are now enciphered. It sends a Ciphering Mode Complete message to the BSS. IMSI attach: The MSC/VLR sends a Location Updating Accept message to the BSS. It also generates a new TMSI for the MS. TMSI assignment is a function of the VLR. The BSS will either send the TMSI in the LOC_UPD_ACC message or it will send a separate TMSI Reallocation Command message. In both cases, since the Air Interface is now in cipher mode, the TMSI is not compromised. IMSI attach: The MS sends a TMSI Reallocation Complete message up to the MSC/VLR. The BSS instructs the MS to go into idle mode by sending it a Channel Release message. The BSS then unassigns the SDCCH. The MSC/VLR sends an Update Location message to the HLR. The HLR records which MSC/VLR the MS is currently in, so it knows which MSC to point to when it is queried for the location of the MS. IMSI detach: IMSI detach is the process of detaching an MS from the mobile network to which it was connected. The IMSI detach procedure informs the network that the Mobile Station is switched off or is unreachable. At power-down the MS requests a signaling channel. Once assigned, the MS sends an IMSI detach message to the VLR. When the VLR receives the IMSI detach message, the corresponding IMSI is marked as detached by setting the IMSI detach flag. The HLR is not informed of this, and the VLR does not send the MS an acknowledgement of the IMSI detach. If the radio link quality is poor when IMSI detach occurs, the VLR may not properly receive the IMSI detach request. Since an acknowledgement message is not sent to the MS, it does not make further attempts to send IMSI detach messages. Therefore, the GSM network considers the MS to be still attached. IMSI detach: Implicit IMSI detach The GSM air interface, designated Um, transmits network-specific information on specific broadcast channels. This information includes whether the periodic location update is enabled. If enabled, the MS must send location update requests at time intervals specified by the network. If the MS is switched off without having properly completed the IMSI detach procedure, the network will consider the MS as switched off or unreachable if no location update is made. In this situation the VLR performs an implicit IMSI detach. Location update: This procedure is used to update the location of the Mobile Station in the network. Cancel location: When a mobile station registers in a new VLR, the subscriber's data is deleted from the previous VLR in a cancel location procedure. The HLR initiates the procedure when it receives an 'update location' message from a VLR other than the one in which the MS was located at the time when its location information was last updated in the HLR database. Cancel location: The cancel location procedure can also be initiated with MML commands, for example those used for changing the area or deleting the MS from the HLR.
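The authentication exchange described above boils down to the SIM and the network computing SRES from the same subscriber key Ki and the same RAND challenge, with the MSC/VLR comparing the two results. The sketch below illustrates only that comparison; the real A3 algorithm is operator-specific and is not implemented here, so a keyed SHA-256 stands in for it, and the key and challenge values are made up:

```python
# Toy sketch of the GSM challenge-response authentication described above.
# The real A3 algorithm is operator-specific and is NOT implemented here;
# a keyed SHA-256 stands in for it purely for illustration. The Ki value and
# the RAND challenge are made up.
import hashlib
import os

def a3_placeholder(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A3: derive a 4-byte SRES from the subscriber key Ki and RAND."""
    return hashlib.sha256(ki + rand).digest()[:4]

ki = bytes.fromhex("00112233445566778899aabbccddeeff")  # subscriber key (SIM and AuC)
rand = os.urandom(16)                                    # challenge generated by the AuC

sres_network = a3_placeholder(ki, rand)  # computed by the AuC, stored by the MSC/VLR
sres_mobile = a3_placeholder(ki, rand)   # computed on the SIM from the same RAND

# The MSC/VLR compares the two values; a match completes authentication.
print("authenticated" if sres_mobile == sres_network else "rejected")
```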
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vax2os1** Vax2os1: In molecular biology, Vax2os1 is a long non-coding RNA. It is found on the opposite strand of the chromosome to the gene encoding the Vax2 homeobox transcription factor. In mice it is expressed in the developing ventral retina, where it is involved in the control of cell cycle progression of photoreceptor progenitor cells.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Internet Research Steering Group** Internet Research Steering Group: The Internet Research Task Force (IRTF) is an organization, overseen by the Internet Architecture Board, that focuses on longer-term research issues related to the Internet. A parallel organization, the Internet Engineering Task Force (IETF), focuses on the shorter-term issues of engineering and standards making. The IRTF promotes research of importance to the evolution of the Internet by creating focused, long-term research groups working on topics related to Internet protocols, applications, architecture and technology. Unlike the IETF, the task force does not set standards and there is no explicit outcome expected of IRTF research groups. Organization: The IRTF is composed of a number of focused and long-term research groups. These groups work on topics related to Internet protocols, applications, architecture and technology. Research groups have the stable long-term membership needed to promote the development of research collaboration and teamwork in exploring research issues. Participation is by individual contributors, rather than by representatives of organizations. The list of current groups can be found on the IRTF's homepage. Operations: The IRTF is managed by the IRTF chair in consultation with the Internet Research Steering Group (IRSG). The IRSG membership includes the IRTF chair, the chairs of the various Research Groups and other individuals (members at large) from the research community selected by the IRTF chair. The chair of the IRTF is appointed by the Internet Architecture Board (IAB) for a two-year term. Operations: These individuals have chaired the IRTF: David D. Clark, 1989–1992; Jon Postel, 1992–1995; Abel Weinrib, 1995–1999; Erik Huizer, 1999–2001; Vern Paxson, 2001–2005; Aaron Falk, 2005–2011; Lars Eggert, 2011–2017; Allison Mankin, 2017–2019; Colin Perkins, 2019–present. The IRTF chair is responsible for ensuring that research groups produce coherent, coordinated, architecturally consistent and timely output as a contribution to the overall evolution of the Internet architecture. In addition to the detailed tasks related to research groups outlined below, the IRTF chair may also from time to time arrange for topical workshops attended by the IRSG and perhaps other experts in the field. Operations: The RFC Editor publishes documents from the IRTF and its research groups on the IRTF stream. The detailed IRTF research group guidelines and procedures are described in RFC 2014. The procedures for publishing documents on the IRTF RFC stream are defined in RFC 5743. The concept of RFC streams is defined in RFC 4844. Operations: IRSG The IRTF is managed by the IRTF chair in consultation with the Internet Research Steering Group (IRSG). The IRSG membership includes the IRTF chair, the chairs of the various IRTF research groups and other individuals (members at large) from the research or IETF communities. IRSG members at large are chosen by the IRTF chair in consultation with the rest of the IRSG and on approval by the Internet Architecture Board. Operations: In addition to managing the research groups, the IRSG may from time to time hold topical workshops focusing on research areas of importance to the evolution of the Internet, or more general workshops to, for example, discuss research priorities from an Internet perspective. The IRSG also reviews and approves documents published as part of the IRTF document stream (RFC 5743).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Apyrexy** Apyrexy: In pathology, apyrexy, or apyrexia (Ancient Greek: απυρεξια, from α-, privative, and πυρεσσειν, 'to be in a fever', from πυρ, 'fire, fever'), is the normal interval or period of intermission in a fever, or the absence of a fever.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Many-body problem** Many-body problem: The many-body problem is a general name for a vast category of physical problems pertaining to the properties of microscopic systems made of many interacting particles. Microscopic here implies that quantum mechanics has to be used to provide an accurate description of the system. Many can be anywhere from three to infinity (in the case of a practically infinite, homogeneous or periodic system, such as a crystal), although three- and four-body systems can be treated by specific means (respectively the Faddeev and Faddeev–Yakubovsky equations) and are thus sometimes separately classified as few-body systems. Many-body problem: In general terms, while the underlying physical laws that govern the motion of each individual particle may (or may not) be simple, the study of the collection of particles can be extremely complex. In such a quantum system, the repeated interactions between particles create quantum correlations, or entanglement. As a consequence, the wave function of the system is a complicated object holding a large amount of information, which usually makes exact or analytical calculations impractical or even impossible. Many-body problem: This becomes especially clear by a comparison to classical mechanics. Imagine a single particle that can be described with k numbers (take for example a free particle described by its position and velocity vector, resulting in k = 6). In classical mechanics, n such particles can simply be described by k⋅n numbers. The dimension of the classical many-body system thus scales linearly with the number of particles n. In quantum mechanics, however, the many-body system is in general in a superposition of combinations of single-particle states: all the k^n different combinations have to be accounted for. The dimension of the quantum many-body system therefore scales exponentially with n, much faster than in classical mechanics. Many-body problem: Because the required numerical expense grows so quickly, simulating the dynamics of more than three quantum-mechanical particles is already infeasible for many physical systems. Thus, many-body theoretical physics most often relies on a set of approximations specific to the problem at hand, and ranks among the most computationally intensive fields of science. In many cases, emergent phenomena may arise which bear little resemblance to the underlying elementary laws. Many-body problems play a central role in condensed matter physics. Examples: Condensed matter physics (solid-state physics, nanoscience, superconductivity) Bose–Einstein condensation and Superfluids Quantum chemistry (computational chemistry, molecular physics) Atomic physics Molecular physics Nuclear physics (Nuclear structure, nuclear reactions, nuclear matter) Quantum chromodynamics (Lattice QCD, hadron spectroscopy, QCD matter, quark–gluon plasma) Approaches: Mean-field theory and extensions (e.g. Hartree–Fock, Random phase approximation) Dynamical mean field theory Many-body perturbation theory and Green's function-based methods Configuration interaction Coupled cluster Various Monte-Carlo approaches Density functional theory Lattice gauge theory Matrix product state Neural network quantum states Numerical renormalization group
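The scaling argument above can be made concrete with a few numbers. Taking spin-1/2 particles with a local dimension of k = 2 (an assumption made purely for illustration), the classical description needs k⋅n parameters while the quantum state needs k^n amplitudes:

```python
# Classical vs. quantum state-space size, as discussed above.
# Classically, n particles described by k numbers each need k*n numbers.
# Quantum mechanically, the wave function needs one amplitude per combination
# of single-particle states: k**n amplitudes (here k = 2, e.g. spin-1/2).
k = 2
for n in (1, 10, 20, 30, 40):
    classical = k * n
    quantum = k ** n
    print(f"n = {n:2d}: classical parameters = {classical:3d}, "
          f"quantum amplitudes = {quantum:,}")
# Already at n = 40 the quantum description needs about 10**12 amplitudes,
# which is why exact many-body calculations become impractical so quickly.
```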
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pucker** Pucker: Pucker is a line of fruit-flavored liqueurs made by the DeKuyper company. By volume it is 15% alcohol (30 proof) and is often used in mixed drinks.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Corneal hydrops** Corneal hydrops: Corneal hydrops is an uncommon complication seen in people with advanced keratoconus or other corneal ectatic disorders, and is characterized by stromal edema due to leakage of aqueous humor through a tear in Descemet's membrane. Although a hydrops usually causes increased scarring of the cornea, occasionally it will benefit a patient by creating a flatter cone, aiding the fitting of contact lenses. Corneal transplantation is not usually indicated during corneal hydrops. Signs and symptoms: The person experiences pain and a sudden severe clouding of vision, with the cornea taking on a translucent milky-white appearance known as a corneal hydrops. Diagnosis: Patients are recommended to take a sodium chloride eye drop solution as well as a dexamethasone solution for a period of 4–6 weeks; timeframes may vary depending on the severity of a patient's condition. Once the medication cycle is complete and the cloud clears, scarring will be left on the cornea. Management: The effect is normally temporary and after a period of six to eight weeks, the cornea usually returns to its former transparency. The recovery can be aided nonsurgically by bandaging with an osmotic saline solution. Topical non-steroidal anti-inflammatory agents may be used to reduce the pain and inflammation. Research: Corneal hydrops might be caused by a tear in the recently discovered Dua's layer, a 15 micron thick layer between the corneal stroma and Descemet's membrane. Harminder Dua suggests that this finding will affect corneal surgery, including penetrating keratoplasty, and the understanding of corneal dystrophies and pathologies, such as acute hydrops.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aptiganel** Aptiganel: Aptiganel (Cerestat; CNS-1102) is an unsuccessful drug candidate which acts as a noncompetitive NMDA antagonist and was under development by Cambridge Neuroscience, Inc as a treatment for stroke. It has neuroprotective effects and was researched for potential use in the treatment of stroke, but despite positive results in animal studies, human trials showed limited efficacy, as well as undesirable side effects such as sedation and hallucinations, and clinical development was ultimately not continued. The drug's failure led to the collapse of Cambridge Neuroscience in 1998 and its eventual sale to CeNeS Pharmaceuticals in 2000. Other guanidine compounds the company had been working on were CNS-1145 and CNS-1237. Synthesis: 1-Naphthylamine is reacted with cyanogen bromide to give 2. Treatment of this intermediate with 3-ethyl-N-methylaniline leads to addition to the cyano group and formation of the corresponding diaryl guanidine, aptiganel, 3.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electrical reactance** Electrical reactance: In electrical circuits, reactance is the opposition presented to alternating current by inductance and capacitance. Along with resistance, it is one of two elements of impedance; however, while both elements involve transfer of electrical energy, no dissipation of electrical energy as heat occurs in reactance; instead, the reactance stores energy until a quarter-cycle later when the energy is returned to the circuit. Greater reactance gives smaller current for the same applied voltage. Electrical reactance: Reactance is used to compute amplitude and phase changes of sinusoidal alternating current going through a circuit element. Like resistance, reactance is measured in ohms, with positive values indicating inductive reactance and negative indicating capacitive reactance. It is denoted by the symbol X. An ideal resistor has zero reactance, whereas ideal inductors and capacitors have zero resistance. As frequency increases, inductive reactance increases and capacitive reactance decreases. Comparison to resistance: Reactance is similar to resistance in that larger reactance leads to smaller currents for the same applied voltage. Further, a circuit made entirely of elements that have only reactance (and no resistance) can be treated the same way as a circuit made entirely of resistances. These same techniques can also be used to combine elements with reactance with elements with resistance, but complex numbers are typically needed. This is treated below in the section on impedance. Comparison to resistance: There are several important differences between reactance and resistance, though. First, reactance changes the phase so that the current through the element is shifted by a quarter of a cycle relative to the phase of the voltage applied across the element. Second, power is not dissipated in a purely reactive element but is stored instead. Third, reactances can be negative so that they can 'cancel' each other out. Finally, the main circuit elements that have reactance (capacitors and inductors) have a frequency-dependent reactance, unlike resistors which have the same resistance for all frequencies, at least in the ideal case. Comparison to resistance: The term reactance was first suggested by French engineer M. Hospitalier in L'Industrie Electrique on 10 May 1893. It was officially adopted by the American Institute of Electrical Engineers in May 1894. Capacitive reactance: A capacitor consists of two conductors separated by an insulator, also known as a dielectric. Capacitive reactance: Capacitive reactance is an opposition to the change of voltage across an element. Capacitive reactance XC is inversely proportional to the signal frequency f (or angular frequency ω) and the capacitance C. There are two choices in the literature for defining reactance for a capacitor. One is to use a uniform notion of reactance as the imaginary part of impedance, in which case the reactance of a capacitor is the negative number XC = −1/(ωC) = −1/(2πfC). Another choice is to define capacitive reactance as a positive number, XC = 1/(ωC) = 1/(2πfC). In this case, however, one needs to remember to add a negative sign for the impedance of a capacitor, i.e. ZC = −jXC. At f = 0, the magnitude of the capacitor's reactance is infinite, behaving like an open circuit (preventing any current from flowing through the dielectric). As frequency increases, the magnitude of reactance decreases, allowing more current to flow.
As f approaches ∞, the capacitor's reactance approaches 0, behaving like a short circuit. Capacitive reactance: The application of a DC voltage across a capacitor causes positive charge to accumulate on one side and negative charge to accumulate on the other side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero. Capacitive reactance: Driven by an AC supply (ideal AC current source), a capacitor will only accumulate a limited amount of charge before the potential difference changes polarity and the charge is returned to the source. The higher the frequency, the less charge will accumulate and the smaller the opposition to the current. Inductive reactance: Inductive reactance is a property exhibited by an inductor, and inductive reactance exists based on the fact that an electric current produces a magnetic field around it. In the context of an AC circuit (although this concept applies any time current is changing), this magnetic field is constantly changing as a result of current that oscillates back and forth. It is this change in magnetic field that induces another electric current to flow in the same wire (counter-EMF), in a direction such as to oppose the flow of the current originally responsible for producing the magnetic field (known as Lenz's Law). Hence, inductive reactance is an opposition to the change of current through an element. Inductive reactance: For an ideal inductor in an AC circuit, the inhibitive effect on change in current flow results in a delay, or a phase shift, of the alternating current with respect to alternating voltage. Specifically, an ideal inductor (with no resistance) will cause the current to lag the voltage by a quarter cycle, or 90°. Inductive reactance: In electric power systems, inductive reactance (and capacitive reactance, however inductive reactance is more common) can limit the power capacity of an AC transmission line, because power is not completely transferred when voltage and current are out-of-phase (detailed above). That is, current will flow for an out-of-phase system, however real power at certain times will not be transferred, because there will be points during which instantaneous current is positive while instantaneous voltage is negative, or vice versa, implying negative power transfer. Hence, real work is not performed when power transfer is "negative". However, current still flows even when a system is out-of-phase, which causes transmission lines to heat up due to current flow. Consequently, transmission lines can only heat up so much (or else they would physically sag too much, due to the heat expanding the metal transmission lines), so transmission line operators have a "ceiling" on the amount of current that can flow through a given line, and excessive inductive reactance can limit the power capacity of a line. Power providers utilize capacitors to shift the phase and minimize the losses, based on usage patterns. Inductive reactance: Inductive reactance XL is proportional to the sinusoidal signal frequency f and the inductance L, which depends on the physical shape of the inductor: XL = ωL = 2πfL. The average current flowing through an inductance L in series with a sinusoidal AC voltage source of RMS amplitude A and frequency f is equal to: IL = A/(ωL) = A/(2πfL).
Inductive reactance: Because a square wave has multiple amplitudes at sinusoidal harmonics, the average current flowing through an inductance L in series with a square wave AC voltage source of RMS amplitude A and frequency f is equal to: IL = Aπ²/(8ωL) = Aπ/(16fL), making it appear as if the inductive reactance to a square wave, about 16fL/π, was about 19% smaller than the reactance to the AC sine wave. Inductive reactance: Any conductor of finite dimensions has inductance; the inductance is made larger by the multiple turns in an electromagnetic coil. Faraday's law of electromagnetic induction gives the counter-emf E (voltage opposing current) due to a rate-of-change of magnetic flux density B through a current loop: E = −dΦB/dt. For an inductor consisting of a coil with N loops this gives: E = −N dΦB/dt. The counter-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency; this causes the increase in inductive reactance with frequency. Impedance: Both reactance X and resistance R are components of impedance Z: Z = R + jX, where: Z is the complex impedance, measured in ohms; R is the resistance, measured in ohms, the real part of the impedance, R = Re(Z); X is the reactance, measured in ohms, the imaginary part of the impedance, X = Im(Z); j is the square root of minus one, usually represented by i in non-electrical formulas; j is used so as not to confuse the imaginary unit with current, commonly represented by i. When both a capacitor and an inductor are placed in series in a circuit, their contributions to the total circuit impedance are opposite. Capacitive reactance XC and inductive reactance XL contribute to the total reactance X as follows: X = XL + XC = ωL − 1/(ωC), where: XL is the inductive reactance, measured in ohms; XC is the capacitive reactance, measured in ohms; ω is the angular frequency, 2π times the frequency in Hz. Hence: if X > 0, the total reactance is said to be inductive; if X = 0, then the impedance is purely resistive; if X < 0, the total reactance is said to be capacitive. Note however that if XL and XC are assumed both positive by definition, then the intermediary formula changes to a difference: X = XL − XC = ωL − 1/(ωC), but the ultimate value is the same. Impedance: Phase relationship The phase of the voltage across a purely reactive device (i.e. with zero parasitic resistance) lags the current by π/2 radians for a capacitive reactance and leads the current by π/2 radians for an inductive reactance. Without knowledge of both the resistance and reactance, the relationship between voltage and current cannot be determined. The origin of the different signs for capacitive and inductive reactance is the phase factor e^(±jπ/2) in the impedance: ZC = (1/(ωC))e^(−jπ/2) = j(−1/(ωC)) = jXC and ZL = ωLe^(jπ/2) = jωL = jXL. For a reactive component the sinusoidal voltage across the component is in quadrature (a π/2 phase difference) with the sinusoidal current through the component. The component alternately absorbs energy from the circuit and then returns energy to the circuit; thus a pure reactance does not dissipate power.
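The reactance and impedance formulas above translate directly into a short calculation. This is a minimal sketch; the component values and the frequency are arbitrary illustrative choices, not taken from the text:

```python
# Series RLC impedance using the formulas above:
#   XL = wL = 2*pi*f*L,  XC = -1/(wC) = -1/(2*pi*f*C),  Z = R + j(XL + XC)
# Component values and the frequency are arbitrary illustrative choices.
import cmath
import math

R = 10.0    # resistance, ohms
L = 10e-3   # inductance, henries
C = 100e-6  # capacitance, farads
f = 50.0    # frequency, hertz

omega = 2 * math.pi * f
X_L = omega * L          # inductive reactance (positive)
X_C = -1 / (omega * C)   # capacitive reactance (negative-sign convention)
X = X_L + X_C            # total reactance
Z = complex(R, X)        # impedance Z = R + jX

print(f"X_L = {X_L:+.2f} ohm, X_C = {X_C:+.2f} ohm, X = {X:+.2f} ohm")
print(f"|Z| = {abs(Z):.2f} ohm, phase = {math.degrees(cmath.phase(Z)):.1f} degrees")
```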
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cardboard furniture** Cardboard furniture: Cardboard furniture is classified as furniture designed and made from corrugated fibreboard (including inverted corrugated boards), heavy paperboard, honeycomb board, fibre tubes or a combination of these materials. The term "cardboard furniture" is somewhat misleading, since "cardboard" is a deprecated term, describing mainly corrugated cardboard but not being sufficiently specific to describe the various forms of paper-based boards used today to make furniture. Cardboard furniture: Generally cardboard furniture is lightweight and easy to assemble, without using screws or glue. History and development: The first use of cardboard as a material for engineered lightweight structures occurred at the 1954 Triennale in Milan, where Richard Buckminster Fuller displayed a geodesic dome made of cardboard. In 1968, German designer Peter Raacke demonstrated the possibilities of creating a cardboard chair within five minutes live on NBC, calling it the "first really modern piece of furniture". In 1972, Canadian-born architect Frank Gehry (b. 1929) introduced the first publicly well-received cardboard furniture series ("Easy Edges"), including the iconic Wiggle Chair. Confronted with some resistance at the time - e.g. the New York Times calling it "paper furniture for penny pinchers" - and simultaneously worrying that the furniture's popularity would overshadow his work as an architect, Gehry stopped production in 1973 and quit cardboard furniture altogether by 1982, eventually giving the rights to Vitra, where the Wiggle Chair is still manufactured to this day. In the 1990s, Japanese architect Shigeru Ban, recognized for his architecture using paper tubes, created furniture pieces which later resulted in his "Carta Collection" in 2016. Between 2001 and 2002, IKEA started to replace the core of selected designs with cardboard in order to reduce costs for the consumer and contribute to sustainability. In 2010, British designer Giles Miller created a pop-up store for Stella McCartney in Paris, using cardboard furniture. For the 2020 Tokyo Olympics, cardboard beds were used in the athletes' accommodations, creating a media discussion about whether these beds were made to prevent the athletes from having sexual intercourse. Consumer market: Cardboard furniture mainly is classified as ready-to-assemble furniture (RTA), taking advantage of the low weight of cardboard and the ability to flatpack easily. As of 2020, the RTA consumer market in the USA alone was estimated to be worth 13.8 billion dollars, with large companies being less dominant than widely expected but facing competition from regional chains, making drop shipping economically interesting for smaller companies. The 2021 European Union market is estimated to be worth over 15 billion euros. Furthermore, cardboard furniture generally appeals to a younger demographic, such as Millennials or Gen-Z, leaving potential for growth. At this point, none of the major furniture producers has entered the cardboard furniture market. Consumer market: However, whether cardboard furniture remains only a trend or not is still debated. Products and material: The market offers various cardboard furniture designs, such as beds, benches, chairs, shelves, stools, tables, and many more. Not all types of cardboard can be used for every type of furniture.
Generally, to make cardboard furniture, heavy paperboard, corrugated fibreboard (including inverted corrugated board), honeycomb cardboard and core material without a liner are all used. Also, the liner can alternate between Test- and Kraftliner, depending on the design. Perception of cardboard furniture: Cardboard as a material generally is viewed negatively when used as a primary material for furniture or as a building material in general. Several studies and research programs have been conducted, addressing not only structural questions but also questions of acceptance. Examples are programs such as BAMP at the University of Darmstadt, the CATSE program at ETH Zürich, Cardboard Technical Research and Developments at TU Delft and others. One potential reason is the widely fragmented cardboard industry: thin corrugated cardboard used for packaging is the material most potential consumers come into contact with, depreciating the material in consumers' perception in general without differentiating between cheap packaging material and high-performance paperboards. On the design side, a 2018 study at GuangDong University of Technology researched consumer perception of cardboard furniture depending on design, using eye-tracking technology. The researchers found that simpler, more familiar shapes are more likely to lead to a positive purchasing decision, with recognition of familiar shapes as a driving factor. However, this study was conducted in China; therefore the cultural background, in comparison to Western consumption behaviour, must be taken into consideration. In order to elevate the perception of cardboard furniture, German-Canadian design studio Nordwerk Design published construction plans for cardboard furniture for free in 2020, arguing that it requires a critical mass of consumers to lead to a shift in the general perception and that this can only be achieved by getting as much quality design out as possible. Literature: Dry, Graham. "Hans Günther Reinstein und seine Möbel aus Pappe". In: Kunst in Hessen und am Mittelrhein (1982) 22, pp. 131 ff. Martens, Bob. "Das Kartonmöbel". Wien: Technische Universität Wien, 1995, ISBN 3-901153-03-9 Minke, Gernot. "Bauen mit Pappe". In: DBZ (1977) 11, pp. 1497–1500. Schreibmayer, Peter. "Cardboards. Bauen mit Pappe." In: Architektur Aktuell (1991) 146, pp. 20–21. Digel, Marion. "Papermade. Wohnen mit Objekten aus Papier und Karton", München 2002, ISBN 3-576-11580-3 Leblois, Olivier. "Carton. Mobilier/Éco-Design/Architecture", Marseille 2008, ISBN 978-2-86364-186-6 Begleitbuch zur Ausstellung "Einrichten – Leben in Karton", Städtische Galerie Villa Zanders, Bergisch-Gladbach 2008 Cardboardbook (Ginko Press 2010), ISBN 978-1-58423-371-8 Soroka, Walter. "Illustrated Glossary of Packaging Terminology", 2008, ISBN 978-1-930268-27-2
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Beneficiary rule** Beneficiary rule: The beneficiary rule, commonly referred to as the "lucky dog" or "free pass", is a rule in some motor racing leagues allowing the closest lapped driver to the front of the field to gain back a lap when a caution is called. The driver is called to move to the end of the longest line of the cars at the end of that caution period. This rule was instituted to prevent drivers from racing back to the start/finish line when a caution was called. The rule was first implemented by NASCAR in the 2003 seasons of its three national series, and in all NASCAR-sanctioned series by 2005. Background: Before the rule was installed, drivers would "race back to the caution"; however, there was a gentlemen's agreement not to race, but to slow down and not pass, to allow slower cars to get their laps back. During the September 14, 2003, Sylvania 300 at New Hampshire International Speedway, Casey Mears came close to contacting the stalled car of Dale Jarrett while racing back to the line during a caution caused by Jarrett's crash. NASCAR chose to abandon the practice and stop racing immediately in the wake of the incident. The rule was created as a way of continuing the practice of yielding to the slower cars without sacrificing safety. Naming: The popular term for this rule, Lucky Dog, was first used by Benny Parsons in 2003 during a TNT broadcast at Dover International Speedway. His boothmate Wally Dallenbach Jr. concurred when Jimmy Spencer, who drove a car sponsored by Sirius Satellite Radio (whose company mascot was a dog named "Deejay Mongobot"), received the free pass, saying, "That IS a lucky dog." The term came to be used on all NBC and TNT broadcasts, along with the Performance Racing Network radio broadcasts. The term is also used by NASCAR SimRacing and iRacing, among other licensed NASCAR video games produced after the rule was implemented. Naming: Another oft-used term, Free pass, was first used by Mike Joy during the 2004 broadcast of the Subway 400 at North Carolina Speedway in Rockingham, North Carolina. Sometimes, Larry McReynolds, especially during the 2004 season, would refer to it as a pardon (sometimes accompanied by "from the Oval Office"; "Oval Office" is a term referring to the NASCAR mobile office and the proper series logo), and sometimes Darrell Waltrip used it only for the #38 Robert Yates Racing Ford and later #18 Joe Gibbs Racing Toyota, because that car is sponsored by Mars, Incorporated, which manufactures the Pedigree dog food brand. It is used by MRN Radio and by Fox Sports' main announcers, and appears in the Fox graphics package. (Note that starting in 2007, TNT's coverage is produced by Fox Sports; as part of the 2009 restart rule changes, the TNT graphics package states the driver with the Free Pass and Wave-Around before the restart.) On Speed Channel and ESPN the term Aaron's "Lucky Dog" (which is the Aaron's corporate mascot, and is part of its branding) is used. During ESPN broadcasts, it is used only when it is officially awarded. During ESPN broadcasts, Jerry Punch follows the code established by ESPN producer Neil Goldberg, using Free Pass (which was Goldberg's policy on Fox). In the NASCAR Pinty's Series, the term VTech Lucky Dog is a contingency award among drivers who were Beneficiaries during the race. The highest-finishing driver who had earned a Beneficiary Rule lap wins a CAD 1,000 prize.
Naming: Reception The use of the term lucky dog has often been criticized for not being informative by certain Fox staff members and by producer Neil Goldberg, who has since moved to ESPN. Mike Joy has mocked the term on Fox broadcasts, first in March 2005 during the Busch Series Telcel-Motorola 200 at the Autodromo Hermanos Rodriguez, where a dog ran across the track during a caution, and in April 2006 during the DirecTV 500 at Martinsville Speedway, where Joy referred to Ryan Newman's love of pets and said that despite his love of dogs, he hadn't been a lucky dog. During a 2004 conversation with fans on the Fox Sports website, Waltrip said, "You're not lucky, and you're not a dog. You just happen to be the recipient of a free pass. You get to go around the track and get back on the lead lap." Conversely, Goldberg told the NASCAR.COM Viewer's Guide in April 2004 that the term free pass suits the audience easily because "it easily bridged us into explaining of what it is each time it happens." He also mentioned that, for new viewers, the Fox terminology is easier to explain, especially since "we feel 'free pass' signals something happening better than throwing out the term 'lucky dog.'" Later, however, Fox agreed to use the term on-air after rent-to-own company Aaron's paid to sponsor the 'award', even creating a cartoon dog character to accompany the captioning. Conditions: The rule applies regardless of the number of laps a car is behind the leader. Furthermore, a driver may not receive a beneficiary rule lap in certain situations: The driver caused the situation bringing out the yellow. Conditions: The driver had been penalized one (or more) laps for rough driving. This rule may be waived if the driver passes the leader and regains his lap, and then is passed back. There are two restrictions on pitting with regard to the beneficiary rule: The driver pits with the lap-down cars, unless officials declare a quick yellow, when all cars may pit. Conditions: During that pit stop, it is the only lap on which that car may take fuel. This rule was implemented October 30, 2004, after Ryan Newman won the first race with the beneficiary rule by stopping for fuel multiple times after gaining the free pass during that caution period, resulting in a win. In 2006, NASCAR began to use this rule at road course races, unlike previous years, when it was not used at road course events. Conditions: In June 2009, double-file restart rule changes resulted in changes to the Beneficiary Rule: The beneficiary rule would now be applied during the entire race. Previously, it was discontinued when fewer than ten laps remained in the race. Conditions: After pit stops, once the starter signals one lap before the restart, the pit is closed, and all cars between the safety car and leader will be allowed to advance to the rear of the field. The leader will be the first car on the restart. Cars that were not waved around (such as lead lap cars, but not the beneficiary) will be allowed to pit. Conditions: Such a situation occurs when the leaders pit, but some lapped cars do not pit. This usually occurs when different pit strategies are used between leaders, or when a cycle of pit stops is interrupted by a caution; those cars which have pitted and are lapped will take the wave-around, restarting behind the leaders who pitted, and advancing one lap. The 2009 NASCAR rule change brings it in line with Grand-Am road racing, while rules where lapped cars between leaders may gain one lap were adopted in Formula One as of 2007.
The lapped-car rule in Formula One applies when the "lapped cars may overtake" signal appears on team monitors from race control. Conditions: In the IndyCar Series, lapped cars ahead of the leader following pit stops (which may happen if a lapped car does not pit during yellow when the lead lap cars do so) are allowed to move to the tail end of the lead lap on restarts on the one lap to go signal—which automatically closes the pit lane until the restart. This ensures that the leaders take the green flag without interference from lapped traffic. NASCAR follows the same policy with the 2009 change to the Beneficiary Rule, except that pit lane is only closed to those cars that were waved around the safety car to allow the leaders to start at the front. Statistics: According to Jayski.com, seven drivers have won a race after being the beneficiary in the NASCAR Cup Series alone, with two drivers doing it twice. Statistics: Ryan Newman, Dover, September 2003 and Michigan, June 2004; Mark Martin, Dover, June 2004; Jeff Gordon, Martinsville, April 2005; Kyle Busch, Phoenix, November 2005 and Talladega, April 2008; Kurt Busch, Bristol, March 2006; Kasey Kahne, Michigan, June 2006; Joey Logano, New Hampshire, June 2009; Kevin Harvick, Daytona, July 2010. Another notable win that occurred with a driver receiving the Beneficiary rule was when Aric Almirola and Denny Hamlin (who was not credited for the win) won the 2007 AT&T 250 in the NASCAR Xfinity Series after their driver change put them a lap down. Most beneficiaries accumulated in a race: Jamie McMurray, 6, Talladega, May 2014, finished 29th; Kyle Busch, 5, Watkins Glen, August 2006, finished 9th; David Reutimann, 5, Daytona, July 2008, finished 21st; Joe Nemechek, 5, New Hampshire, July 2013, finished 25th; Kevin Lepage, 4, Charlotte, October 2005, finished 21st; Bobby Labonte, 4, Talladega, April 2007, finished 20th; David Gilliland, 4, Talladega, October 2007, finished 27th; Kevin Lepage, 3, Charlotte, May 2005; Kevin Lepage, 3, Chicago, July 2005; Mike Wallace, 3, Bristol, August 2005; Dale Earnhardt Jr., 3, Bristol, August 2005; Kyle Petty, 3, Talladega, October 2005; Rusty Wallace, 3, Charlotte, October 2005, finished 24th; Terry Labonte, 3, Bristol, March 2006, finished 27th; Jeff Gordon, 3, Martinsville, April 2005, finished 1st; Jeff Gordon, 3, Indianapolis, August 2006, finished 16th; David Stremme, 3, Michigan, August 2006, finished 28th; David Ragan, 3, Martinsville, October 2006, finished 25th; Jimmie Johnson, 3, Pocono, August 2009, finished 13th. NOTE: Kyle Busch was the beneficiary in five consecutive caution periods at the 2006 AMD at the Glen; the beneficiary rule was not used on road course events in 2004. The first driver not on the lead lap—no matter how many laps they are behind the leader—gains one lap back per beneficiary; another reason the rule is somewhat unpopular. In Busch's case, he lost five laps from repairs caused by an oil leak, and upon returning to the track, gained all five laps back through the beneficiary rule because no other driver was between him and the lead lap on any of the caution periods.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The Open Hematology Journal** The Open Hematology Journal: The Open Hematology Journal is an open-access peer-reviewed medical journal covering hematology. It publishes reviews and letters in all areas of clinical, laboratory, and experimental hematology, including stem cells and blood disorders. Abstracting and indexing: The journal is indexed in: Chemical Abstracts EMBASE Scopus
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phorbol-12-myristate-13-acetate-induced protein 1** Phorbol-12-myristate-13-acetate-induced protein 1: Phorbol-12-myristate-13-acetate-induced protein 1 is a protein that in humans is encoded by the PMAIP1 gene, and is also known as Noxa. Noxa (Latin for damage) is a pro-apoptotic member of the Bcl-2 protein family. Bcl-2 family members can form hetero- or homodimers, and they act as anti- or pro-apoptotic regulators that are involved in a wide variety of cellular activities. The expression of Noxa is regulated by the tumor suppressor p53, and Noxa has been shown to be involved in p53-mediated apoptosis. Interactions: Noxa has been shown to interact with BCL2-like 1, Bcl-2, and MCL1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Four corners offense** Four corners offense: The four corners offense, also known as the four corner stall or the four corners delay offense, is an offensive strategy for stalling in basketball, primarily used in college basketball and high school basketball before the shot clock was instituted. Four players stand in the corners of the offensive half-court while the fifth player dribbles the ball in the middle. Most of the time, the point guard stays in the middle, but the middle player may periodically switch, temporarily, with one of the corner players. Usage: A four corners offense was most frequently used prior to the introduction of the shot clock in order to retain a lead by holding on to the ball until the clock ran out. The trailing team would be forced to spread their defense in hopes of getting a steal, which often allowed easy drives to the basket by the offense. The offense typically would seek to score, but only on extremely safe shots. The players in the corners might try to make backdoor cuts, or the point guard could drive the lane. Sometimes, one team would run the four corners offense throughout a game to reduce the number of possessions, in hopes of being able to defeat a superior opponent. Even if the offense wanted to hold the ball until the end of the game, some strategy was necessary since the rules did not (and still do not) let a player hold the ball for more than five seconds while being closely guarded. So, some mechanism to facilitate safe passes was needed, which this offense provided. There were (and still are) other slowdown strategies, but the four corners was the most well known. History: The offense was created by the early 1950s by John McLendon, head coach of the North Carolina Central Eagles, and popularized by longtime North Carolina Tar Heels head coach Dean Smith in the early 1960s. He used it to great effect with point guard Phil Ford; it was during Ford's career that some writers referred to the offense as the "Ford Corners." Basketball's "5 seconds closely guarded" rule was originally introduced partly to prevent stalling, and other rule changes were made to the college rules through the 1970s in hopes of eliminating stalling without using a shot clock as the National Basketball Association (NBA) had since its 1954–55 season. There was a perception that the NBA shot clock did not allow time to work the ball to get a good shot, and that it would reduce the opportunity for varied styles of play. History: However, by the early 1980s, fans were fed up. In the nationally televised 1982 Atlantic Coast Conference (ACC) championship game between the North Carolina Tar Heels and the Virginia Cavaliers, North Carolina held the ball for roughly the last seven minutes of the second half to nurse a small lead, eventually winning, 47–45. The next year, the ACC and other conferences introduced a shot clock experimentally, along with a three-point field goal to force defenses to spread out. In 1985, the National Collegiate Athletic Association (NCAA) adopted a shot clock nationally and added the three-pointer a year later. History: Tributes This style of offense was so distinctive that a local restaurant-bar in Chapel Hill, North Carolina, was named Four Corners in homage to Dean Smith, a local hero. On February 21, 2015, the Tar Heels, coached by Smith protege Roy Williams, successfully ran the offense on the opening possession against Georgia Tech as a tribute to the recently deceased Smith.
**Minimally manipulated cells** Minimally manipulated cells: Minimally manipulated cells are non-cultured (non-expanded) cells isolated from biological material by grinding, homogenization or selective collection of cells, and which undergo only minimal manipulation. Minimally manipulated cells are commonly used for the treatment of skin ulceration, alopecia, and arthritis. They can also be used for the intraoperative creation of tissue-engineered grafts in situ. International regulation: Minimally manipulated cells are allowed to be an object of manufacture and homologous transplantation in the USA and European countries. The criteria for "minimal manipulation" vary between countries. European regulations, according to the Reflection Paper on the classification of advanced therapy medicinal products of the European Medicines Agency, define "minimal manipulation" as processing that does not change the biological characteristics and functions of cells. In particular, enzymatic digestion of biomaterial is prohibited when cell-to-cell contacts are dissociated. According to the US regulations (US 21 Code of Federal Regulations § 1271.3(f)(1), Section 361) on human cells and tissues and tissue-based products (section 361 HCT/Ps), "minimal manipulation" is processing that does not alter the original relevant characteristics of the structural tissue relating to the tissue's utility for reconstruction, repair, or replacement. Russian regulations provide no specific definition for "minimally manipulated" cells; however, a working definition follows from the Order of the Russian Ministry of Health No. 1158n "On amending the list of transplantation objects". According to the Order, cells obtained from biomaterial by grinding, homogenization, enzymatic treatment, removal of unwanted components or selective collection of cells can be considered "minimally manipulated". Minimally manipulated cells are allowed to be an object of transplantation when they do not contain any substances other than water, crystalloids, and sterilizing, storage, and (or) specific preserving agents.
**Earworm** Earworm: An earworm or brainworm, also known as sticky music or stuck song syndrome, is a catchy or memorable piece of music or saying that continuously occupies a person's mind even after it is no longer being played or spoken about. Earworms are the most common form of involuntary musical imagery (INMI), but the INMI label is not restricted to earworms; musical hallucinations also fall into this category, although they are not the same phenomenon. Earworms are considered to be a common type of involuntary cognition. Phrases often used to describe earworms include "musical imagery repetition" and "involuntary musical imagery". The word earworm is a calque from the German Ohrwurm. The earliest known English usage is in Desmond Bagley's 1978 novel Flyaway, where the author points out the German origin of his coinage. Researchers who have studied and written about the phenomenon include Theodor Reik, Sean Bennett, Oliver Sacks, Daniel Levitin, James Kellaris, Philip Beaman, Vicky Williamson, Diana Deutsch, and, in a more theoretical perspective, Peter Szendy, along with many others. The phenomenon should be distinguished from palinacousis, a rare medical condition caused by damage to the temporal lobe of the brain that results in auditory hallucinations. Incidence and causes: Researcher Vicky Williamson at Goldsmiths, University of London, found in an uncontrolled study that earworms correlated with music exposure, but could also be triggered by experiences that trigger the memory of a song (involuntary memory), such as seeing a word that reminds one of the song, hearing a few notes from the song, or feeling an emotion one associates with the song. The list of songs collected in the study showed no particular pattern, other than popularity. According to research by James Kellaris, 98% of individuals experience earworms. Women and men experience the phenomenon equally often, but earworms tend to last longer for women and irritate them more. Kellaris produced statistics suggesting that songs with lyrics may account for 73.7% of earworms, whereas instrumental music may cause only 7.7%. In 2010, data published in the British Journal of Psychology directly addressed the subject, and its results support earlier claims that earworms are usually 15 to 30 seconds in length and are more common in those with an interest in music. Earworms can occur with 'positive' or 'negative' music. Positive music in this case would be music that sounds happy and/or calm; negative music would be the opposite, sounding angry or sad. Earworms are also not restricted to music with lyrics: in an experiment by Ella Moeck and her colleagues investigating whether the emotional valence of a piece affected the earworms it caused, only instrumental music was used. The experiment found that all participants experienced a similar number of earworms regardless of emotional valence, although the quality of the earworms did vary. The earworms arising from negatively valenced music brought about more distress and occurred less frequently than those produced by positively valenced music. Antidotes: Scientists at Western Washington University found that engaging working memory in moderately difficult tasks such as anagrams, puzzles or reading was an effective way of stopping earworms and of reducing their recurrence.
Another publication points out that melodic music tends to feature repeating rhythms that can lead to endless repetition, unless a climax is reached to break the cycle. Research reported in 2015 by the School of Psychology and Clinical Language Sciences at the University of Reading demonstrated that chewing gum could help by similarly blocking the sub-vocal rehearsal component of auditory short-term or "working" memory associated with generating and manipulating auditory and musical images. It has also been suggested to ask oneself why one is experiencing this particular song. Another suggested remedy is to find a "cure song" or "cure tune" to stop the repeating music and get the earworm out of one's head. "God Save the King" is cited as a very popular and helpful choice of cure song; "Happy Birthday" is another popular choice. Individual songs may become less likely to cause an earworm as their exciting effect fades through excessive repetition. Antidotes: Listening to the tune at a different or lower tempo or a lower pitch, or to a remixed version if one exists, can be an antidote. Listening to the tune from start to finish can also help: since earworms are usually only a fragment of music, playing the tune all the way through can help break the loop. Notable cases: Jean Harris, who murdered Herman Tarnower, was obsessed with the song "Put the Blame on Mame", which she first heard in the film Gilda. She would recall it regularly for over 33 years and could hold a conversation while playing it in her mind. In popular culture: Mark Twain's 1876 story "A Literary Nightmare" (also known as "Punch, Brothers, Punch") is about a jingle that one can get rid of only by transferring it to another person. In popular culture: In 1943 Henry Kuttner published the short story "Nothing but Gingerbread Left" about a song engineered to damage the Nazi war effort, culminating in Adolf Hitler being unable to continue a speech. In Alfred Bester's 1953 novel The Demolished Man, the protagonist uses a jingle specifically crafted to be a catchy, irritating nuisance as a tool to block mind readers from reading his mind. In popular culture: In Arthur C. Clarke's 1957 science fiction short story "The Ultimate Melody", a scientist, Gilbert Lister, develops the ultimate melody – one that so compels the brain that its listener becomes completely and forever enraptured by it. As the storyteller, Harry Purvis, explains, Lister theorized that a great melody "made its impression on the mind because it fitted in with the fundamental electrical rhythms going on in the brain." Lister attempts to abstract from the hit tunes of the day a melody that fits in so well with the electrical rhythms that it dominates them completely. He succeeds and is found in a catatonic state from which he never awakens. In Fritz Leiber's Hugo Award-nominated short story "Rump-Titty-Titty-Tum-TAH-Tee" (1959), the title describes a rhythmic drumbeat so powerful that it rapidly spreads to all areas of human culture, until a counter-rhythm is developed that acts as an antidote. In Joe Simpson's 1988 book Touching the Void, he describes not being able to get the tune "Brown Girl in the Ring" by Boney M out of his head. The book tells of his survival, against the odds, after a mountaineering accident in the remote Siula Grande region of South America.
Alone, badly injured, and in a semi-delirious state, he is confused as to whether he is imagining the music or really hearing it. In the Dexter's Laboratory episode titled "Head Band", a contagious group of viruses force their host to sing what they are saying to the same "boy band" tune. The only way to be cured of the Boy Band Virus is for the viruses to break up and start their own solo careers. In the SpongeBob SquarePants episode titled "Earworm", SpongeBob gets the "Musical Doodle" song stuck in his head, giving him an earworm, which ultimately turns out to be an actual worm, which is removed by his friends singing or playing other songs. In popular culture: In The Lego Movie 2: The Second Part, there is a scene in which most of the film's characters are subjected to "Catchy Song" and all except Lucy dance to it, while simultaneously the denizens of Harmony Town sing it to Emmet and Rex. Lucy/Wildstyle avoids being "brainwashed" by the song by breaking one of the speakers and using some of its pieces to build earmuffs for herself before escaping via air ducts, while Emmet and Rex escape in a similar fashion. In popular culture: E. B. White's 1933 satirical short story "The Supremacy of Uruguay" (reprinted in Timeless Stories for Today and Tomorrow) relates a fictional episode in the history of Uruguay where a powerful earworm is discovered in a popular American song. The Uruguayan military builds a squadron of pilotless aircraft armed with phonographs playing a highly amplified recording of the earworm, and conquers the entire world by reducing the citizens of all nations to mindless insanity. "[T]he peoples were hopelessly mad, ravaged by an ineradicable noise ... No one could hear anything except the noise in his own head." Key characteristics: According to research published in 2016 by the American Psychological Association, certain characteristics make songs more likely to become earworms. Earworm songs usually have a fast-paced tempo and an easy-to-remember melody. However, earworms also tend to have unusual intervals or repetitions that make them stand out from other songs. Earworms also tend to be played on the radio more than other songs and are usually featured at the top of the charts. The chorus of a song is one of the most reported causes of earworms. The most frequently named earworms during this study were the following: "Bad Romance" by Lady Gaga, "Can't Get You Out of My Head" by Kylie Minogue, "Don't Stop Believin'" by Journey, "Somebody That I Used to Know" by Gotye, "Moves like Jagger" by Maroon 5, "California Gurls" by Katy Perry, "Bohemian Rhapsody" by Queen, "Alejandro" by Lady Gaga, and "Poker Face" by Lady Gaga. Susceptible traits: Kazumasa Negishi and Takahiro Sekiguchi conducted a study to determine whether specific traits make a person more or less susceptible to earworms or involuntary musical imagery. The participants in the study were assessed on obsessive-compulsive tendencies, the Big Five personality traits, and musical expertise. Negishi and Sekiguchi found that some obsessive-compulsive traits, such as intrusive thoughts, played a role in experiencing earworms, while compulsive washing did not. In terms of the Big Five personality traits, neuroticism significantly predicted occurrences of earworms. Musical expertise, measured as musical sophistication, also affected the occurrence of earworms.
Tools used in data gathering: One tool used to gather data on involuntary musical imagery (INMI)—and, more specifically, earworms—is the Involuntary Musical Imagery Scale, created from research compiled by George Floridou, Victoria Williamson, and Daniel Müllensiefen. It uses four factors to measure different experiences surrounding earworms and INMI in general: 'Negative Valence', 'Movement', 'Personal Reflections', and 'Help'. Negative Valence is the category that measures the subjective response to the INMI experience. Movement is a relatively new aspect applied to INMI; it is essentially the INMI experience accompanied by embodied responses, which can include singing, humming, and dancing. Personal Reflections covers personal qualities, such as unrelated thoughts, that accompany the INMI but are not directly related to its valence. Help is the category that captures the beneficial and constructive aspects of INMI experiences, which could potentially reflect similarities with the characteristics of unfocused music listening and task-unrelated thought.
**Coastal upwelling of the South Eastern Arabian Sea** Coastal upwelling of the South Eastern Arabian Sea: Coastal upwelling of the South Eastern Arabian Sea (SEAS) is a typical eastern boundary upwelling system (EBUS) similar to the California, Benguela, Canary Island and Peru-Chile systems. Unlike those four, the SEAS upwelling system needs to be explored in a more focused manner to clearly understand the chemical and biological responses associated with this coastal process. The coastal upwelling in the south-eastern Arabian Sea occurs seasonally. It begins in mid-spring (mid-May) along the southern tip of India and, as the season advances, it spreads northward. It is not a uniform wind-driven upwelling system, but is driven by various factors. While at Cape Comorin it can be modeled as purely wind-driven, as the phenomenon moves up the west coast of India, longshore wind stresses play an increasing role, as do atmospheric effects from the Bay of Bengal, such as Kelvin and Rossby waves.
**Retrovirus** Retrovirus: A retrovirus is a type of virus that inserts a DNA copy of its RNA genome into the DNA of a host cell that it invades, thus changing the genome of that cell. After invading a host cell's cytoplasm, the virus uses its own reverse transcriptase enzyme to produce DNA from its RNA genome, the reverse of the usual pattern, thus retro (backwards). The new DNA is then incorporated into the host cell genome by an integrase enzyme, at which point the retroviral DNA is referred to as a provirus. The host cell then treats the viral DNA as part of its own genome, transcribing and translating the viral genes along with the cell's own genes, producing the proteins required to assemble new copies of the virus. Many retroviruses cause serious diseases in humans, other mammals, and birds. Retroviruses have many subfamilies in three basic groups. Oncoretroviruses (cancer-causing retroviruses) include human T-lymphotropic virus (HTLV), causing a type of leukemia in humans, and murine leukemia viruses (MLVs) in mice. Retrovirus: Lentiviruses (slow viruses) include HIV-1 and HIV-2, the cause of acquired immune deficiency syndrome (AIDS) in humans. Retrovirus: Spumaviruses (foamy viruses) are benign and not linked to any disease in humans or animals. The specialized DNA-infiltration enzymes of retroviruses make them valuable research tools in molecular biology, and they have been used successfully in gene delivery systems. Evidence from endogenous retroviruses (inherited provirus DNA in animal genomes) suggests that retroviruses have been infecting vertebrates for at least 450 million years. Structure: Virions (viruses in the form of independent particles) of retroviruses are enveloped particles about 100 nm in diameter. The outer lipid envelope contains glycoprotein. The virions also contain two identical single-stranded RNA molecules 7–10 kilobases in length. The two molecules are present as a dimer, formed by base pairing between complementary sequences. Interaction sites between the two RNA molecules have been identified as a "kissing stem-loop". Although virions of different retroviruses do not have the same morphology or biology, all the virion components are very similar. The main virion components are: Envelope: composed of lipids (obtained from the host plasma membrane during the budding process) as well as glycoprotein encoded by the env gene. The retroviral envelope serves three distinct functions: protection from the extracellular environment via the lipid bilayer, enabling the retrovirus to enter/exit host cells through endosomal membrane trafficking, and the ability to directly enter cells by fusing with their membranes. Structure: RNA: consists of a dimer of RNA. It has a cap at the 5' end and a poly(A) tail at the 3' end. Genomic RNA (gRNA) is produced as a result of host RNA polymerase II (Pol II) activity and, through the addition of a 5' methyl cap and a 3' poly-A tail, is processed like a host mRNA. The RNA genome also has terminal noncoding regions, which are important in replication, and internal regions that encode virion proteins for gene expression. The 5' end includes four regions, which are R, U5, PBS, and L. The R region is a short repeated sequence at each end of the genome used during reverse transcription to ensure correct end-to-end transfer in the growing chain. U5, on the other hand, is a short unique sequence between R and PBS. PBS (primer binding site) consists of 18 bases complementary to the 3' end of the tRNA primer.
The L region is an untranslated leader region that provides the signal for packaging of the genomic RNA. The 3' end includes three regions, which are PPT (polypurine tract), U3, and R. The PPT is a primer for plus-strand DNA synthesis during reverse transcription. U3 is a sequence between PPT and R, which serves as a signal that the provirus can use in transcription. R is the terminal repeated sequence at the 3' end. Structure: Proteins: consist of gag proteins, protease (PR), pol proteins, and env proteins. Structure: Group-specific antigen (gag) proteins are major components of the viral capsid, of which there are about 2,000–4,000 copies per virion. Gag possesses two nucleic acid binding domains, including matrix (MA) and nucleocapsid (NC). Specifically recognizing, binding, and packaging the retroviral genomic RNA into assembling virions is one of the important functions of the Gag protein. Gag interactions with cellular RNAs also regulate aspects of assembly. The expression of gag alone gives rise to assembly of immature virus-like particles that bud from the plasma membrane. In all retroviruses the Gag protein is the precursor to the internal structural protein. Structure: Protease (pro) is expressed differently in different viruses. It functions in proteolytic cleavages during virion maturation to make mature gag and pol proteins. Retroviral Gag proteins are responsible for coordinating many aspects of virion assembly. Pol proteins are responsible for synthesis of viral DNA and integration into host DNA after infection. Structure: Env proteins play a role in association and entry of virions into the host cell. Possessing a functional copy of an env gene is what makes retroviruses distinct from retroelements. The ability of the retrovirus to bind to its target host cell using specific cell-surface receptors is given by the surface component (SU) of the Env protein, while the ability of the retrovirus to enter the cell via membrane fusion is imparted by the membrane-anchored trans-membrane component (TM). Thus it is the Env protein that enables the retrovirus to be infectious. Structure: Several protein species are associated with the RNA in the retrovirus virion. Nucleocapsid (NC) protein is the most abundant and coats the RNA, while other proteins are present in much smaller amounts and have enzymatic activities. Enzyme activities present in the retrovirus virion include RNA-dependent DNA polymerase (reverse transcriptase, RT), DNA-dependent DNA polymerase, ribonuclease H (RNase H), integrase and protease. The retroviral RNases H encoded by all retroviruses, including HIV, have been demonstrated to show three different modes of cleavage: internal, DNA 3′ end-directed, and RNA 5′ end-directed. All three modes of cleavage play roles in reverse transcription, so the RNase H activity is essential in several aspects of reverse transcription. The use of RNase H activity during retroviral replication represents a unique strategy for copying a single-stranded RNA genome into double-stranded DNA, since the minus-strand DNA is complementary to, and base-pairs with, the retroviral genome in the first cycle of DNA synthesis. The RNase H ribonuclease activity is also required in the retroviral life cycle, since it generates and removes the primers required by the reverse transcriptase (RT) for the initiation of DNA synthesis. Retroviruses lacking RNase H activity are noninfectious. Structure: Genomic structure The retroviral genome is packaged into viral particles.
These viral particles are dimers of single-stranded, positive-sense, linear RNA molecules. Retroviruses (and orterviruses in general) follow a layout of 5'–gag–pro–pol–env–3' in the RNA genome. gag and pol encode polyproteins that manage the capsid and replication, respectively. The pol region encodes enzymes necessary for viral replication, such as reverse transcriptase, protease and integrase. Depending on the virus, the genes may overlap or fuse into larger polyprotein chains. Some viruses contain additional genes. The lentivirus genus, the spumavirus genus, the HTLV / bovine leukemia virus (BLV) genus, and a newly introduced fish virus genus are retroviruses classified as complex. These viruses have genes called accessory genes, in addition to the gag, pro, pol and env genes. Accessory genes are located between pol and env, downstream from env (including the U3 region of the LTR), or within env and overlapping portions. While accessory genes have auxiliary roles, they also coordinate and regulate viral gene expression. Structure: In addition, some retroviruses may carry genes called oncogenes or onc genes from another class. Retroviruses with these genes (also called transforming viruses) are known for their ability to quickly cause tumors in animals and transform cells in culture into an oncogenic state. The polyproteins are cleaved into smaller proteins, each with its own function. The nucleotides encoding them are known as subgenes. Multiplication: When retroviruses have integrated their own genome into the germ line, their genome is passed on to the following generation. These endogenous retroviruses (ERVs), contrasted with exogenous ones, now make up 5–8% of the human genome. Most insertions have no known function and are often referred to as "junk DNA". However, many endogenous retroviruses play important roles in host biology, such as control of gene transcription, cell fusion during placental development of an embryo, and resistance to exogenous retroviral infection. Endogenous retroviruses have also received special attention in the research of immunology-related pathologies, such as autoimmune diseases like multiple sclerosis, although endogenous retroviruses have not yet been proven to play any causal role in this class of disease. While transcription was classically thought to occur only from DNA to RNA, reverse transcriptase transcribes RNA into DNA. The term "retro" in retrovirus refers to this reversal (making DNA from RNA) of the usual direction of transcription. It still obeys the central dogma of molecular biology, which states that information can be transferred from nucleic acid to nucleic acid but cannot be transferred back from protein to either protein or nucleic acid. Reverse transcriptase activity outside of retroviruses has been found in almost all eukaryotes, enabling the generation and insertion of new copies of retrotransposons into the host genome. These inserts are transcribed by enzymes of the host into new RNA molecules that enter the cytosol. Next, some of these RNA molecules are translated into viral proteins. The proteins encoded by the gag and pol genes are translated from genome-length mRNAs into Gag and Gag–Pol polyproteins. For example, the gag gene is translated into molecules of the capsid protein, and the pol gene into molecules of reverse transcriptase. Retroviruses need far more Gag protein than Pol protein and have developed advanced systems to synthesize the required amount of each.
As an example, after Gag synthesis nearly 95 percent of the ribosomes terminate translation, while the remaining ribosomes continue translation to synthesize Gag–Pol. The env gene is translated from spliced mRNAs in the rough endoplasmic reticulum into molecules of the envelope protein, where glycosylation begins. When the envelope protein molecules are carried to the Golgi complex, they are divided into surface glycoprotein and transmembrane glycoprotein by a host protease. These two glycoprotein products stay in close association and are transported to the plasma membrane after further glycosylation. It is important to note that a retrovirus must "bring" its own reverse transcriptase in its capsid; otherwise it is unable to use the enzymes of the infected cell to carry out the task, owing to the unusual nature of producing DNA from RNA. Drugs designed as protease and reverse-transcriptase inhibitors are made to target specific sites and sequences within their respective enzymes. However, these drugs can quickly become ineffective because the gene sequences that code for the protease and the reverse transcriptase mutate rapidly. These base changes alter specific codons and sites within the enzymes, and drug targeting is thereby evaded through loss of the sites that the drug actually targets. Because reverse transcription lacks the usual proofreading of DNA replication, a retrovirus mutates very often. This enables the virus to grow resistant to antiviral pharmaceuticals quickly and impedes the development of effective vaccines and inhibitors for the retrovirus. One difficulty faced with some retroviruses, such as the Moloney retrovirus, is the requirement for cells to be actively dividing for transduction. As a result, cells such as neurons are very resistant to infection and transduction by retroviruses. This gives rise to a concern that insertional mutagenesis due to integration into the host genome might lead to cancer or leukemia. This is unlike Lentivirus, a genus of Retroviridae, whose members are able to integrate a DNA copy of their genome into the genome of non-dividing host cells. Multiplication: Recombination Two RNA genomes are packaged into each retrovirus particle, but, after an infection, each virus generates only one provirus. After infection, reverse transcription occurs and this process is accompanied by recombination. Recombination involves template strand switching between the two genome copies (copy choice recombination) during reverse transcription. From 5 to 14 recombination events per genome occur at each replication cycle. Genetic recombination appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes. Transmission: Retroviruses can be transmitted cell-to-cell, through bodily fluids, or via the airborne route, as with the Jaagsiekte sheep retrovirus. Provirus: The DNA formed after reverse transcription (the provirus) is longer than the RNA genome because each terminus carries the U3-R-U5 sequence called the long terminal repeat (LTR). Thus, the 5' terminus has an extra U3 sequence, while the 3' terminus has an extra U5 sequence. LTRs are able to send signals for vital tasks to be carried out, such as initiation of RNA production or management of the rate of transcription. In this way, LTRs can control replication and hence the entire progress of the viral cycle. Although located in the nucleus, non-integrated retroviral cDNA is a very weak substrate for transcription.
For this reason, an integrated provirus is necessary for permanent and effective expression of retroviral genes. This DNA can be incorporated into the host genome as a provirus that can be passed on to progeny cells. The retroviral DNA is inserted at random into the host genome; because of this, it can be inserted into oncogenes. In this way some retroviruses can convert normal cells into cancer cells. Some proviruses remain latent in the cell for a long period of time before being activated by a change in the cellular environment. Early evolution: Studies of retroviruses led to the first demonstrated synthesis of DNA from RNA templates, a fundamental mode for transferring genetic material that occurs in both eukaryotes and prokaryotes. It has been speculated that the RNA-to-DNA transcription processes used by retroviruses may have first caused DNA to be used as genetic material. In this model, the RNA world hypothesis, cellular organisms adopted the more chemically stable DNA when retroviruses evolved to create DNA from RNA templates. An estimate of the date of evolution of the foamy-like endogenous retroviruses placed the time of the most recent common ancestor at more than 450 million years ago. Gene therapy: Gammaretroviral and lentiviral vectors for gene therapy have been developed that mediate stable genetic modification of treated cells by chromosomal integration of the transferred vector genomes. This technology is of use not only for research purposes, but also for clinical gene therapy aiming at the long-term correction of genetic defects, e.g., in stem and progenitor cells. Retroviral vector particles with tropism for various target cells have been designed. Gammaretroviral and lentiviral vectors have so far been used in more than 300 clinical trials, addressing treatment options for various diseases. Retroviral mutants can also be engineered to make transgenic mouse models for studying various cancers and their metastasis. Cancer: Retroviruses that cause tumor growth include Rous sarcoma virus and mouse mammary tumor virus. Cancer can be triggered by proto-oncogenes that were mistakenly incorporated into proviral DNA or by the disruption of cellular proto-oncogenes. Rous sarcoma virus contains the src gene, which triggers tumor formation. It was later found that a similar gene in cells is involved in cell signaling and was most likely excised with the proviral DNA. Nontransforming viruses can randomly insert their DNA into proto-oncogenes, disrupting the expression of proteins that regulate the cell cycle. The promoter of the proviral DNA can also cause overexpression of regulatory genes. Cancer: Retroviruses can cause diseases such as cancer and immunodeficiency. If viral DNA is integrated into host chromosomes, it can lead to permanent infection. It is therefore important to understand the body's response to retroviruses. Exogenous retroviruses are especially associated with pathogenic diseases. For example, mice carry mouse mammary tumor virus (MMTV), a retrovirus that passes to newborn mice through mammary milk; mice carrying the virus develop mammary cancer at around 6 months of age. In addition, human T-cell leukemia virus type 1 (HTLV-1) has been recognized in humans for many years; it is estimated that this retrovirus causes leukemia when carriers reach the ages of 40 to 50. It has a replicative structure that can induce cancer. In addition to the usual gene sequence of retroviruses, HTLV-1 contains a fourth region, PX.
This region encodes the Tax, Rex, p12, p13 and p30 regulatory proteins. The Tax protein initiates the leukemic process and organizes the transcription of all viral genes in the integrated HTLV proviral DNA. Classification: Exogenous Exogenous retroviruses are infectious RNA- or DNA-containing viruses that are transmitted from one organism to another. In the Baltimore classification system, which groups viruses together based on their manner of messenger RNA synthesis, they are classified into two groups: Group VI: single-stranded RNA viruses with a DNA intermediate in their life cycle, and Group VII: double-stranded DNA viruses with an RNA intermediate in their life cycle. Classification: Group VI viruses All members of Group VI use virally encoded reverse transcriptase, an RNA-dependent DNA polymerase, to produce DNA from the initial virion RNA genome. This DNA is often integrated into the host genome, as in the case of retroviruses and pseudoviruses, where it is replicated and transcribed by the host. Classification: Group VI includes: Order Ortervirales Family Belpaoviridae Family Metaviridae Family Pseudoviridae Family Retroviridae – Retroviruses, e.g. HIV Family Caulimoviridae – a Group VII virus family (see below). The family Retroviridae was previously divided into three subfamilies (Oncovirinae, Lentivirinae, and Spumavirinae), but is now divided into two: Orthoretrovirinae and Spumaretrovirinae. The term oncovirus is now commonly used to describe a cancer-causing virus. This family now includes the following genera: Subfamily Orthoretrovirinae: Genus Alpharetrovirus; including Avian leukosis virus and Rous sarcoma virus Genus Betaretrovirus; including Mouse mammary tumour virus Genus Gammaretrovirus; including Murine leukemia virus and Feline leukemia virus Genus Deltaretrovirus; including Bovine leukemia virus and the cancer-causing Human T-lymphotropic virus Genus Epsilonretrovirus Genus Lentivirus; including Human immunodeficiency virus 1 and Simian and Feline immunodeficiency viruses Subfamily Spumaretrovirinae: Genus Bovispumavirus Genus Equispumavirus Genus Felispumavirus Genus Prosimiispumavirus Genus Simiispumavirus. Note that according to ICTV 2017, genus Spumavirus has been divided into five genera, and its former type species Simian foamy virus is now upgraded to genus Simiispumavirus with no fewer than 14 species, including the new type species Eastern chimpanzee simian foamy virus. Classification: Group VII viruses Both families in Group VII have DNA genomes contained within the invading virus particles. The DNA genome is transcribed into both mRNA, for use as a transcript in protein synthesis, and pre-genomic RNA, for use as the template during genome replication. Virally encoded reverse transcriptase uses the pre-genomic RNA as a template for the creation of genomic DNA. Classification: Group VII includes: Family Caulimoviridae — e.g. Cauliflower mosaic virus Family Hepadnaviridae — e.g. Hepatitis B virus. The latter family is closely related to the newly proposed Family Nackednaviridae — e.g. African cichlid nackednavirus (ACNDV), formerly named African cichlid hepatitis B virus (ACHBV). The families Belpaoviridae, Metaviridae, Pseudoviridae, Retroviridae, and Caulimoviridae together constitute the order Ortervirales.
Classification: Endogenous Endogenous retroviruses are not formally included in this classification system, and are broadly grouped into three classes on the basis of relatedness to exogenous genera: Class I are most similar to the gammaretroviruses; Class II are most similar to the betaretroviruses and alpharetroviruses; and Class III are most similar to the spumaviruses. Controversy: Retroviruses have been the focus of several recent claims and assertions that have been largely discredited by the scientific community. An initial study in 2009 appeared to present new findings that might change some of the established knowledge on this topic. However, although later research disproved some of the claims made about retroviruses, several controversial figures continue to make claims that are generally considered to lack any valid basis or supporting consensus. Treatment: Antiretroviral drugs are medications for the treatment of infection by retroviruses, primarily HIV. Different classes of antiretroviral drugs act on different stages of the HIV life cycle. The combination of several (typically three or four) antiretroviral drugs is known as highly active antiretroviral therapy (HAART). Treatment of veterinary retroviruses: Feline leukemia virus and feline immunodeficiency virus infections are treated with biologics, including the only immunomodulator currently licensed for sale in the United States, Lymphocyte T-Cell Immune Modulator (LTCI).
**Turning vanes (HVAC)** Turning vanes (HVAC): HVAC turning vanes are sheet-metal devices installed inside mechanical ductwork to direct air smoothly where the duct changes direction, reducing resistance and turbulence.
**DNase footprinting assay** DNase footprinting assay: A DNase footprinting assay is a DNA footprinting technique from molecular biology/biochemistry that detects DNA–protein interactions by exploiting the fact that a protein bound to DNA will often protect that DNA from enzymatic cleavage. This makes it possible to locate a protein binding site on a particular DNA molecule. The method uses an enzyme, deoxyribonuclease (DNase, for short), to cut radioactively end-labeled DNA, followed by gel electrophoresis to detect the resulting cleavage pattern. DNase footprinting assay: For example, the DNA fragment of interest may be PCR amplified using a 32P 5'-labeled primer, with the result being many DNA molecules carrying a radioactive label on one end of one strand of each double-stranded molecule. Cleavage by DNase will produce fragments. The fragments that are shorter with respect to the 32P-labeled end will migrate farther down the gel than the longer fragments. The gel is then used to expose a special photographic film. DNase footprinting assay: The cleavage pattern of the DNA in the absence of a DNA-binding protein, typically referred to as free DNA, is compared to the cleavage pattern of DNA in the presence of a DNA-binding protein. If the protein binds DNA, the binding site is protected from enzymatic cleavage. This protection results in a clear area on the gel which is referred to as the "footprint". DNase footprinting assay: By varying the concentration of the DNA-binding protein, the binding affinity of the protein can be estimated according to the minimum concentration of protein at which a footprint is observed. This technique was developed by David J. Galas and Albert Schmitz at Geneva in 1977.
**Behavioral engineering** Behavioral engineering: Behavioral engineering, also called applied behavior analysis, is intended to identify issues associated with the interface of technology and the human operators in a system and to generate recommended design practices that consider the strengths and limitations of the human operators. Behavioral engineering: "The behavior of the individual has been shaped according to revelations of 'good conduct', never as the result of experimental study." Watson wrote in 1924: "Behaviorism ... holds that the subject matter of human psychology is the behavior of the human being. Behaviorism claims that consciousness is neither a definite nor a usable concept." This approach is often used in organizational behavior management, which is behavior analysis applied to organizations, and in behavioral community psychology. Success of approach: Behavioral engineering has been used to increase safety in organizations (see Behavior-based safety). Other areas include improving performance in organizations and reducing problems in prisons. In addition, it has had some success in social service systems, in understanding the long-term effects of humans in space and developing the human landscape, in understanding political behavior in organizations, and in understanding how organizations function. It has also been successful in helping individuals to set goals and in managing pay systems. Behavioral engineering has also been applied to social welfare policy. In the school system, behavioral engineering has inspired two programs of behavior management based on the principles of applied behavior analysis in a social learning format. The programs were successful in reducing disruption in children with conduct disorders, as well as improving their academic achievement. The programs showed good maintenance and generalization of treatment effects when the children were returned to the natural classroom. In addition, the programs were successfully replicated. Behavior-analytic programs continued to be used to control truancy and reduce delinquency. The journal Behavioral Engineering was published from 1973 to 1985. Many of the topics of behavioral engineering are now covered in the journals Behavior and Social Issues, The Behavior Analyst and the Journal of Organizational Behavior Management.
**Skene's gland** Skene's gland: In female human anatomy, Skene's glands or the Skene glands (SKEEN; also known as the lesser vestibular glands or paraurethral glands) are glands located around the lower end of the urethra. The glands are surrounded by tissue that swells with blood during sexual arousal, and they secrete a fluid from openings near the urethra, particularly during orgasm. Structure and function: The Skene's glands are located in the vestibule of the vulva, around the lower end of the urethra. The two Skene's ducts lead from the Skene's glands to the vulvar vestibule, to the left and right of the urethral opening, from which they are structurally capable of secreting fluid. Although there remains debate about the function of the Skene's glands, one purpose is to secrete a fluid that helps lubricate the urethral opening. Skene's glands produce a milk-like ultrafiltrate of blood plasma. The glands may be the source of female ejaculation, but this has not been proven. Because they and the male prostate act similarly, secreting prostate-specific antigen (PSA), an ejaculate protein produced in males, as well as prostate-specific acid phosphatase, some authors refer to the Skene's glands as the "female prostate". It is homologous to the male prostate (developed from the same embryological tissues), but its homology is still a matter of research. Female ejaculate may result from sexual activity for some women, especially during orgasm. In addition to PSA and acid phosphatase, Skene's gland fluid contains high concentrations of glucose and fructose. A few milliliters of fluid are secreted from these glands when stimulated from inside the vagina. Female ejaculation and squirting (secretion of large amounts of fluid) are believed by researchers to be two different processes. They may occur in combination during orgasm. Squirting alone is a sudden expulsion of liquid that at least partly comes from the bladder and contains urine, whereas ejaculation fluid includes a whitish transparent ejaculate that appears to come from the Skene's gland. Clinical significance: Disorders of the Skene's glands may include: Infection (called skenitis, urethral syndrome, or female prostatitis). Skene's duct cyst: lined by stratified squamous epithelium, the cyst is caused by obstruction of the Skene's glands; it is located lateral to the urinary meatus, is diagnosed by magnetic resonance imaging (MRI), and is treated by surgical excision or marsupialization. Trichomoniasis: the Skene's glands (along with other structures) act as a reservoir for Trichomonas vaginalis, which explains why topical treatments are not as effective as oral medication for this condition. History: While the glands were first described in 1672 by Regnier de Graaf and by the French surgeon Alphonse Guérin (1816–1895), they were named after the Scottish gynaecologist Alexander Skene, who wrote about them in Western medical literature in 1880. In 2002, the term female prostate was added as a second term after paraurethral gland in Terminologia Histologica by the Federative International Committee on Anatomical Terminology. The 2008 edition notes that the term was introduced "because of the morphological and immunological significance of the structure".
**Fabric.js** Fabric.js: Fabric.js is a JavaScript HTML5 canvas library. It is a fully open-source project with many contributions over the years. The library was originally developed in 2010 by Juriy Zaytsev, who also led the project until 2016. Since 2016, the project has been led by Andrea Bogazzi.
**H5N1 vaccine clinical trials** H5N1 vaccine clinical trials: H5N1 clinical trials are clinical trials concerning H5N1 vaccines, which are intended to provide immunization against influenza A virus subtype H5N1. They are intended to discover pharmacological effects and identify any adverse reactions the vaccines may cause in humans. Current status of H5N1 candidate vaccines: Candidate vaccines were developed in the United States and the United Kingdom during 2003 for protection against the strain that was isolated from humans in Hong Kong in February 2003, but the 2003 strain died out in 2004, making the vaccine of little use. In April 2004, WHO made an H5N1 prototype seed strain available to manufacturers. In August 2006, WHO changed the prototype strains and now offers three new prototype strains which represent three of the six subclades of the clade 2 virus that have been responsible for many of the human cases that have occurred since 2005. The National Institute of Allergy and Infectious Diseases (NIAID) awarded H5N1 vaccine contracts to Aventis Pasteur (now Sanofi Pasteur) of Swiftwater, Pennsylvania, and to Chiron Corporation of Emeryville, California. Each manufacturer is using established techniques in which the virus is grown in eggs and then inactivated and further purified before being formulated into vaccines. "A universal influenza vaccine could provide protection against all types of influenza and would eliminate the need to develop individual vaccines to specific H and N virus types. Such a vaccine would not need to be reengineered each year and could protect against an emergent pandemic strain. Developing a universal vaccine requires that researchers identify conserved regions of the influenza virus that do not exhibit antigenic variability by strain or over time. A universal vaccine, ACAM-FLU-A, is being developed by the British company Acambis and is being researched by others as well. Acambis (meanwhile also acquired by Sanofi Pasteur) announced in early August 2005 that it has had successful results in animal testing. The vaccine focuses on the M2 viral protein, which does not change, rather than the surface hemagglutinin and neuraminidase proteins targeted by traditional flu vaccines. The universal vaccine is made through bacterial fermentation technology, which would greatly speed up the rate of production over that possible with culture in chicken eggs, plus the vaccine could be produced constantly, since its formulation would not change. Still, such a vaccine is years away from full testing, approval, and use." As of July 2007, Phase I clinical trials on humans were underway in which a vaccine that focuses on the M2 viral protein "is being administered to a small group of healthy people in order to verify the safety of the product and to provide an initial insight into the vaccine's effect on the human immune system." (See also Universal flu vaccines.) The current development state of ACAM-FLU-A is unclear. In June 2006, the National Institutes of Health (NIH) began enrolling participants in a Phase 1 H5N1 study of an intranasal influenza vaccine candidate based on MedImmune's live, attenuated vaccine technology. In October 2010, Inovio started a Phase I clinical trial of its H5N1 vaccine (VGX-3400X). In October 2012, Phase I trials of Novavax, Inc.'s pandemic influenza vaccine met their primary objectives.
Current status of H5N1 candidate vaccines: Approved human H5N1 vaccines On April 17, 2007, the US FDA approved "Influenza Virus Vaccine, H5N1" by manufacturer Sanofi Pasteur Inc for manufacture at its Swiftwater, PA facility. In March 2006, Hungarian Prime Minister Ferenc Gyurcsány reported that Omninvest had developed a vaccine to protect humans against the H5N1 influenza strain. The vaccine was approved by the country's national pharmaceutical institute for commercial production. Current status of H5N1 candidate vaccines: Results of trials Early results from H5N1 clinical trials showed poor immunogenicity compared to the 15-mcg dose that induces immunity in a seasonal flu vaccine. Trials in 2006 and 2007 using two 30-mcg doses produced unacceptable results, while a 2006 trial using two doses of 90 mcg each achieved acceptable levels of protection. Current flu vaccine manufacturing plants cannot produce enough pandemic flu vaccine at this high dose level. "Adjuvanted vaccines appear to hold the greatest promise for solving the grave supply-demand imbalance in pandemic influenza vaccine development. They come with obstacles—immunologic, regulatory, and commercial—but they also have generated more excitement than any other type of vaccine thus far. [In August 2007], scientists working with a GlaxoSmithKline formula published a trial of a two-dose regimen of an inactivated split-virus vaccine adjuvanted with a proprietary oil-in-water emulsion; after the second injection, even the lowest dose of 3.8 mcg exceeded EU criteria for immune response (see Bibliography: Leroux-Roels 2007). And in September, Sanofi Pasteur reported in a press release that an inactivated vaccine adjuvanted with the company's own proprietary formula induced EU-accepted levels of protection after two doses of 1.9 mcg." The "GlaxoSmithKline-backed team that described an acceptable immune response after two adjuvanted 3.8-microgram (mcg) doses found that three fourths of their subjects were protected not only against the clade 1 Vietnam virus on which the vaccine was based, but against a drifted clade 2 virus from Indonesia as well [...] To achieve prepandemic vaccines, researchers would have to ascertain the right dose and dose interval, determine how long priming lasts, and solve the puzzle of measuring primed immunity. Further, regulatory authorities would have to determine the trial design that could deliver those answers, the public discussion that would be necessary for prepandemic vaccines to be accepted, and the safety data that would need to be gathered once the vaccines went into use". Individual studies: Revaccination - January 2006 Study completion: January 2006 The purpose of this study is to determine whether having received an H5 vaccine in the past primes the immune system to respond rapidly to another dose of H5 vaccine. Subjects who participate in this study will have participated in a previous vaccine study (involving the A/Hong Kong/97 virus) during the fall of 1998 at the University of Rochester. Individual studies: A/H5N1 in adults - February 2006 Study start: April 2005; Study completion: February 2006 The purposes of this study are to determine the dose-related safety of flu vaccine in healthy adults, to determine the dose-related effectiveness approximately 1 month following receipt of 2 doses of vaccine, and to provide information for the selection of the best dose levels for further studies.
Individual studies: H5 booster after two doses - June 2006 Study start: October 2005; Study completion: June 2006 The purpose of this study is to determine whether a third dose of vaccines containing A/Vietnam/1203/04 provides more immunity than two doses. Subjects who participate in this study will have participated in DMID protocol 04-063, involving the A/Vietnam/1203/04 strain. In this study, each subject will be asked to receive a third dose of the H5 vaccine at the same level administered in protocol 04-063. Individual studies: H5 in the elderly - August 2006 Study start: October 2005; Study completion: August 2006 This study is intended to examine the safety and dose-related immunogenicity of three dosage levels of the influenza A/H5N1 vaccine, as compared to saline placebo, given intramuscularly to healthy elderly adults approximately 4 weeks apart. Individual studies: H5 in healthy adults - November 2006 Study start: March 2006; Expected completion: November 2006 This randomized, controlled, double-blinded, dose-ranging, Phase I-II study in 600 healthy adults, 18 to 49 years old, is designed to investigate the safety, reactogenicity, and dose-related immunogenicity of an investigational inactivated influenza A/H5N1 virus vaccine when given alone or combined with aluminum hydroxide. A secondary goal is to guide selection of vaccine dosage levels for expanded Phase II trials based on reactogenicity and immunogenicity profiles. This dose optimization will be applied to both younger and older subject populations in subsequent studies. Subjects who meet the entry criteria for the study will be enrolled at one of up to 5 study sites and will be randomized into 8 groups to receive two doses of influenza A/H5N1 vaccine containing 3.75, 7.5, 15, or 45 mcg of HA, with or without aluminum hydroxide adjuvant, by IM injection (N = 60 or 120 per vaccine dose group). Individual studies: Bird flu - November 2006 Study start: March 2006; Study completion: November 2006 This study is designed to gather critical information on the safety, tolerability, and immunogenicity (capability of inducing an immune response) of the A/H5N1 virus vaccine in healthy adults. Up to 280 healthy adults, aged 18 to 64, will participate in the study. Each subject will participate for 7 months and will be randomly placed in one of several different study groups receiving a different dose of vaccine, vaccine plus adjuvant, or placebo. All subjects will receive two injections of their assigned study product, about 28 days apart, in their muscle tissue. Subjects will keep a journal of their temperature and any adverse effects between study visits. A small amount of blood will also be drawn before the first injection, 7 days after each injection, and 6 months after the second injection. Individual studies: Pandemic flu - January 2007 Study start: October 2005; Study completion: January 2007 This Australian study will test the safety and immunogenicity of an H5N1 pandemic influenza vaccine in healthy adults. Individual studies: Children - February 2007 Study start: January 2006; Study completion: February 2007 This is a randomized, double-blinded, placebo-controlled, staged, dose-ranging, Phase I/II study to evaluate the safety, reactogenicity, and immunogenicity of 2 doses of an IM inactivated influenza A/H5N1 vaccine in healthy children, aged 2 through 9 years. This study is designed to investigate the safety, tolerability, and dose-related immunogenicity of an investigational inactivated influenza A/H5N1 vaccine.
A secondary goal is to identify an optimal dosage level of the vaccine that generates an acceptable immunogenic response, while maintaining an adequate safety profile.
**Ocarina Networks** Ocarina Networks: Ocarina Networks was a technology company, later a subsidiary of Dell, that sold a hardware/software solution designed to reduce data footprints with file-aware storage optimization. Its flagship product, the Ocarina Appliance/Reader, released in April 2008, used patented data compression techniques incorporating such methods as record linkage and context-based lossless data compression. The product included a hardware-appliance-based compressor, the Ocarina Optimizer (Models 2400, 3400, 4600), and a real-time decompressor, the software-based Ocarina Reader. History: Ocarina was founded by Murli Thirumale, formerly a vice-president and general manager at Citrix Systems; Carter George, formerly a vice-president and co-founder of PolyServe (acquired by HP); and Goutham Rao, formerly Chief Technical Officer and Chief Architect for the Advanced Solutions Group of Citrix Systems. Its solution worked by identifying redundancy at a global file system level and applying specific algorithms for different data formats, such as algorithms specific to images, text, executables, seismic data, and other "unstructured data". Ocarina's Optimizers worked with existing storage systems through standard network protocols such as NFS, or were directly integrated with partner vendors' storage systems. History: On July 19, 2010, Dell announced plans to acquire Ocarina Networks. The transaction was completed on July 31, 2010. In late 2010, the original Ocarina Optimizer product family was removed from the market, enabling the Ocarina team to focus on the integration of dedupe and compression into Dell storage products. The most notable examples were the DR family of deduplication appliances, launched in 2012, and the integration of dedupe into Dell's Fluid File System. Technology: The company's ECOsystem (Extract, Correlate, Optimize) provided data reduction technology, offering both deduplication and content-aware data compression in a reliable, scalable, policy-based package. ECOsystem consisted of three primary components: an optimizer, a reader, and a management and reporting framework. These components were delivered in software or appliance form depending on the customer, application, and underlying storage solution. Technology: The standard ECOsystem workflow was a post-process. Files were first stored to disk in native form. Policies were used to specify which files were to be optimized (based on age, location, or file type), and what compression settings to use. Policies were commonly used to avoid optimization of files that were actively being modified. ECOsystem could also be configured to migrate optimized data to a secondary tier of lower-cost storage for disk-based archival applications. Technology: Compression and deduplication algorithms ECOsystem was content aware, with the choice of compression solution based on the type of data being processed. This went beyond file-extension filtering: ECOsystem recursively decomposed compound files until elemental text, media, or binary components were identified. At the heart of the optimizer software was a context-weighted neural net that applied the most effective compression solution based on the nature of the elemental file component identified, and efficiently remembered optimal settings based on similar files processed. Technology: ECOsystem was in most cases highly effective at achieving results on novel or proprietary file types, as well as pre-compressed media such as JPEG images and MPEG4 video.
Ocarina processed data in over 600 file formats. Technology: ECOmax and NFO workflows Two forms of Ocarina's post-processing workflow were available: ECOmax and Native Format Optimization (NFO). ECOmax used all available compression methods to shrink data, including on-disk structures that maximized utilization of physical blocks. The ECOmax workflow required the ECOreader, run-anywhere software that decoded data for transparent read-back. ECOmax could be applied to any file or data type, including specialized files used by various vertical industries. Technology: The NFO workflow was designed specifically for web-based media companies. In NFO, media files (for example, JPEGs) were stored in their native format, which eliminated the need for decoding and allowed customers to capture data-reduction benefits throughout the workflow, including web distribution (bandwidth savings and a better end-user experience) and movement into archival systems. NFO provided "visually identical" compression that tailored image parameters to the sensitivities of the human visual system model and to the intended use of the image, without perceptible quality degradation. Technology: Many of the features and capabilities of the Ocarina ECOsystem were not carried into later Dell products. Funding: In 2007, Ocarina Networks secured $12M in series A funding from Kleiner Perkins Caufield & Byers and Highland Capital Partners. In 2009, Ocarina secured an additional $20M from the same investors and Jafco Ventures.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Relative accessible surface area** Relative accessible surface area: Relative accessible surface area or relative solvent accessibility (RSA) of a protein residue is a measure of residue solvent exposure. It can be calculated by the formula RSA = ASA / MaxASA, where ASA is the solvent accessible surface area and MaxASA is the maximum possible solvent accessible surface area for the residue. Both ASA and MaxASA are commonly measured in Å². To measure the relative solvent accessibility of the residue side-chain only, one usually takes MaxASA values that have been obtained from Gly-X-Gly tripeptides, where X is the residue of interest. Several MaxASA scales have been published and are commonly used (see Table). Relative accessible surface area: In this table, the more recently published MaxASA values (from Tien et al. 2013) are systematically larger than the older values (from Miller et al. 1987 or Rose et al. 1985). This discrepancy can be traced back to the conformation in which the Gly-X-Gly tripeptides are evaluated to calculate MaxASA. The earlier works used the extended conformation, with backbone dihedral angles of approximately −120° and 140°. However, Tien et al. 2013 demonstrated that tripeptides in the extended conformation fall among the least-exposed conformations. The largest ASA values are consistently observed in alpha helices, with backbone dihedral angles around −50° and −45°. Tien et al. 2013 recommend using their theoretical MaxASA values (2nd column in the Table), as they were obtained from a systematic enumeration of all possible conformations and likely represent a true upper bound to observable ASA. ASA, and hence RSA, values are generally calculated from a protein structure, for example with the software DSSP. However, there is also an extensive literature attempting to predict RSA values from sequence data using machine-learning approaches. Prediction tools: Determining RSA experimentally is an expensive and time-consuming task. In recent decades, several computational methods have been introduced for RSA prediction.
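Given a per-residue ASA (for example from DSSP) and a MaxASA scale, the RSA computation above is a simple division. The following minimal Python sketch illustrates it; the three MaxASA entries shown are intended to match the Tien et al. 2013 theoretical scale but should be verified against the published table, and the function name and the cap at 1.0 are illustrative choices rather than anything specified in the source.

```python
# Minimal sketch: relative solvent accessibility as RSA = ASA / MaxASA.
# MAX_ASA holds a few values (in square angstroms) intended to match the
# Tien et al. 2013 theoretical scale; verify against the published table
# and extend to all 20 residues before real use.
MAX_ASA = {"ALA": 129.0, "GLY": 104.0, "TRP": 285.0}

def relative_accessibility(residue: str, asa: float) -> float:
    """Return RSA for a residue given its absolute ASA, capped at 1.0."""
    return min(asa / MAX_ASA[residue.upper()], 1.0)

print(relative_accessibility("ALA", 64.5))  # 0.5: a half-exposed alanine
```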
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aldehyde oxidase and xanthine dehydrogenase, a/b hammerhead domain** Aldehyde oxidase and xanthine dehydrogenase, a/b hammerhead domain: The aldehyde oxidase and xanthine dehydrogenase, a/b hammerhead domain is an evolutionarily conserved protein domain. Aldehyde oxidase and xanthine dehydrogenase, a/b hammerhead domain: Aldehyde oxidase (EC 1.2.3.1) catalyzes the conversion of an aldehyde, in the presence of oxygen and water, to an acid and hydrogen peroxide. The enzyme is a homodimer and requires FAD, molybdenum and two [2Fe-2S] clusters as cofactors. Xanthine dehydrogenase (EC 1.1.1.204) catalyzes the oxidation of xanthine to urate, and also requires FAD, molybdenum and two [2Fe-2S] clusters as cofactors. This activity is often found in a bifunctional enzyme that also has xanthine oxidase (EC 1.1.3.22) activity. The enzyme can be converted from the dehydrogenase form to the oxidase form irreversibly by proteolysis, or reversibly through oxidation of sulfhydryl groups.
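For reference, the two activities described above can be written as overall reaction equations. These are the usual textbook stoichiometries (with NAD⁺ shown as the electron acceptor for the dehydrogenase form); they are added here for illustration and are not quoted from the source.

```latex
% Overall reactions (textbook stoichiometry, added for illustration):
% aldehyde oxidase (EC 1.2.3.1)
\mathrm{RCHO} + \mathrm{O_2} + \mathrm{H_2O} \longrightarrow \mathrm{RCOOH} + \mathrm{H_2O_2}
% xanthine dehydrogenase, with NAD+ as the electron acceptor
\mathrm{xanthine} + \mathrm{NAD^{+}} + \mathrm{H_2O} \longrightarrow \mathrm{urate} + \mathrm{NADH} + \mathrm{H^{+}}
```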
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Epoch Co.** Epoch Co.: Epoch Co., Ltd. (株式会社エポック社, Kabushikigaisha Epokku Sha) is a Japanese toy and computer games company founded in 1958 which is best known for manufacturing Barcode Battler and Doraemon video games, Aquabeads, and the Sylvanian Families series of toys. Its current Representative President is Michihiro Maeda. They also made Japan's first successful programmable console video game system, the Cassette Vision, in 1981. History: Founded in May 1958 by Maeda Taketora and three others in Tokyo with ¥1 million, Maeda Taketora is made president, eleven months later, it had increased its capital to ¥2.5 million. Epoch participated in the first Japanese international toy trade fair in 1962. It moved to its headquarters to its current location in Tokyo in 1963. After 20 years of its founding in 1978, Epoch had increased to ¥200 million - 200 times the original startup cost. It also had a United States office, which sold imported English versions of its products. In September 2001 it founded an international branch. It acquired International Playthings of the United States in 2008. It is most famous for its Doraemon and Sylvanian Families toy and video game productions. History: Video game consoles TV Tennis Electrotennis (September 12, 1975) TV Game System 10 (1977) TV Baseball (1978) Cassette TV Game (1979) TV Vader (1980) Cassette Vision (July 30, 1981) Cassette Vision Jr. (1983) Super Cassette Vision (July 17, 1984) Epoch Game Pocket Computer (1984, first programmable handheld game console) SCV Lady’s Set (1985) Barcode Battler (March 1991) LCD handheld electronic games Epoch also created many LCD handheld electronic games. Some of these were made in cooperation with ITMC, Gama-Mangold, Tomy and other companies. Computer games produced: Doraemon Games Doraemon: Giga Zombie no Gyakushuu Doraemon Doraemon 2 Doraemon 3 Doraemon 4 Doraemon: Nobita to Fukkatsu no Hoshi Doraemon 2: SOS! Otogi no Kuni Doraemon Doraemon Kart Doraemon no GameBoy de Asobou yo DX10 Doraemon 2 Doraemon Kart 2 Doraemon: Aruke Aruke Labyrinth Doraemon Memories: Nobita no Omoide Daibouken Doraemon: Nobita to 3-tsu no Seirei Ishi (N64) Doraemon 2: Nobita to Hikari no Shinden (N64) Doraemon 3: Nobita no Machi SOS! (N64) Doraemon 3: Makai no Dungeon Doraemon no Study Boy: Kuku Game Doraemon no Study Boy: Gakushuu Kanji Game Doraemon Kimi to Pet no Monogatari Doraemon Board Game Doraemon no Quiz Boy 2 Doraemon no Study Boy: Kanji Yomikaki Master Sylvanian Families Games Sylvanian Families: Otogi no Kuni no Pendant (シルバニアファミリー おとぎの国のペンダント, Shirubania famirī: Otogi no kuni no pendanto, lit. Sylvanian Families: The Fairyland Pendant) (Game Boy Color) Sylvanian Melodies ~Mori no Nakama to Odori Masho!~ (シルバニアメロディー ~森のなかまと踊りましょ!~, Shirubania merodī ~Mori no naka ma to odorimasho!~, lit. Sylvanian Melodies ~Let's Dance with the Forest Friends!~) (Game Boy Color) Sylvanian Families 2: Irozuku Mori no Fantasy (シルバニアファミリー2 色づく森のファンタジー, Shirubania famirī tsu: Irodzuku mori no fantajī, lit.Sylvanian Families 2: Rainbow Forest Fantasy) (Game Boy Color) Sylvanian Families 3: Hoshifuru Yoru no Sunadokei (シルバニアファミリー3 星ふる夜のすなどけい, Shirubania famirī suri: Hoshifuru yoru no sunadokei, lit. Sylvanian Families 3: Hourglass of the Wishing Stars) (Game Boy Color) Sylvanian Families 4: Meguru Kisetsu no Tapestry (シルバニアファミリー4 めぐる季節のタペストリー, Shirubania famirī fo: Meguru kisetsu no tapesutorī, lit. 
Sylvanian Families 4: Tapestry of the Four Seasons) (Game Boy Advance) Sylvanian Families: Yosei no Stick to Fushigi no Ki Maron Inu no Onnanoko (シルバニアファミリー 妖精のステッキとふしぎの木 マロン犬の女の子, Shirubania famirī: Yōsei no sutekki to fushigi no ki maron inu no on'nanoko, lit. Sylvanian Families: The Fairy's Wands and the Mystery Tree Esme Huckleberry) (Game Boy Advance) Sylvanian Families: Fashion Designer ni Naritai! Kurumi Risu no Onnanoko (シルバニアファミリー ファッションデザイナーになりたい! くるみリスの女の子, Shirubania famirī: fasshondezainā ni naritai! Kurumi risu no on'nanoko, lit. Sylvanian Families: I wanna be a Fashion Designer! Saffron Walnut) (Game Boy Advance) Licensed Games Chibi Maruko-chan: Harikiri 365-Nichi no Maki Lupin III: Densetsu no Hihō o Oe! The Amazing Spider-Man: Lethal Foes Donald Duck no Mahō no Bōshi St Andrews: Eikō to Rekishi no Old Course Alice no Paint Adventure Chibi Maruko-Chan: Go-Chōnai Minna de Game da yo! Other games Famicom Yakyuuban Kiteretsu Daihyakka Cyraid Dragon Slayer I Parasol Henbee Dai Meiro: Meikyu no Tatsujin Dragon Slayer (Game Boy) Dragon Slayer Gaiden (Game Boy) Dragon Slayer: The Legend of Heroes (Super Famicom) Dragon Slayer: The Legend of Heroes II (Super Famicom) Panel no Ninja Kesamaru Lord Monarch Metal Jack Barcode Battler Senki Hatayama Hatch no Pro Yakyuu News! Jitsumei Han Oha Star Yamachan & Reimondo Hole in One Golf Meisha Retsuden: Greatest 70's J.League Excite Stage '94 J.League Excite Stage '95 J.League Excite Stage '96 J-League Excite Stage GB J-League Excite Stage Tactics International Soccer Excite Stage 2000 R-Type DX Ling Rise Pocket Pro Yakyuu Macross 7: Ginga no Heart o Furuwasero!! Gauntlet Legends DaiaDroids World Kidou Tenshi Angelic Layer The Legend of Zelda: A Link to the Past (Barcode Battler II) Magi Nation Daia Droid Daisakusen
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Poisson point process** Poisson point process: In probability, statistics and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another. The Poisson point process is often called simply the Poisson process, but it is also called a Poisson random measure, Poisson random point field or Poisson point field. This point process has convenient mathematical properties, which has led to its being frequently defined in Euclidean space and used as a mathematical model for seemingly random processes in numerous disciplines such as astronomy, biology, ecology, geology, seismology, physics, economics, image processing, and telecommunications.The process is named after French mathematician Siméon Denis Poisson despite Poisson's never having studied the process. Its name derives from the fact that if a collection of random points in some space forms a Poisson process, then the number of points in a region of finite size is a random variable with a Poisson distribution. The process was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and insurance mathematics.The Poisson point process is often defined on the real line, where it can be considered as a stochastic process. In this setting, it is used, for example, in queueing theory to model random events, such as the arrival of customers at a store, phone calls at an exchange or occurrence of earthquakes, distributed in time. In the plane, the point process, also known as a spatial Poisson process, can represent the locations of scattered objects such as transmitters in a wireless network, particles colliding into a detector, or trees in a forest. In this setting, the process is often used in mathematical models and in the related fields of spatial point processes, stochastic geometry, spatial statistics and continuum percolation theory. The Poisson point process can be defined on more abstract spaces. Beyond applications, the Poisson point process is an object of mathematical study in its own right. In all settings, the Poisson point process has the property that each point is stochastically independent to all the other points in the process, which is why it is sometimes called a purely or completely random process. Modeling a system as a Poisson Process is insufficient when the point-to-point interactions are too strong (i.e. the points are not stochastically independent). Such a system may be better modeled with a different point process.The point process depends on a single mathematical object, which, depending on the context, may be a constant, a locally integrable function or, in more general settings, a Radon measure. In the first case, the constant, known as the rate or intensity, is the average density of the points in the Poisson process located in some region of space. The resulting point process is called a homogeneous or stationary Poisson point process. In the second case, the point process is called an inhomogeneous or nonhomogeneous Poisson point process, and the average density of points depend on the location of the underlying space of the Poisson point process. 
The word point is often omitted, but there are other Poisson processes of objects, which, instead of points, consist of more complicated mathematical objects such as lines and polygons, and such processes can be based on the Poisson point process. Both the homogeneous and nonhomogeneous Poisson point processes are particular cases of the generalized renewal process. Overview of definitions: Depending on the setting, the process has several equivalent definitions as well as definitions of varying generality owing to its many applications and characterizations. The Poisson point process can be defined, studied and used in one dimension, for example, on the real line, where it can be interpreted as a counting process or part of a queueing model; in higher dimensions such as the plane, where it plays a role in stochastic geometry and spatial statistics; or on more general mathematical spaces. Consequently, the notation, terminology and level of mathematical rigour used to define and study the Poisson point process and point processes in general vary according to the context. Despite all this, the Poisson point process has two key properties, the Poisson property and the independence property, that play an essential role in all settings where the Poisson point process is used. The two properties are not logically independent; indeed, independence implies the Poisson distribution of point counts, but not the converse. Overview of definitions: Poisson distribution of point counts A Poisson point process is characterized via the Poisson distribution. The Poisson distribution is the probability distribution of a random variable $N$ (called a Poisson random variable) such that the probability that $N$ equals $n$ is given by: $\Pr\{N = n\} = \frac{\Lambda^{n}}{n!} e^{-\Lambda},$ where $n!$ denotes factorial and the parameter $\Lambda$ determines the shape of the distribution. (In fact, $\Lambda$ equals the expected value of $N$.) By definition, a Poisson point process has the property that the number of points in a bounded region of the process's underlying space is a Poisson-distributed random variable. Overview of definitions: Complete independence Consider a collection of disjoint and bounded subregions of the underlying space. By definition, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others. This property is known under several names such as complete randomness, complete independence, or independent scattering and is common to all Poisson point processes. In other words, there is a lack of interaction between different regions and the points in general, which motivates the Poisson process being sometimes called a purely or completely random process. Homogeneous Poisson point process: If a Poisson point process has a parameter of the form $\Lambda = \nu \lambda$, where $\nu$ is Lebesgue measure (that is, it assigns length, area, or volume to sets) and $\lambda$ is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter, called rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region, where rate is usually used when the underlying space has one dimension.
The parameter $\lambda$ can be interpreted as the average number of points per some unit of extent such as length, area, volume, or time, depending on the underlying mathematical space, and it is also called the mean density or mean rate; see Terminology. Homogeneous Poisson point process: Interpreted as a counting process The homogeneous Poisson point process, when considered on the positive half-line, can be defined as a counting process, a type of stochastic process, which can be denoted as $\{N(t), t \geq 0\}$. A counting process represents the total number of occurrences or events that have happened up to and including time $t$. A counting process is a homogeneous Poisson counting process with rate $\lambda > 0$ if it has the following three properties: $N(0) = 0$; it has independent increments; and the number of events (or points) in any interval of length $t$ is a Poisson random variable with parameter (or mean) $\lambda t$. The last property implies: $\operatorname{E}[N(t)] = \lambda t.$ Homogeneous Poisson point process: In other words, the probability of the random variable $N(t)$ being equal to $n$ is given by: $\Pr\{N(t) = n\} = \frac{(\lambda t)^{n}}{n!} e^{-\lambda t}.$ The Poisson counting process can also be defined by stating that the time differences between events of the counting process are exponential variables with mean $1/\lambda$. The time differences between the events or arrivals are known as interarrival or interoccurrence times. Homogeneous Poisson point process: Interpreted as a point process on the real line Interpreted as a point process, a Poisson point process can be defined on the real line by considering the number of points of the process in the interval $(a, b]$. For the homogeneous Poisson point process on the real line with parameter $\lambda > 0$, the probability of this random number of points, written here as $N(a, b]$, being equal to some counting number $n$ is given by: $\Pr\{N(a, b] = n\} = \frac{[\lambda (b - a)]^{n}}{n!} e^{-\lambda (b - a)}.$ For some positive integer $k$, the homogeneous Poisson point process has the finite-dimensional distribution given by: $\Pr\{N(a_i, b_i] = n_i,\ i = 1, \dots, k\} = \prod_{i=1}^{k} \frac{[\lambda (b_i - a_i)]^{n_i}}{n_i!} e^{-\lambda (b_i - a_i)},$ where the real numbers satisfy $a_i < b_i \leq a_{i+1}$. In other words, $N(a, b]$ is a Poisson random variable with mean $\lambda (b - a)$, where $a \leq b$. Furthermore, the numbers of points in any two disjoint intervals, say, $(a_1, b_1]$ and $(a_2, b_2]$, are independent of each other, and this extends to any finite number of disjoint intervals. In the queueing theory context, one can consider a point existing (in an interval) as an event, but this is different from the word event in the probability theory sense. It follows that $\lambda$ is the expected number of arrivals that occur per unit of time.
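As a quick illustration of the counting-process view above, the following minimal Python sketch simulates the arrival times of a homogeneous Poisson process on $[0, t_{\max})$ by accumulating exponential interarrival times with mean $1/\lambda$, and checks empirically that the resulting count has mean and variance close to $\lambda t_{\max}$. The function name and parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def homogeneous_poisson_times(rate, t_max, rng):
    """Arrival times of a homogeneous Poisson process on [0, t_max),
    built by accumulating exponential interarrival times with mean 1/rate."""
    times = []
    t = rng.exponential(1.0 / rate)
    while t < t_max:
        times.append(t)
        t += rng.exponential(1.0 / rate)
    return np.array(times)

# The count N(t_max) should be Poisson distributed with mean rate * t_max.
rate, t_max = 2.0, 10.0
counts = [len(homogeneous_poisson_times(rate, t_max, rng)) for _ in range(10_000)]
print(np.mean(counts), np.var(counts))  # both should be close to 20
```

Equivalently, one could first draw the total number of points from a Poisson distribution with mean $\lambda t_{\max}$ and then place them uniformly on the interval, which is the two-step recipe described later in the Simulation section.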
Homogeneous Poisson point process: Key properties The previous definition has two important features shared by Poisson point processes in general: the number of arrivals in each finite interval has a Poisson distribution; the number of arrivals in disjoint intervals are independent random variables.Furthermore, it has a third feature related to just the homogeneous Poisson point process: the Poisson distribution of the number of arrivals in each interval {\textstyle (a+t,b+t]} only depends on the interval's length {\textstyle b-a} .In other words, for any finite {\textstyle t>0} , the random variable {\textstyle N(a+t,b+t]} is independent of {\textstyle t} , so it is also called a stationary Poisson process. Homogeneous Poisson point process: Law of large numbers The quantity {\textstyle \lambda (b_{i}-a_{i})} can be interpreted as the expected or average number of points occurring in the interval {\textstyle (a_{i},b_{i}]} , namely: E⁡[N(ai,bi]]=λ(bi−ai), where E denotes the expectation operator. In other words, the parameter {\textstyle \lambda } of the Poisson process coincides with the density of points. Furthermore, the homogeneous Poisson point process adheres to its own form of the (strong) law of large numbers. More specifically, with probability one: lim t→∞N(t)t=λ, where lim {\textstyle \lim } denotes the limit of a function, and λ is expected number of arrivals occurred per unit of time. Homogeneous Poisson point process: Memoryless property The distance between two consecutive points of a point process on the real line will be an exponential random variable with parameter {\textstyle \lambda } (or equivalently, mean {\textstyle 1/\lambda } ). This implies that the points have the memoryless property: the existence of one point existing in a finite interval does not affect the probability (distribution) of other points existing, but this property has no natural equivalence when the Poisson process is defined on a space with higher dimensions. Homogeneous Poisson point process: Orderliness and simplicity A point process with stationary increments is sometimes said to be orderly or regular if: Pr {N(t,t+δ]>1}=o(δ), where little-o notation is being used. A point process is called a simple point process when the probability of any of its two points coinciding in the same position, on the underlying space, is zero. For point processes in general on the real line, the property of orderliness implies that the process is simple, which is the case for the homogeneous Poisson point process. Homogeneous Poisson point process: Martingale characterization On the real line, the homogeneous Poisson point process has a connection to the theory of martingales via the following characterization: a point process is the homogeneous Poisson point process if and only if N(−∞,t]−λt, is a martingale. Relationship to other processes On the real line, the Poisson process is a type of continuous-time Markov process known as a birth process, a special case of the birth–death process (with just births and zero deaths). More complicated processes with the Markov property, such as Markov arrival processes, have been defined where the Poisson process is a special case. Restricted to the half-line If the homogeneous Poisson process is considered just on the half-line {\textstyle [0,\infty )} , which can be the case when {\textstyle t} represents time then the resulting process is not truly invariant under translation. 
In that case the Poisson process is no longer stationary, according to some definitions of stationarity. Homogeneous Poisson point process: Applications There have been many applications of the homogeneous Poisson process on the real line in an attempt to model seemingly random and independent events occurring. It has a fundamental role in queueing theory, which is the probability field of developing suitable stochastic models to represent the random arrival and departure of certain phenomena. For example, customers arriving and being served or phone calls arriving at a phone exchange can be both studied with techniques from queueing theory. Homogeneous Poisson point process: Generalizations The homogeneous Poisson process on the real line is considered one of the simplest stochastic processes for counting random numbers of points. This process can be generalized in a number of ways. One possible generalization is to extend the distribution of interarrival times from the exponential distribution to other distributions, which introduces the stochastic process known as a renewal process. Another generalization is to define the Poisson point process on higher dimensional spaces such as the plane. Homogeneous Poisson point process: Spatial Poisson point process A spatial Poisson process is a Poisson point process defined in the plane R2 . For its mathematical definition, one first considers a bounded, open or closed (or more precisely, Borel measurable) region {\textstyle B} of the plane. The number of points of a point process N existing in this region B⊂R2 is a random variable, denoted by N(B) . If the points belong to a homogeneous Poisson process with parameter λ>0 , then the probability of n points existing in B is given by: Pr {N(B)=n}=(λ|B|)nn!e−λ|B| where |B| denotes the area of B For some finite integer k≥1 , we can give the finite-dimensional distribution of the homogeneous Poisson point process by first considering a collection of disjoint, bounded Borel (measurable) sets B1,…,Bk . The number of points of the point process N existing in Bi can be written as N(Bi) . Then the homogeneous Poisson point process with parameter λ>0 has the finite-dimensional distribution: Pr {N(Bi)=ni,i=1,…,k}=∏i=1k(λ|Bi|)nini!e−λ|Bi|. Homogeneous Poisson point process: Applications The spatial Poisson point process features prominently in spatial statistics, stochastic geometry, and continuum percolation theory. This point process is applied in various physical sciences such as a model developed for alpha particles being detected. In recent years, it has been frequently used to model seemingly disordered spatial configurations of certain wireless communication networks. For example, models for cellular or mobile phone networks have been developed where it is assumed the phone network transmitters, known as base stations, are positioned according to a homogeneous Poisson point process. Homogeneous Poisson point process: Defined in higher dimensions The previous homogeneous Poisson point process immediately extends to higher dimensions by replacing the notion of area with (high dimensional) volume. For some bounded region B of Euclidean space Rd , if the points form a homogeneous Poisson process with parameter λ>0 , then the probability of n points existing in B⊂Rd is given by: Pr {N(B)=n}=(λ|B|)nn!e−λ|B| where |B| now denotes the d -dimensional volume of B . Furthermore, for a collection of disjoint, bounded Borel sets B1,…,Bk⊂Rd , let N(Bi) denote the number of points of N existing in Bi . 
Then the corresponding homogeneous Poisson point process with parameter λ>0 has the finite-dimensional distribution: Pr {N(Bi)=ni,i=1,…,k}=∏i=1k(λ|Bi|)nini!e−λ|Bi|. Homogeneous Poisson point process: Homogeneous Poisson point processes do not depend on the position of the underlying space through its parameter λ , which implies it is both a stationary process (invariant to translation) and an isotropic (invariant to rotation) stochastic process. Similarly to the one-dimensional case, the homogeneous point process is restricted to some bounded subset of {\textstyle \mathbb {R} ^{d}} , then depending on some definitions of stationarity, the process is no longer stationary. Homogeneous Poisson point process: Points are uniformly distributed If the homogeneous point process is defined on the real line as a mathematical model for occurrences of some phenomenon, then it has the characteristic that the positions of these occurrences or events on the real line (often interpreted as time) will be uniformly distributed. More specifically, if an event occurs (according to this process) in an interval (a,b] where a≤b , then its location will be a uniform random variable defined on that interval. Furthermore, the homogeneous point process is sometimes called the uniform Poisson point process (see Terminology). This uniformity property extends to higher dimensions in the Cartesian coordinate, but not in, for example, polar coordinates. Inhomogeneous Poisson point process: The inhomogeneous or nonhomogeneous Poisson point process (see Terminology) is a Poisson point process with a Poisson parameter set as some location-dependent function in the underlying space on which the Poisson process is defined. For Euclidean space Rd , this is achieved by introducing a locally integrable positive function λ:Rd→[0,∞) , such that for every bounded region B the ( d -dimensional) volume integral of λ(x) over region B is finite. In other words, if this integral, denoted by Λ(B) , is: Λ(B)=∫Bλ(x)dx<∞, where dx is a ( d -dimensional) volume element, then for every collection of disjoint bounded Borel measurable sets B1,…,Bk , an inhomogeneous Poisson process with (intensity) function λ(x) has the finite-dimensional distribution: Pr {N(Bi)=ni,i=1,…,k}=∏i=1k(Λ(Bi))nini!e−Λ(Bi). Inhomogeneous Poisson point process: Furthermore, Λ(B) has the interpretation of being the expected number of points of the Poisson process located in the bounded region B , namely Λ(B)=E⁡[N(B)]. Inhomogeneous Poisson point process: Defined on the real line On the real line, the inhomogeneous or non-homogeneous Poisson point process has mean measure given by a one-dimensional integral. For two real numbers a and b , where a≤b , denote by N(a,b] the number points of an inhomogeneous Poisson process with intensity function λ(t) occurring in the interval (a,b] . The probability of n points existing in the above interval (a,b] is given by: Pr {N(a,b]=n}=[Λ(a,b)]nn!e−Λ(a,b). Inhomogeneous Poisson point process: where the mean or intensity measure is: Λ(a,b)=∫abλ(t)dt, which means that the random variable N(a,b] is a Poisson random variable with mean E⁡[N(a,b]]=Λ(a,b) A feature of the one-dimension setting, is that an inhomogeneous Poisson process can be transformed into a homogeneous by a monotone transformation or mapping, which is achieved with the inverse of Λ Counting process interpretation The inhomogeneous Poisson point process, when considered on the positive half-line, is also sometimes defined as a counting process. 
With this interpretation, the process, which is sometimes written as {N(t),t≥0} , represents the total number of occurrences or events that have happened up to and including time t . A counting process is said to be an inhomogeneous Poisson counting process if it has the four properties: N(0)=0; has independent increments; Pr {N(t+h)−N(t)=1}=λ(t)h+o(h); and Pr {N(t+h)−N(t)≥2}=o(h), where o(h) is asymptotic or little-o notation for o(h)/h→0 as h→0 In the case of point processes with refractoriness (e.g., neural spike trains) a stronger version of property 4 applies: Pr {N(t+h)−N(t)≥2}=o(h2) The above properties imply that N(t+h)−N(t) is a Poisson random variable with the parameter (or mean) E⁡[N(t+h)−N(t)]=∫tt+hλ(s)ds, which implies E⁡[N(h)]=∫0hλ(s)ds. Inhomogeneous Poisson point process: Spatial Poisson process An inhomogeneous Poisson process defined in the plane R2 is called a spatial Poisson process It is defined with intensity function and its intensity measure is obtained performing a surface integral of its intensity function over some region. For example, its intensity function (as a function of Cartesian coordinates {\textstyle x} and y ) can be λ(x,y)=e−(x2+y2), so the corresponding intensity measure is given by the surface integral Λ(B)=∫Be−(x2+y2)dxdy, where {\textstyle B} is some bounded region in the plane {\textstyle \mathbb {R} ^{2}} In higher dimensions In the plane, {\textstyle \Lambda (B)} corresponds to a surface integral while in {\textstyle \mathbb {R} ^{d}} the integral becomes a ( {\textstyle d} -dimensional) volume integral. Inhomogeneous Poisson point process: Applications When the real line is interpreted as time, the inhomogeneous process is used in the fields of counting processes and in queueing theory. Examples of phenomena which have been represented by or appear as an inhomogeneous Poisson point process include: Goals being scored in a soccer game. Inhomogeneous Poisson point process: Defects in a circuit boardIn the plane, the Poisson point process is important in the related disciplines of stochastic geometry and spatial statistics. The intensity measure of this point process is dependent on the location of underlying space, which means it can be used to model phenomena with a density that varies over some region. In other words, the phenomena can be represented as points that have a location-dependent density. This processes has been used in various disciplines and uses include the study of salmon and sea lice in the oceans, forestry, and search problems. Inhomogeneous Poisson point process: Interpretation of the intensity function The Poisson intensity function {\textstyle \lambda (x)} has an interpretation, considered intuitive, with the volume element {\textstyle \mathrm {d} x} in the infinitesimal sense: {\textstyle \lambda (x)\,\mathrm {d} x} is the infinitesimal probability of a point of a Poisson point process existing in a region of space with volume {\textstyle \mathrm {d} x} located at {\textstyle x} .For example, given a homogeneous Poisson point process on the real line, the probability of finding a single point of the process in a small interval of width {\textstyle \delta } is approximately {\textstyle \lambda \delta } . In fact, such intuition is how the Poisson point process is sometimes introduced and its distribution derived. Inhomogeneous Poisson point process: Simple point process If a Poisson point process has an intensity measure that is a locally finite and diffuse (or non-atomic), then it is a simple point process. 
For a simple point process, the probability of a point existing at a single point or location in the underlying (state) space is either zero or one. This implies that, with probability one, no two (or more) points of a Poisson point process coincide in location in the underlying space. Simulation: Simulating a Poisson point process on a computer is usually done in a bounded region of space, known as a simulation window, and requires two steps: appropriately creating a random number of points and then suitably placing the points in a random manner. Both of these steps depend on the specific Poisson point process that is being simulated. Step 1: Number of points The number of points $N$ in the window, denoted here by $W$, needs to be simulated, which is done by using a (pseudo)-random number generating function capable of simulating Poisson random variables. Simulation: Homogeneous case For the homogeneous case with the constant $\lambda$, the mean of the Poisson random variable $N$ is set to $\lambda |W|$, where $|W|$ is the length, area or ($d$-dimensional) volume of $W$. Inhomogeneous case For the inhomogeneous case, $\lambda |W|$ is replaced with the ($d$-dimensional) volume integral $\Lambda(W) = \int_W \lambda(x)\, \mathrm{d}x.$ Step 2: Positioning of points The second stage requires randomly placing the $N$ points in the window $W$. Homogeneous case For the homogeneous case in one dimension, all points are uniformly and independently placed in the window or interval $W$. For higher dimensions in a Cartesian coordinate system, each coordinate is uniformly and independently placed in the window $W$. If the window is not a subspace of Cartesian space (for example, inside a unit sphere or on the surface of a unit sphere), then the points will not be uniformly placed in $W$, and a suitable change of coordinates (from Cartesian) is needed. Simulation: Inhomogeneous case For the inhomogeneous case, a couple of different methods can be used depending on the nature of the intensity function $\lambda(x)$. If the intensity function is sufficiently simple, then independent and random non-uniform (Cartesian or other) coordinates of the points can be generated. For example, simulating a Poisson point process on a circular window can be done for an isotropic intensity function (in polar coordinates $r$ and $\theta$), implying it is rotationally invariant, or independent of $\theta$ but dependent on $r$, by a change of variable in $r$ if the intensity function is sufficiently simple. For more complicated intensity functions, one can use an acceptance-rejection method, which consists of using (or 'accepting') only certain random points and not using (or 'rejecting') the other points, based on the ratio: $\frac{\lambda(x_i)}{\Lambda(W)} = \frac{\lambda(x_i)}{\int_W \lambda(x)\, \mathrm{d}x},$ Simulation: where $x_i$ is the point under consideration for acceptance or rejection. General Poisson point process: In measure theory, the Poisson point process can be further generalized to what is sometimes known as the general Poisson point process or general Poisson process by using a Radon measure $\Lambda$, which is a locally finite measure. In general, this Radon measure $\Lambda$ can be atomic, which means multiple points of the Poisson point process can exist in the same location of the underlying space. In this situation, the number of points at $x$ is a Poisson random variable with mean $\Lambda(\{x\})$.
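The two-step simulation recipe described above (Step 1: draw a Poisson number of points for the window; Step 2: place them) can be combined with acceptance-rejection to handle an inhomogeneous intensity. The Python sketch below shows one common variant, often called thinning: candidates are generated from a dominating homogeneous process with a constant intensity `lam_max`, and each candidate at location $x$ is kept with probability $\lambda(x)/\texttt{lam\_max}$, rather than using the exact ratio quoted above. The function and parameter names are illustrative only and assume the intensity function is bounded on the window.

```python
import numpy as np

rng = np.random.default_rng(1)

def inhomogeneous_poisson_2d(intensity, lam_max, x_range, y_range, rng):
    """Simulate an inhomogeneous Poisson point process on a rectangle by thinning.
    `intensity` is a vectorized function (x, y) -> local intensity, assumed <= lam_max."""
    (x0, x1), (y0, y1) = x_range, y_range
    area = (x1 - x0) * (y1 - y0)
    # Step 1: Poisson number of candidate points for the dominating homogeneous process.
    n = rng.poisson(lam_max * area)
    # Step 2: place candidates uniformly, then keep each with probability intensity / lam_max.
    xs = rng.uniform(x0, x1, n)
    ys = rng.uniform(y0, y1, n)
    keep = rng.uniform(size=n) < intensity(xs, ys) / lam_max
    return xs[keep], ys[keep]

# Example with a Gaussian-shaped intensity like the one mentioned earlier:
# lambda(x, y) = 100 * exp(-(x^2 + y^2)), dominated by lam_max = 100 on the window.
xs, ys = inhomogeneous_poisson_2d(
    intensity=lambda x, y: 100.0 * np.exp(-(x**2 + y**2)),
    lam_max=100.0, x_range=(-2.0, 2.0), y_range=(-2.0, 2.0), rng=rng)
print(len(xs), "points kept")
```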
But sometimes the converse is assumed, so the Radon measure Λ is diffuse or non-atomic.A point process N is a general Poisson point process with intensity Λ if it has the two following properties: the number of points in a bounded Borel set B is a Poisson random variable with mean Λ(B) . In other words, denote the total number of points located in B by N(B) , then the probability of random variable N(B) being equal to n is given by: Pr {N(B)=n}=(Λ(B))nn!e−Λ(B) the number of points in n disjoint Borel sets forms n independent random variables.The Radon measure Λ maintains its previous interpretation of being the expected number of points of N located in the bounded region B , namely Λ(B)=E⁡[N(B)]. General Poisson point process: Furthermore, if Λ is absolutely continuous such that it has a density (which is the Radon–Nikodym density or derivative) with respect to the Lebesgue measure, then for all Borel sets B it can be written as: Λ(B)=∫Bλ(x)dx, where the density λ(x) is known, among other terms, as the intensity function. History: Poisson distribution Despite its name, the Poisson point process was neither discovered nor studied by the French mathematician Siméon Denis Poisson; the name is cited as an example of Stigler's law. The name stems from its inherent relation to the Poisson distribution, derived by Poisson as a limiting case of the binomial distribution. This describes the probability of the sum of n Bernoulli trials with probability p , often likened to the number of heads (or tails) after n biased flips of a coin with the probability of a head (or tail) occurring being p . For some positive constant Λ>0 , as n increases towards infinity and p decreases towards zero such that the product np=Λ is fixed, the Poisson distribution more closely approximates that of the binomial.Poisson derived the Poisson distribution, published in 1841, by examining the binomial distribution in the limit of p (to zero) and n (to infinity). It only appears once in all of Poisson's work, and the result was not well known during his time. Over the following years a number of people used the distribution without citing Poisson, including Philipp Ludwig von Seidel and Ernst Abbe. At the end of the 19th century, Ladislaus Bortkiewicz would study the distribution again in a different setting (citing Poisson), using the distribution with real data to study the number of deaths from horse kicks in the Prussian army. History: Discovery There are a number of claims for early uses or discoveries of the Poisson point process. For example, John Michell in 1767, a decade before Poisson was born, was interested in the probability a star being within a certain region of another star under the assumption that the stars were "scattered by mere chance", and studied an example consisting of the six brightest stars in the Pleiades, without deriving the Poisson distribution. This work inspired Simon Newcomb to study the problem and to calculate the Poisson distribution as an approximation for the binomial distribution in 1860.At the beginning of the 20th century the Poisson process (in one dimension) would arise independently in different situations. In Sweden 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, where he proposed to model insurance claims with a homogeneous Poisson process.In Denmark in 1909 another discovery occurred when A.K. 
Erlang derived the Poisson distribution when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was not at the time aware of Poisson's earlier work and assumed that the number phone calls arriving in each interval of time were independent to each other. He then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution.In 1910 Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Their experimental work had mathematical contributions from Harry Bateman, who derived Poisson probabilities as a solution to a family of differential equations, though the solution had been derived earlier, resulting in the independent discovery of the Poisson process. After this time there were many studies and applications of the Poisson process, but its early history is complicated, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and various physical scientists. History: Early applications The years after 1909 led to a number of studies and applications of the Poisson point process, however, its early history is complex, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and others working in the physical sciences. The early results were published in different languages and in different settings, with no standard terminology and notation used. For example, in 1922 Swedish chemist and Nobel Laureate Theodor Svedberg proposed a model in which a spatial Poisson point process is the underlying process to study how plants are distributed in plant communities. A number of mathematicians started studying the process in the early 1930s, and important contributions were made by Andrey Kolmogorov, William Feller and Aleksandr Khinchin, among others. In the field of teletraffic engineering, mathematicians and statisticians studied and used Poisson and other point processes. History: History of terms The Swede Conny Palm in his 1943 dissertation studied the Poisson and other point processes in the one-dimensional setting by examining them in terms of the statistical or stochastic dependence between the points in time. In his work exists the first known recorded use of the term point processes as Punktprozesse in German.It is believed that William Feller was the first in print to refer to it as the Poisson process in a 1940 paper. Although the Swede Ove Lundberg used the term Poisson process in his 1940 PhD dissertation, in which Feller was acknowledged as an influence, it has been claimed that Feller coined the term before 1940. It has been remarked that both Feller and Lundberg used the term as though it were well-known, implying it was already in spoken use by then. Feller worked from 1936 to 1939 alongside Harald Cramér at Stockholm University, where Lundberg was a PhD student under Cramér who did not use the term Poisson process in a book by him, finished in 1936, but did in subsequent editions, which his has led to the speculation that the term Poisson process was coined sometime between 1936 and 1939 at the Stockholm University. Terminology: The terminology of point process theory in general has been criticized for being too varied. In addition to the word point often being omitted, the homogeneous Poisson (point) process is also called a stationary Poisson (point) process, as well as uniform Poisson (point) process. 
The inhomogeneous Poisson point process, as well as being called nonhomogeneous, is also referred to as the non-stationary Poisson process.The term point process has been criticized, as the term process can suggest over time and space, so random point field, resulting in the terms Poisson random point field or Poisson point field being also used. A point process is considered, and sometimes called, a random counting measure, hence the Poisson point process is also referred to as a Poisson random measure, a term used in the study of Lévy processes, but some choose to use the two terms for Poisson points processes defined on two different underlying spaces.The underlying mathematical space of the Poisson point process is called a carrier space, or state space, though the latter term has a different meaning in the context of stochastic processes. In the context of point processes, the term "state space" can mean the space on which the point process is defined such as the real line, which corresponds to the index set or parameter set in stochastic process terminology. Terminology: The measure Λ is called the intensity measure, mean measure, or parameter measure, as there are no standard terms. If Λ has a derivative or density, denoted by λ(x) , is called the intensity function of the Poisson point process. For the homogeneous Poisson point process, the derivative of the intensity measure is simply a constant λ>0 , which can be referred to as the rate, usually when the underlying space is the real line, or the intensity. It is also called the mean rate or the mean density or rate . For λ=1 , the corresponding process is sometimes referred to as the standard Poisson (point) process.The extent of the Poisson point process is sometimes called the exposure. Notation: The notation of the Poisson point process depends on its setting and the field it is being applied in. For example, on the real line, the Poisson process, both homogeneous or inhomogeneous, is sometimes interpreted as a counting process, and the notation {N(t),t≥0} is used to represent the Poisson process.Another reason for varying notation is due to the theory of point processes, which has a couple of mathematical interpretations. For example, a simple Poisson point process may be considered as a random set, which suggests the notation x∈N , implying that x is a random point belonging to or being an element of the Poisson point process N . Another, more general, interpretation is to consider a Poisson or any other point process as a random counting measure, so one can write the number of points of a Poisson point process N being found or located in some (Borel measurable) region B as N(B) , which is a random variable. These different interpretations results in notation being used from mathematical fields such as measure theory and set theory.For general point processes, sometimes a subscript on the point symbol, for example x , is included so one writes (with set notation) xi∈N instead of x∈N , and x can be used for the bound variable in integral expressions such as Campbell's theorem, instead of denoting random points. Sometimes an uppercase letter denotes the point process, while a lowercase denotes a point from the process, so, for example, the point x or xi belongs to or is a point of the point process X , and be written with set notation as x∈X or xi∈X .Furthermore, the set theory and integral or measure theory notation can be used interchangeably. 
For example, for a point process $N$ defined on the Euclidean state space $\mathbb{R}^d$ and a (measurable) function $f$ on $\mathbb{R}^d$, the expression $\int_{\mathbb{R}^d} f(x)\, \mathrm{d}N(x) = \sum_{x_i \in N} f(x_i)$ demonstrates two different ways to write a summation over a point process (see also Campbell's theorem (probability)). More specifically, the integral notation on the left-hand side interprets the point process as a random counting measure, while the sum on the right-hand side suggests a random set interpretation. Functionals and moment measures: In probability theory, operations are applied to random variables for different purposes. Sometimes these operations are regular expectations that produce the average or variance of a random variable. Others, such as characteristic functions (or Laplace transforms) of a random variable, can be used to uniquely identify or characterize random variables and prove results like the central limit theorem. In the theory of point processes there exist analogous mathematical tools, which usually exist in the form of measures and functionals instead of moments and functions respectively. Functionals and moment measures: Laplace functionals For a Poisson point process $N$ with intensity measure $\Lambda$ on some space $X$, the Laplace functional is given by: $L_N(f) = \operatorname{E}\left[e^{-\int_X f(x)\, N(\mathrm{d}x)}\right] = e^{-\int_X (1 - e^{-f(x)})\, \Lambda(\mathrm{d}x)}.$ One version of Campbell's theorem involves the Laplace functional of the Poisson point process. Functionals and moment measures: Probability generating functionals The probability generating function of a non-negative integer-valued random variable leads to the probability generating functional being defined analogously with respect to any non-negative bounded function $v$ on $\mathbb{R}^d$ such that $0 \leq v(x) \leq 1$. For a point process $N$ the probability generating functional is defined as: $G(v) = \operatorname{E}\left[\prod_{x \in N} v(x)\right],$ where the product is performed over all the points in $N$. If the intensity measure $\Lambda$ of $N$ is locally finite, then $G$ is well defined for any measurable function $v$ on $\mathbb{R}^d$. For a Poisson point process with intensity measure $\Lambda$ the generating functional is given by: $G(v) = e^{-\int_{\mathbb{R}^d} [1 - v(x)]\, \Lambda(\mathrm{d}x)},$ which in the homogeneous case reduces to $G(v) = e^{-\lambda \int_{\mathbb{R}^d} [1 - v(x)]\, \mathrm{d}x}.$ Functionals and moment measures: Moment measure For a general Poisson point process with intensity measure $\Lambda$ the first moment measure is its intensity measure: $M_1(B) = \Lambda(B),$ which for a homogeneous Poisson point process with constant intensity $\lambda$ means: $M_1(B) = \lambda |B|,$ where $|B|$ is the length, area or volume (or more generally, the Lebesgue measure) of $B$. The Mecke equation The Mecke equation characterizes the Poisson point process. Let $\mathcal{N}_\sigma$ be the space of all $\sigma$-finite measures on some general space $\mathcal{Q}$. A point process $\eta$ with intensity $\lambda$ on $\mathcal{Q}$ is a Poisson point process if and only if for all measurable functions $f: \mathcal{Q} \times \mathcal{N}_\sigma \to \mathbb{R}_+$ the following holds: $\operatorname{E}\left[\int_{\mathcal{Q}} f(x, \eta)\, \eta(\mathrm{d}x)\right] = \int_{\mathcal{Q}} \operatorname{E}\left[f(x, \eta + \delta_x)\right] \lambda(\mathrm{d}x).$ For further details see the literature on the Mecke equation. Functionals and moment measures: Factorial moment measure For a general Poisson point process with intensity measure $\Lambda$ the $n$-th factorial moment measure is given by the expression: $M^{(n)}(B_1 \times \cdots \times B_n) = \prod_{i=1}^{n} \Lambda(B_i),$ where $\Lambda$ is the intensity measure or first moment measure of $N$, which for some Borel set $B$ is given by $\Lambda(B) = M_1(B) = \operatorname{E}[N(B)].$ For a homogeneous Poisson point process the $n$-th factorial moment measure is simply: $M^{(n)}(B_1 \times \cdots \times B_n) = \lambda^{n} \prod_{i=1}^{n} |B_i|,$ where $|B_i|$ is the length, area, or volume (or more generally, the Lebesgue measure) of $B_i$. Furthermore, the $n$-th factorial moment density is: $\mu^{(n)}(x_1, \dots, x_n) = \lambda^{n}.$
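As a short worked example connecting these functionals to the earlier count distributions (and to the avoidance function discussed next), one can evaluate the Laplace functional at $f = c\,\mathbf{1}_B$ for a bounded set $B$ and a constant $c > 0$; this yields the Laplace transform of $N(B)$, and letting $c \to \infty$ recovers the void probability.

```latex
\begin{align*}
L_N(c\,\mathbf{1}_B) = \operatorname{E}\!\left[e^{-c\,N(B)}\right]
  &= \exp\!\left(-\int_B \left(1 - e^{-c}\right) \Lambda(\mathrm{d}x)\right)
   = \exp\!\left(-\left(1 - e^{-c}\right) \Lambda(B)\right), \\
\lim_{c \to \infty} \operatorname{E}\!\left[e^{-c\,N(B)}\right]
  &= \Pr\{N(B) = 0\} = e^{-\Lambda(B)}.
\end{align*}
```

The first identity is exactly the Laplace transform of a Poisson random variable with mean $\Lambda(B)$, consistent with the Poisson distribution of point counts stated earlier.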
Avoidance function: The avoidance function or void probability $v$ of a point process $N$ is defined in relation to some set $B$, which is a subset of the underlying space $\mathbb{R}^d$, as the probability of no points of $N$ existing in $B$. More precisely, for a test set $B$, the avoidance function is given by: $v(B) = \Pr\{N(B) = 0\}.$ Avoidance function: For a general Poisson point process $N$ with intensity measure $\Lambda$, its avoidance function is given by: $v(B) = e^{-\Lambda(B)}.$ Rényi's theorem Simple point processes are completely characterized by their void probabilities. In other words, complete information about a simple point process is captured entirely in its void probabilities, and two simple point processes have the same void probabilities if and only if they are the same point process. The case for the Poisson process is sometimes known as Rényi's theorem, named after Alfréd Rényi, who discovered the result for the case of a homogeneous point process in one dimension. In one form, Rényi's theorem says that for a diffuse (or non-atomic) Radon measure $\Lambda$ on $\mathbb{R}^d$ and a set $A$ that is a finite union of rectangles (so Borel), if $N$ is a countable subset of $\mathbb{R}^d$ such that: $\Pr\{N(A) = 0\} = v(A) = e^{-\Lambda(A)},$ then $N$ is a Poisson point process with intensity measure $\Lambda$. Point process operations: Mathematical operations can be performed on point processes to get new point processes and develop new mathematical models for the locations of certain objects. One example of an operation is known as thinning, which entails deleting or removing the points of some point process according to a rule, creating a new process with the remaining points (the deleted points also form a point process). Point process operations: Thinning For the Poisson process, the independent $p(x)$-thinning operation results in another Poisson point process. More specifically, a $p(x)$-thinning operation applied to a Poisson point process with intensity measure $\Lambda$ gives a point process of removed points that is also a Poisson point process $N_p$ with intensity measure $\Lambda_p$, which for a bounded Borel set $B$ is given by: $\Lambda_p(B) = \int_B p(x)\, \Lambda(\mathrm{d}x).$ This thinning result for the Poisson point process is sometimes known as Prekopa's theorem. Furthermore, after randomly thinning a Poisson point process, the kept or remaining points also form a Poisson point process, which has the intensity measure $\int_B (1 - p(x))\, \Lambda(\mathrm{d}x).$ Point process operations: The two separate Poisson point processes formed respectively from the removed and kept points are stochastically independent of each other. In other words, if a region is known to contain $n$ kept points (from the original Poisson point process), then this will have no influence on the random number of removed points in the same region. This ability to randomly create two independent Poisson point processes from one is sometimes known as splitting the Poisson point process. Point process operations: Superposition If there is a countable collection of point processes $N_1, N_2, \dots$, then their superposition, or, in set theory language, their union, $N = \bigcup_{i=1}^{\infty} N_i,$ also forms a point process. In other words, any point located in any of the point processes $N_1, N_2, \dots$ will also be located in the superposition of these point processes, $N$. Superposition theorem The superposition theorem of the Poisson point process says that the superposition of independent Poisson point processes $N_1, N_2, \dots$ with mean measures $\Lambda_1, \Lambda_2, \dots$ will also be a Poisson point process with mean measure $\Lambda = \sum_{i=1}^{\infty} \Lambda_i.$
Point process operations: In other words, the union of two (or countably more) Poisson processes is another Poisson process. If a point {\textstyle x} is sampled from a countable {\textstyle n} union of Poisson processes, then the probability that the point x belongs to the {\textstyle j} th Poisson process {\textstyle N_{j}} is given by: Pr {x∈Nj}=Λj∑i=1nΛi. For two homogeneous Poisson processes with intensities {\textstyle \lambda _{1},\lambda _{2}\dots } , the two previous expressions reduce to λ=∑i=1∞λi, and Pr {x∈Nj}=λj∑i=1nλi. Clustering The operation clustering is performed when each point x of some point process N is replaced by another (possibly different) point process. If the original process N is a Poisson point process, then the resulting process Nc is called a Poisson cluster point process. Point process operations: Random displacement A mathematical model may require randomly moving points of a point process to other locations on the underlying mathematical space, which gives rise to a point process operation known as displacement or translation. The Poisson point process has been used to model, for example, the movement of plants between generations, owing to the displacement theorem, which loosely says that the random independent displacement of points of a Poisson point process (on the same underlying space) forms another Poisson point process. Point process operations: Displacement theorem One version of the displacement theorem involves a Poisson point process N on Rd with intensity function λ(x) . It is then assumed the points of N are randomly displaced somewhere else in Rd so that each point's displacement is independent and that the displacement of a point formerly at x is a random vector with a probability density ρ(x,⋅) . Then the new point process ND is also a Poisson point process with intensity function λD(y)=∫Rdλ(x)ρ(x,y)dx. Point process operations: If the Poisson process is homogeneous with λ(x)=λ>0 and if ρ(x,y) is a function of y−x , then λD(y)=λ. In other words, after each random and independent displacement of points, the original Poisson point process still exists. The displacement theorem can be extended such that the Poisson points are randomly displaced from one Euclidean space Rd to another Euclidean space Rd′ , where d′≥1 is not necessarily equal to d Mapping Another property that is considered useful is the ability to map a Poisson point process from one underlying space to another space. Point process operations: Mapping theorem If the mapping (or transformation) adheres to some conditions, then the resulting mapped (or transformed) collection of points also form a Poisson point process, and this result is sometimes referred to as the mapping theorem. The theorem involves some Poisson point process with mean measure Λ on some underlying space. If the locations of the points are mapped (that is, the point process is transformed) according to some function to another underlying space, then the resulting point process is also a Poisson point process but with a different mean measure Λ′ More specifically, one can consider a (Borel measurable) function f that maps a point process N with intensity measure Λ from one space S , to another space T in such a manner so that the new point process N′ has the intensity measure: Λ(B)′=Λ(f−1(B)) with no atoms, where B is a Borel set and f−1 denotes the inverse of the function f . 
If N is a Poisson point process, then the new process N′ is also a Poisson point process with the intensity measure Λ′. Approximations with Poisson point processes: The tractability of the Poisson process means that it is sometimes convenient to approximate a non-Poisson point process with a Poisson one. The overall aim is to approximate both the number of points of some point process and the location of each point by a Poisson point process. There are a number of methods that can be used to justify, informally or rigorously, approximating the occurrence of random events or phenomena with suitable Poisson point processes. The more rigorous methods involve deriving upper bounds on the probability metrics between the Poisson and non-Poisson point processes, while other methods can be justified by less formal heuristics. Approximations with Poisson point processes: Clumping heuristic One method for approximating random events or phenomena with Poisson processes is called the clumping heuristic. The general heuristic or principle involves using the Poisson point process (or Poisson distribution) to approximate events, which are considered rare or unlikely, of some stochastic process. In some cases these rare events are close to being independent, hence a Poisson point process can be used. When the events are not independent but tend to occur in clusters or clumps, and these clumps are suitably defined such that they are approximately independent of each other, then the number of clumps occurring will be close to a Poisson random variable and the locations of the clumps will be close to a Poisson process. Approximations with Poisson point processes: Stein's method Stein's method is a mathematical technique originally developed for approximating random variables such as Gaussian and Poisson variables, which has also been applied to point processes. Stein's method can be used to derive upper bounds on probability metrics, which provide a way to quantify how much two random mathematical objects differ stochastically. Upper bounds on probability metrics such as the total variation and Wasserstein distances have been derived. Researchers have applied Stein's method to Poisson point processes in a number of ways, such as using Palm calculus. Techniques based on Stein's method have been developed to factor into the upper bounds the effects of certain point process operations such as thinning and superposition. Stein's method has also been used to derive upper bounds on metrics of Poisson and other processes such as the Cox point process, which is a Poisson process with a random intensity measure. Convergence to a Poisson point process: In general, when an operation is applied to a general point process, the resulting process is usually not a Poisson point process. For example, if a point process other than a Poisson one has its points randomly and independently displaced, then the resulting process would not necessarily be a Poisson point process.
However, under certain mathematical conditions for both the original point process and the random displacement, it has been shown via limit theorems that if the points of a point process are repeatedly displaced in a random and independent manner, then the finite-dimensional distributions of the point process will converge (weakly) to those of a Poisson point process. Similar convergence results have been developed for thinning and superposition operations, which show that such repeated operations on point processes can, under certain conditions, result in the process converging to a Poisson point process, provided a suitable rescaling of the intensity measure (otherwise the values of the intensity measure of the resulting point processes would approach zero or infinity). Such convergence work is directly related to the results known as the Palm–Khinchin equations, which have their origins in the work of Conny Palm and Aleksandr Khinchin, and helps explain why the Poisson process can often be used as a mathematical model of various random phenomena. Generalizations of Poisson point processes: The Poisson point process can be generalized by, for example, changing its intensity measure or defining it on more general mathematical spaces. These generalizations can be studied mathematically as well as used to mathematically model or represent physical phenomena. Generalizations of Poisson point processes: Poisson-type random measures The Poisson-type random measures (PT) are a family of three random counting measures which are closed under restriction to a subspace, i.e. closed under thinning. These random measures are examples of the mixed binomial process and share the distributional self-similarity property of the Poisson random measure. They are the only members of the canonical non-negative power series family of distributions to possess this property and include the Poisson distribution, negative binomial distribution, and binomial distribution. The Poisson random measure is independent on disjoint subspaces, whereas the other PT random measures (negative binomial and binomial) have positive and negative covariances, respectively. The PT random measures comprise the Poisson random measure, negative binomial random measure, and binomial random measure. Generalizations of Poisson point processes: Poisson point processes on more general spaces For mathematical models the Poisson point process is often defined in Euclidean space, but it has been generalized to more abstract spaces and plays a fundamental role in the study of random measures, which requires an understanding of mathematical fields such as probability theory, measure theory and topology. In general, the concept of distance is of practical interest for applications, while topological structure is needed for Palm distributions, meaning that point processes are usually defined on mathematical spaces with metrics. Furthermore, a realization of a point process can be considered as a counting measure, so point processes are types of random measures known as random counting measures. In this context, the Poisson and other point processes have been studied on a locally compact second countable Hausdorff space. Generalizations of Poisson point processes: Cox point process A Cox point process, Cox process or doubly stochastic Poisson process is a generalization of the Poisson point process obtained by letting its intensity measure Λ itself be random and independent of the underlying Poisson process.
The process is named after David Cox, who introduced it in 1955, though other Poisson processes with random intensities had been independently introduced earlier by Lucien Le Cam and Maurice Quenouille. The intensity measure may be a realization of a random variable or a random field. For example, if the logarithm of the intensity measure is a Gaussian random field, then the resulting process is known as a log Gaussian Cox process. More generally, the intensity measure is a realization of a non-negative locally finite random measure. Cox point processes exhibit a clustering of points, which can be shown mathematically to be greater than that of Poisson point processes. The generality and tractability of Cox processes has resulted in them being used as models in fields such as spatial statistics and wireless networks. Generalizations of Poisson point processes: Marked Poisson point process For a given point process, each random point of the point process can have a random mathematical object, known as a mark, randomly assigned to it. These marks can be as diverse as integers, real numbers, lines, geometrical objects or other point processes. The pair consisting of a point of the point process and its corresponding mark is called a marked point, and all the marked points form a marked point process. It is often assumed that the random marks are independent of each other and identically distributed, yet the mark of a point can still depend on the location of its corresponding point in the underlying (state) space. If the underlying point process is a Poisson point process, then the resulting point process is a marked Poisson point process. Generalizations of Poisson point processes: Marking theorem If a general point process is defined on some mathematical space and the random marks are defined on another mathematical space, then the marked point process is defined on the Cartesian product of these two spaces. For a marked Poisson point process with independent and identically distributed marks, the marking theorem states that this marked point process is also a (non-marked) Poisson point process defined on the aforementioned Cartesian product of the two mathematical spaces, which is not true for general point processes. Generalizations of Poisson point processes: Compound Poisson point process The compound Poisson point process or compound Poisson process is formed by adding random values or weights to each point of a Poisson point process defined on some underlying space, so the process is constructed from a marked Poisson point process, where the marks form a collection of independent and identically distributed non-negative random variables. In other words, for each point of the original Poisson process there is an independent and identically distributed non-negative random variable, and then the compound Poisson process is formed from the sum of all the random variables corresponding to points of the Poisson process located in some region of the underlying mathematical space. If there is a marked Poisson point process formed from a Poisson point process N (defined on, for example, Rd) and a collection of independent and identically distributed non-negative marks {Mi} such that for each point xi of the Poisson process N there is a non-negative random variable Mi, the resulting compound Poisson process is then: C(B) = ∑i=1N(B) Mi, where B ⊂ Rd is a Borel measurable set.
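As a numerical illustration of the compound Poisson construction C(B) = ∑ Mi, the sketch below (assumed parameter values, with exponential marks chosen purely for illustration) draws N(B) from a Poisson distribution and sums that many independent non-negative marks, then compares the empirical mean of C(B) with Λ(B)·E[M].

```java
import java.util.Random;

public class CompoundPoissonDemo {
    public static void main(String[] args) {
        double lambdaB = 4.0;   // Λ(B): expected number of points in the region B (assumed)
        double meanMark = 2.5;  // mean of the non-negative i.i.d. marks M_i (assumed exponential)
        Random rng = new Random(1);

        int trials = 200_000;
        double total = 0.0;
        for (int t = 0; t < trials; t++) {
            // Draw N(B) ~ Poisson(lambdaB) using Knuth's multiplication method.
            double limit = Math.exp(-lambdaB);
            double prod = rng.nextDouble();
            int n = 0;
            while (prod > limit) {
                prod *= rng.nextDouble();
                n++;
            }
            // C(B) = sum of n exponential marks with mean meanMark.
            double c = 0.0;
            for (int i = 0; i < n; i++) {
                c += -meanMark * Math.log(1.0 - rng.nextDouble());
            }
            total += c;
        }
        // By Wald's identity, E[C(B)] = Λ(B) * E[M].
        System.out.printf("empirical E[C(B)] = %.3f, theory = %.3f%n",
                total / trials, lambdaB * meanMark);
    }
}
```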
Generalizations of Poisson point processes: If the general random variables {Mi} take values in, for example, d-dimensional Euclidean space Rd, the resulting compound Poisson process is an example of a Lévy process, provided that it is formed from a homogeneous Poisson point process N defined on the non-negative numbers [0,∞). Failure process with the exponential smoothing of intensity functions The failure process with the exponential smoothing of intensity functions (FP-ESI) is an extension of the nonhomogeneous Poisson process. The intensity function of an FP-ESI is an exponential smoothing function of the intensity functions at the last time points of event occurrences. The model outperforms nine other stochastic processes on eight real-world failure datasets when the models are used to fit the datasets, where model performance is measured in terms of AIC (Akaike information criterion) and BIC (Bayesian information criterion).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Searle's bar method** Searle's bar method: Searle's bar method (named after George Frederick Charles Searle) is an experimental procedure to measure the thermal conductivity of a material. A bar of the material is heated by steam at one end and cooled by water at the other, while the length of the bar is thermally insulated. Then the heat ΔQ propagating through the bar in a time interval Δt is given by (ΔQ/Δt)bar = −kA·ΔTbar/L, where ΔQ is the heat supplied to the bar in time Δt, k is the coefficient of thermal conductivity of the bar, Searle's bar method: A is the cross-sectional area of the bar, ΔTbar is the temperature difference between the two ends of the bar, and L is the length of the bar; and the heat ΔQ absorbed by the water in a time interval Δt is: (ΔQ/Δt)water = Cw·(Δm/Δt)·ΔTwater, where Cw is the specific heat of water, Δm is the mass of water collected during time Δt, and ΔTwater is the difference in the temperature of the water before and after it has passed through the bar. Assuming perfect insulation and no energy loss, then (ΔQ/Δt)bar = (ΔQ/Δt)water, which leads to k = −(Cw·L/A)·(Δm/Δt)·(ΔTwater/ΔTbar)
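A small worked example of the final formula, using hypothetical measurements (the numbers below are illustrative only, chosen to be roughly consistent with a good metallic conductor): it computes the heat carried away by the cooling water per unit time and then the thermal conductivity k, working with the magnitudes of the temperature differences.

```java
public class SearlesBar {
    public static void main(String[] args) {
        // Hypothetical measurements (illustrative values only).
        double area = 1.3e-3;        // A: cross-sectional area of the bar (m^2)
        double length = 0.10;        // L: distance between the thermometers (m)
        double dTbar = 6.0;          // ΔT_bar: temperature drop along the bar (K)
        double massRate = 3.7e-3;    // Δm/Δt: mass flow of cooling water (kg/s)
        double cWater = 4186.0;      // C_w: specific heat of water (J/(kg·K))
        double dTwater = 2.0;        // ΔT_water: rise in water temperature (K)

        // Heat absorbed by the water per unit time.
        double powerWater = cWater * massRate * dTwater;          // W

        // Equating it to the conduction through the bar: k = (ΔQ/Δt) * L / (A * ΔT_bar).
        double k = powerWater * length / (area * dTbar);          // W/(m·K)

        System.out.printf("Heat carried by water: %.1f W%n", powerWater);
        System.out.printf("Thermal conductivity k ≈ %.0f W/(m·K)%n", k);
    }
}
```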
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TestNG** TestNG: TestNG is a testing framework for the Java programming language created by Cédric Beust and inspired by JUnit and NUnit. The design goal of TestNG is to cover a wider range of test categories: unit, functional, end-to-end, integration, etc., with more powerful and easy-to-use functionality. Features: TestNG's main features include: Annotation support. Support for data-driven/parameterized testing (with @DataProvider and/or XML configuration). Support for multiple instances of the same test class (with @Factory). Flexible execution model. TestNG can be run either by Ant via build.xml (with or without a test suite defined), or by an IDE plugin with visual results. There is no TestSuite class; instead, test suites, groups and the tests selected to run are defined and configured by XML files. Concurrent testing: run tests in arbitrarily big thread pools with various policies available (all methods in their own thread, one thread per test class, etc.), and test whether the code is thread-safe. Embeds BeanShell for further flexibility. Default JDK functions for runtime and logging (no dependencies). Dependent methods for application-server testing. Distributed testing: allows distribution of tests on slave machines. Data provider A data provider in TestNG is a method in a test class which provides an array of varied actual values to dependent test methods. Features: Example: The return type of a data provider can be one of the following two types: An array of arrays of objects (Object[][]) where the first dimension's size is the number of times the test method will be invoked and the second dimension contains an array of objects that must be compatible with the parameter types of the test method. Features: An Iterator<Object[]>. The only difference from Object[][] is that an Iterator lets you create your test data lazily. TestNG will invoke the iterator and then the test method with the parameters returned by this iterator, one by one. This is particularly useful if you have a lot of parameter sets to pass to the method and you don't want to create all of them upfront. Features: Tool support TestNG is supported, out-of-the-box or via plug-ins, by each of the three major Java IDEs - Eclipse, IntelliJ IDEA, and NetBeans. It also comes with a custom task for Apache Ant and is supported by the Maven build system. The Hudson continuous integration server has built-in support for TestNG and is able to track and chart test results over time. Most Java code coverage tools, such as Cobertura, work seamlessly with TestNG. Features: Note: TestNG support for Eclipse is only embedded in the Eclipse Marketplace for Eclipse versions up to 2018-09 (4.9). For later versions of Eclipse, TestNG must be manually installed as per the instructions on the TestNG site. Reporting TestNG generates test reports in HTML and XML formats. The XML output can be transformed by the Ant JUnitReport task to generate reports similar to those obtained when using JUnit. Since version 4.6, TestNG also provides a reporter API that permits third-party report generators, such as ReportNG, PDFngreport and TestNG-XSLT, to be used. Comparison with JUnit: TestNG has a longstanding rivalry with another testing tool, JUnit. Each framework has differences and respective advantages. Stack Overflow discussions reflect this controversy. Annotations In JUnit 5, the @BeforeAll and @AfterAll methods have to be declared as static in most circumstances. TestNG does not have this constraint.
TestNG includes four additional setup/teardown annotation pairs, for suites, tests, groups and methods: @BeforeSuite and @AfterSuite, @BeforeTest and @AfterTest, @BeforeGroups and @AfterGroups, and @BeforeMethod and @AfterMethod. TestNG also provides support for automating the testing of an application using Selenium. Parameterized testing Parameterized testing is implemented in both tools, but in quite different ways. Comparison with JUnit: TestNG has two ways of providing varying parameter values to a test method: by setting the testng.xml, and by defining a @DataProvider method (see the sketch below). In JUnit 5, the @ParameterizedTest annotation allows parameterized testing. This annotation is combined with another annotation declaring the source of parameterized arguments, such as @ValueSource or @EnumSource. Using @ArgumentsSource allows the user to implement a more dynamic ArgumentsProvider. In JUnit 4, @RunWith and @Parameters are used to facilitate parameterized tests, where the @Parameters method has to return a collection of parameter arrays (such as a List of Object[]) with the parameterized values, which will be fed into the test class constructor. Comparison with JUnit: Conclusion Different users often prefer certain features of one framework or another. JUnit is more widely popular and often shipped with mainstream IDEs by default. TestNG is noted for extra configuration options and capability for different kinds of testing. Which one is more suitable depends on the context of use and the requirements.
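To make the data-provider mechanism described above concrete, here is a minimal, hypothetical TestNG test class (the class name and data values are invented for illustration) that supplies parameters through an Object[][] @DataProvider.

```java
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class AdditionTest {

    // The data provider returns an Object[][]: the outer dimension is the number of
    // invocations, and each inner array must match the test method's parameter types.
    @DataProvider(name = "additionData")
    public Object[][] additionData() {
        return new Object[][] {
            { 1, 2, 3 },
            { -5, 5, 0 },
            { 10, 15, 25 },
        };
    }

    // TestNG invokes this method once per row supplied by the data provider.
    @Test(dataProvider = "additionData")
    public void addsTwoNumbers(int a, int b, int expectedSum) {
        Assert.assertEquals(a + b, expectedSum);
    }
}
```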
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Integral cryptanalysis** Integral cryptanalysis: In cryptography, integral cryptanalysis is a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. It was originally designed by Lars Knudsen as a dedicated attack against Square, so it is commonly known as the Square attack. It was also extended to a few other ciphers related to Square: CRYPTON, Rijndael, and SHARK. Stefan Lucks generalized the attack to what he called a saturation attack and used it to attack Twofish, which is not at all similar to Square, having a radically different Feistel network structure. Forms of integral cryptanalysis have since been applied to a variety of ciphers, including Hierocrypt, IDEA, Camellia, Skipjack, MISTY1, MISTY2, SAFER++, KHAZAD, and FOX (now called IDEA NXT). Integral cryptanalysis: Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences of pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus.
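As a small illustration of such structured plaintext sets (a hypothetical sketch, not tied to any particular cipher), the following code builds 256 16-bit values that agree everywhere except in one byte, which runs through all 256 possibilities, and verifies that their XOR sum is zero; in an actual attack the analyst would examine the XOR sums of the corresponding ciphertexts instead.

```java
public class IntegralSetDemo {
    public static void main(String[] args) {
        int fixedHigh = 0xA700;              // the constant part of each 16-bit plaintext (arbitrary)
        int[] plaintexts = new int[256];

        // Vary the low byte through all 256 possibilities; the other bits stay fixed.
        for (int b = 0; b < 256; b++) {
            plaintexts[b] = fixedHigh | b;
        }

        // The XOR sum over the whole set is necessarily zero: every bit position is
        // either constant (appearing an even number of times, 256) or balanced
        // (set in exactly 128 of the plaintexts).
        int xorSum = 0;
        for (int p : plaintexts) {
            xorSum ^= p;
        }
        System.out.printf("XOR sum of the 256 chosen plaintexts: 0x%04X%n", xorSum);
    }
}
```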
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cytologia** Cytologia: Cytologia is a peer-reviewed scientific journal covering all aspects of botany. It was established in 1929. According to the Journal Citation Reports, the journal has a 2016 impact factor of 0.913.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Subatomic scale** Subatomic scale: The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent. The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cable theory** Cable theory: Classical cable theory uses mathematical models to calculate the electric current (and accompanying voltage) along passive neurites, particularly the dendrites that receive synaptic inputs at different sites and times. Estimates are made by modeling dendrites and axons as cylinders composed of segments with capacitances cm and resistances rm combined in parallel (see Fig. 1). The capacitance of a neuronal fiber comes about because electrostatic forces are acting through the very thin lipid bilayer (see Figure 2). The resistance in series along the fiber rl is due to the axoplasm's significant resistance to movement of electric charge. History: Cable theory in computational neuroscience has roots leading back to the 1850s, when Professor William Thomson (later known as Lord Kelvin) began developing mathematical models of signal decay in submarine (underwater) telegraphic cables. The models resembled the partial differential equations used by Fourier to describe heat conduction in a wire. History: The 1870s saw the first attempts by Hermann to model neuronal electrotonic potentials also by focusing on analogies with heat conduction. However, it was Hoorweg who first discovered the analogies with Kelvin's undersea cables in 1898 and then Hermann and Cremer who independently developed the cable theory for neuronal fibers in the early 20th century. Further mathematical theories of nerve fiber conduction based on cable theory were developed by Cole and Hodgkin (1920s–1930s), Offner et al. (1940), and Rushton (1951). History: Experimental evidence for the importance of cable theory in modelling the behavior of axons began surfacing in the 1930s from work done by Cole, Curtis, Hodgkin, Sir Bernard Katz, Rushton, Tasaki and others. Two key papers from this era are those of Davis and Lorente de Nó (1947) and Hodgkin and Rushton (1946). History: The 1950s saw improvements in techniques for measuring the electric activity of individual neurons. Thus cable theory became important for analyzing data collected from intracellular microelectrode recordings and for analyzing the electrical properties of neuronal dendrites. Scientists like Coombs, Eccles, Fatt, Frank, Fuortes and others now relied heavily on cable theory to obtain functional insights of neurons and for guiding them in the design of new experiments. History: Later, cable theory with its mathematical derivatives allowed ever more sophisticated neuron models to be explored by workers such as Jack, Rall, Redman, Rinzel, Idan Segev, Tuckwell, Bell, and Iannella. More recently, cable theory has been applied to model electrical activity in bundled neurons in the white matter of the brain. Deriving the cable equation: Note, various conventions of rm exist. Deriving the cable equation: Here rm and cm, as introduced above, are measured per membrane-length unit (per meter (m)). Thus rm is measured in ohm·meters (Ω·m) and cm in farads per meter (F/m). This is in contrast to Rm (in Ω·m2) and Cm (in F/m2), which represent the specific resistance and capacitance respectively of one unit area of membrane (in m2). 
Thus, if the radius, a, of the axon is known, then its circumference is 2πa, and its rm and cm values can be calculated as: rm = Rm/(2πa) (1) and cm = Cm·(2πa) (2). These relationships make sense intuitively, because the greater the circumference of the axon, the greater the area for charge to escape through its membrane, and therefore the lower the membrane resistance (dividing Rm by 2πa); and the more membrane available to store charge (multiplying Cm by 2πa). Deriving the cable equation: The specific electrical resistance, ρl, of the axoplasm allows one to calculate the longitudinal intracellular resistance per unit length, rl (in Ω·m−1), by the equation: rl = ρl/(πa²) (3). The greater the cross-sectional area of the axon, πa², the greater the number of paths for the charge to flow through its axoplasm, and the lower the axoplasmic resistance. Several important avenues of extending classical cable theory have recently seen the introduction of endogenous structures in order to analyze the effects of protein polarization within dendrites and different synaptic input distributions over the dendritic surface of a neuron. Deriving the cable equation: To better understand how the cable equation is derived, first simplify the theoretical neuron even further and pretend it has a perfectly sealed membrane (rm = ∞) with no loss of current to the outside, and no capacitance (cm = 0). A current injected into the fiber at position x = 0 would move along the inside of the fiber unchanged. Moving away from the point of injection and using Ohm's law (V = IR), we can calculate the voltage change as: ΔV = −il·rl·Δx (4), where the negative sign is because current flows down the potential gradient. Deriving the cable equation: Letting Δx go towards zero and having infinitely small increments of x, one can write (4) as ∂V/∂x = −il·rl (5), or il = −(1/rl)·(∂V/∂x) (6). Bringing rm back into the picture is like making holes in a garden hose. The more holes, the faster the water will escape from the hose, and the less water will travel all the way from the beginning of the hose to the end. Similarly, in an axon, some of the current traveling longitudinally through the axoplasm will escape through the membrane. Deriving the cable equation: If im is the current escaping through the membrane per unit length, then the total current escaping along y units must be y·im. Thus, the change of current in the axoplasm, Δil, over a distance Δx from position x = 0 can be written as: Δil = −im·Δx (7), or, using continuous, infinitesimally small increments: ∂il/∂x = −im (8). im can be expressed with yet another formula, by including the capacitance. The capacitance will cause a flow of charge (a current) towards the membrane on the side of the cytoplasm. This current is usually referred to as displacement current (here denoted ic). The flow will only take place as long as the membrane's storage capacity has not been reached. ic can then be expressed as: ic = cm·(∂V/∂t) (9), where cm is the membrane's capacitance and ∂V/∂t is the change in voltage over time. The current that passes the membrane (ir) can be expressed as: ir = V/rm (10), and because im = ir + ic, the following equation for im can be derived if no additional current is added from an electrode: im = −∂il/∂x = V/rm + cm·(∂V/∂t) (11), where ∂il/∂x represents the change per unit length of the longitudinal current. Deriving the cable equation: Combining equations (6) and (11) gives a first version of a cable equation: (1/rl)·(∂²V/∂x²) = cm·(∂V/∂t) + V/rm (12), which is a second-order partial differential equation (PDE). By a simple rearrangement of equation (12) (see later) it is possible to make two important terms appear, namely the length constant (sometimes referred to as the space constant), denoted λ, and the time constant, denoted τ.
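A quick numerical sketch of these per-unit-length relations (the membrane constants and fibre radius below are assumed, order-of-magnitude values, not taken from the source):

```java
public class CablePerUnitLength {
    public static void main(String[] args) {
        // Assumed, order-of-magnitude constants for an unmyelinated fibre.
        double Rm  = 0.2;      // specific membrane resistance (ohm * m^2)
        double Cm  = 1.0e-2;   // specific membrane capacitance (F / m^2)
        double rho = 1.0;      // axoplasmic resistivity rho_l (ohm * m)
        double a   = 2.0e-6;   // fibre radius (m)

        double circumference = 2 * Math.PI * a;          // 2*pi*a
        double crossSection  = Math.PI * a * a;          // pi*a^2

        double rm = Rm / circumference;                  // membrane resistance per unit length (ohm * m)
        double cm = Cm * circumference;                  // membrane capacitance per unit length (F / m)
        double rl = rho / crossSection;                  // longitudinal resistance per unit length (ohm / m)

        System.out.printf("rm = %.3e ohm*m%n", rm);
        System.out.printf("cm = %.3e F/m%n", cm);
        System.out.printf("rl = %.3e ohm/m%n", rl);
    }
}
```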
The following sections focus on these terms. Length constant: The length constant, λ (lambda), is a parameter that indicates how far a stationary current will influence the voltage along the cable. The larger the value of λ, the farther the charge will flow. The length constant can be expressed as λ = √(rm/rl). The larger the membrane resistance, rm, the greater the value of λ, and the more current will remain inside the axoplasm to travel longitudinally through the axon. The higher the axoplasmic resistance, rl, the smaller the value of λ, the harder it will be for current to travel through the axoplasm, and the shorter the distance the current will be able to travel. Length constant: It is possible to solve equation (12) and arrive at the following equation (which is valid in steady-state conditions, i.e. when time approaches infinity): Vx = V0·e^(−x/λ), where V0 is the depolarization at x = 0 (the point of current injection), e is the exponential constant (approximate value 2.71828) and Vx is the voltage at a given distance x from x = 0. When x = λ, then Vx = V0·e^(−1), which means that when we measure V at distance λ from x = 0 we get Vλ = V0/e ≈ 0.368·V0. Thus Vλ is always 36.8 percent of V0. Time constant: Neuroscientists are often interested in knowing how fast the membrane potential, Vm, of an axon changes in response to changes in the current injected into the axoplasm. The time constant, τ, is an index that provides information about that value. τ can be calculated as τ = rm·cm. The larger the membrane capacitance, cm, the more current it takes to charge and discharge a patch of membrane and the longer this process will take. The larger the membrane resistance, rm, the harder it is for a current to induce a change in membrane potential. So the higher the τ, the slower the nerve impulse can travel. That means the membrane potential (the voltage across the membrane) lags further behind current injections. Response times vary from 1–2 milliseconds in neurons that are processing information that needs high temporal precision to 100 milliseconds or longer. A typical response time is around 20 milliseconds. Generic form and mathematical structure: If one multiplies equation (12) by rm on both sides of the equal sign we get (rm/rl)·(∂²V/∂x²) = rm·cm·(∂V/∂t) + V, and recognize λ² = rm/rl on the left side and τ = cm·rm on the right side. The cable equation can now be written in its perhaps best known form: λ²·(∂²V/∂x²) = τ·(∂V/∂t) + V. This is a 1D heat equation or diffusion equation for which many solution methods, such as Green's functions and Fourier methods, have been developed. Generic form and mathematical structure: It is also a special degenerate case of the Telegrapher's equation, where the inductance L vanishes and the signal propagation speed 1/√(LC) is infinite.
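Continuing the previous sketch, the following example (again with assumed values, consistent with the per-unit-length numbers above) computes the length constant λ = √(rm/rl) and the time constant τ = rm·cm, and tabulates the steady-state decay Vx = V0·e^(−x/λ):

```java
public class CableConstants {
    public static void main(String[] args) {
        // Per-unit-length values as produced by the previous sketch (assumed).
        double rm = 1.59e4;    // membrane resistance per unit length (ohm * m)
        double cm = 1.26e-7;   // membrane capacitance per unit length (F / m)
        double rl = 7.96e10;   // longitudinal resistance per unit length (ohm / m)

        // Length constant lambda = sqrt(rm / rl) and time constant tau = rm * cm.
        double lambda = Math.sqrt(rm / rl);   // m
        double tau = rm * cm;                 // s

        System.out.printf("lambda = %.3f mm, tau = %.1f ms%n", lambda * 1e3, tau * 1e3);

        // Steady-state decay of voltage with distance: V(x) = V0 * exp(-x / lambda).
        double v0 = 10.0;  // depolarization at the injection site (mV), assumed
        for (double xMm = 0.0; xMm <= 2.0; xMm += 0.5) {
            double v = v0 * Math.exp(-(xMm * 1e-3) / lambda);
            System.out.printf("x = %.1f mm -> V = %.2f mV%n", xMm, v);
        }
    }
}
```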
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital preservation** Digital preservation: In library and archival science, digital preservation is a formal endeavor to ensure that digital information of continuing value remains accessible and usable. It involves planning, resource allocation, and application of preservation methods and technologies, and it combines policies, strategies and actions to ensure access to reformatted and "born-digital" content, regardless of the challenges of media failure and technological change. The goal of digital preservation is the accurate rendering of authenticated content over time. Digital preservation: The Association for Library Collections and Technical Services Preservation and Reformatting Section of the American Library Association defined digital preservation as a combination of "policies, strategies and actions that ensure access to digital content over time." According to the Harrod's Librarian Glossary, digital preservation is the method of keeping digital material alive so that it remains usable as technological advances render the original hardware and software specifications obsolete. The need for digital preservation mainly arises because of the relatively short lifespan of digital media. Widely used hard drives can become unusable in a few years due to a variety of reasons such as damaged spindle motors, and flash memory (found on SSDs, phones, USB flash drives, and in memory cards such as SD, microSD, and CompactFlash cards) can start to lose data around a year after its last use, depending on its storage temperature and how much data has been written to it during its lifetime. Currently, 5D optical data storage has the potential to store digital data for thousands of years. Archival disc-based media is available, but it is only designed to last for 50 years and it is a proprietary format, sold by just two Japanese companies, Sony and Panasonic. M-DISC is a DVD-based format that claims to retain data for 1,000 years, but writing to it requires special optical disc drives and reading the data it contains requires increasingly uncommon optical disc drives; in addition, the company behind the format went bankrupt. Data stored on LTO tapes require periodic migration, as older tapes cannot be read by newer LTO tape drives. RAID arrays could be used to protect against failure of single hard drives, although care needs to be taken not to mix the drives of one array with those of another. Fundamentals: Appraisal Archival appraisal (or, alternatively, selection) refers to the process of identifying records and other materials to be preserved by determining their permanent value. Several factors are usually considered when making this decision. It is a difficult and critical process because the remaining selected records will shape researchers' understanding of that body of records, or fonds. Appraisal is identified as A4.2 within the Chain of Preservation (COP) model created by the InterPARES 2 project. Archival appraisal is not the same as monetary appraisal, which determines fair market value. Fundamentals: Archival appraisal may be performed once or at the various stages of acquisition and processing. Macro appraisal, a functional analysis of records at a high level, may be performed even before the records have been acquired to determine which records to acquire. More detailed, iterative appraisal may be performed while the records are being processed. Fundamentals: Appraisal is performed on all archival materials, not just digital.
It has been proposed that, in the digital context, it might be desirable to retain more records than have traditionally been retained after appraisal of analog records, primarily due to a combination of the declining cost of storage and the availability of sophisticated discovery tools which will allow researchers to find value in records of low information density. In the analog context, these records may have been discarded or only a representative sample kept. However, the selection, appraisal, and prioritization of materials must be carefully considered in relation to the ability of an organization to responsibly manage the totality of these materials. Fundamentals: Often libraries, and to a lesser extent, archives, are offered the same materials in several different digital or analog formats. They prefer to select the format that they feel has the greatest potential for long-term preservation of the content. The Library of Congress has created a set of recommended formats for long-term preservation. They would be used, for example, if the Library was offered items for copyright deposit directly from a publisher. Fundamentals: Identification (identifiers and descriptive metadata) In digital preservation and collection management, discovery and identification of objects is aided by the use of assigned identifiers and accurate descriptive metadata. An identifier is a unique label that is used to reference an object or record, usually manifested as a number or string of numbers and letters. As a crucial element of metadata to be included in a database record or inventory, it is used in tandem with other descriptive metadata to differentiate objects and their various instantiations.Descriptive metadata refers to information about an object's content such as title, creator, subject, date etc... Determination of the elements used to describe an object are facilitated by the use of a metadata schema. Extensive descriptive metadata about a digital object helps to minimize the risks of a digital object becoming inaccessible.Another common type of file identification is the filename. Implementing a file naming protocol is essential to maintaining consistency and efficient discovery and retrieval of objects in a collection, and is especially applicable during digitization of analog media. Using a file naming convention, such as the 8.3 filename or the Warez standard naming, will ensure compatibility with other systems and facilitate migration of data, and deciding between descriptive (containing descriptive words and numbers) and non-descriptive (often randomly generated numbers) file names is generally determined by the size and scope of a given collection. However, filenames are not good for semantic identification, because they are non-permanent labels for a specific location on a system and can be modified without affecting the bit-level profile of a digital file. Fundamentals: Integrity The cornerstone of digital preservation, "data integrity" refers to the assurance that the data is "complete and unaltered in all essential respects"; a program designed to maintain integrity aims to "ensure data is recorded exactly as intended, and upon later retrieval, ensure the data is the same as it was when it was originally recorded".Unintentional changes to data are to be avoided, and responsible strategies put in place to detect unintentional changes and react as appropriately determined. 
However, digital preservation efforts may necessitate modifications to content or metadata through responsibly-developed procedures and by well-documented policies. Organizations or individuals may choose to retain original, integrity-checked versions of content and/or modified versions with appropriate preservation metadata. Data integrity practices also apply to modified versions, as their state of capture must be maintained and resistant to unintentional modifications. Fundamentals: The integrity of a record can be preserved through bit-level preservation, fixity checking, and capturing a full audit trail of all preservation actions performed on the record. These strategies can ensure protection against unauthorised or accidental alteration. Fixity File fixity is the property of a digital file being fixed, or unchanged. File fixity checking is the process of validating that a file has not changed or been altered from a previous state. This effort is often enabled by the creation, validation, and management of checksums. Fundamentals: While checksums are the primary mechanism for monitoring fixity at the individual file level, an important additional consideration for monitoring fixity is file attendance. Whereas checksums identify if a file has changed, file attendance identifies if a file in a designated collection is newly created, deleted, or moved. Tracking and reporting on file attendance is a fundamental component of digital collection management and fixity. Fundamentals: Characterization Characterization of digital materials is the identification and description of what a file is and of its defining technical characteristics often captured by technical metadata, which records its technical attributes like creation or production environment. Fundamentals: Sustainability Digital sustainability encompasses a range of issues and concerns that contribute to the longevity of digital information. Unlike traditional, temporary strategies, and more permanent solutions, digital sustainability implies a more active and continuous process. Digital sustainability concentrates less on the solution and technology and more on building an infrastructure and approach that is flexible with an emphasis on interoperability, continued maintenance and continuous development. Digital sustainability incorporates activities in the present that will facilitate access and availability in the future. The ongoing maintenance necessary to digital preservation is analogous to the successful, centuries-old, community upkeep of the Uffington White Horse (according to Stuart M. Shieber) or the Ise Grand Shrine (according to Jeffrey Schnapp). Fundamentals: Renderability Renderability refers to the continued ability to use and access a digital object while maintaining its inherent significant properties. Physical media obsolescence Physical media obsolescence can occur when access to digital content requires external dependencies that are no longer manufactured, maintained, or supported. External dependencies can refer to hardware, software, or physical carriers. For example, DLT tape was used for backups and data preservation, but is no longer used. 
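A minimal sketch of the checksum-based fixity checking described above, assuming a SHA-256 digest recorded at ingest and re-computed during a later audit (the file path and workflow are hypothetical):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class FixityCheck {

    // Compute a SHA-256 checksum of a file as a lowercase hex string.
    static String sha256(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path object = Path.of("archive/object-0001.tiff");   // hypothetical archived file

        // At ingest: record the checksum alongside the object's preservation metadata.
        String recorded = sha256(object);

        // On a later audit: recompute and compare to detect unintentional changes.
        String current = sha256(object);
        if (recorded.equals(current)) {
            System.out.println("Fixity check passed: " + current);
        } else {
            System.out.println("Fixity FAILURE: recorded=" + recorded + " current=" + current);
        }
    }
}
```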
Fundamentals: Format obsolescence File format obsolescence can occur when adoption of new encoding formats supersedes use of existing formats, or when associated presentation tools are no longer readily available.While the use of file formats will vary among archival institutions given their capabilities, there is documented acceptance among the field that chosen file formats should be "open, standard, non-proprietary, and well-established" to enable long-term archival use. Factors that should enter consideration when selecting sustainable file formats include disclosure, adoption, transparency, self-documentation, external dependencies, impact of patents, and technical protection mechanisms. Other considerations for selecting sustainable file formats include "format longevity and maturity, adaptation in relevant professional communities, incorporated information standards, and long-term accessibility of any required viewing software". For example, the Smithsonian Institution Archives considers uncompressed TIFFs to be "a good preservation format for born-digital and digitized still images because of its maturity, wide adaptation in various communities, and thorough documentation".Formats proprietary to one software vendor are more likely to be affected by format obsolescence. Well-used standards such as Unicode and JPEG are more likely to be readable in future. Fundamentals: Significant properties Significant properties refer to the "essential attributes of a digital object which affect its appearance, behavior, quality and usability" and which "must be preserved over time for the digital object to remain accessible and meaningful.""Proper understanding of the significant properties of digital objects is critical to establish best practice approaches to digital preservation. It assists appraisal and selection, processes in which choices are made about which significant properties of digital objects are worth preserving; it helps the development of preservation metadata, the assessment of different preservation strategies and informs future work on developing common standards across the preservation community." Authenticity Whether analog or digital, archives strive to maintain records as trustworthy representations of what was originally received. Authenticity has been defined as ". . . the trustworthiness of a record as a record; i.e., the quality of a record that is what it purports to be and that is free from tampering or corruption". Authenticity should not be confused with accuracy; an inaccurate record may be acquired by an archives and have its authenticity preserved. The content and meaning of that inaccurate record will remain unchanged. Fundamentals: A combination of policies, security procedures, and documentation can be used to ensure and provide evidence that the meaning of the records has not been altered while in the archives' custody. Access Digital preservation efforts are largely to enable decision-making in the future. Should an archive or library choose a particular strategy to enact, the content and associated metadata must persist to allow for actions to be taken or not taken at the discretion of the controlling party. Fundamentals: Preservation metadata Preservation metadata is a key enabler for digital preservation, and includes technical information for digital objects, information about a digital object's components and its computing environment, as well as information that documents the preservation process and underlying rights basis. 
It allows organizations or individuals to understand the chain of custody. Preservation Metadata: Implementation Strategies (PREMIS), is the de facto standard that defines the implementable, core preservation metadata needed by most repositories and institutions. It includes guidelines and recommendations for its usage, and has developed shared community vocabularies. Intellectual foundations: Preserving Digital Information (1996) The challenges of long-term preservation of digital information have been recognized by the archival community for years. In December 1994, the Research Libraries Group (RLG) and Commission on Preservation and Access (CPA) formed a Task Force on Archiving of Digital Information with the main purpose of investigating what needed to be done to ensure long-term preservation and continued access to the digital records. The final report published by the Task Force (Garrett, J. and Waters, D., ed. (1996). "Preserving digital information: Report of the task force on archiving of digital information.") became a fundamental document in the field of digital preservation that helped set out key concepts, requirements, and challenges.The Task Force proposed development of a national system of digital archives that would take responsibility for long-term storage and access to digital information; introduced the concept of trusted digital repositories and defined their roles and responsibilities; identified five features of digital information integrity (content, fixity, reference, provenance, and context) that were subsequently incorporated into a definition of Preservation Description Information in the Open Archival Information System Reference Model; and defined migration as a crucial function of digital archives. The concepts and recommendations outlined in the report laid a foundation for subsequent research and digital preservation initiatives. Intellectual foundations: OAIS To standardize digital preservation practice and provide a set of recommendations for preservation program implementation, the Reference Model for an Open Archival Information System (OAIS) was developed, and published in 2012. OAIS is concerned with all technical aspects of a digital object's life cycle: ingest, archival storage, data management, administration, access and preservation planning. The model also addresses metadata issues and recommends that five types of metadata be attached to a digital object: reference (identification) information, provenance (including preservation history), context, fixity (authenticity indicators), and representation (formatting, file structure, and what "imparts meaning to an object's bitstream"). Intellectual foundations: Trusted Digital Repository Model In March 2000, the Research Libraries Group (RLG) and Online Computer Library Center (OCLC) began a collaboration to establish attributes of a digital repository for research organizations, building on and incorporating the emerging international standard of the Reference Model for an Open Archival Information System (OAIS). In 2002, they published "Trusted Digital Repositories: Attributes and Responsibilities." In that document a "Trusted Digital Repository" (TDR) is defined as "one whose mission is to provide reliable, long-term access to managed digital resources to its designated community, now and in the future." 
The TDR must include the following seven attributes: compliance with the reference model for an Open Archival Information System (OAIS), administrative responsibility, organizational viability, financial sustainability, technological and procedural suitability, system security, procedural accountability. The Trusted Digital Repository Model outlines relationships among these attributes. The report also recommended the collaborative development of digital repository certifications, models for cooperative networks, and sharing of research and information on digital preservation with regard to intellectual property rights.In 2004 Henry M. Gladney proposed another approach to digital object preservation that called for the creation of "Trustworthy Digital Objects" (TDOs). TDOs are digital objects that can speak to their own authenticity since they incorporate a record maintaining their use and change history, which allows the future users to verify that the contents of the object are valid. Intellectual foundations: InterPARES International Research on Permanent Authentic Records in Electronic Systems (InterPARES) is a collaborative research initiative led by the University of British Columbia that is focused on addressing issues of long-term preservation of authentic digital records. The research is being conducted by focus groups from various institutions in North America, Europe, Asia, and Australia, with an objective of developing theories and methodologies that provide the basis for strategies, standards, policies, and procedures necessary to ensure the trustworthiness, reliability, and accuracy of digital records over time.Under the direction of archival science professor Luciana Duranti, the project began in 1999 with the first phase, InterPARES 1, which ran to 2001 and focused on establishing requirements for authenticity of inactive records generated and maintained in large databases and document management systems created by government agencies. InterPARES 2 (2002–2007) concentrated on issues of reliability, accuracy and authenticity of records throughout their whole life cycle, and examined records produced in dynamic environments in the course of artistic, scientific and online government activities. The third five-year phase (InterPARES 3) was initiated in 2007. Its goal is to utilize theoretical and methodological knowledge generated by InterPARES and other preservation research projects for developing guidelines, action plans, and training programs on long-term preservation of authentic records for small and medium-sized archival organizations. Challenges: Society's heritage has been presented on many different materials, including stone, vellum, bamboo, silk, and paper. Now a large quantity of information exists in digital forms, including emails, blogs, social networking websites, national elections websites, web photo albums, and sites which change their content over time. With digital media it is easier to create content and keep it up-to-date, but at the same time there are many challenges in the preservation of this content, both technical and economic. Challenges: Unlike traditional analog objects such as books or photographs where the user has unmediated access to the content, a digital object always needs a software environment to render it. These environments keep evolving and changing at a rapid pace, threatening the continuity of access to the content. 
Physical storage media, data formats, hardware, and software all become obsolete over time, posing significant threats to the survival of the content. This process can be referred to as digital obsolescence. Challenges: In the case of born-digital content (e.g., institutional archives, websites, electronic audio and video content, born-digital photography and art, research data sets, observational data), the enormous and growing quantity of content presents significant scaling issues to digital preservation efforts. Rapidly changing technologies can hinder digital preservationists' work and techniques due to outdated and antiquated machines or technology. This has become a common problem and one that is a constant worry for digital archivists: how to prepare for the future. Challenges: Digital content can also present challenges to preservation because of its complex and dynamic nature, e.g., interactive Web pages, virtual reality and gaming environments, learning objects, social media sites. In many cases of emergent technological advances there are substantial difficulties in maintaining the authenticity, fixity, and integrity of objects over time, deriving from the fundamental issue of limited experience with that particular digital storage medium; and while particular technologies may prove to be more robust in terms of storage capacity, there are issues in securing a framework of measures to ensure that the object remains fixed while in stewardship. For the preservation of software as digital content, a specific challenge is the typical non-availability of the source code, as commercial software is normally distributed only in compiled binary form. Without the source code, adaptation (porting) to modern computing hardware or operating systems is most often impossible, so the original hardware and software context needs to be emulated. Another potential challenge for software preservation is copyright, which often prohibits the bypassing of copy-protection mechanisms (Digital Millennium Copyright Act) in cases where software has become an orphaned work (abandonware). An exemption from the United States Digital Millennium Copyright Act to permit bypassing copy protection was approved in 2003 for a period of 3 years to the Internet Archive, which created an archive of "vintage software" as a way to preserve it. The exemption was renewed in 2006 and, as of 27 October 2009, has been indefinitely extended pending further rulemakings "for the purpose of preservation or archival reproduction of published digital works by a library or archive". The GitHub Archive Program has stored all of GitHub's open source code in a secure vault at Svalbard, on the frozen Norwegian island of Spitsbergen, as part of the Arctic World Archive, with the code stored as QR codes. Another challenge surrounding preservation of digital content resides in the issue of scale. The amount of digital information being created, along with the "proliferation of format types", makes creating trusted digital repositories with adequate and sustainable resources a challenge. The Web is only one example of what might be considered the "data deluge". For example, the Library of Congress amassed 170 billion tweets between 2006 and 2010, totaling 133.2 terabytes, and each tweet is composed of 50 fields of metadata. The economic challenges of digital preservation are also great.
Preservation programs require significant up-front investment to create, along with ongoing costs for data ingest, data management, data storage, and staffing. One of the key strategic challenges to such programs is the fact that, while they require significant current and ongoing funding, their benefits accrue largely to future generations. Challenges: Layers of archiving The various levels of security may be represented as three layers: the "hot" (accessible online repositories) and "warm" (e.g. Internet Archive) layers both have the weakness of being founded upon electronics - both would be wiped out in a repeat of the powerful 19th-century geomagnetic storm known as the "Carrington Event". The Arctic World Archive, stored on specially developed film coated with silver halide with a lifespan of 500+ years, represents a more secure snapshot of data, with archiving intended at five-year intervals. Strategies: In 2006, the Online Computer Library Center developed a four-point strategy for the long-term preservation of digital objects that consisted of: Assessing the risks for loss of content posed by technology variables such as commonly used proprietary file formats and software applications. Evaluating the digital content objects to determine what type and degree of format conversion or other preservation actions should be applied. Determining the appropriate metadata needed for each object type and how it is associated with the objects. Providing access to the content. There are several additional strategies that individuals and organizations may use to actively combat the loss of digital information. Strategies: Refreshing Refreshing is the transfer of data between two types of the same storage medium so there are no bitrot changes or alteration of data. For example, transferring census data from an old preservation CD to a new one. This strategy may need to be combined with migration when the software or hardware required to read the data is no longer available or is unable to understand the format of the data. Refreshing will likely always be necessary due to the deterioration of physical media. Strategies: Migration Migration is the transferring of data to newer system environments (Garrett et al., 1996). This may include conversion of resources from one file format to another (e.g., conversion of Microsoft Word to PDF or OpenDocument) or from one operating system to another (e.g., Windows to Linux) so the resource remains fully accessible and functional. Two significant problems face migration as a plausible method of digital preservation in the long term. Because digital objects are subject to a state of near-continuous change, migration may cause problems in relation to authenticity, and migration has proven to be time-consuming and expensive for "large collections of heterogeneous objects, which would need constant monitoring and intervention". Migration can be a very useful strategy for preserving data stored on external storage media (e.g. CDs, USB flash drives, and 3.5" floppy disks). These types of devices are generally not recommended for long-term use, and the data can become inaccessible due to media and hardware obsolescence or degradation. Strategies: Replication Creating duplicate copies of data on one or more systems is called replication. Data that exists as a single copy in only one location is highly vulnerable to software or hardware failure, intentional or accidental alteration, and environmental catastrophes like fire, flooding, etc.
Digital data is more likely to survive if it is replicated in several locations. Replicated data may introduce difficulties in refreshing, migration, versioning, and access control since the data is located in multiple places. Strategies: Understanding digital preservation means comprehending how digital information is produced and reproduced. Because digital information (e.g., a file) can be exactly replicated down to the bit level, it is possible to create identical copies of data. Exact duplicates allow archives and libraries to manage, store, and provide access to identical copies of data across multiple systems and/or environments. Strategies: Emulation Emulation is the replicating of functionality of an obsolete system. According to van der Hoeven, "Emulation does not focus on the digital object, but on the hard- and software environment in which the object is rendered. It aims at (re)creating the environment in which the digital object was originally created." Examples are having the ability to replicate or imitate another operating system. Examples include emulating an Atari 2600 on a Windows system or emulating WordPerfect 1.0 on a Macintosh. Emulators may be built for applications, operating systems, or hardware platforms. Emulation has been a popular strategy for retaining the functionality of old video game systems, such as with the MAME project. The feasibility of emulation as a catch-all solution has been debated in the academic community. (Granger, 2000) Raymond A. Lorie has suggested a Universal Virtual Computer (UVC) could be used to run any software in the future on a yet unknown platform. The UVC strategy uses a combination of emulation and migration. The UVC strategy has not yet been widely adopted by the digital preservation community. Strategies: Jeff Rothenberg, a major proponent of Emulation for digital preservation in libraries, working in partnership with Koninklijke Bibliotheek and Nationaal Archief of the Netherlands, developed a software program called Dioscuri, a modular emulator that succeeds in running MS-DOS, WordPerfect 5.1, DOS games, and more.Another example of emulation as a form of digital preservation can be seen in the example of Emory University and the Salman Rushdie's papers. Rushdie donated an outdated computer to the Emory University library, which was so old that the library was unable to extract papers from the harddrive. In order to procure the papers, the library emulated the old software system and was able to take the papers off his old computer. Strategies: Encapsulation This method maintains that preserved objects should be self-describing, virtually "linking content with all of the information required for it to be deciphered and understood". The files associated with the digital object would have details of how to interpret that object by using "logical structures called "containers" or "wrappers" to provide a relationship between all information components that could be used in future development of emulators, viewers or converters through machine readable specifications. The method of encapsulation is usually applied to collections that will go unused for long periods of time. 
Strategies: Persistent archives concept Developed by the San Diego Supercomputer Center and funded by the National Archives and Records Administration, this method requires the development of comprehensive and extensive infrastructure that enables "the preservation of the organisation of collection as well as the objects that make up that collection, maintained in a platform independent form". A persistent archive includes both the data constituting the digital object and the context that defines the provenance, authenticity, and structure of the digital entities. This allows for the replacement of hardware or software components with minimal effect on the preservation system. This method can be based on virtual data grids and resembles the OAIS Information Model (specifically the Archival Information Package). Strategies: Metadata attachment Metadata is data on a digital file that includes information on creation, access rights, restrictions, preservation history, and rights management. Metadata attached to digital files may be affected by file format obsolescence. ASCII is considered to be the most durable format for metadata because it is widespread, backwards compatible when used with Unicode, and utilizes human-readable characters, not numeric codes. It retains the information itself, but not the structural information with which it is presented. For higher functionality, SGML or XML should be used. Both markup languages are stored in ASCII format, but contain tags that denote structure and format. Preservation repository assessment and certification: A few of the major frameworks for digital preservation repository assessment and certification are described below. A more detailed list is maintained by the U.S. Center for Research Libraries. Preservation repository assessment and certification: Specific tools and methodologies TRAC In 2007, CRL/OCLC published Trustworthy Repositories Audit & Certification: Criteria & Checklist (TRAC), a document allowing digital repositories to assess their capability to reliably store, migrate, and provide access to digital content. TRAC is based upon existing standards and best practices for trustworthy digital repositories and incorporates a set of 84 audit and certification criteria arranged in three sections: Organizational Infrastructure; Digital Object Management; and Technologies, Technical Infrastructure, and Security. TRAC "provides tools for the audit, assessment, and potential certification of digital repositories, establishes the documentation requirements required for audit, delineates a process for certification, and establishes appropriate methodologies for determining the soundness and sustainability of digital repositories". Preservation repository assessment and certification: DRAMBORA Digital Repository Audit Method Based On Risk Assessment (DRAMBORA), introduced by the Digital Curation Centre (DCC) and DigitalPreservationEurope (DPE) in 2007, offers a methodology and a toolkit for digital repository risk assessment. The tool enables repositories to either conduct the assessment in-house (self-assessment) or to outsource the process. Preservation repository assessment and certification: The DRAMBORA process is arranged in six stages and concentrates on the definition of mandate, characterization of the asset base, identification of risks, and the assessment of the likelihood and potential impact of risks on the repository.
The auditor is required to describe and document the repository's role, objectives, policies, activities, and assets, in order to identify and assess the risks associated with these activities and assets and to define appropriate measures to manage them. Preservation repository assessment and certification: European Framework for Audit and Certification of Digital Repositories The European Framework for Audit and Certification of Digital Repositories was defined in a memorandum of understanding signed in July 2010 between the Consultative Committee for Space Data Systems (CCSDS), the Data Seal of Approval (DSA) Board, and the German Institute for Standardization (DIN) "Trustworthy Archives – Certification" Working Group. The framework is intended to help organizations obtain appropriate certification as a trusted digital repository and establishes three increasingly demanding levels of assessment: Basic Certification: self-assessment using 16 criteria of the Data Seal of Approval (DSA). Extended Certification: Basic Certification and an additional externally reviewed self-audit against ISO 16363 or DIN 31644 requirements. Formal Certification: validation of the self-certification with a third-party official audit based on ISO 16363 or DIN 31644. Preservation repository assessment and certification: nestor catalogue of criteria A German initiative, nestor (the Network of Expertise in Long-Term Storage of Digital Resources), sponsored by the German Ministry of Education and Research, developed a catalogue of criteria for trusted digital repositories in 2004. In 2008 the second version of the document was published. The catalogue, aimed primarily at German cultural heritage and higher education institutions, establishes guidelines for the planning, implementation, and self-evaluation of trustworthy long-term digital repositories. The nestor catalogue of criteria conforms to the OAIS reference model terminology and consists of three sections covering topics related to Organizational Framework, Object Management, and Infrastructure and Security. Preservation repository assessment and certification: PLANETS Project In 2002 the Preservation and Long-term Access through Networked Services (PLANETS) project, part of the EU Framework Programmes for Research and Technological Development 6, addressed core digital preservation challenges. The primary goal for Planets was to build practical services and tools to help ensure long-term access to digital cultural and scientific assets. The Open Planets project ended May 31, 2010. The outputs of the project are now sustained by the follow-on organisation, the Open Planets Foundation. On October 7, 2014 the Open Planets Foundation announced that it would be renamed the Open Preservation Foundation to align with the organization's current direction. Preservation repository assessment and certification: PLATTER Planning Tool for Trusted Electronic Repositories (PLATTER) is a tool released by DigitalPreservationEurope (DPE) to help digital repositories identify their self-defined goals and priorities in order to gain the trust of their stakeholders. PLATTER is intended to be used as a complementary tool to DRAMBORA, NESTOR, and TRAC. It is based on ten core principles for trusted repositories and defines nine Strategic Objective Plans, covering such areas as acquisition, preservation and dissemination of content, finance, staffing, succession planning, technical infrastructure, data and metadata specifications, and disaster planning.
The tool enables repositories to develop and maintain documentation required for an audit. Preservation repository assessment and certification: ISO 16363 A system for the "audit and certification of trustworthy digital repositories" was developed by the Consultative Committee for Space Data Systems (CCSDS) and published as ISO standard 16363 on 15 February 2012. Extending the OAIS reference model, and based largely on the TRAC checklist, the standard was designed for all types of digital repositories. It provides a detailed specification of criteria against which the trustworthiness of a digital repository can be evaluated.The CCSDS Repository Audit and Certification Working Group also developed and submitted a second standard, defining operational requirements for organizations intending to provide repository auditing and certification as specified in ISO 16363. This standard was published as ISO 16919 – "requirements for bodies providing audit and certification of candidate trustworthy digital repositories" – on 1 November 2014. Best practices: Although preservation strategies vary for different types of materials and between institutions, adhering to nationally and internationally recognized standards and practices is a crucial part of digital preservation activities. Best or recommended practices define strategies and procedures that may help organizations to implement existing standards or provide guidance in areas where no formal standards have been developed.Best practices in digital preservation continue to evolve and may encompass processes that are performed on content prior to or at the point of ingest into a digital repository as well as processes performed on preserved files post-ingest over time. Best practices may also apply to the process of digitizing analog material and may include the creation of specialized metadata (such as technical, administrative and rights metadata) in addition to standard descriptive metadata. The preservation of born-digital content may include format transformations to facilitate long-term preservation or to provide better access.No one institution can afford to develop all of the software tools needed to ensure the accessibility of digital materials over the long term. Thus the problem arises of maintaining a repository of shared tools. The Library of Congress has been doing that for years, until that role was assumed by the Community Owned Digital Preservation Tool Registry. 
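The metadata attachment strategy described earlier favors ASCII-encoded markup such as XML for durability, and the best practices above note that technical and administrative metadata are often created at the point of ingest. The hedged Python sketch below writes a small ASCII XML record for an ingested file; the element names are invented for illustration rather than taken from any published schema, and a real repository would follow its own agreed metadata profile.

```python
import hashlib
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from pathlib import Path

def write_metadata_record(payload: Path, record: Path) -> None:
    """Write a minimal ASCII-encoded XML metadata record at ingest time.

    The fields below (filename, size, checksum, ingest date, rights) are
    illustrative placeholders, not a standard schema.
    """
    digest = hashlib.sha256(payload.read_bytes()).hexdigest()
    root = ET.Element("object")
    ET.SubElement(root, "filename").text = payload.name
    ET.SubElement(root, "size_bytes").text = str(payload.stat().st_size)
    ET.SubElement(root, "checksum", algorithm="SHA-256").text = digest
    ET.SubElement(root, "ingested").text = datetime.now(timezone.utc).isoformat()
    ET.SubElement(root, "rights").text = "undetermined"  # placeholder value
    ET.ElementTree(root).write(record, encoding="us-ascii", xml_declaration=True)

# Example with hypothetical paths:
# write_metadata_record(Path("report.pdf"), Path("report.pdf.metadata.xml"))
```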
Best practices: Audio preservation Various best practices and guidelines for digital audio preservation have been developed, including: Guidelines on the Production and Preservation of Digital Audio Objects IASA-TC 04 (2009), which sets out the international standards for optimal audio signal extraction from a variety of audio source materials, for analogue to digital conversion and for target formats for audio preservation Capturing Analog Sound for Digital Preservation: Report of a Roundtable Discussion of Best Practices for Transferring Analog Discs and Tapes (2006), which defined procedures for reformatting sound from analog to digital and provided recommendations for best practices for digital preservation Digital Audio Best Practices (2006) prepared by the Collaborative Digitization Program Digital Audio Working Group, which covers best practices and provides guidance both on digitizing existing analog content and on creating new digital audio resources Sound Directions: Best Practices for Audio Preservation (2007) published by the Sound Directions Project, which describes the audio preservation workflows and recommended best practices and has been used as the basis for other projects and initiatives Documents developed by the International Association of Sound and Audiovisual Archives (IASA), the European Broadcasting Union (EBU), the Library of Congress, and the Digital Library Federation (DLF).The Audio Engineering Society (AES) also issues a variety of standards and guidelines relating to the creation of archival audio content and technical metadata. Best practices: Moving image preservation The term "moving images" includes analog film and video and their born-digital forms: digital video, digital motion picture materials, and digital cinema. As analog videotape and film become obsolete, digitization has become a key preservation strategy, although many archives do continue to perform photochemical preservation of film stock."Digital preservation" has a double meaning for audiovisual collections: analog originals are preserved through digital reformatting, with the resulting digital files preserved; and born-digital content is collected, most often in proprietary formats that pose problems for future digital preservation. Best practices: There is currently no broadly accepted standard target digital preservation format for analog moving images. The complexity of digital video as well as the varying needs and capabilities of an archival institution are reasons why no "one-size-fits-all" format standard for long-term preservation exists for digital video like there is for other types of digital records "(e.g., word-processing converted to PDF/A or TIFF for images)".Library and archival institutions, such as the Library of Congress and New York University, have made significant efforts to preserve moving images; however, a national movement to preserve video has not yet materialized". The preservation of audiovisual materials "requires much more than merely putting objects in cold storage". Moving image media must be projected and played, moved and shown. Born-digital materials require a similar approach".The following resources offer information on analog to digital reformatting and preserving born-digital audiovisual content. Best practices: The Library of Congress tracks the sustainability of digital formats, including moving images. The Digital Dilemma 2: Perspectives from Independent Filmmakers, Documentarians and Nonprofit Audiovisual Archives (2012). 
The section on nonprofit archives reviews common practices on digital reformatting, metadata, and storage. There are four case studies. Federal Agencies Digitization Guidelines Initiative (FADGI). Started in 2007, this is a collaborative effort by federal agencies to define common guidelines, methods, and practices for digitizing historical content. As part of this, two working groups are studying issues specific to two major areas, Still Image and Audio Visual. PrestoCenter publishes general audiovisual information and advice at a European level. Its online library has research and white papers on digital preservation costs and formats. Best practices: The Association of Moving Image Archivists (AMIA) sponsors conferences, symposia, and events on all aspects of moving image preservation, including digital. The AMIA Tech Review contains articles reflecting current thoughts and practices from the archivists' perspectives. Video Preservation for the Millennia (2012), published in the AMIA Tech Review, details the various strategies and ideas behind the current state of video preservation. Best practices: The National Archives of Australia produced the Preservation Digitisation Standards which set out the technical requirements for digitisation outputs produced under the National Digitisation Plan. This includes video and audio formats, as well as non-audiovisual formats. The Smithsonian Institution Archives published guidelines regarding file formats used for the long-term preservation of electronic records, which are regarded as open, standard, non-proprietary, and well-established. The guidelines are used for video and audio formats, and other non-audiovisual materials. Best practices: Codecs and containers Moving images require a codec for the decoding process; therefore, determining a codec is essential to digital preservation. In "A Primer on Codecs for Moving Image and Sound Archives: 10 Recommendations for Codec Selection and Management" written by Chris Lacinak and published by AudioVisual Preservation Solutions, Lacinak stresses the importance of archivists choosing the correct codec as this can "impact the ability to preserve the digital object". Therefore, the codec selection process is critical, "whether dealing with born digital content, reformatting older content, or converting analog materials". Lacinak's ten recommendations for codec selection and management are the following: adoption, disclosure, transparency, external dependencies, documentation and metadata, pre-planning, maintenance, obsolescence monitoring, maintenance of the original, and avoidance of unnecessary trans-coding or re-encoding. There is a lack of consensus to date among the archival community as to what standard codec should be used for the digitization of analog video and the long-term preservation of digital video nor is there a single "right" codec for a digital object; each archival institution must "make the decision as part of an overall preservation strategy".A digital container format or wrapper is also required for moving images and must be chosen carefully just like the codec. According to an international survey conducted in 2010 of over 50 institutions involved with film and video reformatting, "the three main choices for preservation products were AVI, QuickTime (.MOV) or MXF (Material Exchange Format)". These are just a few examples of containers. 
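As a hedged illustration of how an archive might script such a transcode, the sketch below shells out to the ffmpeg command-line tool (assumed to be installed) and pairs FFV1 video with FLAC audio in a Matroska container, one lossless combination discussed in the preservation community. It is not a recommendation drawn from the survey above; any institution would choose its own codec and wrapper as part of an overall preservation strategy.

```python
import subprocess

def transcode_for_preservation(source_path: str, target_path: str) -> None:
    """Re-encode a capture losslessly using ffmpeg (must be on PATH)."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", source_path,   # captured or digitized source file
            "-c:v", "ffv1",      # lossless video codec
            "-level", "3",       # FFV1 version 3 bitstream
            "-c:a", "flac",      # lossless audio codec
            target_path,         # e.g. "capture_preservation.mkv"
        ],
        check=True,
    )

# Example with hypothetical file names:
# transcode_for_preservation("tape_capture.mov", "tape_capture_preservation.mkv")
```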
The National Archives and Records Administration (NARA) has chosen the AVI wrapper as its standard container format for several reasons including that AVI files are compatible with numerous open source tools such as VLC.Uncertainty about which formats will or will not become obsolete or become the future standard makes it difficult to commit to one codec and one container." Choosing a format should "be a trade off for which the best quality requirements and long-term sustainability are ensured." Considerations for content creators By considering the following steps, content creators and archivists can ensure better accessibility and preservation of moving images in the long term: Create uncompressed video if possible. While this does create large files, their quality will be retained. Storage must be considered with this approach. Best practices: If uncompressed video is not possible, use lossless instead of lossy compression. The compressed data gets restored while lossy compression alters data and quality is lost. Use higher bit rates (This affects resolution of the image and size of file.) Use technical and descriptive metadata. Use containers and codecs that are stable and widely used within the archival and digital preservation communities. Best practices: Email preservation Email poses special challenges for preservation: email client software varies widely; there is no common structure for email messages; email often communicates sensitive information; individual email accounts may contain business and personal messages intermingled; and email may include attached documents in a variety of file formats. Email messages can also carry viruses or have spam content. While email transmission is standardized, there is no formal standard for the long-term preservation of email messages.Approaches to preserving email may vary according to the purpose for which it is being preserved. For businesses and government entities, email preservation may be driven by the need to meet retention and supervision requirements for regulatory compliance and to allow for legal discovery. (Additional information about email archiving approaches for business and institutional purposes may be found under the separate article, Email archiving.) For research libraries and archives, the preservation of email that is part of born-digital or hybrid archival collections has as its goal ensuring its long-term availability as part of the historical and cultural record.Several projects developing tools and methodologies for email preservation have been conducted based on various preservation strategies: normalizing email into XML format, migrating email to a new version of the software and emulating email environments: Memories Using Email (MUSE), Collaborative Electronic Records Project (CERP), E-Mail Collection And Preservation (EMCAP), PeDALS Email Extractor Software (PeDALS), XML Electronic Normalizing of Archives tool (XENA). Best practices: Some best practices and guidelines for email preservation can be found in the following resources: Curating E-Mails: A Life-cycle Approach to the Management and Preservation of E-mail Messages (2006) by Maureen Pennock. Technology Watch Report 11-01: Preserving Email (2011) by Christopher J Prom. Best Practices: Email Archiving by Jo Maitland. 
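Several of the projects listed above normalize email into XML. As a rough sketch of that idea, the following Python code uses only the standard library's mailbox and xml modules to convert an mbox file into a simple XML document; the element names are made up for illustration and are not the schema used by CERP, EMCAP, or XENA, and attachments are deliberately ignored.

```python
import mailbox
import xml.etree.ElementTree as ET

def mbox_to_xml(mbox_path: str, xml_path: str) -> None:
    """Normalize an mbox mailbox into a simple, illustrative XML document."""
    root = ET.Element("mailbox", source=mbox_path)
    for msg in mailbox.mbox(mbox_path):
        m = ET.SubElement(root, "message")
        # Copy a handful of common headers; a real schema would keep them all.
        for header in ("From", "To", "Date", "Subject", "Message-ID"):
            ET.SubElement(m, header.lower().replace("-", "_")).text = msg.get(header, "")
        body = None
        if msg.is_multipart():
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    body = part.get_payload(decode=True)
                    break
        else:
            body = msg.get_payload(decode=True)
        ET.SubElement(m, "body").text = (body or b"").decode("utf-8", errors="replace")
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

# Example with hypothetical paths:
# mbox_to_xml("inbox.mbox", "inbox.xml")
```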
Best practices: Video game preservation In 2007 the Keeping Emulation Environments Portable (KEEP) project, part of the EU Framework Programmes for Research and Technological Development 7, developed tools and methodologies to keep digital software objects available in their original context. Digital software objects such as video games may be lost because of digital obsolescence and the non-availability of required legacy hardware or operating system software; such software is referred to as abandonware. Because the source code is often no longer available, emulation is the only preservation opportunity. KEEP provided an emulation framework to support the creation of such emulators. KEEP was developed by Vincent Joguin, first launched in February 2009, and was coordinated by Elisabeth Freyre of the French National Library. A community project, MAME, aims to emulate any historic computer game, including arcade games, console games and the like, at a hardware level, for future archiving. Best practices: In January 2012 the POCOS project, funded by JISC, organised a workshop on the preservation of gaming environments and virtual worlds. Personal archiving There are many things consumers and artists can do themselves to help care for their collections at home. The Software Preservation Society is a group of computer enthusiasts that concentrates on finding old software disks (mostly games) and taking a snapshot of the disks in a format that can be preserved for the future. Best practices: "Resource Center: Caring For Your Treasures" by the American Institute for Conservation of Historic and Artistic Works details simple strategies for artists and consumers to care for and preserve their work themselves. The Library of Congress also hosts a list for the self-preserver which includes pointers to programs and guidelines from other institutions that will help the user preserve social media and email, as well as general guidelines (such as caring for CDs). Best practices: Some of the programs listed include: HTTrack: a software tool which allows the user to download a World Wide Web site from the Internet to a local directory, recursively building all directories and getting HTML, images, and other files from the server to their computer. Muse: Muse (short for Memories Using Email) is a program, run by Stanford University, that helps users revive memories using their long-term email archives. Best practices: Scientific research In 2020, researchers reported in a preprint that they found "176 Open Access journals that, through lack of comprehensive and open archives, vanished from the Web between 2000-2019, spanning all major research disciplines and geographic regions of the world" and that in 2019 only about a third of the 14,068 DOAJ-indexed journals ensured the long-term preservation of their content. Some scientific research output is not located on the scientific journal's website but on other sites, such as source-code repositories like GitLab. The Internet Archive archived many – but not all – of the lost academic publications and makes them available on the Web. According to an analysis by the Internet Archive, "18 per cent of all open access articles since 1945, over three million, are not independently archived by us or another preservation organization, other than the publishers themselves". Sci-Hub does academic archiving outside the bounds of contemporary copyright law and also provides access to academic works that do not have an open access license.
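Capturing web content, whether a personal site or a journal that might otherwise vanish, is a recurring theme above. The toy Python sketch below saves a single page and its directly linked assets to a local folder; it only hints at what a dedicated tool such as HTTrack or a service such as the Internet Archive does far more robustly (link rewriting, recursion, politeness, and deduplication are all omitted), and the URL and output directory in the usage comment are placeholders.

```python
import os
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href/src attribute values so linked assets can be fetched too."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def snapshot(url: str, out_dir: str) -> None:
    """Save one page plus its directly linked assets into a local directory."""
    os.makedirs(out_dir, exist_ok=True)
    html = urllib.request.urlopen(url).read()
    with open(os.path.join(out_dir, "index.html"), "wb") as f:
        f.write(html)
    parser = LinkCollector()
    parser.feed(html.decode("utf-8", errors="replace"))
    for link in parser.links:
        absolute = urllib.parse.urljoin(url, link)
        name = os.path.basename(urllib.parse.urlparse(absolute).path) or "asset"
        try:
            data = urllib.request.urlopen(absolute).read()
        except (OSError, ValueError):
            continue  # skip assets that cannot be fetched
        # Note: assets with identical basenames will overwrite each other here.
        with open(os.path.join(out_dir, name), "wb") as f:
            f.write(data)

# Example with placeholder values:
# snapshot("https://example.org/", "site_snapshot")
```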
Best practices: Digital Building Preservation "The creation of a 3D model of a historical building needs a lot of effort." Recent advances in technology have led to the development of 3-D rendered buildings in virtual space. Traditionally, buildings in video games had to be rendered via code, and many game studios have produced highly detailed renderings (see Assassin's Creed). But because most preservationists are not teams of highly capable professional coders, universities have begun developing methods based on 3-D laser scanning. Such work was attempted by the National Taiwan University of Science and Technology in 2009. Their goal was "to build as-built 3D computer models of a historical building, the Don Nan-Kuan House, to fulfill the need of digital preservation." With considerable success, they were able to scan the Don Nan-Kuan House with bulky 10 kg (22 lb) cameras, with only minor touch-ups where the scanners were not detailed enough. More recently, in 2018 in Calw, Germany, a team conducted a scanning of the historic Church of St. Peter and Paul by collecting data via laser scanning and photogrammetry. "The current church's tower is about 64 m high, and its architectonic style is neo-gothic of the late nineteenth century. This church counts with a main nave, a chorus and two lateral naves in each side with tribunes in height. The church shows a rich history, which is visible in the different elements and architectonic styles used. Two small windows between the choir and the tower are the oldest parts preserved, which date to thirteenth century. The church was reconstructed and extended during the sixteenth (expansion of the nave) and seventeenth centuries (construction of tribunes), after the destruction caused by the Thirty Years' War (1618-1648). However, the church was again burned by the French Army under General Mélac at the end of the seventeenth century. The current organ and pulpit are preserved from this time. In the late nineteenth century, the church was rebuilt and the old dome Welsch was replaced by the current neo-gothic tower. Other works from this period are the upper section of the pulpit, the choir seats and the organ case. The stained-glass windows of the choir are from the late nineteenth and early twentieth centuries, while some of the nave's windows are from middle of the twentieth century. Second World War having ended, some neo-gothic elements were replaced by pure gothic ones, such as the altar of the church, and some drawings on the walls and ceilings." With this much architectural variance, the building presented both a challenge and a chance to combine different technologies in a large space with the goal of high-resolution capture. The results were rather good and are available to view online. Education: The Digital Preservation Outreach and Education (DPOE) program, as part of the Library of Congress, serves to foster preservation of digital content through a collaborative network of instructors and collection management professionals working in cultural heritage institutions. Composed of Library of Congress staff, the National Trainer Network, the DPOE Steering Committee, and a community of Digital Preservation Education Advocates, as of 2013 the DPOE had 24 working trainers across the six regions of the United States. In 2010 the DPOE conducted an assessment, reaching out to archivists, librarians, and other information professionals around the country.
A working group of DPOE instructors then developed a curriculum based on the assessment results and on similar digital preservation curricula designed by other training programs, such as LYRASIS, Educopia Institute, MetaArchive Cooperative, University of North Carolina, DigCCurr (Digital Curation Curriculum), and the Cornell University-ICPSR Digital Preservation Management Workshops. The resulting core principles are also modeled on the principles outlined in "A Framework of Guidance for Building Good Digital Collections" by the National Information Standards Organization (NISO). In Europe, Humboldt-Universität zu Berlin and King's College London offer a joint program in Digital Curation that emphasizes both digital humanities and the technologies necessary for long-term curation. The MSc in Information Management and Preservation (Digital) offered by HATII at the University of Glasgow has been running since 2005 and is the pioneering program in the field. Examples of initiatives: The Library of Congress founded the National Digital Stewardship Alliance, which is now hosted by the Digital Library Federation. The British Library is responsible for several programmes in the area of digital preservation and is a founding member of the Digital Preservation Coalition and the Open Preservation Foundation. Its digital preservation strategy is publicly available. The National Archives of the United Kingdom have also pioneered various initiatives in the field of digital preservation. Examples of initiatives: The Centre of Excellence for Digital Preservation was established at C-DAC, Pune, India, as a flagship project under the National Digital Preservation Program (NDPP) sponsored by the Ministry of Electronics & Information Technology, Government of India. A number of open source products have been developed to assist with digital preservation, including Archivematica, DSpace, Fedora Commons, OPUS, SobekCM, and EPrints. The commercial sector also offers digital preservation software tools, such as Ex Libris Ltd.'s Rosetta, Preservica's Cloud, Standard and Enterprise Editions, CONTENTdm, Digital Commons, Equella, intraLibrary, Open Repository, and Vital. Large-scale initiatives: Many research libraries and archives have begun or are about to begin large-scale digital preservation initiatives (LSDIs). The main players in LSDIs are cultural institutions, commercial companies such as Google and Microsoft, and non-profit groups including the Open Content Alliance (OCA), the Million Book Project (MBP), and HathiTrust. The primary motivation of these groups is to expand access to scholarly resources. Large-scale initiatives: Approximately 30 cultural entities, including the 12-member Committee on Institutional Cooperation (CIC), have signed digitization agreements with either Google or Microsoft. Several of these cultural entities are participating in the Open Content Alliance and the Million Book Project. Some libraries are involved in only one initiative and others have diversified their digitization strategies through participation in multiple initiatives. The three main reasons for library participation in LSDIs are access, preservation, and research and development. It is hoped that digital preservation will ensure that library materials remain accessible for future generations. Libraries have a responsibility to guarantee perpetual access to their materials and a commitment to archive their digital materials.
Libraries plan to use digitized copies as backups for works in case they go out of print, deteriorate, or are lost or damaged. Large-scale initiatives: Arctic World Archive The Arctic World Archive is a facility for the preservation of historical and cultural data from several countries, including open source code.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Geographical Analysis (journal)** Geographical Analysis (journal): Geographical Analysis is a quarterly peer-reviewed academic journal published by Wiley-Blackwell on behalf of the Department of Geography (Ohio State University). It was established in 1969 and the current editor-in-chief is Rachel S. Franklin. The journal covers geographical theory, model building, and quantitative methods. These topics together are frequently referred to as geospatial analysis. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.268.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IShell** IShell: iShell is a traditional multimedia authoring environment, similar in many ways to Macromedia Director. A descendant of the Apple Media Tool, iShell is designed to be easy to use, but powerful enough to grow as a user's skill set increases. iShell was first released by Tribeworks in 1999. In July 2006, tribalmedia acquired all rights to iShell. The current version of iShell is 4.5r7. IShell: iShell uses the Key programming language, which is based on Eiffel. This language was previously known as the Apple Media Language (AML) which was part of the Apple Media Tool. Both iShell and the Apple Media Tool were developed by Patrick Soquet, one of the founders of Tribeworks. The two tools share many design features in common. iShell differs in its distribution model from similar applications, allowing users access to the source code. Features: Cross-platform creation and delivery (Macintosh and Windows) Graphical reusable object and event based programming and design environment Support and use of the QuickTime media framework Text support via basic RTF and HTML, and common styled input fields Pluggable architecture for the addition of external third-party plugins and scripts Access local and remote media assets XML creation and parsing through DOM and SAX Flat file text database support Common programming functions and logic for strings, numbers, etc. without the need for scripting Third-party plugins: Numerous developers have taken advantage of iShell's community-source model to build commercial plugins for the software. tribalmedia acts as a reseller for many of these third-party developers. Kromo: Adds several features to iShell, including database integration and filesystem functions. Spunk: A companion program to Kromo, allows developer to incorporate web-based content into iShell projects. ImageLayer: Full integration of Photoshop files. ZebraSpeak: Adds text-to-speech and advanced keyboard interaction to iShell. ZebraTools: Adds numerous new features to iShell. OpenTribe plugins: iSQUALE, DiSx, iStorm, iGDIP, iDream
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Positive Hack Days** Positive Hack Days: Positive Hack Days (PHDays) is an annual international cybersecurity forum. It has been held by Positive Technologies since 2011. PHDays brings together IT and infosec experts, government officials, business representatives, students, and schoolchildren. The forum hosts talks and workshops on the most interesting information security topics, The Standoff cyberexercises, practical competitions in which participants analyze the security of industrial control systems, banking and mobile services, and web apps.PHDays scope and agenda can be compared to those of Black Hat, DEF CON, and Source. The forum addresses the security of government and individuals in today's cyberworld, zero-day attacks and digital investigations, cyberwarfare, and cryptography. Positive Hack Days: The forum takes place in Moscow in May. An attendance fee is required. Free tickets are available for winners of special white hacking contests and for students who participate in the Positive Education program. Presentations are given in Russian and English. PHDays 2011: Who Wins: The first forum was held on May 19, 2011, at a popular club in Moscow. Talks and workshops covered such topics as government control of information security in Russia, remote banking system safety, secure connection in VoIP, protection of data in the cloud, and security of virtualization systems. The key guest speaker of the event was Dmitry Sklyarov. PHDays 2011: Who Wins: During the forum, a capture the flag (CTF) competition was held among information security specialists from different countries. The US team PPP was the winner. There were other hacking contests, and during one of them a participant detected a zero-day vulnerability in Safari for Windows.Among other speakers were experts from Kaspersky Lab, Russian Agricultural Bank, VimpelCom, Rostelecom, Cisco Systems, Leta IT-Company, Positive Technologies, and PwC. About 500 people attended the one-day event. PHDays 2012: Future Now: The second forum was conducted on May 30 and 31, 2012 at Digital October's center of new technologies. Along with six parallel streams of presentations and workshops, a CTF competition and several security-related contests were held again. Topics were divided into two areas: technical (exploiting radio noise, password protection, telecom security, usage of sqlmap) and business (internet banking security, data leakage in government, seeking specialists in information security).The conference featured Bruce Schneier, an American cryptographer and the author of Applied Cryptography, Datuk Mohd Noor Amin (from IMPACT, UN), and the creator of the password cracking tool John the Ripper Alexander Peslyak (known as Solar Designer).Significant events included: demonstration of zero-day vulnerabilities in Windows XP and FreeBSD 8.3, cracking iPhone 4S using the popular application Office Plus, and contests in taking control of AR.Drone and analyzing remote banking system security.Young School, a competition of young scientists' research papers, took place for the first time. PHDays 2012 was attended by 2,000 people. PHDays III: From Both Sides of the Barricade: The third conference was held on May 23 and 24 at the Moscow World Trade Center. The main topics were ICS protection, web application and mobile application security, and preventing attacks against banking systems, as well as cooperation between government, researchers and information society. 
The lead speaker of the third forum was Marc "van Hauser" Heuse, the creator of THC-Hydra, Amap, and SuSEfirewall and the founder of The Hacker Choice.Significant events included a talk from SCADA Strangelove about the security of Siemens SIMATIC software, a workshop on ATM hacking, and a workshop from TOOOL (experts in nondestructive lock opening). The forum featured a model railroad controlled by real industrial systems, the security of which was to be tested by the participants, and the Labyrinth's rooms, with laser field and motion detectors (10).A famous hacker George Hotz (geohot) participated in the CTF contest as a member of PPP. He was the first to unlock the iPhone to use it with other providers besides AT&T. George Hotz also won 2drunk2hack, a contest where participants hack web applications and must finish an alcoholic beverage when they fail.Anatoly Katyushin, a student from Samara nicknamed "heartless," won a $natch contest in which participants tested the security of remote banking systems: he hacked a remote banking system and stole 4,900 rubles.The Russian politician Vladimir Zhirinovsky took part in a discussion about encouraging information security specialists to work within legal boundaries.Over 2,000 people visited the event.A movie about preparation for the forum was released in 2013. PHDays IV: IT Gazes You: The forum took place on May 21 and 22, 2014 at Digital October's center of new technologies in Moscow. Among the main topics were cyberwarfare, IoT, protection of ICS and critical infrastructure components, internet banking system security, and regulation of the information security industry.Alisa Shevchenko detected several zero-day vulnerabilities in Indusoft Web Studio 7.1 during a contest in analyzing ICS security, and won the 1st place in the contest. Other major events included a contest in identifying threats of a smart home, discussion of the security of telecommunications companies, and the lack of really "smart" grids in the power industry. In addition, the participants of information security contests managed to withdraw money from virtual accounts in a remote banking system created specially for the competition and containing typical vulnerabilities of banking systems.The forum saw over 2,500 attendees from around the globe. PHDays 2015: Entering a singularity: The forum took place on May 26 and 27, 2015, at the Moscow World Trade Center. The main topics were security of critical information systems, fraud management, cybercrimes, and incident investigation.Specially introduced at this forum was a new format of CTF games. The teams competed in a fictional state that had its own corporations, banks, stock exchanges, media, and infrastructure. The hacker teams had to complete tasks to earn points: for example, hacking the infrastructure of an energy company whose shares were listed on a stock exchange to give an advantage to industry insiders.There was a contest to break into a real IEC 61850 electrical substation. During the contest, participants managed to temporarily disrupt the organizers' information infrastructure six times, while twice they managed to disconnect consumers from the power grid, and discovered one zero-day vulnerability.PHDays 2015 also hosted a competition organized by Almaz Capital investment fund to identify photo manipulation. The winner was SMTDP Tech. The prize fund was 1.5 million rubles.Over 3,500 people visited the event. 
PHDays 2016: The Standoff: The forum took place on May 17 and 18, 2016, at the Moscow World Trade Center. The topics included protection of cloud computing and virtual infrastructure, business applications and ERP systems, prevention of zero-day attacks, and security of industrial control systems and communication networks. The main theme was a battle between attackers and defenders: the organizers prepared a game, which was a confrontation between the attacker teams (hackers) and the defender teams (SOC employees) on a cyberrange with a mock-up city (City F). In one competition, a teenager from Moscow was able to break into an electrical substation. Over two days, 4,200 people visited the forum. PHDays 2017: Enemy Inside: Enemy Inside was held on May 23–24, 2017 at the World Trade Center in Moscow, Russia. The key themes of the forum were the IoT, the combination of the IoT and SCADA, development of security products, and SSDL approaches. The main competition of the forum was The Standoff. The participants competed at a cyberrange with a fictional megalopolis that had companies with offices, telecom operators, railroads, a CHP, many IoT devices, and other objects. Patrick Wardle, a former NSA and NASA officer, presented a technical review of new macOS malware. Positive Technologies specialists Kirill Puzankov, Sergey Mashukov, and Pavel Novikov spoke about the insecurity of cellular networks. Andrey Masalovich talked about methods of hacking popular websites and systems by using bots. Nearly 5,000 people attended the forum. PHDays 2018: Digital Bet: The forum was held at the Moscow World Trade Center on May 15 and 16, 2018. Top topics included the role of government and regulators in the digitalization of the economy, the digital wave in finance, security of critical information infrastructure, security risk management, and physical security. PHDays 8 speakers included Ilfak Guilfanov, the creator of the IDA Pro disassembler and the Hex-Rays decompiler, and Fernando Gont, a security researcher at SI6 Networks. The Standoff, a cyberbattle between teams of attackers, defenders, and security operations centers, took place at the forum. The battleground was a fictional city whose economy was built on digital technologies. The cyberrange emulated city infrastructure. The Standoff ended in a draw. In addition, PHDays hosted other hacker competitions: participants hacked into surveillance cameras, smart electric meters, and remote banking systems. The American channel ABC News broadcast a video about the forum. For the first time, PHDays hosted Positive Hard Days, an IT music festival featuring six bands. Over 5,000 people were at the event. PHDays 2019: Breaking the Constant: PHDays 9 was held on May 21–22, 2019, in Moscow at the Crocus Expo International Exhibition Center. It included over 100 presentations and workshops by Russian and foreign information security experts and IT business representatives. The keynote speaker was German security researcher Carsten Knoll. The forum hosted hacking and data protection competitions, including The Standoff, a cyberbattle between attackers and defenders. The best attacker teams from PHDays 9 received an invitation to the contest finals at the HITB+ CyberWeek conference in Abu Dhabi, which took place on October 12–17, 2019. PHDays 2019: Breaking the Constant: For the first time at PHDays, with the support of FinCERT (Bank of Russia) and CODDY (a programming school), a children's track was held, The Standoff Kids.
Young guests aged 8 to 13 were introduced to the basics of cyberliteracy, as well as information and financial security.On the second day of the forum, the final stage of the Positive Wave music IT festival took place. The winner was the band Raev Clan, and the People's Choice Award went to the band Of Titans and Men.Positive Hack Days 9 brought together over 8,000 attendees. PHDays 2019: Breaking the Constant: The Standoff In 2020, PHDays was cancelled because of the coronavirus pandemic. However, in November 2020, the organizers isolated The Standoff (cyberexercises held at PHDays) from the forum, making it a separate event during which an online conference took place. The main theme of the event was digital threat modeling. For this purpose, an entire cyberrange was created that included the model of a virtual city with control systems that mimicked the same systems of real power substations, oil refineries, and the infrastructure of modern cities. PHDays 2021: The Origin: PHDays 10 was held on May 20 and 21, 2021, at the Moscow World Trade Center. Its main topic was the increase of digitalization during the pandemic and the need to review the existing cybersecurity approaches. Maxut Shadayev, Minister of Digital Development, Communications, and Mass Media of the Russian Federation, took part in the forum's plenary session.The attackers had to trigger business-critical events at The Standoff cyberbattle. These included specific events that threaten a particular enterprise and could lead to unacceptable consequences for the enterprise. For example, the attackers had to halt the supply of gas, cause electricity failure, or design a railway crash. 33 unique business-critical events were triggered at the cyberrange—54% of the total number of risks listed in the competition program. The attacker teams submitted a total of 84 reports of successful task completion to the jury.PHDays 10 brought together 2,500 people. PHDays 2022: INdependence: PHDays 11 was held on May 18 and 19, 2022 at the Moscow World Trade Center. Its main theme was independence from imports in the field of information security and preservation of digital sovereignty. The program included about 100 reports, sections, and round tables, in which more than 250 speakers took part. The forum featured The Standoff 365 Bug Bounty platform. There were events dedicated to cybersecurity investments, traditional competitions, Positive Wave and HackerToon creative festivals, the finals of the first All-Russian open source project contest, and the NFT kidnapping contest.Over 100 guests visited the live broadcast studio, including Russian Minister of Digital Development, Telecommunications, and Mass Media Maxut Shadaev and official spokesperson of the Russian Foreign Ministry Maria Zakharova.Spectators and participants of The Standoff cyberbattle witnessed the butterfly effect: they saw how an unacceptable event in one industry can affect other industries.PHDays 11 became the most attended event yet: 8,700 people visited the forum venue at the Moscow World Trade Center. Features: In addition to technical presentations, workshops, contests, and discussions on the IT industry regulation and business development, PHDays hosts a large number of activities aimed at creating a free cyberpunk atmosphere.Famous rock bands, such as Smyslovye Gallyutsinatsii, Neschastny Sluchai, and Undervud have performed at the forum's closing ceremony throughout the years. 
In 2014, cyberpunk films were shown at the forum at night, and during the break between presentations there was an audio show called "Model for Assembly." In 2018, the Positive Hard Days music festival was added to the forum's program. In 2019, Sergey Bobunets, the leader of Smyslovye Gallyutsinatsii, and Boris Barabanov, a music columnist for the Kommersant newspaper and a music producer, joined the jury of the contest (renamed Positive Wave). Features: Six teams took part in the Positive Wave finals at PHDays 2022. The Serious Men (SIBUR Digital) won the contest and received a check for 100,000 rubles and certificates for tuition at the Musical Wave school.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CLIST** CLIST: CLIST (Command List) (pronounced "C-List") is a procedural programming language for TSO in MVS systems. It originated in OS/360 Release 20 and has assumed a secondary role since the availability of Rexx in TSO/E Version 2. The term CLIST is also used for command lists written by users of NetView.In its basic form, a CLIST program (or "CLIST" for short) can take the form of a simple list of commands to be executed in strict sequence (like a DOS batch file (*.bat) file). However, CLIST also features If-Then-Else logic as well as loop constructs. CLIST: CLIST is an interpreted language. That is, the computer must translate a CLIST every time the program is executed. CLISTs therefore tend to be slower than programs written in compiled languages such as COBOL, FORTRAN, or PL/1. (A program written in a compiled language is translated once to create a "load module" or executable.) CLIST can read/write MVS files and read/write from/to a TSO terminal. It can read parameters from the caller and also features a function to hold global variables and pass them between CLISTs. A CLIST can also call an MVS application program (written in COBOL or PL/I, for example). CLISTs can be run in background (by running JCL which executes the TSO control program (IKJEFT01)). TSO I/O screens and menus using ISPF dialog services can be displayed by CLISTs. CLIST: Compare the function of CLIST with that provided by REXX. Example programs: PROC 0 WRITE HELLO WORLD! Adding If-Then-Else logic:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trinder spot test** Trinder spot test: The Trinder spot test is a diagnostic test used in medicine to determine exposure to salicylates, particularly to salicylic acid. The test employs the Trinder reagent (a.k.a. Trinder solution) which is mixed with a patient's urine. The colour change, resulting from the Trinder reaction, is immediate, enabling rapid bedside assessment.The Trinder solution/reagent is a pre-mixed solution of 10% ferric chloride. It can be prepared by combining 40 g of mercuric chloride and 40 g of ferric nitrate in 850 ml of type II deionized water, and then adding 10 ml of concentrated hydrochloric acid to the solution and diluting to a volume of 1 litre with more type II deionized water.The test for the Trinder reaction is to mix 1 ml of urine with 1 ml of the Trinder reagent in a test tube. The test is positive if a colour change results. The specific colour changes are: blue or purple positive test no change negative test brown false-positive test caused by the presence of phenothiazinesThe test has a sensitivity of 94% and a specificity of 74% for identifying patients whose salicylate concentrations are greater than 30 mg per decilitre (2.17 mmol/L). False positive concentrations (2.8 to 14.3 mg per decilitre) have been reported to occur in neonates with hyperbilirubinemia, premature neonates, and children who are seriously ill (e.g. children who have extensive burns).The reaction between iron(III) and pharmaceuticals was first adapted for clinical use by P. Trinder (after whom the test, reaction, and reagent are now named), of the Biochemistry Department of the Royal Infirmary in Sunderland, in 1954 (see the article listed in further reading). Salicylic acid, salicylamide, and methyl salicylate all react with iron(III) via the phenol group which is next to their –COOH, –CONH2, or –COOCH3 functional groups. The Trinder reaction has been used for the determination of the presence of oxytetracycline in 1991, of ciprofloxacin in 1992, and of norfloxacin in 1993, in each case using a solution of iron(III) in sulphuric acid. It has also been used for the determination of the presence of bromazepam in 1992, using an iron(II) solution in a hydrochloric acid rather than an iron(III) solution.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**5-Hydroxyhydantoin** 5-Hydroxyhydantoin: 5-Hydroxyhydantoin is an oxidation product of 2′-deoxycytidine. If not repaired, it may be processed by DNA polymerases that induce mutagenic processes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Open edX** Open edX: The Open edX platform is the open-source software whose development led to the creation of the edX organization. On June 1, 2013, edX open sourced the platform, naming it Open edX to distinguish it from the organization itself. The source code can be found on GitHub. The platform was originally developed as a research project at MIT, with maintenance transferred to edX in 2012. Open edX: When edX was acquired in 2021 by 2U, the Open edX team and maintenance were transferred to the Center for Reimagining Learning (tCRIL), a nonprofit founded by Harvard and MIT with the proceeds from the acquisition. In 2023, the nonprofit was renamed the Axim Collaborative. Uses: Open edX was designed for the edX project, which remains the largest global installation as of 2022, with over 3000 courses and 500,000 regular users. The Open edX community maintains a catalog of other installations, including fully-hosted learning sites open to public courses and 350 other instances run by organizations of all sizes.An Open edX marketplace also features partners that provide various services to community members running their own instances in multiple languages. Software: The platform has been released one to two times a year since 2013. Each release is named after a tree, honoring the tree of knowledge. The Open edX server-side software is based on Python, with Django as the web application framework. Community: Platform design and development have been co-designed with its community from early in the project's history. The community maintains several working groups focused on marketing, build-test-release cycles, translation, data design, front-end design, and code deprecation.The community hosts an annual Open edX Conference, which rotates worldwide each year. In 2022 it was held in Portugal.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Syringe driver** Syringe driver: A syringe driver, also known as a syringe pump, is a small infusion pump, used to gradually administer small amounts of fluid (with or without medication) to a patient or for use in chemical and biomedical research. Some syringe drivers can both infuse and withdraw solutions. Uses: Syringe drivers can be used for electrospinning, electrospraying, microdialysis, microfluidics, dispensing/dilution, tissue perfusion, and fluid circulation. Uses: Intravenous therapy Syringe drivers are useful for delivering intravenous (IV) therapies over several minutes. They infuse solutions at a constant rate. In the case of a medication which should be slowly pushed in over the course of several minutes, this device saves staff time and reduces medical errors. It is useful for patients who cannot take medicines orally (such as those with difficulty swallowing), and for medications too harmful to be taken orally. Uses: Palliative care Syringe drivers are particularly useful in palliative care, to continuously administer analgesics (painkillers), antiemetics (medication to suppress nausea and vomiting) and other drugs. This prevents periods during which medication levels in the blood are too high or too low, and avoids the use of multiple tablets. As medication is administered subcutaneously, the area of administration is practically limitless, although edema may interfere with the action of some drugs. Uses: Research Syringe pumps are useful in microfluidic applications, such as microreactor design and testing, and also in chemistry for slow incorporation of a fixed volume of fluid into a solution. In enzyme kinetics studies, syringe drivers can be used to observe rapid kinetics as part of a stopped flow apparatus. They are also sometimes used as laboratory media dispensers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Journal of the Geological Society** Journal of the Geological Society: The Journal of the Geological Society is a peer-reviewed scientific journal published by the Geological Society of London. It covers research in all aspects of the Earth sciences.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dimestrol** Dimestrol: Dimestrol (brand names Depot-Cyren, Depot-Oestromon), also known as dianisylhexene, 4,4'-dimethoxy-α,α'-diethylstilbene, diethylstilbestrol dimethyl ether, and dimethoxydiethylstilbestrol, is a synthetic nonsteroidal estrogen of the stilbestrol group which is related to diethylstilbestrol. It has been used clinically as a hormonal therapy in cases of delayed female puberty, hypogonadism, menopausal, and postmenopausal symptoms. It is known to induce the development of female secondary sexual characteristics in the case of female delayed puberty or hypogonadism. The drug has also been used as a growth promoter in livestock.DES is a known endocrine disrupting chemical. Molecularly, it is known to increase the risk of aneuploidy via interference with microtubule assembly.Prior to the 1950s, DES was widely prescribed to pregnant women to prevent miscarriage and preterm labor. A study released in the 1950s found that women who were exposed to DES were at increased risk for cervical and vaginal clear cell adenocarcinoma. Shortly after this finding, the FDA discouraged the prescription of DES to pregnant women. Children were also affected by the maternal use of DES during their gestation. Study findings showed that daughters were more likely to develop fertility complications such as premature delivery, neonatal death, miscarriage, ectopic pregnancy, stillbirth, infertility, and preeclampsia. DES exposed sons may also experience genital abnormalities but no conclusive increased risk of infertility.In the case of suspected or known exposure to DES before, women are encouraged to receive pelvic examinations, PAP tests, biopsies, and breast examinations. Men should receive routine examinations from their physician in the case of suspected or potential exposure.The medication has a long duration of action of 6 weeks given by intramuscular injection.
**Theology Digest** Theology Digest: Theology Digest (1953–2010) summarized selected recent articles from over 400 theological journals. Some were in the form of formal summaries, approved by the authors of the articles, while some were briefer abstracts for whose accuracy the editors assumed responsibility. The digest was published quarterly. In each year, three issues contained the abstracts and summaries, and the fourth contained selected lectures from the series of Bellarmine Lectures and elsewhere. All issues contained a book survey as well, covering over 200 books per issue. The Digest was founded by Cyril A. Vollert, S.J., and Gerald F. Van Ackeren, S.J., at the Jesuit divinity school, St. Mary's College in Kansas in 1953, but those of other denominations became involved as well. Theology Digest was based at Saint Louis University from 1967 until 2010.
**Journal of Dermatological Science** Journal of Dermatological Science: Journal of Dermatological Science is a medical journal that covers the entire scope of dermatology, from molecular studies to clinical investigations. The journal is published by Elsevier. Abstracting and indexing: The journal is abstracted and indexed in Science Citation Index, Web of Science, Embase, BIOSIS Citation Index, PubMed/Medline, Abstracts on Hygiene and Communicable Diseases, and Elsevier BIOBASE. According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.408.
**Two-legged tie** Two-legged tie: In sports (particularly association football), a two-legged tie is a contest between two teams which comprises two matches or "legs", with each team as the home team in one leg. The winning team is usually determined by aggregate score, the sum of the scores of the two legs. For example, if the scores of the two legs are: First leg: Team A 4–1 Team B Second leg: Team B 2–1 Team A. Then the aggregate score will be Team A 5–3 Team B, meaning Team A wins the tie. In some competitions, a tie is considered to be drawn if each team wins one leg, regardless of the aggregate score. Two-legged ties can be used in knockout cup competitions and playoffs. Two-legged tie: In North America, the equivalent term is home-and-home series or, if decided by aggregate, two-game total-goals series. Use: In association football, two-legged ties are used in the later stages of many international club tournaments, including the UEFA Champions League and the Copa Libertadores; in many domestic cup competitions, including the Coppa Italia and the Copa del Rey; in domestic league play-offs, including the Football League play-offs; and in national-team playoffs in some qualification tournaments, including FIFA World Cup qualification. Use: In ice hockey, the National Hockey League used two-game, total-goals series in the early years of its playoffs. It applied to all its playoffs from 1918 to 1926, and the early rounds until 1937, when it completed the switch to best-of-n series; Rendez-vous '87 (which pitted a team of NHL All-Stars against the Soviet Union) was the only two-legged tie to be held in the league's history after 1937. The NCAA Men's Ice Hockey Championship also used a two-game total goals format for much of its history. Use: In rugby union, two-legged matches are used in the qualifying stages of the Rugby World Cup. The semifinals of the Italian National Championship of Excellence are also two-legged, as are the semifinals and final of England's second-tier league, the RFU Championship. Use: In basketball, the two top European club competitions, the Euroleague and Eurocup, both use two-legged ties in the qualifying rounds that determine the clubs advancing to each competition's group phase. The Eurocup also uses two-legged ties in its quarterfinal round, which will be a separate phase of the competition starting in 2009–10. The French Pro A league used two-legged ties in all of its playoff rounds, except for the one-off final, until the 2006–07 season. At that time, all of its playoff rounds leading up to the final, which remained a single match through 2011–12, were changed to best-of-three series. The final changed to best-of-five starting in 2012–13. Use: In Gaelic football, two-legged finals were used for five seasons of the National Football League, the last in 1988–89. The International Rules Series was also two-legged in 1998–2013 and from 2017 onward. In Canadian football, two-legged total point series were occasionally used by the Canadian Football League and their predecessor leagues in the postseason, most recently in the 1986 playoffs. Use: In Arena football, the playoff semifinals (but not the Arena Bowl itself) are decided, as of the 2018 season, by a two-legged total points playoff. In one 2018 semifinal, the first game ended in a tie, and went to overtime. However, the winner of the second game won by a larger margin (within regulation time) and was awarded overall victory based on total aggregate points.
Use: Outside of sports, the American game shows Jeopardy!, Wheel of Fortune, and The Challengers have used the two-legged tie in the final round of tournament play at some point in their history. Tiebreaking: If the aggregate score is tied after the two legs, various methods can be used to break ties. Under the away goals rule, the team that scored more away goals advances. If away goals are equal, or are not considered, then the tie may be decided by extra time and/or a penalty shootout. Replays, at the second-leg venue or a neutral venue, were formerly used in European club competitions. In the Liguilla (playoffs) of the Primera División de México, the team with the better regular-season record advances; some leagues take the two teams' record against one another into account. In the promotion playoffs in Italy's Serie B (which do not necessarily occur in a given season), two-legged ties that are level on aggregate at the end of regulation time of the second leg go to extra time (away goals are not used); if the tie remains level after extra time, the team that finished higher in the league table advances. Second leg home advantage: Each team hosts one match, and there is no intended advantage to whether a team plays at home first or second. However, many managers and players believe that the team playing at home for the second leg has a slight advantage. The thinking is that the team playing away for the first leg can play it safe there (a draw or even a slight defeat is considered a favorable result), and then "win" the tie at home in the second leg, helped where applicable by the away goals rule. Additionally, hosting the second match also gives an advantage, as the hosting team may get to play extra time or a penalty shootout in their home stadium if a tiebreaker is needed. A statistical analysis of roughly 12,000 matches from the European club competitions between 1956 and 2007 showed that around 53% of teams playing at home in the second leg won the tie (even after allowing for the fact that teams playing at home in the second leg tend to be better teams). In the case of World Cup intercontinental playoffs, the team that plays the second leg at home has won 61% of ties. In many competitions where two-legged ties involve seeded and unseeded teams, the seeded team is given home advantage in the second leg. For example, in the UEFA Champions League round of 16, the group winners play the second leg at home against the group runners-up. In both the UEFA Europa League and UEFA Europa Conference League knockout round play-offs, the group runners-up play the second leg at home against the higher competition's third-place team from the group stage, while in the round of 16, the group winners play the second leg at home against these knockout round playoff winners. Second leg home advantage: Until the 2016 edition of the Copa do Brasil, in the first two rounds which were played as two-legged ties, if the away team won the first leg by two or more goals, they would progress straight to the next round without needing to play the second leg, which they would have played at home. However, the second leg would still have to be played if the home team won the first leg by two or more goals. Alternatives: In knockout competitions, alternatives to two-legged ties include: single-leg ties, either where one team has home advantage, as in all rounds of the FA Cup except the semi-finals and finals. When a replay is necessary, it may be played at the home ground of the opposite team.
Two-legged ties are seen as fairer, since they give neither team home advantage; conversely, in the National Football League, home advantage is a reward for being the better seed or, in the opening wild-card round, winning the division. Alternatives: or played at a neutral venue, as in the final match of many tournaments, including the UEFA Champions League Final, the FA Cup Semi-finals and Final, and the NFL's Super Bowl. Neutral venues may be inconvenient for a team's fans to travel to, and due to this and the much higher prices for such a marquee event, a championship at a neutral site often draws a crowd of a much different nature than a crowd at a regular season contest. If the venue is picked before the teams playing are known, it is possible for the team that normally plays at the neutral venue to reach the match: an example is the 1984 European Cup Final where Liverpool F.C. played A.S. Roma at the Stadio Olimpico, Roma's home ground (despite this, Roma had been drawn as the technical away team for this match). Alternatives: best-of-n series, where the team winning more matches wins the series. These are common in major Canadian and American sports leagues; games cannot be drawn and series are typically best of 3, 5 or 7, though 9-game series are sometimes used. Such series are typically structured with alternating home venues so that the higher-ranked team gets the extra game (if necessary).
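To make the aggregate-score and away-goals rules above concrete, here is a minimal sketch of the decision logic; the function name, the tuple convention for the two legs, and the fallback value "undecided" (standing in for extra time or a penalty shootout) are illustrative choices rather than any competition's official algorithm.

```python
def decide_tie(first_leg, second_leg, away_goals_rule=True):
    """Decide a two-legged tie.

    first_leg:  (goals for Team A at home, goals for Team B away)
    second_leg: (goals for Team B at home, goals for Team A away)
    Returns "A", "B", or "undecided" (to be settled by extra time or penalties).
    """
    a_total = first_leg[0] + second_leg[1]
    b_total = first_leg[1] + second_leg[0]
    if a_total != b_total:
        return "A" if a_total > b_total else "B"
    if away_goals_rule:
        a_away, b_away = second_leg[1], first_leg[1]
        if a_away != b_away:
            return "A" if a_away > b_away else "B"
    return "undecided"

# Example from the article: Team A 4-1 Team B, then Team B 2-1 Team A.
# The aggregate is 5-3, so Team A wins the tie.
print(decide_tie((4, 1), (2, 1)))  # -> "A"
```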
**Payment system** Payment system: A payment system is any system used to settle financial transactions through the transfer of monetary value. This includes the institutions, payment instruments such as payment cards, people, rules, procedures, standards, and technologies that make its exchange possible. A common type of payment system, called an operational network, links bank accounts and provides for monetary exchange using bank deposits. Some payment systems also include credit mechanisms, which are essentially a different aspect of payment. Payment system: Payment systems are used in lieu of tendering cash in domestic and international transactions. This consists of a major service provided by banks and other financial institutions. Traditional payment systems include negotiable instruments such as drafts (e.g., cheques) and documentary credits such as letters of credit. With the advent of computers and electronic communications, many alternative electronic payment systems have emerged. The term electronic payment refers to a payment made from one bank account to another using electronic methods and forgoing the direct intervention of bank employees. Narrowly defined electronic payment refers to e-commerce—a payment for buying and selling goods or services offered through the Internet, or broadly to any type of electronic funds transfer. Payment system: Modern payment systems use cash-substitutes as compared to traditional payment systems. This includes debit cards, credit cards, electronic funds transfers, direct credits, direct debits, internet banking and e-commerce payment systems. Payment system: Payment systems may be physical or electronic and each has its own procedures and protocols. Standardization has allowed some of these systems and networks to grow to a global scale, but there are still many country-specific and product-specific systems. Examples of payment systems that have become globally available are credit card and automated teller machine (ATM) networks. Additionally, forms exist to transfer funds between financial institutions. Domestically this is accomplished by using Automated clearing house (ACH) and real-time gross settlement (RTGS) systems. Internationally this is accomplished using the SWIFT network. Domestic: An efficient national payment system reduces the cost of exchanging goods, services, and assets. It is indispensable to the functioning of the interbank, money, and capital markets. A weak payment system may severely drag on the stability and developmental capacity of a national economy. Such failures can result in inefficient use of financial resources, inequitable risk-sharing among agents, actual losses for participants, and loss of confidence in the financial system and in the very use of money. The technical efficiency of the payment system is important for the development of the economy. Domestic: An automated clearing house (ACH) system processes transactions in batches, storing, and transmitting them in groups. An ACH is considered a net settlement system, which means settlement may be delayed. This poses what is known as settlement risk. Domestic: Real-time gross settlement systems (RTGS) are funds transfer systems where the transfer of money or securities takes place from one bank to another on a "real-time" and on "gross" basis. Settlement in "real time" means that payment transaction does not require any waiting period. The transactions are settled as soon as they are processed. 
"Gross settlement" means the transaction is settled on one to one basis without bunching or netting with any other transaction. Once processed, payments are final and irrevocable. Domestic: Comparatively, ACHs are typically used for low-value, non-urgent transactions while RTGS systems are typically used for high-value, urgent transactions.Countries and regions have also implemented real-time or instant (or faster) payment systems which typically operate 24x7x365 and perform the transaction from debit of ordering customer's account to credit of beneficiary customer's account within a timeframe of 10–15 seconds. International: Globalization is driving corporations to transact more frequently across borders. Consumers are also transacting more on a global basis—buying from foreign eCommerce sites as well as traveling, living, and working abroad. For the payments industry, the result is higher volumes of payments—in terms of both currency value and number of transactions. This is also leading to a consequent shift downwards in the average value of these payments The ways these payments are made can be cumbersome, error prone, and expensive. Payments systems set up decades ago continue to be used sometimes retrofitted, sometimes force-fitted—to meet the needs of modern corporations. And, frequently, the systems creak and groan as they bear the strain. Examples of such systems include STEP2 (an upgrade from 2003), which processes only Euros, and TARGET2 (an upgrade from 2007), which is closed on Saturdays and Sundays and some public holidays. International: As of 2014, STEP2 is the only Pan-European automated clearing house (or PE-ACH system) in operation. This type of system is thought to become less relevant as banks will settle their transactions via multiple clearing houses rather than using one central clearing house. International: TARGET2 (Trans-European Automated Real-time Gross Settlement Express Transfer System) is a RTGS system that covers the European Union member states which use the euro. It is part of the Eurosystem, which comprises the European Central Bank and the national central banks of those countries that have adopted the euro. TARGET2 is used for the settlement of central bank operations, large-value Euro interbank transfers as well as other euro payments. TARGET 2 provides real-time financial transfers, debt settlement at central banks which is immediate and irreversible. International: For users of these systems, on both the paying and receiving sides, it can be difficult and time-consuming to learn how to use cross-border payments tools, and how to set up processes to make optimal use of them. Solution providers (both banks and non-banks) also face challenges cobbling together old systems to meet new demands. For these providers, cross-border payments are both lucrative (especially given foreign exchange conversion revenue) and rewarding, in terms of the overall financial relationship created with the end customer. International: The challenges for global payments are not simply those resulting from volume increases. A number of economic, political, and technical factors are changing the types of cross-border transactions conducted. Such factors include: Corporations are making more cross-border purchases of services (as opposed to goods), as well as more purchases of complex fabricated parts rather than simple, raw materials. Enterprises are purchasing from more countries, in more regions. 
Increased outsourcing is leading to new in-country and new cross-border intracompany transactions. More enterprises are participating in complex, automated supply chains, which in some cases drive automatic ordering and fulfillment. Online purchasing continues to grow, both by large enterprises as part of automated procurement systems and by smaller enterprises purchasing directly. There is continued growth in the use of commuter workers. Individuals are increasingly taking their investments abroad.
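The contrast drawn above between ACH-style deferred net settlement and RTGS-style gross settlement can be shown with a toy example: under gross settlement every instruction moves funds individually, while under net settlement a batch is reduced to a single net position per bank. The bank names and amounts below are invented for illustration and do not model any real clearing system.

```python
from collections import defaultdict

# Toy payment instructions: (payer bank, payee bank, amount).
payments = [("Alpha", "Beta", 100), ("Beta", "Alpha", 60), ("Alpha", "Gamma", 40)]

def gross_settlement(payments):
    """RTGS style: each instruction is settled one-to-one, immediately and irrevocably."""
    return [(payer, payee, amount) for payer, payee, amount in payments]

def net_settlement(payments):
    """ACH style: batch the instructions and settle only each bank's net position."""
    net = defaultdict(int)
    for payer, payee, amount in payments:
        net[payer] -= amount
        net[payee] += amount
    return dict(net)

print(gross_settlement(payments))  # three separate transfers
print(net_settlement(payments))    # {'Alpha': -80, 'Beta': 40, 'Gamma': 40}
```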
**PowerMapper** PowerMapper: PowerMapper is a web crawler that automatically creates a site map of a website using thumbnails from each web page. Map styles: A sitemap is a comprehensive list of pages within a website's domain. It can serve three primary purposes: aiding designers during the website planning phase, providing human-visible, typically hierarchical listings of site pages, and offering structured listings specifically designed for web crawlers, such as search engines. Site maps can be displayed in a number of different map styles. Some styles display thumbnails for each page; others use a text-only presentation. Map styles: Map styles include: Electrum – a simple thumbnail map style; Electrum 2.0 – a variation of the Electrum style that works better on larger sites; Isometric – a thumbnail map style using a pseudo-3D isometric projection; Page Cloud – a thumbnail map style with pages clustered into 3D clouds; Skyscrapers – an abstract representation of pages that looks like city blocks; Thumb Tree – a hierarchical thumbnail map style; Table Map – a plain text list of pages in a table; Table of Contents – a plain text list of pages; Tree View – an expanding table of contents. Site maps can also be exported in XML sitemaps format for use by the Google, Yahoo and MSN search engines. Reviews: The product received positive reviews from the wider community for the 1.0 release in 1997, and subsequent reviews for the 4.0 release in Microsoft TechNet Magazine.
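PowerMapper's own export code is not public, but the XML sitemaps format mentioned above (the sitemaps.org protocol used by Google, Yahoo and MSN) is simple to produce. The sketch below builds a minimal sitemap from a list of crawled URLs; the example URLs are placeholders, and real exports typically add optional fields such as last-modification dates.

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def build_sitemap(urls):
    """Build a minimal sitemaps.org-style XML document from a list of page URLs."""
    urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = SubElement(urlset, "url")
        SubElement(entry, "loc").text = url  # each <url> needs at least a <loc> element
    return tostring(urlset, encoding="unicode")

# Hypothetical crawl results for an example site.
print(build_sitemap(["https://example.com/", "https://example.com/about"]))
```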
**Help key** Help key: A Help key, found in the shape of a dedicated key explicitly labeled Help, or as another key, typically one of the function keys, on a computer keyboard, is a key which, when pressed, produces information on the screen/display to aid the user in their current task, such as using a specific function in an application program. In the case of a non-dedicated Help key, the location of the key will sometimes vary between different software packages. Most common in computer history, however, is the development of a de facto Help key location for each brand/family of computer, exemplified by the use of F1 on IBM compatible PCs. Apple keyboards: The standard help key on the Apple IIe and Apple III series computers is either OPEN-APPLE-? or SOLID-APPLE-? ... The standard help key on the Apple II and Apple II+, where practical, is a question mark or slash, or else ESCAPE ? or ESCAPE /. Apple keyboards: On a full-sized Apple keyboard, the help key was labelled simply as Help, located to the left of the Home key. Where IBM compatible PC keyboards had the Insert key, Apple keyboards had the help key instead. As of 2007, new Apple keyboards do not have a help key. In its place, a full-sized Apple keyboard has an Fn key instead. Instead of a mechanical help key, the menu bar for most applications contains a Help menu as a matter of convention. Commodore and Amiga keyboards: The Commodore 128 had a Help key in the second block of top row keys. Amiga keyboards had a Help key, labelled as such, above the arrow keys on the keyboard, and next to a Del key (where the Insert Home Pg Up cluster is on a standard PC keyboard). Atari keyboards: The keyboards of the Atari 16- and 32-bit computers had a Help key above the arrow keys on the keyboard. Atari 8-bit XL and XE series keyboards had dedicated Help keys, but in the group of differently-styled system keys separated from the rest of the keyboard. Sun Microsystems (Oracle): Most Sun Microsystems keyboards have a dedicated "Help" key in the top left corner, to the left of the "Esc" key, above the block of 10 extra keys (Stop, Again, Props, Undo, Front, Copy, Open, Paste, Find, Cut).
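As a small illustration of the de facto F1 convention on IBM-compatible PCs described above, the sketch below binds F1 to a help dialog in a desktop application; the window title and help text are placeholders, and the choice of Python's tkinter is only for brevity, not a suggestion of how any particular program implements its Help key.

```python
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.title("Help key demo")  # placeholder title

def show_help(event=None):
    # F1 is the de facto Help key location on IBM-compatible PCs.
    messagebox.showinfo("Help", "Context-sensitive help would appear here.")

root.bind("<F1>", show_help)
root.mainloop()
```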
**Annunciator panel** Annunciator panel: An annunciator panel, also known in some aircraft as the Centralized Warning Panel (CWP) or Caution Advisory Panel (CAP), is a group of lights used as a central indicator of status of equipment or systems in an aircraft, industrial process, building or other installation. Usually, the annunciator panel includes a main warning lamp or audible signal to draw the attention of operating personnel to the annunciator panel for abnormal events or conditions. Aviation: In the aircraft industry, annunciator panels are groupings of annunciator lights that indicate the status of the aircraft's subsystems. The lights are usually accompanied by a test switch, which when pressed illuminates all the lights to confirm they are in working order. More advanced modern aircraft replace these with the integrated electronic Engine Indicating and Crew Alerting System or Electronic Centralised Aircraft Monitor. Aviation: An aviation annunciator panel will have a test switch to check for burned-out lamps. Indicator lights are grouped together by their associated systems into various panels of lights. Lamp colours are normally given the following meanings: Red: Warning, this system's condition is critical and requires immediate attention (such as an engine fire, hydraulic pump failure) Amber: Caution, this system requires timely attention or may do so in the future (ice detected, fuel imbalance) Green: Advisory/Indication, a system is in use or ready for operation (such as landing gear down and locked, APU operating) White/blue: Advisory/Indication, a system is in use (seatbelt signs on, anti-ice system in use, landing lights on). The annunciator panel may display warnings or cautions that are not necessarily indicative of a problem; for example, a Cessna 172 on its after-landing roll will often flicker the "Volts" warning simply due to the idle throttle position and therefore the lower voltage output of the alternator to the aircraft's electrical system. Aviation: More complicated aircraft will feature Master Warning and Master Caution lights/switches. In the event of any red or yellow annunciator being activated, the yellow or red master light, usually located elsewhere in the pilot's line of sight, will illuminate. In most installations they will flash and an audible alert will accompany them. These "masters" will not stop flashing until they have been acknowledged, usually by pressing the light itself, and in some cases the audible alert will also continue until this acknowledgement. On some aircraft (most Boeing airliners, for example) the "masters" will also flash briefly and the audible alert will sound whenever the autopilot is disconnected, as an additional reminder to the pilots that manual control is now required. Process control: In industrial process control, an annunciator panel is a system to alert operators of alarm conditions in the plant. Multiple back-lit windows are provided, each engraved with the name of a process alarm. Lamps in each window are controlled by hard-wired switches in the plant, arranged to operate when a process condition enters an abnormal state (such as high temperature, low pressure, loss of cooling water flow, or many others). Single point or multipoint alarm logic modules operate the window lights based on a preselected ISA 18.1 or custom sequence. Process control: In one common alarm sequence, the light in a window will flash and a bell or horn will sound to attract the operator's attention when the alarm condition is detected.
The operator can silence the alarm with a button, and the window will remain lit as long as the process is in the alarm state. When the alarm clears (the process condition returns to normal), the lamps in the window go out. Process control: Annunciator panels were relatively costly to install in a plant because they had dedicated wiring to the alarm-initiating devices in the process plant. Since incandescent lamps were used, a lamp test button was always provided to allow early detection of failed lamps. Modern electronic distributed control systems usually require less wiring since the process signals can be monitored within the control system, and the engraved windows are replaced by alphanumeric displays on a computer monitor. The behavior of alarm systems, and the colors used to indicate alarms, are standardized. Standards such as ISA 18.1 or EN 60073 simplify purchase of systems and training of operators by giving standard alarm sequences. Process control: Obsolescence and revival The introduction of computer-monitor-based control systems during the 1980s and 1990s saw a wholesale absorption of alarm window displays onto the computer screen. This created a downturn in the sales of conventional alarm annunciator systems, and many of the companies manufacturing these alarm annunciator products were either sold off or went out of business. Process control: This has left a major obsolescence and support problem today for customers who are still using these alarm annunciator systems as part of their safety systems. Process control: Over the last five years, the alarm annunciator has seen a resurgence in popularity, especially for use in IEC 61508 SIL 1 and SHE (Safety, Health and Environmental) alarm monitoring applications. The modern trend is to identify critical alarms and return them from the computer screen to discrete alarm windows. This is being done for two reasons. Firstly, alarm annunciators offer pattern recognition to the operators in the form of LED alarm fascias, instead of an exhaustive list of alarms and events which the operators have to scroll through and in which alarms can sometimes be overlooked. Secondly, the analysis of plant failure modes is leading to the separation of critical alarm monitoring and process control systems for safety reasons. Fire alarm panel: In large buildings, a central fire alarm annunciator panel is located where it is accessible to fire-fighting crews. The annunciator panel will indicate the zone and approximate physical location of the source of a fire alarm in the building. The annunciator will also include lamps and audible warning devices to indicate failures of alarm circuits. In a large building such as an office tower or hotel, the fire annunciator may also be associated with a control panel for building ventilation systems, and may also include emergency communication systems for the building.
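The common alarm sequence described above (flash and horn on detection, acknowledgement silences the horn and steadies the lamp, the lamp clears when the process returns to normal) is essentially a small state machine. The sketch below is a simplified illustration loosely shaped after an ISA 18.1-style sequence, not an implementation of the standard; the window name and state labels are invented.

```python
class AnnunciatorWindow:
    """One back-lit alarm window: NORMAL -> UNACKED (flashing, horn) -> ACKED (steady) -> NORMAL."""

    def __init__(self, name):
        self.name = name
        self.state = "NORMAL"

    def process_input(self, abnormal: bool):
        if abnormal and self.state == "NORMAL":
            self.state = "UNACKED"      # lamp flashes, horn sounds
        elif not abnormal and self.state == "ACKED":
            self.state = "NORMAL"       # lamp goes out when the condition clears
        return self.state

    def acknowledge(self):
        if self.state == "UNACKED":
            self.state = "ACKED"        # horn silenced, lamp steady while still abnormal
        return self.state

w = AnnunciatorWindow("HIGH TEMP")
print(w.process_input(True))   # UNACKED
print(w.acknowledge())         # ACKED
print(w.process_input(False))  # NORMAL
```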
**Pentabromodiphenyl ether** Pentabromodiphenyl ether: Pentabromodiphenyl ether (also known as pentabromodiphenyl oxide) is a brominated flame retardant which belongs to the group of polybrominated diphenyl ethers (PBDEs). Because of their toxicity and persistence, their industrial production is to be eliminated under the Stockholm Convention, a treaty to control and phase out major persistent organic pollutants (POPs). Composition, uses, and production: Commercial pentaBDE is a technical mixture of different PBDE congeners, with BDE-47 (2,2',4,4'-tetrabromodiphenyl ether) and BDE-99 (2,2',4,4',5-pentabromodiphenyl ether) as the most abundant. The term pentaBDE alone refers to isomers of pentabromodiphenyl ether (PBDE congener numbers 82–127). Composition, uses, and production: Commercial pentaBDE is most commonly used as a flame retardant in flexible polyurethane foam; it was also used in printed circuit boards in Asia, and in other applications. The annual demand worldwide was estimated as 7,500 tonnes in 2001, of which the Americas accounted for 7,100 tonnes, Europe 150 tonnes, and Asia 150 tonnes. The global industrial demand increased from 4,000 tonnes annually in 1991 to 8,500 tonnes annually in 1999. As of 2007, "there should be no current production of C-PentaBDE [commercial pentaBDE] in Europe, Japan, Canada, Australia and the US"; however, it is possible that production continues elsewhere in the world. Environmental chemistry: PentaBDE is released by different processes into the environment, such as emissions from manufacture of pentaBDE-containing products and from the products themselves. Elevated concentrations can be found in air, water, soil, food, sediment, sludge, and dust. Exposures and health effects: PentaBDE may enter the body by ingestion or inhalation. It is "stored mainly in body fat" and may stay in the body for years. A 2007 study found that PBDE 47 (a tetraBDE) and PBDE 99 (a pentaBDE) had biomagnification factors in terrestrial carnivores and humans of 98, higher than any other industrial chemicals studied. In an investigation carried out by the WWF, "the brominated flame retardant chemical (PBDE 153), which is a component of the penta- and octa- brominated diphenyl ether flame retardant products" was found in all blood samples of 14 ministers of health and environment of 13 European Union countries. The chemical has no proven health effects in humans; however, based on animal experiments, pentaBDE may have effects on "the liver, thyroid, and neurobehavioral development." Voluntary and governmental actions: In Germany, industrial users of pentaBDE "agreed to a voluntary phaseout in 1986." In Sweden, the government "phase[d] out the production and use of the [pentaBDE] compounds by 1999 and a total ban on imports came into effect within just a few years." The European Union (EU) has carried out a comprehensive risk assessment under the Existing Substances Regulation 793/93/EEC; as a consequence, the EU has banned the use of pentaBDE since 2004. In the United States, as of 2005, "no new manufacture or import of" pentaBDE and octaBDE "can occur... without first being subject to EPA [i.e., United States Environmental Protection Agency] evaluation." As of mid-2007, a total of eleven states in the U.S. had banned pentaBDE. In May 2009, pentaBDE was added to the Stockholm Convention as it meets the persistent organic pollutant criteria of persistence, bioaccumulation and toxicity.
Alternatives: The EPA organized a Furniture Flame Retardancy Partnership beginning in 2003 "to better understand fire safety options for the furniture industry" after pentaBDE "was voluntarily phased out of production by the sole U.S. manufacturer on December 31, 2004." In 2005, the Partnership published evaluations of alternatives to pentaBDE, including triphenyl phosphate, tribromoneopentyl alcohol, tris(1,3-dichloro-2-propyl) phosphate, and 12 proprietary chemicals.
**Bisdemethoxycurcumin** Bisdemethoxycurcumin: Bisdemethoxycurcumin is a curcuminoid found (along with the curcuminoids curcumin and demethoxycurcumin) in turmeric (Curcuma longa), but absent in Javanese turmeric (Curcuma xanthorrhiza). Bisdemethoxycurcumin is used as a pigment and nutraceutical with antimutagenic properties. All three of the curcuminoids found in Curcuma longa have been shown to have antioxidant properties, but bisdemethoxycurcumin is more resistant than the others to alkaline degradation. It has also been found to be effective in sensitizing gemcitabine-resistant PC cells.
**Gimmick (professional wrestling)** Gimmick (professional wrestling): In professional wrestling, a gimmick generally refers to a wrestler's in-ring persona, character, behaviour, attire and/or other distinguishing traits while performing, which are usually artificially created in order to draw fan interest. These in-ring personalities often involve costumes, makeup and catchphrases that they shout at their opponents or the fans. Gimmick (professional wrestling): Gimmicks can be designed to work as good guys/heroes (babyfaces) or bad guys/villains (heels) depending on the wrestler's desire to be popular or hated by the crowd. A tweener gimmick falls between the two extremes, such as wrestlers who manifest many heel and face traits, such as Randy Orton's viper gimmick. A wrestler may portray more than one gimmick over their career depending on the angle or the wrestling promotion that they are working for at that time. Gimmick (professional wrestling): Promotions will use gimmicks on more than one person, albeit at different times, occasionally taking advantage of a masked character which allows for the identity of the wrestler in question to be concealed. Razor Ramon was portrayed by both Scott Hall and Rick Bognar, and Diesel was portrayed by Kevin Nash and then Glen Jacobs. Occasionally, a wrestler uses a gimmick as a tribute to another worker; such is the case of Ric Flair's Nature Boy persona, which he took on as an homage to the original Nature Boy, Buddy Rogers. When a wrestler acts outside his or her gimmick, this is known as 'breaking kayfabe', a term showing pro wrestling's linkages to theatre, where the more common term "breaking the fourth wall" is used. Gimmicks are annually rated for the Wrestling Observer Newsletter awards by the publication's owner, professional wrestling journalists, and various industry insiders, such as Dave Meltzer, promoters, agents and performers, other journalists, historians, and fans. The two awards are given to the best and worst gimmick of that year. History: Beginnings (1860s to 1940s) Pro wrestling's history has been tied to the use of gimmicks from its infancy. From its circus origins in the 1830s, showmen presented wrestlers under names such as “Edward, the steel eater”, “Gustave d’Avignon, the bone wrecker”, or “Bonnet, the ox of the low Alps” and challenged the public to knock them down for 500 francs. During the late 19th and early 20th centuries, when wrestler Frank Gotch rose to prominence, the focus shifted to largely legitimate contests (see catch wrestling), which resulted in the abandonment of previous character gimmicks. History: Television era (1950s to 1970s) It was not until the First Golden Age of Professional Wrestling in the United States during the 1940s–1950s that Gorgeous George created pro wrestling's first major gimmick. His heel character focused on his looks and quickly antagonized the fans with his exaggerated effeminate behavior, drawing the fans' jealousy. Such showmanship was unheard of at the time; consequently, arena crowds grew in size as fans turned out to ridicule George. Gorgeous George's impact and legacy on wrestling gimmicks were enormous, demonstrating how fast television changed the product from athletics to performance.
Before him, wrestlers' gimmicks imitated "ethnic terrors"—Nazis, Middle Eastern Muslims (Arabs, Turks, Persians, Afghans, etc.), Japanese, Russians, etc.—but his success birthed a more individualistic and narcissistic form of character. He was one of the first pro wrestlers to use entrance music, "Pomp and Circumstance", which always played as he made his way to the ring. In Britain, television took British wrestling to the next level when, in 1964, it went full-time as part of the World of Sport show. History: The style of wrestling at the time was unique, with a strong emphasis on clean technical wrestling. Heels made up a minority of the roster, with most shows containing an abnormally high proportion of clean, sportsmanly matches between two "blue-eyes" (as faces were known backstage in the UK). This would remain the case for several decades to come. Gimmick matches were a rarity, midget wrestling failed to catch on, while women were banned by the Greater London Council until the late 1970s. History: Explosion (1980–present) During the Golden Age of pro wrestling in the 1980s–1990s, cartoonish, outlandish gimmicks rose to prominence as the World Wrestling Federation's popularity increased. History: The WWF contributed to the explosion of gimmicks by becoming the most colorful and well-known wrestling brand because of its child-oriented characters, soap opera dramatics and cartoon-like personas. Most notable was the muscular Hulk Hogan, who marked the 1980s with his "Real-American" gimmick and made his main events into excellent ratings draws. His dominant role in the industry at that time led to this era also being known as "Hulkamania". Around this time, wrestling became a form of entertainment rather than an official sport. History: Other wrestlers from this era with similarly vivid and outlandish characterization include The Iron Sheik, The Ultimate Warrior, Randy Savage, The Undertaker, Sting, Goldust, Roddy Piper, Ric Flair, "The Heartbreak Kid" Shawn Michaels, Big Daddy Cool Diesel, Kwang, The Bushwhackers, Big Boss Man, Tatanka, Razor Ramon, Sgt. Slaughter, and Irwin R. Schyster, among many others. History: Following the Attitude era, the emphasis of gimmicks has been more realistic, with wrestlers portraying themselves or actual people without exaggeration, freakishness or fantastical qualities. It is also more common for wrestlers to use their actual names. Wrestlers like Randy Orton, Batista, Bobby Lashley, John Cena, and Brock Lesnar are prime examples; all are depicted as less-exaggerated, ordinary people. History: Although rare, colorful and cartoon-like characters remain in the WWE, such as Shinsuke Nakamura (a wildly random, erratic mixed martial arts enigma, emotionally charged by the sound of violins) and Matt Riddle (a stereotypical carefree, barefoot surfer Valley boy). Outside WWE, some wrestlers have made names for themselves on the crowded independent circuit by adopting absurdist comedy gimmicks intended to be understood by post-kayfabe fans as purely fictional characters. Two such wrestlers whose independent-scene popularity got them noticed and eventually signed by the internationally televised promotion All Elite Wrestling are Orange Cassidy, an emotionless slacker who puts as little effort as possible into his matches and frequently wrestles with his hands in his pockets; and Danhausen, a demonic but somewhat-bumbling figure in horror face paint who claims to be "very nice, very evil" and attempts to put curses on his opponents.
Common gimmicks: Related to origin Exaggerating the characteristics of a wrestler's (on occasion fabricated) origin is one of the most commonly exploited gimmicks, in which overarching characteristics of a character play up to clichés and stereotypes. Common gimmicks: A long list of wrestlers in this category includes: Albanian (Rezar), Arab (The Sheik, The Sultan, Muhammad Hassan), African (Kamala, Abdullah The Butcher, Akeem, Apollo Crews), American (The Patriot, Hulk Hogan, 'Hacksaw' Jim Duggan, Jack Swagger), Australian (The Bushwackers, Outback Jack, Nathan Jones, Buddy Murphy), Austrian (Walter), Brazilian (Arturo Ruas, Taynara Conti), Bulgarian (Rusev), Canadian (Team Canada (TNA), Team Canada (WCW)), Chinese (Xia Li, Boa), Cowboy (Bob Orton Jr.), Cuban (Razor Ramon), Dominican (No Way Jose), Dutch (Aleister Black), English (William Regal, Lord Alfred Hayes, Gentleman Jack Gallagher), Fijian (Jimmy Snuka), French (La Résistance), German (Alexander Wolfe, Marcel Barthel), Guyanese (Ezekiel Jackson), Hawaiian (Crush, Leilani Kai, Ricky Steamboat), Indian (The Great Khali, Jinder Mahal, Akam), Iranian (The Iron Sheik, Ariya Daivari), Irish (Finlay, Sheamus), Israeli (Noam Dar), Italian (FBI, Santino Marella, Fabian Aichner), Jamaican (Kofi Kingston), Japanese (The Orient Express, Mr. Fuji, Kai En Tai), Korean (Gail Kim), Lithuanian (Aksana), Mexican (Alberto Del Rio, Eddie Guerrero, The Mexicools), Moldovan (Alex Koslov, Marina Shafir) Native American/American Indian (Chief Jay Strongbow, Tatanka), New Zealander/Maori (The Sheepherders, Dakota Kai), Puerto Rican (Carlito Colón, Primo and Epico), Polish (Dabba-Kato/Commander Azeez,), Russian (Vladimir Kozlov, Nikolai Volkoff, Lana), Samoan (Samoa Joe, The Wild Samoans, Roman Reigns, The Usos) Scottish (Drew McIntyre, Roddy Piper), South African (Adam Rose, Justin Gabriel) Swiss (Cesaro), Thai (Super Invader), Tongan (Haku, Tama Tonga, Tanga Loa), and Welsh (Mason Ryan). Common gimmicks: The undeniable influence of the Puroresu style in the world of professional wrestling has resulted in many wrestlers using fabricated Japanese origins or being billed from a Japanese city, without actually being natives of the country. Prime examples of this include Yokozuna, Awesome Kong, Hawaiians Professor Tanaka and Mr. Fuji, and British wrestler Kendo Nagasaki. Several Japanese wrestlers who wrestle outside of their home country are known to play up or exaggerate aspects of their cultural heritage as part of their gimmicks for an overseas audience. Common gimmicks: Masked Masked wrestlers made their appearance in Europe (Theobaud Bauer in France, 1865) and the United States (Mort Henderson as "Masked Marvel" in 1915) considerably earlier than in Mexico, but it was the latter that popularised the use of masks. This, in some cases to signify a high-flyer style, influenced by Lucha Libre. Common gimmicks: A specific masked gimmick may be used by more than one wrestler at a wrestling company's request since their identity can be permanently concealed. This is the case of Mexican Sin Cara and Japanese Tiger Mask. Masks also allow a wrestler to perform as more than one character for a variety of wrestling promotions. In Mexico, a masked wrestler's identity is often not even a matter of public record, and being unmasked, usually as a stipulation of losing a match, is considered a great humiliation. 
It is a major taboo for a Mexican wrestler who has lost his mask to start wearing one again, though this has occasionally been violated, as in the case of Rey Mysterio. Common gimmicks: Other wrestlers who have used masks in their performances include: The Masked Superstar, Mexican-American Kalisto, Lince Dorado, Gran Metalik, or Japanese legend Jushin Thunder Liger. Common gimmicks: Sports A high number of wrestlers who start their careers in another sport incorporate their athletic abilities as part of their act. That is the case for Olympic medallist Kurt Angle, who previously competed in freestyle wrestling and alludes to it in his attire and wrestling style. Brock Lesnar is also an ex-amateur wrestler, NFL player and UFC champion. Welsh wrestler Mason Ryan is also a former Gladiator and football player. English wrestler Wade Barrett was also a former bare-knuckle fighter as well as Elijah Burke who is also a former amateur boxer. Former MMA fighters Ronda Rousey and Shayna Baszler also uses their MMA background as part of their characters as well as former American Ninja Warrior competitor Kacy Catanzaro, former kung-fu fighter Xia Li, and Matt Riddle, who always wrestles barefooted during matches, presuming that he had an MMA background career in the past before debuting in WWE along with Mojo Rawley's "hyperactive" wrestling style due to being a former NFL player before debuting WWE as well as the stable The Four Horsemen. Common gimmicks: Superheroes, supervillains and other comic-based characters The theatrical nature of professional wrestling easily blends with comic hero and villain characters, made popular in the 1980s by legend The Ultimate Warrior and Sting, whose character was inspired by the 1994 movie The Crow, based on the comic book of the same name. Common gimmicks: Other wrestlers with superhero and supervillain gimmicks include late WWE Hall of Famer Dusty Rhodes' sons Gold and Stardust, Big Van Vader, Bam Bam Bigelow, Pierre Carl Ouellet, Dr. Luther, the magician Phantasio, Icarus, Super Eric, Dexter Lumis, Samoan Rosey during his "the Super Hero in Training" (the S.H.I.T.) phase and his tag-team partner The Hurricane and valet Super Stacy, Earthquake/Avalanche and his tag-team partner Typhoon in The Natural Disasters stable, and tag-teams The Road Warriors, Demolition, KroniK, The Assassins, The Super Assassins, The Machines, and most recently, The Ascension, and The Viking Raiders/War Machine. Common gimmicks: Some of these characters are brought during very short periods of time for entertainment value. The Joker and Harley Quinn from the Batman comics have inspired wrestling attire for Sting and Alexa Bliss respectively. Finn Bálor's Demon King persona is visually based on Spider-Man villains Venom and Carnage. Sandman's character name is also based on Spider-Man villain Sandman as well as Rhyno, whose character name was based on a pun on the Spider-Man villain Rhino. Raven's character name was based on DC Comics superhero, Raven. Kenny Omega's taunts were inspired by video games since he was a big fan of them. Mantaur's character name was also based on a pun on the word Minotaur, a half-man, half-bull creature from Greek Mythology. Luchasaurus' character name is a portmanteau of "lucha libre" and "dinosaurus". Tag-team The Super Smash Brothers's name was based on the video game franchise Super Smash Bros. Nikki Cross also changed her gimmick and name like that of a superhero, into Nikki A.S.H. (Almost a Superhero). 
TNA's Dean Roll's ring name, Shark Boy, became the inspiration for the 3D film, The Adventures of Sharkboy and Lavagirl in 3-D. Common gimmicks: Supernatural-based characters Similarly to superheroes and supervillains, supernatural characters add to entertainment value. Most famously in this category is The Undertaker, considered one of the most respected wrestlers in the business, whose gimmick is a horror-themed character of an undead, macabre and paranormal dark presence prone to scare tactics. He was managed by the ghostly character that was Paul Bearer and tagged with his half-brother Kane in The Brothers of Destruction stable. Common gimmicks: Other wrestlers displaying supposed supernatural powers include Matt Hardy (as his Broken/Woken persona), and his younger brother Jeff Hardy (as his Brother Nero/Willow character), Mordecai, Waylon Mercy, Jake "The Snake" Roberts, Papa Shango, The Boogeyman, Abyss, and most recently Asuka, Aleister Black, and Bray Wyatt's The Fiend, and stables The Three Faces of Fear, and The Dungeon of Doom. Japanese Onryo portrays a dead wrestler who returned for vengeance. Common gimmicks: Raven was the leader of five stables; Raven's Nest, The Flock, The Dead Pool, The Gathering, and Serotonin. The Brood was a vampire stable, composed of Gangrel, Christian and Edge.Alexa Bliss was also given a different gimmick after her alliance with Bray Wyatt in late 2020s, appearing suddenly and sometimes attacking the other wrestlers, the same things that Bray Wyatt would do. Juggernaut Since its beginnings in the circus circuit, the professional wrestler's stereotype has been that of large, powerful and strong, most notably Kane upon his arrival to the WWF/E. Various wrestlers have banked on the larger size which has influenced their in-ring style and persona. Notable examples of these kind include Swede Tor Johnson (181 kg), Gorilla Monsoon (182 kg), Giant González (8 ft 0 in), André the Giant (7 ft 4 in), The Great Khali (7 ft 3 in), Big Show (7 ft 2 in), Awesome Kong and Nia Jax (123 kg). Midget Similarly to juggernauts, since its beginnings in the circus circuit, the professional wrestler's stereotype has been that of small, but powerful and strong like those of dwarves of Norse mythology. Various wrestlers have banked on the small size which has influenced their in-ring style and persona. Notable examples of these kind include the leprechaun Hornswoggle, El Torito and other various dwarfed versions of other various wrestlers. Common gimmicks: Educational Education is a rare gimmick in wrestling due to the fact that, most times, the wrestler is a former real-life student or scholar of a school, a college, a university, or a TAFE, who also worked as a cheerleader, a coach, a dean, a librarian, a teacher, or even a principal. Wrestlers who used this gimmick include NXT wrestlers, e.g. Alex Riley etc., Bobby "The Brain" Heenan, Sgt. Slaughter, Dean Douglas, Jonathan Coachman, Michelle McCool's "sexy teacher" character, The Miz's and Jack Swagger's "student" amateur background characters, Damien Sandow's "Intellectual Savior of the Unwashed Masses" character, and "The Librarian" Peter Avalon and his manager Leva Bates, and tag-teams The Steiner Brothers, The Spirit Squad, and most recently, Team Rhodes Scholars, American Alpha, and Chase University. 
Common gimmicks: Bad News reporter Bad News reporter characters are a villainous gimmick, built around "bad news" delivered to the fans by a "bad guy" (heel); it is quite rare, since fans show little interest in it. Wrestlers who used this gimmick include Bad News Brown and, most recently, "Bad News" Barrett. Religious Religion is a rare gimmick in professional wrestling due to its controversial nature. Wrestlers who used this gimmick include Friar Ferguson and, most recently, "Bolieve" Bo Dallas and "The Monday Night Messiah" Seth Rollins. Common gimmicks: Hardcore technician Although the sheer violence of some matches can be off-putting, hardcore technician gimmicks are another popular choice, as many fans have become accustomed to such violence and do not shy away from it. These include Abdullah the Butcher and Bruiser Brody, and the style later became popular in other professional wrestling companies: ECW wrestlers, e.g. Terry Funk, Hardcore Holly, New Jack, and Mick Foley/Mankind/Cactus Jack; CZW wrestlers, e.g. John Zandig, Necro Butcher, Wifebeater, Nick Mondo, and Nick Gage; AEW wrestlers, e.g. The Blade and The Butcher; Japanese wrestlers Atsushi Onita, Toshiaki Kawada, and Jun Kasai; and tag-teams The Motor City Machine Guns and, most recently, The Mechanics and Heavy Machinery. Common gimmicks: Music-based characters Music influences are another popular choice for gimmicks. In the 1980s, The Honky Tonk Man worked with an Elvis-esque character. Elias also works well with his musician guitar character. Rapping was demonstrated by R-Truth/K-Kwik's original rapper character along with Road Dogg, and John Cena worked with a rapper gimmick during the first years of his career. Other music genre types were demonstrated by CM Punk's straight edge iconoclast hardcore punk, party boys No Way Jose and Adam Rose, Cameron Grimes, Rick Boogs, Rockstar Spud, Heath Slater, Lance Archer, Chris Jericho, Jeff Jarrett, Marty Jannetty, The Honky Tonk Man, Disco Inferno, One Man Gang, Buck Zumhofe, WWE's Brodus Clay and his fun-loving, funk dancing gimmick "The Funkasaurus", Fandango, who includes salsa dancing in his routine, and AEW's Jack Evans, who usually breakdances in the ring during entrances or when he has won a match, and tag-teams The Public Enemy, Badd Company, The Rockers, The Rock 'n' Roll Express, The Rhythm and Blues, and most recently, The Vaudevillains. AEW's Adam Williams is also a professional wrestler and a real-life guitarist. Common gimmicks: Comedy Whilst humor has long been present in professional wrestling matches and many wrestlers incorporate elements of comedy in their act, full-on comedic gimmicks are not commonly seen. These are sometimes reserved for wrestlers who do not always have the stereotypical physique expected in the industry and instead exploit their entertainment abilities.
Common gimmicks: Initiated by English wrestler Les Kellett, wrestlers who fall under this category are Doink The Clown which was majorly portrayed by Matt Osborne until his death in 2013, which inspired others like Scottish comedian and actor Grado, Ring of Honor's Colt Cabana, Santino Marella, James Ellsworth, and Eugene's "mentally disabled boy" character, Japanese Wrestlers Stalker Ichikawa, Gran Naniwa, Kuishinbo Kamen and Toru Yano, Charlie Haas during his impersonations run, and WWE's 1990s turkey character Gobbledy Gooker, and rooster character Red Rooster, WCW's Brian Pillman, and Al Snow along with his mannequin prop called "Head" which he used as a sidekick companion during segments while addressing the fans. And recently, The New Day pursued a joyous gimmick, giving them a character heavily associated with the fans. Damien Sandow also falls under this category due to his 'stunt double' gimmick in late 2014 where he copied whatever his on-screen mentor The Miz did, due to the latter using a gimmick of an arrogant movie star. R-Truth also influenced his character with some of his comedic activities, such as breaking out a joke, dancing and finding out his opponent to win the 24/7 Championship in a strange and funny way. Common gimmicks: Charity Characters who do charity are depicted as a heroic gimmick due to real-life charity. Wrestlers who used this gimmick include Sweet Daddy Siki, Brother Love, "Make a Difference" Fatu, Dude Love, and most recently, "The Doctor of Hug-o-nomics" Bayley, and tag-team Men on a Mission. Common gimmicks: Self-absorbed Usually a villainous gimmick, initiated by Gorgeous George, due to the jealousy of the good looks the fans want to have for themselves. Wrestlers that followed on with this trend include Sonny Kiss, Angel Garza, "The Untouchable" Carmella, Lana with her catchphrase, "I am the best in the world", "Dashing" Cody Rhodes, "The Black Machismo" Jay Lethal, "The Artist Collective" Sami Zayn, "The Masterpiece" Chris Masters, Byron Saxton, "The Swiss Superman" Antonio Cesaro, Dolph Ziggler with his "perfection" gimmick, The Miz with his catchphrase, "AWESOME", Randy Orton, "The Glamazon" Beth Phoenix, Carlito Caribbean Cool, "The Phenominal" AJ Styles, "Glorious" Bobby Roode, "The Almighty" Bobby Lashley, "The Golden Standard" Shelton Benjamin, Scotty 2 Hotty, "The Rated R Superstar" Edge, The "Great One" Rock, "The World's Strongest Man" Mark Henry, Val Venis, "The Heartbreak Kid" Shawn Michaels, "Big Sexy" Kevin Nash, Lex Luger's "The Narcissist" character, "Beautiful" Bobby Eaton, Ravishing Rick Rude, "The Model" Rick Martel, "Adorable" Adrian Adonis, Hulk Hogan, "Macho Man" Randy Savage, Jesse "The Body" Ventura, "The Nature Boy" Ric Flair and his daughter, "Handsome" Harley Race, "Classy" Freddie Blassie, AEW's "Pretty" Peter Avalon, and Powerhouse Hobbs, TNA's Mr Pec-tacular, Brian Christopher's Grand Master Sexay, Billy Gunn's Mr Ass, Curt Hennig's Mr Perfect, Paul Orndorff's Mr Wonderful, NXT's Tyler Breeze, Lacey Evans, and "The Finest" Kona Reeves, and tag-teams The Mexicools, and Too Cool, as well as women's tag-teams The Beautiful People, LayCool, Fire and Desire, and The IIconics. Common gimmicks: Hollywood movie star Hollywood movie stars are occasionally villainous due to fame outside of wrestling as a real-life Hollywood actor/actress. 
These include "Hollywood" Hulk Hogan, The Rock, and most recently, Batista, John Cena, The Miz, and David Otunga's A-list character, and tag-teams The Hollywood Blondes and MNM, and most recently, The Bollywood Boyz, who, being of Indian descent, are instead billed from Mumbai (Bombay), home of the Bollywood film district after which they are named (the name "Bollywood" itself being a play on "Hollywood" with a "B" in place of the "H"). Common gimmicks: Authority figure-based characters Authority figures are usually villainous but sometimes heroic, and are portrayed by wrestlers and non-wrestlers alike (e.g. referees, general managers, security, police), depending on the storyline. Some wrestlers also use a character based on an authority over other people. These include non-wrestlers like managers, and wrestlers like The Mountie, Big Boss Man, "The Alpha Male" Marcus Cor Von, Consequences Creed, "The Man" Becky Lynch, "The Boss" Sasha Banks, Sean O'Haire's devil's advocate gimmick, and David Otunga's legal adviser character, ECW's 911, and stables New World Order, Right to Censor, The Truth Commission, The Acolytes Protection Agency, 3-Minute Warning, and most recently, The Authors of Pain, The Shield, and The Authority. Common gimmicks: Money-based characters (Evil billionaire/Millionaire tyrant) The evil billionaire/millionaire tyrant character works well as a villain, playing on the jealousy of fans who want for themselves things they cannot afford, in contrast to professional wrestling's working-class fan-base. It is because of this audience that Dusty Rhodes' Common Man or "American Dream" was highly successful with the crowds. Common gimmicks: The original gimmick of this type was created by "Million Dollar Man" Ted DiBiase, and it subsequently inspired wrestlers such as his son, as well as on-screen owners of the promotion, like Mr. McMahon and his family (including his son and daughter, since they are the real owners of WWE), and most recently, "The Dream" Velveteen Dream, and stables The Diamond Exchange, The Beverly Brothers, The Million Dollar Corporation, Money Inc., Beer Money, Inc., and most recently, The Prime Time Players, The Street Profits, and The Hurt Business. JBL used his real-life work as a Wall Street investor as the basis for his JBL character. Common gimmicks: Ruthless ruler Similarly to evil billionaire/millionaire tyrant characters, and even authority figures, ruthless ruler characters are mostly a villainous gimmick based on real-life royals, emperors, and monarchs, or on other non-royal figures such as bureaucrats, aristocrats, diplomats, nobles, and gentlemen.
Wrestlers who originally used this gimmick include Lord Alfred Hayes, who inspired others like Baron von Raschke, "King" James Valiant, The Duke of Dorchester, Jerry "The King" Lawler, The Sultan, King Booker, Hunter Hearst Helmsley, Prince Nana, Tiger Ali Singh's rich and arrogant Asiatic heir character with his manservant Babu, William Regal's arrogant English noble ambassador character with his manager Sir William, and more recently Dalton Castle, Gentleman Jack Gallagher, Baron Corbin, who adopted the gimmick of a villainous, evil king after winning the 2019 King of the Ring tournament but lost to Shinsuke Nakamura, who took on a gimmick modelled on the Japanese emperor after winning the "Battle for the Crown" against Corbin, Roman Reigns, who upon his heel turn adopted the gimmick of the head of the table and tribal chief representing his tribe, Jinder Mahal as the Modern Maharaja, drawing on his Indian ancestry, Apollo Crews as a proud representative of Nigeria, Alberto Del Rio's arrogant, rich Mexican aristocrat character with his personal ring announcer Ricardo Rodriguez, and the stables The Nation of Domination, The Kings of Wrestling, The British Invasion, The British Bulldogs, The Blue Bloods, Los Conquistadores, and most recently The Kingdom, The Undisputed Era, and The Imperium. Common gimmicks: Hated crime gang/Terrorist thugs/Bad guy bandits/Mafia mobsters Crime gang, terrorist thug, bandit, and mafia mobster characters work well as villainous gimmicks because they echo their real-life counterparts, and the gimmick has grown more popular, partly because it gets over with fans who tend to be malicious, malevolent, violent, aggressive, erratic, or hostile and who show no respect, remorse, or sympathy (often with some profanity) toward the heels, even when the heels try to be friendly, polite, or nice to them or to placate them. These include Razor Ramon, The Brooklyn Brawler, Stone Cold Steve Austin, Eddie Guerrero and Chavo Guerrero with their catchphrase "I lie, I cheat, I steal"/"We lie, we cheat, we steal", "Brutal" Bob Evans, Beer City Bruiser, Shannon Moore, John Cena's "thug nature" character, and more recently Eddie Edwards, Sami Callihan, Darby Allin, and Bandido, along with the tag-teams Cryme Tyme, D-Generation X, The New Age Outlaws, The Disciples of Apocalypse, The Gangstas/The Gangstanators, FBI, LAX, Mexican America, La Familia, The Forever Hooligans, and most recently Riott Squad, The Forgotten Sons, Social Outcasts, Enzo Amore and Big Cass, Sanity, Aces & Eights, The Bullet Club, and Retribution. Other usage: Within professional wrestling, in insider usage the word 'gimmick' has come to cover an array of related meanings, including any weapon or foreign object used during a match, or the scripted quality of a match. In backstage lingo, gimmick is also a stand-in for almost any physical object or set of moves in a match. Gimmicked describes an object that has been altered or rigged for use in a match, for example a gimmicked table or chair that is precut or made to fall apart more easily. A gimmick event is one centred around a match type, such as the pay-per-view events WWE Hell in a Cell and WWE TLC: Tables, Ladders, & Chairs. The term is also a euphemism for hormone-enhancing drugs, namely steroids and growth hormone, which have historically been linked to the sport. 
It has also been used by people in the profession to describe casual marijuana use, as wrestlers will refer to 'smoking the gimmick'.
**Cryptococcosis** Cryptococcosis: Cryptococcosis is a potentially fatal fungal infection of mainly the lungs, presenting as a pneumonia, and brain, where it appears as a meningitis. Cough, difficulty breathing, chest pain and fever are seen when the lungs are infected. When the brain is infected, symptoms include headache, fever, neck pain, nausea and vomiting, light sensitivity and confusion or changes in behavior. It can also affect other parts of the body including skin, where it may appear as several fluid-filled nodules with dead tissue.It is caused by the fungi Cryptococcus neoformans or less commonly Cryptococcus gattii, and is acquired by breathing in the spores from the air. These fungi are found around the world in soil, decaying wood, pigeon droppings, and in the hollows of some species of trees. Whereas C. neoformans generally infects people with HIV/AIDS and those on immunosuppressant drugs and does not usually affect fit and healthy people, C. gattii (found in some parts of Canada and the US) does. Once breathed in, the dried yeast cells colonize the lungs, where they are either cleared by immune cells, lie dormant, or cause infection and spread.Diagnosis is by isolating Cryptococcus from a sample of affected tissue or direct observation of the fungus by using staining of body fluids. It can be cultured from a cerebrospinal fluid, sputum, and skin biopsy. Treatment is with fluconazole or amphotericin B.Data from 2009 estimated that of the almost one million cases of cryptococcal meningitis that occurred worldwide annually, 700,000 occurred in sub-Saharan Africa and 600,000 per year died. Cryptococcosis was rare before the 1970s which saw an increase in at-risk groups such as people with organ transplant or on immunosuppressant medications. The number of cases escalated in the mid-1980s with over 80% occurring in people with HIV/AIDS. Pigeon breeders (or otherwise people who spend significant time with pigeons) are known to have a high incidence of cryptococcal infections including PCC due to Cryptococcus' association with pigeon droppings. Classification: Cryptococcus is generally classified according to how it is acquired and its site. It typically begins in the lungs before spreading to other parts of the body, particularly the brain and nervous system. The skin type is less common. Signs and symptoms: Cough, shortness of breath, chest pain and fever are seen when the lungs are infected, appearing like a pneumonia. There may also be feeling of tiredness. When the brain is infected, symptoms include headache, fever, neck pain, nausea and vomiting, light sensitivity, confusion or changes in behaviour. It can also affect other parts of the body including skin, eyes, bones and prostate. In the skin, it may appear as several fluid-filled nodules with dead tissue. Depending on the site of infection, other features may include loss of vision, blurred vision, inability to move an eye and memory loss.Symptom onset is often sudden when lungs are infected and gradual over several weeks when the central nervous system is affected. Cause: Cryptococcosis is a common opportunistic infection for AIDS, and is particularly common among people living with AIDS in Africa. Other conditions that pose an increased risk include certain malignancies (such as lymphoma), liver cirrhosis, organ transplants, and long-term corticosteroid therapy.Distribution is worldwide in soil. 
The prevalence of cryptococcosis has been increasing over the past 50 years for many reasons, including the increase in incidence of AIDS and the expanded use of immunosuppressive drugs. In humans, C. neoformans chiefly infects the skin, lungs, and central nervous system (causing meningitis). Less commonly it may affect other organs such as the eye or prostate. Primary cutaneous cryptococcosis: Primary cutaneous cryptococcosis (PCC) is a distinct clinical diagnosis separate from the secondary cutaneous cryptococcosis that spreads from systemic infection. Males are more likely to develop the infection, and a 2020 study showed that the sex bias may be due to a growth hormone produced by C. neoformans, called gibberellic acid (GA), that is upregulated by testosterone. The upper limbs account for a majority of infections. Isolates found in PCC include Cryptococcus neoformans (most common), Cryptococcus gattii, and Cryptococcus laurentii. Prognosis for PCC is generally good outside of disseminated infection. Morphologically, the lesions show umbilicated papules, nodules, and violaceous plaques that can mimic other cutaneous diseases like molluscum contagiosum and Kaposi's sarcoma. These lesions may be present months before other signs of systemic infection in patients with AIDS. Pulmonary cryptococcosis: Cryptococcus (both C. neoformans and C. gattii) plays a common role in pulmonary invasive mycosis seen in adults with HIV and other immunocompromised conditions. It also affects healthy adults at a much lower frequency and severity, as healthy hosts may have no or mild symptoms. Immune-competent hosts may not seek or require treatment, but careful observation may be important. Cryptococcal pneumonia has the potential to disseminate to the central nervous system (CNS), especially in immunocompromised individuals. Pulmonary cryptococcosis has a worldwide distribution and is commonly underdiagnosed due to limitations in diagnostic capabilities. Since pulmonary nodules are its most common radiological feature, it can clinically and radiologically mimic lung cancer, TB, and other pulmonary mycoses. Cultures and the serum cryptococcal antigen (CrAg) lateral flow assay are rarely positive in the absence of disseminated disease. Moreover, pulmonary cryptococcosis worsens the prognosis of cryptococcal meningitis. Cryptococcal meningitis: Cryptococcal meningitis (infection of the meninges, the tissue covering the brain) is believed to result from dissemination of the fungus from either an observed or unappreciated pulmonary infection. Often there is also silent dissemination throughout the brain when meningitis is present. People with defects in their cell-mediated immunity, for example people with AIDS, are especially susceptible to disseminated cryptococcosis. Cryptococcosis is often fatal, even if treated. It is estimated that the three-month case-fatality rate is 9% in high-income regions, 55% in low/middle-income regions, and 70% in sub-Saharan Africa. As of 2009 there were globally approximately 958,000 annual cases and 625,000 deaths within three months after infection. Although C. neoformans infection most commonly occurs as an opportunistic infection in immunocompromised people (such as those living with AIDS), C. gattii often infects immunocompetent people as well. Cryptococcus (both C. neoformans and C. gattii) is the dominant and leading etiologic agent of meningitis in adults with HIV and is considered an "emerging" disease in healthy adults. 
Though the rate of infection is clearly higher in immunocompromised individuals, some studies suggest a higher mortality rate in patients with non-HIV cryptococcal meningitis, secondary to the role of T-cell mediated reaction and injury. CD4+ T cells have proven roles in the defense against Cryptococcus, but they can also contribute to clinical deterioration through their inflammatory response. Diagnosis: Depending on the infectious syndrome, symptoms include fever, fatigue, dry cough, headache, blurred vision, and confusion. Symptom onset is often subacute, progressively worsening over several weeks. The two most common presentations are meningitis (an infection in and around the brain) and pulmonary (lung) infection. For any person found to have cryptococcosis at a site outside the central nervous system (e.g., pulmonary cryptococcosis), a lumbar puncture is indicated to evaluate the cerebrospinal fluid (CSF) for evidence of cryptococcal meningitis, even if they have no signs or symptoms of CNS disease. Detection of cryptococcal antigen (capsular material) by culture of CSF, sputum and urine provides definitive diagnosis. Blood cultures may be positive in heavy infections. India ink staining of the CSF is a traditional microscopic method of diagnosis, although its sensitivity is poor in early infection and it may miss 15–20% of patients with culture-positive cryptococcal meningitis. Unusual morphological forms are rarely seen. Cryptococcal antigen from cerebrospinal fluid is the best test for diagnosis of cryptococcal meningitis in terms of sensitivity. Apart from conventional methods of detection like direct microscopy and culture, rapid diagnostic methods detect cryptococcal antigen by latex agglutination test, lateral flow immunochromatographic assay (LFA), or enzyme immunoassay (EIA). A new cryptococcal antigen LFA was FDA approved in July 2011. Polymerase chain reaction (PCR) has been used on tissue specimens. Diagnosis: Cryptococcosis can rarely occur in non-immunosuppressed people, particularly with Cryptococcus gattii. Prevention: Cryptococcosis is a very subacute infection with a prolonged subclinical phase lasting weeks to months in persons with HIV/AIDS before the onset of symptomatic meningitis. In Sub-Saharan Africa, the prevalence of detectable cryptococcal antigen in peripheral blood is often 4–12% in persons with CD4 counts lower than 100 cells/mcL. Prevention: Cryptococcal antigen screening and preemptive treatment with fluconazole are cost-saving for the healthcare system by avoiding cryptococcal meningitis. The World Health Organization recommends cryptococcal antigen screening in HIV-infected persons entering care with CD4<100 cells/μL. This undetected subclinical cryptococcal infection (if not preemptively treated with antifungal therapy) will often go on to develop into cryptococcal meningitis, despite the person receiving HIV therapy. Cryptococcosis accounts for 20–25% of the mortality after initiating HIV therapy in Africa. What constitutes effective preemptive treatment is unknown, with the current recommendations on dose and duration based on expert opinion. Screening in the United States is controversial, with official guidelines not recommending screening, despite cost-effectiveness and a 3% U.S. cryptococcal antigen prevalence in CD4<100 cells/μL. Antifungal prophylaxis such as fluconazole and itraconazole reduces the risk of contracting cryptococcosis in those with a low CD4 cell count and a high risk of developing the disease, in settings where cryptococcal antigen screening tests are not available. 
Treatment: Treatment options in persons without HIV infection have not been well studied. Intravenous amphotericin B combined with flucytosine by mouth is recommended for initial treatment (induction therapy). People living with AIDS often have a greater burden of disease and higher mortality (30–70% at 10 weeks), but recommended therapy is with amphotericin B and flucytosine. Where flucytosine is not available (as in many low and middle income countries), fluconazole should be used with amphotericin. Amphotericin-based induction therapy has much greater microbiologic activity than fluconazole monotherapy, with 30% better survival at 10 weeks. Based on a systematic review of existing data, the most cost-effective induction treatment in resource-limited settings appears to be one week of amphotericin B coupled with high-dose fluconazole. After initial induction treatment as above, typical consolidation therapy is with oral fluconazole for at least 8 weeks, followed by secondary prophylaxis with fluconazole thereafter. The decision on when to start treatment for HIV appears to be very different than for other opportunistic infections. A large multi-site trial found that deferring ART for 4–6 weeks was overall preferable, with 15% better 1-year survival than earlier ART initiation at 1–2 weeks after diagnosis. A 2018 Cochrane review also supports delaying the start of treatment until cryptococcosis starts improving with antifungal treatment. Treatment: IRIS The immune reconstitution inflammatory syndrome (IRIS) has been described in those with normal immune function with meningitis caused by C. gattii and C. grubii. The increasing inflammation can cause brain injury or be fatal. Epidemiology: Data from 2009 estimated that of the almost one million cases of cryptococcal meningitis that occurred worldwide annually, 700,000 occurred in sub-Saharan Africa and 600,000 per year died. Other animals: Cryptococcosis is also seen in cats and occasionally dogs. It is the most common deep fungal disease in cats, usually leading to chronic infection of the nose and sinuses, and skin ulcers. Cats may develop a bump over the bridge of the nose from local tissue inflammation. It can be associated with FeLV infection in cats. Cryptococcosis is most common in dogs and cats, but cattle, sheep, goats, horses, wild animals, and birds can also be infected. Soil, fowl manure, and pigeon droppings are among the sources of infection.
**Reliability engineering** Reliability engineering: Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time. Reliability engineering: The reliability function is theoretically defined as the probability of success at time t, which is denoted R(t). In practice, it is calculated using different techniques and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets or through reliability testing and reliability modeling. Availability, testability, maintainability and maintenance are often defined as a part of "reliability engineering" in reliability programs. Reliability often plays the key role in the cost-effectiveness of systems. Reliability engineering: Reliability engineering deals with the prediction, prevention and management of high levels of "lifetime" engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, reliability is not only achieved by mathematics and statistics. "Nearly all teaching and literature on the subject emphasize these aspects, and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods for prediction and measurement." For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massively multivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability. Reliability engineering: Reliability engineering relates closely to Quality Engineering, safety engineering and system safety, in that they use common methods for their analysis and may require input from each other. It can be said that a system must be reliably safe. Reliability engineering focuses on costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims. History: The word reliability can be traced back to 1816, and is first attested to the poet Samuel Taylor Coleridge. Before World War II the term was linked mostly to repeatability; a test (in any type of science) was considered "reliable" if the same results would be obtained repeatedly. In the 1920s, product improvement through the use of statistical process control was promoted by Dr. Walter A. Shewhart at Bell Labs, around the time that Waloddi Weibull was working on statistical models for fatigue. The development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period of time. History: In World War II, many reliability issues were due to the inherent unreliability of electronic equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published the seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. 
A main application for reliability engineering in the military was for the vacuum tube as used in radar systems and other electronics, for which reliability proved to be very problematic and costly. The IEEE formed the Reliability Society in 1948. In 1950, the United States Department of Defense formed a group called the "Advisory Group on the Reliability of Electronic Equipment" (AGREE) to investigate reliability methods for military equipment. This group recommended three main ways of working: Improve component reliability. History: Establish quality and reliability requirements for suppliers. History: Collect field data and find root causes of failures.In the 1960s, more emphasis was given to reliability testing on component and system level. The famous military standard MIL-STD-781 was created at that time. Around this period also the much-used predecessor to military handbook 217 was published by RCA and was used for the prediction of failure rates of electronic components. The emphasis on component reliability and empirical research (e.g. Mil Std 217) alone slowly decreased. More pragmatic approaches, as used in the consumer industries, were being used. In the 1980s, televisions were increasingly made up of solid-state semiconductors. Automobiles rapidly increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as had microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems. Bellcore issued the first consumer prediction methodology for telecommunications, and SAE developed a similar document SAE870050 for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits (ICs). History: Kam Wong published a paper questioning the bathtub curve—see also reliability-centered maintenance. During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law and doubling about every 18 months. Reliability engineering was now changing as it moved towards understanding the physics of failure. Failure rates for components kept dropping, but system-level issues became more prominent. Systems thinking became more and more important. For software, the CMM model (Capability Maturity Model) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification. The expansion of the World-Wide Web created new challenges of security and trust. The older problem of too little reliability information available had now been replaced by too much information of questionable value. Consumer reliability problems could now be discussed online in real time using data. New technologies such as micro-electromechanical systems (MEMS), handheld GPS, and hand-held devices that combined cell phones and computers all represent challenges to maintain reliability. Product development time continued to shorten through this decade and what had been done in three years was being done in 18 months. 
This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability became part of everyday life and consumer expectations. Overview: Objective The objectives of reliability engineering, in decreasing order of priority, are: To apply engineering knowledge and specialist techniques to prevent or to reduce the likelihood or frequency of failures. To identify and correct the causes of failures that do occur despite the efforts to prevent them. To determine ways of coping with failures that do occur, if their causes have not been corrected. Overview: To apply methods for estimating the likely reliability of new designs, and for analysing reliability data.The reason for the priority emphasis is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to have knowledge of the methods that can be used for analysing designs and data. Overview: Scope and techniques Reliability engineering for "complex systems" requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve: System availability and mission readiness analysis and related reliability and maintenance requirement allocation Functional system failure analysis and derived requirements specification Inherent (system) design reliability analysis and derived requirements specification for both hardware and software design System diagnostics design Fault tolerant systems (e.g. by redundancy) Predictive and preventive maintenance (e.g. reliability-centered maintenance) Human factors / human interaction / human errors Manufacturing- and assembly-induced failures (effect on the detected "0-hour quality" and reliability) Maintenance-induced failures Transport-induced failures Storage-induced failures Use (load) studies, component stress analysis, and derived requirements specification Software (systematic) failures Failure / reliability testing (and derived requirements) Field failure monitoring and corrective actions Spare parts stocking (availability control) Technical documentation, caution and warning analysis Data and information acquisition/organisation (creation of a general reliability development hazard log and FRACAS system) Chaos engineeringEffective reliability engineering requires understanding of the basics of failure mechanisms for which experience, broad engineering skills and good knowledge from many different special fields of engineering are required, for example: Tribology Stress (mechanics) Fracture mechanics / fatigue Thermal engineering Fluid mechanics / shock-loading engineering Electrical engineering Chemical engineering (e.g. 
corrosion) Material science Definitions Reliability may be defined in the following ways: The idea that an item is fit for a purpose with respect to time The capacity of a designed, produced, or maintained item to perform as required over time The capacity of a population of designed, produced or maintained items to perform as required over time The resistance to failure of an item over time The probability of an item to perform a required function under stated conditions for a specified period of time The durability of an object Basics of a reliability assessment Many engineering techniques are used in reliability risk assessments, such as reliability block diagrams, hazard analysis, failure mode and effects analysis (FMEA), fault tree analysis (FTA), Reliability Centered Maintenance, (probabilistic) load and material stress and wear calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect analysis, reliability testing, etc. It is crucial that these analyses are done properly and with much attention to detail to be effective. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks (statement of work (SoW) requirements) that will be performed for that specific system. Overview: Consistent with the creation of safety cases, for example per ARP4761, the goal of reliability assessments is to provide a robust set of qualitative and quantitative evidence that use of a component or system will not be associated with unacceptable risk. The basic steps to take are to: Thoroughly identify relevant unreliability "hazards", e.g. potential conditions, events, human errors, failure modes, interactions, failure mechanisms and root causes, by specific analysis or tests. Overview: Assess the associated system risk, by specific analysis or testing. Propose mitigation, e.g. requirements, design changes, detection logic, maintenance, training, by which the risks may be lowered and controlled for at an acceptable level. Overview: Determine the best mitigation and get agreement on final, acceptable risk levels, possibly based on cost/benefit analysis.Risk here is the combination of probability and severity of the failure incident (scenario) occurring. The severity can be looked at from a system safety or a system availability point of view. Reliability for safety can be thought of as a very different focus from reliability for system availability. Availability and safety can exist in dynamic tension as keeping a system too available can be unsafe. Forcing an engineering system into a safe state too quickly can force false alarms that impede the availability of the system. Overview: In a de minimis definition, severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 Tōhoku earthquake and tsunami)—in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities. 
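To make the probability-and-severity notion above concrete, the following minimal Python sketch ranks a handful of hypothetical failure scenarios by a simple risk index. The hazard names, occurrence rates, severity classes, and the exponential severity weighting are all illustrative assumptions rather than anything prescribed by the text or by reliability standards, which normally use agreed risk matrices.

```python
# Minimal illustration (hypothetical hazards and scales): ranking failure
# scenarios by a simple risk index combining probability of occurrence with a
# numeric severity class. Real programs use agreed risk matrices instead.

hazards = [
    # (description, assumed probability per year, severity class 1=minor .. 4=catastrophic)
    ("seal leak, gradual", 1e-2, 2),
    ("controller lock-up, recoverable", 5e-3, 1),
    ("pressure vessel rupture", 1e-6, 4),
    ("brake actuator jam", 1e-4, 3),
]

# Rank by probability weighted by an (illustrative) exponential severity scale.
ranked = sorted(hazards, key=lambda h: h[1] * 10 ** h[2], reverse=True)
for name, prob, sev in ranked:
    print(f"{name:32s} p={prob:.0e}/yr  severity={sev}  risk index={prob * 10 ** sev:.1e}")
```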
Residual risk is the risk that is left over after all reliability activities have finished, and includes the unidentified risk—and is therefore not completely quantifiable. Overview: The complexity of the technical systems such as improvements of design and materials, planned inspections, fool-proof design, and backup redundancy decreases risk and increases the cost. The risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels. Reliability and availability program plan: Implementing a reliability program is not simply a software purchase; it is not just a checklist of items that must be completed that will ensure one has reliable products and processes. A reliability program is a complex learning and knowledge-based system unique to one's products and processes. It is supported by leadership, built on the skills that one develops within a team, integrated into business processes and executed by following proven standard work practices.A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools, analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements for reliability assessment. For large-scale complex systems, the reliability program plan should be a separate document. Resource determination for manpower and budgets for testing and other tasks is critical for a successful program. In general, the amount of work required for an effective program for complex systems is large. Reliability and availability program plan: A reliability program plan is essential for achieving high levels of reliability, testability, maintainability, and the resulting system availability, and is developed early during system development and refined over the system's life-cycle. It specifies not only what the reliability engineer does, but also the tasks performed by other stakeholders. An effective reliability program plan must be approved by top program management, which is responsible for allocation of sufficient resources for its implementation. Reliability and availability program plan: A reliability program plan may also be used to evaluate and improve the availability of a system by the strategy of focusing on increasing testability & maintainability and not on reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability calculation (prediction uncertainty problem), even when maintainability levels are very high. When reliability is not under control, more complicated issues may arise, like manpower (maintainers / customer service capability) shortages, spare part availability, logistic delays, lack of repair facilities, extensive retro-fit and complex configuration management costs, and others. The problem of unreliability may be increased also due to the "domino effect" of maintenance-induced failures after repairs. Focusing only on maintainability is therefore not enough. If failures are prevented, none of the other issues are of any importance, and therefore reliability is generally regarded as the most important part of availability. 
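The dependence of availability on reliability and maintainability described above can be sketched with the usual steady-state relation A = MTBF / (MTBF + MTTR). The snippet below uses purely hypothetical MTBF and MTTR figures to show how a factor-of-ten uncertainty in the reliability estimate swamps the contribution of a well-characterized repair time.

```python
# Illustrative sketch with assumed numbers: steady-state availability from MTBF
# and MTTR, showing how uncertainty in the reliability figure (MTBF) dominates
# the result even when the repair time (MTTR) is known accurately.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

mttr = 4.0  # assumed mean time to repair, in hours

# Hypothetical MTBF estimate spanning a factor-of-ten uncertainty range.
for mtbf in (500.0, 5_000.0, 50_000.0):
    print(f"MTBF = {mtbf:8.0f} h, MTTR = {mttr} h  ->  A = {availability(mtbf, mttr):.5f}")
```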
Reliability needs to be evaluated and improved related to both availability and the total cost of ownership (TCO) due to cost of spare parts, maintenance man-hours, transport costs, storage cost, part obsolete risks, etc. But, as GM and Toyota have belatedly discovered, TCO also includes the downstream liability costs when reliability calculations have not sufficiently or accurately addressed customers' personal bodily risks. Often a trade-off is needed between the two. There might be a maximum ratio between availability and cost of ownership. Testability of a system should also be addressed in the plan, as this is the link between reliability and maintainability. The maintenance strategy can influence the reliability of a system (e.g., by preventive and/or predictive maintenance), although it can never bring it above the inherent reliability. Reliability and availability program plan: The reliability plan should clearly provide a strategy for availability control. Whether only availability or also cost of ownership is more important depends on the use of the system. For example, a system that is a critical link in a production system—e.g., a big oil platform—is normally allowed to have a very high cost of ownership if that cost translates to even a minor increase in availability, as the unavailability of the platform results in a massive loss of revenue which can easily exceed the high cost of ownership. A proper reliability plan should always address RAMT analysis in its total context. RAMT stands for reliability, availability, maintainability/maintenance, and testability in the context of the customer's needs. Reliability requirements: For any system, one of the first tasks of reliability engineering is to adequately specify the reliability and maintainability requirements allocated from the overall availability needs and, more importantly, derived from proper design failure analysis or preliminary prototype test results. Clear requirements (able to designed to) should constrain the designers from designing particular unreliable items / constructions / interfaces / systems. Setting only availability, reliability, testability, or maintainability targets (e.g., max. failure rates) is not appropriate. This is a broad misunderstanding about Reliability Requirements Engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system or subsystem requirements specifications, test plans, and contract statements. Creation of proper lower-level requirements is critical. Reliability requirements: Provision of only quantitative minimum targets (e.g., Mean Time Between Failure (MTBF) values or failure rates) is not sufficient for different reasons. One reason is that a full validation (related to correctness and verifiability in time) of a quantitative reliability allocation (requirement spec) on lower levels for complex systems can (often) not be made as a consequence of (1) the fact that the requirements are probabilistic, (2) the extremely high level of uncertainties involved for showing compliance with all these probabilistic requirements, and because (3) reliability is a function of time, and accurate estimates of a (probabilistic) reliability number per item are available only very late in the project, sometimes even after many years of in-service use. 
Compare this problem with the continuous (re-)balancing of, for example, lower-level-system mass requirements in the development of an aircraft, which is already often a big undertaking. Notice that in this case, masses do only differ in terms of only some %, are not a function of time, the data is non-probabilistic and available already in CAD models. In case of reliability, the levels of unreliability (failure rates) may change with factors of decades (multiples of 10) as result of very minor deviations in design, process, or anything else. The information is often not available without huge uncertainties within the development phase. This makes this allocation problem almost impossible to do in a useful, practical, valid manner that does not result in massive over- or under-specification. A pragmatic approach is therefore needed—for example: the use of general levels / classes of quantitative requirements depending only on severity of failure effects. Also, the validation of results is a far more subjective task than for any other type of requirement. (Quantitative) reliability parameters—in terms of MTBF—are by far the most uncertain design parameters in any design. Reliability requirements: Furthermore, reliability design requirements should drive a (system or part) design to incorporate features that prevent failures from occurring, or limit consequences from failure in the first place. Not only would it aid in some predictions, this effort would keep from distracting the engineering effort into a kind of accounting work. A design requirement should be precise enough so that a designer can "design to" it and can also prove—through analysis or testing—that the requirement has been achieved, and, if possible, within some a stated confidence. Any type of reliability requirement should be detailed and could be derived from failure analysis (Finite-Element Stress and Fatigue analysis, Reliability Hazard Analysis, FTA, FMEA, Human Factor Analysis, Functional Hazard Analysis, etc.) or any type of reliability testing. Also, requirements are needed for verification tests (e.g., required overload stresses) and test time needed. To derive these requirements in an effective manner, a systems engineering-based risk assessment and mitigation logic should be used. Robust hazard log systems must be created that contain detailed information on why and how systems could or have failed. Requirements are to be derived and tracked in this way. These practical design requirements shall drive the design and not be used only for verification purposes. These requirements (often design constraints) are in this way derived from failure analysis or preliminary tests. Understanding of this difference compared to only purely quantitative (logistic) requirement specification (e.g., Failure Rate / MTBF target) is paramount in the development of successful (complex) systems.The maintainability requirements address the costs of repairs as well as repair time. Testability (not to be confused with test requirements) requirements provide the link between reliability and maintainability and should address detectability of failure modes (on a particular system level), isolation levels, and the creation of diagnostics (procedures). Reliability requirements: As indicated above, reliability engineers should also address requirements for various reliability tasks and documentation during system development, testing, production, and operation. 
These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A safety-critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting analysis and corrective action systems are a common approach for product/process reliability monitoring. Reliability culture / human errors / human factors: In practice, most failures can be traced back to some type of human error, for example in: Management decisions (e.g. in budgeting, timing, and required tasks) Systems Engineering: Use studies (load cases) Systems Engineering: Requirement analysis / setting Systems Engineering: Configuration control Assumptions Calculations / simulations / FEM analysis Design Design drawings Testing (e.g. incorrect load settings or failure measurement) Statistical analysis Manufacturing Quality control Maintenance Maintenance manuals Training Classifying and ordering of information Feedback of field information (e.g. incorrect or too vague) etc.However, humans are also very good at detecting such failures, correcting for them, and improvising when abnormal situations occur. Therefore, policies that completely rule out human actions in design and production processes to improve reliability may not be effective. Some tasks are better performed by humans and some are better performed by machines.Furthermore, human errors in management; the organization of data and information; or the misuse or abuse of items, may also contribute to unreliability. This is the core reason why high levels of reliability for complex systems can only be achieved by following a robust systems engineering process with proper planning and execution of the validation and verification tasks. This also includes careful organization of data and information sharing and creating a "reliability culture", in the same way that having a "safety culture" is paramount in the development of safety critical systems. Reliability prediction and improvement: Reliability prediction combines: creation of a proper reliability model (see further on this page) estimation (and justification) of input parameters for this model (e.g. failure rates for a particular failure mode or event and the mean time to repair the system for a particular failure) estimation of output reliability parameters at system or part level (i.e. system availability or frequency of a particular functional failure) The emphasis on quantification and target setting (e.g. MTBF) might imply there is a limit to achievable reliability, however, there is no inherent limit and development of higher reliability does not need to be more costly. In addition, they argue that prediction of reliability from historic data can be very misleading, with comparisons only valid for identical designs, products, manufacturing processes, and maintenance with identical operating loads and usage environments. Even minor changes in any of these could have major effects on reliability. Furthermore, the most unreliable and important items (i.e. 
the most interesting candidates for a reliability investigation) are most likely to be modified and re-engineered since historical data was gathered, making the standard (re-active or pro-active) statistical methods and processes used in e.g. medical or insurance industries less effective. Another surprising – but logical – argument is that to be able to accurately predict reliability by testing, the exact mechanisms of failure must be known and therefore – in most cases – could be prevented! Following the incorrect route of trying to quantify and solve a complex reliability engineering problem in terms of MTBF or probability using an-incorrect – for example, the re-active – approach is referred to by Barnard as "Playing the Numbers Game" and is regarded as bad practice.For existing systems, it is arguable that any attempt by a responsible program to correct the root cause of discovered failures may render the initial MTBF estimate invalid, as new assumptions (themselves subject to high error levels) of the effect of this correction must be made. Another practical issue is the general unavailability of detailed failure data, with those available often featuring inconsistent filtering of failure (feedback) data, and ignoring statistical errors (which are very high for rare events like reliability related failures). Very clear guidelines must be present to count and compare failures related to different type of root-causes (e.g. manufacturing-, maintenance-, transport-, system-induced or inherent design failures). Comparing different types of causes may lead to incorrect estimations and incorrect business decisions about the focus of improvement. Reliability prediction and improvement: To perform a proper quantitative reliability prediction for systems may be difficult and very expensive if done by testing. At the individual part-level, reliability results can often be obtained with comparatively high confidence, as testing of many sample parts might be possible using the available testing budget. However, unfortunately these tests may lack validity at a system-level due to assumptions made at part-level testing. These authors emphasized the importance of initial part- or system-level testing until failure, and to learn from such failures to improve the system or part. The general conclusion is drawn that an accurate and absolute prediction – by either field-data comparison or testing – of reliability is in most cases not possible. An exception might be failures due to wear-out problems such as fatigue failures. In the introduction of MIL-STD-785 it is written that reliability prediction should be used with great caution, if not used solely for comparison in trade-off studies. Reliability prediction and improvement: Design for reliability Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a product meets its reliability requirements, under its use environment, for the duration of its lifetime. DfR is implemented in the design stage of a product to proactively improve product reliability. DfR is often used as part of an overall Design for Excellence (DfX) strategy. Reliability prediction and improvement: Statistics-based approach (i.e. MTBF) Reliability design begins with the development of a (system) model. Reliability and availability models use block diagrams and Fault Tree Analysis to provide a graphical means of evaluating the relationships between different parts of the system. 
These models may incorporate predictions based on failure rates taken from historical data. While the (input data) predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives. Maintainability parameters, for example Mean time to repair (MTTR), can also be used as inputs for such models. Reliability prediction and improvement: The most important fundamental initiating causes and failure mechanisms are to be identified and analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability should be provided to designers so that they can generate low-stressed designs and products that protect, or are protected against, damage and excessive wear. Proper validation of input loads (requirements) may be needed, in addition to verification for reliability "performance" by testing. Reliability prediction and improvement: One of the most important design techniques is redundancy. This means that if one part of the system fails, there is an alternate success path, such as a backup system. The reason why this is the ultimate design choice is related to the fact that high-confidence reliability evidence for new parts or systems is often not available, or is extremely expensive to obtain. By combining redundancy, together with a high level of failure monitoring, and the avoidance of common cause failures; even a system with relatively poor single-channel (part) reliability, can be made highly reliable at a system level (up to mission critical reliability). No testing of reliability has to be required for this. In conjunction with redundancy, the use of dissimilar designs or manufacturing processes (e.g. via different suppliers of similar parts) for single independent channels, can provide less sensitivity to quality issues (e.g. early childhood failures at a single supplier), allowing very-high levels of reliability to be achieved at all moments of the development cycle (from early life to long-term). Redundancy can also be applied in systems engineering by double checking requirements, data, designs, calculations, software, and tests to overcome systematic failures. Reliability prediction and improvement: Another effective way to deal with reliability issues is to perform analysis that predicts degradation, enabling the prevention of unscheduled downtime events / failures. RCM (Reliability Centered Maintenance) programs can be used for this. Reliability prediction and improvement: Physics-of-failure-based approach For electronic assemblies, there has been an increasing shift towards a different approach called physics of failure. This technique relies on understanding the physical static and dynamic failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a high level of detail, made possible with the use of modern finite element method (FEM) software programs that can handle complex geometries and mechanisms such as creep, stress relaxation, fatigue, and probabilistic design (Monte Carlo Methods/DOE). The material or component can be re-designed to reduce the probability of failure and to make it more robust against such variations. Another common design technique is component derating: i.e. selecting components whose specifications significantly exceed the expected stress levels, such as using heavier gauge electrical wire than might normally be specified for the expected electric current. 
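The effect of the redundancy technique described above can be illustrated with a small reliability block diagram calculation. The sketch below is only an assumption-laden example: the failure rates, the mission time, and the names r_sensor and r_controller are invented for illustration, independence between channels is assumed (common cause failures are ignored), and a constant-failure-rate (exponential) model is used for the component reliabilities.

```python
# Illustrative sketch (assumed failure rates and mission time): component
# reliabilities from a constant-failure-rate (exponential) model, combined
# through a simple reliability block diagram with series and parallel blocks.
# Channel independence is assumed, so common cause failures are not modelled.
import math

def r_exponential(failure_rate_per_hour: float, t_hours: float) -> float:
    """Component reliability R(t) = exp(-lambda * t) for a constant failure rate."""
    return math.exp(-failure_rate_per_hour * t_hours)

def series(*reliabilities: float) -> float:
    """All blocks must survive: R = product of the R_i (independence assumed)."""
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def parallel(*reliabilities: float) -> float:
    """At least one redundant block must survive: R = 1 - product of (1 - R_i)."""
    out = 1.0
    for r in reliabilities:
        out *= 1.0 - r
    return 1.0 - out

t = 1_000.0                            # mission time in hours (hypothetical)
r_sensor = r_exponential(2e-4, t)      # modest single-channel reliability (~0.82)
r_controller = r_exponential(5e-5, t)  # (~0.95)

single_channel = series(r_sensor, r_controller)
dual_redundant = series(parallel(r_sensor, r_sensor), r_controller)
print(f"single sensor channel : {single_channel:.4f}")
print(f"duplicated sensors    : {dual_redundant:.4f}")
```

In this hypothetical case, duplicating a sensor channel with a reliability of roughly 0.82 raises the modelled system reliability from about 0.78 to about 0.92, which is why redundancy is attractive when high-confidence reliability evidence for individual parts is unavailable.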
Reliability prediction and improvement: Common tools and techniques Many of the tasks, techniques, and analyses used in Reliability Engineering are specific to particular industries and applications, but can commonly include: Physics of failure (PoF) Built-in self-test (BIT or BIST) (testability analysis) Failure mode and effects analysis (FMEA) Reliability hazard analysis Reliability block-diagram analysis Dynamic reliability block-diagram analysis Fault tree analysis Root cause analysis Statistical engineering, design of experiments – e.g. on simulations / FEM models or with testing Sneak circuit analysis Accelerated testing Reliability growth analysis (re-active reliability) Weibull analysis (for testing or mainly "re-active" reliability) Thermal analysis by finite element analysis (FEA) and / or measurement Thermal induced, shock and vibration fatigue analysis by FEA and / or measurement Electromagnetic analysis Avoidance of single point of failure (SPOF) Functional analysis and functional failure analysis (e.g., function FMEA, FHA or FFA) Predictive and preventive maintenance: reliability centered maintenance (RCM) analysis Testability analysis Failure diagnostics analysis (normally also incorporated in FMEA) Human error analysis Operational hazard analysis Preventative/Planned Maintenance Optimization (PMO) Manual screening Integrated logistics supportResults from these methods are presented during reviews of part or system design, and logistics. Reliability is just one requirement among many for a complex part or system. Engineering trade-off studies are used to determine the optimum balance between reliability requirements and other constraints. Reliability prediction and improvement: The importance of language Reliability engineers, whether using quantitative or qualitative methods to describe a failure or hazard, rely on language to pinpoint the risks and enable issues to be solved. The language used must help create an orderly description of the function/item/system and its complex surrounding as it relates to the failure of these functions/items/systems. Systems engineering is very much about finding the correct words to describe the problem (and related risks), so that they can be readily solved via engineering solutions. Jack Ring said that a systems engineer's job is to "language the project." (Ring et al. 2000) For part/system failures, reliability engineers should concentrate more on the "why and how", rather that predicting "when". Understanding "why" a failure has occurred (e.g. due to over-stressed components or manufacturing issues) is far more likely to lead to improvement in the designs and processes used than quantifying "when" a failure is likely to occur (e.g. via determining MTBF). To do this, first the reliability hazards relating to the part/system need to be classified and ordered (based on some form of qualitative and quantitative logic if possible) to allow for more efficient assessment and eventual improvement. This is partly done in pure language and proposition logic, but also based on experience with similar items. This can for example be seen in descriptions of events in fault tree analysis, FMEA analysis, and hazard (tracking) logs. In this sense language and proper grammar (part of qualitative analysis) plays an important role in reliability engineering, just like it does in safety engineering or in-general within systems engineering. 
Reliability prediction and improvement: Correct use of language can also be key to identifying or reducing the risks of human error, which are often the root cause of many failures. This can include proper instructions in maintenance manuals, operation manuals, emergency procedures, and others to prevent systematic human errors that may result in system failures. These should be written by trained or experienced technical authors using so-called simplified English or Simplified Technical English, where words and structure are specifically chosen and created so as to reduce ambiguity or risk of confusion (e.g. "replace the old part" could ambiguously refer to swapping a worn-out part with a non-worn-out part, or replacing a part with one using a more recent and hopefully improved design). Reliability modeling: Reliability modeling is the process of predicting or understanding the reliability of a component or system prior to its implementation. Two types of analysis that are often used to model a complete system's availability behavior, including effects from logistics issues like spare part provisioning, transport and manpower, are fault tree analysis and reliability block diagrams. At a component level, the same types of analyses can be used together with others. The input for the models can come from many sources including testing; prior operational experience; field data; as well as data handbooks from similar or related industries. Regardless of source, all model input data must be used with great caution, as predictions are only valid in cases where the same product was used in the same context. As such, predictions are often only used to help compare alternatives. Reliability modeling: For part level predictions, two separate fields of investigation are common: The physics of failure approach uses an understanding of the physical failure mechanisms involved, such as mechanical crack propagation or chemical corrosion degradation or failure; The parts stress modelling approach is an empirical method for prediction based on counting the number and type of components of the system, and the stress they undergo during operation. Reliability modeling: Reliability theory Reliability is defined as the probability that a device will perform its intended function during a specified period of time under stated conditions. Mathematically, this may be expressed as R(t) = ∫_t^∞ f(x) dx, where f(x) is the failure probability density function and t is the length of the period of time (which is assumed to start from time zero). Reliability modeling: There are a few key elements of this definition: Reliability is predicated on "intended function": generally, this is taken to mean operation without failure. However, even if no individual part of the system fails but the system as a whole does not do what was intended, this is still charged against the system reliability. The system requirements specification is the criterion against which reliability is measured. Reliability modeling: Reliability applies to a specified period of time. In practical terms, this means that a system has a specified chance that it will operate without failure before time T. Reliability engineering ensures that components and materials will meet the requirements during the specified time. Note that units other than time may sometimes be used (e.g. "a mission", "operation cycles"). Reliability modeling: Reliability is restricted to operation under stated (or explicitly defined) conditions. 
This constraint is necessary because it is impossible to design a system for unlimited conditions. A Mars rover will have different specified conditions than a family car. The operating environment must be addressed during design and testing. That same rover may be required to operate in varying conditions requiring additional scrutiny. Reliability modeling: Two notable references on reliability theory and its mathematical and statistical foundations are Barlow, R. E. and Proschan, F. (1982) and Samaniego, F. J. (2007). Reliability modeling: Quantitative system reliability parameters—theory Quantitative requirements are specified using reliability parameters. The most common reliability parameter is the mean time to failure (MTTF), which can also be specified as the failure rate (this is expressed as a frequency or conditional probability density function (PDF)) or the number of failures during a given period. These parameters may be useful for higher system levels and systems that are operated frequently (i.e. vehicles, machinery, and electronic equipment). Reliability increases as the MTTF increases. The MTTF is usually specified in hours, but can also be used with other units of measurement, such as miles or cycles. Using MTTF values on lower system levels can be very misleading, especially if they do not specify the associated Failures Modes and Mechanisms (The F in MTTF).In other cases, reliability is specified as the probability of mission success. For example, reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often used in system safety engineering. Reliability modeling: A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobile airbags, thermal batteries and missiles. Single-shot reliability is specified as a probability of one-time success or is subsumed into a related parameter. Single-shot missile reliability may be specified as a requirement for the probability of a hit. For such systems, the probability of failure on demand (PFD) is the reliability measure – this is actually an "unavailability" number. The PFD is derived from failure rate (a frequency of occurrence) and mission time for non-repairable systems. Reliability modeling: For repairable systems, it is obtained from failure rate, mean-time-to-repair (MTTR), and test interval. This measure may not be unique for a given system as this measure depends on the kind of demand. In addition to system level requirements, reliability requirements may be specified for critical subsystems. In most cases, reliability parameters are specified with appropriate statistical confidence intervals. Reliability testing: The purpose of reliability testing is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements. Reliability testing may be performed at several levels and there are different types of testing. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels. Reliability testing: (The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. 
Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. However, testing does not mitigate unreliability risk. Reliability testing: With each test both a statistical type 1 and type 2 error could be made and depends on sample size, test time, assumptions and the needed discrimination ratio. There is risk of incorrectly accepting a bad design (type 1 error) and the risk of incorrectly rejecting a good design (type 2 error). Reliability testing: It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; some failure modes may take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as (highly) accelerated life testing, design of experiments, and simulations. Reliability testing: The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. The customer and developer should agree in advance on how reliability requirements will be tested. Reliability testing: A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather and unexpected situations create differences between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer. Reliability testing: As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented. Reliability testing is common in the Photonics industry. Examples of reliability tests of lasers are life test and burn-in. These tests consist of the highly accelerated aging, under controlled conditions, of a group of lasers. The data collected from these life tests are used to predict laser life expectancy under the intended operating characteristics. Reliability testing: Reliability test requirements Reliability test requirements can follow from any analysis for which the first estimate of failure probability, failure mode or effect needs to be justified. Evidence can be generated with some level of confidence by testing. 
With software-based systems, the probability is a mix of software and hardware-based failures. Testing reliability requirements is problematic for several reasons. A single test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical, and environmental conditions can be hard to predict over a system's life-cycle. Reliability testing: Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at 90% confidence level. From this specification, the reliability engineer can, for example, design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed. Different sorts of tests are possible. Reliability testing: The combination of required reliability level and required confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements, e.g. cost-effectiveness. Reliability testing may be performed at various levels, such as component, subsystem and system. Also, many factors must be addressed during testing and operation, such as extreme temperature and humidity, shock, vibration, or other environmental factors (like loss of signal, cooling or power; or other catastrophes such as fire, floods, excessive heat, physical or security violations or other myriad forms of damage or degradation). For systems that must last many years, accelerated life tests may be needed. Reliability testing: Accelerated testing The purpose of accelerated life testing (ALT test) is to induce field failure in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the product is expected to fail in the lab just as it would have failed in the field, but in much less time. Reliability testing: The main objective of an accelerated test is either of the following: to discover failure modes, or to predict the normal field life from the high-stress lab life. An accelerated testing program can be broken down into the following steps: define the objective and scope of the test; collect required information about the product; identify the stress(es); determine the level of stress(es); and conduct the accelerated test and analyze the collected data. Common ways to determine a life-stress relationship are: the Arrhenius model; the Eyring model; the inverse power law model; the temperature–humidity model; and the temperature non-thermal model. Software reliability: Software reliability is a special aspect of reliability engineering. It focuses on foundations and techniques to make software more reliable, i.e., resilient to faults. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present day systems.
Therefore, software reliability has gained prominence within the field of system reliability. There are significant differences, however, in how software and hardware behave. Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state. However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account. Software reliability: Despite this difference in the source of failure between software and hardware, several software reliability models based on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure (Shooman 1987; Musa 2005; Denney 2005). Software reliability: As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics and software models to be used during software development. Software reliability: A common reliability metric is the number of software faults per line of code (FLOC), usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) decreases. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used. This metric remains controversial, since changes in software development and verification practices can have a dramatic impact on overall defect rates. Software reliability: Software testing is an important aspect of software reliability. Even the best software development process results in some software faults that are nearly undetectable until tested. Software is tested at several levels, starting with individual units, through integration and full-up system testing. In all phases of testing, software faults are discovered and corrected, and the software is re-tested. Reliability estimates are updated based on the fault density and other metrics.
At a system level, mean-time-between-failure data can be collected and used to estimate reliability. Unlike hardware, performing exactly the same test on exactly the same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics, such as code coverage. Software reliability: The Software Engineering Institute's capability maturity model is a common means of assessing the overall software development process for reliability and quality purposes. Structural reliability: Structural reliability or the reliability of structures is the application of reliability theory to the behavior of structures. It is used in both the design and maintenance of different types of structures including concrete and steel structures. In structural reliability studies both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated. Comparison to safety engineering: Reliability for safety and reliability for availability are often closely related. Lost availability of an engineering system can cost money. If a subway system is unavailable the subway operator will lose money for each hour the system is down. The subway operator will lose more money if safety is compromised. The definition of reliability is tied to a probability of not encountering a failure. A failure can cause loss of safety, loss of availability or both. It is undesirable to lose safety or availability in a critical system. Comparison to safety engineering: Reliability engineering is concerned with overall minimisation of failures that could lead to financial losses for the responsible entity, whereas safety engineering focuses on minimising a specific set of failure types that in general could lead to loss of life, injury or damage to equipment. Comparison to safety engineering: Reliability hazards could transform into incidents leading to a loss of revenue for the company or the customer, for example due to direct and indirect costs associated with: loss of production due to system unavailability; unexpected high or low demands for spares; repair costs; man-hours; re-designs or interruptions to normal production.Safety engineering is often highly specific, relating only to certain tightly regulated industries, applications, or areas. It primarily focuses on system safety hazards that could lead to severe accidents including: loss of life; destruction of equipment; or environmental damage. As such, the related system functional reliability requirements are often extremely high. Although it deals with unwanted failures in the same sense as reliability engineering, it, however, has less of a focus on direct costs, and is not concerned with post-failure repair actions. Another difference is the level of impact of failures on society, leading to a tendency for strict control by governments or regulatory bodies (e.g. nuclear, aerospace, defense, rail and oil industries). Comparison to safety engineering: Fault tolerance Safety can be increased using a 2oo2 cross checked redundant system. Availability can be increased by using "1oo2" (1 out of 2) redundancy at a part or system level. If both redundant elements disagree the more permissive element will maximize availability. A 1oo2 system should never be relied on for safety. Fault-tolerant systems often rely on additional redundancy (e.g. 2oo3 voting logic) where multiple redundant elements must agree on a potentially unsafe action before it is performed. 
This increases both availability and safety at a system level. This is common practice in Aerospace systems that need continued availability and do not have a fail-safe mode. For example, aircraft may use triple modular redundancy for flight computers and control surfaces (including occasionally different modes of operation e.g. electrical/mechanical/hydraulic) as these need to always be operational, due to the fact that there are no "safe" default positions for control surfaces such as rudders or ailerons when the aircraft is flying. Comparison to safety engineering: Basic reliability and mission reliability The above example of a 2oo3 fault tolerant system increases both mission reliability as well as safety. However, the "basic" reliability of the system will in this case still be lower than a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not result in system failure, but do result in additional cost due to: maintenance repair actions; logistics; spare parts etc. For example, replacement or repair of 1 faulty channel in a 2oo3 voting system, (the system is still operating, although with one failed channel it has actually become a 2oo2 system) is contributing to basic unreliability but not mission unreliability. As an example, the failure of the tail-light of an aircraft will not prevent the plane from flying (and so is not considered a mission failure), but it does need to be remedied (with a related cost, and so does contribute to the basic unreliability levels). Comparison to safety engineering: Detectability and common cause failures When using fault tolerant (redundant) systems or systems that are equipped with protection functions, detectability of failures and avoidance of common cause failures becomes paramount for safe functioning and/or mission reliability. Reliability versus quality (Six Sigma): Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the failure intensity over the whole life of a product or engineering system from commissioning to decommissioning. Six Sigma has its roots in statistical control in quality of manufacturing. Reliability engineering is a specialty part of systems engineering. The systems engineering process is a discovery process that is often unlike a manufacturing process. A manufacturing process is often focused on repetitive activities that achieve high quality outputs with minimum cost and time.The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of excellence. In industry, a more precise definition of quality as "conformance to requirements or specifications at the start of use" is used. Assuming the final product specification adequately captures the original requirements and customer/system needs, the quality level can be measured as the fraction of product units shipped that meet specifications. Manufactured goods quality often focuses on the number of warranty claims during the warranty period. Reliability versus quality (Six Sigma): Quality is a snapshot at the start of life through the warranty period and is related to the control of lower-level product specifications. This includes time-zero defects i.e. where manufacturing mistakes escaped final Quality Control. In theory the quality level might be described by a single fraction of defective products. Reliability, as a part of systems engineering, acts as more of an ongoing assessment of failure rates over many years. 
Theoretically, all items will fail over an infinite period of time. Defects that appear over time are referred to as reliability fallout. To describe reliability fallout a probability model that describes the fraction fallout over time is needed. This is known as the life distribution model. Some of these reliability issues may be due to inherent design issues, which may exist even though the product conforms to specifications. Even items that are produced perfectly will fail over time due to one or more failure mechanisms (e.g. due to human error or mechanical, electrical, and chemical factors). These reliability issues can also be influenced by acceptable levels of variation during initial production. Reliability versus quality (Six Sigma): Quality and reliability are, therefore, related to manufacturing. Reliability is more targeted towards clients who are focused on failures throughout the whole life of the product such as the military, airlines or railroads. Items that do not conform to product specification will generally do worse in terms of reliability (having a lower MTTF), but this does not always have to be the case. The full mathematical quantification (in statistical models) of this combined relation is in general very difficult or even practically impossible. In cases where manufacturing variances can be effectively reduced, six sigma tools have been shown to be useful to find optimal process solutions which can increase quality and reliability. Six Sigma may also help to design products that are more robust to manufacturing induced failures and infant mortality defects in engineering systems and manufactured product. Reliability versus quality (Six Sigma): In contrast with Six Sigma, reliability engineering solutions are generally found by focusing on reliability testing and system design. Solutions are found in different ways, such as by simplifying a system to allow more of the mechanisms of failure involved to be understood; performing detailed calculations of material stress levels allowing suitable safety factors to be determined; finding possible abnormal system load conditions and using this to increase robustness of a design to manufacturing variance related failure mechanisms. Furthermore, reliability engineering uses system-level solutions, like designing redundant and fault-tolerant systems for situations with high availability needs (see Reliability engineering vs Safety engineering above). Reliability versus quality (Six Sigma): Note: A "defect" in six-sigma/quality literature is not the same as a "failure" (Field failure | e.g. fractured item) in reliability. A six-sigma/quality defect refers generally to non-conformance with a requirement (e.g. basic functionality or a key dimension). Items can, however, fail over time, even if these requirements are all fulfilled. Quality is generally not concerned with asking the crucial question "are the requirements actually correct?", whereas reliability is. Reliability operational assessment: Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters identified during the fault tree analysis design stage. Data collection is highly dependent on the nature of the system. Most large organizations have quality control groups that collect failure data on vehicles, equipment and machinery. Consumer product failures are often tracked by the number of returns. 
For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification. Since it is not possible to anticipate all the failure modes of a given system, especially ones with a human element, failures will occur. The reliability program also includes a systematic root cause analysis that identifies the causal relationships involved in the failure such that effective corrective actions may be implemented. When possible, system failures and corrective actions are reported to the reliability engineering organization. Reliability operational assessment: Some of the most common methods to apply to a reliability operational assessment are failure reporting, analysis, and corrective action systems (FRACAS). This systematic approach develops a reliability, safety, and logistics assessment based on failure/incident reporting, management, analysis, and corrective/preventive actions. Organizations today are adopting this method and utilizing commercial systems (such as Web-based FRACAS applications) that enable them to create a failure/incident data repository from which statistics can be derived to view accurate and genuine reliability, safety, and quality metrics. Reliability operational assessment: It is extremely important for an organization to adopt a common FRACAS system for all end items. Also, it should allow test results to be captured in a practical way. Failure to adopt one easy-to-use (in terms of ease of data-entry for field engineers and repair shop engineers) and easy-to-maintain integrated system is likely to result in a failure of the FRACAS program itself. Reliability operational assessment: Some of the common outputs from a FRACAS system include Field MTBF, MTTR, spares consumption, reliability growth, failure/incidents distribution by type, location, part no., serial no., and symptom. The use of past data to predict the reliability of new comparable systems/items can be misleading as reliability is a function of the context of use and can be affected by small changes in design/manufacturing. Reliability organizations: Systems of any significant complexity are developed by organizations of people, such as a commercial company or a government agency. The reliability engineering organization must be consistent with the company's organizational structure. For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization. Reliability organizations: There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance or specialty engineering organization, which may include reliability, maintainability, quality, safety, human factors, logistics, etc. In such case, the reliability engineer reports to the product assurance manager or specialty engineering manager. Reliability organizations: In some cases, a company may wish to establish an independent reliability organization. 
This is desirable to ensure that the system reliability effort, which is often expensive and time-consuming, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project day-to-day, but is actually employed and paid by a separate organization within the company. Reliability organizations: Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team. Education: Some universities offer graduate degrees in reliability engineering. Other reliability professionals typically have a physics degree from a university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer may be registered as a professional engineer by the state or province, but not all reliability professionals are engineers. Reliability engineers are required in systems where public safety is at risk. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the American Society for Quality Reliability Division (ASQ-RD), the IEEE Reliability Society, the American Society for Quality (ASQ), and the Society of Reliability Engineers (SRE). A group of engineers has provided a list of useful tools for reliability engineering. These include: PTC Windchill software, RAM Commander software, RelCalc software, Military Handbook 217 (Mil-HDBK-217), 217Plus and the NAVMAT P-4855-1A manual. Analyzing failures and successes, coupled with a quality standards process, also provides systematic information for making informed engineering design decisions.
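As a closing numerical illustration of the quantitative ideas used throughout this article (MTTF/MTBF, failure rate, and demonstrating a requirement such as an MTBF of 1000 hours at 90% confidence), the sketch below assumes a constant failure rate, i.e. an exponential life distribution. That assumption, the function names, and the zero-failure test plan are one common simplification used for illustration here, not the only valid approach.

```python
# Illustrative sketch under an exponential (constant failure rate) life model.
# One common test-planning approach among several; the numbers are hypothetical
# apart from the MTBF-of-1000-hours-at-90%-confidence example quoted above.
import math

def reliability(t_hours, mtbf_hours):
    """R(t) = exp(-t / MTBF) when the failure rate is constant."""
    return math.exp(-t_hours / mtbf_hours)

def zero_failure_test_hours(mtbf_requirement, confidence):
    """Total failure-free device-hours needed to demonstrate the MTBF requirement
    at the given confidence level: T = -MTBF * ln(1 - confidence)."""
    return -mtbf_requirement * math.log(1.0 - confidence)

print(reliability(100, 1000))               # ~0.905: probability of surviving 100 h given MTBF = 1000 h
print(zero_failure_test_hours(1000, 0.90))  # ~2303: hours of failure-free testing for 90% confidence
```

Allowing failures during the test, or dropping the constant-failure-rate assumption (for example in favour of a Weibull model), changes the arithmetic considerably, which is one reason the statistical model and scoring rules are normally agreed between customer and developer before testing starts.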
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polymorphism (biology)** Polymorphism (biology): In biology, polymorphism is the occurrence of two or more clearly different morphs or forms, also referred to as alternative phenotypes, in the population of a species. To be classified as such, morphs must occupy the same habitat at the same time and belong to a panmictic population (one with random mating).Put simply, polymorphism is when there are two or more possibilities of a trait on a gene. For example, there is more than one possible trait in terms of a jaguar's skin colouring; they can be light morph or dark morph. Due to having more than one possible variation for this gene, it is termed 'polymorphism'. However, if the jaguar has only one possible trait for that gene, it would be termed "monomorphic". For example, if there was only one possible skin colour that a jaguar could have, it would be termed monomorphic. Polymorphism (biology): The term polyphenism can be used to clarify that the different forms arise from the same genotype. Genetic polymorphism is a term used somewhat differently by geneticists and molecular biologists to describe certain mutations in the genotype, such as single nucleotide polymorphisms that may not always correspond to a phenotype, but always corresponds to a branch in the genetic tree. See below. Polymorphism (biology): Polymorphism is common in nature; it is related to biodiversity, genetic variation, and adaptation. Polymorphism usually functions to retain a variety of forms in a population living in a varied environment.: 126  The most common example is sexual dimorphism, which occurs in many organisms. Other examples are mimetic forms of butterflies (see mimicry), and human hemoglobin and blood types. According to the theory of evolution, polymorphism results from evolutionary processes, as does any aspect of a species. It is heritable and is modified by natural selection. In polyphenism, an individual's genetic makeup allows for different morphs, and the switch mechanism that determines which morph is shown is environmental. In genetic polymorphism, the genetic makeup determines the morph. The term polymorphism also refers to the occurrence of structurally and functionally more than two different types of individuals, called zooids, within the same organism. It is a characteristic feature of cnidarians. For example, Obelia has feeding individuals, the gastrozooids; the individuals capable of asexual reproduction only, the gonozooids, blastostyles; and free-living or sexually reproducing individuals, the medusae. Balanced polymorphism refers to the maintenance of different phenotypes in population. Terminology: Monomorphism means having only one form. Dimorphism means having two forms. Polymorphism does not cover characteristics showing continuous variation (such as weight), though this has a heritable component. Polymorphism deals with forms in which the variation is discrete (discontinuous) or strongly bimodal or polymodal. Terminology: Morphs must occupy the same habitat at the same time; this excludes geographical races and seasonal forms. The use of the words "morph" or "polymorphism" for what is a visibly different geographical race or variant is common, but incorrect. The significance of geographical variation is that it may lead to allopatric speciation, whereas true polymorphism takes place in panmictic populations. Terminology: The term was first used to describe visible forms, but it has been extended to include cryptic morphs, for instance blood types, which can be revealed by a test. 
Terminology: Rare variations are not classified as polymorphisms, and mutations by themselves do not constitute polymorphisms. To qualify as a polymorphism, some kind of balance must exist between morphs underpinned by inheritance. The criterion is that the frequency of the least common morph is too high simply to be the result of new mutations or, as a rough guide, that it is greater than 1% (though that is far higher than any normal mutation rate for a single allele).: ch. 5 Nomenclature Polymorphism crosses several discipline boundaries, including ecology, genetics, evolution theory, taxonomy, cytology, and biochemistry. Different disciplines may give the same concept different names, and different concepts may be given the same name. For example, there are the terms established in ecological genetics by E.B. Ford (1975), and for classical genetics by John Maynard Smith (1998). The shorter term morphism was preferred by the evolutionary biologist Julian Huxley (1955).Various synonymous terms exist for the various polymorphic forms of an organism. The most common are morph and morpha, while a more formal term is morphotype. Form and phase are sometimes used, but are easily confused in zoology with, respectively, "form" in a population of animals, and "phase" as a color or other change in an organism due to environmental conditions (temperature, humidity, etc.). Phenotypic traits and characteristics are also possible descriptions, though that would imply just a limited aspect of the body. Terminology: In the taxonomic nomenclature of zoology, the word "morpha" plus a Latin name for the morph can be added to a binomial or trinomial name. However, this invites confusion with geographically variant ring species or subspecies, especially if polytypic. Morphs have no formal standing in the ICZN. In botanical taxonomy, the concept of morphs is represented with the terms "variety", "subvariety" and "form", which are formally regulated by the ICN. Horticulturists sometimes confuse this usage of "variety" both with cultivar ("variety" in viticultural usage, rice agriculture jargon, and informal gardening lingo) and with the legal concept "plant variety" (protection of a cultivar as a form of intellectual property). Mechanisms: Three mechanisms may cause polymorphism: Genetic polymorphism – where the phenotype of each individual is genetically determined A conditional development strategy, where the phenotype of each individual is set by environmental cues A mixed development strategy, where the phenotype is randomly assigned during development Relative frequency: Endler's survey of natural selection gave an indication of the relative importance of polymorphisms among studies showing natural selection. The results, in summary: Number of species demonstrating natural selection: 141. Number showing quantitative traits: 56. Number showing polymorphic traits: 62. Number showing both Q and P traits: 23. This shows that polymorphisms are found to be at least as common as continuous variation in studies of natural selection, and hence just as likely to be part of the evolutionary process. 
Genetics: Genetic polymorphism Since all polymorphism has a genetic basis, genetic polymorphism has a particular meaning: Genetic polymorphism is the simultaneous occurrence in the same locality of two or more discontinuous forms in such proportions that the rarest of them cannot be maintained just by recurrent mutation or immigration, as originally defined by Ford (1940).: 11  The later definition by Cavalli-Sforza & Bodmer (1971) is currently used: "Genetic polymorphism is the occurrence in the same population of two or more alleles at one locus, each with appreciable frequency", where the minimum frequency is typically taken as 1%. The definition has three parts: a) sympatry: one interbreeding population; b) discrete forms; and c) not maintained just by mutation. Genetics: In simple words, the term polymorphism was originally used to describe variations in shape and form that distinguish normal individuals within a species from each other. Presently, geneticists use the term genetic polymorphism to describe the inter-individual, functionally silent differences in DNA sequence that make each human genome unique. Genetic polymorphism is actively and steadily maintained in populations by natural selection, in contrast to transient polymorphisms where a form is progressively replaced by another.: 6–7  By definition, genetic polymorphism relates to a balance or equilibrium between morphs. The mechanisms that conserve it are types of balancing selection. Genetics: Mechanisms of balancing selection Heterosis (or heterozygote advantage): "Heterosis: the heterozygote at a locus is fitter than either homozygote" (a short numerical sketch follows below).: 65  Frequency dependent selection: The fitness of a particular phenotype is dependent on its frequency relative to other phenotypes in a given population. Example: prey switching, where rare morphs of prey are actually fitter due to predators concentrating on the more frequent morphs. Genetics: Fitness varies in time and space. Fitness of a genotype may vary greatly between larval and adult stages, or between parts of a habitat range.: 26  Selection acts differently at different levels. The fitness of a genotype may depend on the fitness of other genotypes in the population: this covers many natural situations where the best thing to do (from the point of view of survival and reproduction) depends on what other members of the population are doing at the time.: 17 & ch. 7 Pleiotropism Most genes have more than one effect on the phenotype of an organism (pleiotropism). Some of these effects may be visible, and others cryptic, so it is often important to look beyond the most obvious effects of a gene to identify other effects. Cases occur where a gene affects an unimportant visible character, yet a change in fitness is recorded. In such cases, the gene's other (cryptic or 'physiological') effects may be responsible for the change in fitness. Pleiotropism poses continual challenges for many clinical dysmorphologists in their attempt to explain birth defects which affect one or more organ systems, with only a single underlying causative agent. For many pleiotropic disorders, the connection between the gene defect and the various manifestations is neither obvious nor well understood. Genetics: "If a neutral trait is pleiotropically linked to an advantageous one, it may emerge because of a process of natural selection. It was selected but this doesn't mean it is an adaptation. The reason is that, although it was selected, there was no selection for that trait."
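The heterozygote advantage (heterosis) listed above among the mechanisms of balancing selection can be illustrated with the standard one-locus selection model: if the two homozygotes have fitnesses 1 - s and 1 - t relative to a heterozygote fitness of 1, both alleles are retained at an equilibrium frequency of t/(s + t). The short sketch below is a textbook illustration with hypothetical fitness values; it is not taken from the studies cited in this article.

```python
# Textbook one-locus heterozygote-advantage model; fitness values are hypothetical.
def next_allele_freq(p, s, t):
    """One generation of selection with genotype fitnesses
    AA = 1 - s, Aa = 1, aa = 1 - t (heterozygote fittest)."""
    q = 1.0 - p
    mean_fitness = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)
    return (p * p * (1 - s) + p * q) / mean_fitness  # frequency of allele A after selection

p, s, t = 0.05, 0.2, 0.3       # allele A starts rare; both homozygotes are less fit than Aa
for _ in range(200):
    p = next_allele_freq(p, s, t)
print(round(p, 3))              # ~0.6 = t / (s + t): neither allele is lost, so the polymorphism persists
```

With these values the frequency converges to the same equilibrium regardless of the starting point, which is the sense in which such a polymorphism is "actively and steadily maintained" by selection rather than being a transient state.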
Epistasis Epistasis occurs when the expression of one gene is modified by another gene. For example, gene A only shows its effect when allele B1 (at another locus) is present, but not if it is absent. This is one of the ways in which two or more genes may combine to produce a coordinated change in more than one characteristic (for instance, in mimicry). Unlike the supergene, epistatic genes do not need to be closely linked or even on the same chromosome. Genetics: Both pleiotropism and epistasis show that a gene need not relate to a character in the simple manner that was once supposed. Genetics: The origin of supergenes Although a polymorphism can be controlled by alleles at a single locus (e.g. human ABO blood groups), the more complex forms are controlled by supergenes consisting of several tightly linked genes on a single chromosome. Batesian mimicry in butterflies and heterostyly in angiosperms are good examples. There is a long-standing debate as to how this situation could have arisen, and the question is not yet resolved. Genetics: Whereas a gene family (several tightly linked genes performing similar or identical functions) arises by duplication of a single original gene, this is usually not the case with supergenes. In a supergene some of the constituent genes have quite distinct functions, so they must have come together under selection. This process might involve suppression of crossing-over, translocation of chromosome fragments and possibly occasional cistron duplication. That crossing-over can be suppressed by selection has been known for many years.Debate has centered round the question of whether the component genes in a super-gene could have started off on separate chromosomes, with subsequent reorganization, or if it is necessary for them to start on the same chromosome. Originally, it was held that chromosome rearrangement would play an important role. This explanation was accepted by E. B. Ford and incorporated into his accounts of ecological genetics.: ch. 6 : 17–25 However, many believe it more likely that the genes start on the same chromosome. They argue that supergenes arose in situ. This is known as Turner's sieve hypothesis. John Maynard Smith agreed with this view in his authoritative textbook, but the question is still not definitively settled. Ecology: Selection, whether natural or artificial, changes the frequency of morphs within a population; this occurs when morphs reproduce with different degrees of success. A genetic (or balanced) polymorphism usually persists over many generations, maintained by two or more opposed and powerful selection pressures. Diver (1929) found banding morphs in Cepaea nemoralis could be seen in prefossil shells going back to the Mesolithic Holocene. Non-human apes have similar blood groups to humans; this strongly suggests that this kind of polymorphism is ancient, at least as far back as the last common ancestor of the apes and man, and possibly even further. Ecology: The relative proportions of the morphs may vary; the actual values are determined by the effective fitness of the morphs at a particular time and place. The mechanism of heterozygote advantage assures the population of some alternative alleles at the locus or loci involved. Only if competing selection disappears will an allele disappear. However, heterozygote advantage is not the only way a polymorphism can be maintained. Apostatic selection, whereby a predator consumes a common morph whilst overlooking rarer morphs is possible and does occur. 
This would tend to preserve rarer morphs from extinction. Ecology: Polymorphism is strongly tied to the adaptation of a species to its environment, which may vary in colour, food supply, and predation and in many other ways including sexual harassment avoidance. Polymorphism is one good way in which such opportunities can be exploited; it has survival value, and the selection of modifier genes may reinforce the polymorphism. In addition, polymorphism seems to be associated with a higher rate of speciation. Ecology: Polymorphism and niche diversity G. Evelyn Hutchinson, a founder of niche research, commented "It is very likely from an ecological point of view that all species, or at least all common species, consist of populations adapted to more than one niche". He gave as examples sexual size dimorphism and mimicry. In many cases where the male is short-lived and smaller than the female, he does not compete with her during her late pre-adult and adult life. Size difference may permit both sexes to exploit different niches. In elaborate cases of mimicry, such as the African butterfly Papilio dardanus, female morphs mimic a range of distasteful models (a case of Batesian mimicry), often in the same region. The fitness of each type of mimic decreases as it becomes more common, so the polymorphism is maintained by frequency-dependent selection. Thus the efficiency of the mimicry is maintained in a much increased total population. However it can exist within one gender.: ch. 13 Female-limited polymorphism and sexual assault avoidance Female-limited polymorphism in Papilio dardanus can be described as an outcome of sexual conflict. Cook et al. (1994) argued that the male-like phenotype in some females of the P. dardanus population on Pemba Island, Tanzania functions to avoid detection by mate-searching males. The researchers found that male mate preference is controlled by frequency-dependent selection, which means that the rare morph suffers fewer mating attempts than the common morph. The reason why females try to avoid male sexual harassment is that male mating attempts can reduce female fitness in many ways, for example by lowering fecundity and longevity. Ecology: The switch The mechanism which decides which of several morphs an individual displays is called the switch. This switch may be genetic, or it may be environmental. Taking sex determination as the example, in humans the determination is genetic, by the XY sex-determination system. In Hymenoptera (ants, bees and wasps), sex determination is by haplo-diploidy: the females are all diploid, the males are haploid. However, in some animals an environmental trigger determines the sex: alligators are a famous case in point. In ants the distinction between workers and guards is environmental, by the feeding of the grubs. Polymorphism with an environmental trigger is called polyphenism. Ecology: The polyphenic system does have a degree of environmental flexibility not present in the genetic polymorphism. However, such environmental triggers are the less common of the two methods. Ecology: Investigative methods Investigation of polymorphism requires use of both field and laboratory techniques.
In the field: detailed survey of occurrence, habits and predation; selection of an ecological area or areas, with well-defined boundaries; capture, mark, release, recapture data; relative numbers and distribution of morphs; and estimation of population sizes. And in the laboratory: genetic data from crosses; population cages; chromosome cytology if possible; and use of chromatography, biochemistry or similar techniques if morphs are cryptic. Without proper field-work, the significance of the polymorphism to the species is uncertain, and without laboratory breeding the genetic basis is obscure. Even with insects, the work may take many years; examples of Batesian mimicry noted in the nineteenth century are still being researched. Relevance for evolutionary theory: Polymorphism was crucial to research in ecological genetics by E. B. Ford and his co-workers from the mid-1920s to the 1970s (similar work continues today, especially on mimicry). The results had a considerable effect on the mid-century evolutionary synthesis, and on present evolutionary theory. The work started at a time when natural selection was largely discounted as the leading mechanism for evolution, continued through the middle period when Sewall Wright's ideas on drift were prominent, to the last quarter of the 20th century when ideas such as Kimura's neutral theory of molecular evolution were given much attention. The significance of the work on ecological genetics is that it has shown how important selection is in the evolution of natural populations, and that selection is a much stronger force than was envisaged even by those population geneticists who believed in its importance, such as Haldane and Fisher. In just a couple of decades the work of Fisher, Ford, Arthur Cain, Philip Sheppard and Cyril Clarke promoted natural selection as the primary explanation of variation in natural populations, instead of genetic drift. Evidence can be seen in Mayr's famous book Animal Species and Evolution, and Ford's Ecological Genetics. Similar shifts in emphasis can be seen in most of the other participants in the evolutionary synthesis, such as Stebbins and Dobzhansky, though the latter was slow to change. Kimura drew a distinction between molecular evolution, which he saw as dominated by selectively neutral mutations, and phenotypic characters, probably dominated by natural selection rather than drift.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Music written in all major and/or minor keys** Music written in all major and/or minor keys: There is a long tradition in classical music of writing music in sets of pieces that cover all the major and minor keys of the chromatic scale. These sets typically consist of 24 pieces, one for each of the major and minor keys (sets that comprise all the enharmonic variants include 30 pieces). Music written in all major and/or minor keys: Examples include Johann Sebastian Bach's The Well-Tempered Clavier and Frédéric Chopin's 24 Preludes, Op. 28. Such sets are often organized as preludes and fugues or designated as preludes or études. Some composers have restricted their sets to cover only the 12 major keys or the 12 minor keys; or only the flat keys (Franz Liszt's Transcendental Études) or the sharp keys (Sergei Lyapunov's Op. 11 set). In yet another type, a single piece may progressively modulate through a set of tonalities, as occurs in Ludwig van Beethoven's Two Preludes through all twelve major keys, Op. 39. Music written in all major and/or minor keys: The bulk of works of this type have been written for piano solo, but there also exist sets for piano 4-hands; two pianos; organ; guitar; two guitars; flute; recorder; oboe; violin solo; violin and piano; cello solo; cello and piano; voice and piano; and string quartet. There are examples of attempts to write full sets that, for one reason or another, were never completed (Josef Rheinberger's organ sonatas, Dmitri Shostakovich's string quartets, César Franck's L'Organiste). Well-known sets that cover all 24 keys: Some examples of well-known works covering all 24 major and minor keys are: Johann Sebastian Bach: The Well-Tempered Clavier, Books I and II (1722 and 1742) – two separate sets of 24 preludes and fugues, together known as "the 48". Frédéric Chopin: 24 Preludes, Op. 28 (1835–39) Franz Liszt: Transcendental Études, S. 139 (1826–52) – It covers the keys with flat signatures only. Liszt originally planned to write the full suite of 24 études, but apparently abandoned this plan. In 1897–1905, Sergei Lyapunov wrote his 12 Études d'exécution transcendante, Op. 11, which covers the remaining sharp keys and is dedicated to Liszt's memory. Well-known sets that cover all 24 keys: Charles-Valentin Alkan: 25 Preludes, Op. 31 (1847); 24 etudes in all the major and minor keys, Opp. 35, 39 (1848 and 1857) Alexander Scriabin: 24 Preludes, Op. 11 (1893–95) – Scriabin wrote a total of 90 preludes for piano (50 in major keys, 31 in minor keys, and 9 in indeterminate keys). These contained only one complete set of preludes in all 24 major and minor keys. Well-known sets that cover all 24 keys: Sergei Rachmaninoff: 24 Preludes, Opp. 3/2, 23, and 32 (1892, 1901–03, and 1910) Paul Hindemith: Ludus Tonalis (1942) – 25 fugues and interludes covering twelve keys, but the major/minor distinction does not apply Dmitri Shostakovich: 24 Preludes and Fugues, Op. 87 (1950–51) – Shostakovich also wrote a separate set of 24 Preludes, Op. 34 in 1933. Well-known sets that cover all 24 keys: Composers who wrote multiple sets A number of composers have written multiple sets of works covering all the keys of the scale. 14 sets: Niels Viggo Bentzon (1919-2000) wrote 14 complete sets of 24 Preludes and Fugues, a total of 336 pieces in this genre alone. The contemporary composer Roberto Novegno (b. 1981) has also written 14 complete sets of preludes so far, each using a different sequence of keys.
Well-known sets that cover all 24 keys: 8 sets: Carl Czerny 5 sets: Charles-Valentin Alkan 4 sets: Richard Hofmann, Franciszek Zachara 3 sets: Joachim Andersen, Adolf von Henselt, Charles Koechlin, Christian Heinrich Rinck 2 sets: Lera Auerbach, David Cope, Johann Baptist Cramer, Ferdinand David, Hans Gál, Johann Nepomuk Hummel, Aaron Andrew Hunt, Friedrich Kalkbrenner, Nikolai Kapustin, Joseph Christoph Kessler, Craig Sellar Lang, Trygve Madsen, Désiré Magnus, Ignaz Moscheles, Rob Peters, Jaan Rääts, Igor Rekhin, Dmitri Shostakovich (he also left unfinished a set of string quartets in all keys), Sir Charles Villiers Stanford, Sulkhan Tsintsadze, Louis Vierne, Vsevolod Zaderatsky other: Franz Liszt wrote only one set of his own, but he also transcribed for piano solo a set for violin and piano by Ferdinand David. Josef Rheinberger wrote one full set, and died before completing a further set of organ sonatas.Full details are shown in the tables below. Variants: Single pieces that modulate through many keys Ludwig van Beethoven wrote 2 Preludes through all 12 Major Keys, Op. 39 for piano (1789). These two preludes each progressively traverse the 12 major keys. In Prelude No. 1, each key occupies from 2 to 26 bars. The keys of C# and D♭, which are enharmonically equivalent, are both represented. C major both opens and closes the set. In Prelude No. 2, the cycle of keys appears twice; in the first cycle, the number of bars per key ranges from 1 to 8; in the second half, after C every new key signature lasts for only one bar; the cycle concludes with 15 bars of C major. There is no evidence that Beethoven intended to write similar sets in the 12 minor keys. Variants: Giovanni Battista Vitali (1632–1692) included in Artificii musicali, Op. 13 (1689) a passacaglia which modulates through eight major keys (out of twelve) from E♭ major to E major through the cycle of fifths. Fugue No. 8 from Anton Reicha's Trente six Fugues pour le Piano-Forté composées d'après un nouveau systême (subtitled Cercle harmonique) modulates through all keys. The rondo theme of Darius Milhaud's Le bœuf sur le toit is played fifteen times in all 12 major keys (twice in A major and thrice in the tonic, C major). It also passes through every minor key except E minor and B minor. Works covering all eight church modes Around 1704, Johann Pachelbel completed his 95 Magnificat Fugues, which covered all eight of the church modes. Charles-Valentin Alkan composed Petits préludes sur les huit gammes du plain-chant, for organ (1859, no opus number), a sequence of eight organ preludes covering each of the church modes. In the music of the Eastern Orthodox Church, the doxasticon for Vespers of the Dormition is notable as a single hymn that includes passages in all eight tones of the Byzantine Octoechos. Variants: Other sets of 24 pieces Not all sets of 24 pieces belong in this category. For example, there was no intention in Niccolò Paganini's 24 Caprices for solo violin, Claude Debussy's 24 Préludes for piano, or Pavel Zemek Novak's 24 Preludes and Fugues for piano to cover all the keys. (Paganini may not have been aware of Pierre Rode's 24 Caprices for violin, which did span the 24 keys and were written almost at the same time as Paganini's.) Chopin's 24 Études, Opp. 10 & 25 might have originally been planned to be in all 24 keys. In fact, apart from Nos. 7 and 8, the first series (Op. 
10) is made of couples of études in a major key and its relative minor (the major key either preceding the minor key or following it) with none of the tonalities occurring twice (except for C major, which appears in No. 1 and then in the only couple which is not major-minor, i.e. Nos. 7 and 8). But in the second series (Op. 25) this tonal scheme gets more and more loose. It is still possible to see connections on a tonal basis between the couples of études in Op. 25, but they are not based on one principle (e.g. Nos. 3 and 4 in F major – A minor, two tonalities which Chopin likes to put together very often, as in his second Ballade). One might suppose that Chopin considered writing the études in all the tonalities but eventually came to the conclusion that it wasn't practical, and turned back to it later, for the 24 Preludes, Op. 28. The fact that the first étude of Op. 10 is made of arpeggios in C major draws a connection to Bach's first book of The Well-Tempered Clavier and makes it clear that Chopin had the tradition on his mind. Keys: There are 12 notes in the octave, and each of them can be the tonic of one major and one minor key. This gives 24 possible keys, but each note can be represented by several enharmonic note names (note names which designate the same actual note in the 12 note octave such as G# and A♭) and so each key can be represented by several enharmonic key names (e.g. G# minor and A♭ minor). Keys: In practice, the choice of key name is restricted to the 30 keys whose signatures have no double flats or double sharps. (Such key signatures are used for so-called theoretical keys which are almost never encountered outside music-theoretical exercises.) Keys with 6 flats and 6 sharps, with 7 flats and 5 sharps and with 5 flats and 7 sharps are enharmonic to one another. Composers will, in most (though not all) cases, choose only one key from each enharmonic pair. But there are also cases of sets covering all 30 keys, which, in other words, include all enharmonic variants. Keys: The table below outlines the choices made in the various collections listed here. The keys are in the order that J.S. Bach used. Keys: Order of keys in published works The circle of fifths, whereby each major key is followed by its relative minor key, and the sequence proceeds in fifths (C, a, G, e, D, b ...) is a commonly used schema. Angelo Michele Bartolotti used this approach as early as 1640, and it was also adopted by such later composers as Rode, Hummel, Chopin, Heller, Busoni, Scriabin, Shostakovich, Kabalevsky and Kapustin. Keys: In J.S. Bach's The Well-Tempered Clavier and some other earlier sets, major keys were followed by their parallel minor keys, and the sequence ascends chromatically (C, c, C#/D♭, c#, D, d, ...). The Bach order was adopted by Arensky, Glière, York Bowen and others. Keys: Other composers derived their own schemas based on certain logical rationales. For example, in Alkan’s 25 Preludes, Op. 31, the sequence of keys moves alternately up a fourth and down a third: the major keys take the odd-numbered positions in the cycle, proceeding chromatically upwards from C to C again, and each major key is followed by its subdominant minor. Keys: Yet others used no systematic ordering. Palmgren, Rachmaninoff and Castelnuovo-Tedesco's works are examples of this. 
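The two ordering schemas that dominate the published sets described above, the circle of fifths with relative minors and Bach's ascending chromatic order with parallel minors, can be generated mechanically. The sketch below is only an illustration: it uses uppercase letters for major keys and lowercase for minor, and fixes one arbitrary sharp-side enharmonic spelling per pitch, whereas, as noted above, composers actually choose among 30 enharmonic key names.

```python
# Generate the 24 keys in two common orderings (illustrative sketch).
# Spellings are fixed to sharps for simplicity; real sets choose among 30 enharmonic names.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circle_of_fifths():
    """Each major key followed by its relative minor, rising by fifths: C, a, G, e, D, b, ..."""
    keys = []
    for step in range(12):
        major = NOTES[(7 * step) % 12]               # tonic rises a perfect fifth each step
        relative_minor = NOTES[(7 * step + 9) % 12]  # relative minor sits a minor third below
        keys += [major, relative_minor.lower()]
    return keys

def ascending_chromatic():
    """Bach's order: each major key followed by its parallel minor: C, c, C#, c#, D, d, ..."""
    return [name for note in NOTES for name in (note, note.lower())]

print(circle_of_fifths()[:6])     # ['C', 'a', 'G', 'e', 'D', 'b']
print(ascending_chromatic()[:6])  # ['C', 'c', 'C#', 'c#', 'D', 'd']
```

A scheme such as Alkan's Op. 31, described above as moving alternately up a fourth and down a third with each major key followed by its subdominant minor, could in principle be generated the same way by changing the interval arithmetic.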
History: Bach and his precursors Johann Sebastian Bach's The Well-Tempered Clavier, two complete sets of 24 Preludes and Fugues written for keyboard in 1722 and 1742, and often known as "the 48", is generally considered the greatest example of music traversing all 24 keys. Many later composers clearly modelled their sets on Bach's, including the order of the keys. History: It was long believed that Bach had taken the title The Well-Tempered Clavier from a similarly named set of 24 Preludes and Fugues in all the keys, for which a manuscript dated 1689 was found in the library of the Brussels Conservatoire. It was later shown that this was the work of a composer who was not even born by 1689: Bernhard Christian Weber (1712–1758). In fact, the work was written in 1745–50 in imitation of Bach's example. While Bach can safely claim the title The Well-Tempered Clavier, he was not the earliest composer to write sets of pieces in all the keys: As early as 1567, Giacomo Gorzanis (c.1520–c.1577) composed twelve settings of the passamezzo antico and passamezzo moderno, each followed by a saltarello, in all 24 keys. In 1584, Vincenzo Galilei, father of Galileo Galilei, wrote a Codex of pieces illustrating the use of all 24 major and minor keys.In 1640, Angelo Michele Bartolotti wrote Libro primo di chitarra spagnola, a cycle of passacaglias that moves through all 24 major and minor keys according to the circle of fifths. Also in 1640, Antonio Carbonchi wrote Sonate di chitarra spagnola con intavolatura franzese for guitar.In 1702, Johann Caspar Ferdinand Fischer wrote a cycle of 20 organ pieces all in different keys in his Ariadne musica. These included E major as well as E in Phrygian mode and again in Dorian mode, but not E minor per se. They also excluded C#/D♭ major, D#/E♭ minor, F#/G♭ major, G#/A♭ minor, and A#/B♭ minor. Bach modelled the sequence of his 48 Preludes on Fischer's example.In 1735, between Bach's two sets, Johann Christian Schickhardt wrote his L'alphabet de la musique, Op. 30, which contained 24 sonatas for flute, violin, or recorder in all keys. In 1749, the year before Bach's death, Johann Gottlieb Goldberg, the inspiration for J.S. Bach's Goldberg Variations, wrote his own 24 polonaises for keyboard, one in each of the major and minor keys. Other examples include works by John Wilson (1595–1674), Daniel Croner (1682), Christoph Graupner (1718), Johann Mattheson (1719), Friedrich Suppig (1722), and Johann David Heinichen (1683–1729). History: After Bach The following is an incomplete list of works of this type that have been written since the death of J.S. Bach. Legend: AC = ascending chromatic; C5 = circle of fifths, major followed by relative minor; C5* = circle of fifths, major followed by parallel minor 18th and 19th centuries 1750-1850 1851-1900 20th century 1901-1950 1951-2000 21st century
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bacillary angiomatosis** Bacillary angiomatosis: Bacillary angiomatosis (BA) is a form of angiomatosis associated with bacteria of the genus Bartonella. Symptoms: Cutaneous BA is characterised by the presence of lesions on or under the skin. Appearing in numbers from one to hundreds, these lesions may take several forms: papules or nodules which are red, globular and non-blanching, with a vascular appearance; purplish nodules sufficiently similar to Kaposi's sarcoma that a biopsy may be required to verify which of the two it is; a purplish lichenoid plaque; or a subcutaneous nodule which may have ulceration, similar to a bacterial abscess. While cutaneous BA is the most common form, it can also affect several other parts of the body, such as the brain, bone, bone marrow, lymph nodes, gastrointestinal tract, respiratory tract, spleen, and liver. Symptoms vary depending on which parts of the body are affected; for example, those whose livers are affected may have an enlarged liver and fever, while those with osseous BA experience intense pain in the affected area. Symptoms: Presentation BA is characterised by the proliferation of blood vessels, resulting in them forming tumour-like masses in the skin and other organs. Causes: It is caused by either Bartonella henselae or B. quintana. B. henselae is most often transmitted through a cat scratch or bite, though ticks and fleas may also act as vectors. B. quintana is usually transmitted by lice. It can manifest in people with AIDS and rarely appears in those who are immunocompetent. Diagnosis: Diagnosis is based on a combination of clinical features and biopsy, which typically shows a neutrophilic infiltrate. Treatment and prevention: While curable, BA is potentially fatal if not treated. BA responds dramatically to several antibiotics. Usually, erythromycin will cause the skin lesions to gradually fade away over the next four weeks, resulting in complete recovery. Doxycycline may also be used. However, if the infection does not respond to either of these, the medication is usually changed to tetracycline. If the infection is serious, then a bactericidal medication may be coupled with the antibiotics. If a cat is carrying Bartonella henselae, it may not exhibit any symptoms. Cats may be bacteremic for weeks to years, but infection is more common in young cats. Transmission to humans is thought to occur via flea feces inoculated into a cat scratch or bite, and transmission between cats occurs only in the presence of fleas. Therefore, elimination and control of fleas in the cat's environment are key to prevention of infection in both cats and humans. History: The condition that later became known as bacillary angiomatosis was first described by Stoler and associates in 1983. Because its infectious origin was not yet recognised, it was originally called epithelioid angiomatosis. Following documentation of bacilli in Warthin-Starry stains and by electron microscopy in a series of cases by LeBoit and colleagues, the term bacillary angiomatosis was widely adopted.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital empathy** Digital empathy: Digital empathy is the application of the core principles of empathy – compassion, cognition, and emotion – into technical designs to enhance user experience. According to Friesem (2016), digital empathy is the cognitive and emotional ability to be reflective and socially responsible while strategically using digital media. Background: Digital empathy finds its roots in empathy, a human behaviour explained by cognitive and behavioral neuroscientists as, “a multifaceted construct used to account for the capacity to share and understand the thoughts and feelings of others." The neurological basis for empathy lies in mirror neurons, where perception and imitation facilitate empathy.At the centre of empathy creation is communication. As communication became increasingly online due to the rapid adoption of digital communications technology in the 1990s through the 2000s, society’s communication patterns altered rapidly, both positively and negatively. Technology usage has transformed human interactions into digital conversations where people now have the ability to instantly share thoughts, feelings, and behaviours via digital channels in a few seconds. It has been observed and researched that digital conversations threaten the appropriate expression of empathy, largely as a result of the “online disinhibition effect”. Psychologist Dr. John Suler defines the online disinhibition effect as the tendency for “ people say and do things in cyberspace that they wouldn’t ordinarily say and do in the face-to-face world”. Research has shown that the shift away from face-to-face communication has caused a decline in the social-emotional skills of youth and suggest that "generations raised on technology" are becoming less empathic. Digital Empathy: Increasingly online communication patterns, and the associated phenomenon of online disinhibition, have led to research on "digital empathy". Christopher Terry and Cain (2015) in their research paper “The Emerging Issue of Digital Empathy” define digital empathy as the “traditional empathic characteristics such as concern and caring for others expressed through computer-mediated communications.” Yonty Friesem (2016) wrote that “digital empathy seeks to expand our thinking about traditional empathy phenomena into the digital arena."In the handbook of research on media literacy in the digital age, Friesem (2015) further elaborates on this concept by stating that, “digital empathy explores the ability to analyze and evaluate another’s internal state (empathy accuracy), have a sense of identity and agency (self-empathy), recognize, understand and predict other’s thoughts and emotions (cognitive empathy), feel what others feel (affective empathy), role play (imaginative empathy), and be compassionate to others (empathic concern) via digital media." Applications: Digital empathy is used in DEX (Digital employee experience), healthcare and education. In healthcare, traditional empathetic characteristics can be understood as a physician’s ability to understand the patient's experience and feelings, communicate and check their understanding of the situation, and act helpfully. According to Nina Margarita Gonzalez, digitally empathetic tools in healthcare should meet these conditions of physician empathy by designing empathetic tools through 3 key steps: understand, check, and act. 
By ensuring that the digital tool understands the patient experience, the tool can act through “automated empathy” to provide validating statements or tips. For example, The National Cancer Institute created texting programs that collected information on user’s smoking cessation efforts and provided validation or tips to support them, such as, “We know how you are feeling. Think about what you are gaining and why you want to quit smoking.” New health communications technology and telehealth makes clear the need for medical practitioners to recognize and adapt to online disinhibition and the lack of nonverbal cues. The University of the Highlands and Islands Experience Lab completed a study on empathy in video conferencing consultations with diabetes patients. It found that many factors impact the level of perceived empathy in a video conferencing consultation, including clarity of verbal communication, choice in pathways of care, and preparation and access to information before the consultation. Given the particular challenge that the online disinhibition effect poses to telehealth or other digital health communications, Terry and Cain suggest that, for physicians to effectively communicate empathy through digitally-mediated interactions, they must be taught traditional empathy more broadly, as the foundational principles are the same.In education, researchers are often concerned with how to use digital technologies to teach empathy and how to teach students to be empathetic when using digital platforms. In “Empathy for the Digital Age”, Yonty Friesem (2016) found that empathy can be taught to youth through video production, where the majority of students experienced higher levels of empathy after writing, producing, creating, and screening their videos in a project designed to foster empathy. Cheryl Wei-yu Chen similarly found that video projects can help youth develop awareness of empathy in digitally-mediated interactions. In their research, Friesem and Greene (2020) used digital empathy to promote digital and media literacy skills of foster youth. Practicing cognitive, emotional and social empathy through digital media have been effective to support not only the academic skills of the foster youth, but also their well being, sense of belonging and media literacy skills. As an important practice of digital literacy and media education, digital empathy is an inclusive and collaborative experience as students learn to produce their own media.Digital Employee Experience is a relatively new (circa 2019) field of IT. It focuses on ensuring employees have the best possible "Digital" experience when using technology in work. DEX market leaders 1E use concepts of Digital Empathy in the implementation of their DEX platform. They describe Digital Empathy as the solution to "IT Friction" which occurs when devices, applications and systems do not work or react as users expect them to, causing user frustration, increasing IT costs and loss of productivity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tridecane** Tridecane: Tridecane or n-tridecane is an alkane with the chemical formula CH3(CH2)11CH3. Tridecane is a combustible, colourless liquid. In industry, it has no specific value aside from being a component of various fuels and solvents. In the research laboratory, tridecane is also used as a distillation chaser. Natural occurrence: Nymphs of the southern green shield bug produce tridecane as a dispersion/aggregation pheromone, which possibly serves as a defense against predators. It is also the main component of the defensive fluid produced by the stink bug Cosmopepla bimaculata.
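As a quick illustrative check (not taken from the source), the condensed formula above expands to C13H28, and the molar mass follows from standard atomic masses:

```python
# Illustrative check (not from the source): expand CH3(CH2)11CH3 to C13H28 and
# compute the molar mass from standard atomic masses.
ATOMIC_MASS = {"C": 12.011, "H": 1.008}  # g/mol

carbons = 1 + 11 + 1          # CH3 + (CH2) x 11 + CH3
hydrogens = 3 + 2 * 11 + 3    # consistent with the alkane rule CnH2n+2 for n = 13

molar_mass = carbons * ATOMIC_MASS["C"] + hydrogens * ATOMIC_MASS["H"]
print(f"C{carbons}H{hydrogens}: {molar_mass:.2f} g/mol")  # C13H28: 184.37 g/mol
```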
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spoon rest** Spoon rest: A spoon rest (also known as a dublé) is a piece of kitchenware that serves as a place to lay spoons and other cooking utensils, to prevent cooking fluids from getting onto countertops, as well as to keep the spoon from touching any contaminants that might be on the counter (the rest is easier to keep clean). A typical design of a spoon rest is that of an "oversized spoon" with a shallow bowl and a notch on one side, or that of an oversized ladle on feet (a so-called ladle rest). The rests are made of many materials, including wood, plastic, ceramic, and stainless steel.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Calorimetric Electron Telescope** Calorimetric Electron Telescope: The CALorimetric Electron Telescope (CALET) is a space telescope being mainly used to perform high precision observations of electrons and gamma rays. It tracks the trajectory of electrons, protons, nuclei, and gamma rays and measures their direction, charge and energy, which may help understand the nature of dark matter or nearby sources of high-energy particle acceleration. The mission was developed and sponsored by the Japan Aerospace Exploration Agency (JAXA), involving teams from Japan, Italy, and the United States. CALET was launched aboard JAXA's H-II Transfer Vehicle Kounotori 5 (HTV-5) on 19 August 2015, and was placed on the International Space Station's Japanese Kibo module. Overview: CALET is an astrophysics mission that searches for signatures of dark matter and provides the highest energy direct measurements of the cosmic ray electron spectrum in order to observe discrete sources of high-energy particle acceleration in our local region of the galaxy. The mission was developed and sponsored by the Japan Aerospace Exploration Agency (JAXA), involving teams from Japan, Italy, and the United States. It seeks to understand the mechanisms of particle acceleration and propagation of cosmic rays in our galaxy, to identify their sources of acceleration, their elemental composition as a function of energy, and possibly to unveil the nature of dark matter. Such sources seem to be able to accelerate particles to energies far higher than scientists can achieve on Earth using the largest accelerators. Understanding how nature does this is important to space travel and has possible applications here on Earth. The CALET Principal Investigator is Shoji Torii from Waseda University, Japan; John Wefel is the co-principal investigator for the US team; Pier S. Marrocchesi is the co-investigator from the Italian team. Overview: Unlike optical telescopes, CALET operates in a scanning mode. It records each cosmic ray event that enters its field of view and triggers its detectors to take measurements of the cosmic ray in the extremely high energy region of teraelectronvolts (TeV, one trillion electronvolts). These measurements are recorded on the space station and sent to a ground station at Waseda University for analyses. CALET may also yield evidence of rare interactions between matter and dark matter by working in synergy with the Alpha Magnetic Spectrometer (AMS) – also aboard the ISS – that is looking at positrons and antiprotons to identify dark matter. Observations will be carried out for more than 5 years. CALET contains a sub-payload CIRC (Compact Infrared Camera) to observe the Earth's surface in order to detect forest fires. Objectives: The objectives are to understand the following: the origin and mechanisms of acceleration of high-energy cosmic rays and gamma rays; the propagation mechanism of cosmic rays throughout the Galaxy; and the identity of dark matter. As a cosmic ray observatory, CALET aims to clarify high-energy space phenomena and dark matter from two perspectives: one is particle creation and annihilation in the field of particle physics (or nuclear physics) and the other is particle acceleration and propagation in the field of space physics. Results: CALET first published data on half a million electron and positron cosmic ray events in 2017, finding a spectral index of −3.152 ± 0.016 above 30 GeV.
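For clarity, the spectral index quoted above is, in the usual convention, the exponent of a power-law fit to the differential flux; this standard definition is stated here for the reader rather than taken from the CALET publication itself.

```latex
\frac{dN}{dE} \propto E^{\gamma}, \qquad \gamma = -3.152 \pm 0.016 \quad \text{for } E > 30\ \mathrm{GeV}
```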
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**XPL Protocol** XPL Protocol: xPL is an open protocol intended to permit the control and monitoring of home automation devices. The primary design goal of xPL is to provide a rich set of features and functionality, whilst maintaining an elegant, uncomplicated message structure. The protocol includes complete discovery and auto-configuration capabilities which support a fully "plug-n-play" architecture - essential to ensure a good end-user experience. XPL Protocol: xPL benefits from a strongly specified message structure, required to ensure that xPL-enabled devices from different vendors are able to communicate without the risk of incompatibilities. Communications between xPL applications on a Local Area Network (LAN) use UDP on port 3865. xPL development has primarily occurred in the DIY community, where users have written connecting software to existing protocols and devices. Some examples include bridges to other home automation protocols like Z-Wave and UPB. Commercially, the Logitech SqueezeCenter software for the Squeezebox supports xPL. Architecture: Different devices communicate using xPL within a local network. They all broadcast their messages on the IANA-registered UDP port 3865 for the other devices to handle. As only one program can listen on a given port on modern operating systems, there is a need for a hub that forwards the messages to all devices on the same machine. The devices register with the hub on a private UDP port and the hub then forwards all incoming messages to these private ports. HUB A hub is the first xPL component required on a machine running xPL devices. All devices send a heartbeat message to the hub on a regular basis (typically every 5 minutes). When disconnecting, they can also send a special heartbeat end message so that the hub removes them from its list. The hub forwards all messages to every device in its list. There is no filtering of messages: a blind redistribution of all messages is carried out. XPL device Applications add functionality to a home automation solution, such as light control, sunrise/sunset, weather information and so on. A device chooses a free UDP port and sends heartbeat messages from that port to the hub on the IANA-registered UDP port 3865. From that time, the device listens for messages on its private port but sends messages as broadcasts on the xPL port 3865. The message type is one of the following: command, targeted to control other devices; status, generally sent as an answer to a command; and trigger, used to notify a change in a device's state. An extensive list of applications can be downloaded from the net. Toolkits are also provided for users wishing to develop their own devices. Bridge It is assumed that your network protocol is UDP/IP but this is by no means a requirement. If you wish for your xPL message to cross from one transport medium to another (UDP/IP to RS232 for example) then you will need a Bridge. Rules On Windows, xPL HAL processes incoming xPL messages and executes scripts to perform a wide variety of tasks. Configuration is done either through a Windows-based Manager or via a browser. xPL HAL also includes an xPL Configuration Manager. On Linux or Mac OS, xpl-central monitors all xPL messages and can trigger other messages based on a set of rules stored in an XML file. Transmission media: The xPL protocol can operate over a variety of transmission media, including Ethernet, RS232 and RS485. Ethernet All xPL devices broadcast their messages over UDP, on IANA-registered port 3865.
But, as only one application can listen at a time to a given port, the xPL protocol uses a hub to retransmit all broadcast messages to the different applications on the same machine. The applications subscribe to the hub on a free port by sending heartbeat messages which specify the port they are listening on. In turn, the hub forwards all xPL broadcast messages it receives to every application in its list. Protocol: Lite on the wire, by design Example xPL Messages are line based, with each line ending with a linefeed (ASCII: 10 decimal) character. Protocol: The following is an example of a typical xPL Message: xpl-cmnd { hop=1 source=xpl-xplhal.myhouse target=acme-cm12.server } x10.basic { command=dim device=a1 level=75 } Message Structure All messages are made up of: the message type (xpl-cmnd, xpl-stat or xpl-trig); the header block, inside curly braces, containing hop=n (the hop count, which is incremented each time the xPL message is transferred from one physical network to another), source=vendor_id-device_id.instance_id (which identifies the sender of the message) and target=vendor_id-device_id.instance_id (which identifies the destination of the message); the message schema, in the format class.type; and the message body, inside curly braces, containing name=value pairs. In the header block, the target name is replaced by the wildcard symbol "*" for broadcast messages. Protocol: This is the case for trigger and status messages. Message Schema xPL uses well-defined message schemas to ensure that applications from different vendors can interact sensibly. Message schemas are extensible, and define not only the elements which should be present in a message, but also the order in which they appear. This allows simple devices and applications to parse messages more easily. All of the existing message schemas can be found on the xPL project home page. Developers looking to create a new schema are invited to do so.
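As a minimal sketch of the wire format described above, the following reproduces the example message and broadcasts it on UDP port 3865; the hub, heartbeat handling and schema validation that a real device needs are omitted, and the socket handling is illustrative rather than taken from the xPL specification.

```python
# Minimal sketch of building and broadcasting the example xPL message shown
# above. A real device would also run (or rely on) a hub, send periodic
# heartbeats, and listen for replies on its own private UDP port.
import socket

XPL_PORT = 3865  # IANA-registered xPL UDP port

def build_xpl_message(msg_type, source, target, schema, body):
    """Assemble the line-based wire format: type, header block, schema, body block."""
    lines = [msg_type, "{", "hop=1", f"source={source}", f"target={target}", "}", schema, "{"]
    lines += [f"{name}={value}" for name, value in body.items()]
    lines.append("}")
    return "\n".join(lines) + "\n"  # each line ends with a linefeed

msg = build_xpl_message(
    "xpl-cmnd",
    source="xpl-xplhal.myhouse",
    target="acme-cm12.server",
    schema="x10.basic",
    body={"command": "dim", "device": "a1", "level": "75"},
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(msg.encode("ascii"), ("255.255.255.255", XPL_PORT))
```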
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Knee Surgery, Sports Traumatology, Arthroscopy** Knee Surgery, Sports Traumatology, Arthroscopy: Knee Surgery, Sports Traumatology, Arthroscopy is a monthly peer-reviewed medical journal published in English covering orthopaedic surgery, especially as related to sports trauma and surgery, in particular arthroscopy. Knee Surgery, Sports Traumatology, Arthroscopy: The journal is the official journal of the European Society of Sports Traumatology, Knee Surgery and Arthroscopy. It was established in 1992 with Ejnar Eriksson as founding editor-in-chief for the first 16 years. He was succeeded by Jon Karlsson (Gothenburg University) and René Verdonk (Ghent University) in 2008. In 2012, Verdonk became Senior Editor and Jon Karlsson the sole editor-in-chief. While the journal was originally published as three relatively thin issues in 1993, its publication frequency increased gradually to the current 12 issues per year with about 300 pages per issue. Abstracting and indexing: The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2014 impact factor of 3.053, ranking it 7th out of 72 journals in the category "Orthopaedics", 9th out of 81 journals in the category "Sport Sciences", and 31st out of 198 journals in the category "Surgery".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Project management triangle** Project management triangle: The project management triangle (called also the triple constraint, iron triangle and project triangle) is a model of the constraints of project management. While its origins are unclear, it has been used since at least the 1950s. It contends that: The quality of work is constrained by the project's budget, deadlines and scope (features). The project manager can trade between constraints. Changes in one constraint necessitate changes in others to compensate or quality will suffer.For example, a project can be completed faster by increasing budget or cutting scope. Similarly, increasing scope may require equivalent increases in budget and schedule. Cutting budget without adjusting schedule or scope will lead to lower quality. Project management triangle: "Good, fast, cheap. Choose two." as stated in the Common Law of Business Balance (often expressed as "You get what you pay for.") which is attributed to John Ruskin but without any evidence and similar statements are often used to encapsulate the triangle's constraints concisely. Martin Barnes (1968) proposed a project cost model based on cost, time and resources (CTR) in his PhD thesis and in 1969, he designed a course entitled "Time and Cost in Contract Control" in which he drew a triangle with each apex representing cost, time and quality (CTQ). Later, he expanded quality with performance, becoming CTQ. It is understood that the area of the triangle represents the scope of a project which is fixed and known for a fixed cost and time. In fact the scope can be a function of cost, time and performance, requiring a trade off among the factors. Project management triangle: In practice, however, trading between constraints is not always possible. For example, throwing money (and people) at a fully staffed project can slow it down. Moreover, in poorly run projects it is often impossible to improve budget, schedule or scope without adversely affecting quality. Overview: The time constraint refers to the amount of time available to complete a project. The cost constraint refers to the budgeted amount available for the project. The scope constraint refers to what must be done to produce the project's end result. These three constraints are often competing constraints: increased scope typically means increased time and increased cost, a tight time constraint could mean increased costs and reduced scope, and a tight budget could mean increased time and reduced scope. Overview: The discipline of project management is about providing the tools and techniques that enable the project team (not just the project manager) to organize their work to meet these constraints. Overview: Another approach to project management is to consider the three constraints as finance, time and human resources. If you need to finish a job in a shorter time, you can throw more people at the problem, which in turn will raise the cost of the project, unless by doing this task quicker we will reduce costs elsewhere in the project by an equal amount. Overview: As a project management graphic aid, a triangle can show time, resources, and technical objective as the sides of a triangle, instead of the corners. 
John Storck, a former instructor of the American Management Association's "Basic Project Management" course, used a pair of triangles called triangle outer and triangle inner to represent the concept that the intent of a project is to complete on or before the allowed time, on or under budget, and to meet or exceed the required scope. The distance between the inner and outer triangles illustrated the hedge or contingency for each of the three elements. Bias could be shown by the distance. His example of a project with a strong time bias was the Alaska pipeline which essentially had to be done on time no matter the cost. After years of development, oil flowed out the end of the pipe within four minutes of schedule. In this illustration, the time side of triangle inner was effectively on top of the triangle outer line. This was true of the technical objective line also. The cost line of triangle inner, however, was outside since the project ran significantly over budget. Overview: James P. Lewis suggests that project scope represents the area of the triangle, and can be chosen as a variable to achieve project success. He calls this relationship PCTS (Performance, Cost, Time, Scope), and suggests that a project can pick any three. Overview: The real value of the project triangle is to show the complexity that is present in any project. The plane area of the triangle represents the near infinite variations of priorities that could exist between the three competing values. By acknowledging the limitless variety, possible within the triangle, using this graphic aid can facilitate better project decisions and planning and ensure alignment among team members and the project owners. STR Model: The STR model is a mathematical model which views the "triangle model" as a graphic abstraction of the relationship: Scope refers to complexity (which can also mean quality or performance). Resources includes humans (workers), financial, and physical. Note that these values are not considered unbounded. For instance, if one baker can make a loaf of bread in an hour in an oven, that does not mean that ten bakers could make ten loaves in one hour in the same oven, due to the oven's limited capacity. Project management triangle topics: Time For analytical purposes, the time required to produce a deliverable is estimated using several techniques. One method is to identify tasks needed to produce the deliverables documented in a work breakdown structure or WBS. The work effort for each task is estimated and those estimates are rolled up into the final deliverable estimate. The tasks are also prioritized, dependencies between tasks are identified, and this information is documented in a project schedule. The dependencies between the tasks can affect the length of the overall project (dependency constrained), as can the availability of resources (resource constrained). Time is different from all other resources and cost categories. Using actual cost of previous, similar projects as the basis for estimating the cost of current project. 
Project management triangle topics: According to the Project Management Body of Knowledge (PMBOK) the Project Time Management processes include: Plan Schedule Management Define Activities Sequence Activities Estimate Activity Resources Estimate Activity Durations Develop Schedule Control Schedule Define Activities Inputs: Management Plan, Scope Baseline, Enterprise environmental factors, Organizational process assets Tools: Decomposition, Rolling Wave Planning, Expert Judgment Outputs: Activity list, Activity attributes, Milestone list Activity sequencing Inputs: Project Scope Statement, Activity List, Activity Attributes, Milestones List, Approved change requests Tools: Precedence Diagramming Method (PDM), Arrow Diagramming Method (ADM), Schedule Network templates, dependency degeneration, applying leads and lags Outputs: Project Schedule Network diagrams, Activity List Updates, Activity Attributes updates, Request Changes Activity resource estimating Inputs: Enterprise Environmental factoring, Organizational process assets, Activity list, Activity attributes, Resources Availability, Project Management Plan Tools: Expert Judgment Collections, Alternative Analysis, Publishing estimating data, Project management software implementation, Bottom up estimating Outputs: Activity resource requirements, Activity attributes, Resource breakdown structure, resource calendars, request change updates. Project management triangle topics: Activity duration estimating Inputs: Enterprise environmental factors, organization process assets, Project scope statement, activity list, activity attributes, activity resource requirements, resource calendars, project management plan, risk register, activity cost estimates Tools: Expert judgment collection, analogous estimating, parametric estimating, Bottom up Estimation, Two-Point estimation, Three-point estimation, reserve analysis Outputs: Activity duration estimates, activity attribute updates and estimates Schedule development Inputs: Organizational process assets, Project scope Statement, Activity list, Activity attributes, project Schedule Network diagrams, Activity resource requirements, Resource calendars, Activity duration estimates, project management plan, risk register Tools: Schedule Network Analysis, Critical path method, schedule compression, what if scenario analysis, resources leveling, critical chain method, project management software, applying calendars, adjusting leads and lags, schedule model Outputs: Project schedule, Schedule model data, schedule baseline, resource requirements update, activity attributes, project calendar updates, request changes, project management plan updates, schedule management plan updates Schedule control Inputs: Schedule management plan, schedule baseline, performance reports, approved change requests Tools: Progressive elaboration reporting, schedule change control system, performance measurement, project management software, variance, analysis, schedule comparison bar charts Outputs: Schedule model data updates, schedule baseline. performance measurement, requested changes, recommended corrective actions, organizational process assets, activity list updates, activity attribute updates, project management plan updatesDue to the complex nature of the 'Time' process group the project management credential PMI Scheduling Professional (PMI-SP) was created. 
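Of the duration-estimating tools listed above, three-point estimation is the easiest to illustrate; the sketch below uses the common PERT weighted average, with an invented example activity (a generic illustration, not a formula quoted from the PMBOK text above).

```python
# Generic PERT form of the three-point estimate named above; the activity and
# its optimistic / most likely / pessimistic durations are invented figures.
def three_point_estimate(optimistic, most_likely, pessimistic):
    """Weighted average that pulls the estimate toward the most likely duration."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

print(three_point_estimate(3, 5, 10))  # e.g. "install test rig": 5.5 days
```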
Project management triangle topics: Cost To develop an approximation of a project cost depends on several variables including: resources, work packages such as labor rates and mitigating or controlling influencing factors that create cost variances. Tools used in cost are, risk management, cost contingency, cost escalation, and indirect costs. But beyond this basic accounting approach to fixed and variable costs, the economic cost that must be considered includes worker skill and productivity which is calculated using various project cost estimate tools. This is important when companies hire temporary or contract employees or outsource work. Project management triangle topics: Cost Process Areas Cost Estimating is an approximation of the cost of all resources needed to complete activities. Cost budgeting aggregating the estimated costs of resources, work packages and activities to establish a cost baseline. Cost Control – factors that create cost fluctuation and variance can be influenced and controlled using various cost management tools. Project Management Cost Estimating Tools Analogous Estimating: Using the cost of similar project to determine the cost of the current project Determining Resource Cost rates: The cost of goods and labor by unit gathered through estimates or estimation. Bottom Up estimating: Using the lowest level of work package detail and summarizing the cost associated with it. Then rolling it up to a higher level aimed and calculating the entire cost of the project. Parametric Estimating: Measuring the statistical relationship between historical data and other variable or flow. Vendor Bid Analysis: taking the average of several bids given by vendors for the project. Reserve Analysis: Aggregate the cost of each activity on the network path then add a contingency or reserve to the end result of the analysis by a factor determined by the project manager. Cost of Quality Analysis: Estimating the cost at the highest quality for each activity.Project management software can be used to calculate the cost variances for a project. Project management triangle topics: Scope Requirements specified to achieve the end result. The overall definition of what the project is supposed to accomplish, and a specific description of what the end result should be or accomplish. A major component of scope is the quality of the final product. The amount of time put into individual tasks determines the overall quality of the project. Some tasks may require a given amount of time to complete adequately, but given more time could be completed exceptionally. Over the course of a large project, quality can have a significant impact on time and cost (or vice versa). Project management triangle topics: Together, these three constraints have given rise to the phrase "On Time, On Spec, On Budget." In this case, the term "scope" is substituted with "spec(ification)." Evolution of the Project Constraint Model: Traditionally the Project Constraint Model recognised three key constraints; "Cost", "Time" and "Scope". These constraints construct a triangle with geometric proportions illustrating the strong interdependent relationship between these factors. If there is a requirement to shift any one of these factors then at least one of the other factors must also be manipulated.With mainstream acceptance of the Triangle Model, "Cost" and "Time" appear to be represented consistently. 
"Scope" however is often used interchangeably given the context of the triangle's illustration or the perception of the respective project. Scope / Goal / Product / Deliverable / Quality / Performance / Output are all relatively similar and generic variation examples of this, while the above suggestion of 'People Resources' offers a more specialised interpretation. Evolution of the Project Constraint Model: This widespread use of variations implies a level of ambiguity carried by the nuance of the third constraint term and of course a level of value in the flexibility of the Triangle Model. This ambiguity allows blurred focus between a project's output and project's process, with the example terms above having potentially different impetus in the two contexts. Both "Cost" and "Time" / "Delivery" represent the top level project's inputs. Evolution of the Project Constraint Model: The ‘Project Diamond’ model engenders this blurred focus through the inclusion of "Scope" and "Quality" separately as the ‘third’ constraint. While there is merit in the addition of "Quality" as a key constraining factor, acknowledging the increasing maturity of project management, this model still lacks clarity between output and process. The Diamond Model does not capture the analogy of the strong interrelation between points of the triangles however. Evolution of the Project Constraint Model: PMBOK 4.0 offered an evolved model based on the triple constraint with 6 factors to be monitored and managed. This is illustrated as a 6 pointed Star that maintains the strength of the triangle analogy (two overlaid triangles), while at the same time represents the separation and relationship between project inputs/outputs factors on one triangle and the project processes factors on the other. The star variables are: Input-Output Triangle Scope Cost Time Process Triangle Risk Quality ResourcesWhen considering the ambiguity of the third constraint and the suggestions of the "Project Diamond"; it is possible to consider instead the Goal or Product of the project as the third constraint, being made up of the sub factors "Scope" and "Quality". In terms of a project's output both "Scope" and "Quality" can be adjusted resulting in an overall manipulation of the Goal/Product. This interpretation includes the four key factors in the original triangle inputs/outputs form. This can even be incorporated into the PMBOK Star illustrating that "Quality" in particular may be monitored separately in terms of project outputs and process. Further to this suggestion, the use of term "Goal" may best represent change initiative outputs, while Product may best represent more tangible outputs. Evolution of Project Success Criteria: The triple constraints represent a minimum number of project success criteria which are not adequate by themselves. Thus, a number of studies have been carried out to define and expand the various criteria of project success based on the theory of change which is the basic input-process-output chain. Evolution of Project Success Criteria: Bannerman (2008) proposed the multilevel project success framework which comprises five L Levels of project success i.e. team, project management, deliverable, business and strategic.The UNDP in 2012 proposed the results framework which has six stages of project success i.e. 
input, process, output, outcome and impact. Zidane et al. (2016) expanded the results framework into the PESTOL framework to plan and assess project success, which can be used to evaluate "value for money" spent on each project in terms of efficiency and effectiveness. Hence, the triple constraints have been developed into various frameworks to plan and appraise project success as holistically as possible. Limitations: The Project Management Triangle is used to analyze projects. It is often misused to define success as delivering the required scope, at a reasonable quality, within the established budget and schedule. The Project Management Triangle is considered insufficient as a model of project success because it omits crucial dimensions of success including impact on stakeholders, learning and user satisfaction. Subsequently, several enhancements of the basic triple constraints have been proposed, such as the diamond model, the pyramid model, six or multiple constraints and the theory of constraints. Accordingly, the project success criteria have been enhanced as well from three to multiple parameters.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**JoAnne Robbins** JoAnne Robbins: JoAnne Robbins is an American authority on dysphagia and biomedical engineering, and is professor of medicine at the University of Wisconsin School of Medicine and Public Health. For more than three decades she has been a leading researcher in the field of swallowing abnormalities. Her work has uncovered correlations among elderly populations who are at increased risk for pneumonia, choking and other serious medical conditions as a result of dysphagia. Using grants from N.I.H. and the Department of Veterans Affairs, Robbins developed a medical device designed to help people afflicted with swallowing disorders. Education: Robbins earned a B.A. degree from Temple University in 1972, an M.S. degree from the University of Wisconsin-Madison in 1973, and a Ph.D. from Northwestern University in 1981. She completed a postdoctoral fellowship program through NIH’s National Research Service Award. She is a Board Certified Specialist in Swallowing (BCS-S) and holds a Certificate of Clinical Competence for Speech-Language Pathologists (CCC-SLP). She has published dozens of research papers involving dysphagia and holds several patents. Career: Robbins holds teaching positions at the University of Wisconsin-Madison and serves as associate director of research at the William S. Middleton Memorial Veterans Hospital.She has conducted extensive studies on aging. Although motor exercises have been used widely as a treatment for speech problems for many decades, Robbins applied strengthening therapy to swallowing rehabilitation. In 2012, she began a clinical demonstration project which sought to improve swallowing and eating-related care for dysphagic veterans.In 2013, Robbins introduced a new medical device to provide isometric exercises for treating patients with dysphagia. The product, sold through a company called Swallow Solutions, is an oral mouthpiece which uses sensors to measure pressure at five locations on the tongue.She frequently speaks via Internet trade portals and at conferences around the United States. She is coauthor of a culinary book targeted for those who have difficulty swallowing. First published in 2002, the book is titled, The Easy-to-Swallow, Easy-to-Chew Cookbook. Boards, community service: Robbins serves on the board of the American Heart Association’s Stroke Council. She is a past president of the Dysphagia Research Society, and has served on the editorial boards of the American Journal of Speech-Language Pathology, Dysphagia Journal and the Journal of Medical Speech-Language Pathology.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Percussion section** Percussion section: The percussion section is one of the main divisions of the orchestra and the concert band. It includes most percussion instruments and all unpitched instruments. The percussion section is itself divided into three subsections: Pitched percussion, consisting of pitched instruments such as glockenspiel and tubular bells. Auxiliary percussion, consisting of all unpitched instruments such as snare drum and cymbals. Timpani.These three subsections reflect the three main skill areas that a percussionist studies. Percussion sections, consisting of similar instruments, may also be found in stage bands and other musical ensembles. Tuned percussion: See also untuned percussionThis subsection is traditionally called tuned percussion, however the corresponding term untuned percussion is avoided in modern organology in favour of the term unpitched percussion, so the instruments of this subsection are similarly termed pitched percussion. All instruments of this subsection are pitched, and with the exception of the timpani, all pitched instruments of the percussion section are in this subsection. Tuned percussion: They include: All mallet percussion instruments, and keyboard percussion instruments such as the xylophone and tubular bells. Collections of pitched instruments such as hand bells, tuned cowbells and crotales. Most other melodic percussion instruments.Despite the name, keyboard percussion instruments do not have keyboards as such. Keyboard instruments such as the celesta and keyboard glockenspiel are not included in the percussion section owing to the very different skills required to play them, but instead are grouped in the keyboard section with instruments that require similar skills. Auxiliary percussion: All unpitched percussion instruments are grouped into the auxiliary percussion subsection, which includes an enormous variety of instruments, including drums, cymbals, bells, shakers, whistles and even found objects. Players are expected to be accomplished on the snare drum, bass drum, clash cymbals and other hand percussion, and to be able to adapt these skills to playing other instruments and even objects, for example the typewriter. Timpani: The timpanist is a specialist who does not usually perform on the other percussion instruments during a concert. A high level of skill unique to this instrument is expected. While players of the keyboard and auxiliary percussion subsections often play many instruments from both subsections during a performance or piece, the timpanist is normally dedicated to that instrument.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Upflow anaerobic sludge blanket digestion** Upflow anaerobic sludge blanket digestion: Upflow anaerobic sludge blanket (UASB) technology, normally referred to as a UASB reactor, is a form of anaerobic digester that is used for wastewater treatment. The UASB reactor is a methanogenic (methane-producing) digester that evolved from the anaerobic clarigester. A similar but variant technology to UASB is the expanded granular sludge bed (EGSB) digester. Process description: UASB uses an anaerobic process whilst forming a blanket of granular sludge which suspends in the tank. Wastewater flows upwards through the blanket and is processed (degraded) by the anaerobic microorganisms. The upward flow combined with the settling action of gravity suspends the blanket with the aid of flocculants. The blanket begins to reach maturity at around three months. Small sludge granules begin to form whose surface area is covered in aggregations of bacteria. In the absence of any support matrix, the flow conditions create a selective environment in which only those microorganisms capable of attaching to each other survive and proliferate. Eventually the aggregates form into dense compact biofilms referred to as "granules". Biogas with a high concentration of methane is produced as a by-product, and this may be captured and used as an energy source, to generate electricity for export and to cover its own running power. The technology needs constant monitoring when put into use to ensure that the sludge blanket is maintained, and not washed out (thereby losing the effect). The heat produced as a by-product of electricity generation can be reused to heat the digestion tanks. Process description: The blanketing of the sludge enables a dual solid and hydraulic (liquid) retention time in the digesters. Solids requiring a high degree of digestion can remain in the reactors for periods up to 90 days. Sugars dissolved in the liquid waste stream can be converted into gas quickly in the liquid phase, which can exit the system in less than a day. Process description: UASB reactors are typically suited to dilute wastewater streams (3% TSS with particle size >0.75 mm). Historical course Over time, the UASB model has been upgraded, pain points have been addressed, and the design has been optimized, ultimately resulting in the following types of systems. Process description: Second-generation UASB reactors: the EGSB (expanded granular sludge bed) reactor. This is a single-layer, high-load system with only one settler layer. The upflow speeds are many times higher than in a UASB, so that the 'adult' granules remain in the system while 'baby' granules often wash out. Typical loading rates for an EGSB are 15-30 kg COD/m3/day. The EGSB is a largely closed system; there is little or no chance of corrosion or odor nuisance. Process description: Third-generation UASB reactors: the ECSB reactor. This is a double-layer, high-load system with two settler layers. The upflow rates are high below the first settler layer and low below the second settler layer; this keeps both the 'adult' and 'baby' granules in the system, which pays off in greater net growth of granular sludge. Typical loading rates for an ECSB are 15-35 kg COD/m3/day. The ECSB is a closed system; there is no chance of corrosion or odor nuisance. Design: With UASB (but also EGSB and ECSB), the process of settlement and digestion occurs in one or more large tank(s).
The effluent from the UASB, which has a much reduced biochemical oxygen demand (BOD) concentration, usually needs to be treated further, for example with the activated sludge process, depending on the effluent quality requirements.
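A rough sizing sketch shows how the volumetric loading rates quoted above translate into reactor volume; the loading-rate range comes from the EGSB figures in the text, while the flow and COD concentration are invented example values.

```python
# Rough sizing sketch using the volumetric loading rates quoted above
# (15-30 kg COD/m3/day for an EGSB). The influent flow and COD concentration
# are invented example figures, not values from the source.
flow_m3_per_day = 2000      # wastewater flow
cod_kg_per_m3 = 4.0         # influent COD concentration (4 g/L)
loading_rate = 20.0         # chosen design loading, kg COD per m3 of reactor per day

daily_cod_load = flow_m3_per_day * cod_kg_per_m3   # 8000 kg COD/day
reactor_volume = daily_cod_load / loading_rate     # 400 m3 of reactor required
print(f"COD load {daily_cod_load:.0f} kg/day -> reactor volume of about {reactor_volume:.0f} m3")
```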
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Two-stroke oil** Two-stroke oil: Two-stroke oil (also referred to as two-cycle oil, 2-cycle oil, 2T oil, or 2-stroke oil) is a special type of motor oil intended for use in crankcase compression two-stroke engines, typical of small gasoline-powered engines. Use: Unlike a four-stroke engine, the crankcase of which is closed except for its ventilation system, a two-stroke engine uses the crankcase as part of the induction tract, so oil must be mixed with gasoline to be distributed throughout the engine for lubrication. The resultant mix is referred to as premix or petroil. The oil is ultimately burned along with the fuel as a total-loss oiling system. That results in increased exhaust emissions, sometimes with excess smoke and/or a distinctive odor. Use: The oil-base stock can be petroleum, castor oil, semi-synthetic or synthetic oil, and is mixed (or metered by injection) with petrol/gasoline at a volumetric fuel-to-oil ratio ranging from 16:1 to as low as 100:1. To avoid high emissions and oily deposits on spark plugs, modern two-strokes, particularly small engines powering such items as garden equipment and chainsaws, may now require a synthetic oil, and can suffer from oiling problems otherwise. Use: Engine original equipment manufacturers (OEMs) introduced pre-injection systems (sometimes known as "auto-lube") to engines to operate from a 32:1 to 100:1 ratio. Oils must meet or exceed the following typical specifications: TC-W3® (NMMA), API-TC, JASO FC, ISO-L-EGC.The relevant difference between regular lubricating oil and two-stroke oil is that the latter must have a much lower ash content, to minimize deposits that tend to form if ash is present in the oil when it is burned in the engine's combustion chamber. Additionally, a non-2T-specific oil can turn to gum in a matter of days if mixed with gasoline and not immediately consumed. Another important factor is that four-stroke engines have a different requirement for "stickiness" than do two-strokes. Use: Since the 1980s, different types of two-stroke oil have been developed for specialized uses, such as outboard motor two-strokes, as well as the more standard auto lube (motorcycle) two-stroke oil. As a rule of thumb, it will be stated somewhere on the printed label of most containers of oil available commercially, that it is compatible with "Autolube" or injector pumps. Those oils tend to have the consistency of liquid dish soap if shaken. A more viscous oil cannot reliably be passed through an injection system, although a premix can be used on either type. Use: "Racing" oil or castor-based does offer excellent lubrication, at the expense of premature coking. The average moped/scooter/trail rider will not achieve an appreciable increase in performance and will require very frequent teardowns. Additive ingredients: Additives for two-stroke oils fall into several general categories: Detergent/Dispersants, Antiwear agents, Biodegradability components and antioxidants (Zinc compounds). Some of the higher quality include a fuel stabilizer as well. Standards: The current international standard (ISO 13738) for two-stroke gasoline engine oil evolved from JASO M345, which were grades intended to exceed API-TC. Grades include: JASO FA is abandoned. It is not put in ISO. JASO FB evolved into ISO L-EGB, with additional test for piston cleanliness. JASO FC evolved into ISO L-EGC, with additional test for piston cleanliness. 
JASO FD evolved into ISO L-EGD, with additional tests for piston cleanliness and detergent effect. The National Marine Manufacturers Association (NMMA) of the USA maintains its own TC-W line of standards.
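The premix ratios discussed above reduce to simple arithmetic; the sketch below is purely illustrative, and the engine manufacturer's specified ratio should always take precedence.

```python
# Simple premix arithmetic for the fuel-to-oil ratios discussed above
# (illustrative only; the engine manufacturer's specified ratio takes precedence).
def oil_needed_ml(fuel_litres, ratio):
    """Millilitres of two-stroke oil for a given fuel volume at a fuel:oil ratio of ratio:1."""
    return fuel_litres * 1000.0 / ratio

for ratio in (16, 32, 50, 100):
    print(f"{ratio}:1 -> {oil_needed_ml(5, ratio):.1f} ml of oil per 5 L of fuel")
# e.g. 50:1 -> 100.0 ml of oil per 5 L of fuel
```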
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spydercam** Spydercam: Spydercam is a cable-suspended camera and rigging system used in making motion pictures and television and at athletic stadiums. It uses computer-controlled winches to drive synthetic lines connected to a crane, truss or buildings to achieve multidimensional, repeatable movement. Spydercam is used as an alternative to other camera movement systems. According to the maker's website, Spydercam has been used in a number of films. History: Founded in 1992 by Earl Wiggens as a stunt and rigging company, Spydercam quickly became the standard for suspended camera work. Tim Drnec and Hammer Semmes, who joined later and now own the company, added their experience in motion control, VFX, stunts, and rope rigging. General Systems: The manufacturers of Spydercam claim their software is entirely modular, allowing them to work with producers to create their specific shot, but Spydercam also outlines three general systems they call Bullet, Falcon, and Talon. Bullet This is a simple, point-to-point rig with a single axis. The Bullet rig is the quickest to set up and implement, and can be rigged horizontally or vertically. Often used for live events, it can start and stop very quickly. The camera is held tight to the highline, which allows for a more stable shot. Falcon According to the Spydercam website, the Falcon is the most versatile and widely used of the rigs offered. With two axes, the Falcon allows both horizontal and vertical movement along two separate lines. Talon This is a three-dimensional rig that allows movement anywhere within its volumetric space. With the most control and variety out of the three general rigs offered by Spydercam, Talon is a sophisticated system. Competitors: While Spydercam is based in the United States with systems in place around the world and is largely used for film and commercial work, there are several competitors on the market for cable-suspended camera systems. Some of these specialize in other areas of film, including sports footage. Spidercam is a European company that was established in the early 2000s and is mainly used for sporting events such as the Pakistan Super League, Bundesliga, Rugby World Cup, UEFA Champions League, Premier League, Ultimate Fighting Championship, US Open (tennis), Indian Premier League, and French Open. Skycam and Cablecam are consolidated under Skycam, which preceded the other systems mentioned. Skycam was established in 1984 in the United States. It can replicate previsual plans to a high degree of accuracy.
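The Talon description above amounts to placing a camera anywhere in a volume by coordinating several winch lines. The sketch below is a generic straight-line geometry illustration with invented anchor coordinates; it is not Spydercam's actual control software, which would also model line sag, dynamics and safety limits.

```python
# Generic geometry sketch, not Spydercam's control software: for a
# three-dimensional cable rig, each winch pays out enough line to span the
# straight-line distance from its anchor point to the target camera position.
import math

ANCHORS = [(0, 0, 30), (100, 0, 30), (100, 80, 30), (0, 80, 30)]  # four masts, metres

def line_lengths(target, anchors=ANCHORS):
    """Straight-line length required from each anchor to place the camera at target."""
    return [math.dist(target, a) for a in anchors]

print([round(l, 1) for l in line_lengths((50, 40, 10))])
# roughly equal lengths (about 67.1 m) when the camera sits at the centre of the volume
```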
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Famine food** Famine food: A famine food or poverty food is any inexpensive or readily available food used to nourish people in times of hunger and starvation, whether caused by extreme poverty, such as during economic depression or war, or by natural disasters such as drought. Foods associated with famine need not be nutritionally deficient or unsavory. People who eat famine food in large quantities over a long period may become averse to it. In times of relative affluence, these foods may become the targets of social stigma and rejection. The characterization of some foodstuffs as "famine" or "poverty" food can be social. For example, lobster and other crustaceans may be considered poverty food in some societies and luxury food in others, depending on the time period and situation. Examples: A number of foodstuffs have been strongly associated with famine, war, or times of hardship throughout history: The breadnut or Maya nut was cultivated by the ancient Mayans but is largely rejected as a poverty food in modern Central America. In Polynesia, plants from the genus Xanthosoma, known locally as ʻape, were considered famine food and used only when the taro crop failed. Examples: Several species of edible algae, including dulse, channeled wrack and Irish moss (Chondrus crispus), were eaten by coastal peasants during the Great Famine in Ireland of 1846–1848. Further inland, famine foods included stinging nettle, wild mustard, sorrel and watercress. In the area of Skibbereen, people resorted to eating donkey meat, earning the nickname "Donkey Aters" (Eaters) for people in the area. Others ate dogs, cats, corncrakes, rotten pigs and even human flesh. The consumption of silverweed, sea anemones, wild carrot, sloes, pignut, common limpet, snails, dock leaves, sycamore seeds, laurel berries, holly berries, dandelion, juices of red clover and heather blossoms is also recorded. Many accounts of the Famine mention people dying with green stains around their mouths from eating grass or other green plants. Examples: Sego lily bulbs were eaten by the Mormon pioneers when their food crops failed. Tulip bulbs and beetroots were eaten in the German-occupied parts of the Netherlands during the "hunger winter" of 1944–45. During a number of famines in Russia and the Soviet Union, nettle, orache, and other types of wild plants were used to make breads or soups. In Iceland, rural parts of Sweden, and Western Finland, mushrooms were not widely eaten before World War II. They were viewed as food for cows and were also associated with the stigma of being a wartime and poverty food. In times of famine in Scandinavia, the cambium (phloem) of deciduous trees was dried, ground, and added to extend what grain flour was available, to create bark bread. This is thought to be a Sami tradition. Examples: The word Adirondack, describing the indigenous peoples that lived in the Adirondack Mountains in New York, is thought to come from the Mohawk word 'ha-de-ron-dah', meaning 'eaters of trees'. This name was said to be used by the Iroquoians as a derogatory term for groups of Algonquians who did not practice agriculture and therefore sometimes had to eat tree bark to survive harsh winters. Examples: Cat meat was eaten in the northern Italian regions of Piedmont, Emilia-Romagna, and Liguria in times of famine, such as during World War II. 
Likewise, during the Siege of Paris in the Franco-Prussian War, the menu in Parisian cafés was not limited to cats but also included dogs, rats, horses, donkeys, camels, and even elephants. During the Japanese occupation of Malaya, due to a severe shortage of rice, the locals resorted to surviving on hardy tuberous roots such as cassava, sweet potato, and yam. During the Battle of Bataan in the Philippines during World War II, Filipino and American servicemen resorted to consuming dog meat, monkey meat, and the meat of monitor lizards (referred to as "iguana lizards" in the source), pythons, mules, horses, parrots, owls, crocodiles and carabaos as their supply of food dwindled. In the semi-arid areas of the Brazilian Northeast, the shoots and leaves of the cactus Opuntia cochenillifera are normally used to feed livestock (cattle and goats), but during long droughts people may use them as a last resort. Historically in the Maldives, the leaves of seaside trees such as the octopus bush and the beach cabbage were often used as famine food. The caper, the flower bud and berry of the Capparis spinosa species, has been a famine food in southern Ethiopia and Sudan as well as in the 1948 siege of west Jerusalem. During the Cambodian humanitarian crisis, people ate tarantulas, scorpions, silkworms, and grasshoppers. Fried tarantulas later became a delicacy popular with tourists in the Cambodian town of Skuon. Morinda citrifolia is sometimes called a "starvation fruit", implying it was used by indigenous peoples in the South Pacific as emergency food during times of famine. In Haiti, mud cookies are sometimes eaten by the poorest people to avoid starvation. Similar mud cookies are eaten in Zambia, Guinea and Cameroon for their nutritional content.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lucigenin** Lucigenin: Lucigenin is an aromatic compound used in applications that include chemiluminescence. Its chemical name is bis-N-methylacridinium nitrate. It exhibits a bluish-green fluorescence. Because of its chemiluminescent properties, it is used in biology as a probe for the superoxide anion. Synthesis: It may be prepared from acridone; a synthetic route starting from toluene has also been described.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Habit (biology)** Habit (biology): Habit, equivalent to habitus in some applications in biology, refers variously to aspects of behaviour or structure, as follows: In zoology (particularly in ethology), habit usually refers to aspects of more or less predictable behaviour, instinctive or otherwise, though it also has broader application. Habitus refers to the characteristic form or morphology of a species. In botany, habit is the characteristic form in which a given species of plant grows (see plant habit). Behavior: In zoology, habit (not to be confused with habitus as described below) usually refers to a specific behavior pattern, either adopted, learned, pathological, innate, or directly related to physiology. For example: ...the [cat] was in the habit of springing upon the [door knocker] in order to gain admission... If these sensitive parrots are kept in cages, they quickly take up the habit of feather plucking. The spider monkey has an arboreal habit and rarely ventures onto the forest floor. Behavior: The brittle star has the habit of breaking off arms as a means of defense. Mode of life (or lifestyle, modus vivendi) is a concept related to habit, and it is sometimes referred to as the habit of an animal. It may refer to locomotor capabilities (motile, sessile, errant, sedentary habits), feeding behaviour and mechanisms, nutrition mode (free-living, parasitic, holozoic, saprotrophic, trophic type), type of habitat (terrestrial, arboreal, aquatic, marine, freshwater, seawater, benthic, pelagic, nektonic, planktonic, etc.), period of activity (diurnal, nocturnal), types of ecological interaction, etc. Behavior: The habits of plants and animals often change in response to changes in their environment. For example, if a species develops a disease, or there is a drastic change of habitat or local climate, or it is removed to a different region, then its normal habits may change. Such changes may be either pathological or adaptive. Structure: In botany, habit is the general appearance, growth form, or architecture. For example: Many species of maple have a shrubby habit and may form bushes or hedges rather than trees. Structure: Certain alpine plants have been chosen for cultivation because of their dwarf habit. Plants may be woody or herbaceous. The main types of woody plants are trees, shrubs and lianas. Climbing plants (vines) can be woody (lianas) or herbaceous (nonwoody vines). Plants can also be categorized in terms of their habit as subshrubs (dwarf shrubs, bushes), cushion plants and succulents. There is some overlap between the classifications of plants according to their habit and their life-form. Structure: Other terms in biology refer similarly to various taxa; for example: Fungi are described by their growth patterns: molds, yeasts, mushrooms and dimorphic fungi. Lichen structure is described by its growth form: foliose, crustose, fruticose or gelatinous. Bryophyte structure is described as foliose or thallose. The structure of a given species of algae is referred to as its type or level of organization. Bacteria are described by their morphology or shape. Structure: Animal structure is described by its body plan, which encompasses the body symmetry, the type of germ layers and of body cavities. Since the distinction between the concepts – mode of behavior and morphological form – is significant in zoology, the term habitus (from which the word habit derives) is used to describe form as distinct from behaviour (habit). 
The term habitus also occurs in botanical texts, but there it is used almost interchangeably with habit, because plant behaviour generally does not correspond closely to the concept of habits in the zoological sense.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flash rob** Flash rob: A flash rob, also known as a multiple offender crime or flash mob robbery, is an organized form of theft in which a group of participants enter a retail shop or convenience store en masse and steal goods and other items. Typically, store workers and employees in these cases quickly become overwhelmed by the large number of participants and are unable to stop the theft. The National Retail Federation does not classify these crimes as "flash mobs" but rather as "multiple offender crimes" that utilize "flash mob tactics". In a report, the NRF noted, "multiple offender crimes tend to involve groups or gangs of juveniles who already know each other, which does not earn them the term 'flash mob'." Etymology: The term often used by the media for this type of event is "flash rob", which originates from flash mobs, where a group of people assemble quickly, perform an unusual and seemingly pointless act, and then disperse. In Chile this kind of robbery is known as turbazo. Flash rob dynamics: Flash robs operate using speed and sheer numbers in order to intimidate any resistance and complete the act before police can respond. While often viewed as a form of theft or looting (the illegal taking of items), these crimes more closely fit the definition of robbery because the large crowd creates an implied threat of violence should employees or bystanders attempt to intervene. Many investigations into these robberies have shown that they are planned ahead of time using social media, and the participants do not all necessarily know each other personally. Flash rob dynamics: United States Flash robs have occurred in places such as Chicago, Illinois; Portland, Oregon; Houston, Texas; Jacksonville, Florida; Germantown, Maryland; and Beverly Hills, Los Angeles, San Francisco, and Walnut Creek, California. Flash rob dynamics: Brazil Brazil has seen mass flash robberies since at least the early 1990s. In a phenomenon known as arrastão (trawling), mobs will steal money, telephones, watches, rings, bags and sometimes even victims' clothing. The most infamous case of trawling took place on 18 October 1992, on Ipanema beach in Rio de Janeiro, when hundreds of young people ran together in a mass to rob beachgoers. As a result of mass flash robberies, shopping malls in Brazil have heavy security and typically prevent large crowds of young people from entering the private property, which has been called a form of soft apartheid. Flash rob dynamics: In 2013, a rolezinho (strolling) protest movement arose amongst youth, where thousands of young people coordinated their simultaneous entry to normally inaccessible upscale shopping malls. In some rolezinhos, the police were called and crowds were dispersed with tear gas and flash grenades.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Power optimizer** Power optimizer: A power optimizer is a DC to DC converter technology developed to maximize the energy harvest from solar photovoltaic or wind turbine systems. They do this by individually tuning the performance of the panel or wind turbine through maximum power point tracking, and optionally tuning the output to match the performance of the string inverter (DC to AC inverter). Power optimizers are especially useful when the performance of the power generating components in a distributed system will vary widely, such as due to differences in equipment, shading of light or wind, or being installed facing different directions or in widely separated locations. Power optimizer: Power optimizers for solar applications can be similar to microinverters in that both systems attempt to isolate individual panels in order to improve overall system performance. A smart module is a power optimizer integrated into a solar module. A microinverter essentially combines a power optimizer with a small inverter in a single enclosure that is used on every panel, while the power optimizer leaves the inverter in a separate box and uses only one inverter for the entire array. The claimed advantage to this "hybrid" approach is lower overall system costs, avoiding the distribution of electronics. Description: Maximum power point tracking (MPPT) Most energy production or storage devices have a complex relationship between the power they produce, the load placed on them, and the efficiency of the delivery. A conventional battery, for instance, stores energy in chemical reactions in its electrolytes and plates. These reactions take time to occur, which limits the rate at which the power can be efficiently drawn from the cell. For this reason, large batteries used for power storage generally list two or more capacities, normally the "2 hour" and "20 hour" rates, with the 2 hour rate often being around 50% of the 20 hour rate. Description: Solar panels have similar issues due to the speed at which the cell can convert solar photons into electrons, ambient temperature, and a host of other issues. In this case there is a complex non-linear relationship between voltage, current and the total amount of power being produced, the "I-V curve". In order to optimize collection, modern solar arrays use a technique known as "maximum power point tracking" (MPPT) to monitor the total output of the array and continually adjust the presented load to keep the system operating at its peak efficiency point. Traditionally, solar panels produce voltages around 30 V. This is too low to be effectively converted into AC to feed to the power grid. To address this, panels are strung together in series to increase the voltage to something more appropriate for the inverter being used, typically about 600 V. The drawback to this approach is that the MPPT system can only be applied to the array as a whole. Because the I-V curve is non-linear, a panel that is even slightly shadowed can have dramatically lower output, and greatly increase its internal resistance. As the panels are wired in series, this would cause the output of the entire string to be reduced due to the increased total resistance. 
This change in performance causes the MPPT system to change the operating point, moving the rest of the panels away from their best performance. Because of their sequential wiring, power mismatch between PV modules within a string can lead to a drastic and disproportionate loss of power from the entire solar array, in some cases leading to complete system failure. Shading of as little as 9% of the entire surface area of a PV system can, in some circumstances, lead to a system-wide power loss of as much as 54%. Although this problem is most notable with "large" events like a passing shadow, even the tiniest differences in panel performance, due to dirt, differential aging or tiny differences during manufacturing, can result in the array as a whole operating away from its best MPPT point. "Panel matching" is an important part of solar array design. Description: Isolating panels These problems have led to a number of different potential solutions that isolate panels individually or into much smaller groups (2 to 3 panels) in an effort to provide MPPT that avoids the problems of large strings. Description: One solution, the microinverter, places the entire power conversion system directly on the back of each panel. This allows the system to track the MPPT for each panel, and directly output AC power that matches the grid. The panels are then wired together in parallel, so even the failure of one of the panels or microinverters will not lead to a loss of power from the string. However, this approach has the disadvantage of distributing the power conversion circuitry, which, in theory, is the expensive part of the system. Microinverters, at least as late as early 2011, had a significantly higher price per watt. Description: This leads, naturally, to the power optimizer concept, where only the MPPT system is distributed to the panels. In this case the conversion from DC to AC takes place in a single inverter, one that lacks the MPPT hardware or has it disabled. Advanced solutions are able to work correctly with all solar inverters, making it possible to optimise already installed plants. According to its supporters, this "hybrid" approach produces the lowest-cost solution overall, while still maintaining the advantages of the microinverter approach. Description: Implementation Power optimizers are essentially DC-DC converters, taking the DC power from a solar panel at whatever voltage and current is optimal (via MPPT), then converting that to a different voltage and current that best suits the central/string inverter. Description: Some power optimizers are designed to work in conjunction with a central inverter from the same manufacturer, which allows the inverter to communicate with the optimizers to ensure that the inverter always receives the same total voltage from the panel string. In this situation, if there is a string of panels in series and a single panel's output drops due to shade, its voltage will drop so that it can deliver the same amount of current (amps). This would cause the string voltage to drop as well, except that the central inverter adjusts all the other optimizers so that their output voltage increases slightly, maintaining the fixed string voltage required at the inverter (just at reduced available amperage while the single panel is shaded). 
The downside of this type of optimizer is that it requires a central inverter from the same manufacturer as the optimizers, so it is not possible to gradually retrofit these in an existing installation unless the inverter is also replaced and optimizers are installed on all panels at the same time.
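The maximum power point tracking described throughout this article is commonly implemented with a simple "perturb and observe" loop: nudge the operating voltage, check whether output power rose or fell, and keep moving in the direction that increases power. The sketch below runs that loop against a toy panel model; the panel constants are invented for illustration and the code is not based on any vendor's firmware.

```python
def panel_power(voltage: float) -> float:
    """Toy model of a panel's power-voltage curve (illustrative numbers only).

    Real panels follow a non-linear I-V curve that shifts with irradiance and
    temperature; this parabola just gives the loop something to climb.
    """
    v_mp = 30.0          # assumed maximum-power voltage, volts
    p_max = 250.0        # assumed peak power, watts
    return max(0.0, p_max - 0.8 * (voltage - v_mp) ** 2)


def perturb_and_observe(v_start: float = 20.0, step: float = 0.5, iterations: int = 50) -> float:
    """Minimal perturb-and-observe MPPT loop."""
    v = v_start
    p_prev = panel_power(v)
    direction = +1.0
    for _ in range(iterations):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return v


if __name__ == "__main__":
    v_mp = perturb_and_observe()
    print(f"Converged near {v_mp:.1f} V, producing about {panel_power(v_mp):.0f} W")
```

A per-panel optimizer runs a loop like this against its own panel's curve, which is why a shaded module no longer drags the whole series string away from its best operating point.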
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fonz (video game)** Fonz (video game): Road Race is a car-driving arcade racing video game developed and released by Sega in February 1976. Later the same year, Sega released two motorbike racing variants, Man T.T. (released in August) and Moto-Cross, the latter of which was re-branded as Fonz in November 1976. The game was based on the character Fonzie (portrayed by Henry Winkler) from the 1970s TV show Happy Days, with the slogan being "TV's hottest name, Your hottest game." Sega licensed Fonz because the company was at the time owned by Charles Bluhdorn's Gulf+Western Company, and Happy Days was a Paramount Television intellectual property. Fonz (video game): A two-player version of Man T.T. called Twin Course T.T. was released in January 1977. Overview: Moto-Cross / Fonz is an early black-and-white motorbike racing game, most notable for introducing an early three-dimensional third-person perspective. Both versions of the game display a constantly changing forward-scrolling road and the player's bike in a third-person perspective where objects nearer to the player are larger than those nearer to the horizon, and the aim is to steer the vehicle across the road, racing against the clock, while avoiding any oncoming motorcycles or driving off the road. The game also introduced the use of haptic feedback, which caused the motorcycle handlebars to vibrate during a collision with another vehicle. Gameplay: The general premise has the player controlling Fonzie on a motorcycle with handlebars on the cabinet. Gameplay: The player has to go as fast as possible without skidding off the road or colliding with other racing bikes on the screen. Turning the handlebars makes the bike corner and bank; twisting the throttle open makes it accelerate. When a collision with another bike occurs, the handlebars vibrate and the screen flashes a reverse image. To increase the challenge, the size of the bike can be regulated by the operator. Gameplay: Game time is adjustable from 45 to 100 seconds. Reception: In Japan, Road Race was among the top twenty highest-grossing arcade video games of 1976, according to the first annual Game Machine chart. In North America, Road Race was reported to be doing strong business upon release. Man T.T. was among the top ten highest-grossing arcade video games of 1977 in Japan. Fonz was introduced at Chicago's Music Operators Association (MOA) show in November 1976. It was the first time that a television character was licensed for a video game, with Sega co-founder David M. Rosen predicting the start of a new coalition between the show business and amusement arcade industries. Sega also advertised the game for having both the road and bikes seen in "true perspective on the game screen, while the player operates realistically functioning handle-bars to simulate high-speed competition riding complete with authentic motor sounds." Sega said the response to the game at the MOA show was "unanimous and enthusiastic" and that test location results were very positive. At the start of December 1976, Sega of America reported that it had manufactured several hundred Fonz arcade cabinets.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glucuronosyl-disulfoglucosamine glucuronidase** Glucuronosyl-disulfoglucosamine glucuronidase: The enzyme glucuronosyl-disulfoglucosamine glucuronidase (EC 3.2.1.56) catalyzes the following chemical reaction: 3-D-glucuronosyl-N2,6-disulfo-β-D-glucosamine + H2O ⇌ D-glucuronate + N2,6-disulfo-D-glucosamine. This enzyme belongs to the family of hydrolases, specifically those glycosidases that hydrolyse O- and S-glycosyl compounds. The systematic name of this enzyme class is 3-D-glucuronosyl-N2,6-disulfo-β-D-glucosamine glucuronohydrolase. Other names in common use include glycuronidase and 3-D-glucuronosyl-2-N,6-disulfo-β-D-glucosamine glucuronohydrolase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Offshore embedded anchors** Offshore embedded anchors: Offshore embedded anchors are anchors intended for offshore use that derive their holding capacity from the frictional, or bearing, resistance of the surrounding soil, as opposed to gravity anchors, which derive their holding capacity largely from their weight. As offshore developments move into deeper waters, gravity-based structures become less economical due to the large size needed and the consequent cost of transportation. Offshore embedded anchors: Each of several embedded-anchor types presents its own advantages for anchoring offshore structures. The choice of anchoring solution depends on multiple factors, such as the type of offshore facility that requires mooring, its location, economic viability, the lifetime of its use, soil conditions, and resources available. Examples of facilities that may need mooring offshore are floating production storage and offloading (FPSO) units, mobile offshore drilling units, offshore oil production platforms, wave power and other renewable energy converters, and floating liquefied natural gas facilities. Drag-embedment anchors: Drag-embedment anchors (DEAs) derive their holding capacity from being buried, or embedded, deep within the seabed, with their anchoring capacity directly related to embedment depth. DEAs are installed by means of dragging, using a mooring chain or wire; this relatively simple means of installation makes the DEA a cost-effective option for anchoring offshore structures. DEAs are commonly used for temporary moorings of offshore oil and gas structures, e.g. mobile offshore drilling units. Their use in only temporary mooring situations may be largely attributed to uncertainty involving the anchor's embedding trajectory and placement in the soil, which results in uncertainty with regard to the anchor's holding capacity. Under ideal conditions, DEAs are one of the most efficient types of anchor, with holding capacities ranging from 33 to greater than 50 times their weight; such efficiency gives DEAs an inherent advantage over other anchoring solutions such as caissons and piles, since the mass of a DEA is concentrated deep within the seabed where soil resistance and, hence, holding capacity, is greatest. Anchor efficiency is defined as the ratio between the ultimate holding capacity and the dry weight of the anchor, with DEAs often possessing significantly higher efficiency ratios compared to other anchoring solutions. Drag-embedment anchors: A catenary configuration consists of "slack" mooring lines that form a catenary shape under their own weight. Since the catenary mooring lines lie flat along the seabed, they exert only horizontal forces on their anchors. Taut mooring lines arriving at an angle to the seabed exert both horizontal and vertical forces on their anchors. Since DEAs are designed to resist horizontal forces only, these anchors should only be used in a catenary-moored configuration. Applying a significant vertical load to a DEA will result in its failure, as the vertical force applied to the padeye will pull the anchor back out of the soil. However, this behaviour does facilitate anchor retrieval, which contributes to the cost effectiveness of this anchoring solution. Drag-embedment anchors: Design The three main components of a DEA are the fluke, shank, and padeye. For a DEA, the angle between the fluke and the shank is fixed at approximately 30 degrees for stiff clays and sand, and 50 degrees for soft clays. 
Drag-embedment anchors: Fluke The fluke of a plate anchor is a bearing plate that provides the large majority of the anchor's holding capacity at its ultimate embedment depth. As well as contributing to anchor capacity, the fluke may contribute to anchor stability during embedment. Adopting a wider fluke can help in providing rolling stability, which allows for deeper embedment and better holding capacity. There are industry guidelines pertaining to the appropriate width, length, and thickness of anchor flukes, where width refers to the dimension perpendicular to the direction of embedment. Commercial anchors typically have a fluke width-to-length ratio of 2:1 and a fluke length-to-thickness ratio between 5 and 30. Drag-embedment anchors: Shank Since DEAs derive their strength from their embedment depth, the shank should be designed such that soil resistance perpendicular to the anchor's embedment trajectory is minimised. Frictional soil resistance against the parallel component of the shank, however, is less significant. Thus, the area of the shank in line with the direction of the embedment trajectory is often relatively large to provide anchor stability against rolling during embedment. Drag-embedment anchors: Padeye The padeye is the connection between the anchor and mooring line. Padeye eccentricity, often measured as the padeye offset ratio, is the relationship between the horizontal and vertical distance of the padeye position in relation to the fluke–shank connection of an anchor. The evaluation of the optimal padeye eccentricity for DEAs and vertically loaded anchors (VLAs) is limited to the appropriate choice of shank length given a fixed fluke–shank angle during embedment. A study conducted to investigate appropriate shank lengths considered a range of shank-length to fluke-length ratios between 1 and 2. It was determined that the shorter shank lengths (closer to ratios of 1) produced deeper anchor embedment. Drag-embedment anchors: Mooring Line Although the mooring line is not an anchor component unique to the DEA, its design significantly influences the behaviour of the anchor. A thicker mooring line creates more resistance to anchor embedment. The properties of chain, versus wire, mooring lines have been investigated, with chain mooring lines causing reductions in anchor capacity of up to 70%. Thus, where appropriate and cost-efficient, wire mooring lines should be used. The embedded section of a mooring line contributes to the anchor's holding capacity against horizontal movement. It is, therefore, appropriate to analyse the contribution of the anchor's mooring line with respect to both the embedment process of the anchor and its contribution to the final anchor holding capacity. Vertically-loaded anchors: Vertically-loaded anchors (VLAs) are essentially DEAs that are free to rotate about the fluke-shank connection, which allows the anchor to withstand both vertical and horizontal loading; thus, unlike DEAs, mooring lines may be in either a catenary or taut-moored configuration. VLAs are embedded as DEAs are, over a specified drag length. As a result, many of the design considerations required for DEAs are applicable to VLAs. Following the drag length insertion, the fluke is "released" and allowed to rotate freely about its connection with the shank. This new anchor configuration results in the mooring line load being essentially normal to the fluke of the VLA. 
Suction caissons: Suction caissons (also known as suction buckets, suction piles, or suction anchors) are a newer class of embedded anchors that have a number of economic advantages over other methods. They are essentially upturned buckets that are embedded into the soil and use suction, by pumping out the water to create a vacuum, to anchor offshore floating facilities. They present a number of economic benefits, including quick installation and removal during decommissioning, as well as a reduction in material costs. The caisson consists of a large-diameter cylinder (typically in the 3-to-8-metre (10 to 26 ft) range), open at the bottom and closed at the top, with a length-to-diameter ratio in the range of 3 to 6. This anchoring solution is used extensively in large offshore structures, offshore drilling and accommodation platforms. Since the rise in the demand for renewable energy, such anchors are now used for offshore wind turbines, typically in a tripod configuration. Suction-embedded plate anchors: In 1997, the suction-embedded plate anchor (SEPLA) was introduced as a combination of two proven anchoring concepts (suction piles and plate anchors) to increase efficiency and reduce costs. Today, SEPLA anchors are used in the Gulf of Mexico, off the coast of West Africa, and in many other locations. The SEPLA uses a suction "follower", an initially water-filled, open-bottom caisson, to embed a plate anchor into the soil. The suction follower is lowered to the seabed, where it begins to penetrate under its own weight. Water is then pumped from the interior of the caisson to create a vacuum that drives the follower, and the plate anchor beneath it, to the desired depth (Step 1). The plate anchor mooring line is then disengaged from the caisson, which is retrieved by water being forced into the caisson, causing it to move upwards whilst leaving the plate anchor embedded (Step 2). Tension is then applied to the mooring line (Step 3), causing the plate anchor to rotate (a process also known as "keying") until it is perpendicular to the direction of loading (Step 4). This is done so that the maximum surface area faces the direction of loading, maximising the resistance of the anchor. Suction-embedded plate anchors: As a suction-caisson follower is used, SEPLA anchors can be classified as direct-embedment anchors; thus the location and depth of the anchor are known. Because of their geotechnical efficiency, SEPLA plate anchors are significantly smaller and lighter than the equivalent suction anchors, thus reducing costs. Dynamically installed anchors: The increased cost of installing anchors in deep water has led to the inception of dynamically penetrating anchors that embed themselves into the seabed by free-fall. These anchors typically consist of a thick-walled, steel, tubular shaft filled with scrap metal or concrete and fitted with a conical tip. Steel flukes are often attached to the shaft to improve its hydrodynamic stability and to provide additional frictional resistance against uplift after installation. The main advantage of dynamically installed anchors is that their use is not restricted by water depth. Costs are reduced, as no additional mechanical interaction is required during installation. The simple anchor design keeps fabrication and handling costs to a minimum. Additionally, the ultimate holding capacity of dynamic anchors is less dependent on the geotechnical assessment of the location, as lower shear strengths permit greater penetration, which increases the holding capacity. 
Despite these advantages, this anchor type's major disadvantage is the degree of uncertainty in predicting embedment depth and orientation, and the resultant uncertainty in holding capacity. Dynamically installed anchors: Design Several different forms of dynamically installed anchors have been designed since their first commercial development in the 1990s. The deep-penetrating anchor (DPA) and the torpedo anchor have seen widespread adoption in offshore South American and Norwegian waters. Their designs are shown in the figure with two other forms of dynamically installed anchors, namely the Omni-Max and the dynamically embedded plate anchor (DEPLA). Dynamically installed anchors: Deep-penetrating and torpedo anchors are designed to reach maximum velocities of 25–35 metres per second (82–115 ft/s) at the seabed, allowing for tip penetration of two to three times the anchor length, and holding capacities in the range of three to six times the weight of the anchor after soil consolidation. Dynamically installed anchors: The dynamically-embedded plate anchor (DEPLA) is a direct-embedment, vertically-loaded anchor that consists of a plate embedded in the seabed by the kinetic energy obtained by free-fall in water. This new anchor concept has only recently been developed but has been tested both in the lab and in the field. The different components of the DEPLA can be seen in the labeled diagram in the figure. Dynamically installed anchors: The Omni-Max anchor pictured is a gravity-installed anchor that is capable of being loaded in any direction due to its 360-degree swivel feature. The anchor is manufactured from high-strength steel and possesses adjustable fluke fins that can be adapted to specific soil conditions. Torpedo anchors A torpedo anchor features a tubular steel shaft, with or without vertical steel fins, which is fitted with a conical tip and filled with scrap metal or concrete. Up to about 15 metres (49 ft) long, the anchor becomes completely buried within the seabed by free-fall. Dynamically installed anchors: Full-scale field tests were performed in water at depths of up to 1,000 metres (3,300 ft) using a 12-metre (39 ft) long, 762-millimetre (30.0 in) diameter torpedo anchor with a dry weight of 400 kilonewtons (90,000 lbf). The torpedo anchor dropped from a height of 30 metres (98 ft) above the seabed achieved 29-metre (95 ft) penetration in normally consolidated clay. Subsequent tests with a torpedo anchor with a dry weight of 240 kilonewtons (54,000 lbf) and an average tip embedment of 20 metres (66 ft) resulted in holding capacities of approximately 4 times the anchor's dry weight immediately following installation, which approximately doubled after 10 days of soil consolidation. While the efficiencies are lower than what would be obtained with other sorts of anchor, such as a drag embedment anchor, this is compensated for by the low cost of fabrication and ease of installation. Therefore, a series of torpedo anchors can be deployed for station-keeping of risers and other floating structures. Dynamically installed anchors: Deep-penetrating anchors A deep-penetrating anchor (DPA) is conceptually similar to a torpedo anchor: it features a dart-shaped, thick-walled, steel cylinder with flukes attached to the upper section of the anchor. A full-scale DPA is approximately 15 metres (49 ft) in length, 1.2 metres (4 ft) in diameter, and weighs on the order of 50–100 tonnes (49–98 long tons; 55–110 short tons). 
Its installation method is no different from that of the torpedo anchor: it is lowered to a predetermined height above the seabed and then released in free-fall to embed itself into the seabed. Anchor piles: Embedded anchor piles (driven or drilled) are required for situations where a large holding capacity is required. The design of anchor piles allows for three types of mooring configurations—vertical tethers, catenary moorings, and semi-taut/taut moorings—which are used for the mooring of offshore structures such as offshore wind turbines, floating production storage and offloading (FPSO) vessels, floating liquefied natural gas (FLNG) facilities, etc. An industrial example is the Ursa tension-leg platform (TLP) which has been held on-station by 16 anchor piles, each of which is 147 metres (482 ft) long, 2.4 metres (7 ft 10 in) in diameter, and weighs 380 tonnes (370 long tons; 420 short tons). Anchor piles: Design Anchor piles are hollow steel pipes that are either driven, or inserted into a hole drilled into the seabed and then grouted, similar to pile foundations commonly used in offshore fixed structures. The figure shows the different installation methods, where in the "driven" method, the steel tube is driven mechanically by a hammer, whilst in the "drilled" method a cast in-situ pile is inserted into an oversized borehole constructed with a rotary drill and then grouted with cement. Employment of a particular method depends on the geophysical and geotechnical properties of the seabed. Anchor piles: Anchor piles are typically designed to resist both horizontal and vertical loads. The axial holding capacity of the anchor pile is due to the friction along the pile-soil interface, while the lateral capacity of the pile is generated by lateral soil resistance, where the anchor's orientation is critical to optimising this resistance. As a result, the location of the padeye is placed such that the force from the catenary or taut mooring will result in a moment equilibrium about the point of rotation, to achieve the optimal lateral soil resistance. Anchor piles: Installation Due to the slender nature of anchor piles, there are three installation issues pertaining to driven piles, the first of which is the driveability of the piles at the location, or where excessive soil resistance may prevent penetration to the desired depth. The second issue is the deformation of the piles where tip collapse or buckling occurs due to excessive resistance and a deviation of pile trajectory. The third issue is the geotechnical properties of the soil. Insufficient lateral resistance by the soil may lead to a toppling of the anchor, and rocks and boulders along the penetration trajectory may lead to refusal and tip collapse. Anchor piles: Installation issues pertaining to drilled-and-grouted piles include borehole stability, unwanted soft cuttings at the base of the hole, hydrofracture of the soil leading to loss of grout, and thermal expansion effects.
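Anchor efficiency, defined earlier in this article as the ratio of ultimate holding capacity to the anchor's dry weight, is straightforward to compute and makes the figures quoted above easier to compare. The sketch below works through it for the torpedo-anchor field test mentioned earlier (240 kN dry weight, roughly four times that immediately after installation and about double again after consolidation); the helper function is just illustrative arithmetic, not taken from any design code.

```python
def anchor_efficiency(holding_capacity_kn: float, dry_weight_kn: float) -> float:
    """Anchor efficiency = ultimate holding capacity / dry weight (dimensionless)."""
    return holding_capacity_kn / dry_weight_kn


if __name__ == "__main__":
    dry_weight = 240.0                             # kN, torpedo anchor from the field tests above
    capacity_initial = 4 * dry_weight              # ~4x dry weight immediately after installation
    capacity_consolidated = 2 * capacity_initial   # roughly doubled after 10 days of consolidation

    print("Immediately after installation:",
          anchor_efficiency(capacity_initial, dry_weight))       # -> 4.0
    print("After soil consolidation:",
          anchor_efficiency(capacity_consolidated, dry_weight))  # -> 8.0
    # For comparison, the article quotes drag-embedment anchors at 33 to 50+ under ideal conditions.
```

The comparison shows why torpedo anchors trade efficiency for low fabrication and installation cost, whereas drag-embedment anchors rely on deep embedment to achieve much higher capacity-to-weight ratios.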
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HaeIII** HaeIII: HaeIII is one of many restriction enzymes (endonucleases), a class of prokaryotic enzymes that protect organisms from unknown, foreign DNA. It is a restriction enzyme used in molecular biology laboratories. It was the third endonuclease to be isolated from the bacterium Haemophilus aegyptius. The enzyme's recognition site (the place where it cuts DNA molecules) is the GGCC nucleotide sequence, meaning it cleaves DNA at the site 5′-GG/CC-3′. Restriction enzyme recognition sites are usually around 4–8 bp long; HaeIII's site is 4 bp. This enzyme's gene has been sequenced and cloned. The enzyme is used to generate DNA fragments with blunt ends. HaeIII is not effective for single-stranded DNA cleavage. Properties: HaeIII has a molecular weight of 37,126 daltons. A 2–10-fold overdigestion of a DNA substrate with HaeIII results in 100% of the DNA being cut, more than 50% of the fragments being ligated, and more than 95% being recut. Heat inactivation occurs at about 80 °C for 20 minutes. The locus of the HaeIII enzyme is on AF05137, and is linear with 957 base pairs. History: HaeIII, along with other restriction enzymes, was discovered in 1970 by Werner Arber and Matthew Meselson. The HaeIII methyltransferase (MTase) gene from Haemophilus aegyptius (recognition sequence: 5′-GGCC-3′) was cloned into Escherichia coli (E. coli) in the plasmid vector pBR322. The gene was isolated on a single EcoRI fragment and a single HindIII fragment. Clones carrying additional adjacent fragments were found to code for the HaeIII restriction enzyme. Function: The enzyme cleaves the DNA at the positions where the GGCC sequence is found. The cleavage occurs between the second and the third nucleotides (G and C). The resulting DNA fragments are known as restriction fragments. HaeIII cuts both strands of DNA in the same location, yielding restriction fragments with blunt ends. Heat denaturation occurs at 80 °C after 20 minutes. Methylase: Haemophilus aegyptius also carries a methylase, dubbed HaeIIIM (P20589), that methylates the internal cytosines in the GGCC sequence. It protects sequences from being cut by HaeIII, and together the two form a restriction-modification system. HaeIII enzyme comes from an E. coli strain that carries the cloned HaeIII modification gene from Haemophilus aegyptius.
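Because the article gives both the recognition sequence (GGCC) and the cut position (between GG and CC), an in-silico digest is easy to illustrate. The sketch below scans a sequence for GGCC sites and returns the blunt-ended fragments of the top strand; the example sequence is made up, and the code ignores real-world details such as methylation by HaeIIIM, which would block cutting.

```python
RECOGNITION = "GGCC"
CUT_OFFSET = 2  # HaeIII cuts between the second and third bases: 5'-GG/CC-3'

def haeiii_digest(sequence: str) -> list[str]:
    """Return blunt-ended fragments produced by cutting at every GGCC site."""
    sequence = sequence.upper()
    fragments, start, pos = [], 0, 0
    while True:
        site = sequence.find(RECOGNITION, pos)
        if site == -1:
            break
        cut = site + CUT_OFFSET
        fragments.append(sequence[start:cut])
        start = cut
        pos = site + 1  # continue searching after this site
    fragments.append(sequence[start:])
    return fragments

if __name__ == "__main__":
    # Hypothetical example sequence containing two HaeIII sites.
    seq = "ATGGCCTTAAGGCCGA"
    print(haeiii_digest(seq))  # ['ATGG', 'CCTTAAGG', 'CCGA']
```

Because the enzyme cuts both strands at the same position, the complementary strand breaks at the matching point, which is what produces the blunt (non-overhanging) ends described above.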
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Modern Quantum Mechanics** Modern Quantum Mechanics: Modern Quantum Mechanics, often called Sakurai or Sakurai and Napolitano, is a standard graduate-level quantum mechanics textbook written originally by J. J. Sakurai and edited by San Fu Tuan in 1985, with later editions coauthored by Jim Napolitano. Sakurai died in 1982 before he could finish the textbook, and both the first edition of the book, published in 1985 by Benjamin Cummings, and the revised edition of 1994, published by Addison-Wesley, were edited and completed by Tuan posthumously. The book was subsequently updated by Napolitano, who has released two later editions. The second edition was initially published by Addison-Wesley in 2010 and rereleased as an eBook by Cambridge University Press, which released a third edition in 2020. Table of Contents (3rd edition): Prefaces Chapter 1: Fundamental Concepts Chapter 2: Quantum Dynamics Chapter 3: Theory of Angular Momentum Chapter 4: Symmetry in Quantum Mechanics Chapter 5: Approximation Methods Chapter 6: Scattering Theory Chapter 7: Identical Particles Chapter 8: Relativistic Quantum Mechanics Appendix A: Electromagnetic Units Appendix B: Elementary Solutions to Schrödinger's Wave Equation Appendix C: Hamiltonian for a Charge in an Electromagnetic Field Appendix D: Proof of the Angular-Momentum Rule (3.358) Appendix E: Finding Clebsch-Gordan Coefficients Appendix F: Notes on Complex Variables Bibliography Index Reception: Early editions of the book have received several reviews. It is a standard textbook on the subject: it is recommended in other works, has inspired other textbooks, and is used as a point of comparison in book reviews. Along with Griffiths's Introduction to Quantum Mechanics, the book was also analyzed in a review of the "Philosophical Standpoints of Textbooks in Quantum Mechanics" in June 2020. Publication history: Sakurai, J. J. (1985). Tuan, San Fu (ed.). Modern Quantum Mechanics (1st ed.). Menlo Park, Calif.: Benjamin Cummings. ISBN 0-8053-7501-5. OCLC 11518382. Publication history: Sakurai, J. J. (1994). Tuan, San Fu (ed.). Modern Quantum Mechanics (Rev. ed.). Reading, Mass.: Addison-Wesley. ISBN 0-201-53929-2. OCLC 28065703. (hardcover) Sakurai, J. J.; Napolitano, Jim (2010). Modern Quantum Mechanics (2nd ed.). Boston: Addison-Wesley. ISBN 978-0-8053-8291-4. OCLC 641998678. (hardcover) Sakurai, J. J.; Napolitano, Jim (2017). Modern Quantum Mechanics (2nd ed.). Cambridge. ISBN 978-1-108-49999-6. OCLC 1105708539. (eBook) Sakurai, J. J.; Napolitano, Jim (2020). Modern Quantum Mechanics (3rd ed.). Cambridge. ISBN 978-1-108-47322-4. OCLC 1202949320. (hardcover) Sakurai, J. J.; Napolitano, Jim (2020). Modern Quantum Mechanics (3rd ed.). Cambridge. ISBN 978-1-108-64592-8. OCLC 1202949320. (eBook)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spectral glide** Spectral glide: A spectral glide is a music-composition concept, consisting of a "modification of the vowel quality of a tone" (Erickson 1975, p. 72). Since the vowel quality of a tone is determined by the overtones, spectrum, or timbre of that tone (all three terms describe approximately the same hearing experience), a spectral glide is a move from a spectrum characteristic of one vowel to a spectrum characteristic of another vowel. A spectral glide may be accomplished through a wah-wah, mute, or pedal, or through the modification of one's vocal tract while speaking, singing, or playing an instrument such as the didgeridoo. Lip-vibrated instruments with large mouthpieces such as tuba and trombone allow extensive modification of vowel quality, while woodwinds have a smaller range, with the exception of the flute in air-sound mode. Strings have the smallest range (Erickson 1975, p. 72). Spectral glide: The glide rate and the vowel contrasts used are important factors in the compositional use of spectral glides. Karlheinz Stockhausen specifies the use of a trumpet wa-wa mute in his Punkte (1952/1962/64/66/93) through open and closed circles connected by a line. A. Wayne Slawson's computer-generated Wishful Thinking about Winter (Decca DL 710180) uses speechlike sounds featuring a large range of spectral glide rates. Loren Rush began investigating in 1967 the computer-generated modeling of timbres "in between" familiar instruments such as a bassoon and bass clarinet, and devised a program to provide a smooth transition between timbres (Erickson 1975, p. 73). Sources: Erickson, Robert (1975). Sound Structure in Music. University of California Press. ISBN 0-520-02376-5.
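One way to make the idea of a spectral glide concrete in code is to synthesize a tone whose spectral envelope moves smoothly from one vowel-like formant pattern to another. The sketch below is only an illustration of that principle, not a reconstruction of any piece discussed above; the formant frequencies are rough, assumed values for an "ah"-like and an "ee"-like spectrum, and the synthesis method (additive harmonics shaped by Gaussian formant bumps) is one simple choice among many.

```python
import math
import struct
import wave

SAMPLE_RATE = 22050
DURATION = 3.0                  # seconds for the glide
F0 = 110.0                      # fundamental frequency of the tone, Hz
FORMANTS_A = (700.0, 1200.0)    # rough "ah"-like formants (assumed values)
FORMANTS_B = (300.0, 2300.0)    # rough "ee"-like formants (assumed values)

def envelope(freq, formants, bandwidth=150.0):
    """Weight a harmonic by its closeness to the current formant peaks."""
    return sum(math.exp(-((freq - f) / bandwidth) ** 2) for f in formants)

def sample(t):
    """One output sample: harmonics of F0 shaped by a linearly interpolated envelope."""
    mix = t / DURATION  # 0 -> spectrum A, 1 -> spectrum B (the glide itself)
    formants = [a + mix * (b - a) for a, b in zip(FORMANTS_A, FORMANTS_B)]
    value, n = 0.0, 1
    while n * F0 < min(SAMPLE_RATE / 2, 4000):
        value += envelope(n * F0, formants) * math.sin(2 * math.pi * n * F0 * t)
        n += 1
    return value

if __name__ == "__main__":
    frames = [sample(i / SAMPLE_RATE) for i in range(int(SAMPLE_RATE * DURATION))]
    peak = max(abs(v) for v in frames) or 1.0
    with wave.open("spectral_glide.wav", "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(SAMPLE_RATE)
        out.writeframes(b"".join(
            struct.pack("<h", int(32767 * v / peak)) for v in frames))
```

Changing DURATION or the interpolation curve corresponds to the "glide rate" discussed above, and swapping in different formant pairs changes the vowel contrast.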
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Virtual Cluster Switching** Virtual Cluster Switching: Virtual Cluster Switching (VCS) fabric technology is a Layer 2 proprietary Ethernet technology from Brocade Communications Systems, later acquired by Extreme Networks. It is designed to improve network utilization, maximize application availability, increase scalability, and simplify the network architecture in virtualized data centers. Ethernet Fabrics: Ethernet fabrics encompass Data Center Bridging (DCB) technologies, IEEE 802.1aq and the emerging IETF standard Transparent Interconnection of Lots of Links (TRILL), to provide a more efficient way of moving data throughout the network. An Ethernet fabric is promoted for Fibre Channel over Ethernet (FCoE) and iSCSI storage traffic. Ethernet fabrics have the following characteristics: Flatter: Ethernet fabrics are self-aggregating, enabling a flatter network. Intelligent: Switches in the fabric know about each other and all connected devices. Scalable: All paths are available for high performance and high reliability. Efficient: Traffic automatically travels along the shortest path. Simple: The fabric is managed as a single logical entity. Brocade markets this approach using the term "Ethernet fabric". Brocade SAN fabric technology is currently deployed in over 90 percent of the Global 1000 data centers. With VCS Fabric technology, Brocade aims to bring the same level of innovation to the data center LAN environment. Distributed intelligence: With VCS Fabric technology, all configuration and destination information is distributed to each member switch in the fabric. For example, when a server connects to the fabric for the first time, all switches in the fabric learn about that server. Also, when two VCS-enabled switches are connected, the fabric is automatically created, and the switches discover the common fabric configuration. This fabric configuration is shared amongst all of the switches in the fabric, making it masterless, so no single switch stores configuration information or controls fabric operations. Distributed intelligence: Distributed intelligence enables the automatic migration of port profiles (AMPP), which ensures that the source and destination network ports have the same configuration when virtual machines migrate. Logical chassis: All switches in an Ethernet fabric are managed as if they were a single logical chassis. To the rest of the network, the fabric looks no different than any other single Layer 2 switch. Each physical switch in the fabric is managed as if it were a port module in a chassis. This enables fabric scalability without manual configuration. The logical chassis capability significantly reduces management of small-form-factor edge switches. Instead of managing each top-of-rack switch (or switches in blade server chassis) individually, organizations can manage them as one logical chassis, which further optimizes the network in the virtualized data center and will further enable a cloud computing model. Dynamic services: Dynamic services extend the VCS Fabric technology to incrementally incorporate network services. A dynamic service behaves like a special service module in a modular chassis. Possible fabric services include fabric extension over distance, native Fibre Channel connectivity, Layer 4–7 services such as the Brocade application resource broker, and security services such as firewalls and data encryption. Switches can join an Ethernet fabric, adding a network service layer that is available across the entire fabric. 
Availability: Brocade announced VCS Fabric technology on June 9, 2010 at its annual Technology Day in New York City. It is available as a licensed feature for the Brocade VDX switch family.
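The "distributed intelligence" and "masterless" behaviour described above can be pictured with a small toy model: every switch keeps a full copy of the fabric configuration, and joining the fabric simply merges and redistributes that shared state. The sketch below is a conceptual illustration only; it is not based on Brocade's actual VCS protocol, the VDX switch software, or any real switch API.

```python
class Switch:
    """Toy model of a fabric member that keeps a full copy of shared state."""

    def __init__(self, name: str):
        self.name = name
        self.fabric_config: dict[str, str] = {}   # shared, replicated configuration
        self.peers: set["Switch"] = set()

    def join(self, other: "Switch") -> None:
        """Connect two switches: merge their configs and replicate to every member."""
        merged = {**self.fabric_config, **other.fabric_config}
        members = self._fabric() | other._fabric()
        for member in members:
            member.peers = members - {member}
            member.fabric_config = dict(merged)   # every member holds the same copy

    def learn_device(self, device: str, port: str) -> None:
        """A device attaches to one switch; all fabric members learn about it."""
        for member in self._fabric():
            member.fabric_config[device] = f"{self.name}:{port}"

    def _fabric(self) -> set["Switch"]:
        return {self} | self.peers


if __name__ == "__main__":
    a, b, c = Switch("A"), Switch("B"), Switch("C")
    a.join(b)
    b.join(c)
    a.learn_device("server-1", "eth7")
    print(c.fabric_config)   # {'server-1': 'A:eth7'} -- learned fabric-wide, with no master switch
```

The point of the model is only that no single node owns the configuration: any member can accept a change and the whole fabric converges on the same view, which is the property the article's "logical chassis" management builds on.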
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Marcel Escudier** Marcel Escudier: Marcel Escudier is Professor Emeritus of mechanical engineering in the School of Engineering at the University of Liverpool. He became a Fellow of the Royal Academy of Engineering in 2000. Escudier is a fellow of the Institution of Mechanical Engineers and of the City and Guilds of London Institute. As well as more than 60 papers published in scholarly journals, Escudier is the author of The Essence of Engineering Fluid Mechanics, published by Prentice Hall Europe in 1998, and more recently co-authored A Dictionary of Mechanical Engineering, published by Oxford University Press in 2013.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**One-shot (comics)** One-shot (comics): In comics, a one-shot is a work composed of a single standalone issue or chapter, in contrast to a limited series or ongoing series, which are composed of multiple issues or chapters. One-shots date back to the early 19th century, published in newspapers, and today may take the form of single published comic books, parts of comic magazines/anthologies, or works published online. In the marketing industry, some one-shots are used as promotional tools that tie in with existing productions, movies, video games or television shows. Overview: In the Japanese manga industry, one-shots are called yomikiri (読み切り), a term which implies that the comic is presented in its entirety without any continuation. One-shot manga are often written for contests, and sometimes later developed into a full-length series, much like a television pilot. Many popular manga series began as one-shots, such as Dragon Ball, Fist of the North Star, Naruto, Bleach, One Piece, Berserk, Kinnikuman and Death Note. Rising Stars of Manga was an annual competition for original English-language one-shot manga, many of which have gone on to become full-length manga series. Some noted manga authors, such as Akira Toriyama and Rumiko Takahashi, have worked on numerous one-shot stories in addition to their serialized works. In the United States, one-shots are usually labeled with a "#1" despite there being no following issues, and are sometimes subtitled as "specials". On occasion, a character or concept will appear in a series of one-shots, in cases where the subject matter is not financially lucrative enough to merit an ongoing or limited series, but still popular enough to be published on a regular basis, often annually or quarterly. A current example of a series of one-shots is Marvel Comics' Franklin Richards: Son of a Genius publications. This type of one-shot is not to be confused with a comic book annual, which is typically a companion publication to an established ongoing series. Overview: The term has also been borrowed into the Franco-Belgian comics industry, with basically the same meaning, although there it mostly refers to comic albums. One-shot manga: The comic art histories of different countries and regions have followed divergent paths. Early Japanese comic art, or manga, arose in the 12th century with the Chōjū-jinbutsu-giga ("Animal-person Caricatures") and developed through ukiyo-e ("floating world") prints in the 17th century. Western-style humour comics and caricatures were introduced into Japan in the late 19th century and influenced the styles of comic art. On the other hand, the most significant development of modern-era Japanese comic art came in the aftermath of World War II, when manga further developed into diversified genres. Nowadays, Japan has the largest and most mature manga market in the world. Almost a quarter of all printed materials in Japan are in the form of manga, and its audience spans all ages. Most modern-era one-shot manga (yomikiri 読み切り) have independent settings, characters, and storylines, rather than sharing them with existing works. In Japan and other Asian countries, some one-shot manga serve as springboards used to gauge popularity among the audience. 
The format of a one-shot manga may be changed if it shows broad market prospects: a one-shot manga may become a serialized, continuing manga after adaptation; a one-shot manga may develop into a series of one-shot manga or a serial manga that shares the same world setting and character designs but follows different storylines; and side stories may derive from the original one-shot manga, such as a prequel, a sequel, or a side story about an antagonist or supporting character. One-shot western comics: The prototypes of comic works in Western countries were pamphlets, giveaways, and Sunday newspaper comic sections in the 19th century. These were then developed and published as comic magazines, which were distributed alongside newspaper sales on newsstands. Graphic books in America were also viewed as developing from pamphlets sold on newsstands. Comics were not highly regarded in the early market; during the Depression, for example, comics were used to increase the sales of newspapers and some other products in America. Most comics were one-shot comics before the rise of long continuities in newspaper strips. After some early developments, weekly comic magazines became the major means of dissemination in European comic markets. Influenced by the social upheavals of the 20th century, Western alternative comic art developed quickly, notably in 1970s and 1980s America. America has also seen a boom in superhero comics since the 1930s, and this comic form still dominates the comic market. One-shot western comics: The 19th and early 20th century In this period, comic strips and magazines were the major reading formats leading the markets. Divergent genres such as humour, caricature, and horror were dominant forms of comics at the time. In the very beginning, magazines in America diverged from the comic supplements of newspapers within a decade of their first appearance. In Europe, on the other hand, the magazine format developed as a comic supplement to newspapers along European lines and never lost that identification. It is worth mentioning that comic art developed more rapidly during periods of social upheaval, while comic strips were very topical and aimed at all ages. One-shot western comics: Modern era one-shot comics Since the 1930s, a specific form of comic, the superhero comic, has caused a feeding frenzy in America and further influenced other countries' comic markets. It has dominated the publishing industry for comic art, and most published comic books contained one-shot stories rather than serialized stories. It is also worth mentioning that a single popular protagonist typically carried all the highlights of a superhero comic story. This best-selling model still dominates the American comic market today. In the 1970s, owing to the dislocations of social developments, alternative comic art traditions developed in response to the era. This alternative underground comic movement used comic strips and comic books as mediums for radical change. In more recent years, European albums remain the dominant comic format in their own markets, while superhero comic books, rather than continued stories, dominate the American market. Several large comic book publishers, entertainment companies and animation production companies, such as DC Comics and Marvel Comics, have been established. On another note, Japanese comics have grown in popularity as Japanese-style anthologies have been published in America in recent decades.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ragavendra R. Baliga** Ragavendra R. Baliga: Ragavendra R. Baliga, FACC, FACP, FRCP (Edin) is a Professor of Medicine at The Ohio State University School of Medicine in Columbus, Ohio. He is a consulting editor, along with James B. Young, MD, Executive Dean, Lerner College of Medicine, Cleveland Clinic, Cleveland, Ohio, of Heart Failure Clinics of North America, an indexed medical journal. The journal is known for editorials championing novel and esoteric mechanisms pertaining to cardiac function, including ‘The Heart as the Concertina Pump’, and for suggesting that stiffness of the great arteries contributes to cardiorenal syndrome. The most provocative editorial is a recent one that discusses the role of implantable cardiac defibrillators in sudden death. He is also Vice-Chief of the Division of Cardiovascular Medicine at The Ohio State University Medical Center. Ragavendra R. Baliga: Using pioneering positron emission tomography techniques at the MRC Cyclotron Center at Hammersmith Hospital, London, along with Prof J.S. Kooner, Dr Stuart Rosen and Prof Paolo Camici, he demonstrated that angina occurring after a meal is due to "intramyocardial steal", wherein blood is redistributed from ischemic areas of the myocardium to the normally supplied myocardium in order to maintain overall myocardial blood flow. This mechanistic paper was published in the journal Circulation. Another paper, published in the American Journal of Cardiology and investigating the role of meal components, showed that carbohydrates contribute significantly to the pathogenesis of post-prandial angina. He also worked with Professor Christopher Mathias, FRCP, St. Mary’s Medical School and Imperial College of Science, Technology and Medicine, and Prof Hans L. Frankel, FRCP, National Spinal Injuries Center, Stoke Mandeville Hospital, Aylesbury. Ragavendra R. Baliga: While at Brigham and Women’s Hospital and Harvard Medical School he worked with Thomas Woodward Smith, MD, Chief of Cardiovascular Medicine and Professor of Medicine, Harvard Medical School, and Ralph A Kelly, MD. At that time he worked as part of a team to tease out the intracellular signaling pathways activated in the cardiac myocyte in response to the paracrine growth factor Neuregulin-1. This research shed light on the effects of trastuzumab/Herceptin (a medication used in the treatment of breast cancer) on the heart and was published in the American Journal of Physiology and Journal of Biochemistry. Ragavendra R. Baliga: Baliga has written or edited a number of books but is best known for his book 250 Cases in Clinical Medicine, initially published by Balliere Tindall as 200 Cases in Clinical Medicine in June 1993, and later by W.B. Saunders, an imprint of Elsevier. He wrote this book at the age of 32, and it remains popular among medical students. His subsequent books include Self-assessment in Clinical Medicine (Saunders), now in its 3rd edition, and 500 MCQs for the MRCP Part I (1997), also by Saunders. A more recent book, Practical Cardiology, co-edited with Kim A Eagle, MD, and published by Lippincott Wilkins, is more popular. Early career: Baliga received an MBBS from St. John's Medical College, Bangalore, in 1984 and a post-doctoral degree, Doctor of Medicine, from Bangalore Medical College/Bangalore University in 1988. In 1988, along with Prof Anura Kurpad, MD, he was founding editor of the St. John’s Journal of Medicine, which was subsequently edited by Prof Ashley D’Cruz, MBBS, MS, MCH and Prof Sunitha Simon Kurpad, MD.
After a hiatus the journal was resurrected and has been rechristened the St. John’s Medical Journal. Early career: He then migrated to the UK in 1988 and worked with Prof Hans Frankel, FRCP and Prof Christopher J Mathias, FRCP at the National Spinal Injuries Center affiliated with Stoke Mandeville Hospital, Aylesbury, Oxford Regional Health Authority and St. Mary’s Medical School, Paddington, London. The research he conducted shed light on post-prandial cardiovascular hemodynamics in quadriplegics. Between 1990 and 1992 he worked as a Clinical Tutor at the University of Aberdeen and as a Registrar with Prof James Petrie, FRCP, who later became President of the Royal College of Physicians of Edinburgh, as well as Prof Peter Brunt, FRCP, Prof John Webster, FRCP and Prof Nigel Benjamin, FRCP. From Scotland he moved to the Hammersmith Hospital and Royal Postgraduate Medical School, London, where he worked with Prof J. Kooner, FRCP and Prof Paolo Camici, FRCP at the MRC Cyclotron Center. He was involved with research pertaining to premature coronary artery disease in those hailing from the Indian sub-continent and also investigated post-prandial hemodynamics. Early career: He subsequently migrated to the US to work at Harvard Medical School and Brigham and Women’s Hospital, where he was a tutor on the New Pathway for Harvard medical students. He also worked with Prof Andrew Selwyn, FRCP, Professor at Harvard Medical School. His subsequent experience included working with Dr Wilson S. Colucci, Professor of Medicine and Chief of Cardiology at Boston University Medical Center, and with Dr Clyde Yancy, MD and Dr Mark Drazner, MD at UT Southwestern Medical Center. Notable research papers: Mechanisms of Post-Prandial Angina Pectoris—Circulation, 1998;97:1144-49. Neuregulin Cell Signaling in Cardiac Myocyte—Am J Physiol. 1999 Nov;277(5 Pt 2):H2026-37 Post-Prandial Hemodynamics in Quadriplegics—Clinical Autonomic Research, 1997;7:137-141. Carbohydrates are more likely to cause post-prandial angina—Am J Cardiol, 1997;79:1397-1400 Books: Baliga RR (ed). An Introductory Guide to Cardiac CT Imaging, Lippincott, Williams & Wilkins, 2010 Baliga RR. Statin Prescribing Guide (Oxford American Pocket Notes), Oxford University Press, 2010 Baliga, RR, Abraham WT (eds). Cardiac Resynchronization in Heart Failure, Lippincott, Williams & Wilkins, 2009 Eagle KA, Baliga RR (Eds). Practical Cardiology: Evaluation and Treatment of Common Cardiovascular Disorders, Lippincott, Williams & Wilkins, 2008, pp 688, 2nd edition Raman J, Givertz M, Pitt B, Baliga RR (Eds). Management of Heart Failure, (Springer Verlag), 2008. Books: Baliga RR, Nienaber C, Isselbacher E, Eagle KA. Aortic Dissection and related syndromes. Springer, 2007 Baliga RR. Crash Course (US): Internal Medicine, Mosby, 2006 Baliga RR. Crash Course (US): Cardiology, Mosby, 2005 Baliga RR. 250 Cases in Clinical Medicine, Elsevier, 3rd edition, 2002. Baliga RR. Self-assessment in Clinical Medicine, Saunders, 2003 Baliga RR. MCQs in Clinical Medicine, Saunders, 1999 Baliga RR. MCQs for the MRCP Part I, W.B. Saunders, 1997 Metaphrastic works: Statin Prescribing Guide has been translated into Polish. Management of Heart Failure has been translated into Italian. Editorials: Baliga RR, Young JB. Editorial: Sudden death in heart failure: an ounce of prediction is worth a pound of prevention. Heart Fail Clin. 2011 Apr;7(2):xiii-xviii. PMID 21439492. Baliga RR, Young JB. Editorial: depression in heart failure is double trouble: warding off the blues requires early screening. Heart Fail Clin.
2011 Jan;7(1):xiii-xvii. PMID 21109201. Baliga RR, Young JB. Editorial: Giant strides and baby steps in pediatric cardiac disease and heart failure in children. Heart Fail Clin. 2010 Oct;6(4):xiii-v. PMID 20869640. Baliga RR, Young JB. Staying in the pink of health for patients with cardiorenal anemia requires a multidisciplinary approach. Heart Fail Clin. 2010 Jul;6(3):xi-xvi. PMID 20630399. Baliga RR, Young JB. Editorial. Unleashing our healthy avatars using cardiovascular genetics. Heart Fail Clin. 2010 Apr;6(2):xi-xiii. PMID 20347781. Baliga RR, Young JB. Pharmacogenomics transforming medicine to create a world of immortal Struldbruggs or even a Methuselah? So be it! Heart Fail Clin. 2010 Jan;6(1):xi-xiii. PMID 19945053. Baliga RR, Narula J. Salt never calls itself sweet. Indian J Med Res. 2009 May;129(5):472-7. PMID 19675372. Baliga RR, Young JB. Editorial: Do biomarkers deserve high marks? Heart Fail Clin. 2009 Oct;5(4):ix-xii. PMID 19631170. Baliga RR, Young JB. Using a magnet to strike gold. Heart Fail Clin. 2009 Jul;5(3):ix-x. PMID 19564007. Baliga RR, Young JB. Editorial: Bench to bedside to home: homing-in on therapy that begins at home. Heart Fail Clin. 2009 Apr;5(2):xiii-xiv. PMID 19249682. Baliga RR, Young JB. The race to tissue oxygenation: special teams GoGoGo. Heart Fail Clin. 2009 Jan;5(1):xi-xiv. PMID 19026378. Baliga RR, Young JB. "Stiff central arteries" syndrome: does a weak heart really stiff the kidney? Heart Fail Clin. 2008 Oct;4(4):ix-xii. PMID 18760749. Baliga RR, Young JB. Editorial: the concertina pump. Heart Fail Clin. 2008 Jul;4(3):xiii-xix. PMID 18598975. Baliga RR, Young JB. Statins or status quo? Heart Fail Clin. 2008 Apr;4(2):ix-xii. PMID 18433691. Baliga RR, Young JB. Energizing diastole. Heart Fail Clin. 2008 Jan;4(1):ix-xiii. Review. PMID 18313619. Baliga RR, Young JB. Never too late to drink from the fountain of youth. Heart Fail Clin. 2007 Oct;3(4):xi-xii. PMID 17905373. Honors and awards: Honoris Causa Fellow of Royal College of Physicians, Edinburgh, 2002 Fellow of American College of Cardiology, 2002 Fellow, Royal Society of Medicine, London, 2007 Representative and Visiting Professor of American College of Cardiology (ACC), 3rd Annual Best of ACC, Cardiovascular Medicine, Best Practices Series, India, 2009, Hyderabad, New Delhi and Lucknow, Aug 1-7, 2009, J.P. Das Oration, Indian College of Cardiology, 2010, Prof C.N. Manjunath, MD, DM, Dr U.C. Samal, Prof T.R. Raghu, MD, DM Fellow of American College of Physicians, 2011 Visiting Professor, University of Naples Federico II, 2011, Facoltà di Medicina e Chirurgia, 17 June 2011, Great Hall, Building 2, presented by: Prof. Eduardo Bossone (AOU Salerno), Prof. Antonio Cittadini (Università "Federico II" di Napoli)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Oregon lunar sample displays** Oregon lunar sample displays: The Oregon lunar sample displays are two commemorative plaques consisting of small fragments of Moon specimens brought back by the Apollo 11 and Apollo 17 lunar missions and given in the 1970s to the people of Oregon by United States President Richard Nixon as goodwill gifts. Description: Apollo 11 At the request of Nixon, NASA had about 250 presentation plaques made following Apollo 11 in 1969. Each included about four rice-sized particles of Moon dust from the mission, totaling about 50 mg. The Apollo 11 lunar sample display has an acrylic plastic button containing the Moon dust, mounted with the recipient country's or state's flag that had been to the Moon and back. The displays were given to 135 countries, the 50 states of the United States and its provinces, and the United Nations. The plaques were given as gifts by Nixon in 1970. Description: Apollo 17 The sample Moon rock collected during the Apollo 17 mission was later named lunar basalt 70017 and dubbed the Goodwill rock. Pieces of the rock weighing about 1.14 grams were placed inside a piece of acrylic lucite and mounted along with a flag of the recipient country that had been flown on Apollo 17. In 1973 Nixon had the plaques sent to 135 countries, and to the United States and its territories, as a goodwill gesture. History: The Oregon Apollo 11 lunar sample display is exhibited in the governor's ceremonial office at the Oregon State Capitol. According to Moon rocks researcher Robert Pearlman, the Oregon Apollo 17 "goodwill Moon rock" plaque display is at the Earth Science Hall of the Oregon Museum of Science and Industry in Portland.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Euclid's lemma** Euclid's lemma: In algebra and number theory, Euclid's lemma is a lemma that captures a fundamental property of prime numbers, namely: if a prime p divides the product ab of two integers a and b, then p must divide at least one of those integers. For example, if p = 19, a = 133, b = 143, then ab = 133 × 143 = 19019, and since this is divisible by 19, the lemma implies that one or both of 133 or 143 must be as well. In fact, 133 = 19 × 7. Euclid's lemma: If the premise of the lemma does not hold, i.e., p is a composite number, its consequent may be either true or false. For example, in the case of p = 10, a = 4, b = 15, the composite number 10 divides ab = 4 × 15 = 60, but 10 divides neither 4 nor 15. This property is the key in the proof of the fundamental theorem of arithmetic. It is used to define prime elements, a generalization of prime numbers to arbitrary commutative rings. Euclid's lemma shows that in the integers irreducible elements are also prime elements. The proof uses induction, so it does not apply to all integral domains. Formulations: Euclid's lemma is commonly used in the following equivalent form: if p is a prime number that divides the product ab and does not divide a, then it divides b. Euclid's lemma can be generalized as follows from prime numbers to any integers: if an integer n divides the product ab of two integers and is coprime with a, then n divides b. This is a generalization because a prime number p is coprime with an integer a if and only if p does not divide a. History: The lemma first appears as proposition 30 in Book VII of Euclid's Elements. It is included in practically every book that covers elementary number theory. The generalization of the lemma to integers appeared in Jean Prestet's textbook Nouveaux Elémens de Mathématiques in 1681. In Carl Friedrich Gauss's treatise Disquisitiones Arithmeticae, the statement of the lemma is Euclid's Proposition 14 (Section 2), which he uses to prove the uniqueness of the decomposition of an integer into a product of prime factors (Theorem 16), admitting the existence as "obvious". From this existence and uniqueness he then deduces the generalization of prime numbers to integers. For this reason, the generalization of Euclid's lemma is sometimes referred to as Gauss's lemma, but some believe this usage is incorrect due to confusion with Gauss's lemma on quadratic residues. Proofs: The first two subsections are proofs of the generalized version of Euclid's lemma, namely that: if n divides ab and is coprime with a then it divides b. The original Euclid's lemma follows immediately: if n is prime, then it either divides a, or it does not divide a, in which case it is coprime with a, so per the generalized version it divides b. Using Bézout's identity In modern mathematics, a common proof involves Bézout's identity, which was unknown at Euclid's time. Bézout's identity states that if x and y are coprime integers (i.e. they share no common divisors other than 1 and −1) there exist integers r and s such that rx + sy = 1. Let a and n be coprime, and assume that n∣ab. By Bézout's identity, there are r and s such that rn + sa = 1. Multiply both sides by b: rnb + sab = b. The first term on the left is divisible by n, and the second term is divisible by ab, which by hypothesis is divisible by n. Therefore their sum, b, is also divisible by n. By induction The following proof is inspired by Euclid's version of the Euclidean algorithm, which proceeds by using only subtractions. Suppose that n∣ab and that n and a are coprime (that is, their greatest common divisor is 1). One has to prove that n divides b. Since n∣ab, there is an integer q such that nq=ab. Without loss of generality, one can suppose that n, q, a, and b are positive, since the divisibility relation is independent of the signs of the involved integers.
To prove this by strong induction, we suppose that the result has been proved for all positive lower values of ab. There are three cases: If n = a, coprimality implies n = 1, and n divides b trivially. If n < a, one has n(q−b)=(a−n)b. The positive integers a – n and n are coprime: their greatest common divisor d must divide their sum, and thus divides both n and a. It results that d = 1, by the coprimality hypothesis. So, the conclusion follows from the induction hypothesis, since 0 < (a – n) b < ab. Proofs: Similarly, if n > a one has (n−a)q=a(b−q), and the same argument shows that n – a and a are coprime. Therefore, one has 0 < a (b − q) < ab, and the induction hypothesis implies that n − a divides b − q; that is, b−q=r(n−a) for some integer r. So, (n−a)q=ar(n−a), and, by dividing by n − a, one has q=ar. Proofs: Therefore, ab=nq=anr, and by dividing by a, one gets b=nr, the desired result. Proof of Elements Euclid's lemma is proved as Proposition 30 in Book VII of Euclid's Elements. The original proof is difficult to understand as is, so we quote the commentary from Euclid (1956, pp. 319–332). Proposition 19 If four numbers be proportional, the number produced from the first and fourth is equal to the number produced from the second and third; and, if the number produced from the first and fourth be equal to that produced from the second and third, the four numbers are proportional. Proposition 20 The least numbers of those that have the same ratio with them measures those that have the same ratio the same number of times—the greater the greater and the less the less. Proposition 21 Numbers prime to one another are the least of those that have the same ratio with them. Proposition 29 Any prime number is prime to any number it does not measure. Proposition 30 If two numbers, by multiplying one another, make the same number, and any prime number measures the product, it also measures one of the original numbers. Proofs: Proof of 30 If c, a prime number, measure ab, c measures either a or b. Suppose c does not measure a. Therefore c, a are prime to one another. [VII. 29] Suppose ab=mc. Therefore c : a = b : m. [VII. 19] Hence [VII. 20, 21] b=nc, where n is some integer. Therefore c measures b. Similarly, if c does not measure b, c measures a. Therefore c measures one or other of the two numbers a, b. Q.E.D.
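The Bézout-based argument above is effectively algorithmic: the extended Euclidean algorithm produces the coefficients r and s, from which the divisibility of b follows. The following minimal sketch (in Python, not part of the original article) replays the proof on the article's own numbers.

```python
# Sketch of the Bezout-based proof of the generalized Euclid's lemma.
# Assumes n | ab and gcd(n, a) == 1, and exhibits b as a multiple of n.

def extended_gcd(x, y):
    """Return (g, r, s) with g = gcd(x, y) and r*x + s*y == g."""
    old_r, r = x, y
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def euclid_lemma_witness(n, a, b):
    """Given n | a*b and gcd(n, a) == 1, show that n | b."""
    assert (a * b) % n == 0, "premise n | ab must hold"
    g, r, s = extended_gcd(n, a)
    assert g == 1, "premise gcd(n, a) == 1 must hold"
    # Multiply r*n + s*a == 1 by b:  r*n*b + s*(a*b) == b.
    # Both terms on the left are multiples of n, hence so is b.
    assert r * n + s * a == 1
    assert b == r * n * b + s * (a * b)
    return b // n   # integer quotient, confirming n | b

# Example from the article: 19 divides 133 * 143 = 19019, and 19 does not
# divide 143, so it must divide 133.
print(euclid_lemma_witness(19, 143, 133))   # prints 7, since 133 = 19 * 7
```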
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PX-2** PX-2: PX-2 (also known as 5F-APP-PINACA, FU-PX and PPA(N)-2201) is an indazole-based synthetic cannabinoid that has been sold online as a designer drug. It contains a phenylalanine amino acid amide as part of its structure. Legality: Sweden's public health agency suggested classifying PX-2 as a hazardous substance on November 10, 2014. PX-2 is listed in the Fifth Schedule of the Misuse of Drugs Act (MDA) and has therefore been illegal in Singapore since May 2015. As of October 2015, PX-2 is a controlled substance in China.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BioFabric** BioFabric: BioFabric is an open-source software application for graph drawing. It presents graphs as a node-link diagram, but unlike other graph drawing tools that depict nodes using discrete symbols, it represents nodes using horizontal lines. Rationale: Traditional node-link methods for visualizing networks deteriorate in legibility when dealing with large networks, due to the proliferation of edge crossings that amass into what are disparagingly termed 'hairballs'. BioFabric is one of a number of alternative approaches designed explicitly to tackle this scalability issue. It does so by depicting nodes as lines on the horizontal axis, one per row, and edges as lines on the vertical axis, one per column, terminating at the two rows associated with the endpoint nodes. As such, nodes and edges are each given their own dimension (as opposed to only the edges, with nodes being non-dimensional points). BioFabric exploits the additional degree of freedom thus produced to place the ends of incident edges in groups. This placement can potentially carry semantic information, whereas in node-link graphics the placement is often arbitrarily generated within aesthetic constraints, such as during force-directed graph drawing, and may result in apparently informative artifacts. Rationale: Edges are drawn (vertically) in a darker shade than (horizontal) nodes, creating visual distinction. Additional edges increase the width of the graph. Both ends of a link are represented as squares to reinforce the above effect even at small scales. Directed graphs also incorporate arrowheads. Development: The first version, 1.0.0, was released in July 2012. Development work on BioFabric is ongoing. An open-source R implementation, RBioFabric, was released in 2013 for use with the igraph package, and was subsequently described on the project weblog. Features: Input Networks can be imported using SIF files. Related work: Blakley et al. have described how the technique used by BioFabric, which they refer to as a cartographic representation, can be used to compare the networks A and B by juxtaposing the edges in (A \ B), (A ∩ B), and (B \ A), a technique that is evocative of a Venn diagram. Rossi and Magnani have developed ranked sociograms, a BioFabric-like presentation where the node ordering is based upon a ranking metric. This approach attaches semantic meaning to the length of the edge lines, and can be used to visualize the assortativity or dissortativity of a network.
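The core layout idea (one row per node, one column per edge) can be sketched in a few lines. The following Python fragment is an illustration only, not BioFabric's actual layout code; the degree-based ordering is a simplified stand-in for the tool's real default ordering heuristic.

```python
# Minimal sketch of a BioFabric-style layout: nodes become horizontal rows,
# edges become vertical columns spanning their two endpoint rows.
# Assumes networkx; the ordering below is a simplification for illustration.
import networkx as nx

def fabric_layout(G):
    # Rows: one per node, ordered by descending degree (ties broken by name).
    nodes = sorted(G.nodes(), key=lambda v: (-G.degree(v), str(v)))
    row = {v: i for i, v in enumerate(nodes)}
    # Columns: one per edge, grouped by the higher-placed (lower row index) endpoint.
    edges = sorted(G.edges(), key=lambda e: (min(row[e[0]], row[e[1]]),
                                             max(row[e[0]], row[e[1]])))
    cols = []
    for col, (u, v) in enumerate(edges):
        top, bottom = sorted((row[u], row[v]))
        cols.append((col, top, bottom))   # vertical line from row `top` to row `bottom`
    return row, cols

G = nx.karate_club_graph()
rows, cols = fabric_layout(G)
print(len(rows), "node rows,", len(cols), "edge columns")
```

A renderer would then draw each node as a horizontal line across the columns of its incident edges, which is why adding edges widens the picture rather than tangling it.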
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SetiQuest** SetiQuest: setiQuest is an inactive project of the SETI Institute, whose declared aim is to "globalize the search for extra-terrestrial intelligence and empower a new generation of SETI enthusiasts", by creating means for deeper involvement from the interested public. The project focuses on amplifying the human potential of SETI enthusiasts, and consists of four main fronts: Software, DSP Algorithms, Citizen Scientists, and Data release. Although there is no present activity on setiQuest per se, much of the code and raw data products are still available on its successor site. Jill Tarter's TED wish: The setiQuest project started with Jill Tarter's "TED wish". Tarter was one of three recipients of the 2009 TED prize, which targets outstanding individuals and tries to grant them "one wish to change the world". Tarter's wish was: I wish that you would empower Earthlings everywhere to become active participants in the ultimate search for cosmic company. After the award, the project materialized through a website that later garnered grass-roots public involvement on its discussion forum, and also in IRC meetings, various social networks, and a wiki. Some of these channels were set up by the community itself, and others were facilitated through the support of several institutions, in cooperation with the SETI Institute. Software: The software aspect of the project initially entailed opening the source code of SonATA, the software used on the Allen Telescope Array (ATA), as Open SonATA, to allow improvement by hobbyist programmers who are passionate about the subject or enjoy contributing to open-source development projects. To further this goal, setiQuest was accepted as part of the Google Summer of Code 2011 program. DSP Algorithms: The Algorithms subproject provides a channel for the creation of improved Digital signal processing (DSP) algorithms for detection of signals in the background noise captured by the ATA. The algorithms used on the ATA currently only search for continuous wave signals (that is, signals that appear at one frequency as a single tone) or pulsed signals. The goal of this participation mode was to allow signal processing enthusiasts to expand the search to other waveforms, improve the current algorithms or propose innovative new ones. Citizen Scientists: setiQuest also targets those who are less technically inclined, through the development of game-like apps to engage participants as Citizen Scientists, examining real data from SETI observations with the Allen Telescope Array. These apps were meant to harness the ability of the human brain to instinctively detect patterns, even in non-trivial cases where automated tools currently fail. The first of such apps, setiQuest Explorer, was released in March 2011. Citizen Scientists: Do you wear headphones and listen to music while you work to sharpen your concentration? Could you imagine listening to data instead and responding to anomalies? Data release: To encourage developers to create new apps, the SETI Institute has made a large amount of its data available to the public, under a Creative Commons 3.0 license. This data, a subset of the approximately 100 terabytes collected every day, can be downloaded through the setiQuest website.
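As a rough illustration of what a continuous-wave search does, the sketch below (Python with NumPy) looks for a narrowband tone buried in noise by flagging power-spectrum bins that stand far above the noise floor. This is a toy example under assumed parameters, not setiQuest's or SonATA's actual detection pipeline.

```python
# Toy continuous-wave (single-tone) detector: flag spectral bins whose power
# greatly exceeds the median noise floor. Illustrative only -- real SETI
# pipelines integrate over long observations and track drifting tones.
import numpy as np

rng = np.random.default_rng(0)
fs = 8192.0                                  # sample rate in Hz (arbitrary here)
t = np.arange(65536) / fs
tone = 0.05 * np.sin(2 * np.pi * 1234.0 * t)  # weak injected tone at 1234 Hz
noise = rng.normal(0.0, 1.0, t.size)
signal = tone + noise

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
floor = np.median(spectrum)                  # robust estimate of the noise floor
candidates = freqs[spectrum > 20 * floor]    # bins well above the floor
print("candidate tone frequencies (Hz):", np.round(candidates, 1))
```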
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paceband** Paceband: A paceband is a wristband, sometimes made of a strip of waterproof paper, that lists expected split times for a running race. When used in conjunction with a stopwatch, a paceband can assist athletes in maintaining a steady pace throughout the race, which is the most efficient racing pace from a cardiovascular and muscle-energy perspective. Erratic running speeds, particularly the urge to sprint early in a race while feeling fresh, consume energy inefficiently. A glance at the paceband and stopwatch as each distance marker is passed lets the athlete quickly determine whether they are running too fast or too slowly for their targeted finishing time and adjust accordingly. Paceband: Pre-printed versions for a variety of target finishing times can often be obtained before endurance races such as marathons, or commercially. Many websites allow the free creation of customised pacebands for different distances and target finishing times that can be printed on the visitor's own computer.
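Generating such a band is simple arithmetic: divide the target finishing time evenly across the course's distance markers. The sketch below (Python; an illustrative even-pace example, not any particular paceband generator) prints cumulative kilometre splits for a marathon target time.

```python
# Print an even-pace split table for a paceband: cumulative time expected at
# each kilometre marker for a given target finishing time. Illustrative only;
# real pacebands may also adjust splits for hills or a planned negative split.

MARATHON_KM = 42.195

def splits(target_hms=(4, 0, 0), distance_km=MARATHON_KM):
    h, m, s = target_hms
    total_seconds = h * 3600 + m * 60 + s
    pace = total_seconds / distance_km            # seconds per kilometre
    table = []
    for km in range(1, int(distance_km) + 1):
        t = round(km * pace)
        table.append((km, f"{t // 3600}:{t % 3600 // 60:02d}:{t % 60:02d}"))
    return table

for km, cumulative in splits((4, 0, 0)):          # 4-hour marathon target
    print(f"{km:2d} km  {cumulative}")            # e.g. " 1 km  0:05:41"
```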
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Turn-based MMORPG** Turn-based MMORPG: A turn-based MMORPG is a type of massively multiplayer online role-playing game that utilizes turn-based game flow, meaning that game actions are partitioned into well-defined and visible parts, called turns. A player of a turn-based game is allowed a period of analysis before committing to a game action, ensuring a separation between the game flow and the thinking process. Many turn-based MMORPGs are text-based, but there are a few games which depict their environments with fully animated graphics, such as Atlantica Online. Text-based and browser games: Many multiplayer browser games give the player a set number of turns which are replenished at set increments of time or through in-game items. Early BBS door games were turn-based, and browser games that use server-side scripting (such as PHP, ASP, Ruby, Perl, Python or Java) operate on similar principles. Most of these games use text or a combination of text and still images to depict the game world. Text-based and browser games: Games like Kingdom of Loathing play like traditional turn-based role-playing video games, with players given a new allotment of turns each day. Graphical games: Relatively few fully animated, graphical multiplayer RPGs include turn-based gameplay. Dofus, Dofus Arena and Wakfu are turn-based tactical role-playing games developed by Ankama Games. They feature isometric, sprite-based graphics. Atlantica Online is a strategic, turn-based MMORPG launched in 2008. While the game's battles are turn-based, during each turn the player must issue commands to their characters within a time limit of 30 seconds, making combat fast-paced. Darkwind: War on Wheels is a 3D turn-based car combat MMORPG which, similar to Atlantica Online, provides 30-second periods in which players issue commands to their vehicles. Game turns represent one second of simulated time, and cars are moved using physics calculations. Concerto Gate uses a combat system that combines turn-based and real-time elements, similar to Final Fantasy's "Active Time Battle" system. Wonderland Online Wonderland Online is a 2D MMORPG by IGG. Digimon Battle Even though Digimon Battle was originally released in South Korea nearly 10 years ago, WeMade decided to launch the game in North America in March 2010. Wizard101 Wizard101 is a 3D turn-based MMORPG with card-based gameplay. Toontown Online is a 3D turn-based game inspired by the cartoon world of Disney; while the official game has shut down, multiple fan-made servers have taken its place.
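Turn replenishment of the kind described above (a fixed allotment restored on a timer) is straightforward to model. The following sketch (Python; a generic illustration with made-up parameter values, not the mechanics of any specific game) grants one turn per elapsed interval up to a cap.

```python
# Generic turn-replenishment model: players hold a capped pool of turns that
# refills at a fixed rate; an action is only allowed if a turn is banked.
# Parameter values here are illustrative, not taken from any particular game.
import time

class TurnPool:
    def __init__(self, cap=40, seconds_per_turn=360.0, now=time.time):
        self.cap = cap
        self.seconds_per_turn = seconds_per_turn
        self.now = now
        self.turns = cap                     # start with a full pool
        self.last_update = now()

    def _refill(self):
        elapsed = self.now() - self.last_update
        gained = int(elapsed // self.seconds_per_turn)
        if gained:
            self.turns = min(self.cap, self.turns + gained)
            self.last_update += gained * self.seconds_per_turn

    def spend(self, n=1):
        """Try to spend n turns; return True on success."""
        self._refill()
        if self.turns >= n:
            self.turns -= n
            return True
        return False

pool = TurnPool(cap=5, seconds_per_turn=0.01)
print(pool.spend(5), pool.spend(1))   # True False: pool emptied, not yet refilled
time.sleep(0.05)
print(pool.spend(1))                  # True once the timer has restored turns
```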
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cycrimine** Cycrimine: Cycrimine (trade name Pagitane) is a central anticholinergic drug designed to reduce the levels of acetylcholine in the treatment of Parkinson's disease. Its mechanism of action is to bind to the muscarinic acetylcholine receptor M1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Generic Routing Encapsulation** Generic Routing Encapsulation: Generic Routing Encapsulation (GRE) is a tunneling protocol developed by Cisco Systems that can encapsulate a wide variety of network layer protocols inside virtual point-to-point links or point-to-multipoint links over an Internet Protocol network. Example uses: In conjunction with PPTP to create VPNs. In conjunction with IPsec VPNs to allow passing of routing information between connected networks. In mobility protocols. In A8/A10 interfaces to encapsulate IP data to/from Packet Control Function (PCF). Linux and BSD can establish ad-hoc IP over GRE tunnels which are interoperable with Cisco equipment. Tunneling traffic from a distributed denial of service (DDoS) protected appliance to an unprotected endpoint. Example protocol stack Based on the principles of protocol layering in OSI, protocol encapsulation, not specifically GRE, breaks the layering order. It may be viewed as a separator between two different protocol stacks, one acting as a carrier for another. Delivery protocols: GRE packets that are encapsulated within IP directly use IP protocol type 47 in the IPv4 header's Protocol field or the IPv6 header's Next Header field. For performance reasons, GRE can also be encapsulated in UDP packets. Better throughput may be achieved by using Equal-cost multi-path routing. Packet header: Extended GRE packet header (RFC 2890) The extended version of the GRE packet header is represented below: C (1 bit) Checksum bit. Set to 1 if a checksum is present. K (1 bit) Key bit. Set to 1 if a key is present. S (1 bit) Sequence number bit. Set to 1 if a sequence number is present. Reserved 0 (9 bits) Reserved bits; set to 0. Version (3 bits) GRE Version number; set to 0. Protocol Type (16 bits) Indicates the ether protocol type of the encapsulated payload. (For IPv4, this would be hex 0800.) Checksum (16 bits) Present if the C bit is set; contains the checksum for the GRE header and payload. Reserved 1 (16 bits) Present if the C bit is set; set to 0. Key (32 bits) Present if the K bit is set; contains an application-specific key value. Sequence Number (32 bits) Present if the S bit is set; contains a sequence number for the GRE packet. Standard GRE packet header (RFC 2784) A standard GRE packet header structure is represented in the diagram below. C (1 bit) Checksum bit. Set to 1 if a checksum is present. Reserved 0 (12 bits) Reserved bits; set to 0. Version (3 bits) GRE Version number; set to 0. Protocol Type (16 bits) Indicates the ether protocol type of the encapsulated payload. (For IPv4, this would be hexadecimal 0x0800; for IPv6, it would be 0x86DD.) Checksum (16 bits) Present if the C bit is set; contains the checksum for the GRE header and payload. Reserved 1 (16 bits) Present if the C bit is set; its contents are set to 0. Original GRE packet header (RFC 1701) The newer structure superseded the original one. The original GRE RFC defined further fields in the packet header which became obsolete in the current standard: C (1 bit) Checksum bit. Set to 1 if a checksum is present. R (1 bit) Routing bit. Set to 1 if Routing and Offset information are present. K (1 bit) Key bit. Set to 1 if a key is present. S (1 bit) Sequence number bit. Set to 1 if a sequence number is present. s (1 bit) Strict source route bit. Recur (3 bits) Recursion control bits. Flags (5 bits) Reserved for future use; set to 0. Version (3 bits) Set to 0. Protocol Type (16 bits) Indicates the ether protocol type of the encapsulated payload.
Checksum (16 bits) Present if the C bit is set; contains the checksum for the GRE header and payload. Offset (16 bits) Present if the R bit or C bit is set; contains valid information only if the R bit is set. It indicates the offset within the Routing field to the active source route entry. Key (32 bits) Present if the K bit is set; contains an application-specific key value. Sequence Number (32 bits) Present if the S bit is set; contains a sequence number for the GRE packet. Routing (variable) Present if the R bit is set; contains a list of source route entries and is therefore of variable length. PPTP GRE packet header The Point-to-Point Tunneling Protocol (PPTP) uses a variant GRE packet header structure, represented below. PPTP creates a GRE tunnel through which the PPTP GRE packets are sent. C (1 bit) Checksum bit. For PPTP GRE packets, this is set to 0. R (1 bit) Routing bit. For PPTP GRE packets, this is set to 0. K (1 bit) Key bit. For PPTP GRE packets, this is set to 1. (All PPTP GRE packets carry a key.) S (1 bit) Sequence number bit. Set to 1 if a sequence number is supplied, indicating a PPTP GRE data packet. s (1 bit) Strict source route bit. For PPTP GRE packets, this is set to 0. Recur (3 bits) Recursion control bits. For PPTP GRE packets, these are set to 0. A (1 bit) Acknowledgment number present. Set to 1 if an acknowledgment number is supplied, indicating a PPTP GRE acknowledgment packet. Flags (4 bits) Flag bits. For PPTP GRE packets, these are set to 0. Version (3 bits) GRE Version number. For PPTP GRE packets, this is set to 1. Protocol Type (16 bits) For PPTP GRE packets, this is set to hex 880B. Key Payload Length (16 bits) Contains the size of the payload, not including the GRE header. Key Call ID (16 bits) Contains the Peer's Call ID for the session to which the packet belongs. Sequence Number (32 bits) Present if the S bit is set; contains the GRE payload sequence number. Acknowledgement Number (32 bits) Present if the A bit is set; contains the sequence number of the highest GRE payload packet received by the sender. Standards: RFC 1701: Generic Routing Encapsulation (GRE) (informational) RFC 1702: Generic Routing Encapsulation over IPv4 networks (informational) RFC 2637: Point to Point Tunneling Protocol (informational) RFC 2784: Generic Routing Encapsulation (GRE) (proposed standard, updated by RFC 2890) RFC 2890: Key and Sequence Number Extensions to GRE (proposed standard) RFC 8086: GRE-in-UDP Encapsulation (proposed standard)
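The RFC 2784 layout described above maps directly onto a fixed 32-bit base word (flags, reserved bits, version) followed by the protocol type and optional fields. The following sketch (Python; an illustration of that bit layout, not a production GRE implementation, and with the checksum computation deliberately omitted) packs and unpacks a minimal standard header.

```python
# Minimal sketch of the RFC 2784 GRE base header: a 16-bit flags/version word,
# a 16-bit protocol type, and (only if the C bit is set) 16-bit Checksum plus
# 16-bit Reserved1 fields. No routing/key/sequence extensions are handled here.
import struct

ETHERTYPE_IPV4 = 0x0800

def build_gre_header(protocol_type=ETHERTYPE_IPV4, with_checksum=False):
    flags_and_version = 0
    if with_checksum:
        flags_and_version |= 0x8000        # C bit is the most significant bit
    header = struct.pack("!HH", flags_and_version, protocol_type)
    if with_checksum:
        # Checksum and Reserved1 follow; the checksum itself (computed over the
        # GRE header and payload) is left as zero in this sketch.
        header += struct.pack("!HH", 0, 0)
    return header

def parse_gre_header(data):
    flags_and_version, protocol_type = struct.unpack("!HH", data[:4])
    return {
        "checksum_present": bool(flags_and_version & 0x8000),
        "version": flags_and_version & 0x0007,   # low 3 bits of the first word
        "protocol_type": hex(protocol_type),
    }

print(parse_gre_header(build_gre_header(with_checksum=True)))
# {'checksum_present': True, 'version': 0, 'protocol_type': '0x800'}
```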
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded