**2-hydroxyphytanoyl-CoA lyase** 2-hydroxyphytanoyl-CoA lyase: 2-Hydroxyphytanoyl-CoA lyase is a peroxisomal enzyme involved in the catabolism of phytanic acid by α-oxidation. It requires thiamine diphosphate (ThDP) as a cofactor and is classified under EC number 4.1.
**Inverted cone filtration** Inverted cone filtration: Inverted cone filtration (ICF) is a process used to remove particulate and dissolved contaminants from a fluid such as water. In a typical design, fluid enters the narrow top end of the filter and falls into a chamber where a pressure head builds up, forcing the fluid across woven filter media. The pressure differential between the inner and outer walls of the filter is created by a low-pressure outlet pipe that carries away the treated fluid. Practical application: Inverted cone filtration has been used successfully in stormwater quality applications. The filter is made with monofilament polypropylene, which serves as a barrier to solid and dissolved particulate matter.
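The head-driven design described above lends itself to a quick back-of-envelope check. The sketch below estimates the hydrostatic pressure differential available to push fluid across the media; the fluid properties and the 0.5 m head are assumed illustrative values, not figures from the article.

```python
# Hydrostatic pressure head driving flow across the filter media.
# Assumed values for illustration: water at standard density, 0.5 m head.
RHO_WATER = 1000.0  # kg/m^3, density of water
G = 9.81            # m/s^2, gravitational acceleration

def pressure_head_pa(height_m: float) -> float:
    """Pressure (Pa) developed by a fluid column of the given height."""
    return RHO_WATER * G * height_m

# A 0.5 m column above the media yields roughly 4.9 kPa across the filter wall.
print(pressure_head_pa(0.5))  # 4905.0
```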
**Methylene green** Methylene green: Methylene green is a heterocyclic aromatic chemical compound similar to methylene blue. It is used as a dye and functions as a visible-light-activated photocatalyst in organic synthesis.
**Lateritic nickel ore deposits** Lateritic nickel ore deposits: Lateritic nickel ore deposits are surficial, weathered rinds formed on ultramafic rocks. They account for 73% of continental world nickel resources and are expected to become the dominant source of mined nickel.

Genesis and types of nickel laterites: Lateritic nickel ores formed by intensive tropical weathering of olivine-rich ultramafic rocks such as dunite, peridotite and komatiite, and of their serpentinized derivative, serpentinite, which consists largely of the magnesium silicate serpentine and contains approximately 0.3% nickel. This initial nickel content is strongly enriched in the course of lateritization. Two kinds of lateritic nickel ore have to be distinguished: limonite types and silicate types. Limonite-type laterites (or oxide type) are highly enriched in iron due to very strong leaching of magnesium and silica. They consist largely of goethite and contain 1-2% nickel incorporated in the goethite. Where the limonite zone is absent from a deposit, it has been removed by erosion.

Genesis and types of nickel laterites: Strong weathering of ultramafic rocks at the Earth's surface under humid conditions causes nickel laterites to form. Laterites form as minerals break down and leach into groundwater, leaving a residuum that consolidates into laterite. Nickel reaches usable ore grade by being incorporated into these newly formed, stable minerals. Silicate-type (or saprolite-type) nickel ore formed beneath the limonite zone. It generally contains 1.5-2.5% nickel and consists largely of Mg-depleted serpentine, in which the nickel is incorporated. In pockets and fissures of the serpentinite rock, green garnierite can be present in minor quantities but with high nickel contents - mostly 20-40%. It is bound in newly formed phyllosilicate minerals. All the nickel in the silicate zone is leached downwards (absolute nickel concentration) from the overlying goethite zone.

Ore deposits: Typical nickel laterite ore deposits are very large tonnage, low-grade deposits located close to the surface. They are typically in the range of 20 million tonnes and upwards (at 1% nickel, a contained resource of 200,000 tonnes), with some examples approaching a billion tonnes of material. Thus, nickel laterite ore deposits typically contain many billions of dollars of in-situ value of contained metal.

Ore deposits: Ore deposits of this type are restricted to the weathering mantle developed above ultramafic rocks. As such, they tend to be tabular, flat-lying and laterally extensive, covering many square kilometres of the Earth's surface. However, at any one time the area of a deposit being worked for nickel ore is much smaller, usually only a few hectares. The typical nickel laterite mine operates as either an open-cut mine or a strip mine.

Extraction: Nickel laterites are a very important type of nickel ore deposit. They are on track to become the most important source of nickel metal for world demand (currently second to sulfide nickel ore deposits).

Extraction: Nickel laterites are generally mined via open-cut mining methods. Nickel is extracted from the ore by a variety of process routes. Hydrometallurgical processes include high-pressure acid leach (HPAL) and heap leach, both of which are generally followed by solvent extraction - electrowinning (SX-EW) for recovery of nickel. Another hydrometallurgical route is the Caron process, which consists of roasting followed by ammonia leaching and precipitation as nickel carbonate. Additionally, ferronickel is produced by the rotary kiln electric furnace (RKEF) process.

Extraction: HPAL process High-pressure acid leach processing is employed for two types of nickel laterite ores. The first are ores with a limonitic character, such as the deposits of the Moa district in Cuba and of southeast New Caledonia at Goro, where nickel is bound in goethite and asbolan.

Extraction: The second are ores of a predominantly nontronitic character, such as many deposits in Western Australia, where nickel is bound within clay or secondary silicate substrates in the ores. The nickel (and any cobalt) is liberated from such minerals only at low pH and high temperatures, generally in excess of 250 °C. The advantage of HPAL plants is that they are less selective about ore mineralogy, grade and nature of mineralisation. The disadvantages are the energy required to heat the ore material and acid, and the wear and tear that hot acid causes on plant and equipment. Higher energy costs demand higher ore grades.

Extraction: Heap (atmospheric) leach Heap leach treatment of nickel laterites is primarily applicable to clay-poor, oxide-rich ore types, where clay contents are low enough to allow percolation of acid through the heap. Generally, this route of production is much cheaper - up to half the cost of production - because there is no need to heat and pressurise the ore and acid.

Extraction: Ore is ground, agglomerated, and perhaps mixed with clay-poor rock to prevent compaction of the clay-like materials and so maintain permeability. The ore is stacked on impermeable plastic membranes and acid is percolated over the heap, generally for 3 to 4 months, by which stage 60% to 70% of the nickel-cobalt content has been liberated into acid solution. The solution is then neutralised with limestone, generating a nickel-cobalt hydroxide intermediate product that is generally sent to a smelter for refining.

Extraction: The advantage of heap leach treatment of nickeliferous laterite ores is that the plant and mine infrastructure are much cheaper - as little as 25% of the cost of an HPAL plant - and less risky from a technological point of view. However, heap leach plants are somewhat limited in the types of ore they can treat.

Extraction: FerroNickel process A recent development in the extraction of nickel laterite ores is the exploitation of a particular grade of tropical deposits, typified by examples at Acoje in the Philippines, developed on ophiolite-sequence ultramafics. This ore is so rich in limonite (generally grading 47% to 59% iron, 0.8 to 1.5% nickel and trace cobalt) that it is essentially similar to low-grade iron ore. As such, certain steel smelters in China have developed a process for blending nickel limonite ore with conventional iron ore to produce stainless steel feed products.

Extraction: Nitric acid hydrometallurgical tank leach Another new method of extracting nickel from laterite ores is currently being demonstrated at a pilot-scale test plant at the CSIRO facility in Perth, Australia. The DNi process uses nitric acid, instead of sulphuric acid, to extract the nickel within a few hours, after which the nitric acid is recycled. The DNi process has the major advantage of being able to treat both limonite and saprolite lateritic ores, and is estimated to have less than half the capital and operating costs of the HPAL or FerroNickel processes.
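The tonnage and grade figures quoted above make the contained-metal arithmetic easy to check. The sketch below reproduces the article's 20-million-tonne, 1%-nickel example; the US$16,000/t nickel price used for the in-situ value is an assumed round figure for illustration, not a number from the article.

```python
# Contained metal and rough in-situ value from ore tonnage and grade.
def contained_nickel_tonnes(ore_tonnes: float, grade_pct: float) -> float:
    """Tonnes of nickel contained in an orebody of the given size and grade."""
    return ore_tonnes * grade_pct / 100.0

ni = contained_nickel_tonnes(20_000_000, 1.0)
print(ni)            # 200000.0 t Ni, matching the article's example
print(ni * 16_000)   # ~US$3.2 billion in situ at an assumed US$16,000/t Ni
```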
**I3C (bus)** I3C (bus): I3C is a specification to enable communication between computer chips by defining the electrical connection between the chips and the signaling patterns to be used. Short for "Improved Inter Integrated Circuit", the standard defines the electrical connection between the chips to be a two-wire, shared (multidrop), serial data bus, one wire (SCL) being used as a clock to define the sampling times, the other wire (SDA) being used as a data line whose voltage can be sampled. The standard defines a signalling protocol in which multiple chips can control communication and thereby act as the bus controller.

I3C (bus): The I3C specification takes its name from, uses the same electrical connections as, and allows some backward compatibility with, the I²C bus, a de facto standard for inter-chip communication widely used for low-speed peripherals and sensors in computer systems. The I3C standard is designed to retain some backward compatibility with the I²C system, notably allowing designs where existing I²C devices can be connected to an I3C bus while the bus can still switch to a higher data rate for communication between compliant I3C devices. The I3C standard thereby combines the advantage of the simple, two-wire I²C architecture with the higher communication speeds common to more complicated buses such as the Serial Peripheral Interface (SPI).

I3C (bus): The I3C standard was developed as a collaborative effort between electronics and computer-related companies under the auspices of the Mobile Industry Processor Interface Alliance (MIPI Alliance). The I3C standard was first released to the public at the end of 2017, although access requires the disclosure of private information. Google and Intel have backed I3C as a sensor interface standard for Internet of things (IoT) devices.

History: Goals of the MIPI Sensor Working Group effort were first announced in November 2014 at the MEMS Executive Congress in Scottsdale, AZ. Electronic design automation tool vendors including Cadence, Synopsys and Silvaco have released controller IP blocks and associated verification software for the implementation of the I3C bus in new integrated circuit designs. In December 2016, Lattice Semiconductor integrated I3C support into its new iCE40 UltraPlus FPGA. In 2017, Qualcomm announced the Snapdragon 845 mobile SoC with integrated I3C controller support. In December 2017, the I3C 1.0 specification was released for public review. At about the same time, a Linux kernel patch introducing support for I3C was proposed by Boris Brezillon. In 2021, DDR5 introduced I3C. In June 2022, Renesas Electronics introduced the first I3C intelligent switch products.

Goals: Prior to public release of the specification, a substantial amount of general information about it was published in the form of slides from the 2016 MIPI DevCon. The goals for this interface were based on a survey of MIPI member organizations and MEMS Industry Group (MIG) members; the results of this survey have been made public.

I3C v1.0 The initial I3C design sought to improve over I²C in the following ways: a two-pin interface that is a superset of the I²C standard, so that legacy I²C target devices can be connected to the newer bus; a low-power, space-efficient design intended for mobile devices (smartphones and IoT devices); in-band interrupts over the serial bus rather than requiring separate pins (in I²C, interrupts from peripheral devices typically require an additional non-shared pin per package); Standard Data Rate (SDR) throughput between 10 and 12.5 Mbit/s using CMOS I/O levels; and High Data Rate (HDR) modes permitting multiple bits per clock cycle, which support throughput comparable to SPI while requiring only a fraction of I²C Fast Mode power.

Goals: Further goals were a standardized set of common command codes; command queue support; error detection and recovery (a parity check in SDR mode and a 5-bit CRC for HDR modes); dynamic address assignment (DAA) for I3C targets, while still supporting static addresses for legacy I²C devices; I3C traffic that is invisible to legacy I²C devices equipped with I²C spike filters, achieved by SCL high times of less than 50 ns; hot-join (some devices on the bus may be powered on or off during operation); and multi-controller operation with a well-defined protocol for hand-off between controllers.

I3C Basic Specification After making the I3C 1.0 standard publicly accessible, the organization subsequently published the I3C Basic specification, a subset intended to be implementable by non-member organizations under a RAND-Z licence. I3C Basic allows royalty-free implementation of I3C, and is intended for organizations that may view MIPI membership as a barrier to adoption. The Basic version includes many of the protocol innovations in I3C 1.0, but lacks some of the potentially more difficult-to-implement ones, such as the optional high data rate (HDR) modes like DDR. Nonetheless, the default SDR mode at up to 12.5 Mbit/s is a major speed and capacity improvement over I²C.

Goals: I3C v1.1 Published in December 2019, this specification is only available to MIPI members. I3C v1.1.1 Published in June 2021, it deprecated the terms "master/slave" in favour of the updated normative terms "controller/target." The technical definitions of such devices, and their roles on an I3C bus, remain unchanged.

Nomenclature: Signal pins I3C uses the same two signal pins as I²C, referred to as SCL (serial clock) and SDA (serial data). The primary difference is that I²C operates them as open-drain outputs at all times, so its speed is limited by the resultant slow signal rise time. I3C uses open-drain mode when necessary for compatibility, but switches to push-pull outputs whenever possible, and includes protocol changes to make that possible more often than in I²C.

Nomenclature: SCL is a conventional digital clock signal, driven with a push-pull output by the current bus controller during data transfers. (Clock stretching, a frequently used I²C feature, is not supported.) In transactions involving I²C target devices, this clock signal generally has a duty cycle of approximately 50%, but when communicating with known I3C targets, the bus controller may switch to a higher frequency and/or alter the duty cycle so that the SCL high period is limited to at most 40 ns.

Nomenclature: SDA carries the serial data stream, which may be driven by either controller or target, but is driven at a rate determined by the controller's SCL signal. For compatibility with the I²C protocol, each transaction begins with SDA operating as an open-drain output, which limits the transmission speed. For messages addressed to an I3C target, the SDA driver mode switches to push-pull after the first few bits of the transaction, allowing the clock to be increased up to 12.5 MHz. This medium-speed feature is called standard data rate (SDR) mode. Generally, SDA is changed just after the falling edge of SCL, and the resultant value is received on the following rising edge. When the controller hands SDA over to the target, it likewise does so on the falling edge of SCL. However, when the target hands control of SDA back to the controller (e.g. after acknowledging its address before a write), it releases SDA on the rising edge of SCL, and the controller is responsible for holding the received value for the duration of SCL high. (Because the controller drives SCL, it sees the rising edge first, so there is a brief period of overlap when both are driving SDA; but as they are both driving the same value, no bus contention occurs.)

Framing All communication in I²C and I3C requires framing for synchronization. Within a frame, changes on the SDA line should always occur while SCL is in the low state, so that SDA can be considered stable on the low-to-high transition of SCL. Violations of this general rule are used for framing (at least in legacy and standard data rate modes).

Nomenclature: Between data frames, the bus controller holds SCL high, in effect stopping the clock, and SDA drivers are in a high-impedance state, allowing a pull-up resistor to pull the line high. A high-to-low transition of SDA while SCL is high is known as a START symbol and signals the beginning of a new data frame. A low-to-high transition on SDA while SCL is high is the STOP symbol, ending a data frame.

Nomenclature: A START without a preceding STOP, called a "repeated START", may be used to end one message and begin another within a single bus transaction.

Nomenclature: In I²C, the START symbol is usually generated by a bus controller, but in I3C, even target devices may pull SDA low to indicate that they want to start a frame. This is used to implement some advanced I3C features, such as in-band interrupts, multi-controller support, and hot-joins. After the start, the bus controller restarts the clock by driving SCL, and begins the bus arbitration process.

Nomenclature: Ninth bit Like I²C, I3C uses 9 clock cycles to send each 8-bit byte. However, the 9th cycle is used differently. I²C uses the last cycle for an acknowledgement sent in the opposite direction to the first 8 bits. I3C operates the same way for the first (address) byte of each message, and for I²C-compatible messages, but when communicating with I3C targets, message bytes after the first use the 9th bit as an odd parity bit on writes, and as an end-of-data flag on reads.

Nomenclature: Writes may be terminated only by the controller.

Nomenclature: Either the controller or the target may terminate a read. The target sets SDA low to indicate that no more data is available; the controller responds by taking over SDA and generating a STOP or repeated START. To allow a read to continue, the target drives SDA high while SCL is low before the 9th bit, but lets SDA float (open-drain) while SCL is high. The controller may drive SDA low (a repeated START condition) at this time to abort the read.

Nomenclature: Bus arbitration At the start of a frame, several devices may contend for use of the bus, and the bus arbitration process serves to select which device obtains control of the SDA line. In both I²C and I3C, bus arbitration is done with the SDA line in open-drain mode, which allows devices transmitting a binary 0 (low) to override devices transmitting a binary 1. Contending devices monitor the SDA line while driving it in open-drain mode. Whenever a device detects a low condition (0 bit) on SDA while transmitting a high (1 bit), it has lost arbitration and must cease contending until the next transaction begins.
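Because the open-drain SDA line behaves as a wired-AND of all drivers, this arbitration rule is easy to model. The sketch below is a minimal illustration of that mechanism, not spec-conformant controller logic; the device names and the assumed 8-bit first byte (7-bit address plus RnW bit) are illustrative, and the reserved broadcast address 0x7E is discussed below.

```python
# Minimal model of open-drain arbitration: the bus is a wired-AND of all
# drivers, so any device driving 0 pulls the line low.
def arbitrate(contenders: dict[str, int], bits: int = 8) -> str:
    """Return the surviving device as each contender shifts out its first
    byte (7-bit address + RnW bit) MSB-first on the shared SDA line."""
    active = dict(contenders)
    for i in range(bits - 1, -1, -1):
        line = 0 if any((v >> i) & 1 == 0 for v in active.values()) else 1
        # A device that drove 1 but sees 0 has lost; it backs off until
        # the next transaction begins.
        active = {n: v for n, v in active.items() if (v >> i) & 1 == line}
    return next(iter(active))

# The reserved address 0x7E (binary 1111110) loses to any assigned address,
# so a target raising an in-band interrupt wins over the controller here.
winner = arbitrate({"controller_0x7E_W": 0x7E << 1,
                    "target_0x30_IBI": (0x30 << 1) | 1})
print(winner)  # target_0x30_IBI
```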
Nomenclature: Each transaction begins with the target address, and arbitration gives priority to lower-numbered target addresses. The difference is that I²C has no limit on how long arbitration can last (in the rare but legal situation of several devices contending to send a message to the same device, the contention will not be detected until after the address byte). I3C, however, guarantees that arbitration will be complete no later than the end of the first byte. This allows push-pull drivers and faster clock rates to be used the great majority of the time.

Nomenclature: This is achieved in several ways. I3C supports multiple controllers, but they are not symmetrical; one is the current controller and is responsible for generating the clock. Other devices sending a message on the bus (in-band interrupts or secondary controllers wishing to use the bus) must arbitrate using their own address before sending any other data. Thus, no two legal bus messages share the same first byte, except when the controller and another device are simultaneously communicating with each other.

Nomenclature: I3C, like I²C, allows multiple messages per transaction, separated by "repeated START" symbols. Arbitration is per-transaction, so these subsequent messages are never subject to arbitration. Most I3C controller transactions begin with the reserved address 0x7E (binary 1111110). As this has a lower priority than any I3C device, once it has passed arbitration, the controller knows that no other device is contending for the bus.

Nomenclature: As a special case, if I3C devices are assigned low addresses (I3C supports dynamic, controller-controlled address assignment), then as soon as the 0x7E address has won arbitration for enough leading bits to distinguish it from any assigned address, the controller knows that arbitration is complete and may switch to push-pull operation on SDA. If all assigned addresses are less than 0x40, this is after the first bit; if all addresses are less than 0x60, after the second bit; and so on.

Nomenclature: In the case described above, wherein the current controller begins a transaction with the address of a device which is itself contending for use of the bus, both will transmit their address bytes successfully. However, each will expect the other to acknowledge the address (by pulling SDA low) during the following acknowledge bit. Consequently, neither will, and both will observe the lack of acknowledgement. In this case, the message is not sent, but the controller wins arbitration: it may send a repeated START, followed by a retry which will be successful.

Nomenclature: Common command codes A write addressed to the reserved address 0x7E is used to perform a number of special operations in I3C. All I3C devices must receive and interpret writes to this address in addition to their individual addresses.

Nomenclature: First of all, a write consisting of just the address byte and no data bytes has no effect on I3C targets, but may be used to simplify I3C arbitration. As described above, this prefix may speed up arbitration (if the controller supports the optimization of switching to push-pull mid-byte), and it simplifies the controller by avoiding a slightly tricky arbitration case.

Nomenclature: If the write is followed by a data byte, the byte encodes a "common command code" (CCC), a standardized I3C operation. Command codes 0–0x7F are broadcast commands addressed to all I3C targets; they may be followed by additional, command-specific parameters. Command codes 0x80–0xFE are direct commands addressed to individual targets; these are followed by a series of repeated STARTs and writes or reads to specific targets.

Nomenclature: While a direct command is in effect, per-target writes or reads convey command-specific parameters. This operation takes the place of the target's normal response to an I3C message. One direct command may be followed by multiple per-target messages, each preceded by a repeated START. This special mode ends at the end of the transaction (STOP symbol) or at the next message addressed to 0x7E.

Nomenclature: Some command codes exist in both broadcast and direct forms. For example, the commands to enable or disable in-band interrupts may be sent to individual targets or broadcast to all. Commands to get parameters from a target (for example, the GETHDRCAP command to ask a device which high-data-rate modes it supports) exist only in direct form.

Device classes: On an I3C bus in its default (SDR) mode, four different classes of devices can be supported: I3C main controller, I3C secondary controller, I3C target, and I²C target (legacy devices).

High Data Rate (HDR) options: Each I3C bus transaction begins in SDR mode, but the I3C controller may issue an "Enter HDR" CCC broadcast command which tells all I3C targets that the transaction will continue in a specified HDR mode. I3C targets which do not support HDR may then ignore bus traffic until they see a specific "HDR exit" sequence which informs them it is time to listen to the bus again. (The controller knows which targets support HDR, so it will never attempt to use HDR to communicate with a target that does not support it.) Some HDR modes are also compatible with I²C devices if the I²C devices have a 50 ns spike filter on the SCL line; that is, they will ignore a high level on the SCL line which lasts less than 50 ns. This is required by the I²C specification but not universally implemented, and not all implementations ignore frequently repeated spikes, so I3C HDR compatibility must be verified. The compatible HDR modes use SCL pulses of at most 45 ns so that I²C devices will ignore them.

High Data Rate (HDR) options: The HDR-DDR mode uses double data rate signalling with a 12.5 MHz clock to achieve a 25 Mbit/s raw data rate (20 Mbit/s effective). This requires changing the SDA line while SCL is high, a violation of the I²C protocol, but I²C devices will not see the brief high-going pulse on SCL and thus will not notice the violation. The HDR-TSP and HDR-TSL modes use one of three line-transition symbols as ternary digits (trits): two bytes plus two parity bits (18 bits total) are broken into six 3-bit triplets, and each triplet is encoded as two trits. Sent at 25 Mtrit/s, this achieves a 33.3 Mbit/s effective data rate.

High Data Rate (HDR) options: The trit pair consisting of two transitions of SDA only is not used to encode data; it is instead used for framing, to mark the end of an HDR sequence. Although this limits the maximum time between SCL transitions to three trit times, that exceeds the 50 ns limit for legacy I²C devices, so HDR-TSP (ternary symbol, pure) mode may only be used on a bus without legacy I²C devices.

High Data Rate (HDR) options: To permit buses including I²C devices (with a spike filter), the HDR-TSL (ternary symbol, legacy) mode must be used. This maintains I²C compatibility by trit stuffing: after any rising edge on SCL, if the following trit is not 0, a 1 trit (a transition on SCL only) is inserted by the sender and ignored by the receiver. This ensures that SCL is never high for more than one trit time.

I²C features not supported in I3C: Pull-up resistors are provided by the I3C controller, so external pull-up resistors are no longer needed. Clock stretching is not supported: devices are expected to be fast enough to operate at bus speed, and the I3C controller is the sole clock source. I²C extended (10-bit) addresses are not supported: all devices on an I3C bus are addressed by a 7-bit address. Native I3C devices have a unique 48-bit address which is used only during dynamic address assignment.
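The 18-bits-into-12-trits packing used by the HDR ternary modes described above can be illustrated with a short sketch. The bit layout of the 18-bit word, the parity definition, and the triplet-to-trit-pair mapping below are assumptions for illustration; the actual symbol tables are defined in the MIPI specification.

```python
# Illustrative HDR-TSP-style packing (the real MIPI symbol table differs):
# two bytes plus two parity bits (18 bits) split into six 3-bit triplets,
# each triplet sent as a pair of base-3 digits (trits).
def encode_tsp_word(b1: int, b2: int) -> list[int]:
    p1 = (bin(b1).count("1") + 1) % 2   # assumed odd-parity bit for byte 1
    p2 = (bin(b2).count("1") + 1) % 2   # assumed odd-parity bit for byte 2
    word = (b1 << 10) | (p1 << 9) | (b2 << 1) | p2   # assumed 18-bit layout
    trits = []
    for shift in range(15, -1, -3):     # six triplets, most significant first
        triplet = (word >> shift) & 0b111
        # two trits give 9 codes, enough for the 8 possible triplet values
        trits += [triplet // 3, triplet % 3]
    return trits

# 12 trits carry 16 data bits, so 25 Mtrit/s gives 25e6 / 12 * 16 ≈ 33.3 Mbit/s.
print(encode_tsp_word(0xA5, 0x3C))
```

Note that each trit pair has one spare code (nine values for eight triplets), which matches the article's point that one pair is left over and reserved for framing.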
**Insert (print advertising)** Insert (print advertising): In advertising, an insert or blow-in card is a separate advertisement put in a magazine, newspaper, or other publication. They are usually the main source of income for non-subscription local newspapers and other publications. Sundays typically bring numerous large inserts in newspapers, because most weekly sales begin on that day and it has the highest circulation of any day of the week. A buckslip or buck slip is a slip of paper, often the size of a U.S. dollar bill (a buck), which includes additional information about a product. Bind-in cards are cards that are bound into the binding of the publication and will therefore not drop out.
**Viral pathogenesis** Viral pathogenesis: Viral pathogenesis is the study of the process and mechanisms by which viruses cause disease in their target hosts, often at the cellular or molecular level. It is a specialized field of study in virology. Pathogenesis is a qualitative description of the process by which an initial infection causes disease. Viral disease is the sum of the effects of viral replication on the host and the host's subsequent immune response against the virus. Viruses are able to initiate infection, disperse throughout the body, and replicate due to specific virulence factors. Several factors affect pathogenesis, including the virulence characteristics of the infecting virus. To cause disease, the virus must also overcome several inhibitory effects present in the host, including distance, physical barriers and host defenses. These inhibitory effects may differ among individuals because the effects are genetically controlled.

Viral pathogenesis: Viral pathogenesis is affected by various factors: (1) transmission, entry and spread within the host; (2) tropism; (3) virus virulence and disease mechanisms; (4) host factors and host defense.

Mechanisms of infection: Viruses need to establish infections in host cells in order to multiply. For infection to occur, the virus has to hijack host factors and evade the host immune response for efficient replication. Viral replication frequently requires complex interactions between the virus and host factors that may result in deleterious effects in the host, which confers on the virus its pathogenicity.

Mechanisms of infection: Important steps of the virus life cycle that shape pathogenesis are: transmission from a host with an infection to a second host; entry of the virus into the body; local replication in susceptible cells; dissemination and spread to secondary tissues and target organs; secondary replication in susceptible cells; shedding of the virus into the environment; and onward transmission to a third host.

Primary transmission Three requirements must be satisfied to ensure successful infection of a host. Firstly, there must be a sufficient quantity of virus available to initiate infection. Secondly, cells at the site of infection must be accessible, in that their cell membranes display host-encoded receptors that the virus can exploit for entry into the cell. Thirdly, the host anti-viral defense systems must be ineffective or absent.

Mechanisms of infection: Entry to host Viruses causing disease in humans often enter through the mouth, nose, genital tract, or through damaged areas of skin, so cells of the respiratory, gastrointestinal, skin and genital tissues are often the primary site of infection. Some viruses are capable of transmission to a mammalian fetus through infected germ cells at the time of fertilization, later in pregnancy via the placenta, or by infection at birth.

Mechanisms of infection: Local replication and spread Following initial entry to the host, the virus hijacks the host cell machinery to undergo viral amplification. Here, the virus must modulate the host innate immune response to prevent its elimination by the body while facilitating its replication. Replicated virus from the initially infected cell then disperses to infect neighbouring susceptible cells, possibly with spread to different cell types such as leukocytes. This results in a localised infection, in which the virus mainly spreads to and infects cells adjacent to the site of entry. Otherwise, the virus can be released into extracellular fluids.

Mechanisms of infection: Examples of localised infections include the common cold (rhinovirus), flu (parainfluenza), gastrointestinal infections (rotavirus) and skin infections (papillomavirus).

Mechanisms of infection: Dissemination and secondary replication In other cases, the virus can cause systemic disease through a disseminated infection spread throughout the body. The predominant mode of viral dissemination is through the blood or lymphatic system; viruses spread this way include those responsible for chickenpox (varicella zoster virus) and smallpox (variola), as well as the human immunodeficiency virus (HIV). A minority of viruses disseminate via the nervous system. Notably, poliovirus can be transmitted via the fecal-oral route: it initially replicates at its site of entry, the small intestine, and spreads to regional lymph nodes. The virus then disseminates via the bloodstream into different organs in the body (e.g. liver, spleen), followed by a secondary round of replication and dissemination into the central nervous system, where it damages motor neurons.

Mechanisms of infection: Shedding and secondary transmission Finally, the virus spreads to sites where shedding into the environment can occur. The respiratory, alimentary and urogenital tracts and the blood are the most frequent sites of shedding, in the form of bodily fluids, aerosols, skin or excrement. The virus then goes on to be transmitted to another person and establishes the infection cycle all over again.

Factors affecting pathogenesis: There are a few main overarching factors affecting viral disease: virus tropism, virus factors and host factors.

Molecular basis of virus tropism Virus tropism refers to the virus's preferential site of replication in discrete cell types within an organ. In most cases, tropism is determined by the ability of the viral surface proteins to fuse with or bind to surface receptors of specific target cells to establish infection. Thus, the binding specificity of viral surface proteins dictates tropism, as well as the destruction of particular cell populations, and is therefore a major determinant of virus pathogenesis.

Factors affecting pathogenesis: However, co-receptors are sometimes required, in addition to the binding of viral proteins to cellular receptors on host cells, in order to establish infection. For instance, HIV-1 requires target cells to express the co-receptor CCR5 or CXCR4, on top of the CD4 receptor, for productive viral attachment. Interestingly, HIV-1 can undergo a tropism switch: the viral glycoprotein gp120 initially uses CCR5 (mainly on macrophages) as the primary co-receptor for entering the host cell, and as the infection progresses HIV-1 switches to binding CXCR4 (mainly on T cells), thereby transitioning viral pathogenicity to a different stage. Apart from cellular receptors, viral tropism can also be governed by other intracellular factors, such as tissue-specific transcription factors. An example is the JC polyomavirus, whose tropism is limited to glial cells because its enhancer is active only in glial cells and JC viral gene expression requires host transcription factors expressed exclusively in glial cells. The accessibility of host tissues and organs to the virus also regulates tropism. Accessibility is affected by physical barriers, as with enteroviruses, which replicate in the intestine because they are able to withstand bile, digestive enzymes and acidic environments.

Factors affecting pathogenesis: Virus factors The viral genetics encoding viral factors determine the degree of viral pathogenesis. This can be measured as virulence, which can be used to compare the quantitative degree of pathology between related viruses. In other words, different virus strains possessing different virus factors can lead to different degrees of virulence, which in turn can be exploited to study the differences in pathogenesis of viral variants with different virulence. Virus factors are largely influenced by viral genetics: virulence determinants lie in structural or non-structural proteins and in non-coding sequences. For a virus to successfully infect and cause disease in the host, it has to encode specific virus factors in its genome to overcome the preventive effects of physical barriers and to modulate host inhibition of virus replication. In the case of poliovirus, all vaccine strains found in the oral polio vaccine contain attenuating point mutations in the 5' untranslated region (5' UTR). Conversely, the virulent strain responsible for causing polio disease does not contain these 5' UTR point mutations and thus displays greater viral pathogenicity in hosts. Virus factors encoded in the genome often control the tropism, routes of virus entry, shedding and transmission. In polioviruses, the attenuating point mutations are thought to induce a replication and translation defect that reduces the virus's ability to cross-link to host cells and replicate within the nervous system. Viruses have also developed a variety of immunomodulation mechanisms to subvert the host immune response. These tend to feature virus-encoded decoy receptors that target cytokines and chemokines produced as part of the host immune response, or homologues of host cytokines. As such, viruses capable of manipulating the host cell response to infection as an immune evasion strategy exhibit greater pathogenicity.

Factors affecting pathogenesis: Host factors Viral pathogenesis is also largely dependent on host factors. Several viral infections have displayed a variety of effects, ranging from asymptomatic to symptomatic or even critical infection, based solely on differing host factors. In particular, genetic factors, age and immunocompetence play an important role in dictating whether the viral infection can be modulated by the host. Mice that possess functional Mx genes encode an Mx1 protein which can selectively inhibit influenza replication; mice carrying a non-functional Mx allele fail to synthesise the Mx protein and are more susceptible to influenza infection. Alternatively, immunocompromised individuals with existing illnesses may have a defective immune system, which makes them more vulnerable to damage by the virus. Furthermore, a number of viruses display variable pathogenicity depending on the age of the host. Mumps, polio and Epstein-Barr virus cause more severe disease in adults, while others, like rotavirus, cause more severe infection in infants. It is therefore hypothesized that the host immune system and defense mechanisms differ with age.

Disease mechanisms: How do viral infections cause disease?: A viral infection does not always cause disease. A viral infection simply involves viral replication in the host; disease is the damage caused by viral multiplication. An individual who has a viral infection but does not display disease symptoms is known as a carrier.

Disease mechanisms: How do viral infections cause disease?: Damage caused by the virus Once inside host cells, viruses can destroy cells through a variety of mechanisms. Viruses often induce direct cytopathic effects that disrupt cellular functions, whether by releasing enzymes that degrade host metabolic precursors or by releasing proteins that inhibit the synthesis of important host factors, proteins, DNA and/or RNA. Viral proteins of herpes simplex virus, for example, can degrade host DNA and inhibit host cell DNA replication and mRNA transcription. Poliovirus can inactivate proteins involved in host mRNA translation without affecting poliovirus mRNA translation. In some cases, expression of viral fusion proteins on the surface of host cells can cause host cells to fuse into multinucleated cells; notable examples include measles virus, HIV and respiratory syncytial virus.

Disease mechanisms: How do viral infections cause disease?: Importantly, viral infections can differ in their "lifestyle strategy". Persistent infections happen when cells continue to survive despite a viral infection; they can be further classified into latent (only the viral genome is present and no replication occurs) and chronic (basal levels of viral replication without stimulating an immune response). In acute infections, lytic viruses are shed at high titres for rapid infection of a secondary tissue or host, whereas persistent viruses are shed at lower titres over a longer duration of transmission (months to years). Lytic viruses are capable of destroying host cells by damaging them and/or interfering with their specialised functions, for example by triggering necrosis in infected host cells. Otherwise, signatures of viral infection, like the binding of HIV to the co-receptors CCR5 or CXCR4, can trigger cell death via apoptosis through host signalling cascades activated by immune cells. However, many viruses encode proteins that can modulate apoptosis, depending on whether the infection is acute or persistent. Induction of apoptosis, such as through interaction with caspases, promotes viral shedding in lytic viruses and so facilitates transmission, while viral inhibition of apoptosis can prolong the production of virus in cells or allow the virus to remain hidden from the immune system in chronic, persistent infections. Nevertheless, induction of apoptosis in major immune cells or antigen-presenting cells may also act as a mechanism of immunosuppression in persistent infections like HIV; the primary cause of immunosuppression in HIV patients is the depletion of CD4+ T helper cells. Interestingly, adenovirus has an E1A protein that induces apoptosis by initiating the cell cycle, and an E1B protein that blocks the apoptotic pathway by inhibiting caspase interaction. Persistent viruses can sometimes transform host cells into cancer cells. Viruses such as human papillomavirus (HPV) and human T-lymphotropic virus (HTLV) can stimulate the growth of tumours in infected hosts, either by disrupting tumour suppressor gene expression (HPV) or by upregulating proto-oncogene expression (HTLV).

Disease mechanisms: How do viral infections cause disease?: Damage caused by the host immune system Sometimes, instead of cell death or cellular dysfunction caused by the virus, the host immune response can mediate disease and excessive inflammation. The stimulation of the innate and adaptive immune systems in response to viral infection destroys infected cells, which may lead to severe pathological consequences for the host. This damage caused by the immune system is known as virus-induced immunopathology. Specifically, immunopathology is caused by the excessive release of antibodies, interferons and pro-inflammatory cytokines, activation of the complement system, or hyperactivity of cytotoxic T cells. Secretion of interferons and other cytokines can trigger cell damage, fever and flu-like symptoms. In severe cases of certain viral infections, as in avian H5N1 influenza in 2005, aberrant induction of the host immune response can elicit a flaring release of cytokines known as a cytokine storm. In some instances, viral infection can initiate an autoimmune response, which occurs via different proposed mechanisms: molecular mimicry and the bystander mechanism. Molecular mimicry refers to an overlap in structural similarity between a viral antigen and a self-antigen. The bystander mechanism hypothesizes the initiation of a non-specific and overreactive antiviral response that attacks self-antigens in the process. Damage caused by the host itself due to autoimmunity has been observed in West Nile virus infection.

Incubation period: Viruses display variable incubation periods upon entry into the host. The incubation period refers to the time taken for the onset of disease after first contact with the virus. In rabies virus, the incubation period varies with the distance traversed by the virus to the target organ, but in most viruses the length of incubation depends on many factors. Surprisingly, generalised infections by togaviruses have a short incubation period, due to the direct entry of the virus into target cells through insect bites. There are several other factors that affect the incubation period. The mechanisms behind long incubation periods, of months or years for example, are not yet completely understood.

Evolution of virulence: Some viruses that are relatively avirulent in their natural host show increased virulence upon transfer to a new host species. When an emerging virus first invades a new host species, the hosts have little or no immunity against it and often experience high mortality. Over time, a decrease in virulence in the predominant strain can sometimes be observed. A successful pathogen needs to spread to at least one other host, and lower virulence can result in higher transmission rates under some circumstances. Likewise, genetic resistance against the virus can develop in a host population over time. An example of the evolution of virulence in an emerging virus is the case of myxomatosis in rabbits. The release of wild European rabbits into Victoria, Australia in 1859 for sport resulted in a rabbit plague. To curb rabbit overpopulation, myxoma virus, a lethal species-specific poxvirus responsible for myxomatosis in rabbits, was deliberately released in South Australia in 1950. This led to a 90% decrease in rabbit populations, and the disease became endemic within five years. Significantly, severely attenuated strains of the myxoma virus were detected within merely two years of its release, and genetic resistance in rabbits emerged within seven years.
**Karn's algorithm** Karn's algorithm: Karn's algorithm addresses the problem of getting accurate estimates of the round-trip time for messages when using the Transmission Control Protocol (TCP) in computer networking. The algorithm, also sometimes termed the Karn-Partridge algorithm, was proposed in a paper by Phil Karn and Craig Partridge in 1987. Accurate round-trip estimates in TCP can be difficult to calculate because of an ambiguity created by retransmitted segments. The round-trip time is estimated as the difference between the time that a segment was sent and the time that its acknowledgment was returned to the sender, but when packets are retransmitted there is an ambiguity: the acknowledgment may be a response to the first transmission of the segment or to a subsequent retransmission.

Karn's algorithm: Karn's algorithm ignores retransmitted segments when updating the round-trip time estimate. Round-trip time estimation is based only on unambiguous acknowledgments, which are acknowledgments for segments that were sent only once.

Karn's algorithm: This simplistic implementation of Karn's algorithm can lead to problems as well. Consider what happens when TCP sends a segment after a sharp increase in delay. Using the prior round-trip time estimate, TCP computes a timeout and retransmits the segment. If TCP ignores the round-trip time of all retransmitted packets, the round-trip estimate will never be updated, and TCP will continue retransmitting every segment, never adjusting to the increased delay.

Karn's algorithm: A solution to this problem is to incorporate transmission timeouts with a timer backoff strategy. The timer backoff strategy computes an initial timeout; if the timer expires and causes a retransmission, TCP increases the timeout, generally by a factor of two. This algorithm has proven to be extremely effective in balancing performance and efficiency in networks with high packet loss. Ideally, Karn's algorithm would not be needed; networks that have high round-trip times and retransmission timeouts should be investigated using root-cause analysis techniques.
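A minimal sketch of the combined rule set described above: RTT samples from retransmitted segments are discarded (Karn's rule), while each timeout doubles the retransmission timer so the sender still adapts to increased delay. The smoothing constants follow the common RFC 6298 style and are illustrative rather than taken from the article.

```python
# Sketch of Karn's rule plus timer backoff for a TCP-style RTT estimator.
class RttEstimator:
    def __init__(self, initial_rto: float = 1.0):
        self.srtt = None        # smoothed round-trip time (s)
        self.rttvar = None      # RTT variance estimate (s)
        self.rto = initial_rto  # current retransmission timeout (s)

    def on_ack(self, rtt_sample: float, was_retransmitted: bool) -> None:
        if was_retransmitted:
            return  # Karn's rule: the sample is ambiguous, so discard it
        if self.srtt is None:   # first unambiguous sample
            self.srtt, self.rttvar = rtt_sample, rtt_sample / 2
        else:                   # RFC 6298-style exponential smoothing
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
        self.rto = self.srtt + 4 * self.rttvar

    def on_timeout(self) -> None:
        # Timer backoff: doubling the timeout lets the sender adapt to a
        # sharp delay increase even while retransmitted samples are ignored.
        self.rto *= 2
```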
**Problem frames approach** Problem frames approach: Problem analysis or the problem frames approach is an approach to software requirements analysis. It was developed by British software consultant Michael A. Jackson in the 1990s. History: The problem frames approach was first sketched by Jackson in his book Software Requirements & Specifications (1995) and in a number of articles in various journals devoted to software engineering. It has received its fullest description in his Problem Frames: Analysing and Structuring Software Development Problems (2001). History: A session on problem frames was part of the 9th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ)] held in Klagenfurt/Velden, Austria in 2003. The First International Workshop on Applications and Advances in Problem Frames was held as part of ICSE’04 held in Edinburgh, Scotland. One outcome of that workshop was a 2005 special issue on problem frames in the International Journal of Information and Software Technology. History: The Second International Workshop on Applications and Advances in Problem Frames was held as part of ICSE 2006 in Shanghai, China. The Third International Workshop on Applications and Advances in Problem Frames (IWAAPF) was held as part of ICSE 2008 in Leipzig, Germany. In 2010, the IWAAPF workshops were replaced by the International Workshop on Applications and Advances of Problem-Orientation (IWAAPO). IWAAPO broadens the focus of the workshops to include alternative and complementary approaches to software development that share an emphasis on problem analysis. IWAAPO-2010 was held as part of ICSE 2010 in Cape Town, South Africa.Today research on the problem frames approach is being conducted at a number of universities, most notably at the Open University in the United Kingdom as part of its Relating Problem & Solution Structures research themeThe ideas in the problem frames approach have been generalized into the concepts of problem-oriented development (POD) and problem-oriented engineering (POE), of which problem-oriented software engineering (POSE) is a particular sub-category. The first International Workshop on Problem-Oriented Development was held in June 2009. Overview: Fundamental philosophy Problem analysis or the problem frames approach is an approach — a set of concepts — to be used when gathering requirements and creating specifications for computer software. Its basic philosophy is strikingly different from other software requirements methods in insisting that: The best way to approach requirements analysis is through a process of parallel — not hierarchical — decomposition of user requirements. Overview: User requirements are about relationships in the real world—the application domain – not about the software system or even the interface with the software system. It is more helpful ... to recognize that the solution is located in the computer and its software, and the problem is in the world outside. ... The computers can provide solutions to these problems because they are connected to the world outside. Overview: The moral is clear: to study and analyse a problem you must focus on studying and analysing the problem world in some depth, and in your investigations you must be willing to travel some distance away from the computer. ... [In a call forwarding problem...] 
You need to describe what's there – people and offices and holidays and moving office and delegating responsibility – and what effects [in the problem world] you would like the system to achieve – calls to A's number must reach A, and [when B is on vacation, and C is temporarily working at D's desk] calls to B's or C's number must reach C. Overview: None of these appear in the interface with the computer.... They are all deeper into the world than that. The approach uses three sets of conceptual tools. Tools for describing specific problems Concepts used for describing specific problems include: phenomena (of various kinds, including events), problem context, problem domain, solution domain (aka the machine), shared phenomena (which exist in domain interfaces), domain requirements (which exist in the problem domains) and specifications (which exist at the problem domain:machine interface). The graphical tools for describing problems are the context diagram and the problem diagram. Tools for describing classes of problems (problem frames) The Problem Frames Approach includes concepts for describing classes of problems. A recognized class of problems is called a problem frame (roughly analogous to a design pattern). Overview: In a problem frame, domains are given general names and described in terms of their important characteristics. A domain, for example, may be classified as causal (reacts in a deterministic, predictable way to events) or biddable (can be bid, or asked, to respond to events, but cannot be expected always to react to events in any predictable, deterministic way). (A biddable domain usually consists of people.) The graphical tool for representing a problem frame is a frame diagram. A frame diagram looks generally like a problem diagram except for a few minor differences—domains have general, rather than specific, names; and rectangles representing domains are annotated to indicate the type (causal or biddable) of the domain. Overview: A list of recognized classes of problems (problem frames) The first group of problem frames identified by Jackson included: required behavior commanded behavior information display simple workpieces transformationSubsequently, other researchers have described or proposed additional problem frames. Describing problems: The problem context Problem analysis considers a software application to be a kind of software machine. A software development project aims to change the problem context by creating a software machine and adding it to the problem context, where it will bring about certain desired effects. The particular portion of the problem context that is of interest in connection with a particular problem — the particular portion of the problem context that forms the context of the problem — is called the application domain. Describing problems: After the software development project has been finished, and the software machine has been inserted into the problem context, the problem context will contain both the application domain and the machine. At that point, the situation will look like this: The problem context contains the machine and the application domain. The machine interface is where the Machine and the application domain meet and interact. Describing problems: The same situation can be shown in a different kind of diagram, a context diagram, this way: The context diagram The problem analyst's first task is to truly understand the problem. That means understanding the context in which the problem is set. 
And that means drawing a context diagram. Describing problems: Here is Jackson's description of examining the problem context, in this case the context for a bridge to be built: You're an engineer planning to build a bridge across a river. So you visit the site. Standing on one bank of the river, you look at the surrounding land, and at the river traffic. You feel how exposed the place is, and how hard the wind is blowing and how fast the river is running. You look at the bank and wonder what faults a geological survey will show up in the rocky terrain. You picture to yourself the bridge that you are going to build. (Software Requirements & Specifications: "The Problem Context")An analyst trying to understand a software development problem must go through the same process as the bridge engineer. He starts by examining the various problem domains in the application domain. These domains form the context into which the planned Machine must fit. Then he imagines how the Machine will fit into this context. And then he constructs a context diagram showing his vision of the problem context with the Machine installed in it. Describing problems: The context diagram shows the various problem domains in the application domain, their connections, and the Machine and its connections to (some of) the problem domains. Here is what a context diagram looks like. This diagram shows: the machine to be built. The dark border helps to identify the box that represents the Machine. the problem domains that are relevant to the problem. the solid lines represent domain interfaces — areas where domains overlap and share phenomena in common.A domain is simply a part of the world that we are interested in. It consists of phenomena — individuals, events, states of affairs, relationships, and behaviors. A domain interface is an area where domains connect and communicate. Domain interfaces are not data flows or messages. An interface is a place where domains partially overlap, so that the phenomena in the interface are shared phenomena — they exist in both of the overlapping domains. Describing problems: You can imagine domains as being like primitive one-celled organisms (like amoebas). They are able to extend parts of themselves into pseudopods. Imagine that two such organisms extend pseudopods toward each other in a sort of handshake, and that the cellular material in the area where they are shaking hands is mixing, so that it belongs to both of them. That's an interface. Describing problems: In the following diagram, X is the interface between domains A and B. Individuals that exist or events that occur in X, exist or occur in both A and B. Shared individuals, states and events may look differently to the domains that share them. Consider for example an interface between a computer and a keyboard. When the keyboard domain sees an event Keyboard operator presses the spacebar the computer will see the same event as Byte hex("20") appears in the input buffer. Describing problems: Problem diagrams The problem analyst's basic tool for describing a problem is a problem diagram. Here is a generic problem diagram. In addition to the kinds of things shown on a context diagram, a problem diagram shows: a dotted oval representing the requirement to bring about certain effects in the problem domains. 
Describing problems: dotted lines representing requirement references — references in the requirement to phenomena in the problem domains. An interface that connects a problem domain to the machine is called a specification interface, and the phenomena in the specification interface are called specification phenomena. The goal of the requirements analyst is to develop a specification for the behavior that the Machine must exhibit at the Machine interface in order to satisfy the requirement. Describing problems: Here is an example of a real, if simple, problem diagram. This problem might be part of a computer system in a hospital. In the hospital, patients are connected to sensors that can detect and measure their temperature and blood pressure. The requirement is to construct a Machine that can display information about patient conditions on a panel in the nurses' station. Describing problems: The name of the requirement is "Display ~ Patient Condition". The tilde (~) indicates that the requirement is about a relationship or correspondence between the panel display and patient conditions. The arrowhead indicates that the requirement reference connected to the Panel Display domain is also a requirement constraint. That means that the requirement contains some kind of stipulation that the Panel Display must meet. In short, the requirement is that "The panel display must display information that matches and accurately reports the condition of the patients." Describing classes of problems: Problem frames A problem frame is a description of a recognizable class of problems, where the class of problems has a known solution. In a sense, problem frames are problem patterns. Each problem frame has its own frame diagram. A frame diagram looks essentially like a problem diagram, but instead of showing specific domains and requirements, it shows types of domains and types of requirements; domains have general, rather than specific, names; and rectangles representing domains are annotated to indicate the type (causal or biddable) of the domain. Variant frames In Problem Frames Jackson discussed variants of the five basic problem frames that he had identified. A variant typically adds a domain to the problem context. Describing classes of problems: a description variant introduces a description lexical domain; an operator variant introduces an operator; a connection variant introduces a connection domain between the machine and the central domain with which it interfaces; a control variant introduces no new domain, but changes the control characteristics of interface phenomena. Problem concerns Jackson also discusses certain kinds of concerns that arise when working with problem frames. Describing classes of problems: Particular concerns: overrun, initialization, reliability, identities, and completeness. Composition concerns: commensurable descriptions, consistency, precedence, interference, and synchronization. Recognized problem frames: The first problem frames identified by Jackson included: required behavior, commanded behavior, information display, simple workpieces, and transformation. Subsequently, other researchers have described or proposed additional problem frames. Required-behavior problem frame The intuitive idea behind this problem frame is: There is some part of the physical world whose behavior is to be controlled so that it satisfies certain conditions. The problem is to build a machine that will impose that control.
Commanded-behavior problem frame The intuitive idea behind this problem frame is: There is some part of the physical world whose behavior is to be controlled in accordance with commands issued by an operator. The problem is to build a machine that will accept the operator's commands and impose the control accordingly. Information display problem frame The intuitive idea behind this problem frame is: There is some part of the physical world about whose states and behavior certain information is continually needed. The problem is to build a machine that will obtain this information from the world and present it at the required place in the required form. Recognized problem frames: Simple workpieces problem frame The intuitive idea behind this problem frame is: A tool is needed to allow a user to create and edit a certain class of computer-processible text or graphic objects, or similar structures, so that they can be subsequently copied, printed, analyzed or used in other ways. The problem is to build a machine that can act as this tool. Recognized problem frames: Transformation problem frame The intuitive idea behind this problem frame is: There are some given computer-readable input files whose data must be transformed to give certain required output files. The output data must be in a particular format, and it must be derived from the input data according to certain rules. The problem is to build a machine that will produce the required outputs from the inputs. Problem analysis and the software development process: When problem analysis is incorporated into the software development process, the software development lifecycle starts with the problem analyst, who studies the situation and: creates a context diagram; gathers a list of requirements and adds a requirements oval to the context diagram, creating a grand "all-in-one" problem diagram (however, in many cases actually creating an all-in-one problem diagram may be impractical or unhelpful: there will be too many requirements references criss-crossing the diagram to make it very useful); decomposes the all-in-one problem and problem diagram into simpler problems and simpler problem diagrams. These problems are projections, not subsets, of the all-in-one diagram. Problem analysis and the software development process: continues to decompose problems until each problem is simple enough that it can be seen to be an instance of a recognized problem frame. Each subproblem description includes a description of the specification interfaces for the machine to be built. At this point, problem analysis — problem decomposition — is complete. The next step is to reverse the process and to build the desired software system through a process of solution composition. Problem analysis and the software development process: The solution composition process is not yet well understood, and is still very much a research topic. Extrapolating from hints in Software Requirements & Specifications, we can guess that the software development process would continue with the developers, who would: compose the multiple subproblem machine specifications into the specification for a single all-in-one machine: a specification for a software machine that satisfies all of the customer's requirements. This is a non-trivial activity — the composition process may very well raise composition problems that need to be solved. Problem analysis and the software development process: implement the all-in-one machine by going through the traditional code/test/deploy process.
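The problem-diagram notation described above lends itself to a simple machine-readable representation. The following is a minimal, illustrative Python sketch, not part of Jackson's published work: all class and field names (Domain, Interface, Requirement, ProblemDiagram) are invented for this example. It shows how domains, shared-phenomena interfaces, and requirement references might be modelled so that an all-in-one diagram can be decomposed into subproblem projections of the kind discussed in this section.

```python
from dataclasses import dataclass

# Hypothetical, minimal model of Jackson-style problem diagrams.
# Names and structure are invented for illustration only.

@dataclass(frozen=True)
class Domain:
    name: str
    kind: str  # "machine", "causal", "biddable", or "lexical"

@dataclass(frozen=True)
class Interface:
    a: Domain
    b: Domain
    shared_phenomena: tuple  # events/states visible to both domains

@dataclass
class Requirement:
    name: str
    references: list  # domains whose phenomena the requirement mentions
    constrains: list  # domains the requirement places stipulations on

@dataclass
class ProblemDiagram:
    domains: list
    interfaces: list
    requirements: list

    def projection(self, requirement):
        """A subproblem is a projection, not a subset: keep only the
        domains and interfaces the given requirement touches, plus the
        machine itself."""
        keep = set(requirement.references) | set(requirement.constrains)
        keep |= {d for d in self.domains if d.kind == "machine"}
        ifaces = [i for i in self.interfaces if i.a in keep and i.b in keep]
        return ProblemDiagram(list(keep), ifaces, [requirement])

# Example: the patient-monitoring problem sketched above.
machine = Domain("Monitor Machine", "machine")
patients = Domain("Patients", "causal")
panel = Domain("Panel Display", "causal")

diagram = ProblemDiagram(
    domains=[machine, patients, panel],
    interfaces=[
        Interface(machine, patients, ("temperature", "blood pressure")),
        Interface(machine, panel, ("display commands",)),
    ],
    requirements=[
        Requirement("Display ~ Patient Condition",
                    references=[patients], constrains=[panel]),
    ],
)

sub = diagram.projection(diagram.requirements[0])
print([d.name for d in sub.domains])
```

A real decomposition would of course carry far more detail (phenomenon kinds, constraint annotations, frame matching), but even this toy structure makes the "projection, not subset" idea concrete: each subproblem keeps only the slice of the context its requirement actually touches.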
Similar approaches: There are a few other software development ideas that are similar in some ways to problem analysis. Similar approaches: The notion of a design pattern is similar to Jackson's notion of a problem frame. It differs in that a design pattern is used for recognizing and handling design issues (often design issues in specific object-oriented programming languages such as C++ or Java) rather than for recognizing and handling requirements issues. Furthermore, design patterns describe solutions, while problem frames represent problems. However, design patterns also tend to account for semantic outcomes that are not native to the programming language they are to be implemented in. So, another difference is that problem frames are a native meta-notation for the domain of problems, whereas design patterns are a catalogue of technical debt left behind by the language implementers. Similar approaches: Aspect-oriented programming, AOP (also known as aspect-oriented software development, AOSD), is similarly interested in parallel decomposition, which addresses what AOP proponents call cross-cutting concerns or aspects. AOP addresses concerns that are much closer to the design and code-generation phase than to the requirements analysis phase. Similar approaches: AOP has moved into requirements engineering notations such as ITU-T Z.151 User Requirements Notation (URN). In URN, AOP operates over all the intentional elements. AOP can also be applied over requirements modelling that uses problem frames as a heuristic. URN models driven by problem frame thinking, and interleaved with aspects, allow for the inclusion of architectural tactics in the requirements model. Similar approaches: Martin Fowler's book Analysis Patterns is very similar to problem analysis in its search for patterns. It doesn't really present a new requirements analysis method, however. And the notion of parallel decomposition — which is so important for problem analysis — is not a part of Fowler's analysis patterns. Similar approaches: Jon G. Hall and Lucia Rapanotti, together with Jackson, have developed the Problem Oriented Software Engineering (POSE) framework, which shares the problem frames foundations. Since 2005, Hall and Rapanotti have extended POSE to Problem Oriented Engineering (POE), which provides a framework for engineering design, including a development process model and assurance-driven design, and may be scalable to projects that include many stakeholders and that combine diverse engineering disciplines such as software and education provision.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chromosomal deletion syndrome** Chromosomal deletion syndrome: Chromosomal deletion syndromes result from deletion of parts of chromosomes. Depending on the location, size, and whom the deletion is inherited from, there are a few known different variations of chromosome deletions. Chromosomal deletion syndromes typically involve larger deletions that are visible using karyotyping techniques. Smaller deletions result in microdeletion syndromes, which are detected using fluorescence in situ hybridization (FISH). Examples of chromosomal deletion syndromes include 5p-Deletion (cri du chat syndrome), 4p-Deletion (Wolf–Hirschhorn syndrome), Prader–Willi syndrome, and Angelman syndrome. 5p-Deletion: The chromosomal basis of Cri du chat syndrome consists of a deletion of the most terminal portion of the short arm of chromosome 5. 5p deletions, whether terminal or interstitial, occur at different breakpoints; the chromosomal basis generally consists of a deletion on the short arm of chromosome 5. The variability seen among individuals may be attributed to the differences in their genotypes. With an incidence of 1 in 15,000 to 1 in 50,000 live births, it is suggested to be one of the most common contiguous gene deletion disorders. 5p deletions are most commonly de novo occurrences, which are paternal in origin in 80–90% of cases, possibly arising from chromosome breakage during gamete formation in males. Some examples of the possible dysmorphic features include: downslanting palpebral fissures, broad nasal bridge, microcephaly, low-set ears, preauricular tags, round face, short neck, micrognathia, dental malocclusion, hypertelorism, epicanthal folds, and downturned corners of the mouth. There is no specific correlation found between size of deletion and severity of clinical features, because the results vary so widely. 4p-Deletion: The chromosomal basis of Wolf–Hirschhorn syndrome (WHS) consists of a deletion of the most terminal portion of the short arm of chromosome 4. The deleted segment of reported individuals represents about one half of the p arm, occurring distal to the bands 4p15.1–p15.2. The proximal boundary of the WHSCR was defined by a 1.9 megabase terminal deletion of 4p16.3. This allele includes the proposed candidate genes LETM1 and WHSC1. This was identified from two individuals who exhibited all 4 components of the core WHS phenotype, which allowed scientists to trace the loci of the deleted genes. Many reports are particularly striking in the appearance of the craniofacial structure (prominent forehead, hypertelorism, the wide bridge of the nose continuing to the forehead), which has led to the descriptive term “Greek warrior helmet” appearance. There is wide evidence that the WHS core phenotype (growth delay, intellectual disability, seizures, and distinctive craniofacial features) is due to haploinsufficiency of several closely linked genes as opposed to a single gene. Related genes that impact variation include WHSC1, which spans a 90-kb genomic region, two-thirds of which maps in the telomeric end of the WHSCR; WHSC1 may play a significant role in normal development. Its deletion likely contributes to the WHS phenotype. However, variation in severity and phenotype of WHS suggests possible roles for genes that lie proximally and distally to the WHSCR.
4p-Deletion: WHSC2 (also known as NELF-A) is involved in multiple aspects of mRNA processing and the cell cycle. SLBP, a gene encoding Stem-Loop Binding Protein, resides telomeric to WHSC2 and plays a crucial role in regulating histone synthesis and availability during S phase. LETM1 has initially been proposed as a candidate gene for seizures; it functions in ion exchange with potential roles in cell signaling and energy production. FGFRL1, encoding a putative fibroblast growth factor decoy receptor, has been implicated in the craniofacial phenotype, potentially other skeletal features, and the short stature of WHS. CPLX1 has lately been suggested as a potential candidate gene for epilepsy in WHS. Prader–Willi vs. Angelman Syndrome: Prader–Willi syndrome (PWS) and Angelman syndrome (AS) are distinct neurogenetic disorders caused by chromosomal deletions, uniparental disomy or loss of the imprinted gene expression in the 15q11–q13 region. Whether an individual exhibits PWS or AS depends on whether there is a lack of the paternally expressed gene to contribute to the region. PWS is frequently found to be the reason for secondary obesity due to early-onset hyperphagia, an abnormal increase in appetite for consumption of food. Three molecular causes of Prader–Willi syndrome development are known. One of them consists of microdeletions of the chromosome region 15q11–q13. 70% of patients present a 5–7-Mb de novo deletion in the proximal region of the paternal chromosome 15. The second frequent genetic abnormality (~25–30% of cases) is maternal uniparental disomy of chromosome 15. The mechanism is due to maternal meiotic non-disjunction followed by mitotic loss of the paternal chromosome 15 after fertilization. The third cause of PWS is the disruption of the imprinting process on the paternally inherited chromosome 15 (an epigenetic phenomenon). This disruption is present in approximately 2–5% of affected individuals. Less than 20% of individuals with an imprinting defect are found to have a very small deletion in the PWS imprinting centre region, located at the 5′ end of the SNRPN gene. AS is a severe debilitating neurodevelopmental disorder characterized by mental retardation, speech impairment, seizures, motor dysfunction, and a high prevalence of autism. The paternal origin of the genetic material that is affected in the syndrome is important because the particular region of chromosome 15 involved is subject to parent-of-origin imprinting, meaning that for a number of genes in this region, only one copy of the gene is expressed while the other is silenced through imprinting. For the genes affected in PWS, it is the maternal copy that is usually imprinted (and thus silenced), while the mutated paternal copy is not functional.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lingerie** Lingerie: Lingerie (French: [lɛ̃ʒʁi]) is a category of primarily women's clothing including undergarments (mainly brassieres), sleepwear, and lightweight robes. The choice of the word is often motivated by an intention to imply that the garments are alluring, fashionable, or both. In a 2015 US survey, 75% of women and 26% of men reported having worn "sexy lingerie" in their lifetime. Lingerie is made of lightweight, stretchy, smooth, sheer or decorative fabrics such as silk, satin, Lycra, charmeuse, chiffon, or (especially and traditionally) lace. These fabrics can be made of various natural fibres like silk or cotton or of various synthetic fibres like polyester or nylon. Etymology: The word lingerie is taken directly from the French language, meaning undergarments, and is used exclusively for the more lightweight items of female undergarments. The French word in its original form derives from the French word linge, meaning 'linen' or 'clothes'. Informal usage suggests visually appealing or even erotic clothing. Although most lingerie is designed to be worn by women, some manufacturers now design lingerie for men. Origins: The concept of lingerie as a visually appealing undergarment was developed during the late nineteenth century. Lady Duff-Gordon of Lucile was a pioneer in developing lingerie that freed women from more restrictive corsets. Through the first half of the 20th century, women wore underwear for three primary reasons: to alter their outward shape (first with corsets and later with girdles or brassieres), for hygienic reasons, and for modesty. Before the invention of crinoline, women's underwear was often very large and bulky. Origins: During the late 19th century, corsets became smaller, less bulky and more constricting, and were gradually supplanted by the brassiere, first patented in the 20th century by Caresse Crosby. When the First World War broke out, women found themselves filling in men's work roles, creating a demand for more practical undergarments. Manufacturers began to use lighter and more breathable fabrics. In 1935, brassières were updated with padded cups to flatter small breasts, and three years later underwire bras were introduced that gave a protruding bustline. There was also a return to a small waist achieved with girdles. The 1940s woman was thin, but had curvaceous hips and breasts that were pointy and shapely. In the 1960s, the female silhouette was liberated along with social mores. The look was adolescent breasts, slim hips, and extreme thinness. André Courrèges was the first to make a fashion statement out of the youth culture when his 1965 collection presented androgynous figures and the image of a modern woman comfortable with her own body. As the 20th century progressed, underwear became smaller and more form-fitting. In the 1960s, lingerie manufacturers such as Frederick's of Hollywood began to glamorise lingerie. The lingerie industry expanded in the 21st century with designs that doubled as outerwear. The French refer to this as 'dessous-dessus', meaning something akin to innerwear as outerwear. Market structure: The global lingerie market in 2003 was estimated at $29 billion, while in 2005, bras accounted for 56 per cent of the lingerie market and briefs represented 29 per cent. The United States's largest lingerie retailer, Victoria's Secret, operates almost exclusively in North America, but the European market is fragmented, with Triumph International and DB Apparel predominant.
Also prominent are French lingerie houses, including Chantelle and Aubade. Market structure: In March 2020 The Guardian reported a trend for male lingerie on the catwalk and predictions as to the likelihood of it successfully extending to high street fashion stores. Typology: Babydoll, a short nightgown, or negligee, intended as nightwear for women. A shorter style, it is often worn with panties. Babydolls are typically loose-fitting with an empire waist and thin straps. Basque, a tight, form-fitting bodice or coat. Bloomers, baggy underwear that extends to just below or above the knee. Bloomers were worn for several decades during the first part of the 20th century, but are not widely worn today. Bodystocking, a unitard. Bodystockings may be worn over the torso, or they may be worn over the thighs and abdomen. Typology: Bodice, covers the body from the neck to the waist. Bodices are often low cut in the front and high in the back, and are often connected with laces or hooks. Bodices may also be reinforced with steel or bone to provide greater breast support. Brassiere, more commonly referred to as a bra, a close-fitting garment that is worn to help lift and support a woman's breasts. Bustier, a form-fitting garment used to push up the bust and to shape the waist. Typology: Camisole, sleeveless and covering the top part of the body. Camisoles are typically constructed of light materials and feature thin spaghetti straps. Chemise, a one-piece undergarment that is the same in shape as a straight-hanging sleeveless dress. It is similar to the babydoll, but it is fitted more closely around the hips. Corset, a bodice worn to mould and shape the torso. This effect is typically achieved through boning, either of bone or steel. Corselet, or merry widow, a combined brassiere and girdle. The corselet is considered to be a type of foundation garment, and the modern corselet is most commonly known as a shaping slip. G-string, or thong, a type of panty, characterised by a narrow piece of cloth that passes between the buttocks and is attached to a band around the hips. A G-string or thong may be worn as a bikini bottom or as underwear. Garter/Garter belt/Suspender belt, used to keep stockings up. Girdle, a type of foundation garment. Historically, the girdle extended from the waist to the upper thigh, though modern styles more closely resemble a tight pair of athletic shorts. Hosiery, close-fitting, elastic garments that cover the feet and legs. Negligee, a dressing gown. It is usually floor length, though it can be knee length as well. Nightgown, or nightie, a loosely hanging item of nightwear, may vary from hip-length (babydoll) to floor-length (peignoir). Nightshirt, a shirt meant to be worn while sleeping. It is usually longer and looser than the average T-shirt, and it is typically made of softer material. Panties or knickers, a generic term for underwear covering the genitals and sometimes buttocks that come in all shapes, fabrics and colours, offering varying degrees of coverage. Petticoat, an underskirt. Petticoats were prominent throughout the 16th to 20th centuries. Today, petticoats are typically worn to add fullness to skirts in the Gothic and Lolita subcultures. Pettipants, a type of bloomer featuring ruffles, resembling petticoats. Pettipants are most commonly worn by square dancers and people participating in historical reenactment. Tanga, a type of panty featuring full back and front coverage, but string-like sides that are typically thicker than those found on a string bikini.
Tap pants, a type of shorts typically made of lace, silk or satin. Teddy, an undergarment that resembles the shape of a one-piece bathing suit; it is typically sleeveless, and sometimes even strapless. Undergarment, a garment which one wears underneath clothes; also known as "underwear."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Occupational asthma** Occupational asthma: Occupational asthma is new-onset asthma or the recurrence of previously quiescent asthma directly caused by exposure to an agent in the workplace. It is an occupational lung disease and a type of work-related asthma. Agents that can induce occupational asthma can be grouped into sensitizers and irritants. Sensitizer-induced occupational asthma is an immunologic form of asthma which occurs due to inhalation of specific substances (i.e., high-molecular-weight proteins of plant and animal origin, or low-molecular-weight agents that include chemicals, metals and wood dusts) and occurs after a latency period of several weeks to years. Irritant-induced (occupational) asthma is a non-immunologic form of asthma that results from a single or multiple high-dose exposure to irritant products. It usually develops early after exposure; however, it can also develop insidiously over a few months after a massive exposure to a complex mixture of alkaline dust and combustion products, as shown in the World Trade Center disaster. Unlike those with sensitizer-induced occupational asthma, subjects with irritant-induced occupational asthma do not develop work-related asthma symptoms after re-exposure to low concentrations of the irritant that initiated the symptoms. Reactive airways dysfunction syndrome (RADS) is a severe form of irritant-induced asthma where respiratory symptoms usually develop in the minutes or hours after a single accidental inhalation of a high concentration of irritant gas, aerosol, vapor, or smoke. Another type of work-related asthma is work-exacerbated asthma (WEA), which is asthma worsened by workplace conditions but not caused by them. WEA is present in about a fifth of patients with asthma. A wide variety of conditions at work, including irritant chemicals, dusts, second-hand smoke, common allergens that may be present at work, as well as other "exposures" such as emotional stress, worksite temperature, and physical exertion, can exacerbate asthma symptoms in these patients. Both occupational asthma and work-exacerbated asthma can be present in an individual. Occupational asthma: A number of diseases have symptoms that mimic occupational asthma, such as asthma due to nonoccupational causes, chronic obstructive pulmonary disease (COPD), irritable larynx syndrome, hyperventilation syndrome, hypersensitivity pneumonitis, and bronchiolitis obliterans. Signs and symptoms: Like other types of asthma, it is characterized by airway inflammation, reversible airways obstruction, and bronchospasm, but it is caused by something in the workplace environment. Symptoms include shortness of breath, tightness of the chest, coughing, sputum production and wheezing. Some patients may also develop upper airway symptoms such as itchy eyes, tearing, sneezing, nasal congestion and rhinorrhea. Symptoms may develop over many years, as in sensitizer-induced asthma, or may occur after a single exposure to a high-concentration agent, as in the case of RADS. Risk factors: At present, over 400 workplace substances have been identified as having asthmagenic or allergenic properties. Agents such as flour, diisocyanates, latex, persulfate salts, aldehydes, animals, wood dusts, metals and enzymes usually account for the majority of cases; however, the distribution of causal agents may vary widely across geographic areas, depending on the pattern of industrial activities.
For example, in France the industries most affected are bakeries and cake shops, the automobile industry and hairdressing, whereas in Canada the principal cause is wood dust, followed by isocyanates. Overall, isocyanates are the most common cause of occupational asthma; they are used in the production of motor vehicles and in the application of orthopaedic polyurethane and fibreglass casts. The occupations most at risk are: adhesive handlers (e.g. acrylate), animal handlers and veterinarians (animal proteins), bakers and millers (cereal grains), carpet makers (gums), electronics workers (soldering resin), forest workers, carpenters and cabinetmakers (wood dust), hairdressers (e.g. persulfate), health care workers (latex and chemicals such as glutaraldehyde), janitors and cleaning staff (e.g. chloramine-T), pharmaceutical workers (drugs, enzymes), seafood processors, shellac handlers (e.g. amines), solderers and refiners (metals), spray painters, insulation installers, plastics and foam industry workers (e.g. diisocyanates), textile workers (dyes) and users of plastics and epoxy resins (e.g. anhydrides). The following tables show occupations that are known to be at risk for occupational asthma; the main reference for these is the Canadian Centre for Occupational Health and Safety. Diagnosis: To diagnose occupational asthma it is necessary to confirm the symptoms of asthma and establish the causal connection with the work environment. Various diagnostic tests can be used to aid in the diagnosis of work-related asthma. A spirometer is a device used to measure timed expired and inspired volumes, and can be used to help diagnose asthma. Peak expiratory flow rate (PEFR) is measured with a hand-held device that records how fast a person can exhale, and is a reliable test for occupational asthma. Serial PEFR can be measured to see if there is a difference in the ability to exhale at work compared to that in a controlled environment. Diagnosis: A non-specific bronchial hyperreactivity test can be used to support the diagnosis of occupational asthma. It involves measuring the forced expiratory volume in 1 second (FEV1) of the patient before and after exposure to methacholine or mannitol. Airway responsiveness, i.e. a significant drop in FEV1, can be seen in patients with occupational asthma. Specific inhalation challenge tests consist of exposing the subject to the suspected occupational agent in the laboratory and/or at the workplace and assessing for asthma symptoms as well as a reduction in FEV1. Other tests such as the skin prick test, serum immunologic testing and measurement of sputum eosinophils can also be useful in establishing the diagnosis of occupational asthma. Prevention: Several forms of preventive measures have been suggested to prevent the development of occupational asthma and also to detect risk or disease early enough to allow intervention and improve outcomes. These include: comprehensive programs, education and training, medical examinations, use of medications, reduction of exposures and elimination of exposures. Asthma symptoms and airway hyperresponsiveness can persist for several years after removal from the offending environment. Thus, early restriction of exposure to the trigger is advisable. Completely stopping exposure is a more effective treatment than reducing exposure, but is not always feasible.
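As a rough illustration of how the challenge-test results described under Diagnosis above are interpreted, the sketch below computes the percentage fall in FEV1 from baseline. A fall of roughly 20% or more is a commonly used positivity threshold for methacholine challenge, though exact criteria vary by guideline; the function name, the threshold constant, and the sample volumes here are illustrative, not a clinical standard.

```python
def fev1_percent_fall(baseline_litres: float, post_challenge_litres: float) -> float:
    """Percentage fall in FEV1 relative to the baseline measurement."""
    return 100.0 * (baseline_litres - post_challenge_litres) / baseline_litres

# Illustrative values only: baseline 3.2 L, then 2.4 L after methacholine.
fall = fev1_percent_fall(3.2, 2.4)
print(f"FEV1 fell by {fall:.1f}%")                      # 25.0%
print("positive (>= 20% fall)" if fall >= 20 else "negative")
```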
Management: Medication Medications used for occupational asthma are similar to those used for other types of asthma: short-acting beta-agonists like salbutamol or terbutaline, long-acting beta-agonists like salmeterol and formoterol, and inhaled corticosteroids. Immunotherapy can also be used in some cases of sensitizer-induced occupational asthma. Epidemiology: Occupational asthma is one of the most common occupational lung diseases. Approximately 17% of all adult-onset asthma cases are related to occupational exposures. About one fourth of adults with asthma have work-exacerbated asthma. Patients with work-related asthma are more likely to experience asthma attacks, emergency room visits, and worsening of their asthma symptoms compared with other adult asthma patients. Society and culture: Compensation When a person is diagnosed with occupational asthma, it can result in serious socio-economic consequences not only for the worker but also for the employer and the healthcare system, because the worker must change positions. The probability of being re-employed is lower for those with occupational asthma than for those with non-occupational asthma. The employer not only pays compensation to the employee, but will also have to spend a considerable amount of time, energy and funds on hiring and training new personnel. In the United States, it was estimated that the direct cost of occupational asthma in 1996 was $1.2 billion and the indirect cost $0.4 billion, for a total cost of $1.6 billion. In most cases, the employer could have saved money by adhering to safety standards and preventing employees from becoming injured in the first place. Society and culture: Such a diagnosis can thus entail severe socio-economic consequences for the worker as well as the employer due to loss of job, unemployment, compensation issues, medical expenditures, and the hiring and re-training of new personnel.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pork blood soup** Pork blood soup: Pork blood soup is a soup that uses pork blood as its primary ingredient. Additional ingredients may include barley and herbs such as marjoram, as well as other foods and seasonings. Some versions are prepared with coagulated pork blood and other coagulated pork offal, such as intestine, liver and heart. Varieties: China Pork blood soup is a soup in Chinese cuisine, and was consumed by laborers in Kaifeng "over 1,000 years ago", along with offal dumplings called jiaozi. Czech Republic Prdelačka is a traditional Czech pork blood soup made during the pig slaughter season. It is prepared with pork blood pudding, potato, onion and garlic as primary ingredients. Thailand Pork blood soup is also found in Thai cuisine. Guay Tiao Namtok is a Thai noodle soup prepared with pork blood as the soup base. The dish may have come from Chinese cuisine, as southern Chinese migrants have been settling in Thailand for the past century.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Autocoding** Autocoding: Autocoding refers to software solutions that help manufacturers, particularly those in the food industry, ensure that products have the correct packaging and correct 'sell by' date codes, thereby reducing the number of Emergency Product Withdrawals (EPWs). The term was first used during an initiative between Geest PLC (acquired by Bakkavör in 2005) and Tesco in 2001. The key objective of the software was to reduce the number of EPWs associated with date and price coding errors, and pot and lid marriage errors. This remains the main objective of autocoding software, but functionality has been expanded to encompass quality assurance and OEE performance data. History: The concept of autocoding originally came from an initiative between Geest PLC and Tesco in 2001, with both parties seeking to reduce the number of emergency product withdrawals associated with packaging and date coding errors. From here, several software engineering companies expanded on this preliminary work to create more reliable and robust systems. In 2004, 2D barcoding was introduced for the first time, a major step forward for securing all parts of food packaging. Prior to this, 1D retail barcodes were used that offered only limited protection, as they did not cover all packaging on products. 2D barcodes meant that lids, sleeves and packs could all be identified by a barcode scanner to check that the product is packaged and dated correctly. In 2009, Marks & Spencer introduced a code of practice for the labelling of products that stipulated an autocoding system must be used on every production line. In 2013, it was estimated that autocoding software protects over 1500 lines in the United Kingdom; the largest supplier is Olympus Automation, which protects 723 food production lines. Elements: All autocoding systems comprise a products database which contains standard reference information for each product, including packaging type, labels and sell-by-date criteria. In most cases a touch-screen industrial PC is positioned on the shop floor to allow the operator to select the next product from a product schedule. Elements: 1D and 2D barcode scanning The shop-floor touch-screen device is linked to barcode scanners deployed to scan the code on each piece of packaging, including promotional labels and sleeves. Originally the barcodes scanned were based on standard 1D codes, but to avoid mistakes 2D barcodes were introduced in 2004 so that each packaging type could hold a unique identity. To check that the scanners are operational, autocoding solutions include two-way communications with all hardware devices, and prevent the lines from starting if links are not available. Elements: Date code printing To ensure that 'sell by' dates are accurate, most autocoding systems directly control the line printers through the software application. Once the operator has selected the product to run, the product reference table identifies the date range to use and the printer output is sent directly to the printer. Again, like the barcode scanners, autocoding systems include two-way communications with date code printers, and prevent the production lines from starting if links are not available. Elements: Line stops If any error is detected, such as wrong film/pack, wrong lid, wrong case, or a printer fault, the line is stopped. This standard requirement is achieved through the use of programmable logic controllers.
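The scan-compare-stop logic described in the Elements section can be illustrated with a short simulation. This is a hypothetical sketch, not any vendor's actual software: the product table, barcode identities, shelf life and stop behaviour are all invented to show the control flow (scan every packaging component, verify it against the scheduled product, halt the line on any mismatch or lost scanner link).

```python
from datetime import date, timedelta

# Hypothetical product reference table: each scheduled product lists the
# 2D barcode identity expected on every packaging component, plus shelf life.
PRODUCTS = {
    "chicken_salad": {
        "pot": "2D-1001", "lid": "2D-1002", "sleeve": "2D-1003",
        "shelf_life_days": 3,
    },
}

def check_line(product_code: str, scans: dict) -> tuple:
    """Return (line_may_run, message) for one set of packaging scans."""
    expected = PRODUCTS[product_code]
    for component in ("pot", "lid", "sleeve"):
        scanned = scans.get(component)  # None models a dead scanner link
        if scanned is None:
            return False, f"LINE STOP: no reading from {component} scanner"
        if scanned != expected[component]:
            return False, f"LINE STOP: wrong {component} ({scanned})"
    # All packaging verified: compute the 'sell by' date from the reference
    # table, as an autocoding system would before driving the line printer.
    use_by = date.today() + timedelta(days=expected["shelf_life_days"])
    return True, f"OK, print use-by {use_by.isoformat()}"

# A lid from the wrong product (a classic pot/lid marriage error) stops the line:
print(check_line("chicken_salad",
                 {"pot": "2D-1001", "lid": "2D-9999", "sleeve": "2D-1003"}))
```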
Providers: Numerous companies provide autocoding solutions, ranging from standalone systems to comprehensive MES/MIS solutions that incorporate additional features and benefits. Notable industrial vendors include:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Functional (mathematics)** Functional (mathematics): In mathematics, a functional (as a noun) is a certain type of function. The exact definition of the term varies depending on the subfield (and sometimes even the author). Functional (mathematics): In linear algebra, it is synonymous with linear forms, which are linear mappings from a vector space V into its field of scalars (that is, they are elements of the dual space V∗). In functional analysis and related fields, it refers more generally to a mapping from a space X into the field of real or complex numbers. In functional analysis, the term linear functional is a synonym of linear form; that is, it is a scalar-valued linear map. Depending on the author, such mappings may or may not be assumed to be linear, or to be defined on the whole space X. Functional (mathematics): In computer science, it is synonymous with higher-order functions, that is, functions that take functions as arguments or return them. This article is mainly concerned with the second concept, which arose in the early 18th century as part of the calculus of variations. The first concept, which is more modern and abstract, is discussed in detail in a separate article, under the name linear form. The third concept is detailed in the computer science article on higher-order functions. Functional (mathematics): In the case where the space X is a space of functions, the functional is a "function of a function", and some older authors actually define the term "functional" to mean "function of a function". Functional (mathematics): However, the fact that X is a space of functions is not mathematically essential, so this older definition is no longer prevalent. The term originates from the calculus of variations, where one searches for a function that minimizes (or maximizes) a given functional. A particularly important application in physics is the search for a state of a system that minimizes (or maximizes) the action, or in other words the time integral of the Lagrangian. Details: Duality The mapping x0 ↦ f(x0) is a function, where x0 is an argument of a function f. At the same time, the mapping of a function to the value of the function at a point, f ↦ f(x0), is a functional; here, x0 is a parameter. Provided that f is a linear function from a vector space to the underlying scalar field, the above linear maps are dual to each other, and in functional analysis both are called linear functionals. Details: Definite integral Integrals such as f ↦ ∫ H(f(x), f′(x), …) dx form a special class of functionals. They map a function f into a real number, provided that H is real-valued. Examples include: the area underneath the graph of a positive function f; the Lp norm of a function on a set E; the arclength of a curve in 2-dimensional Euclidean space. Inner product spaces Given an inner product space X and a fixed vector x ∈ X, the map defined by y ↦ x⋅y is a linear functional on X. Details: The set of vectors y such that x⋅y is zero is a vector subspace of X, called the null space or kernel of the functional, or the orthogonal complement of x, denoted {x}⊥. Details: For example, taking the inner product with a fixed function g ∈ L2([−π,π]) defines a (linear) functional on the Hilbert space L2([−π,π]) of square-integrable functions on [−π,π]: f ↦ ∫_{−π}^{π} f(x) g(x) dx. Locality If a functional's value can be computed for small segments of the input curve and then summed to find the total value, the functional is called local. Otherwise it is called non-local. For example, F(y) = ∫_{x0}^{x1} y(x) dx is local, while F(y) = (∫_{x0}^{x1} y(x) dx) / (∫_{x0}^{x1} (1 + [y(x)]²) dx) is non-local.
This occurs commonly when integrals occur separately in the numerator and denominator of an equation, as in calculations of the center of mass. Functional equations: The traditional usage also applies when one talks about a functional equation, meaning an equation between functionals: an equation F = G between functionals can be read as an 'equation to solve', with solutions being themselves functions. In such equations there may be several sets of variable unknowns, as when it is said that an additive map f is one satisfying Cauchy's functional equation: f(x + y) = f(x) + f(y) for all x, y. Derivative and integration: Functional derivatives are used in Lagrangian mechanics. They are derivatives of functionals; that is, they carry information on how a functional changes when the input function changes by a small amount. Richard Feynman used functional integrals as the central idea in his sum over histories formulation of quantum mechanics. This usage implies an integral taken over some function space.
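To make the idea of "a function of a function" concrete, the sketch below numerically evaluates two of the functionals mentioned above on a sampled function: a definite integral (a local functional) and a ratio of integrals of the centre-of-mass kind (a non-local one). It is purely illustrative; a simple trapezoidal rule stands in for the integrals.

```python
import math

def trapezoid(values, xs):
    """Trapezoidal approximation of the integral of sampled values over xs."""
    return sum((values[i] + values[i + 1]) * (xs[i + 1] - xs[i]) / 2
               for i in range(len(xs) - 1))

# Sample f(x) = sin(x) on [0, pi]; a functional maps this whole function
# (here represented by its samples) to a single number.
xs = [i * math.pi / 1000 for i in range(1001)]
f = [math.sin(x) for x in xs]

integral = trapezoid(f, xs)                       # local: F[f] = ∫ f(x) dx ≈ 2
moment = trapezoid([x * v for x, v in zip(xs, f)], xs)
centroid = moment / integral                      # non-local: ratio of integrals
print(integral, centroid)                         # ≈ 2.0 and ≈ pi/2 ≈ 1.5708
```

The centroid illustrates non-locality directly: its value cannot be assembled by summing contributions from small segments of the curve alone, because each segment's contribution to the numerator must be weighed against the whole denominator.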
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Milkshape 3D** Milkshape 3D: MilkShape 3D (MS3D) is a shareware low-polygon 3D modeling program created by Mete Ciragan. It is used mainly for compiling models for Half-Life, Blockland, The Sims 2, The Sims 3, Rock Raiders, and other sandbox video games. It is also used to create models for a large number of indie games. MilkShape 3D's repertoire of export capabilities has been extended considerably through the efforts of both its creator and the community around it, and it can now be used for most games today, so long as an exporter for the required format is available. History: MilkShape 3D was created by chUmbaLum sOft, a small software company in Zurich, Switzerland, which was established in the autumn of 1996. chUmbaLum sOft develops 3D tools for games and other applications. MilkShape 3D was originally created as a low-poly modelling program by Mete Ciragan for the GoldSrc engine. Over time many features were added, as were many export formats. Though not as advanced as other leading 3D modelling programs, it remains a simple and cost-effective tool for creating 3D models. Features: MilkShape 3D has all the basic operations like select, move, rotate, scale, extrude, turn edge, and subdivide, among many others. MilkShape 3D also allows low-level editing with the vertex and face tools. Standard and extended primitives such as spheres, boxes, and cylinders are available. MilkShape 3D can export to over 70 file formats. Features: MilkShape 3D is a skeletal animator. As well as supporting its own file format, MilkShape 3D is able to export to morph target animation like the ones in the Quake model formats, or to export to skeletal animations like Half-Life, Genesis3D, Unreal, etc. The range of file types that the program supports covers all major 3D game engines, including Source, Unreal, id Tech and LithTech. It has become known as a useful converter from one format to another. Controversy: Versions of MilkShape 3D prior to 1.8.1 Beta 1 allegedly contained code which caused it to shut down if it detected certain other programs running on the computer, such as Registry Monitor. Versions older than 1.6.5 (April 2003) would go so far as to shut down the offending program and prevent it from being run again while MilkShape 3D was still running. This behavior was removed shortly after it had been discovered. Ostensibly, this was to prevent users from figuring out how to edit the Windows registry to commit software cracking and use MilkShape 3D without paying for it. However, there was no End User License Agreement until version 1.8.1 that authorized the program to do this. Some users have therefore accused MilkShape 3D of being spyware and have boycotted it as a result. These issues have been resolved since version 1.8.1 Beta 2 (May 2007).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metocean** Metocean: In offshore and coastal engineering, metocean is a syllabic abbreviation of meteorology and (physical) oceanography. Metocean study: In various stages of an offshore or coastal engineering project, a metocean study will be undertaken. This is done in order to estimate the environmental conditions that directly influence the choices to be made during the project phase at hand, and to arrive at an effective and efficient solution to the stated problems and goals. In later phases of a project, more detailed and thorough metocean studies may be needed, depending on whether additional gain is expected with respect to the successful and efficient completion of the project. Metocean conditions: Metocean conditions refer to the combined wind, wave and climate (etc.) conditions as found at a certain location. They are most often presented as statistics, including seasonal variations, scatter tables, wind roses and probabilities of exceedance. The metocean conditions may include, depending on the project and its location, statistics on: Meteorology: wind speed, direction, gustiness, wind rose and wind spectrum; air temperature; humidity; occurrence and strength of typhoons, hurricanes and (other) cyclones. Physical oceanography: water level fluctuations (historical, expected and seasonal sea level changes, storm surges, tides, tsunamis, seiches); wind waves (wind seas and swells), characterised by statistics like significant wave heights and periods, propagation directions and (directional) spectra; bathymetry; salinity, temperature and other constituents; stratification, density-driven currents and internal waves; ice occurrence, extent, thickness, strength and seabed gouging. Metocean data: The metocean conditions are preferably based on metocean data, which can come from measuring instruments deployed in or near the project area, global (re-analysis) models and remote sensing (often by satellites). For estimating probabilities of exceedance for relevant physical quantities, data covering extreme events over more than one year is needed. Metocean data: By use of validated numerical models, the availability of metocean data can be extended. For instance, consider the case of a coastal location where no wave measurements are available. If there is long-term wave data available at a nearby offshore location (e.g. from satellites), a wind wave model can be employed to transform the offshore wave statistics to the nearshore location (provided the bathymetry is known). Metocean data: Often, long-term local measurements of wave conditions due to extreme events (e.g. hurricanes) are missing. By using estimates for the wind fields during past extreme events, the corresponding wave conditions can be computed through wave hindcasts.
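As a small illustration of how a statistic such as probability of exceedance is derived from measured metocean data, the sketch below computes an empirical exceedance curve for significant wave height from a sample record. The numbers are invented for the example; a real study would use multi-year measurements or hindcast series, and would typically fit an extreme-value distribution rather than rely on raw counts.

```python
def exceedance_probability(samples, threshold):
    """Empirical fraction of observations strictly exceeding the threshold."""
    return sum(1 for s in samples if s > threshold) / len(samples)

# Invented significant wave heights (metres), e.g. a short run of hourly
# buoy records; real datasets span years, not a dozen points.
hs = [0.8, 1.1, 0.9, 2.3, 1.7, 3.1, 0.6, 1.4, 2.8, 4.2, 1.0, 0.7]

for t in (1.0, 2.0, 3.0, 4.0):
    p = exceedance_probability(hs, t)
    print(f"P(Hs > {t:.1f} m) = {p:.2f}")
```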
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ping Zhang (biologist)** Ping Zhang (biologist): Ping Zhang is an American structural biologist researching the structural and mechanistic basis of multi-component kinase signaling complexes that are linked to human cancers and other diseases, with a long-term goal of developing new therapeutic strategies. She is an NIH Stadtman Investigator in the Structural Biophysics Laboratory at the National Cancer Institute. Education: Zhang completed a Ph.D. in Michael Rossmann's lab at Purdue University in the field of biochemistry and structural virology. Her Ph.D. project was resolving the structures of poliovirus-receptor complexes using X-ray crystallography and cryogenic electron microscopy (cryo-EM). She completed her postdoctoral training at the Howard Hughes Medical Institute and in Susan S. Taylor's laboratory at the University of California, San Diego, working on a signal transduction system related to human diseases and learning other techniques in structural biology and cell signaling that are suited for studying dynamic signaling complexes. Career and research: Zhang was an assistant project scientist in the department of pharmacology at the University of California, San Diego. She joined the Structural Biophysics Laboratory at the National Cancer Institute (NCI) as an NIH Stadtman Tenure-Track Investigator in August 2016. Career and research: She researches the structural and mechanistic basis of multi-component kinase signaling complexes that are linked to human cancers and other diseases, with a long-term goal of developing new therapeutic strategies. Current research topics include the Raf family kinases, the leucine-rich repeat kinases and an oncogenic PKA kinase fusion protein. Zhang's lab applies integrated structural biology (single-particle cryo-electron microscopy and X-ray crystallography) and biochemical approaches to study these kinase complexes in their functional states. This strategy is used to reveal the mechanistic details and factors critical for driving the functional activities of these kinases and how these activities may be altered in pathological states.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Are We There Yet? (video game)** Are We There Yet? (video game): Are We There Yet? is a 1991 puzzle video game developed by Manley & Associates for IBM PC compatibles and published by Electronic Arts. Gameplay: Are We There Yet? is a game in which the Mallard family wins a coupon book to tourist traps, and must solve puzzles in each state before they can come home. Reception: Stanley Trevena reviewed the game for Computer Gaming World, and stated that "Are We There, Yet? is a puzzle bonanza that should be sampled by all conundrum connoisseurs." Reviews: ASM (Aktueller Software Markt), Feb. 1992; VideoGames & Computer Entertainment; Game Players; PC Entertainment; Compute!; Games #108
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Non sequitur (literary device)** Non sequitur (literary device): A non sequitur (English: non SEK-wit-ər, Classical Latin: [noːn ˈsɛkᶣɪtʊr]; "[it] does not follow") is a conversational literary device, often used for comedic purposes. It is something said that, because of its apparent lack of meaning relative to what preceded it, seems absurd to the point of being humorous or confusing. This use of the term is distinct from the non sequitur in logic, where it is a fallacy. Etymology: The expression is Latin for "[it] does not follow". It comes from the words non, meaning "not", and the verb sequi, meaning "to follow". Usage: A non sequitur can denote an abrupt, illogical, or unexpected turn in plot or dialogue by including a relatively inappropriate change in manner. A non sequitur joke sincerely has no explanation, but it reflects the idiosyncrasies, mental frames and alternative world of the particular comic persona. Comic artist Gary Larson's The Far Side cartoons are known for what Larson calls "...absurd, almost non sequitur animal" characters, such as talking cows, which he uses to create a "...weird, zany, ...bizarre, odd, strange" effect; in one strip, "two cows in a field gaze toward burning Chicago, saying 'It seems that agent 6373 had accomplished her mission.'"
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Washout (aeronautics)** Washout (aeronautics): Washout is a characteristic of aircraft wing design which deliberately modifies the lift distribution across the span of an aircraft's wing. The wing is designed so that the angle of incidence is greater at the wing roots and decreases across the span, becoming lowest at the wing tip. This is usually to ensure that at stall speed the wing root stalls before the wing tips, providing the aircraft with continued aileron control and some resistance to spinning. Washout may also be used to modify the spanwise lift distribution to reduce lift-induced drag. Design considerations: Washout is commonly achieved by designing the wing with a slight twist, reducing the angle of incidence from root to tip, and therefore causing a lower angle of attack at the tips than at the roots; a worked numerical illustration follows at the end of this article. This feature is sometimes referred to as structural washout, to distinguish it from aerodynamic washout. Design considerations: Wingtip stall is unlikely to occur symmetrically, especially if the aircraft is maneuvering. As an aircraft turns, the wing tip on the inside of the turn is moving more slowly and is most likely to stall. As an aircraft rolls, the descending wing tip is at a higher angle of attack and is most likely to stall. When one wing tip stalls it leads to wing drop, a rapid rolling motion. Also, roll control may be reduced if the airflow over the ailerons is disrupted by the stall, reducing their effectiveness. Design considerations: On aircraft with swept wings, wing tip stall also produces an undesirable nose-up pitching moment which hampers recovery from the stall. Washout may be accomplished by other means, e.g. a modified aerofoil section, vortex generators, leading edge wing fences, notches, or stall strips. This is referred to as aerodynamic washout. Its purpose is to tailor the spanwise lift distribution or reduce the probability of wing tip stall. Design considerations: Winglets have the opposite effect to washout. Winglets allow a greater proportion of lift to be generated near the wing tips. (This can be described as aerodynamic wash-in.) Winglets also promote a greater bending moment at the wing root, possibly necessitating a heavier wing structure. Installation of winglets may necessitate greater aerodynamic washout in order to provide the required resistance to spinning, or to optimise the spanwise lift distribution. Design considerations: The reverse twist (higher incidence at the wingtip), wash-in, can also be found in some designs, though it is less common. The Grumman X-29 had strong wash-in to compensate for the additional root-first stalling promoted by the forward sweep. Washout near the tips can also be used to decrease lift-induced drag, since at a lower angle of incidence the lift produced will be lower, and thus the component of that lift which acts against thrust is reduced. However, it has been theorised by Albion H. Bowers that certain washout characteristics at the tips that lead to a bell-shaped span loading may in fact produce lift-induced thrust and upwash. He thus suggests that birds do not utilise vertical stabilisers, since they do not need to counteract adverse yaw caused by lift-induced drag. Washout is also found in gliders and hang gliders. In helicopters, blade twist is used to reduce lift towards the blade tip, thus reducing unequal rotor lift distribution.
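The geometric effect of structural washout can be illustrated numerically, as promised above. Assuming a simple linear twist distribution (a common first approximation; the root incidence and washout values below are invented for the example), the local angle of incidence falls steadily from root to tip, so at a given aircraft attitude the tip operates several degrees further from the stalling angle than the root.

```python
def local_incidence(root_incidence_deg, washout_deg, spanwise_fraction):
    """Angle of incidence at a spanwise station, assuming linear twist.

    spanwise_fraction: 0.0 at the wing root, 1.0 at the wing tip.
    """
    return root_incidence_deg - washout_deg * spanwise_fraction

# Invented example: 4 deg of incidence at the root, 3 deg of total washout.
for eta in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"eta = {eta:.2f}: incidence = {local_incidence(4.0, 3.0, eta):.2f} deg")
```

Real wings often use non-linear twist distributions (and, as the article notes, aerodynamic rather than geometric means), but the linear sketch captures why the root reaches its stalling angle of attack first.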
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fröhlich effect** Fröhlich effect: The Fröhlich effect is a visual illusion wherein the first position of a moving object entering a window is misperceived. When observers are asked to localize the onset position of the moving target, they typically make localization errors in the direction of movement ("ahead" of its true localization).A proposed explanation for this effect is that the visual system is predictive, accounting for neural delays by extrapolating the trajectory of a moving stimulus into the future. In other words, when light from a moving object hits the retina, a certain amount of time is required before the object is perceived. In that time, the object has moved to a new location in the world. The motion extrapolation hypothesis asserts that the visual system will take care of such delays by extrapolating the position of moving objects forward in time. As such it is related to the flash lag illusion.
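Under the motion-extrapolation account just described, the size of the mislocalization is simply the distance the object travels during the neural processing delay. The one-line calculation below makes this concrete; the delay and speed values are illustrative, not measured constants from any particular study.

```python
def extrapolated_shift_deg(speed_deg_per_s: float, delay_s: float) -> float:
    """Predicted forward shift of the perceived onset position."""
    return speed_deg_per_s * delay_s

# Illustrative: a target moving at 10 deg/s of visual angle, with ~100 ms
# of neural delay, would be mislocalized about 1 degree ahead of its true
# onset position, in the direction of motion.
print(extrapolated_shift_deg(10.0, 0.100))  # 1.0
```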
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glycineamide ribonucleotide** Glycineamide ribonucleotide: Glycineamide ribonucleotide (or GAR) is a biochemical intermediate in the formation of purine nucleotides via inosine 5′-monophosphate, and hence is a building block for DNA and RNA. The vitamins thiamine and cobalamin also contain fragments derived from GAR. Glycineamide ribonucleotide: GAR is the product of the enzyme phosphoribosylamine—glycine ligase acting on phosphoribosylamine (PRA) to combine it with glycine in a process driven by ATP. The reaction (EC 6.3.4.13) forms an amide bond: PRA + glycine + ATP → GAR + ADP + Pi. The biosynthesis pathway next adds a formyl group from 10-formyltetrahydrofolate to GAR, catalysed by phosphoribosylglycinamide formyltransferase (EC 2.1.2.2), producing formylglycinamide ribotide (FGAR): GAR + 10-formyltetrahydrofolate → FGAR + tetrahydrofolate
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DnAnalytics** DnAnalytics: dnAnalytics is an open-source numerical library for .NET written in C# and F#. It features functionality similar to BLAS and LAPACK. Features: The software library provides facilities for: Linear algebra classes with support for sparse matrices and vectors (with an F#-friendly interface). Dense and sparse solvers. Probability distributions. Random number generation (including the Mersenne Twister MT19937). QR, LU, SVD, and Cholesky decomposition classes. Matrix IO classes that read and write matrices from/to Matlab, Matrix Market, and delimited files. Complex and “special” math routines. Descriptive statistics, histogram, and Pearson correlation coefficient. Overloaded mathematical operators to simplify complex expressions. Visual Studio visual debuggers for matrices and vectors. Runs under Microsoft Windows and platforms that support Mono. Optional support for the Intel Math Kernel Library (Microsoft Windows and Linux).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fiber-optic sensor** Fiber-optic sensor: A fiber-optic sensor is a sensor that uses optical fiber either as the sensing element ("intrinsic sensors"), or as a means of relaying signals from a remote sensor to the electronics that process the signals ("extrinsic sensors"). Fibers have many uses in remote sensing. Depending on the application, fiber may be used because of its small size, or because no electrical power is needed at the remote location, or because many sensors can be multiplexed along the length of a fiber by using a light wavelength shift for each sensor, or by sensing the time delay as light passes along the fiber through each sensor. Time delay can be determined using a device such as an optical time-domain reflectometer, and wavelength shift can be calculated using an instrument implementing optical frequency-domain reflectometry. Fiber-optic sensor: Fiber-optic sensors are also immune to electromagnetic interference and do not conduct electricity, so they can be used in places where there is high-voltage electricity or flammable material such as jet fuel. Fiber-optic sensors can be designed to withstand high temperatures as well. Intrinsic sensors: Optical fibers can be used as sensors to measure strain, temperature, pressure and other quantities by modifying a fiber so that the quantity to be measured modulates the intensity, phase, polarization, wavelength or transit time of light in the fiber. Sensors that vary the intensity of light are the simplest, since only a simple source and detector are required. A particularly useful feature of intrinsic fiber-optic sensors is that they can, if required, provide distributed sensing over very large distances. Temperature can be measured by using a fiber that has evanescent loss that varies with temperature, or by analyzing the Rayleigh scattering, Raman scattering or Brillouin scattering in the optical fiber. Electrical voltage can be sensed by nonlinear optical effects in specially doped fiber, which alter the polarization of light as a function of voltage or electric field. Angle measurement sensors can be based on the Sagnac effect. Intrinsic sensors: Special fibers like long-period fiber grating (LPG) optical fibers can be used for direction recognition; the Photonics Research Group of Aston University in the UK has published on vectorial bend sensor applications. Optical fibers are used as hydrophones for seismic and sonar applications. Hydrophone systems with more than one hundred sensors per fiber cable have been developed. Hydrophone sensor systems are used by the oil industry as well as a few countries' navies. Both bottom-mounted hydrophone arrays and towed streamer systems are in use. The German company Sennheiser developed a laser microphone for use with optical fibers. A fiber-optic microphone and fiber-optic based headphone are useful in areas with strong electrical or magnetic fields, such as communication amongst the team of people working on a patient inside a magnetic resonance imaging (MRI) machine during MRI-guided surgery. Optical fiber sensors for temperature and pressure have been developed for downhole measurement in oil wells. The fiber-optic sensor is well suited for this environment as it functions at temperatures too high for semiconductor sensors (distributed temperature sensing). Intrinsic sensors: Optical fibers can be made into interferometric sensors such as fiber-optic gyroscopes, which are used in the Boeing 767 and in some car models (for navigation purposes).
They are also used to make hydrogen sensors. Intrinsic sensors: Fiber-optic sensors have been developed to measure co-located temperature and strain simultaneously with very high accuracy using fiber Bragg gratings. This is particularly useful when acquiring information from small or complex structures. Fiber-optic sensors are also particularly well suited for remote monitoring, and they can be interrogated 290 km away from the monitoring station using an optical fiber cable. Brillouin scattering effects can also be used to detect strain and temperature over large distances (20–120 kilometers). Intrinsic sensors: Other examples: A fiber-optic AC/DC voltage sensor in the middle and high voltage range (100–2000 V) can be created by inducing measurable amounts of Kerr nonlinearity in single-mode optical fiber by exposing a calculated length of fiber to the external electric field. The measurement technique is based on polarimetric detection, and high accuracy is achieved in a hostile industrial environment. Intrinsic sensors: High-frequency (5 MHz–1 GHz) electromagnetic fields can be detected by induced nonlinear effects in fiber with a suitable structure. The fiber used is designed such that the Faraday and Kerr effects cause considerable phase change in the presence of the external field. With appropriate sensor design, this type of fiber can be used to measure different electrical and magnetic quantities and different internal parameters of the fiber material. Intrinsic sensors: Electrical power can be measured in a fiber by using a structured bulk fiber ampere sensor coupled with proper signal processing in a polarimetric detection scheme. Experiments have been carried out in support of the technique. Fiber-optic sensors are used in electrical switchgear to transmit light from an electrical arc flash to a digital protective relay to enable fast tripping of a breaker and so reduce the energy in the arc blast. Fiber Bragg grating (FBG) based fiber-optic sensors significantly enhance performance, efficiency and safety in several industries. With integrated FBG technology, sensors can provide detailed, high-resolution measurements and analysis. These types of sensors are used extensively in industries such as telecommunications, automotive, aerospace and energy. Fiber Bragg gratings are sensitive to static pressure, mechanical tension and compression, and fiber temperature changes. The efficiency of FBG-based fiber-optic sensors can be improved by adjusting the central wavelength of the light-emitting source to match the current reflection spectra of the Bragg gratings. Extrinsic sensors: Extrinsic fiber-optic sensors use an optical fiber cable, normally a multimode one, to transmit modulated light from either a non-fiber optical sensor or an electronic sensor connected to an optical transmitter. A major benefit of extrinsic sensors is their ability to reach places which are otherwise inaccessible. An example is the measurement of temperature inside aircraft jet engines by using a fiber to transmit radiation into a radiation pyrometer located outside the engine. Extrinsic sensors can also be used in the same way to measure the internal temperature of electrical transformers, where the extreme electromagnetic fields present make other measurement techniques impossible. Extrinsic sensors: Extrinsic fiber-optic sensors provide excellent protection of measurement signals against noise corruption.
Unfortunately, many conventional sensors produce electrical output which must be converted into an optical signal for use with fiber. For example, in the case of a platinum resistance thermometer (PRT), temperature changes are translated into resistance changes. The PRT must therefore have an electrical power supply. The modulated voltage level at the output of the PRT can then be injected into the optical fiber via the usual type of transmitter. This complicates the measurement process and means that low-voltage power cables must be routed to the transducer. Extrinsic sensors: Extrinsic sensors are used to measure vibration, rotation, displacement, velocity, acceleration, torque, and temperature. Chemical sensors and biosensors: It is well known that the propagation of light in an optical fiber is confined to the core of the fiber by the total internal reflection (TIR) principle, with near-zero propagation loss into the cladding; this is very important for optical communication but limits sensing applications, because the light does not interact with the surroundings. It is therefore essential to exploit novel fiber-optic structures that disturb the light propagation, enabling the interaction of the light with the surroundings and so allowing fiber-optic sensors to be constructed. To date, several methods, including polishing, chemical etching, tapering, bending, as well as femtosecond grating inscription, have been proposed to tailor the light propagation and promote the interaction of light with sensing materials. In these fiber-optic structures, enhanced evanescent fields can be efficiently excited, inducing the light to be exposed to and interact with the surrounding medium. However, the fibers themselves can only sense very few kinds of analytes, with low sensitivity and no selectivity, which greatly limits their development and applications, especially for biosensors that require both high sensitivity and high selectivity. To overcome this issue, an efficient approach is to resort to responsive materials, which possess the ability to change their properties, such as refractive index (RI), absorption, or conductivity, once the surrounding environment changes. Due to the rapid progress of functional materials in recent years, various sensing materials are available for the fabrication of fiber-optic chemical sensors and biosensors, including graphene, metals and metal oxides, carbon nanotubes, nanowires, nanoparticles, polymers, and quantum dots. Generally, these materials reversibly change their shape/volume upon stimulation by the surrounding environment (the target analytes), which then leads to a variation in the RI or absorption of the sensing material. Consequently, the surrounding changes are recorded and interrogated by the optical fiber, realizing the sensing function. Currently, various fiber-optic chemical sensors and biosensors have been proposed and demonstrated.
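Since fiber Bragg gratings come up repeatedly above, a small sketch of the underlying relation may help: a grating reflects the Bragg wavelength λ_B = 2·n_eff·Λ, and strain or temperature shifts that wavelength approximately linearly. The coefficients below are typical textbook orders of magnitude for silica fiber near 1550 nm, not measurements from any particular sensor.

```python
# Sketch of fiber Bragg grating (FBG) sensing: the reflected (Bragg)
# wavelength is lambda_B = 2 * n_eff * Lambda, and small strains or
# temperature changes shift it roughly linearly. Coefficients below are
# assumed typical values for silica fiber near 1550 nm.

n_eff = 1.447               # effective refractive index of the fiber core
grating_period_nm = 535.6   # grating period Lambda (chosen to give ~1550 nm)

lambda_bragg_nm = 2 * n_eff * grating_period_nm
print(f"Bragg wavelength: {lambda_bragg_nm:.1f} nm")

# Approximate sensitivities (assumed typical values):
strain_sens_pm_per_ue = 1.2    # pm per microstrain
temp_sens_pm_per_K = 13.0      # pm per kelvin

delta_strain_ue = 100.0        # applied strain, microstrain (example)
delta_temp_K = 5.0             # temperature change, kelvin (example)

shift_pm = (strain_sens_pm_per_ue * delta_strain_ue
            + temp_sens_pm_per_K * delta_temp_K)
print(f"Predicted wavelength shift: {shift_pm:.0f} pm")
```

Because a single grating responds to both strain and temperature, the simultaneous co-located measurement described above typically uses two gratings with different sensitivities and solves the resulting pair of linear equations.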
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Automated journalism** Automated journalism: Automated journalism, also known as algorithmic journalism or robot journalism, is a term that attempts to describe modern technological processes that have infiltrated the journalistic profession, such as news articles generated by computer programs. There are four main fields of application for automated journalism, namely automated content production, data mining, news dissemination and content optimization. Through artificial intelligence (AI) software, stories are produced automatically by computers rather than human reporters. These programs interpret, organize, and present data in human-readable ways. Typically, the process involves an algorithm that scans large amounts of provided data, selects from an assortment of pre-programmed article structures, orders key points, and inserts details such as names, places, amounts, rankings, statistics, and other figures. The output can also be customized to fit a certain voice, tone, or style. Data science and AI companies such as Automated Insights, Narrative Science, United Robots and Monok develop and provide these algorithms to news outlets. As of 2016, only a few media organizations had used automated journalism. Early adopters include news providers such as the Associated Press, Forbes, ProPublica, and the Los Angeles Times. Early implementations were mainly used for stories based on statistics and numerical figures. Common topics include sports recaps, weather, financial reports, real estate analysis, and earnings reviews. StatSheet, an online platform covering college basketball, runs entirely on an automated program. The Associated Press began using automation to cover 10,000 minor league baseball games annually, using a program from Automated Insights and statistics from MLB Advanced Media. Outside of sports, the Associated Press also uses automation to produce stories on corporate earnings. In 2006, Thomson Reuters announced its switch to automation to generate financial news stories on its online news platform. More famously, an algorithm called Quakebot published a story about a 2014 California earthquake on The Los Angeles Times website within three minutes after the shaking had stopped. Automated journalism is sometimes seen as an opportunity to free journalists from routine reporting, providing them with more time for complex tasks. It also allows efficiency and cost-cutting, alleviating some of the financial burden that many news organizations face. However, automated journalism is also perceived as a threat to the authorship and quality of news and to the livelihoods of human journalists. Benefits: Speed Robot reporters are built to produce large quantities of information at quicker speeds. The Associated Press announced that its use of automation has increased the volume of earnings reports from customers by more than ten times. With software from Automated Insights and data from other companies, they can produce 150- to 300-word articles in the same time it takes journalists to crunch numbers and prepare information. By automating routine stories and tasks, journalists are promised more time for complex jobs such as investigative reporting and in-depth analysis of events. Francesco Marconi of the Associated Press stated that, through automation, the news agency freed up 20 percent of reporters' time to focus on higher-impact projects. Benefits: Cost Automated journalism is cheaper because more content can be produced within less time.
It also lowers labour costs for news organizations. Reduced human input means fewer expenses for wages or salaries, paid leave, vacations, and employment insurance. Automation serves as a cost-cutting tool for news outlets that struggle with tight budgets but still wish to maintain the scope and quality of their coverage. Criticisms: Authorship In an automated story, there is often confusion about who should be credited as the author. Several participants of a study on algorithmic authorship attributed the credit to the programmer; others perceived the news organization as the author, emphasizing the collaborative nature of the work. There is also no way for the reader to verify whether an article was written by a robot or a human, which raises issues of transparency, although such issues also arise with respect to authorship attribution between human authors. Criticisms: Credibility and quality Concerns about the perceived credibility of automated news are similar to concerns about the perceived credibility of news in general. Critics doubt whether algorithms are "fair and accurate, free from subjectivity, error, or attempted influence." Again, these issues of fairness, accuracy, subjectivity, error, and attempts at influence or propaganda have also been present in articles written by humans over thousands of years. A common criticism is that machines do not replace human capabilities such as creativity, humour, and critical thinking. However, as the technology evolves, the aim is to mimic human characteristics. When the UK's Guardian newspaper used an AI to write an entire article in September 2020, commentators pointed out that the AI still relied on human editorial content. Austin Tanney, the head of AI at Kainos, said: "The Guardian got three or four different articles and spliced them together. They also gave it the opening paragraph. It doesn't belittle what it is. It was written by AI, but there was human editorial on that." Beyond human evaluation, there are now numerous algorithmic methods to identify machine-written articles. Although some machine-written articles may still contain errors that are obvious for a human to identify, they can at times score better with these automatic identifiers than human-written articles do. Criticisms: Employment Among the concerns about automation is the loss of employment for journalists as publishers switch to using AIs. The use of automation has become a near necessity in newsrooms in order to keep up with the ever-increasing demand for news stories, which in turn has affected the very nature of the journalistic profession. In 2014, an annual census from The American Society of News Editors announced that the newspaper industry had lost 3,800 full-time professional editors. Falling by more than 10% within a year, this was the biggest drop since the industry cut over 10,000 jobs in 2007 and 2008. Criticisms: Dependence on platform and technology companies There has been a significant amount of recent scholarship on the relationship between platform companies, such as Google and Facebook, and the news industry, with researchers examining the impact of these platforms on the distribution and monetization of news content, as well as the implications for journalism and democracy. Some scholars have extended this line of thinking to automated journalism and the use of AI in the news.
A 2022 paper by the Oxford University academic Felix Simon, for example, argues that the concentration of AI tools and infrastructure in the hands of a few major technology companies, such as Google, Microsoft, and Amazon Web Services, is a significant issue for the news industry, as it risks shifting more control to these companies and increasing the industry's dependence on them. Simon argues that this could lead to vendor lock-in, where news organizations become structurally dependent on AI provided by these companies and are unable to switch to another vendor without incurring significant costs. The companies also possess artefactual and contractual control over their AI infrastructure and services, which could expose news organizations to the risk of unforeseen changes or the stopping of their AI solutions entirely. Additionally, the author argues the reliance on these companies for AI can make it more difficult for news organizations to understand the decisions or predictions made by the systems and can limit their ability to protect sources or proprietary business information. Opinions on automated journalism: A 2017 Nieman Reports article by Nicola Bruno discusses whether or not machines will replace journalists and addresses concerns around the concept of automated journalism practices. Ultimately, Bruno came to the conclusion that AI would assist journalists, not replace them. "No automated software or amateur reporter will ever replace a good journalist", she said. Opinions on automated journalism: In 2020, however, Microsoft did just that - replacing 27 journalists with AI. One staff member was quoted by The Guardian as saying: “I spend all my time reading about how automation and AI is going to take all our jobs, and here I am – AI has taken my job.” The journalist went on to say that replacing humans with software was risky, as existing staff were careful to stick to “very strict editorial guidelines” which ensured that users were not presented with violent or inappropriate content when opening their browser, for example. List of implementations: In May 2020, Microsoft announced that a number of its MSN contract journalists would be replaced by robot journalism. On 8 September 2020, The Guardian published an article entirely written by the neural network GPT-3, although the published fragments were manually picked by a human editor.
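To make the earlier description of the production pipeline concrete (scan structured data, select a pre-programmed story structure, insert names and figures), here is a minimal sketch of template-based generation. The template, names and numbers are invented for illustration and are not taken from any real vendor's system.

```python
# Minimal sketch of template-based automated journalism: structured data
# is slotted into a pre-programmed story structure. All names and numbers
# here are invented illustrations, not output of any real vendor's system.

GAME_RECAP = (
    "{winner} defeated {loser} {w_score}-{l_score} on {day}. "
    "{star} led {winner} with {points} points."
)

def generate_recap(data: dict) -> str:
    # A real system would select among many structures and order key
    # points first; this sketch uses a single fixed template.
    return GAME_RECAP.format(**data)

game = {
    "winner": "Springfield Hawks", "loser": "Shelbyville Comets",
    "w_score": 98, "l_score": 91, "day": "Saturday",
    "star": "J. Doe", "points": 31,
}

print(generate_recap(game))
```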
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Positional notation** Positional notation: Positional notation (or place-value notation, or positional numeral system) usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the value may be negated if placed before another digit). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string. Positional notation: The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits. Positional notation: Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers. The use of a radix point (decimal point in base ten) extends the notation to include fractions and allows representing any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe. History: Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous. Other bases have been used in the past, and some continue to be used today. For example, the Babylonian numeral system, credited as the first positional numeral system, was base-60. However, it lacked a real zero. Initially inferred only from context, later, by about 700 BC, zero came to be indicated by a "space" or a "punctuation symbol" (such as two slanted wedges) between numerals. It was a placeholder rather than a true zero because it was not used alone or at the end of a number. Numbers like 2 and 120 (2×60) looked the same because the larger number lacked a final placeholder. Only context could differentiate them. History: The polymath Archimedes (ca. 287–212 BC) invented a decimal positional system in his Sand Reckoner which was based on 10⁸; this later led the German mathematician Carl Friedrich Gauss to lament what heights science would have already reached in his days if Archimedes had fully realized the potential of his ingenious discovery. Before positional notation became standard, simple additive systems (sign-value notation) such as Roman numerals were used, and accountants in ancient Rome and during the Middle Ages used the abacus or stone counters to do arithmetic. History: Counting rods and most abacuses have been used to represent numbers in a positional numeral system.
With counting rods or an abacus to perform arithmetic operations, the writing of the starting, intermediate and final values of a calculation could easily be done with a simple additive system in each position or column. This approach required no memorization of tables (as does positional notation) and could produce practical results quickly. The oldest extant positional notation system is either that of Chinese rod numerals, used from at least the early 8th century, or perhaps that of Khmer numerals, showing possible use of positional numbers in the 7th century. Khmer numerals and other Indian numerals originate with the Brahmi numerals of about the 3rd century BC, whose symbols were, at the time, not used positionally. Medieval Indian numerals are positional, as are the derived Arabic numerals, recorded from the 10th century. History: After the French Revolution (1789–1799), the new French government promoted the extension of the decimal system. Some of those pro-decimal efforts, such as decimal time and the decimal calendar, were unsuccessful. Other French pro-decimal efforts, currency decimalisation and the metrication of weights and measures, spread widely out of France to almost the whole world. History: History of positional fractions J. Lennart Berggren notes that positional decimal fractions were used for the first time by the Arab mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century. The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350, but did not develop any notation to represent them. The Persian mathematician Jamshīd al-Kāshī made the same discovery of decimal fractions in the 15th century. Al Khwarizmi introduced fractions to Islamic countries in the early 9th century; his fraction presentation was similar to the traditional Chinese mathematical fractions from Sunzi Suanjing. This form of fraction, with the numerator on top and the denominator at the bottom without a horizontal bar, was also used in the 10th century by Abu'l-Hasan al-Uqlidisi and in the 15th century in Jamshīd al-Kāshī's work "Arithmetic Key". History: The adoption of the decimal representation of numbers less than one, a fraction, is often credited to Simon Stevin through his textbook De Thiende; but both Stevin and E. J. Dijksterhuis indicate that Regiomontanus contributed to the European adoption of general decimals: European mathematicians, when taking over from the Hindus, via the Arabs, the idea of positional value for integers, neglected to extend this idea to fractions. For some centuries they confined themselves to using common and sexagesimal fractions... This half-heartedness has never been completely overcome, and sexagesimal fractions still form the basis of our trigonometry, astronomy and measurement of time. ¶ ... Mathematicians sought to avoid fractions by taking the radius R equal to a number of units of length of the form 10ⁿ and then assuming for n so great an integral value that all occurring quantities could be expressed with sufficient accuracy by integers. ¶ The first to apply this method was the German astronomer Regiomontanus. To the extent that he expressed goniometrical line-segments in a unit R/10ⁿ, Regiomontanus may be called an anticipator of the doctrine of decimal positional fractions. In the estimation of Dijksterhuis, "after the publication of De Thiende only a small advance was required to establish the complete system of decimal positional fractions, and this step was taken promptly by a number of writers ...
next to Stevin the most important figure in this development was Regiomontanus." Dijksterhuis noted that [Stevin] "gives full credit to Regiomontanus for his prior contribution, saying that the trigonometric tables of the German astronomer actually contain the whole theory of 'numbers of the tenth progress'." Mathematics: Base of the numeral system In mathematical numeral systems the radix r is usually the number of unique digits, including zero, that a positional numeral system uses to represent numbers. In some cases, such as with a negative base, the radix is the absolute value r = |b| of the base b. For example, for the decimal system the radix (and base) is ten, because it uses the ten digits from 0 through 9. When a number "hits" 9, the next number will not be another different symbol, but a "1" followed by a "0". In binary, the radix is two, since after it hits "1", instead of "2" or another written symbol, it jumps straight to "10", followed by "11" and "100". Mathematics: The highest symbol of a positional numeral system usually has the value one less than the value of the radix of that numeral system. The standard positional numeral systems differ from one another only in the base they use. The radix is an integer that is greater than 1, since a radix of zero would not have any digits, and a radix of 1 would only have the zero digit. Negative bases are rarely used. In a system with more than |b| unique digits, numbers may have many different possible representations. It is important that the radix is finite, from which it follows that the number of digits is quite low. Otherwise, the length of a numeral would not necessarily be logarithmic in its size. Mathematics: (In certain non-standard positional numeral systems, including bijective numeration, the definition of the base or the allowed digits deviates from the above.) In standard base-ten (decimal) positional notation, there are ten decimal digits and the number 5305 represents (5 × 10³) + (3 × 10²) + (0 × 10¹) + (5 × 10⁰). In standard base-sixteen (hexadecimal), there are the sixteen hexadecimal digits (0–9 and A–F) and the number 14B9₁₆ represents (1 × 16³) + (4 × 16²) + (B × 16¹) + (9 × 16⁰) (= 5305 dec), where B represents the number eleven as a single symbol. Mathematics: In general, in base b, there are b digits D = {0, 1, ..., b−1} and the number (a₃a₂a₁a₀)_b = (a₃ × b³) + (a₂ × b²) + (a₁ × b¹) + (a₀ × b⁰), where every aₖ ∈ D. Note that a₃a₂a₁a₀ represents a sequence of digits, not multiplication. Mathematics: Notation When describing base in mathematical notation, the letter b is generally used as a symbol for this concept, so, for a binary system, b equals 2. Another common way of expressing the base is writing it as a decimal subscript after the number that is being represented (this notation is used in this article). 1111011₂ implies that the number 1111011 is a base-2 number, equal to 123₁₀ (a decimal notation representation), 173₈ (octal) and 7B₁₆ (hexadecimal). In books and articles, when using initially the written abbreviations of number bases, the base is not subsequently printed: it is assumed that binary 1111011 is the same as 1111011₂. Mathematics: The base b may also be indicated by the phrase "base-b". So binary numbers are "base-2"; octal numbers are "base-8"; decimal numbers are "base-10"; and so on. Mathematics: To a given radix b the set of digits {0, 1, ..., b−2, b−1} is called the standard set of digits. Thus, binary numbers have digits {0, 1}; decimal numbers have digits {0, 1, 2, ..., 8, 9}; and so on. Therefore, the following are notational errors: 52₂, 2₂, 1A₉.
(In all cases, one or more digits is not in the set of allowed digits for the given base.) Exponentiation Positional numeral systems work using exponentiation of the base. A digit's value is the digit multiplied by the value of its place. Place values are the number of the base raised to the nth power, where n is the number of other digits between a given digit and the radix point. If a given digit is on the left hand side of the radix point (i.e. its value is an integer) then n is positive or zero; if the digit is on the right hand side of the radix point (i.e., its value is fractional) then n is negative. Mathematics: As an example of usage, the number 465 in its respective base b (which must be at least base 7, because the highest digit in it is 6) is equal to: 4×b² + 6×b¹ + 5×b⁰. If the number 465 were in base 10, then it would equal 4×10² + 6×10¹ + 5×10⁰ = 400 + 60 + 5 = 465 (465₁₀ = 465₁₀). If, however, the number were in base 7, then it would equal 4×7² + 6×7¹ + 5×7⁰ = 196 + 42 + 5 = 243 (465₇ = 243₁₀). 10_b = b for any base b, since 10_b = 1×b¹ + 0×b⁰. For example, 10₂ = 2; 10₃ = 3; 10₁₆ = 16₁₀. Note that the last "16" is indicated to be in base 10. The base makes no difference for one-digit numerals. Mathematics: This concept can be demonstrated using a diagram. One object represents one unit. When the number of objects is equal to or greater than the base b, then a group of objects is created with b objects. When the number of these groups exceeds b, then a group of these groups of objects is created with b groups of b objects; and so on. Thus the same numeral in different bases denotes different values: 241 in base 5 is 2 groups of 5² (25), plus 4 groups of 5, plus 1 unit (241₅ = 71₁₀), whereas 241 in base 8 is 2 groups of 8² (64), plus 4 groups of 8, plus 1 unit (241₈ = 161₁₀). The notation can be further augmented by allowing a leading minus sign. This allows the representation of negative numbers. For a given base, every representation corresponds to exactly one real number and every real number has at least one representation. The representations of rational numbers are those representations that are finite, use the bar notation, or end with an infinitely repeating cycle of digits. Mathematics: Digits and numerals A digit is a symbol that is used for positional notation, and a numeral consists of one or more digits used for representing a number with positional notation. Today's most common digits are the decimal digits "0", "1", "2", "3", "4", "5", "6", "7", "8", and "9". The distinction between a digit and a numeral is most pronounced in the context of a number base. Mathematics: A non-zero numeral with more than one digit position will mean a different number in a different number base, but in general, the digits will mean the same. For example, the base-8 numeral 23₈ contains two digits, "2" and "3", and a base number (subscripted) "8". When converted to base 10, 23₈ is equivalent to 19₁₀, i.e. 23₈ = 19₁₀. In our notation here, the subscript "8" of the numeral 23₈ is part of the numeral, but this may not always be the case. Mathematics: Imagine the numeral "23" as having an ambiguous base number. Then "23" could likely be any base, from base 4 up. In base 4, "23" means 11₁₀, i.e. 23₄ = 11₁₀. In base 60, "23" means the number 123₁₀, i.e. 23₆₀ = 123₁₀.
The numeral "23" then, in this case, corresponds to the set of base-10 numbers {11, 13, 15, 17, 19, 21, 23, ..., 121, 123} while its digits "2" and "3" always retain their original meaning: the "2" means "two of", and the "3" means "three of". Mathematics: In certain applications when a numeral with a fixed number of positions needs to represent a greater number, a higher number-base with more digits per position can be used. A three-digit, decimal numeral can represent only up to 999. But if the number-base is increased to 11, say, by adding the digit "A", then the same three positions, maximized to "AAA", can represent a number as great as 1330. We could increase the number base again and assign "B" to 11, and so on (but there is also a possible encryption between number and digit in the number-digit-numeral hierarchy). A three-digit numeral "ZZZ" in base-60 could mean 215999. If we use the entire collection of our alphanumerics we could ultimately serve a base-62 numeral system, but we remove two digits, uppercase "I" and uppercase "O", to reduce confusion with digits "1" and "0". Mathematics: We are left with a base-60, or sexagesimal numeral system utilizing 60 of the 62 standard alphanumerics. (But see Sexagesimal system below.) In general, the number of possible values that can be represented by a d digit number in base r is rd The common numeral systems in computer science are binary (radix 2), octal (radix 8), and hexadecimal (radix 16). In binary only digits "0" and "1" are in the numerals. In the octal numerals, are the eight digits 0–7. Hex is 0–9 A–F, where the ten numerics retain their usual meaning, and the alphabetics correspond to values 10–15, for a total of sixteen digits. The numeral "10" is binary numeral "2", octal numeral "8", or hexadecimal numeral "16". Mathematics: Radix point The notation can be extended into the negative exponents of the base b. Thereby the so-called radix point, mostly ».«, is used as separator of the positions with non-negative from those with negative exponent. Mathematics: Numbers that are not integers use places beyond the radix point. For every position behind this point (and thus after the units digit), the exponent n of the power bn decreases by 1 and the power approaches 0. For example, the number 2.35 is equal to: 10 10 10 −2 Sign If the base and all the digits in the set of digits are non-negative, negative numbers cannot be expressed. To overcome this, a minus sign, here »-«, is added to the numeral system. In the usual notation it is prepended to the string of digits representing the otherwise non-negative number. Mathematics: Base conversion The conversion to a base b2 of an integer n represented in base b1 can be done by a succession of Euclidean divisions by b2: the right-most digit in base b2 is the remainder of the division of n by b2; the second right-most digit is the remainder of the division of the quotient by b2, and so on. The left-most digit is the last quotient. In general, the kth digit from the right is the remainder of the division by b2 of the (k−1)th quotient. Mathematics: For example: converting A10BHex to decimal (41227): 0xA10B/10 = 0x101A R: 7 (ones place) 0x101A/10 = 0x19C R: 2 (tens place) 0x19C/10 = 0x29 R: 2 (hundreds place) 0x29/10 = 0x4 R: 1 ... Mathematics: When converting to a larger base (such as from binary to decimal), the remainder represents b2 as a single digit, using digits from b1 . 
For example, converting 0b11111001 (binary) to 249 (decimal): 0b11111001/10 = 0b11000 R: 0b1001 (0b1001 = "9" for the ones place); 0b11000/10 = 0b10 R: 0b100 (0b100 = "4" for the tens place); 0b10/10 = 0b0 R: 0b10 (0b10 = "2" for the hundreds place). For the fractional part, conversion can be done by taking the digits after the radix point (the numerator) and dividing by the implied denominator in the target radix. Approximation may be needed, since the digits may not terminate if the reduced fraction's denominator has a prime factor other than the prime factors of the base being converted to. For example, 0.1 in decimal (1/10) is 0b1/0b1010 in binary; dividing this in that radix gives the repeating fraction 0b0.00011 0011 0011 ... (because one of the prime factors of 10 is 5, which is not a factor of 2). For more general fractions and bases see the algorithm for positive bases. Mathematics: In practice, Horner's method is more efficient than the repeated division required above. A number in positional notation can be thought of as a polynomial, where each digit is a coefficient. Coefficients can be larger than one digit, so an efficient way to convert bases is to convert each digit, then evaluate the polynomial via Horner's method within the target base. Converting each digit is a simple lookup table, removing the need for expensive division or modulus operations; and multiplication by x becomes left-shifting. However, other polynomial evaluation algorithms would work as well, like repeated squaring for single or sparse digits. Example: Convert 0xA10B to 41227. A10B = (10×16³) + (1×16²) + (0×16¹) + (11×16⁰). Lookup table: 0x0 = 0, 0x1 = 1, ..., 0x9 = 9, 0xA = 10, 0xB = 11, 0xC = 12, 0xD = 13, 0xE = 14, 0xF = 15. Therefore 0xA10B's decimal digits are 10, 1, 0, and 11. Mathematics: Lay out the digits and "drop" the most significant digit (10) into a running total; then repeatedly multiply the running total by the source base (16) and add the next digit: 10; 10 × 16 + 1 = 161; 161 × 16 + 0 = 2576; 2576 × 16 + 11 = 41227; and that is 41227 in decimal. Mathematics: Convert 0b11111001 to 249. Lookup table: 0b0 = 0, 0b1 = 1. Running total over the digits 1 1 1 1 1 0 0 1: 1; 1 × 2 + 1 = 3; 3 × 2 + 1 = 7; 7 × 2 + 1 = 15; 15 × 2 + 1 = 31; 31 × 2 + 0 = 62; 62 × 2 + 0 = 124; 124 × 2 + 1 = 249. Terminating fractions: The numbers which have a finite representation form the semiring 𝔹 := { m·b^(−ν) | m ∈ ℕ₀ ∧ ν ∈ ℕ₀ }. Mathematics: More explicitly, if b = p₁^(ν₁) · … · p_n^(ν_n) is a factorization of b into the primes p₁, …, p_n ∈ ℙ with exponents ν₁, …, ν_n ∈ ℕ, then, with the non-empty set of denominators S := {p₁, …, p_n}, we have ℤ_S := { x ∈ ℚ | ∃ μᵢ ∈ ℤ : x · ∏ᵢ₌₁ⁿ pᵢ^(μᵢ) ∈ ℤ } = ⟨S⟩⁻¹ℤ, where ⟨S⟩ is the multiplicative group generated by the p ∈ S and ⟨S⟩⁻¹ℤ is the so-called localization of ℤ with respect to S. The denominator of an element of ℤ_S, if reduced to lowest terms, contains only prime factors out of S. This ring of all terminating fractions to base b is dense in the field of rational numbers ℚ. Its completion for the usual (Archimedean) metric is the same as for ℚ, namely the real numbers ℝ. So, if S = {p}, then ℤ_{p} is not to be confused with ℤ_(p), the discrete valuation ring for the prime p, which is equal to ℤ_T with T = ℙ∖{p}. If b divides c, then every fraction that terminates in base b also terminates in base c. Mathematics: Infinite representations Rational numbers The representation of non-integers can be extended to allow an infinite string of digits beyond the point. For example, 1.12112111211112 ...
base-3 represents the sum of the infinite series: 1×3⁰ + 1×3⁻¹ + 2×3⁻² + 1×3⁻³ + 1×3⁻⁴ + 2×3⁻⁵ + ⋯ Since a complete infinite string of digits cannot be explicitly written, the trailing ellipsis (...) designates the omitted digits, which may or may not follow a pattern of some kind. One common pattern is when a finite sequence of digits repeats infinitely. This is designated by drawing a vinculum across the repeating block: 2.42314314314314314…₅ may be written as 2.42314₅ with a vinculum over the block "314". This is the repeating decimal notation (for which there does not exist a single universally accepted notation or phrasing). Mathematics: For base 10 it is called a repeating decimal or recurring decimal. An irrational number has an infinite non-repeating representation in all integer bases. Whether a rational number has a finite representation or requires an infinite repeating representation depends on the base. For example, one third can be represented by: 0.1₃; 0.3333333…₁₀, or, with the base implied, 0.3333333… (see also 0.999...); 0.010101…₂; 0.2₆. For integers p and q with gcd(p, q) = 1, the fraction p/q has a finite representation in base b if and only if each prime factor of q is also a prime factor of b. Mathematics: For a given base, any number that can be represented by a finite number of digits (without using the bar notation) will have multiple representations, including one or two infinite representations: 1. A finite or infinite number of zeroes can be appended: 3.46₇ = 3.460₇ = 3.460000₇ = 3.46000…₇. 2. The last non-zero digit can be reduced by one and an infinite string of digits, each corresponding to one less than the base, appended (or replacing any following zero digits): 3.46₇ = 3.45666…₇; 1₁₀ = 0.999…₁₀ (see also 0.999...); 220₅ = 214.444…₅. Irrational numbers A (real) irrational number has an infinite non-repeating representation in all integer bases. Examples are the non-solvable nth roots y = ⁿ√x with yⁿ = x and y ∉ ℚ, numbers which are called algebraic, or numbers like π and e, which are transcendental. The number of transcendentals is uncountable, and the sole way to write them down with a finite number of symbols is to give them a symbol or a finite sequence of symbols. Applications: Decimal system In the decimal (base-10) Hindu–Arabic numeral system, each position starting from the right is a higher power of 10. The first position represents 10⁰ (1), the second position 10¹ (10), the third position 10² (10 × 10 or 100), the fourth position 10³ (10 × 10 × 10 or 1000), and so on. Applications: Fractional values are indicated by a separator, which can vary in different locations. Usually this separator is a period or full stop, or a comma. Digits to the right of it are multiplied by 10 raised to a negative power or exponent. The first position to the right of the separator indicates 10⁻¹ (0.1), the second position 10⁻² (0.01), and so on for each successive position. Applications: As an example, the number 2674 in a base-10 numeral system is: (2 × 10³) + (6 × 10²) + (7 × 10¹) + (4 × 10⁰) or (2 × 1000) + (6 × 100) + (7 × 10) + (4 × 1). Sexagesimal system The sexagesimal or base-60 system was used for the integral and fractional portions of Babylonian numerals and other Mesopotamian systems, by Hellenistic astronomers using Greek numerals for the fractional portion only, and is still used for modern time and angles, but only for minutes and seconds. However, not all of these uses were positional. Applications: Modern time separates each position by a colon or a prime symbol. For example, the time might be 10:25:59 (10 hours 25 minutes 59 seconds).
Angles use similar notation. For example, an angle might be 10°25′59″ (10 degrees 25 minutes 59 seconds). In both cases, only minutes and seconds use sexagesimal notation; angular degrees can be larger than 59 (one rotation around a circle is 360°, two rotations are 720°, etc.), and both time and angles use decimal fractions of a second. This contrasts with the numbers used by Hellenistic and Renaissance astronomers, who used thirds, fourths, etc. for finer increments. Where we might write 10°25′59.392″, they would have written 10°25′59′′23′′′31′′′′12′′′′′ or 10°25i59ii23iii31iv12v. Applications: Using a digit set with upper- and lowercase letters allows short notation for sexagesimal numbers, e.g. 10:25:59 becomes 'ARz' (by omitting I and O, but not i and o), which is useful for use in URLs, etc., but it is not very intelligible to humans. Applications: In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integral and fractional portions of the number and using a comma (,) to separate the positions within each portion. For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days, and the angle used in the example above would be written 10;25,59,23,31,12 degrees. Applications: Computing In computing, the binary (base-2), octal (base-8) and hexadecimal (base-16) bases are most commonly used. Computers, at the most basic level, deal only with sequences of conventional zeroes and ones, thus it is easier in this sense to deal with powers of two. The hexadecimal system is used as "shorthand" for binary: every 4 binary digits (bits) relate to one and only one hexadecimal digit. In hexadecimal, the six digits after 9 are denoted by A, B, C, D, E, and F (and sometimes a, b, c, d, e, and f). Applications: The octal numbering system is also used as another way to represent binary numbers. In this case the base is 8 and therefore only the digits 0, 1, 2, 3, 4, 5, 6, and 7 are used. When converting from binary to octal, every 3 bits relate to one and only one octal digit. Hexadecimal, decimal, octal, and a wide variety of other bases have been used for binary-to-text encoding, implementations of arbitrary-precision arithmetic, and other applications. For a list of bases and their applications, see list of numeral systems. Applications: Other bases in human language Base-12 systems (duodecimal or dozenal) have been popular because multiplication and division are easier than in base 10, with addition and subtraction being just as easy. Twelve is a useful base because it has many factors. It is the smallest common multiple of one, two, three, four and six. There is still a special word for "dozen" in English, and by analogy with the word for 10², hundred, commerce developed a word for 12², gross. The standard 12-hour clock and common use of 12 in English units emphasize the utility of the base. In addition, prior to its conversion to decimal, the old British currency Pound Sterling (GBP) partially used base 12; there were 12 pence (d) in a shilling (s), 20 shillings in a pound (£), and therefore 240 pence in a pound. Hence the term LSD or, more properly, £sd.
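Two points above lend themselves to a short sketch: conversion of an integer to an arbitrary base by repeated Euclidean division (as in the Base conversion passage), and the 60-symbol alphanumeric digit set just described. The helper names below are illustrative, not a standard API.

```python
# Sketch combining two ideas from this article: conversion of an integer
# to any base by repeated Euclidean division, and the 60-symbol digit set
# described above (0-9, uppercase letters minus I and O, then lowercase).

import string

UPPER = "".join(c for c in string.ascii_uppercase if c not in "IO")  # 24 letters
DIGITS60 = string.digits + UPPER + string.ascii_lowercase            # 60 symbols
assert len(DIGITS60) == 60

def to_base(n: int, base: int) -> list[int]:
    """Digit values of n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, base)   # remainder = next digit from the right
        digits.append(r)
    return digits[::-1]

# 10:25:59 -> 'ARz', matching the article's example:
print("".join(DIGITS60[v] for v in (10, 25, 59)))

# 41227 -> hex digit values [10, 1, 0, 11], i.e. 0xA10B, as computed above:
print(to_base(41227, 16))
```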
Applications: The Maya civilization and other civilizations of pre-Columbian Mesoamerica used base-20 (vigesimal), as did several North American tribes (two being in southern California). Evidence of base-20 counting systems is also found in the languages of central and western Africa. Applications: Remnants of a Gaulish base-20 system also exist in French, as seen today in the names of the numbers from 60 through 99. For example, sixty-five is soixante-cinq (literally, "sixty [and] five"), while seventy-five is soixante-quinze (literally, "sixty [and] fifteen"). Furthermore, for any number between 80 and 99, the "tens-column" number is expressed as a multiple of twenty. For example, eighty-two is quatre-vingt-deux (literally, four twenty[s] [and] two), while ninety-two is quatre-vingt-douze (literally, four twenty[s] [and] twelve). In Old French, forty was expressed as two twenties and sixty as three twenties, so that fifty-three was expressed as two twenties [and] thirteen, and so on. Applications: In English the same base-20 counting appears in the use of "scores". Although mostly historical, it is occasionally used colloquially. Verse 10 of Psalm 90 in the King James Version of the Bible starts: "The days of our years are threescore years and ten; and if by reason of strength they be fourscore years, yet is their strength labour and sorrow". The Gettysburg Address starts: "Four score and seven years ago". Applications: The Irish language also used base-20 in the past, twenty being fichid, forty dhá fhichid, sixty trí fhichid and eighty ceithre fhichid. A remnant of this system may be seen in the modern word for 40, daichead. The Welsh language continues to use a base-20 counting system, particularly for the ages of people, dates and in common phrases. 15 is also important, with 16–19 being "one on 15", "two on 15", etc. 18 is normally "two nines". A decimal system is commonly used. The Inuit languages use a base-20 counting system. Students from Kaktovik, Alaska invented a base-20 numeral system in 1994. Danish numerals display a similar base-20 structure. The Māori language of New Zealand also has evidence of an underlying base-20 system, as seen in the terms Te Hokowhitu a Tu referring to a war party (literally "the seven 20s of Tu") and Tama-hokotahi, referring to a great warrior ("the one man equal to 20"). The binary system was used in the Egyptian Old Kingdom, 3000 BC to 2050 BC. It was cursive by rounding off rational numbers smaller than 1 to 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64, with a 1/64 term thrown away (the system was called the Eye of Horus). A number of Australian Aboriginal languages employ binary or binary-like counting systems. For example, in Kala Lagaw Ya, the numbers one through six are urapon, ukasar, ukasar-urapon, ukasar-ukasar, ukasar-ukasar-urapon, ukasar-ukasar-ukasar. North and Central American natives used base-4 (quaternary) to represent the four cardinal directions. Mesoamericans tended to add a second base-5 system to create a modified base-20 system. A base-5 system (quinary) has been used in many cultures for counting. Plainly it is based on the number of digits on a human hand. It may also be regarded as a sub-base of other bases, such as base-10, base-20, and base-60. Applications: A base-8 system (octal) was devised by the Yuki tribe of Northern California, who used the spaces between the fingers to count, corresponding to the digits one through eight.
There is also linguistic evidence which suggests that the Bronze Age Proto-Indo-Europeans (from whom most European and Indic languages descend) might have replaced a base-8 system (or a system which could only count up to 8) with a base-10 system. The evidence is that the word for 9, newm, is suggested by some to derive from the word for "new", newo-, suggesting that the number 9 had been recently invented and called the "new number". Many ancient counting systems use five as a primary base, almost surely coming from the number of fingers on a person's hand. Often these systems are supplemented with a secondary base, sometimes ten, sometimes twenty. In some African languages the word for five is the same as "hand" or "fist" (Dyola language of Guinea-Bissau, Banda language of Central Africa). Counting continues by adding 1, 2, 3, or 4 to combinations of 5, until the secondary base is reached. In the case of twenty, this word often means "man complete". This system is referred to as quinquavigesimal. It is found in many languages of the Sudan region. Applications: The Telefol language, spoken in Papua New Guinea, is notable for possessing a base-27 numeral system. Non-standard positional numeral systems: Interesting properties exist when the base is not fixed or positive and when the digit symbol sets denote negative values. There are many more variations. These systems are of practical and theoretic value to computer scientists. Non-standard positional numeral systems: Balanced ternary uses a base of 3, but the digit set is {1̄, 0, 1} instead of {0, 1, 2}. The "1̄" has an equivalent value of −1. The negation of a number is easily formed by switching the bars on the 1s. This system can be used to solve the balance problem, which requires finding a minimal set of known counter-weights to determine an unknown weight. Weights of 1, 3, 9, ..., 3ⁿ known units can be used to determine any unknown weight up to 1 + 3 + ... + 3ⁿ units. A weight can be used on either side of the balance or not at all. Weights used on the balance pan with the unknown weight are designated with 1̄, with 1 if used on the empty pan, and with 0 if not used. If an unknown weight W is balanced with 3 (3¹) on its pan and 1 and 27 (3⁰ and 3³) on the other, then its weight in decimal is 25, or 101̄1 in balanced base-3: 101̄1₃ = 1 × 3³ + 0 × 3² − 1 × 3¹ + 1 × 3⁰ = 25. The factorial number system uses a varying radix, giving factorials as place values; they are related to the Chinese remainder theorem and residue number system enumerations. This system effectively enumerates permutations. A derivative of this uses the Towers of Hanoi puzzle configuration as a counting system. The configuration of the towers can be put into 1-to-1 correspondence with the decimal count of the step at which the configuration occurs, and vice versa. Non-positional positions: Each position does not need to be positional itself. Babylonian sexagesimal numerals were positional, but in each position were groups of two kinds of wedges representing ones and tens (a narrow vertical wedge | for the one and an open left-pointing wedge ⟨ for the ten): up to 5+9=14 symbols per position (i.e. 5 tens ⟨⟨⟨⟨⟨ and 9 ones ||||||||| grouped into one or two near squares containing up to three tiers of symbols, or a placeholder (\\) for the lack of a position). Hellenistic astronomers used one or two alphabetic Greek numerals for each position (one chosen from 5 letters representing 10–50 and/or one chosen from 9 letters representing 1–9, or a zero symbol).
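The balanced-ternary description above can be made concrete with a small converter: repeated division by 3, mapping remainder 2 to the digit −1 with a carry into the next place. This is an illustrative sketch; 'T' stands in for the barred 1.

```python
# Sketch: convert a decimal integer to balanced ternary (digits -1, 0, 1).
# 'T' prints the barred digit (value -1); mapping remainder 2 to digit -1
# with a carry into the next place is the standard trick.

def to_balanced_ternary(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, 3)
        if r == 2:        # digit -1, carry 1 into the next place
            r = -1
            n += 1
        digits.append({-1: "T", 0: "0", 1: "1"}[r])
    return "".join(reversed(digits))

print(to_balanced_ternary(25))  # -> '10T1', i.e. 27 - 3 + 1 = 25
```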
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Owzthat** Owzthat: Owzthat is a dice-based cricket simulation. In its non-commercial form it is often called pencil cricket, as in pre-war Britain six-sided pencils, shaved back to bare wood with the numbers and words written on them, were used. Today the game is supplied by a variety of manufacturers, including William Lindop Ltd. The name is derived from the verbal cricket appeal ("How's that?") asking whether a batsman is out. The game: The game is usually played between two players, but can be played alone. It is played with two six-sided long dice and a paper scorecard. One die, the batting die, is labelled 1, 2, 3, 4, 'owzthat' and 6. The second die, the umpire die, is labelled 'bowled', 'stumped', 'caught', 'not out', 'no ball', and 'L.B.W.'. Before commencing, the form of 'cricket match' to be played is agreed, e.g. test cricket, limited overs cricket, etc. An appropriate cricket scorecard is then drawn up and the teams are written in. A toss of a coin decides which team bats first. The game: The batting side starts the game by rolling the batting die. Any runs signalled are recorded on the scorecard. When an 'owzthat' appeal is signalled, the umpire die is rolled for a decision. The batsman has a 1/3 chance of surviving the appeal: he is not out if 'not out' or 'no ball' is signalled. As in real cricket, a 'no ball' entitles the batsman to an additional strike (roll) and an extra run. A batsman is out if 'bowled', 'stumped', 'caught', or 'L.B.W.' is signalled, and the next batsman comes to the crease. Depending on the cricket format, the batting side's innings ends when all the batsmen are out or if the over limit is reached. The other side then bats in an attempt to score more runs and hence win.
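The rules above translate directly into a short simulation. This sketch is illustrative (the helper names are mine) and simplifies the choice of format by playing until ten batsmen are out rather than enforcing an over limit.

```python
# Minimal sketch of an Owzthat innings following the rules above: roll the
# batting die for runs; on 'owzthat', roll the umpire die. Plays until ten
# batsmen are out (no over limit); all names are illustrative.

import random

BATTING = [1, 2, 3, 4, "owzthat", 6]
UMPIRE = ["bowled", "stumped", "caught", "not out", "no ball", "L.B.W."]

def play_innings(wickets: int = 10) -> int:
    runs, out = 0, 0
    while out < wickets:
        face = random.choice(BATTING)
        if face == "owzthat":
            verdict = random.choice(UMPIRE)
            if verdict == "no ball":
                runs += 1            # extra run; the batsman rolls again
            elif verdict != "not out":
                out += 1             # bowled / stumped / caught / L.B.W.
        else:
            runs += face
    return runs

random.seed(1)
print(f"Innings total: {play_innings()} runs")
```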
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lithium tert-butoxide** Lithium tert-butoxide: Lithium tert-butoxide is the metalorganic compound with the formula LiOC(CH3)3. A white solid, it is used as a strong base in organic synthesis. The compound is often depicted as a salt, and it often behaves as such, but it is not ionized in solution. Both octameric and hexameric forms have been characterized by X-ray crystallography. Preparation: Lithium tert-butoxide is commercially available as a solution and as a solid, but it is often generated in situ for laboratory use because samples are so sensitive and older samples are often of poor quality. It can be obtained by treating tert-butanol with butyllithium. Reactions: As a strong base, lithium tert-butoxide is easily protonated. Lithium tert-butoxide is used to prepare other tert-butoxide compounds such as copper(I) tert-butoxide and hexa(tert-butoxy)dimolybdenum(III): 2 MoCl3(thf)3 + 6 LiOBu-t → Mo2(OBu-t)6 + 6 LiCl + 6 thf
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Referential density** Referential density: Referential density is a concept of ficto-narrative theory put forward by Thomas G. Pavel in his 1986 book, Fictional Worlds. The concept refers to the referential relationship of a text to a fictional world, the ontology of which can be established by a possible worlds approach. A large text that refers to a small fictional world is said to have low referential density, whereas a small text referring to a large fictional world has high referential density. The size of the text is measured in abstract terms as amplitude, which in most cases will correspond to its physical length; exceptions to this may arise in cases of embedded discourses, such as metanarratives (or imaging digressions), which refer to the actual world. For this reason, the form and genre of a fictional work provide only an approximate indication of its size; by the same token, it is possible to refer to the size and referential density of part of a fictional work. The size of a fictional world, in turn, is measured in terms of the sum total of properties applicable to the objects and agents inhabiting the fictional world. Relative density: Relative (referential) density builds upon the abstract definition of referential density by including context sensitive factors such as the degree of external information the reader has to import to his reconstruction of the fictional world, the text's narrative crowding, the ratio between action and description, and the epistemic paths chosen by the text. These factors will usually have more impact on the number of references in a text than its amplitude. Significance: Referential density and relative (referential) density account for much of what makes fictional texts 'thick' or 'easy' reads. All other factors being equal, high density will make for difficult reading in that the reader is required to reconstruct the fictional world in a short space, whereas low density is characteristic of a high degree of action. An author may, however, focus on psychology and thereby have a static plot with low density. On the other hand, certain authors and genres make the reader's reconstruction of the fictional world the very point of the text's enjoyment, which is the case with most works of science-fiction, fantasy, and historical fiction.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Equilibrium unfolding** Equilibrium unfolding: In biochemistry, equilibrium unfolding is the process of unfolding a protein or RNA molecule by gradually changing its environment, such as by changing the temperature, pressure, or pH, by adding chemical denaturants, or by applying force as with an atomic force microscope tip. If equilibrium is maintained at every step, the process should in principle be reversible (equilibrium folding). Equilibrium unfolding can be used to determine the thermodynamic stability of the protein or RNA structure, i.e., the free energy difference between the folded and unfolded states. Theoretical background: In its simplest form, equilibrium unfolding assumes that the molecule may belong to only two thermodynamic states, the folded state (typically denoted N for "native" state) and the unfolded state (typically denoted U). This "all-or-none" model of protein folding was first proposed by Tim Anson in 1945, but is believed to hold only for small, single structural domains of proteins (Jackson, 1998); larger domains and multi-domain proteins often exhibit intermediate states. As usual in statistical mechanics, these states correspond to ensembles of molecular conformations, not just one conformation. Theoretical background: The molecule may transition between the native and unfolded states according to a simple kinetic model N ⇌ U with rate constants $k_f$ and $k_u$ for the folding ($U \rightarrow N$) and unfolding ($N \rightarrow U$) reactions, respectively. The dimensionless equilibrium constant $K_{eq} \overset{\mathrm{def}}{=} \frac{k_u}{k_f} = \frac{[U]_{eq}}{[N]_{eq}}$ can be used to determine the conformational stability $\Delta G^\circ$ by the equation $\Delta G^\circ = -RT \ln K_{eq}$, where $R$ is the gas constant and $T$ is the absolute temperature in kelvin. Thus, $\Delta G^\circ$ is positive if the unfolded state is less stable (i.e., disfavored) relative to the native state. Theoretical background: The most direct way to measure the conformational stability $\Delta G^\circ$ of a molecule with two-state folding is to measure its kinetic rate constants $k_f$ and $k_u$ under the solution conditions of interest. However, since protein folding is typically completed in milliseconds, such measurements can be difficult to perform, usually requiring expensive stopped-flow or (more recently) continuous-flow mixers to provoke folding with high time resolution. Dual polarisation interferometry is an emerging technique to directly measure conformational change and $\Delta G^\circ$. Chemical denaturation: In the less expensive technique of equilibrium unfolding, the fractions of folded and unfolded molecules (denoted as $p_N$ and $p_U$, respectively) are measured as the solution conditions are gradually changed from those favoring the native state to those favoring the unfolded state, e.g., by adding a denaturant such as guanidinium hydrochloride or urea. (In equilibrium folding, the reverse process is carried out.) Given that the fractions must sum to one and their ratio must be given by the Boltzmann factor, we have $p_N = \frac{1}{1 + e^{-\Delta G/RT}}$ and $p_U = 1 - p_N = \frac{e^{-\Delta G/RT}}{1 + e^{-\Delta G/RT}} = \frac{1}{1 + e^{\Delta G/RT}}$. Protein stabilities are typically found to vary linearly with the denaturant concentration. A number of models have been proposed to explain this observation; prominent among them are the denaturant binding model, the solvent-exchange model (both by John Schellman) and the linear extrapolation model (LEM; by Nick Pace). All of the models assume that only two thermodynamic states are populated/de-populated upon denaturation. They could be extended to interpret more complicated reaction schemes.
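As a quick numerical illustration of the two-state fractions above, the Boltzmann relation can be evaluated directly (a minimal sketch; the 20 kJ/mol stability is an arbitrary example value):

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def folded_fraction(delta_g, T=298.15):
    """Two-state folded fraction p_N = 1 / (1 + exp(-dG/RT)), dG in J/mol."""
    return 1.0 / (1.0 + math.exp(-delta_g / (R * T)))

print(folded_fraction(20_000))  # dG = 20 kJ/mol: ~0.9997, essentially all folded
print(folded_fraction(0))       # dG = 0: the denaturation midpoint, p_N = p_U = 0.5
```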
Chemical denaturation: The denaturant binding model assumes that there are specific but independent sites on the protein molecule (folded or unfolded) to which the denaturant binds with an effective (average) binding constant $k$. The equilibrium shifts towards the unfolded state at high denaturant concentrations, as it has more binding sites for the denaturant relative to the folded state ($\Delta n$). In other words, the increased number of potential sites exposed in the unfolded state is seen as the reason for denaturation transitions. An elementary treatment results in the following functional form: $\Delta G = \Delta G_w - \Delta n \, RT \ln(1 + k[D])$, where $\Delta G_w$ is the stability of the protein in water and $[D]$ is the denaturant concentration. Thus the analysis of denaturation data with this model requires 7 parameters: $\Delta G_w$, $\Delta n$, $k$, and the slopes and intercepts of the folded and unfolded state baselines. Chemical denaturation: The solvent exchange model (also called the 'weak binding model' or 'selective solvation') of Schellman invokes the idea of an equilibrium between the water molecules bound to independent sites on the protein and the denaturant molecules in solution. It has the form: $\Delta G = \Delta G_w - \Delta n \, RT \ln(1 + (K - 1) X_D)$, where $K$ is the equilibrium constant for the exchange reaction and $X_D$ is the mole fraction of the denaturant in solution. This model tries to answer the question of whether the denaturant molecules actually bind to the protein or whether they merely seem to be bound because denaturants occupy about 20-30% of the total solution volume at the high concentrations used in experiments, i.e. non-specific effects, hence the term 'weak binding'. As in the denaturant-binding model, fitting to this model also requires 7 parameters. One common theme obtained from both these models is that the binding constants (on the molar scale) for urea and guanidinium hydrochloride are small: ~0.2 $\mathrm{M}^{-1}$ for urea and ~0.6 $\mathrm{M}^{-1}$ for GuHCl. Chemical denaturation: Intuitively, the difference in the number of binding sites between the folded and unfolded states is directly proportional to the difference in accessible surface area. This forms the basis for the LEM, which assumes a simple linear dependence of stability on the denaturant concentration. The resulting slope of the plot of stability versus denaturant concentration is called the m-value. In pure mathematical terms, the m-value is the derivative of the change in stabilization free energy with respect to denaturant concentration. However, a strong correlation between the accessible surface area (ASA) exposed upon unfolding, i.e. the difference in ASA between the unfolded and folded state of the studied protein (dASA), and the m-value has been documented by Pace and co-workers. In view of this observation, m-values are typically interpreted as being proportional to the dASA. There is no physical basis for the LEM; it is purely empirical, though it is widely used in interpreting solvent-denaturation data. It has the general form: $\Delta G = m([D]_{1/2} - [D])$, where the slope $m$ is called the "m-value" (> 0 for the above definition) and $[D]_{1/2}$ (also called $C_m$) represents the denaturant concentration at which 50% of the molecules are folded (the denaturation midpoint of the transition, where $p_N = p_U = 1/2$). Chemical denaturation: In practice, the observed experimental data at different denaturant concentrations are fit to a two-state model with this functional form for $\Delta G$, together with linear baselines for the folded and unfolded states.
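A minimal sketch of such a fit is shown below, combining the LEM expression for $\Delta G$ with the Boltzmann fractions and linear baselines; the data are synthetic and all parameter values are arbitrary illustrations:

```python
import numpy as np
from scipy.optimize import curve_fit

R, T = 8.314e-3, 298.15   # gas constant in kJ/(mol·K), temperature in K

def two_state_observable(D, m, D_half, aN, bN, aU, bU):
    """Observed signal A([D]) for a two-state model with the LEM,
    dG = m*(D_half - D), plus linear native/unfolded baselines."""
    dG = m * (D_half - D)                      # kJ/mol
    pN = 1.0 / (1.0 + np.exp(-dG / (R * T)))   # folded fraction
    return (aN + bN * D) * pN + (aU + bU * D) * (1.0 - pN)

# Hypothetical denaturation data (e.g. a CD signal vs. urea concentration).
D = np.linspace(0, 8, 25)
A = two_state_observable(D, m=8.0, D_half=4.0, aN=1.0, bN=-0.01, aU=0.2, bU=-0.02)
A += np.random.default_rng(0).normal(scale=0.01, size=D.size)  # add noise

popt, _ = curve_fit(two_state_observable, D, A, p0=[5, 3, 1, 0, 0, 0])
m_fit, D_half_fit = popt[:2]
print(f"m = {m_fit:.2f} kJ/mol/M, [D]1/2 = {D_half_fit:.2f} M, "
      f"dG(water) = {m_fit * D_half_fit:.1f} kJ/mol")
```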
The $m$ and $[D]_{1/2}$ values are two fitting parameters, along with four others for the linear baselines (slope and intercept for each line); in some cases, the slopes are assumed to be zero, giving four fitting parameters in total. The conformational stability $\Delta G$ can be calculated for any denaturant concentration (including the stability at zero denaturant) from the fitted parameters $m$ and $[D]_{1/2}$. When combined with kinetic data on folding, the m-value can be used to roughly estimate the amount of buried hydrophobic surface in the folding transition state. Chemical denaturation: Structural probes. Unfortunately, the probabilities $p_N$ and $p_U$ cannot be measured directly. Instead, we assay the relative population of folded molecules using various structural probes, e.g., absorbance at 287 nm (which reports on the solvent exposure of tryptophan and tyrosine), far-ultraviolet circular dichroism (180-250 nm, which reports on the secondary structure of the protein backbone), dual polarisation interferometry (which reports the molecular size and fold density) and near-ultraviolet fluorescence (which reports on changes in the environment of tryptophan and tyrosine). However, nearly any probe of folded structure will work; since the measurement is taken at equilibrium, there is no need for high time resolution. Thus, measurements can be made of NMR chemical shifts, intrinsic viscosity, solvent exposure (chemical reactivity) of side chains such as cysteine, backbone exposure to proteases, and various hydrodynamic observables. Chemical denaturation: To convert these observations into the probabilities $p_N$ and $p_U$, one generally assumes that the observable $A$ adopts one of two values, $A_N$ or $A_U$, corresponding to the native or unfolded state, respectively. Hence, the observed value equals the linear sum $A = A_N p_N + A_U p_U$. By fitting the observations of $A$ under various solution conditions to this functional form, one can estimate $A_N$ and $A_U$, as well as the parameters of $\Delta G$. The fitting variables $A_N$ and $A_U$ are sometimes allowed to vary linearly with the solution conditions, e.g., temperature or denaturant concentration, when the asymptotes of $A$ are observed to vary linearly under strongly folding or strongly unfolding conditions. Thermal denaturation: Assuming a two-state denaturation as stated above, one can derive the fundamental thermodynamic parameters, namely $\Delta H$, $\Delta S$ and $\Delta G$, provided one knows the $\Delta C_p$ of the system under investigation. Thermal denaturation: The thermodynamic observables of denaturation can be described by the following equations: $\Delta H(T) = \Delta H(T_d) + \Delta C_p (T - T_d)$, $\Delta S(T) = \Delta S(T_d) + \Delta C_p \ln\left(\frac{T}{T_d}\right)$, and $\Delta G(T) = \Delta H(T_d)\left(1 - \frac{T}{T_d}\right) - \Delta C_p \left[T_d - T + T \ln\left(\frac{T}{T_d}\right)\right]$, where $\Delta H$, $\Delta S$ and $\Delta G$ indicate the enthalpy, entropy and Gibbs free energy of unfolding under constant pH and pressure. The temperature $T$ is varied to probe the thermal stability of the system, and $T_d$ is the temperature at which half of the molecules in the system are unfolded. The last equation is known as the Gibbs–Helmholtz equation. Thermal denaturation: Determining the heat capacity of proteins. In principle one can calculate all the above thermodynamic observables from a single differential scanning calorimetry thermogram of the system, assuming that $\Delta C_p$ is independent of the temperature. However, it is difficult to obtain accurate values for $\Delta C_p$ this way. More accurately, $\Delta C_p$ can be derived from the variation of $\Delta H(T_d)$ vs. $T_d$, which can be obtained from measurements with slight variations in pH or protein concentration. The slope of the linear fit is equal to $\Delta C_p$.
Note that any non-linearity of the data points indicates that $\Delta C_p$ is probably not independent of the temperature. Thermal denaturation: Alternatively, $\Delta C_p$ can also be estimated from the calculation of the accessible surface area (ASA) of a protein before and after thermal denaturation, as follows: $\Delta ASA = ASA_{\text{unfolded}} - ASA_{\text{native}}$. For proteins that have a known 3D structure, the $ASA_{\text{native}}$ can be calculated with computer programs such as Deepview (also known as Swiss PDB viewer). The $ASA_{\text{unfolded}}$ can be calculated from tabulated values for each amino acid through the semi-empirical equation $ASA_{\text{unfolded}} = \sum \left( n_{\text{polar}} \, ASA_{\text{polar}} + n_{\text{aromatic}} \, ASA_{\text{aromatic}} + n_{\text{nonpolar}} \, ASA_{\text{nonpolar}} \right)$, where the subscripts polar, non-polar and aromatic indicate the corresponding parts of the 20 naturally occurring amino acids. Thermal denaturation: Finally, for proteins there is a linear correlation between $\Delta ASA$ and $\Delta C_p$ through the following equation: $\Delta C_p = 0.61 \cdot \Delta ASA$. Assessing two-state unfolding: Furthermore, one can assess whether the folding proceeds according to a two-state unfolding as described above. This can be done with differential scanning calorimetry by comparing the calorimetric enthalpy of denaturation, $\Delta H_{\text{peak}}$, i.e. the area under the heat-capacity peak, to the van 't Hoff enthalpy, defined as $\Delta H_{vH}(T) = -R \, \frac{d \ln K}{d(T^{-1})}$. At $T = T_d$, $\Delta H_{vH}(T_d)$ can be written as $\Delta H_{vH}(T_d) = \frac{4 R T_d^2 \, C_{p,\max}}{\Delta H_{\text{peak}}}$, where $C_{p,\max}$ is the height of the heat capacity peak. When a two-state unfolding is observed, $\Delta H_{\text{peak}} = \Delta H_{vH}(T_d)$. Generalization to protein complexes and multi-domain proteins: Using the above principles, equations that relate a global protein signal, corresponding to the folding states in equilibrium, to the variable value of a denaturing agent, either temperature or a chemical molecule, have been derived for homomeric and heteromeric proteins, from monomers to trimers and potentially tetramers. These equations provide a robust theoretical basis for measuring the stability of complex proteins, and for comparing the stabilities of wild-type and mutant proteins. Such equations cannot be derived for pentamers or higher oligomers because of mathematical limitations (Abel–Ruffini theorem).
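To make the thermal-denaturation formulas concrete, the sketch below evaluates the Gibbs–Helmholtz stability curve for assumed values of $T_d$, $\Delta H(T_d)$ and $\Delta C_p$ (all illustrative, not taken from a real protein):

```python
import numpy as np

def delta_g(T, Td, dH_Td, dCp):
    """Gibbs-Helmholtz stability curve:
    dG(T) = dH(Td)*(1 - T/Td) - dCp*(Td - T + T*ln(T/Td)), in kJ/mol."""
    return dH_Td * (1 - T / Td) - dCp * (Td - T + T * np.log(T / Td))

# Assumed values: Td = 330 K, dH(Td) = 400 kJ/mol, dCp = 8 kJ/(mol·K)
# (roughly what the 0.61*dASA correlation would give for a small protein).
T = np.array([280.0, 298.15, 310.0, 330.0])
print(delta_g(T, Td=330.0, dH_Td=400.0, dCp=8.0))
# dG vanishes at T = Td and peaks below it; because dCp > 0, the curve
# turns over again at low temperature (cold denaturation).
```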
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Delayed extraction** Delayed extraction: Delayed extraction is a method used with a time-of-flight mass spectrometer in which the accelerating voltage is applied after a short time delay following pulsed laser desorption/ionization from the flat surface of a target plate or, in another implementation, pulsed electron ionization or resonance-enhanced multiphoton ionization in a narrow space between two plates of the ion extraction system. The extraction delay can produce time-of-flight compensation for the ion energy spread and improve mass resolution. Implementation: Resolution can be improved in a time-of-flight mass spectrometer with ions produced under high vacuum conditions (better than a few microtorr) by allowing the initial packet of ions to spread in space, due to their translational energy, before being accelerated into the flight tube. With ions produced by electron ionization or laser ionization of atoms or molecules from a rarefied gas, this is referred to as "time-lag focusing". With ions produced by laser desorption/ionization or MALDI from the conductive surface of a target plate, it is referred to as "delayed extraction". With delayed extraction, the mass resolution is improved due to the correlation between the velocity and position of the ions after they have been produced in the ion source. Ions produced with greater kinetic energy have a higher velocity and, during the delay time, move closer to the extraction electrode before the accelerating voltage is applied across the target or pulsed electrode. The slower ions with less kinetic energy stay closer to the surface of the target electrode or pulsed electrode when the accelerating voltage is applied, and therefore start being accelerated at a greater potential than the ions farther from the target electrode. With the proper delay time, the slower ions receive enough extra potential energy to catch up with the faster ions after flying some distance from the pulsed acceleration system. Ions of the same mass-to-charge ratio will then drift through the flight tube to the detector in the same time.
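A toy one-dimensional calculation can illustrate the focusing effect. In the sketch below, two ions of the same mass-to-charge ratio leave the target with different initial velocities; the geometry, voltage and delay values are illustrative assumptions (a real instrument typically uses a two-stage source, so the optimal delay differs):

```python
from math import sqrt

# Illustrative single-gap model: target at x = 0, extraction grid at x = d,
# field-free drift tube of length L after the grid.
q = 1.602e-19                 # ion charge, C
m = 1000 * 1.6605e-27         # mass of a 1000 Da singly charged ion, kg
d, L, V = 3e-3, 1.0, 3000.0   # gap (m), drift length (m), extraction voltage (V)
a = q * (V / d) / m           # acceleration once the field is switched on

def arrival_time(v0, tau):
    """Delay tau, then constant acceleration over the remaining gap, then drift."""
    x0 = v0 * tau                                        # free drift during the delay
    t_acc = (-v0 + sqrt(v0**2 + 2 * a * (d - x0))) / a   # time to reach the grid
    v_exit = v0 + a * t_acc
    return tau + t_acc + L / v_exit

for tau in (0.0, 5.6e-9, 10e-9):
    dt = abs(arrival_time(500.0, tau) - arrival_time(300.0, tau))
    print(f"delay {tau * 1e9:4.1f} ns -> arrival-time spread {dt * 1e9:5.2f} ns")
# With no delay the faster ion arrives ~8 ns early; near the proper delay the
# slower ion's extra potential energy compensates and the spread collapses.
```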
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ISO 5776** ISO 5776: ISO 5776, published by the International Organization for Standardization (ISO), is an international standard that specifies symbols for proofreading, such as of manuscripts, typescripts and printer's proofs. The total number of symbols specified is 16, each in English, French and Russian. The standard is partially derived from the British Standard BS 5261, but is closer to the German standards DIN 16511 and 16549-1. All of these standards date from the time before desktop publishing. A first edition of the standard was published in 1983. A second edition, published in 2016, cancels and replaces the first edition. The third, revised edition was published in 2022 and replaced the second edition.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Performance** Performance: A performance is an act of staging or presenting a play, concert, or other form of entertainment. It is also defined as the action or process of carrying out or accomplishing an action, task, or function. Management science: In the workplace, job performance is the hypothesized conception or requirements of a role. There are two types of job performance: contextual and task. Task performance is dependent on cognitive ability, while contextual performance is dependent on personality. Task performance relates to behavioral roles that are recognized in job descriptions and remuneration systems; these are directly related to organizational performance, whereas contextual performances are value-based and add additional behavioral roles that are not recognized in job descriptions or covered by compensation; these are extra roles that are indirectly related to organizational performance. Citizenship performance, like contextual performance, relates to a set of individual activities/contributions (prosocial organizational behavior) that support organizational culture. Arts: In performing arts, a performance generally comprises an event in which a performer, or group of performers, present one or more works of art to an audience. In instrumental music and drama, a performance is typically described as a "play". Typically, the performers participate in rehearsals beforehand to practice the work. Arts: An effective performance is determined by the achieved skills and competency of the performer, also known as the level of skill and knowledge. In 1994, Spencer and McClelland defined competency as "a combination of motives, traits, self-concepts, attitudes, cognitive behavior skills (content knowledge) that helps a performer to differentiate themselves as superior from the average performer". A performance also describes the way in which an actor performs. In a solo capacity, it may also refer to a mime artist, comedian, conjurer, magician, or other entertainer. Aspects of performance art: Another aspect of performance that grew in popularity in the early 20th century is performance art. Performance art originated with the Dada and Russian constructivism groups, focusing on avant-garde poetry readings and live paintings meant to be viewed by an audience. It can be scripted or completely improvised, and includes audience participation if desired. The emergence of abstract expressionism in the 1950s with Jackson Pollock and Willem de Kooning gave rise to action painting, a technique that emphasized the dynamic movements of artists as they splattered paint and other media on canvas or glass. For these artists, the motion of putting paint on canvas was just as valuable as the finished painting, and so it was common for artists to document their work in film, as in the short film Jackson Pollock 51 (1951), featuring Pollock dripping paint onto a massive canvas on his studio floor. Situationists in France, led by Guy Debord, married avant-garde art with revolutionary politics to incite everyday acts of anarchy. The "Naked City Map" (1957) fragments the 19 sections of Paris, featuring the technique of détournement and abstraction of the traditional environment, deconstructing the geometry and order of a typical city map. At the New School for Social Research in New York, John Cage and Allan Kaprow became involved in developing happening performance art. These carefully scripted one-off events incorporated the audience into acts of chaos and spontaneity.
These happenings challenged traditional art conventions and encouraged artists to carefully consider the role of the audience. In Japan, the Gutai group, founded in 1954 and led by Yoshihara Jiro, Kanayama Akira, Murakami Saburo, Kazuo Shiraga, and Shimamoto Shozo, made the materials of art-making come to life with body movement, blurring the line between art and theater. Kazuo Shiraga's Challenging Mud (1955) is a performance of the artist rolling and moving in mud, using his body as the art-making tool and emphasizing the temporary nature of performance art. Valie Export, an Austrian artist born Waltraud Lehner, performed "Tap and Touch Cinema" in 1968. She walked around the streets of Vienna during a film festival wearing a styrofoam box with a curtain over her chest. Bystanders were asked to put their hands inside the box and touch her bare chest. This commentary on women's sexualization in film focused on the sense of touch rather than sight. Adrian Piper's performance Catalysis III (1970) featured the artist walking down New York City streets with her outfit painted white and a sign across her chest that said "wet paint." She was interested in the invisible social and racial dynamics in America and was determined to encourage civic-mindedness and interruption of the system. The American artist Carolee Schneemann performed Interior Scroll in 1975, in which she unrolled the Super-8 film "Kitsch's Last Meal" from her genitals. This nude performance contributes to a discourse on femininity, sexualization, and film. Performance state: Williams and Krane define the ideal performance state as a mental state having the following characteristics: absence of fear; not thinking about the performance; an adaptive focus on the activity; a sense of effortlessness and belief in confidence or self-efficacy; a sense of personal control; and a distortion of time and space in which time does not affect the activity. Other related factors are: motivation to achieve success or avoid failure, task-relevant attention, positive self-talk, and cognitive regulation to achieve automaticity. Performance is also dependent on adaptation in eight areas: handling crises, managing stress, creative problem solving, knowing necessary functional tools and skills, agile management of complex processes, interpersonal adaptability, cultural adaptability, and physical fitness. Performance is not always a result of practice, but rather of honing a skill. Overpracticing can itself result in failure due to ego depletion. According to Andranik Tangian, the best results are achieved when spontaneity and even improvisation are backed up by rational elements that arrange means of expression in a certain structure, supporting the communication (not just verbal) with the audience. Performance state: Stage fright. Theatrical performances, especially when the audience is limited to only a few observers, can lead to significant increases in the performer's heart rate. This increase takes place in several stages relative to the performance itself, including anticipatory activation (one minute before the start of the subject's speaking role), confrontation activation (during the subject's speaking role, at which point their heart rate peaks) and the release period (one minute after the conclusion of the subject's speech). The same physiological reactions can be experienced in other media, such as instrumental performance.
When experiments were conducted to determine whether there was a correlation between audience size and the heart rate (an indicator of anxiety) of instrumental performers, the researchers' findings ran contrary to previous studies, showing a positive correlation rather than a negative one. Heart rate shares a strong, positive correlation with the self-reported anxiety of performers. Other physiological responses to public performance include perspiration, secretion of the adrenal glands, and increased blood pressure.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chilton and Colburn J-factor analogy** Chilton and Colburn J-factor analogy: The Chilton–Colburn J-factor analogy (also known as the modified Reynolds analogy) is a successful and widely used analogy between heat, momentum, and mass transfer. The basic mechanisms and mathematics of heat, mass, and momentum transport are essentially the same. Among the many analogies (like the Reynolds analogy and the Prandtl–Taylor analogy) developed to directly relate heat transfer coefficients, mass transfer coefficients, and friction factors, the Chilton and Colburn J-factor analogy proved to be the most accurate. It is written as follows: $J_M = \frac{f}{2} = J_H = \frac{h}{c_p G} \, \mathrm{Pr}^{2/3} = J_D = \frac{k_c'}{\bar{v}} \, \mathrm{Sc}^{2/3}$. This equation permits the prediction of an unknown transfer coefficient when one of the other coefficients is known. The analogy is valid for fully developed turbulent flow in conduits with Re > 10000, 0.7 < Pr < 160, and tubes where L/d > 60 (the same constraints as the Sieder–Tate correlation). A wider range of data can be correlated by the Friend–Metzner analogy. Chilton and Colburn J-factor analogy: Relationship between heat and mass transfer: $J_M = \frac{f}{2} = \frac{\mathrm{Sh}}{\mathrm{Re} \, \mathrm{Sc}^{1/3}} = J_H = \frac{\mathrm{Nu}}{\mathrm{Re} \, \mathrm{Pr}^{1/3}}$.
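As a worked example of using the analogy to predict an unknown coefficient, the sketch below derives heat and mass transfer coefficients from a known Fanning friction factor (all property values are illustrative assumptions for a water-like flow, not from the article):

```python
# Illustrative inputs for turbulent pipe flow.
f = 0.0046    # Fanning friction factor (e.g. from the Blasius correlation)
G = 1000.0    # mass flux, kg/(m^2 s)
cp = 4180.0   # specific heat, J/(kg K)
Pr = 6.0      # Prandtl number
v = 1.0       # mean velocity, m/s
Sc = 600.0    # Schmidt number

J = f / 2.0                          # J_M = J_H = J_D = f/2
h = J * cp * G / Pr ** (2.0 / 3.0)   # from J_H = (h / (cp G)) * Pr^(2/3)
kc = J * v / Sc ** (2.0 / 3.0)       # from J_D = (kc' / v) * Sc^(2/3)

print(f"h  = {h:.0f} W/(m^2 K)")     # predicted heat transfer coefficient
print(f"kc = {kc:.2e} m/s")          # predicted mass transfer coefficient
```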
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Speedball (drug)** Speedball (drug): Speedball, powerball, or over and under, is the polydrug mixture of a stimulant with a depressant, usually an opioid. Speedball (drug): Common combinations used recreationally include cocaine or amphetamine with heroin, morphine, and/or fentanyl, which may be taken intravenously or by nasal insufflation. Speedballs often give stronger effects than either drug taken alone due to drug synergy, and are a particularly hazardous mixture that can easily cause heart attack, respiratory arrest and death. When compared to single drugs, speedballs are more likely to lead to addiction, and users are more likely to relapse and also to overdose. History: Original speedball combinations used methamphetamine mixed with heroin, or cocaine hydrochloride mixed with morphine sulfate. Physiological response: Cocaine acts as a stimulant, whereas heroin/morphine acts as a depressant. Co-administration is meant to provide an intense rush of euphoria with a high that is supposed to combine the effects of both drugs, while hoping to reduce the negative effects, such as the anxiety, hypertension and palpitations associated with stimulants, and the sedation/drowsiness from the depressant. While this can be somewhat effective, there is an imperfect overlap in the effects of stimulants and depressants. Some users report a higher rush and better comedown, and others report displeasure at the drugs effectively cancelling each other out. By suppressing the typical negative side effects of the two drugs, the user may falsely believe they have a higher tolerance, or that they are less intoxicated than they actually are. This can cause users to misjudge the intake of one or both of the drugs, resulting in a fatal overdose. Cocaine's stimulating effects also cause the body to use more oxygen, while the depressant effects of heroin slow breathing rates. This combination significantly increases the chance of experiencing respiratory depression or respiratory failure, which may become fatal. Super speedballs: The United States Drug Enforcement Administration warned in 2019 that the rapid rise of the fentanyl supply in the country has led to combinations of both fentanyl and heroin with cocaine ("super speedballs"). In addition, the cross-contamination of powdered fentanyl into cocaine supplies has led to reports of cocaine users unknowingly consuming a speedball-like combination. Notable deaths attributed to speedball use: Jean-Michel Basquiat (though other sources list his death as a heroin overdose only); John Belushi; Ken Caminiti; Chris Farley; Pete Farndon; Zac Foley; Trevor Goddard; Mitch Hedberg; Philip Seymour Hoffman; Sebastian Horsley; DJ Rashad; Chris Kelly; Brent Mydland; River Phoenix; Judee Sill; Layne Staley; Joey Stefano (died from mixing cocaine, morphine, heroin and ketamine); Michael K. Williams (died of an overdose of a mixture of fentanyl-laced heroin and cocaine). Notable incidents of use: In 1996, Steven Adler had a stroke after taking a speedball, leaving him with a permanent speech impediment. That same year, Dave Gahan suffered a heart attack following a speedball overdose, but survived. According to his autobiography, Slash experienced cardiac arrest for eight minutes after taking a speedball, but was revived.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Microbead** Microbead: Microbeads are manufactured solid plastic particles of less than one millimeter in their largest dimension. They are most frequently made of polyethylene but can be made of other petrochemical plastics such as polypropylene and polystyrene. Microbead: They are used in exfoliating personal care products and toothpastes, and in biomedical and health-science research. Microbeads can cause plastic particle water pollution and pose an environmental hazard for aquatic animals in freshwater and ocean water. In the US, the Microbead-Free Waters Act of 2015 phases out microbeads in rinse-off cosmetics by July 2017. Several other countries have also banned microbeads from rinse-off cosmetics, including Canada, France, New Zealand, Sweden, Taiwan and the United Kingdom. Types: Microbeads are manufactured solid plastic particles of less than one millimeter in their largest dimension when they are first created, and are typically created using materials such as polyethylene (PE), polyethylene terephthalate (PET), nylon (PA), polypropylene (PP) and polymethyl methacrylate (PMMA). The most frequently used materials are polyethylene and other petrochemical plastics such as polypropylene and polystyrene. Microbeads are commercially available in particle sizes from 10 micrometres (0.00039 in) to 1 millimetre (0.039 in). Low melting temperature and fast phase transitions make them especially suitable for creating porous structures in ceramics and other materials. Regional differences: The parameters for what qualifies as a microbead change subtly based on location and the corresponding legal jurisdiction; minor distinctions in the definition may be encountered from one country to another. For example, America's official definition of a microbead, as per the Microbead-Free Waters Act of 2015 laid out by Congress, is "any solid plastic particle less than 5 millimeters in size that was created with the intention of being used to exfoliate or cleanse the human body." On the other hand, Environment and Climate Change Canada (ECCC), the governmental agency responsible for Canada's microbead ban, settled on a definition which includes only plastics with diameters between 0.5 microns and 2 millimeters. Although cutoffs of 0.1 microns and 5 millimeters, respectively, were initially proposed, the definition was revised after consultation with members of industry and resistance from plastic manufacturers, who claimed that many of their raw materials (for example, those needed to make bottles for soft drinks) would be covered by the ban, affecting their business unduly. The intent clause in the American law leaves open a loophole that producers of other equally frivolous and environmentally destructive products could potentially exploit in the future, as long as their use case does not involve grooming or personal care. The Canadian law, for its part, has already been criticized publicly for its overly restrictive nature, which could cripple its efficacy in practice; in response to the revised definition, concerned conservation groups (including the Sierra Club of Canada) have raised warnings about the law's wording, fearing that Canada may "become a dumping ground for [those] microbead-containing products" which are now banned in the United States. Use: Microbeads are added as an exfoliating agent to cosmetics and personal care products, such as soap, facial scrub and toothpastes. They may be added to over-the-counter drugs to make them easier to swallow.
In biomedical and health science research, microbeads are used in microscopy techniques, fluid visualization, fluid flow analysis and process troubleshooting. Sphericity and particle size uniformity create a ball-bearing effect in creams and lotions, resulting in a silky texture and spreadability. Smoothness and roundness can provide lubrication. Colored microspheres add visual appeal to cosmetic products. Environmental effects: When microbeads are washed down the drain, they pass unfiltered through sewage treatment plants and make their way into rivers and canals, resulting in plastic particle water pollution. Environmental effects: A team of researchers from Uppsala University published and subsequently retracted a study (at least one researcher was found to have fabricated results) which stated that one of the various animals affected by microbeads was the perch, a freshwater fish. The beads can absorb and concentrate pollutants like pesticides and polycyclic hydrocarbons. Microbeads have been found to pollute the Great Lakes in high concentrations, particularly Lake Erie. A study from the State University of New York found anywhere from 1,500 to 1.1 million microbeads per square mile on the surface of the Great Lakes. One study suggested that environmentally relevant levels of polyethylene microbeads had no impact on larvae. Some wastewater treatment plants (WWTPs) in the U.S. and Europe can remove microbeads with an efficiency of greater than 98%; others may not. As such, other sources of microplastic pollution (e.g. microfibers/fibers and car tires) are more likely to be associated with environmental hazards. A variety of wildlife, from insect larvae, small fish, amphibians and turtles to birds and larger mammals, mistake microbeads for their food source. This ingestion of plastics introduces the potential for toxicity not only to these animals but to other species higher in the food chain. Harmful chemicals thus transferred can include hydrophobic pollutants that collect on the surface of the water, such as polychlorinated biphenyls (PCBs), dichlorodiphenyltrichloroethane (DDT), and polycyclic aromatic hydrocarbons (PAHs). Banning production and sale in cosmetics: In 2012, the North Sea Foundation and the Plastic Soup Foundation launched an app that allows Dutch consumers to check whether personal care products contain microbeads. In the summer of 2013, the United Nations Environment Programme and the UK-based NGO Fauna and Flora International joined the partnership to further develop the app for international audiences. The app has enjoyed success, convincing a number of large multinationals to stop using microbeads, and is available in seven languages. Banning production and sale in cosmetics: There are many natural and biodegradable alternatives to microbeads that have no environmental impact when washed down the drain, as they will either decompose or be filtered out before being released into the natural environment. Some examples of natural exfoliants include ground-up almonds, oatmeal, sea salt and coconut husks. Burt's Bees and St. Ives use apricot pits and cocoa husks in their products instead of microbeads to reduce their negative environmental impact. Due to the increase in bans of microbeads in the United States, many cosmetic companies are also phasing out microbeads from their production lines. L'Oréal is planning to phase out polyethylene microbeads from the exfoliants, cleansers and shower gels in its products by 2017.
Johnson & Johnson, which began phasing out microbeads at the end of 2015, will not be producing any polyethylene microbeads in its products by 2017. Lastly, Crest phased out microbead plastics in its toothpastes by February 2016. The global phase-out should be completed by the end of 2017. The following countries have taken action toward banning microbeads. Banning production and sale in cosmetics: Australia: In 2016, the federal and state governments agreed to support a voluntary industry phase-out of microbeads in rinse-off personal care, cosmetic, and cleaning products. An independent assessment in 2020 found that more than 99% of the products it inspected were microbead-free. The New South Wales state government banned the supply of rinse-off personal care products containing microbeads, effective from 1 November 2022. Banning production and sale in cosmetics: Canada: On May 18, 2015, Canada took its first steps toward banning microbeads when a Member of Parliament from Toronto, John McKay, introduced Bill C-680, which would ban the sale of microbeads. The first Canadian province to take action against microbeads was Ontario, where Marie-France Lalonde, a Member of the Provincial Parliament, introduced the Microbead Elimination and Monitoring Act. This bill enforced the ban on manufacturing microbeads in cosmetics, facial scrubs or washes, and similar products. The bill also proposed that yearly samples be taken from the Canadian Great Lakes and analyzed for traces of microbeads. Pointe-Claire mayor Morris Trudeau and members of the City Council asked Pointe-Claire residents to sign a petition asking the governments of Canada and Quebec to ban "the use of plastic microbeads in cosmetic and cleansing products." Trudeau suggested that if Quebec bans microbeads, manufacturers will be encouraged to stop using them in their products. Megan Leslie, a Halifax Member of Parliament, presented a motion against microbeads in the House of Commons, which got "unanimous support", and is hoping for them to be listed under the Canadian Environmental Protection Act as a toxin. On June 29, 2016, the Federal Government of Canada added microbeads to Schedule 1 of the Canadian Environmental Protection Act as a toxic substance. The import or manufacture of toiletries containing microbeads was banned on 1 January 2018 and sales were banned from 1 July 2018. Microbeads in natural health products and non-prescription drugs will be banned in 2019. Banning production and sale in cosmetics: Ireland: In November 2016, Simon Coveney, the Minister for Housing, Planning, Community and Local Government, said that the Fine Gael-led government would press for an EU-wide ban on microbeads and rejected a Green Party bill banning them on the basis that it might conflict with the EU's freedom of movement of goods. In June 2019, Coveney's successor Eoghan Murphy introduced the Microbeads (Prohibition) Bill 2019, which would ban the manufacture, sale, and export of rinse-off microbead products. The government also intends to include microbeads when updating the law on preventing marine pollution. Microbeads were banned in February 2020. Banning production and sale in cosmetics: Netherlands: The Netherlands was the first country to announce its intent to be free of microbeads in cosmetics by the end of 2016.
State Secretary for Infrastructure and the Environment Mansveld has said she is pleased with the progress made by the members of the Nederlandse Cosmetica Vereniging (NCV), the Dutch trade organisation for producers and importers of cosmetics, who have ceased using microbeads or are working towards removing microbeads from their products. Among the NCV's members are large multinationals such as Unilever, L'Oréal, Colgate-Palmolive, Henkel, and Johnson & Johnson. Banning production and sale in cosmetics: South Africa: A ban on microbeads has been proposed in South Africa after microplastic pollution was found in tap water. Banning production and sale in cosmetics: United Kingdom: The British government banned the production of microbeads in rinse-off cosmetics and cleaning products in England effective 9 January 2018, followed by a sales ban on 19 June 2018. Scotland introduced its own manufacture and sales ban on the same day, and Wales introduced its own on 30 June 2018. The ban was extended to Northern Ireland from 11 March 2019. Banning production and sale in cosmetics: United States (national): At the federal level, the Microbead-Free Waters Act of 2015 prohibits the manufacture and introduction into interstate commerce of rinse-off cosmetics containing intentionally added plastic microbeads by July 1, 2017. Representative Frank Pallone proposed the bill in 2014 (H.R. 4895, reintroduced in 2015 as H.R. 1321). On December 7, 2015, his proposal was narrowed by amendment to rinse-off cosmetics, and passed unanimously by the House. The American Chemistry Council and other industry groups supported the final bill, which the Senate passed on December 18, 2015, and the president signed on December 28, 2015. After the Microbead-Free Waters Act of 2015, the use of microbeads in toothpaste and other rinse-off cosmetic products was discontinued in the US; however, since 2015 many industries have instead shifted toward using FDA-approved "rinse-off" metallized-plastic glitter as their primary abrasive agent. Banning production and sale in cosmetics: States: Illinois became the first U.S. state to enact legislation banning the manufacture and sale of products containing microbeads; the two-part ban goes into effect in 2018 and 2019. The Personal Care Products Council, a trade group for the cosmetics industry, came out in support of the Illinois bill. Other states have followed. As of October 2015, all state bans except California's allow biodegradable microbeads. Johnson & Johnson and Procter & Gamble opposed the California law. In 2014, legislation was voted on but failed to pass in New York. Banning production and sale in cosmetics: Local: In 2015, Erie County, New York passed the first local ban in the state of New York. It bans the sale and distribution of all plastic microbeads (including biodegradable ones), including in personal care products. As of September 2015, its prohibition on sales was stronger than any other law in the country. It was enacted on August 12, 2015 and took effect in February 2016. In November 2015, four other NY counties followed suit.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hypertryptophanemia** Hypertryptophanemia: Hypertryptophanemia is a rare autosomal recessive metabolic disorder that results in a massive buildup of the amino acid tryptophan in the blood, with associated symptoms and tryptophanuria (-uria denotes "in the urine"). Elevated levels of tryptophan are also seen in Hartnup disease, a disorder of amino acid transport. However, the increase of tryptophan in that disorder is negligible when compared to that of hypertryptophanemia. Symptoms and signs: A number of abnormalities and symptoms have been observed with hypertryptophanemia. Musculoskeletal effects include: joint contractures of the elbows and interphalangeal joints of the fingers and thumbs (specifically the distal phalanges), pes planus (fallen arches), an ulnar drift affecting the fingers of both hands (an unusual, yet correctable feature where the fingers slant toward the ulnar side of the forearm), joint pain and laxity, and adduction of the thumbs (where the thumb appears drawn into the palm, related to contracture of the adductor pollicis). Behavioral, developmental and other anomalies often include: hypersexuality, perceptual hypersensitivity, emotional lability (mood swings), hyperaggressive behavior; hypertelorism (widely set eyes), strabismus (ocular misalignment) and myopia. Metabolically, hypertryptophanemia results in tryptophanuria and exhibits significantly elevated serum levels of tryptophan, exceeding 650% of the maximum (normal range: 25–73 micromole/l) in some instances. A product of the bacterial biosynthesis of tryptophan is indole. The excess of tryptophan in hypertryptophanemia also results in substantial excretion of indoleic acids. These findings suggest a possible congenital defect in the metabolic pathway where tryptophan is converted to kynurenine. Genetics: Hypertryptophanemia is believed to be inherited in an autosomal recessive manner. This means a defective gene responsible for the disorder is located on an autosome, and two copies of the defective gene (one inherited from each parent) are required in order to be born with the disorder. The parents of an individual with an autosomal recessive disorder both carry one copy of the defective gene, but usually do not experience any signs or symptoms of the disorder. Pathophysiology: At present, no specific enzyme deficiency or genetic mutation has been implicated as the cause of hypertryptophanemia. Several known factors regarding tryptophan metabolism and kynurenines, however, may explain the presence of behavioral abnormalities seen with the disorder. Tryptophan is an essential amino acid, and is required for protein synthesis. Aside from this crucial role, the remainder of tryptophan is primarily metabolized along the kynurenine pathway in most tissues, including those of the brain and central nervous system. As the main defect behind hypertryptophanemia is suspected to alter and disrupt the metabolic pathway from tryptophan to kynurenine, a possible correlation between hypertryptophanemia and the known effects of kynurenines on neuronal function, physiology and behavior may be of interest. One of these kynurenines, aptly named kynurenic acid, serves as a neuroprotectant through its function as an antagonist at both nicotinic and glutamate receptors (responsive to nicotine and glutamate, respectively). This action is in opposition to that of the agonist quinolinic acid, another kynurenine, noted for its potential as a neurotoxin.
Quinolinic acid activity has been associated with neurodegenerative disorders such as Huntington's disease; the neuroprotective abilities of kynurenic acid form a counterbalance against this process and against the related excitotoxicity and similar damaging effects on neurons. Indoleic acid excretion is another indicator of hypertryptophanemia. Indirectly related to kynurenine metabolism, indole modifies neural function and human behavior by interacting with voltage-dependent sodium channels (integral membrane proteins that form ion channels, allowing vital synaptic action potentials).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tricyclobutabenzene** Tricyclobutabenzene: Tricyclobutabenzene is an aromatic hydrocarbon consisting of a benzene core with three cyclobutane rings fused onto it. This compound and related compounds are studied in the laboratory because they often display unusual conformations and because of their unusual reactivity. Tricyclobutabenzenes are isomers of radialenes and form an equilibrium with them. The parent tricyclobutabenzene (C12H12) was first synthesised in 1979 via a multistep reaction sequence. This compound is stable up to 250 °C (482 °F). A polyoxygenated tricyclobutabenzene with an extraordinary bond length of 160 pm for the bond connecting two carbonyl groups[1] was prepared by a similar sequence. An ordinary bond of this type is only 148 pm, and for comparison the C-C bond in isatin is 154 pm long. On the other hand, no change is recorded in the aromatic bond length alternation. Similar chemistry yielded the six-fold ketone hexaoxotricyclobutabenzene C12O6, which happens to be a novel oxide of carbon. A key starting material is an iodo triflate that serves as a benzotriyne synthon.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Difludiazepam** Difludiazepam: Difludiazepam (Ro07-4065) is a benzodiazepine derivative which is the 2',6'-difluoro derivative of fludiazepam. It was invented in the 1970s but was never marketed, and has been used as a research tool to help determine the shape and function of the GABAA receptors, at which it has an IC50 of 4.1 nM. Difludiazepam has subsequently been sold as a designer drug, and was first notified to the EMCDDA by Swedish authorities in 2017.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Energy gel** Energy gel: Energy gels are edible carbohydrate gels that provide energy for exercise and promote recovery, commonly used in endurance events such as running, cycling, and triathlons. Energy gels are also referred to as endurance gels, sports gels, nutritional gels, and carbohydrate gels. Energy gels are packaged in small, single-serve plastic packets. Each packet has a strip with a small notch at the top that can be peeled off to reveal an opening through which the gel can be consumed. One-handed operation is often adopted by users to facilitate continuous exercise performance. Packaging and ingredients: Energy gel packets commonly contain 1.2 oz (32 g), with a range from 1 oz to 1.5 oz. The portable packaging is designed to facilitate uninterrupted training or performance. Common ingredients include water, maltodextrin, fructose, and various micronutrients, preservatives, and flavor compounds or caffeine. History: Sports energy gels emerged in the United Kingdom in 1986 as a "convenient, prewrapped, portable" way to deliver carbohydrates during endurance events. Gels have a gooey texture and are sometimes referred to generically as "goo". The gel Leppin Squeezy was distributed at the Hawaii Ironman Triathlon in 1988. Once considered a "cult product in clear packaging", energy gel products are now marketed in fancy packaging and come in a variety of flavors. The energy gel market grew during the 1990s, as professional athletes began endorsing products. Manufacturers generally encourage the consumption of multiple packets, with water, when participating in endurance events. Use: Energy gels are promoted to individuals seeking a boost from caffeine and carbohydrates during exercise. The recommended use of an energy gel is 15 minutes before starting and every 30–45 minutes after starting the endurance exercise. Taste: Energy gels vary in taste through flavor ingredients added during manufacturing, such as menthol and chai latte.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GIFBuilder** GIFBuilder: GIFBuilder was an early animated GIF creation program for Apple Macintosh computers. It was written by Yves Piguet and released as freeware. It is one of the few freeware applications to support the GIF format. GIFBuilder: GIFBuilder was released in 1996 and that year won the Ziff Davis Shareware Award in the Graphics and Multimedia category. GIFBuilder was developed from clip2gif, an earlier program by the same author, which was an Apple Event-based CGI script for generating GIF images on WebSTAR and other Macintosh web servers of the era. It was ported to Mac OS X using the Carbon API, but has not been updated to function with OS X versions after Lion.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lie point symmetry** Lie point symmetry: Lie point symmetry is a concept in advanced mathematics. Towards the end of the nineteenth century, Sophus Lie introduced the notion of a Lie group in order to study the solutions of ordinary differential equations (ODEs). He showed the following main property: the order of an ordinary differential equation can be reduced by one if the equation is invariant under a one-parameter Lie group of point transformations. This observation unified and extended the available integration techniques. Lie devoted the remainder of his mathematical career to developing these continuous groups, which now have an impact on many areas of mathematically based sciences. The applications of Lie groups to differential systems were mainly established by Lie and Emmy Noether, and then advocated by Élie Cartan. Lie point symmetry: Roughly speaking, a Lie point symmetry of a system is a local group of transformations that maps every solution of the system to another solution of the same system. In other words, it maps the solution set of the system to itself. Elementary examples of Lie groups are translations, rotations and scalings. Lie symmetry theory is a well-known subject; it deals with continuous symmetries, as opposed to, for example, discrete symmetries. The literature for this theory can be found, among other places, in these notes. Overview: Types of symmetries. Lie groups, and hence their infinitesimal generators, can be naturally "extended" to act on the space of independent variables, state variables (dependent variables) and derivatives of the state variables up to any finite order. There are many other kinds of symmetries. For example, contact transformations let the coefficients of the transformation's infinitesimal generator depend also on first derivatives of the coordinates. Lie–Bäcklund transformations let them involve derivatives up to an arbitrary order. The possibility of the existence of such symmetries was recognized by Noether. For Lie point symmetries, the coefficients of the infinitesimal generators depend only on coordinates, denoted by $Z$. Applications: Lie symmetries were introduced by Lie in order to solve ordinary differential equations. Another application of symmetry methods is to reduce systems of differential equations, finding equivalent systems of differential equations of simpler form. This is called reduction. In the literature, one can find the classical reduction process and the moving frame-based reduction process. Symmetry groups can also be used for classifying different symmetry classes of solutions. Geometrical framework: Infinitesimal approach. Lie's fundamental theorems underline that Lie groups can be characterized by elements known as infinitesimal generators. These mathematical objects form a Lie algebra of infinitesimal generators. Deduced "infinitesimal symmetry conditions" (the defining equations of the symmetry group) can be explicitly solved in order to find the closed form of symmetry groups, and thus the associated infinitesimal generators. Geometrical framework: Let $Z = (z_1, \dots, z_n)$ be the set of coordinates on which a system is defined, where $n$ is the cardinality of $Z$.
An infinitesimal generator $\delta$ in the field $\mathbb{R}(Z)$ is a linear operator $\delta : \mathbb{R}(Z) \to \mathbb{R}(Z)$ that has $\mathbb{R}$ in its kernel and that satisfies the Leibniz rule: $\forall (f_1, f_2) \in \mathbb{R}(Z)^2, \; \delta(f_1 f_2) = f_1 \, \delta f_2 + f_2 \, \delta f_1$. In the canonical basis of elementary derivations $\{\partial/\partial z_1, \dots, \partial/\partial z_n\}$, it is written as $\delta = \sum_{i=1}^{n} \xi_{z_i}(Z) \frac{\partial}{\partial z_i}$, where $\xi_{z_i}$ is in $\mathbb{R}(Z)$ for all $i$ in $\{1, \dots, n\}$. Lie groups and Lie algebras of infinitesimal generators: Lie algebras can be generated by a generating set of infinitesimal generators as defined above. To every Lie group, one can associate a Lie algebra. Roughly, a Lie algebra $\mathfrak{g}$ is an algebra constituted by a vector space equipped with the Lie bracket as an additional operation. The base field of a Lie algebra depends on the concept of invariant. Here only finite-dimensional Lie algebras are considered. Geometrical framework: Continuous dynamical systems. A dynamical system (or flow) is a one-parameter group action. Let us denote by $D$ such a dynamical system, more precisely, a (left-)action of a group $(G, +)$ on a manifold $M$: $D : G \times M \to M, \; (\nu, Z) \mapsto D(\nu, Z)$, such that for every point $Z$ in $M$: $D(e, Z) = Z$, where $e$ is the neutral element of $G$; and for all $(\nu, \hat{\nu})$ in $G^2$, $D(\nu, D(\hat{\nu}, Z)) = D(\nu + \hat{\nu}, Z)$. A continuous dynamical system is defined on a group $G$ that can be identified with $\mathbb{R}$, i.e. the group elements vary continuously. Geometrical framework: Invariants. An invariant, roughly speaking, is an element that does not change under a transformation. Definition of Lie point symmetries: In this paragraph, we consider precisely expanded Lie point symmetries, i.e. we work in an expanded space, meaning that the distinction between independent variables, state variables and parameters is avoided as much as possible. A symmetry group of a system is a continuous dynamical system defined on a local Lie group $G$ acting on a manifold $M$. For the sake of clarity, we restrict ourselves to n-dimensional real manifolds $M = \mathbb{R}^n$, where $n$ is the number of system coordinates. Lie point symmetries of algebraic systems: Let us define the algebraic systems used in the forthcoming symmetry definition. Algebraic systems: Let $F = (f_1, \dots, f_k) = (p_1/q_1, \dots, p_k/q_k)$ be a finite set of rational functions over the field $\mathbb{R}$, where $p_i$ and $q_i$ are polynomials in $\mathbb{R}[Z]$, i.e. in the variables $Z = (z_1, \dots, z_n)$ with coefficients in $\mathbb{R}$. An algebraic system associated to $F$ is defined by the equalities $f_i(Z) = 0$ and the inequalities $q_i(Z) \neq 0$, for all $i$ in $\{1, \dots, k\}$. An algebraic system defined by $F = (f_1, \dots, f_k)$ is regular (a.k.a. smooth) if the system $F$ is of maximal rank $k$, meaning that the Jacobian matrix $(\partial f_i / \partial z_j)$ is of rank $k$ at every solution $Z$ of the associated semi-algebraic variety. Definition of Lie point symmetries: The following theorem (see theorem 2.8 in chapter 2 of the cited literature) gives necessary and sufficient conditions so that a local Lie group $G$ is a symmetry group of an algebraic system. Theorem. Let $G$ be a connected local Lie group of a continuous dynamical system acting in the n-dimensional space $\mathbb{R}^n$. Let $F : \mathbb{R}^n \to \mathbb{R}^k$ with $k \leq n$ define a regular system of algebraic equations $f_i(Z) = 0, \; \forall i \in \{1, \dots, k\}$. Then $G$ is a symmetry group of this algebraic system if, and only if, $\delta f_i(Z) = 0$ for all $i$ whenever $f_1(Z) = \cdots = f_k(Z) = 0$, for every infinitesimal generator $\delta$ in the Lie algebra $\mathfrak{g}$ of $G$. Example: Consider an algebraic system $f_1(Z) = f_2(Z) = 0$ defined on a space of 6 variables, namely $Z = (P, Q, a, b, c, l)$. The infinitesimal generator $\delta = a(a-1)\frac{\partial}{\partial a} + (l + b)\frac{\partial}{\partial b} + (2ac - c)\frac{\partial}{\partial c} + (-aP + P)\frac{\partial}{\partial P}$ is associated to one of the one-parameter symmetry groups. It acts on 4 variables, namely $a$, $b$, $c$ and $P$. One can easily verify that $\delta f_1 = f_1 - f_2$ and $\delta f_2 = 0$. Thus the relations $\delta f_1 = \delta f_2 = 0$ are satisfied for any $Z$ in $\mathbb{R}^6$ at which the algebraic system vanishes.
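The derivation property is easy to check by direct computation. The sympy sketch below applies the example generator to two sample rational functions and verifies the Leibniz rule (the sample functions are arbitrary choices, since the system's defining equations $f_1$, $f_2$ are not reproduced in the text):

```python
import sympy as sp

P, Q, a, b, c, l = sp.symbols('P Q a b c l')

def delta(f):
    """The example generator above, applied to a rational function f."""
    return (a*(a - 1)*sp.diff(f, a) + (l + b)*sp.diff(f, b)
            + (2*a*c - c)*sp.diff(f, c) + (-a*P + P)*sp.diff(f, P))

g1, g2 = a*P + b, c / (P + 1)        # arbitrary sample functions in R(Z)
lhs = delta(g1 * g2)
rhs = g1 * delta(g2) + g2 * delta(g1)
print(sp.simplify(lhs - rhs))        # 0: delta satisfies the Leibniz rule
```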
Lie point symmetries of dynamical systems: Let us define the systems of first-order ODEs used in the forthcoming symmetry definition. Definition of Lie point symmetries: Systems of ODEs and associated infinitesimal generators. Let $d \cdot / dt$ be a derivation w.r.t. the continuous independent variable $t$. We consider two sets $X = (x_1, \dots, x_k)$ and $\Theta = (\theta_1, \dots, \theta_l)$. The associated coordinate set is defined by $Z = (z_1, \dots, z_n) = (t, x_1, \dots, x_k, \theta_1, \dots, \theta_l)$, and its cardinal is $n = 1 + k + l$. With these notations, a system of first-order ODEs is a system of the form $\frac{dx_i}{dt} = f_i(Z)$ with $f_i \in \mathbb{R}(Z)$ for all $i \in \{1, \dots, k\}$, and $\frac{d\theta_j}{dt} = 0$ for all $j \in \{1, \dots, l\}$, where the set $F = (f_1, \dots, f_k)$ specifies the evolution of the state variables of the ODEs w.r.t. the independent variable. The elements of the set $X$ are called state variables, those of $\Theta$ parameters. Definition of Lie point symmetries: One can also associate a continuous dynamical system to a system of ODEs by resolving its equations. Definition of Lie point symmetries: An infinitesimal generator is a derivation that is closely related to systems of ODEs (more precisely, to continuous dynamical systems). For the link between a system of ODEs, the associated vector field and the infinitesimal generator, see section 1.3 of the cited literature. The infinitesimal generator $\delta$ associated to a system of ODEs, described as above, is defined with the same notations as follows: $\delta = \frac{\partial}{\partial t} + \sum_{i=1}^{k} f_i(Z) \frac{\partial}{\partial x_i}$. Definition of Lie point symmetries: Here is a geometrical definition of such symmetries. Let $D$ be a continuous dynamical system and $\delta_D$ its infinitesimal generator. A continuous dynamical system $S$ is a Lie point symmetry of $D$ if, and only if, $S$ sends every orbit of $D$ to an orbit. Hence, the infinitesimal generator $\delta_S$ satisfies the following relation based on the Lie bracket: $[\delta_D, \delta_S] = \lambda \delta_D$, where $\lambda$ is any constant of $\delta_D$ and $\delta_S$, i.e. $\delta_D \lambda = \delta_S \lambda = 0$. These generators are linearly independent. Definition of Lie point symmetries: One does not need the explicit formulas of $D$ in order to compute the infinitesimal generators of its symmetries. Example: Consider Pierre François Verhulst's logistic growth model with linear predation, where the state variable $x$ represents a population. The parameter $a$ is the difference between the growth and predation rate, and the parameter $b$ corresponds to the receptive capacity of the environment: $\frac{dx}{dt} = (a - bx)x, \quad \frac{da}{dt} = 0, \quad \frac{db}{dt} = 0$. The continuous dynamical system associated to this system of ODEs is $D : (\mathbb{R}, +) \times \mathbb{R}^4 \to \mathbb{R}^4, \; (\hat{t}, (t, x, a, b)) \mapsto \left( t + \hat{t}, \; \frac{a x e^{a \hat{t}}}{a - (1 - e^{a \hat{t}}) b x}, \; a, \; b \right)$. The independent variable $\hat{t}$ varies continuously; thus the associated group can be identified with $\mathbb{R}$. The infinitesimal generator associated to this system of ODEs is $\delta_D = \frac{\partial}{\partial t} + (a - bx)x \frac{\partial}{\partial x}$. The following infinitesimal generators belong to the 2-dimensional symmetry group of $D$: $\delta_{S_1} = -x \frac{\partial}{\partial x} + b \frac{\partial}{\partial b}, \quad \delta_{S_2} = t \frac{\partial}{\partial t} - x \frac{\partial}{\partial x} - a \frac{\partial}{\partial a}$. Software: There exist many software packages in this area. For example, the package liesymm of Maple provides some Lie symmetry methods for PDEs. It manipulates integration of determining systems and also differential forms. Despite its success on small systems, its integration capabilities for solving determining systems automatically are limited by complexity issues. The DETools package uses the prolongation of vector fields for searching Lie symmetries of ODEs. Finding Lie symmetries for ODEs, in the general case, may be as complicated as solving the original system.
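In the same spirit, the bracket relation $[\delta_D, \delta_S] = \lambda \delta_D$ can be verified symbolically for the logistic example above. The sympy sketch below computes the commutator component-wise via $[X, Y]^i = X(\eta_i) - Y(\xi_i)$ and finds $[\delta_D, \delta_{S_2}] = \delta_D$, i.e. $\lambda = 1$ (the same computation for $\delta_{S_1}$ gives $\lambda = 0$):

```python
import sympy as sp

t, x, a, b = sp.symbols('t x a b')
coords = (t, x, a, b)

# Coefficients of the two vector fields from the logistic example.
xi  = {t: sp.Integer(1), x: (a - b*x)*x, a: sp.Integer(0), b: sp.Integer(0)}  # delta_D
eta = {t: t, x: -x, a: -a, b: sp.Integer(0)}                                  # delta_S2

def apply_field(vec, f):
    """Apply a vector field (given by its coefficients) to a function f."""
    return sum(vec[z] * sp.diff(f, z) for z in coords)

bracket = {z: sp.expand(apply_field(xi, eta[z]) - apply_field(eta, xi[z]))
           for z in coords}
print(bracket)  # {t: 1, x: a*x - b*x**2, a: 0, b: 0} -> equals delta_D, so lambda = 1
```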
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pharmacodynamics** Pharmacodynamics: Pharmacodynamics (PD) is the study of the biochemical and physiologic effects of drugs (especially pharmaceutical drugs). The effects can include those manifested within animals (including humans), microorganisms, or combinations of organisms (for example, infection). Pharmacodynamics and pharmacokinetics are the main branches of pharmacology, itself a topic of biology concerned with the study of the interactions of both endogenous and exogenous chemical substances with living organisms. In particular, pharmacodynamics is the study of how a drug affects an organism, whereas pharmacokinetics is the study of how the organism affects the drug. Both together influence dosing, benefit, and adverse effects. Pharmacodynamics is sometimes abbreviated as PD and pharmacokinetics as PK, especially in combined reference (for example, when speaking of PK/PD models). Pharmacodynamics: Pharmacodynamics places particular emphasis on dose–response relationships, that is, the relationships between drug concentration and effect. One dominant example is drug–receptor interaction as modeled by L + R ⇌ LR, where L, R, and LR represent ligand (drug), receptor, and ligand–receptor complex concentrations, respectively. This equation represents a simplified model of reaction dynamics that can be studied mathematically through tools such as free energy maps. Basics: There are four principal protein targets with which drugs can interact: enzymes (e.g. neostigmine and acetylcholinesterase): inhibitors, inducers, activators; membrane carriers [reuptake vs efflux] (e.g. tricyclic antidepressants and catecholamine uptake-1): enhancers (RE), inhibitors (RI), releasers (RA); ion channels (e.g. nimodipine and voltage-gated Ca2+ channels): blockers, openers; and receptors (e.g. those listed in the table below). Agonists can be full, partial or inverse. Antagonists can be competitive, non-competitive, or uncompetitive. Allosteric modulators can have 3 effects within a receptor. One is their capability or incapability to activate a receptor (2 possibilities). The other two are agonist affinity and efficacy; each may be increased, decreased or unaffected (3 and 3 possibilities). NMBD = neuromuscular blocking drugs; NMDA = N-methyl-d-aspartate; EGF = epidermal growth factor.
Effects on the body: There are 7 main drug actions: stimulating action through direct receptor agonism and downstream effects; depressing action through direct receptor agonism and downstream effects (e.g. inverse agonists); blocking/antagonizing action (as with silent antagonists), where the drug binds the receptor but does not activate it; stabilizing action, where the drug seems to act neither as a stimulant nor as a depressant (e.g. some drugs possess receptor activity that allows them to stabilize general receptor activation, like buprenorphine in opioid-dependent individuals or aripiprazole in schizophrenia, all depending on the dose and the recipient); exchanging/replacing substances or accumulating them to form a reserve (e.g. glycogen storage); direct beneficial chemical reaction, as in free radical scavenging; and direct harmful chemical reaction which might result in damage or destruction of the cells, through induced toxic or lethal damage (cytotoxicity or irritation). Desired activity: The desired activity of a drug is mainly due to successful targeting of one of the following: cellular membrane disruption; chemical reaction with downstream effects; interaction with enzyme proteins; interaction with structural proteins; interaction with carrier proteins; interaction with ion channels; or ligand binding to receptors (hormone receptors, neuromodulator receptors, neurotransmitter receptors). General anesthetics were once thought to work by disordering the neural membranes, thereby altering the Na+ influx. Antacids and chelating agents combine chemically in the body. Enzyme–substrate binding is a way to alter the production or metabolism of key endogenous chemicals; for example, aspirin irreversibly inhibits the enzyme prostaglandin synthetase (cyclooxygenase), thereby preventing the inflammatory response. Colchicine, a drug for gout, interferes with the function of the structural protein tubulin, while digitalis, a drug still used in heart failure, inhibits the activity of the carrier molecule, the Na-K-ATPase pump. The widest class of drugs act as ligands that bind to receptors that determine cellular effects. Upon drug binding, receptors can elicit their normal action (agonist), blocked action (antagonist), or even action opposite to normal (inverse agonist). Effects on the body: In principle, a pharmacologist would aim for a target plasma concentration of the drug for a desired level of response. In reality, there are many factors affecting this goal. Pharmacokinetic factors determine peak concentrations, and concentrations cannot be maintained with absolute consistency because of metabolic breakdown and excretory clearance. Genetic factors may exist which would alter metabolism or drug action itself, and a patient's immediate status may also affect indicated dosage. Effects on the body: Undesirable effects. Undesirable effects of a drug include: increased probability of cell mutation (carcinogenic activity); a multitude of simultaneous assorted actions which may be deleterious; interaction (additive, multiplicative, or metabolic); and induced physiological damage, or abnormal chronic conditions. Therapeutic window: The therapeutic window is the amount of a medication between the amount that gives an effect (effective dose) and the amount that gives more adverse effects than desired effects. For instance, medication with a narrow therapeutic window must be administered with care and control, e.g.
by frequently measuring the blood concentration of the drug, since it easily loses effect or gives adverse effects. Effects on the body: Duration of action. The duration of action of a drug is the length of time that particular drug is effective. Duration of action is a function of several parameters including plasma half-life, the time to equilibrate between plasma and target compartments, and the off-rate of the drug from its biological target. Effects on the body: Recreational drug use. In recreational psychoactive drug use, duration refers to the length of time over which the subjective effects of a psychoactive substance manifest themselves. Duration can be broken down into 6 parts: (1) total duration, (2) onset, (3) come up, (4) peak, (5) offset and (6) after effects. Depending upon the substance consumed, each of these occurs in a separate and continuous fashion. Effects on the body: Total. The total duration of a substance can be defined as the amount of time it takes for the effects of a substance to completely wear off into sobriety, starting from the moment the substance is first administered. Onset: The onset phase can be defined as the period until the very first changes in perception (i.e. "first alerts") are able to be detected. Come up: The "come up" phase can be defined as the period between the first noticeable changes in perception and the point of highest subjective intensity. This is colloquially known as "coming up." Peak: The peak phase can be defined as the period of time in which the intensity of the substance's effects is at its height. Effects on the body: Offset. The offset phase can be defined as the amount of time between the conclusion of the peak and the shift into a sober state. This is colloquially referred to as "coming down." After effects: The after effects can be defined as any residual effects which may remain after the experience has reached its conclusion. After effects depend on the substance and usage. This is colloquially known as a "hangover" for negative after effects of substances such as alcohol, cocaine, and MDMA, or an "afterglow" for describing a typically positive, pleasant effect, typically found in substances such as cannabis, LSD in low to high doses, and ketamine. Receptor binding and effect: The binding of ligands (drugs) to receptors is governed by the law of mass action, which relates the large-scale state to the rates of numerous molecular processes. The rates of association and dissociation can be used to determine the equilibrium concentration of bound receptors. For L + R ⇌ LR, the equilibrium dissociation constant is defined by Kd = [L][R]/[LR], where L = ligand, R = receptor, and square brackets [] denote concentration. The fraction of bound receptors is pLR = [LR]/([R] + [LR]) = 1/(1 + Kd/[L]), where pLR is the fraction of receptors bound by the ligand. Receptor binding and effect: This expression is one way to consider the effect of a drug, in which the response is related to the fraction of bound receptors (see: Hill equation). The fraction of bound receptors is known as occupancy. The relationship between occupancy and pharmacological response is usually non-linear. This explains the so-called receptor reserve phenomenon, i.e. the concentration producing 50% occupancy is typically higher than the concentration producing 50% of the maximum response. More precisely, receptor reserve refers to a phenomenon whereby stimulation of only a fraction of the whole receptor population apparently elicits the maximal effect achievable in a particular tissue.
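The occupancy relation lends itself to a quick numerical illustration. The following sketch evaluates pLR = 1/(1 + Kd/[L]) for an assumed Kd of 10 nM and a few illustrative ligand concentrations; at [L] = Kd it returns exactly 0.5.

```python
# Sketch: fractional receptor occupancy from the law of mass action.
# Kd and the ligand concentrations are illustrative values.
def occupancy(ligand_conc: float, kd: float) -> float:
    """Fraction of receptors bound at a given free ligand concentration."""
    return 1.0 / (1.0 + kd / ligand_conc)

kd = 10e-9  # 10 nM, an assumed dissociation constant
for conc in (1e-9, 10e-9, 100e-9, 1e-6):
    print(f"[L] = {conc:.0e} M -> occupancy = {occupancy(conc, kd):.2f}")
# At [L] = Kd, exactly 50% of the receptors are bound.
```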
Receptor binding and effect: The simplest interpretation of receptor reserve is a model stating that there are more receptors on the cell surface than are necessary for full effect. Taking a more sophisticated approach, receptor reserve is an integrative measure of the response-inducing capacity of an agonist (in some receptor models it is termed intrinsic efficacy or intrinsic activity) and of the signal amplification capacity of the corresponding receptor (and its downstream signaling pathways). Thus, the existence (and magnitude) of receptor reserve depends on the agonist (efficacy), tissue (signal amplification ability) and measured effect (pathways activated to cause signal amplification). As receptor reserve is very sensitive to the agonist's intrinsic efficacy, it is usually defined only for full (high-efficacy) agonists. Often the response is determined as a function of log[L] to consider many orders of magnitude of concentration. However, there is no biological or physical theory that relates effects to the log of concentration; it is just convenient for graphing purposes. It is useful to note that 50% of the receptors are bound when [L] = Kd. Receptor binding and effect: The graph shown represents the concentration–response curves for two hypothetical receptor agonists, plotted in a semi-log fashion. The curve toward the left represents a higher potency, since lower concentrations are needed for a given response. The effect increases as a function of concentration. Multicellular pharmacodynamics: The concept of pharmacodynamics has been expanded to include multicellular pharmacodynamics (MCPD). MCPD is the study of the static and dynamic properties and relationships between a set of drugs and a dynamic and diverse multicellular four-dimensional organization. It is the study of the workings of a drug on a minimal multicellular system (mMCS), both in vivo and in silico. Networked multicellular pharmacodynamics (Net-MCPD) further extends the concept of MCPD to model regulatory genomic networks together with signal transduction pathways, as part of a complex of interacting components in the cell. Toxicodynamics: Pharmacokinetics and pharmacodynamics are termed toxicokinetics and toxicodynamics in the field of ecotoxicology. Here, the focus is on toxic effects on a wide range of organisms. The corresponding models are called toxicokinetic-toxicodynamic models.
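As an illustration of the semi-log concentration–response plot described above, the following sketch draws occupancy-driven response curves for two hypothetical agonists whose Kd values differ a hundredfold; the higher-potency curve sits to the left. All values are illustrative.

```python
# Sketch: semi-log concentration-response curves for two hypothetical agonists.
import numpy as np
import matplotlib.pyplot as plt

conc = np.logspace(-11, -5, 200)          # molar ligand concentrations
for kd, label in [(1e-9, "high potency (Kd = 1 nM)"),
                  (1e-7, "low potency (Kd = 100 nM)")]:
    response = conc / (conc + kd)         # occupancy-driven response, Emax = 1
    plt.semilogx(conc, response, label=label)

plt.xlabel("[L] (M, log scale)")
plt.ylabel("fraction of maximal response")
plt.legend()
plt.show()
```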
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reduction (orthopedic surgery)** Reduction (orthopedic surgery): Reduction is a surgical procedure to repair a fracture or dislocation to the correct alignment. Description: When a bone fractures, the fragments lose their alignment in the form of displacement or angulation. For the fractured bone to heal without any deformity the bony fragments must be re-aligned to their normal anatomical position. Orthopedic surgery attempts to recreate the normal anatomy of the fractured bone by reduction of the displacement. This sense of the term "reduction" does not imply any sort of removal or quantitative decrease but rather implies a restoration: re ("back [to initial position]") + ducere ("lead"/"bring"), i.e., "bringing back to normal". Description: Because the process of reduction can briefly be intensely painful, it is commonly done under a short-acting anesthetic, sedative, or nerve block. Once the fragments are reduced, the reduction is maintained by application of casts, traction, or held by plates, screws, or other implants, which may in turn be external or internal. It is very important to verify the accuracy of reduction by clinical tests and X-ray, especially in the case of joint dislocations. Types: Reduction can be by "closed" or "open" methods: Closed reduction is the manipulation of the bone fragments without surgical exposure of the fragments. Open reduction is where the fracture fragments are exposed through surgical dissection of tissues.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Moisture analysis** Moisture analysis: Moisture analysis covers a variety of methods for measuring the moisture content in solids, liquids, or gases. For example, moisture (usually measured as a percentage) is a common specification in commercial food production. There are many applications where trace moisture measurements are necessary for manufacturing and process quality assurance. Trace moisture in solids must be known in processes involving plastics, pharmaceuticals and heat treatment. Fields that require moisture measurement in gases or liquids include hydrocarbon processing, pure semiconductor gases, bulk pure or mixed gases, dielectric gases such as those in transformers and power plants, and natural gas pipeline transport. Moisture content measurements can be reported in multiple units, such as: parts per million, pounds of water per million standard cubic feet of gas, mass of water vapor per unit volume, or mass of water vapor per unit mass of dry gas. Moisture content vs. moisture dew point: Moisture dew point is the temperature at which moisture condenses out of a gas. This parameter is inherently related to the moisture content, which defines the amount of water molecules as a fraction of the total. Both can be used as a measure of the amount of moisture in a gas, and one can be calculated from the other fairly accurately. Moisture content vs. moisture dew point: While both terms are sometimes used interchangeably, these two parameters, though related, are different measurements. Loss on drying: The classic laboratory method of measuring high-level moisture in solid or semi-solid materials is loss on drying. In this technique, a sample of material is weighed, heated in an oven for an appropriate period, cooled in the dry atmosphere of a desiccator, and then reweighed. If the volatile content of the solid is primarily water, the loss-on-drying technique gives a good measure of moisture content. Because the manual laboratory method is relatively slow, automated moisture analysers have been developed that can reduce the time necessary for a test from a couple of hours to just a few minutes. These analysers incorporate an electronic balance with a sample tray and surrounding heating element. Under microprocessor control, the sample can be heated rapidly. The moisture loss rate is measured throughout the process and then plotted in the form of a drying curve. Karl Fischer titration: An accurate method for determining the amount of water is the Karl Fischer titration, developed in 1935 by the German chemist whose name it bears. This method detects only water, contrary to loss on drying, which detects any volatile substances. Techniques used for natural gas: Natural gas poses a unique problem in terms of moisture content analysis because it can contain very high levels of solid and liquid contaminants, as well as corrosives in varying concentrations. Techniques used for natural gas: Measurements of moisture in natural gas are typically performed with one of the following techniques: color indicator tubes; chilled mirrors; chilled mirror combined with spectroscopy; electrolytic; piezoelectric sorption, also known as quartz crystal microbalance; aluminum oxide and silicon oxide; spectroscopy. Other moisture measurement techniques exist but are not used in natural gas applications for various reasons. For example, the gravimetric hygrometer and the “two-pressure” system used by the National Bureau of Standards are precise, but are not suitable for use in industrial applications.
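The loss-on-drying calculation described above reduces to a one-line formula. The sketch below computes wet-basis moisture content from illustrative (assumed) weighings taken before and after oven drying.

```python
# Sketch: loss-on-drying moisture content, computed on a wet basis.
# The sample weights are illustrative, not taken from the text.
def moisture_percent(wet_mass_g: float, dry_mass_g: float) -> float:
    """Moisture content (% of wet mass) lost during oven drying."""
    return 100.0 * (wet_mass_g - dry_mass_g) / wet_mass_g

print(moisture_percent(25.00, 23.40))  # -> 6.4 (% moisture)
```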
Techniques used for natural gas: Color indicator tubes. A color indicator tube (also referred to as a gas detector tube) is a device that natural gas pipelines use for a quick and rough measurement of moisture. Each tube contains chemicals that react to a specific compound to form a stain or color when the gas passes through. The tubes are used once and then discarded. A manufacturer calibrates the tubes, but since the measurement is directly related to the exposure time, the flow rate, and the extractive technique, it is susceptible to error. In practice, the error can reach up to 25 percent. Color indicator tubes are well suited for infrequent, rough estimations of moisture in natural gas. Techniques used for natural gas: Chilled mirrors. This type of device is considered the most popular when it comes to measuring the dew point of water in gaseous media. In this type of device, gas flows across a reflective cooled surface, the eponymous chilled mirror. When the surface is cold enough, the available moisture starts to condense onto it in tiny droplets. The exact temperature at which this condensation first occurs is registered, and the mirror is then slowly heated until the condensed water begins to evaporate. This temperature is also registered, and the average of the condensation and evaporation temperatures is reported as the dew point. All chilled-mirror devices, both manual and automatic, are based on this same basic method. It is necessary to measure the temperatures of both condensation and evaporation, because the dew point is the equilibrium temperature at which water condenses and evaporates at the same rate. When the mirror is being cooled, its temperature keeps dropping after the dew point has been reached, so the temperature registered when water starts to condense is lower than the actual dew point temperature. Therefore, the temperature of the mirror is slowly increased until evaporation is observed to occur, and the dew point is reported as the average of these two temperatures. By obtaining an accurate dew point temperature, one can calculate the moisture content in the gas. The mirror temperature can be regulated either by the flow of a refrigerant over the mirror or by a thermoelectric cooler, also known as a Peltier element. Techniques used for natural gas: The formation of condensation on the mirror's surface can be registered by either optical or visual means. In both cases, a light source is directed onto the mirror, and changes in the reflection of this light due to the formation of condensation are detected by a sensor or the human eye, respectively. The exact point at which condensation begins to occur is not discernible to the unaided eye, so modern manually operated instruments use a microscope to enhance the accuracy of measurements taken using this method. Chilled mirror analyzers are subject to the confounding effects of some contaminants, however, at levels similar to other analyzers. With proper filtration and gas analysis preparation systems, other condensable liquids such as heavy hydrocarbons, alcohol, and glycol will not distort the results provided by these devices. It is also worth noting that in the case of natural gas, in which the aforementioned contaminants are an issue, on-line analyzers routinely measure the water dew point at line pressure, which reduces the likelihood that any heavy hydrocarbons, for example, will condense before water.
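As noted above, an accurate dew point can be converted into a moisture content. The sketch below does this with the Magnus approximation for the saturation vapour pressure of water; the constants are one published parameterization, the example dew point and pressure are illustrative, and real pipeline-pressure work would use more elaborate correlations (and frost-point corrections below 0 °C).

```python
# Sketch: dew point -> moisture content in ppmv, via the Magnus approximation.
import math

def ppmv_from_dew_point(dew_point_c: float, total_pressure_kpa: float) -> float:
    # Magnus formula: saturation vapour pressure over water, in kPa
    e_s = 0.61094 * math.exp(17.625 * dew_point_c / (dew_point_c + 243.04))
    return 1e6 * e_s / total_pressure_kpa

# Illustrative case: dew point of -10 C at atmospheric pressure (101.325 kPa)
print(f"{ppmv_from_dew_point(-10.0, 101.325):.0f} ppmv")
```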
On the other hand, chilled-mirror devices are not subject to drift, and are not influenced by fluctuations in gas composition or changes in moisture content. Techniques used for natural gas: Chilled mirror combined with spectroscopy. This method of analysis combines some of the benefits of a chilled-mirror measurement with spectroscopy. In this method, a transparent inert material is cooled while an infrared (IR) beam is directed through it at an angle to the exterior surface. When it encounters this surface, the IR beam is reflected back through the material. A gaseous medium is passed across the surface of the material at the point where the IR beam is reflected. When a condensate forms on the surface of the cooling material, an analysis of the reflected IR beam will show absorption at the wavelengths that correspond to the molecular structure of the condensate formed. In this way, the device is able to distinguish between water condensation and other types of condensates, such as hydrocarbons when the gaseous medium is natural gas. One advantage of this method is its relative immunity to contaminants, thanks to the inert nature of the transparent material. Similar to a true chilled-mirror device, this type of analyzer can accurately measure the condensation temperature of potential liquids in a gaseous medium, but it is not capable of measuring the actual water dew point, as this requires the accurate measurement of the evaporation temperature as well. Techniques used for natural gas: Electrolytic. The electrolytic sensor uses two closely spaced, parallel windings coated with a thin film of phosphorus pentoxide (P2O5). As this coating absorbs incoming water vapor, an electrical potential applied to the windings electrolyzes the water to hydrogen and oxygen. The current consumed by the electrolysis determines the mass of water vapor entering the sensor. The flow rate and pressure of the incoming sample must be controlled precisely to maintain a standard sample mass flow rate into the sensor. Techniques used for natural gas: The method is fairly inexpensive and can be used effectively in pure gas streams where response rates are not critical. Contamination from oils, liquids or glycols on the windings will cause drift in the readings and damage to the sensor. The sensor cannot react to sudden changes in moisture, i.e., the reaction on the windings' surfaces takes some time to stabilize. Large amounts of water in the pipeline (called slugs) will wet the surface and require tens of minutes or hours to "dry down." Effective sample conditioning and removal of liquids are essential when using an electrolytic sensor. Techniques used for natural gas: Piezoelectric sorption. The piezoelectric sorption instrument compares the changes in the frequency of hygroscopically coated quartz oscillators. As the mass of the crystal changes due to the adsorption of water vapor, the frequency of the oscillator changes. The sensor provides a relative measurement, so an integrated calibration system with desiccant dryers, permeation tubes and sample line switching is frequently used to calibrate the system. Techniques used for natural gas: The system has succeeded in many applications, including natural gas. Interference from glycol and methanol, and damage from hydrogen sulfide, are possible and can result in erratic readings. The sensor itself is relatively inexpensive and very precise.
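For the electrolytic sensor described above, the link between electrolysis current and incoming water follows from Faraday's law: each water molecule electrolyzed transfers two electrons. A sketch with an illustrative, assumed current:

```python
# Sketch: electrolysis current -> mass flow of water into a P2O5 sensor,
# via Faraday's law. The example current is illustrative.
FARADAY = 96485.0      # C/mol
M_WATER = 18.015       # g/mol
ELECTRONS_PER_H2O = 2  # H2O -> H2 + 1/2 O2 transfers two electrons

def water_mass_flow_g_per_s(current_amps: float) -> float:
    return current_amps * M_WATER / (ELECTRONS_PER_H2O * FARADAY)

# A steady 100 microamp electrolysis current
print(f"{water_mass_flow_g_per_s(100e-6):.3e} g/s of water vapour")
```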
For the piezoelectric instrument, the required calibration system is not as precise and adds to the cost and mechanical complexity of the system. The labor for frequent replacement of desiccant dryers, permeation components, and sensor heads greatly increases the operational costs. Additionally, slugs of water render the system non-functional for long periods of time, as the sensor head has to "dry down." Aluminum oxide and silicon oxide: The oxide sensor is made up of an inert substrate material and two dielectric layers, one of which is sensitive to humidity. The moisture molecules pass through the pores on the surface and cause a change to the physical property of the layer beneath it. Techniques used for natural gas: An aluminum oxide sensor has two metal layers that form the electrodes of a capacitor. The number of water molecules adsorbed causes a change in the dielectric constant of the sensor. The sensor impedance correlates to the water concentration. A silicon oxide sensor can be an optical device that changes its refractive index as water is absorbed into the sensitive layer, or a different impedance type in which silicon replaces the aluminum. Techniques used for natural gas: In the first type (optical), when light is reflected through the substrate, a wavelength shift can be detected on the output, which can be precisely correlated to the moisture concentration. A fiber optic connector can be used to separate the sensor head and the electronics. Techniques used for natural gas: This type of sensor is not extremely expensive and can be installed at pipeline pressure (in situ). Water molecules do take time to enter and exit the pores, so some wet-up and dry-down delays will be observed, especially after a slug. Contaminants and corrosives may damage and clog the pores, causing a "drift" in the calibration, but the sensor heads can be refurbished or replaced and will perform better in very clean gas streams. As with the piezoelectric and electrolytic sensors, the sensor is susceptible to interference from glycol and methanol, and the calibration will drift as the sensor's surface becomes inactive due to damage or blockage, so the calibration is reliable only at the beginning of the sensor's life. Techniques used for natural gas: In the second type (silicon oxide sensor), the device is often temperature-controlled for improved stability; it is considered to be chemically more stable than aluminium oxide types and far faster responding, because it holds less water in equilibrium at an elevated operating temperature. While most absorption-type devices can be installed at pipeline pressures (up to 130 barg), traceability to international standards is then compromised. Operation at near atmospheric pressure does provide traceability and offers other significant benefits, such as enabling direct validation against a known moisture content. Techniques used for natural gas: Spectroscopy. Absorption spectroscopy is a relatively simple method of passing light through a gas sample and measuring the amount of light absorbed at a specific wavelength. Traditional spectroscopic techniques have not been successful at doing this in natural gas because methane absorbs light in the same wavelength regions as water. But if one uses a very high resolution spectrometer, it is possible to find some water peaks that are not overlapped by other gas peaks. Techniques used for natural gas: The tunable laser provides a narrow, tunable-wavelength light source that can be used to analyze these small spectral features.
According to the Beer-Lambert law, the amount of light absorbed by the gas is proportional to the amount of the gas present in the light's path; therefore, this technique is a direct measurement of moisture. In order to achieve a long enough light path, a mirror is used in the instrument. The mirror may become partially blocked by liquid and solid contamination, but since the measurement is a ratio of absorbed light over the total light detected, the calibration is unaffected by a partially blocked mirror (if the mirror is totally blocked, it must be cleaned). Techniques used for natural gas: A TDLAS analyzer has a higher upfront cost compared to most of the analyzers above. However, tunable diode laser absorption spectroscopy is superior when there is a need for an analyzer that will not suffer interference or damage from corrosive gases, liquids or solids, an analyzer that will react very quickly to drastic moisture changes, or an analyzer that will remain calibrated for very long periods of time, assuming the gas composition does not change.
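The ratio measurement described above can be sketched numerically. In the sketch below, the absorptivity and path length are assumed, illustrative values; the point is that concentration depends only on the ratio of incident to transmitted intensity, which is why a partial blockage of the mirror does not upset the calibration.

```python
# Sketch: the Beer-Lambert relation used by TDLAS-style analyzers.
import math

def water_concentration(i_transmitted: float, i_incident: float,
                        epsilon: float, path_length_cm: float) -> float:
    """Concentration from absorbance A = log10(I0/I) = epsilon * l * c."""
    absorbance = math.log10(i_incident / i_transmitted)
    return absorbance / (epsilon * path_length_cm)

# Assumed molar absorptivity (L/(mol*cm)) and a folded 100 cm light path
print(water_concentration(0.82, 1.0, epsilon=2.0e2, path_length_cm=100.0))
```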
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Combination (chess)** Combination (chess): In chess, a combination is a sequence of moves, often initiated by a sacrifice, which leaves the opponent few options and results in tangible gain. At most points in a chess game, each player has several reasonable options from which to choose, which makes it difficult to plan ahead except in strategic terms. Combinations, in contrast to the norm, are sufficiently forcing that one can calculate exactly how advantage will be achieved against any defense. Indeed, it is usually necessary to see several moves ahead in exact detail before launching a combination, or else the initial sacrifice should not be undertaken. Definition: In 1952/53, the editors of Shakhmaty v SSSR decided on this definition: A combination is a forced sequence of moves which uses tactical means and exploits specific peculiarities of the position to achieve a certain goal. (Golombek 1977) Irving Chernev wrote: What is a combination? A combination is a blend of ideas – pins, forks, discovered checks, double attacks – which endow the pieces with magical power. It is a series of staggering blows before the knockout. It is the climactic scene in the play appearing on the board. It is the touch of enchantment that gives life to inanimate pieces. It is all this and more – A combination is the heart of chess (Chernev 1960). Example: A combination is usually built out of two or more fundamental chess tactics such as forks, pins, skewers, undermining, discovered attacks, etc. Thus a combination is usually at least three moves long, but the longer it takes to recoup the initial sacrifice, the more impressive the combination. The position shown is from G. Stepanov–Peter Romanovsky, Leningrad 1926, and begins a combination which illustrates several forks and skewers. Black has just played 1... Rxf3+. Retreating with 2.Ke2 would allow 2...Nd4+, a royal fork attacking both White's king and queen and winning the queen. Similarly, 2.Kd2 would allow 2...Rf2+ (skewering the white king and queen) 3.Be2 Rxe2+! 4.Kxe2 Nd4+, again winning the queen. White accordingly chose 2. Ke4, but after 2... d5+! White resigned. White still could not take the black rook without losing his queen, but the alternative 3.cxd5 exd5+ 4.Kxd5 Be6+ would leave White with no good defense. Taking the bishop with 5.Kxe6 allows the long-threatened fork 5...Nd4+, while taking the knight with 5.Kxc6 allows the skewer 5...Rc8+ followed by 6...Rxc2. Retreating with 5.Ke4 permits the black bishop to skewer the white king and queen with 5...Bf5+, so White has only one option left: 5.Kd6. Example: After 5.Kd6, Black would have played 5...Rd8+. White couldn't take the bishop or the knight for exactly the same reasons as before (after 6.Kxe6 Nd4+ 7.Ke7, Black comes out a rook ahead with 7...Nxc2 8.Kxd8 Nxa1), which leaves one legal move, namely 6.Kc7, but then 6...Rf7+ absolutely forces the white king to take the black knight, allowing the skewer 7...Rc8+ followed by 8...Rxc2.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Myopic crescent** Myopic crescent: A myopic crescent is a moon-shaped feature that can develop at the temporal (lateral) border of the optic disc (it rarely occurs at the nasal border) of myopic eyes. It is primarily caused by atrophic changes that are genetically determined, with a minor contribution from stretching due to elongation of the eyeball. In myopia that is no longer progressing, the crescent may be asymptomatic except for its presence on ocular examination. However, in high-degree myopia, it may extend to the upper and lower borders, or form a complete ring around the optic disc and produce a central scotoma. Myopic crescent: The myopic crescent is commonly seen in pathological axial myopia. The condition is sometimes described erroneously as myopic choroiditis, but the myopic crescent is not an inflammatory process and does not run parallel to the degree of myopia. It usually tends to occur after mid-adult life. Myopic crescent is often associated with some degree of retinal degeneration and occasionally vitreous degeneration.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mumble (software)** Mumble (software): Mumble is a voice over IP (VoIP) application primarily designed for use by gamers and is similar to programs such as TeamSpeak. Mumble uses a client–server architecture which allows users to talk to each other via the same server. It has a very simple administrative interface and features high sound quality and low latency. All communication is encrypted. Mumble is free and open-source software, is cross-platform, and is released under the terms of the BSD-3-Clause license. Channel hierarchy: A Mumble server (called Murmur) has a root channel and a hierarchical tree of channels beneath it. Users can temporarily link channels to create larger virtual channels. This is useful during larger events, where a small group of users may be chatting in a channel but are linked to a common channel with other users to hear announcements. It also suits team-based first-person shooter (FPS) games. Each channel has an associated set of groups and access control lists which control user permissions. The system supports many usage scenarios, at the cost of added configuration complexity. Sound quality: Mumble uses the low-latency audio codec Opus as of version 1.2.4, the codec that succeeded the previous defaults Speex and CELT. This and the rest of Mumble's design allow for low-latency communication, meaning a shorter delay between when something is said on one end and when it is heard on the other. Mumble also incorporates echo cancellation to reduce echo when using speakers or poor-quality sound hardware. Security and privacy: Mumble connects to a server via a TLS control channel, with the audio travelling via UDP encrypted with AES in OCB mode. As of 1.2.9, Mumble prefers ECDHE + AES-GCM cipher suites if possible, providing perfect forward secrecy. While password authentication for users is supported, since 1.2.0 it is typically eschewed in favor of strong authentication in the form of public key certificates. Overlay: There is an integrated overlay for use in fullscreen applications. The overlay shows who is talking and what linked channel they are in. As of version 1.0, users could upload avatars to represent themselves in the overlay, creating a more personalized experience. As of version 1.2, the overlay works with most Direct3D 9/10 and OpenGL applications on Windows and has OpenGL support for Linux and Mac OS X. Support for DirectX 11 applications was later added. Positional audio: For certain games, Mumble modifies the audio to position other players' voices according to their relative position in the game. This gives not only a sense of direction but also of distance. To realise this, Mumble sends each player's in-game position to players in the same game with every audio packet. Mumble can gather the information needed to do this in two ways: it either reads the needed information directly out of the memory of the game, or the games provide it themselves via the so-called link plugin interface. Positional audio: The link plugin provides games with a way to expose the information needed for positional audio themselves, by including a small piece of source code provided by the Mumble project. Several high-profile games have implemented this functionality, including many of Valve's Source Engine based games (Team Fortress 2, Day of Defeat: Source, Counter-Strike: Source, Half-Life 2: Deathmatch) and Guild Wars 2.
Mobile apps: Third-party mobile apps are available for Mumble, such as Mumble for iOS, Plumble for Android (F-Droid, Google Play; discontinued in 2016), and Mumla (F-Droid, Google Play). Server integration: Mumble fits into existing technological and social structures. As such, the server is fully remote-controllable over ZeroC Ice. User channels as well as virtual server instances can be manipulated. The project provides a number of sample scripts illustrating the abilities of the interface, as well as prefabricated scripts offering features like authenticating users against an existing phpBB or Simple Machines Forum database. The Murmur server uses port 64738 TCP and UDP by default. The port number refers to the address of the reset function on a Commodore 64. Server integration: An alternative minimalist implementation of the Mumble server (Murmur) is called uMurmur. It is intended for installation on embedded devices with limited resources, such as residential gateways running OpenWrt. Server hosting: Like many other VoIP clients, Mumble servers can be either rented or hosted locally. Hosting a Mumble server locally requires downloading Murmur (included as an option in the Mumble installer) and launching it. Configuring the server is achieved by editing the configuration file, which holds information for the server's name, user authentication, audio quality restrictions, and port. Administrating the server from within requires a user to be given administrator rights, or can also be done by logging into the SuperUser account. Administrators within the server can add or edit rooms, manage users, and view the server's information.
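A minimal configuration sketch follows; the key names mirror those found in the default murmur.ini shipped with the server, but the values (including the Ice endpoint) are illustrative, not a recommended setup.

```ini
# A minimal murmur.ini sketch; values here are illustrative.

# Welcome message and default network port (the Commodore 64 reset address)
welcometext="Welcome to this Mumble server."
port=64738

# Authentication and limits: empty password means no password is required
serverpassword=
bandwidth=72000
users=100

# ZeroC Ice endpoint for remote control of the server
ice="tcp -h 127.0.0.1 -p 6502"
```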
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ski orienteering** Ski orienteering: Ski orienteering (SkiO) is a cross-country skiing endurance winter racing sport and one of the four orienteering disciplines recognized by the IOF. A successful ski orienteer combines high physical endurance, strength and excellent technical skiing skills with the ability to navigate and make the best route choices while skiing at high speed. Ski orienteering: Standard orienteering maps are commonly used, but since 2019 a separate mapping standard, ISSkiOM, has been produced, which recommends a subset of the symbols used in other disciplines. Ski-orienteering maps use green symbols to indicate trails and tracks, and different symbols to indicate their navigability in snow; other symbols indicate whether any roads are snow-covered or clear. Navigation tactics are similar to mountain bike orienteering. Standard skate-skiing equipment is used, along with a map holder attached to the chest. Compared to cross-country skiing, upper-body strength is more important because of the double-poling needed along narrow snow trails. Events: Ski orienteering events are designed to test both the physical strength and the navigation skills of the athletes. Ski orienteers use the map to navigate a dense ski track network in order to visit a number of control points in the shortest possible time. The track network is printed on the map, and there is no marked route in the terrain. The control points must be visited in the right order. The map gives all the information the athlete needs in order to decide which route is the fastest, including the quality and width of the tracks. The athlete has to make hundreds of route-choice decisions at high speed during every race: a slight lack of concentration for just a hundredth of a second may cost the medal. Ski orienteering is time-measured and objective. The clock is the judge: the fastest time wins. The electronic card verifies that the athlete has visited all control points in the right order. International competitions: The World Ski Orienteering Championships is the official event to award the titles of World Champions in Ski Orienteering. The World Championships is organized every odd year. The programme includes Sprint, Middle and Long Distance competitions, and a Relay for both men and women. Events: The World Cup is the official series of events to find the world's best ski orienteers over a season. The World Cup is organized every even year. Junior World Ski Orienteering Championships and World Masters Ski Orienteering Championships are organized annually. World-wide sport: Ski orienteering is practiced on four continents. The events take place in the natural environment, over a variety of outdoor terrains, from city parks to countryside fields, forests and mountain sides - wherever there is snow. The leading ski orienteering regions are Asia, Europe and North America. Events: National teams from 35 countries are expected to participate in the next World Ski Orienteering Championships, to be held in Sweden in March 2011. Ski orienteering is on the programme of the Asian Winter Games and the CISM World Military Winter Games. The IOF has applied for inclusion of ski orienteering in the 2018 Olympic Winter Games and will also apply to FISU for inclusion in the 2013 Winter Universiades.
World Rankings: As of 1 June 2019, the highest-ranked male ski orienteers are given in a ranking table (table not reproduced here). Equipment: A person taking part in ski orienteering competitions is equipped with: clothing adequate for cross-country skiing, boots, skis and ski poles. An orienteering map provided by the organizer, showing the control points which must be visited in order. The map is designed to give all the information the competitor needs to decide which route is the fastest, such as the quality of the tracks, gradient and distance. Green lines on the map show a trail suited to racing on skis. Depending on the thickness and continuity of the lines, the competitor makes decisions about which route is the fastest between control points. Equipment: Map holder: a map holder attached to the chest makes it possible to view the map while skiing at full speed. Optionally, a lighter type of compass is attached to the map holder or to the skier's arm. An electronic punching chip (see orienteering control point). Bid for inclusion in the 2018 Winter Olympic Games: The International Orienteering Federation (IOF) had applied for ski orienteering to be included in the programme of the 2018 Olympic Winter Games; however, this was unsuccessful. In the past few years, ski orienteering has grown considerably in terms of global spread. The growth has been boosted by the inclusion of ski orienteering in the Asian Winter Games and the CISM World Military Winter Games.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Idiopathic chronic fatigue** Idiopathic chronic fatigue: Idiopathic chronic fatigue (ICF), also called chronic idiopathic fatigue or insufficient/idiopathic fatigue, is characterized by unexplained fatigue that lasts at least six consecutive months and does not meet the criteria for chronic fatigue syndrome. It is widely understood to have a profound effect on the lives of patients who experience it. ICF is a common illness of unknown origin, and remains poorly understood. Classification: Idiopathic chronic fatigue does not have a dedicated diagnostic code in the World Health Organization's ICD-11 classification. ICF is defined as a physical medical condition of unknown origin in which the CFS criteria are not met, and because the World Health Organization does not recognize any kind of fatigue-based psychiatric illness (unless it is accompanied by related psychiatric symptoms), only the fatigue codes in the physical-symptoms category of the ICD can be used. The MG22 (Fatigue) and R53.8 (Other malaise and fatigue) codes, in the ICD-11 and ICD-10 respectively, both allow ICF to be coded as fatigue or unspecified chronic fatigue, and are used when no more specific codes exist. These codes help distinguish ICF from many other forms of fatigue, including cancer-related fatigue, chronic fatigue syndrome, fatigue due to depression, fatigue due to old age, weakness/asthenia, and, in the ICD-10, also from fatigue lasting under 6 months. The ICD-11 MG22 Fatigue code is also shared with lethargy and exhaustion, which may not be as long-lasting. Diagnosis: ICF is fatigue of unknown origin, persisting or relapsing for a minimum of six consecutive months, and failing to meet the criteria for chronic fatigue syndrome. There are no agreed-upon international criteria for idiopathic chronic fatigue; however, the CDC's 1994 Idiopathic Chronic Fatigue criteria, known as the Fukuda ICF criteria, are commonly used. Diagnosis: Differential diagnosis. Differences from chronic fatigue: ICF differs from chronic fatigue since it is unexplained rather than linked to a medical or psychological illness (for example, diabetes or depression). This means that ICF patients have reduced treatment options: there is no underlying disease or known cause that could be treated in order to reduce the degree of fatigue, which results in a poorer prognosis for ICF. In ICF, the fatigue lasts for a minimum of six months; chronic fatigue is usually (but not always) considered to require a minimum of six months to be called chronic, and if lasting between one and six months it is considered prolonged fatigue. Diagnosis: Chronic fatigue is the term used when medical tests and a mental and physical assessment have not yet been carried out. ICF can only be diagnosed after these are done and the results show no underlying untreated cause. Diagnosis: Differences from neurasthenia. Neurasthenia consisted of a large number of symptoms; typically patients had a mix of physical and psychological complaints, for example anxiety, stress-related headaches, heart palpitations, depressed mood, fatigue, lethargy, insomnia, restlessness and weariness. Fatigue was common but not essential.
ICF consists of the single symptom of fatigue, which may be either mental or physical or both, and may be described in many different ways, including as "exhaustion". ICF has no known cause, and psychological factors such as stress have been ruled out; neurasthenia, by contrast, was believed to be caused by the stresses of the modern age, and psychological or psychosocial factors were seen as important. Diagnosis: Neurasthenia has been very rarely reported since the 21st century, and was deprecated in the World Health Organization's most recent International Classification of Diseases, the ICD-11. Neurasthenia had previously been categorized as a psychological illness, and originally as neurological. ICF is not psychological: the WHO does not have a classification for any fatigue-only psychiatric illness. Diagnosis: Differential diagnosis from chronic fatigue syndrome. Chronic fatigue syndrome (CFS) requires the additional symptoms of: post-exertional malaise (significantly worsening symptoms with activity, resulting in a significant reduction in daily activities, which may be delayed by up to 3 days); sleep dysfunction; and either cognitive problems or orthostatic intolerance. A range of other symptoms commonly result from CFS, including headaches, muscle and joint pain, and low-grade fever. ICF, by contrast, requires only one symptom (chronic fatigue); does not need a significant reduction in activities (some people are able to push through the fatigue to continue activities); and is only diagnosed if the CFS criteria are not met. The prevalence of ICF is between 3-15%, which is two to ten times higher than CFS. Older age at onset is more common in ICF, particularly from age 50, while in CFS age at onset is typically 16-35 years old. The recovery rate within a year is significantly higher for ICF patients, 30-50% compared to under 10% in CFS. ICF is categorized within general signs and symptoms by the World Health Organization, while CFS is categorized as a neurological disease. The ability to tolerate exertion, including exercise, has been shown to be greater in ICF patients compared to CFS patients, particularly on consecutive days, and this applies to both men and women. Severity of illness in ICF is typically less than in CFS, with some relatively small studies finding no severe ICF patients; the same studies found fibromyalgia to be significantly less common in ICF. Diagnosis: Signs and symptoms: clinically evaluated fatigue; new or definite onset (not lifelong); fatigue persisting or relapsing for six consecutive months or longer; failure to meet the criteria for chronic fatigue syndrome; cause unknown (not resulting from another medical condition). Exclusions: fatigue which begins within 2 years of a substance use disorder (addiction) or at any time after; chronic fatigue syndrome; fatigue caused by an active medical condition; major depression with psychotic or melancholic features; bipolar disorder; schizophrenia or schizophrenia-related disorders; delusional disorders; the eating disorders anorexia nervosa and bulimia nervosa; occupational stress or other life stress and burnout; domestic violence; fatigue caused as a known side effect of medication; fatigue caused by a previous medical condition that may not be fully resolved. Common medical causes of fatigue: These must be ruled out before a diagnosis of ICF can be made.
Diagnosis: Infectious diseases, including viruses and TB; chronic fatigue syndrome; vascular diseases (affecting heart and circulation); toxins and drug effects, including poisons and substance use (addiction); diseases affecting the lungs, including chronic obstructive pulmonary disease (COPD); endocrine and metabolic problems, e.g., thyroid diseases and diabetes; diseases involving benign or cancerous tumours, including cancer fatigue; anaemia, lupus and certain autoimmune or neurological diseases; dementia (any form); severe obesity (a body mass index greater than 45). Management: Idiopathic chronic fatigue is typically managed in general medicine rather than by referral to a specialist. There is no cure, no approved drug, and treatment options are limited. Management may involve a form of counseling, or antidepressant medication, although some patients may prefer herbal or alternative remedies. Counseling: A form of counseling known as cognitive behavioral therapy may help some people manage or cope with idiopathic chronic fatigue. Medication: There are no approved drugs for ICF. Antidepressants: Antidepressant drugs such as tricyclic antidepressants (TCAs) or selective serotonin reuptake inhibitors (SSRIs) may be appropriate if symptoms are exacerbated by suspected or diagnosed serotonin-related health issues, such as depression. Alternative and complementary treatments: Only limited trials have been conducted for alternative and complementary treatments; there is no clear evidence of these treatments being effective for ICF, due to a lack of randomized controlled trials. Prognosis: Between 30% and just under 50% of patients recover within one year. Epidemiology: Fatigue is common in the general population and often caused by overwork, too much activity or a specific illness or disease. Around 20% of patients who visit their clinician report fatigue. Prolonged fatigue is fatigue that persists for more than a month, and chronic fatigue is fatigue that lasts at least six consecutive months, which may be caused by a physical or psychological illness, or may be idiopathic (no known cause). Chronic fatigue with a known cause is twice as common as idiopathic chronic fatigue. Idiopathic chronic fatigue affects between 2.4% and 6.42% of patients, with females more likely to be affected than males. Age at onset is typically over 50 years of age. A significant number of patients present with idiopathic chronic fatigue as part of a mix of medically unexplained symptoms, while others present with a primary problem of fatigue alone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mathematical Social Sciences** Mathematical Social Sciences: Mathematical Social Sciences is a peer-reviewed mathematics journal in the field of social science, in particular economics. The journal covers research on mathematical modelling in fields such as economics, psychology, political science, and other social sciences, including individual decision making and preferences, decisions under risk, collective choice, voting, theories of measurement, and game theory. It was established in 1980 and is published by Elsevier. The editors-in-chief have been Ki Hang Kim (1980-1983), Hervé Moulin (1983-2004), Jean-François Laslier (2005-2016), Simon Grant, Christopher Chambers (2009-2020), Yusufcan Masatlioglu (2020-2021), Juan Moreno-Ternero (2017-) and Emel Filiz-Ozbay (2021-).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Free product** Free product: In mathematics, specifically group theory, the free product is an operation that takes two groups G and H and constructs a new group G ∗ H. The result contains both G and H as subgroups, is generated by the elements of these subgroups, and is the “universal” group having these properties, in the sense that any two homomorphisms from G and H into a group K factor uniquely through a homomorphism from G ∗ H to K. Unless one of the groups G and H is trivial, the free product is always infinite. The construction of a free product is similar in spirit to the construction of a free group (the universal group with a given set of generators). Free product: The free product is the coproduct in the category of groups. That is, the free product plays the same role in group theory that disjoint union plays in set theory, or that the direct sum plays in module theory. Even if the groups are commutative, their free product is not, unless one of the two groups is the trivial group. Therefore, the free product is not the coproduct in the category of abelian groups. Free product: The free product is important in algebraic topology because of van Kampen's theorem, which states that the fundamental group of the union of two path-connected topological spaces whose intersection is also path-connected is always an amalgamated free product of the fundamental groups of the spaces. In particular, the fundamental group of the wedge sum of two spaces (i.e. the space obtained by joining two spaces together at a single point) is, under certain conditions given in the Seifert–van Kampen theorem, the free product of the fundamental groups of the spaces. Free product: Free products are also important in Bass–Serre theory, the study of groups acting by automorphisms on trees. Specifically, any group acting with finite vertex stabilizers on a tree may be constructed from finite groups using amalgamated free products and HNN extensions. Using the action of the modular group on a certain tessellation of the hyperbolic plane, it follows from this theory that the modular group is isomorphic to the free product of cyclic groups of orders 4 and 6 amalgamated over a cyclic group of order 2. Construction: If G and H are groups, a word in G and H is a product of the form s1s2⋯sn, where each si is either an element of G or an element of H. Such a word may be reduced using the following operations: Remove an instance of the identity element (of either G or H). Replace a pair of the form g1g2 by its product in G, or a pair h1h2 by its product in H. Every reduced word is an alternating product of elements of G and elements of H, e.g. g1h1g2h2⋯gkhk. The free product G ∗ H is the group whose elements are the reduced words in G and H, under the operation of concatenation followed by reduction. For example, if G is the infinite cyclic group ⟨x⟩, and H is the infinite cyclic group ⟨y⟩, then every element of G ∗ H is an alternating product of powers of x with powers of y. In this case, G ∗ H is isomorphic to the free group generated by x and y. Presentation: Suppose that G=⟨SG∣RG⟩ is a presentation for G (where SG is a set of generators and RG is a set of relations), and suppose that H=⟨SH∣RH⟩ is a presentation for H. Then G∗H=⟨SG∪SH∣RG∪RH⟩. That is, G ∗ H is generated by the generators for G together with the generators for H, with relations consisting of the relations from G together with the relations from H (assume here no notational clashes so that these are in fact disjoint unions).
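The reduction procedure above is a simple stack algorithm. Below is a minimal Python sketch for the free product of two finite cyclic groups (anticipating the Z/4 ∗ Z/5 example that follows), with words stored as lists of (generator, exponent) syllables; merging against the top of the output list automatically handles cascading cancellations.

```python
# Sketch: reduced words in the free product Z/4 * Z/5, generators x and y.
def reduce_word(word, orders={'x': 4, 'y': 5}):
    """Reduce a word: drop identity syllables, merge adjacent same-factor ones."""
    out = []
    for gen, exp in word:
        exp %= orders[gen]
        if exp == 0:                      # identity element: remove it
            continue
        if out and out[-1][0] == gen:     # merge with the previous syllable
            merged = (out[-1][1] + exp) % orders[gen]
            out.pop()
            if merged:
                out.append((gen, merged))
        else:
            out.append((gen, exp))
    return out

# (x^2 y^3)(y^2 x^3): y^3 y^2 = y^5 = 1 cancels, then x^2 x^3 = x^5 = x
print(reduce_word([('x', 2), ('y', 3), ('y', 2), ('x', 3)]))
# -> [('x', 1)]
```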
Examples: For example, suppose that G is a cyclic group of order 4, G=⟨x∣x4=1⟩, and H is a cyclic group of order 5, H=⟨y∣y5=1⟩. Then G ∗ H is the infinite group G∗H=⟨x,y∣x4=y5=1⟩. Because there are no relations in a free group, the free product of free groups is always a free group. In particular, Fm∗Fn≅Fm+n, where Fn denotes the free group on n generators. Another example is the modular group PSL2(Z). It is isomorphic to the free product of two cyclic groups: PSL2(Z)=(Z/2Z)∗(Z/3Z). Generalization: Free product with amalgamation: The more general construction of free product with amalgamation is correspondingly a special kind of pushout in the same category. Suppose G and H are given as before, along with monomorphisms (i.e. injective group homomorphisms) φ:F→G and ψ:F→H, where F is some arbitrary group. Start with the free product G∗H and adjoin as relations φ(f)ψ(f)−1=1 for every f in F. In other words, take the smallest normal subgroup N of G∗H containing all elements on the left-hand side of the above equation, which are tacitly being considered in G∗H by means of the inclusions of G and H in their free product. The free product with amalgamation of G and H, with respect to φ and ψ, is the quotient group (G∗H)/N. Generalization: Free product with amalgamation: The amalgamation has forced an identification between φ(F) in G and ψ(F) in H, element by element. This is the construction needed to compute the fundamental group of two connected spaces joined along a path-connected subspace, with F taking the role of the fundamental group of the subspace. See: Seifert–van Kampen theorem. Karrass and Solitar have given a description of the subgroups of a free product with amalgamation. For example, the homomorphisms from G and H to the quotient group (G∗H)/N that are induced by φ and ψ are both injective, as is the induced homomorphism from F. Free products with amalgamation and the closely related notion of HNN extension are basic building blocks in the Bass–Serre theory of groups acting on trees. In other branches: One may similarly define free products of other algebraic structures than groups, including algebras over a field. Free products of algebras of random variables play the same role in defining "freeness" in the theory of free probability that Cartesian products play in defining statistical independence in classical probability theory.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Second-wave positive psychology** Second-wave positive psychology: Second wave positive psychology (PP 2.0) is concerned with how to bring out the best in individuals and society in spite of and because of the dark side of human existence, through the dialectical principles of yin and yang. There has also been a distinct shift from focusing on individual happiness and success to the double vision of individual well-being and the big picture of humanity. PP 2.0 is more about bringing out the "better angels of our nature" than achieving optimal happiness or personal success, because the better angels of empathy, compassion, reason, justice, and self-transcendence will make people better human beings and this world a better place. PP 2.0 centers on the universal human capacity for meaning seeking and meaning making in achieving optimal human functioning under both desirable and undesirable conditions. This emerging movement is a response to perceived problems of what some have called "positive psychology as usual." Limitations of positive psychology: Positive psychology "as usual" has been presented as the branch of psychology that uses scientific understanding and effective intervention to aid in the achievement of a satisfactory life, rather than treating mental illness. The focus of positive psychology is on personal growth rather than on pathology. It has been argued that this binary, dichotomous view has fueled both positive psychology's success and its decline. The single-minded focus on positivity has resulted in persistent backlash (see, e.g., Frawley, 2015, for a recent review). The following criticisms have been leveled against positive psychology by researchers both outside of and within the positive psychology community. These include the "tyranny" of positivity and the lack of balance between positives and negatives; failing to cover the entire spectrum of human experiences; failing to recognize the importance of contextual variables; and assuming that the Western individualistic culture represents the universal human experience. As a result, various positive psychologists have proposed the need for a broader perspective. Existential positive psychology: Paul Wong has argued for the need to integrate positive psychology with existential psychology, resulting in "existential positive psychology" (EPP). This approach differs significantly from positive psychology "as usual" in terms of both epistemology and content. Existential positive psychology: EPP takes a pluralistic and holistic approach to research. It is open to insights and wisdom from both the East and the West and research findings from all sources regardless of the paradigm of truth claims. In terms of content, it explores both people's existential anxieties and their ultimate concerns. Thus, EPP contributes to a broader and more comprehensive understanding of human experiences. Existential positive psychology: Both existential philosophers and psychologists see life as a series of paradoxes, predicaments, and problems. From this existential perspective, life is also full of striving and sense-making, tragedies, and triumphs. The dynamic interplay between good and evil, negatives and positives is one of the hallmarks of EPP. Positives cannot exist apart from negatives, and authentic happiness grows from pain and suffering. This paradoxical view reflects Albert Camus' insight that "there is no joy of life without despair" (p. 56) and Rollo May's observation that "the ultimate paradox is that negation becomes affirmation" (p. 
164). It may be argued that positive psychology is intrinsically existential because it is concerned with such fundamental questions about human existence as: What is the good life? What makes life worth living? How can one find happiness? Positive psychology research on these existential issues that does not take the existential literature into account inevitably leads to superficiality or mischaracterization. A comprehensive positive psychology cannot be developed without taking into account the reality of death, the only certainty for all living organisms. Human beings alone are burdened with the cognitive capacity to be aware of their own mortality and to fear what may follow after one's own demise. Yet death awareness may be essential to meaningful living; "though the physicality of death destroys us, the idea of death saves us" (p. 7). Thus, awareness of our finitude is indeed an important motivational factor for us to do something meaningful and significant with our lives. Existential positive psychology: Therefore, EPP advocates that the proper context of studying well-being and the meaningful life is the reality of suffering and death. Researchers who share this view include Bretherton and Ørner, Schneider, and Taheny. Recently, positive psychologists have recognized that positive psychology is rooted in humanistic psychology, but in practice it continues to distance itself from its heritage because of the alleged lack of scientific research in humanistic psychology. A mature positive psychology needs to return to its existential-humanistic roots, because these roots can both broaden and deepen positive psychology. Second wave positive psychology: Paul Wong extends EPP to second wave positive psychology (PP 2.0) by formally incorporating the dialectical principles of Chinese psychology, the bio-behavioral dual-system model of adaptation, and cross-cultural positive psychology. Thus, PP 2.0 provides a big tent that allows for multiple indigenous positive psychologies and a much broader list of variables that contribute to well-being and flourishing. Second wave positive psychology: PP 2.0 is necessary because neither positive psychology nor humanistic-existential psychology alone can adequately understand such complex human phenomena as meaning, virtue, and happiness. Such deep knowledge can only be achieved by an integrative and collaborative endeavor. This calls for a humble science. In other words, PP 2.0 denies that the positivist paradigm is the only way to examine truth claims, especially when we research the profound questions of what makes life worth living. Second wave positive psychology: Assumptions: Human nature has the potential for both good and evil; thus, self-control of selfish and destructive instincts is a necessary part of cultivating the "better angels of our nature." Dialectical principles can best integrate positive and negative factors in different contexts. Meaning offers both the best protection against adversities and existential concerns and the best pathway to achieve the good life of virtue, happiness, and significance. Just as physical health can only be maintained in recognition of the fact that we are living in an environment infected with bacteria and viruses, so the promotion of positive mental health and optimal human functioning must recognize the inevitable dark side of human existence. Individual well-being is connected with the common good of humanity. 
Mission: To improve the lives of all people and nurture their potential, regardless of their circumstances and cultural backgrounds. To repair the worst and bring out the best, with a focus on the human potential for growth. To integrate negatives and positives to optimize well-being. To study how global beliefs and values affect people's eudemonic well-being and human functioning. To minimize or transform the downside of the bright side, and optimize or transform the upside of the dark side. To cultivate the capacity for meaning seeking and meaning making. To study how death awareness can contribute to personal transformation. To develop objective measures of both short-term and long-term well-being for individuals and society. To enhance well-being throughout the lifespan, including the end-of-life stage. To identify and research a host of variables related to both yin and yang. To contain and transform evil to serve the common good. To cultivate inner goodness and develop valid measures of goodness as an outcome. Second wave positive psychology: Importance of dialectical thinking: Taoist dialectical thinking permeates every aspect of Chinese psychology. Peng and Nisbett showed that Chinese thinking is dialectical rather than binary. Similarly, Sundararajan documents the dialectical co-existence of positive and negative emotions in Chinese people. Paul Wong's research has also demonstrated that the wisdom of yin and yang operates in many situations. He has argued that Chinese people can hold external and internal loci of control simultaneously. Therefore, their locus of control beliefs can only be measured in a two-dimensional space with external and internal loci of control as two independent scales. However, dialectical thinking is not unique to Chinese culture. For example, pessimism and optimism can co-exist, resulting in tragic optimism. Death fear and death acceptance can also co-exist, resulting in the multidimensional Death Attitude Profile. Resources and deficits co-exist, as conceptualized in the Resource-Congruence Model. Thus, family can be a resource for effective coping, but intra-family conflict can also be a deficit or stressor. Wong's Dual-Systems Model of approach and avoidance spells out the mechanisms whereby the good life can be achieved in the midst of adversities, not by accentuating the positive and avoiding the negative, but by embracing the dynamic and dialectical interaction between positive and negative experiences. This general bio-behavioral model is also based on the dialectical principle. Dialectical thinking represents a simple but powerful conceptual framework, capable of integrating a great deal of the literature relevant to well-being. Yin represents not only the dark side of life, but also the conservative and passive modes of adaptation, such as acceptance, letting go, avoidance, withdrawal, disengagement, doing nothing, and self-transcendence. Yang represents not only the bright side of life, but also the energetic and active modes of adaptation, such as goal setting and goal striving, problem solving and controlling, and expanding and maintaining territories. Second wave positive psychology: The dynamic balance between positive and negative forces is mediated through dialectical principles. 
For instance, Lomas and Ivtzan have identified three ways of restoring and maintaining the balance: (a) the principle of appraisal, (b) the principle of co-valence, and (c) the principle of complementarity. Similarly, Wong identifies four principles of transforming the dark side: (a) becoming wiser and better through the synthesis of opposites, (b) becoming more balanced and flexible through the co-existence of opposites which complement or moderate each other, (c) becoming more aware and appreciative of the after-effects or contrast effect due to the opponent-process, and (d) becoming stronger and more spiritual through self-transcendence. Thus, the wisdom of achieving the golden mean or the middle way lies in the dialectical interactions between yin and yang. These dialectical principles constitute the foundation of PP 2.0 and ensure that the dark side of life serves the adaptive functions of survival and flourishing. Second wave positive psychology: Incorporating the dark side: The dark side refers to more than just challenging experiences, thoughts, emotions, and behaviors that trigger discomfort. It also encompasses existential anxieties and the inevitable sufferings in life. Apart from existential concerns, the dark side also refers to our Achilles' heel. From Aristotle to William Shakespeare, the literature has always recognized the existence of tragic heroes—powerful and successful individuals who are eventually ruined by their own character flaws. As Aristotle said, "A man cannot become a hero until he can see the root of his own downfall." All one's talents, character strengths, and efforts will eventually come to naught, with disastrous consequences to oneself and others, when one pays no attention to one's own Achilles' heel. Second wave positive psychology: The meaning hypothesis: Paul Wong proposes that the meaning hypothesis is an overarching conceptual framework for PP 2.0 because it is based on the universal human capacity for meaning making and meaning seeking and the vital role meaning plays in human experience and well-being. It hypothesizes that meaning is the best possible end value for the good life and offers the best protection against existential anxieties and adversities. Second wave positive psychology: The meaning hypothesis places more emphasis on a fundamental change in global beliefs and values than on behavioral change. The "meaning mindset" affirms that life has unconditional meaning and that it can be found in any situation. Second wave positive psychology: If one chooses the meaning mindset, one can still find meaning and fulfillment even when failing to complete one's life mission. Thus, there is no failure when one pursues a virtuous and noble mission as one's life goal. A perspective shift to the meaning mindset helps eliminate one main source of human misery related to the striving to achieve material success or worldly fame. Cultivating a meaning mindset may yield a better payoff than positive psychology exercises for enhancing happiness and character strengths, because the perspective shift reorients one's focus away from egotistic pursuits to self-transcendence and altruism, which benefit both the individual and society. Second wave positive psychology: Conclusion: Science is always self-corrective and progressive. PP 2.0 avoids many of the problems inherent in positive psychology "as usual" and opens up new avenues of research and applications. 
The future of psychology can benefit from integrating three distinct movements—humanistic-existential psychology, positive psychology, and indigenous psychology. Second wave positive psychology: The 21st century belongs to PP 2.0 because of its ability to integrate various sub-disciplines of mainstream psychology and its humble science approach. PP 2.0 is willing to put aside dogmatic epistemological positions in the service of the greater good, as some have recommended. Meaning management based on the dialectical principles is sensitive to individual and cultural contexts, but is, at the same time, also cognizant of the common good of humanity and of self-transcendence. This big-picture perspective of PP 2.0 avoids many of the excesses associated with the egotistic pursuits of happiness and success in positive psychology as usual.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dysferlin** Dysferlin: Dysferlin, also known as dystrophy-associated fer-1-like protein, is a protein that in humans is encoded by the DYSF gene. Dysferlin is linked with plasma membrane repair, stabilization of calcium signaling, and the development of the T-tubule system of the muscle. A defect in the DYSF gene, located on chromosome 2p12-14, results in several types of muscular dystrophy, including Miyoshi myopathy (MM), limb-girdle muscular dystrophy type 2B (LGMD2B) and distal myopathy (DM). A reduction or absence of dysferlin, termed dysferlinopathy, usually becomes apparent in the third or fourth decade of life and is characterised by weakness and wasting of various voluntary skeletal muscles. Pathogenic mutations leading to dysferlinopathy can occur throughout the DYSF gene. Structure: The human dysferlin protein is a 237-kilodalton type-II transmembrane protein. It contains a large intracellular cytoplasmic N-terminal domain, an extreme C-terminal transmembrane domain, and a short C-terminal extracellular domain. The cytosolic domain of dysferlin is composed of seven highly conserved C2 domains (C2A-G) which are conserved across several proteins within the ferlin family, including the dysferlin homolog myoferlin. In fact, the C2 domain at any given position is more similar to the C2 domain at the corresponding position within other ferlin family members than to the adjacent C2 domain within the same protein. This suggests that each individual C2 domain may play a specific role in dysferlin function, and each has been shown to be required for two of dysferlin's roles: stabilization of calcium signaling and membrane repair. Mutations in each of these domains can cause dysferlinopathy. A crystal structure of the C2A domain of human dysferlin has been solved, and reveals that the C2A domain changes conformation when interacting with calcium ions, which is consistent with a growing body of evidence suggesting that the C2A domain plays a role in calcium-dependent lipid binding. Its ability to stabilize calcium signaling in the intact dysferlin protein depends on its calcium binding activity. In addition to the C2 domains, dysferlin also contains "FerA" and "DysF" domains. Mutations in both FerA and DysF can cause muscular dystrophies. The DysF domain has an interesting structure, as it contains one DysF domain within another DysF domain, a result of gene duplication; however, the function of this domain is currently unknown. The FerA domain is conserved among all members of the ferlin protein family. It is a four-helix bundle and can interact with membranes, usually in a calcium-dependent manner. Function: The most intensively studied role for dysferlin is in a cellular process called membrane repair. Membrane repair is a critical mechanism by which cells are able to seal dramatic wounds to the plasma membrane. Muscle is thought to be particularly prone to membrane wounds given that muscle cells transmit high force and undergo cycles of contraction. Dysferlin is highly expressed in muscle, and is homologous to the ferlin family of proteins, which are thought to regulate membrane fusion across a wide variety of species and cell types. Several lines of evidence suggest that dysferlin may be involved in membrane repair in muscle. First, dysferlin-deficient muscle fibers show accumulation of vesicles (which are critical for membrane repair in non-muscle cell types) near membrane lesions, indicating that dysferlin may be required for fusion of repair vesicles with the plasma membrane. 
Further, dysferlin-deficient muscle fibers take up extracellular dyes to a greater extent than wild-type muscle fibers following laser-induced wounding in vitro. Dysferlin is also markedly enriched at membrane lesions with several additional proteins thought to be involved in membrane resealing, including annexin and MG53. Exactly how dysferlin contributes to membrane resealing is not clear, but biochemical evidence indicates that dysferlin may bind lipids in a calcium-dependent manner, consistent with a role for dysferlin in regulating fusion of repair vesicles with the sarcolemma during membrane repair. Furthermore, live-cell imaging of dysferlin-eGFP-expressing myotubes indicates that dysferlin localizes to a cellular compartment that responds to injury by forming large dysferlin-containing vesicles, and formation of these vesicles may contribute to wound repair. Dysferlin may also be involved in Alzheimer's disease pathogenesis. Another well-studied role for dysferlin is in stabilization of calcium signaling, especially following a mild injury. This approach was based on two observations: that muscle lacking dysferlin that is injured by eccentric contractions can repair its plasma membrane, or sarcolemma, as efficiently as healthy muscle can, and that most of the dysferlin in healthy muscle is concentrated in the transverse tubules at triad junctions, where calcium release is regulated. Destabilization of signaling in dysferlinopathic muscle can result in the generation of calcium waves, which can contribute to the disease pathology. Nearly every change in dysferlin that affects membrane repair also destabilizes calcium signaling, suggesting that these two activities are closely linked. Remarkably, however, membrane repair requires calcium ions, whereas calcium ions contribute to the destabilization of signaling when dysferlin is absent or mutated. These paradoxical results have yet to be reconciled. Interactions: Dysferlin has been shown to bind to itself, forming dimers and perhaps larger oligomers. It has also been shown to interact with caveolin-3 in skeletal muscle, and this interaction is thought to retain dysferlin within the plasma membrane. Dysferlin also interacts with MG53, and a functional interaction between dysferlin, caveolin-3 and MG53 is thought to be critical for membrane repair in skeletal muscle.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HyperCam** HyperCam: HyperCam is a screencasting program created by Hyperionics and Solveig Multimedia. It captures the action from a Microsoft Windows screen and saves it to an AVI (Audio Video Interleaved), WMV (Windows Media Video), or ASF (Advanced Systems Format) movie file. HyperCam records all sound output, and sound from the system microphone can also be recorded. Features: HyperCam is primarily intended for creating software presentations, tutorials, demonstrations, walkthroughs, and similar tasks the user wants to demonstrate. The latest versions also capture overlay video and can re-record movies and video clips (e.g. recording videos playing in Windows Media Player, RealVideo, QuickTime, etc.). Beginning with version 3.0, HyperCam also includes a built-in editor for trimming and merging captured AVI, WMV, and ASF files. The unregistered versions of HyperCam 1, HyperCam 2 and HyperCam 3 apply a digital watermark to the upper-left corner of each recorded file and ask the user to register on every startup. Base registration, which costs $39.95, eliminates this watermark. Features: Hyperionics has now made HyperCam 2 a permanent free download for "worldwide use". Presence in Internet culture: In the early days of YouTube, HyperCam 2's unregistered version became widely used among content creators because it was free and had a relatively small watermark. This has caused it to become popular as a representation of YouTube's past, with "Unregistered HyperCam 2" becoming a staple in nostalgia-based internet culture. HyperCam 2 also made an appearance on Reddit's social experiment Place in April 2017.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HMGB2** HMGB2: High-mobility group protein B2, also known as high-mobility group protein 2 (HMG-2), is a protein that in humans is encoded by the HMGB2 gene. Function: This gene encodes a member of the non-histone chromosomal high-mobility group protein family. The proteins of this family are chromatin-associated and ubiquitously distributed in the nucleus of higher eukaryotic cells. In vitro studies have demonstrated that this protein is able to efficiently bend DNA and form DNA circles. These studies suggest a role in facilitating cooperative interactions between cis-acting proteins by promoting DNA flexibility. This protein was also reported to be involved in the final ligation step in the DNA end-joining processes of DNA double-strand break repair and V(D)J recombination.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Inventory management software** Inventory management software: Inventory management software is a software system for tracking inventory levels, orders, sales and deliveries. It can also be used in the manufacturing industry to create a work order, bill of materials and other production-related documents. Companies use inventory management software to avoid product overstock and outages. It is a tool for organizing inventory data that was previously stored in hard-copy form or in spreadsheets. Features: Inventory management software is made up of several key components working together to create a cohesive inventory for many organizations' systems. These features include: Reorder point: Should inventory reach a specific threshold, a company's inventory management system can be programmed to tell managers to reorder that product. This helps companies avoid running out of products or tying up too much capital in inventory. Features: Asset tracking: When a product is in a warehouse or store, it can be tracked via its barcode and/or other tracking criteria, such as serial number, lot number or revision number. Nowadays, inventory management software often utilizes barcode, radio-frequency identification (RFID), and/or wireless tracking technology. Service management: Companies that are primarily service-oriented rather than product-oriented can use inventory management software to track the cost of the materials they use to provide services, such as cleaning supplies. This way, they can attach prices to their services that reflect the total cost of performing them. Product identification: Barcodes are often the means whereby data on products and orders are inputted into inventory management software. A barcode reader is used to read barcodes and look up information on the products they represent. Radio-frequency identification (RFID) tags and wireless methods of product identification are also growing in popularity. Modern inventory software programs may use QR codes or NFC tags to identify inventory items and smartphones as scanners. This method provides an option for businesses to track inventory using barcode scanning without a need to purchase expensive scanning hardware. Features: Inventory optimization: A fully automated demand forecasting and inventory optimization system to attain key inventory optimization metrics such as: Reorder point: the number of units that should trigger a replenishment order; Order quantity: the number of units that should be reordered, based on the reorder point, stock on hand and stock on order; Lead demand: the number of units that will be sold during the lead time; Stock cover: the number of days left before a stockout if no reorder is made; Accuracy: the expected accuracy of the forecasts. History: The Universal Product Code (UPC) was adopted by the grocery industry in April 1973 as the standard barcode for all grocers, though it was not introduced at retailing locations until 1974. This helped drive down costs for inventory management because retailers in the United States and Canada didn't have to purchase multiple barcode readers to scan competing barcodes. With one primary barcode, grocers and other retailers needed to buy only one type of reader. History: In the early 1980s, personal computers began to be popular. This further pushed down the cost of barcodes and readers. It also allowed the first versions of inventory management software to be put into place. 
One of the biggest hurdles in selling readers and barcodes to retailers was the fact that retailers had no place to store the information they scanned. As computers became more common and affordable, this hurdle was overcome. Once barcodes and inventory management programs started spreading through grocery stores, inventory management by hand became less practical. Writing inventory data by hand on paper was replaced by scanning products and inputting information into a computer by hand. History: Starting in the early 2000s, inventory management software progressed to the point where businesspeople no longer needed to input data by hand but could instantly update their database with barcode readers. Also, the emergence of cloud-based business software and its increasing adoption by businesses marks a new era for inventory management software. Such software now usually allows integration with other back-end business processes, like accounting and online sales. Purpose: Companies often use inventory management software to reduce their carrying costs. The software is used to track products and parts as they are transported from a vendor to a warehouse, between warehouses, and finally to a retail location or directly to a customer. Inventory management software is used for a variety of purposes, including: Maintaining a balance between too much and too little inventory. Tracking inventory as it is transported between locations. Receiving items into a warehouse or other location. Picking, packing and shipping items from a warehouse. Keeping track of product sales and inventory levels. Cutting down on product obsolescence and spoilage. Avoiding missing out on sales due to out-of-stock situations. Manufacturing uses: Manufacturers primarily use inventory management software to create work orders and bills of materials. This facilitates the manufacturing process by helping manufacturers efficiently assemble the tools and parts they need to perform specific tasks. For more complex manufacturing jobs, manufacturers can create multilevel work orders and bills of materials, which have a timeline of processes that need to happen in the proper order to build a final product. Other work orders that can be created using inventory management software include reverse work orders and auto work orders. Manufacturers also use inventory management software for tracking assets, receiving new inventory and additional tasks businesses in other industries use it for. Advantages of ERP inventory management software: There are several advantages to using inventory management software in a business setting. Cost savings: A company's inventory represents one of its largest investments, along with its workforce and locations. Inventory management software helps companies cut expenses by minimizing the amount of unnecessary parts and products in storage. It also helps companies keep lost sales to a minimum by having enough stock on hand to meet demand. Increased efficiency: Inventory management software often allows for automation of many inventory-related tasks. For example, software can automatically collect data, conduct calculations, and create records. This not only saves time and money but also increases business efficiency. Advantages of ERP inventory management software: Warehouse organization: Inventory management software can help distributors, wholesalers, manufacturers and retailers optimize their warehouses. 
If certain products are often sold together or are more popular than others, those products can be grouped together or placed near the delivery area to speed up the process of picking. By 2018, 66% of warehouses "are poised to undergo a seismic shift, moving from still prevalent pen and paper processes to automated and mechanized inventory solutions. With these new automated processes, cycle counts will be performed more often and with less effort, increasing inventory visibility, and leading to more accurate fulfillment, fewer out of stock situations and fewer lost sales. More confidence in inventory accuracy will lead to a new focus on optimizing mix, expanding a selection and accelerating inventory turns." Updated data: Up-to-date, real-time data on inventory conditions and levels is another advantage inventory management software gives companies. Company executives can usually access the software through a mobile device, laptop or PC to check current inventory numbers. This automatic updating of inventory records allows businesses to make informed decisions. Advantages of ERP inventory management software: Data security: With the aid of restricted user rights, company managers can allow many employees to assist in inventory management. They can grant employees enough information access to receive products, make orders, transfer products and do other tasks without compromising company security. This can speed up the inventory management process and save managers' time. Advantages of ERP inventory management software: Insight into trends: Tracking where products are stocked, which suppliers they come from, and the length of time they are stored is made possible with inventory management software. By analyzing such data, companies can control inventory levels and maximize the use of warehouse space. Furthermore, firms are more prepared for the demands and supplies of the market, especially during special circumstances such as a peak season in a particular month. Through the reports generated by the inventory management software, firms are also able to gather important data that may be fed into a model for analysis. Disadvantages of ERP inventory management software: The main disadvantages of inventory management software are its cost and complexity. Disadvantages of ERP inventory management software: Expense: Cost can be a major disadvantage of inventory management software. Many large companies use an ERP as inventory management software, but small businesses can find it difficult to afford. Barcode readers and other hardware can compound this problem by adding even more cost to companies. The advantage of allowing multiple employees to perform inventory management tasks is tempered by the cost of additional barcode readers. Use of smartphones as QR code readers has been a way that smaller companies avoid the high expense of custom hardware for inventory management. Disadvantages of ERP inventory management software: Complexity: Inventory management software is not necessarily simple or easy to learn. A company's management team must dedicate a certain amount of time to learning a new system, including both software and hardware, in order to put it to use. Most inventory management software includes training manuals and other information available to users. Despite its apparent complexity, inventory management software offers a degree of stability to companies. 
For example, if an IT employee in charge of the system leaves the company, a replacement can be trained comparatively inexpensively, compared with a company that uses multiple programs to store inventory data. Benefits of cloud inventory management software: The main benefits of cloud inventory management software include: Real-time tracking of inventory: For startups and SMBs, tracking inventory in real time is very important. Business owners can not only track and collect data but also generate reports. At the same time, entrepreneurs can access cloud-based inventory data from a wide range of internet-enabled devices, including smartphones, tablets and laptops, as well as traditional desktop PCs. In addition, users do not have to be inside business premises to use a web-based inventory program and can access the inventory software while on the road. Benefits of cloud inventory management software: Cut down hardware expenses: Because the software resides in the cloud, business owners do not have to purchase and maintain expensive hardware. Instead, SMBs and startups can direct capital and profits towards expanding the business to reach a wider audience. Cloud-based solutions also eliminate the need to hire a large IT workforce. The service provider will take care of maintaining the inventory software. Benefits of cloud inventory management software: Fast deployment: Deploying web-based inventory software is quite easy. All business owners have to do is sign up for a monthly or yearly subscription and start using the inventory management software via the internet. Such flexibility allows businesses to scale up relatively quickly without spending a large amount of money. Benefits of cloud inventory management software: Easy integration: Cloud inventory management software offers ease of integration with current systems for business owners. For example, business owners can integrate the inventory software with their eCommerce store or cloud-based accounting software. The rise in popularity of third-party marketplaces prompted cloud-based inventory management companies to include the integration of such sites with the rest of a business owner's retail business, allowing one to view and control stock across all channels. Benefits of cloud inventory management software: Enhanced efficiency: Cloud inventory systems increase efficiency in a number of ways. One is real-time inventory monitoring. A single change can replicate itself company-wide instantaneously. As a result, businesses can have greater confidence in the accuracy of the information in the system, and management can more easily track the flow of supplies and products – and generate reports. In addition, cloud-based solutions offer greater accessibility. Benefits of cloud inventory management software: Improved coordination: Cloud inventory programs also allow departments within a company to work together more efficiently. Department A can pull information about Department B's inventory directly from the software without needing to contact Department B's staff for the information. This inter-departmental communication also makes it easier to know when to restock and which customer orders have been shipped, etc. Operations can run more smoothly and efficiently, enhancing customer experience. Accurate inventory information can also have a huge impact on a company's bottom line. It allows a company to see where the bottlenecks and workflow issues are – and to calculate break-even points as well as profit margins. 
Disadvantages of cloud inventory management software: Security and privacy: Using the cloud means that data is managed by a third-party provider and there can be a risk of data being accessed by unauthorized users. Dependency: Since maintenance is managed by the vendor, users are essentially fully dependent on the provider. Decreased flexibility: Depending on the cloud service provider, system and software upgrades will be performed on the provider's schedule, hence businesses may experience some limitations in flexibility in the process. Integration: Not all on-premises systems or service providers can be synced with the cloud software used.
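As a rough illustration of the reorder point, order quantity, lead demand, and stock cover metrics described under Features above, the following Python sketch shows one plausible way such calculations might look. The function names and the simple safety-stock term are assumptions for illustration, not any vendor's actual API:

```python
# A minimal sketch of the inventory optimization metrics named in the
# Features section. All names are hypothetical; real systems add
# forecasting, seasonality, and service-level calculations on top.

def lead_demand(daily_demand, lead_time_days):
    """Units expected to be sold while a replenishment order is in transit."""
    return daily_demand * lead_time_days

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    """Stock level that should trigger a new replenishment order."""
    return lead_demand(daily_demand, lead_time_days) + safety_stock

def order_quantity(target_stock, on_hand, on_order):
    """Units to reorder, based on stock on hand and stock on order."""
    return max(0, target_stock - on_hand - on_order)

def stock_cover_days(on_hand, daily_demand):
    """Days left before a stockout if no reorder is made."""
    return on_hand / daily_demand if daily_demand else float("inf")

# Example: 40 units sold per day, 5-day lead time, 60 units of safety stock.
print(reorder_point(daily_demand=40, lead_time_days=5, safety_stock=60))  # 260
print(order_quantity(target_stock=500, on_hand=220, on_order=0))          # 280
print(stock_cover_days(on_hand=220, daily_demand=40))                     # 5.5 days
```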
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Light-weight Identity** Light-weight Identity: Light-weight Identity (LID), or Light Identity Management (LIdM), is an identity management system for online digital identities developed in part by NetMesh. It was first published in early 2005 and is the original URL-based identity system, later followed by OpenID. LID uses URLs to verify the user's identity and makes use of several open protocols such as OpenID, Yadis, and PGP/GPG.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mex (mathematics)** Mex (mathematics): In mathematics, the mex of a subset of a well-ordered set is the smallest value from the whole set that does not belong to the subset. That is, it is the minimum value of the complement set. The name "mex" is shorthand for "minimum excluded" value. Beyond sets, subclasses of well-ordered classes have minimum excluded values. Minimum excluded values of subclasses of the ordinal numbers are used in combinatorial game theory to assign nim-values to impartial games. Mex (mathematics): According to the Sprague–Grundy theorem, the nim-value of a game position is the minimum excluded value of the class of values of the positions that can be reached in a single move from the given position. Minimum excluded values are also used in graph theory, in greedy coloring algorithms. These algorithms typically choose an ordering of the vertices of a graph and choose a numbering of the available vertex colors. They then consider the vertices in order, for each vertex choosing its color to be the minimum excluded value of the set of colors already assigned to its neighbors. Examples: The following examples all assume that the given set is a subset of the class of ordinal numbers: mex(∅) = 0, mex({1, 2, 3}) = 0, mex({0, 2, 4, 6, …}) = 1, mex({0, 1, 4, 7, 12}) = 2, mex({0, 1, 2, 3, …}) = ω, and mex({0, 1, 2, 3, …, ω}) = ω + 1, where ω is the limit ordinal for the natural numbers. Game theory: In the Sprague–Grundy theory the minimum excluded ordinal is used to determine the nimber of a normal-play impartial game. In such a game, either player has the same moves in each position and the last player to move wins. The nimber is equal to 0 for a game that is lost immediately by the first player, and is equal to the mex of the nimbers of all possible next positions for any other game. Game theory: For example, in a one-pile version of Nim, the game starts with a pile of n stones, and the player to move may take any positive number of stones. If n is zero, the nimber is 0 because the mex of the empty set of legal moves is the nimber 0. If n is 1 stone, the player to move will leave 0 stones, and mex({0}) = 1 gives the nimber for this case. If n is 2 stones, the player to move can leave 0 or 1 stones, giving the nimber 2 as the mex of the nimbers {0, 1}. In general, the player to move with a pile of n stones can leave anywhere from 0 to n−1 stones; the mex of the nimbers {0, 1, …, n−1} is always the nimber n. The first player wins in Nim if and only if the nimber is not zero, so from this analysis we can conclude that the first player wins if and only if the starting number of stones in a one-pile game of Nim is not zero; the winning move is to take all the stones. Game theory: If we change the game so that the player to move can take up to 3 stones only, then with n = 4 stones, the successor states have nimbers {1, 2, 3}, giving a mex of 0. Since the nimber for 4 stones is 0, the first player loses. The second player's strategy is to respond to whatever move the first player makes by taking the rest of the stones. For n = 5 stones, the nimbers of the successor states of 2, 3, and 4 stones are the nimbers 2, 3, and 0 (as we just calculated); the mex of the set of nimbers {0, 2, 3} is the nimber 1, so starting with 5 stones in this game is a win for the first player. Game theory: See nimbers for more details on the meaning of nimber values.
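The definitions above translate directly into code. The following Python sketch (function names are illustrative) computes mex and the nim-values of the one-pile subtraction game discussed in the text, where a move removes one to three stones:

```python
# mex and Grundy numbers for the "take 1-3 stones" subtraction game.
from functools import lru_cache

def mex(values):
    """Smallest non-negative integer not in the given collection."""
    s = set(values)
    m = 0
    while m in s:
        m += 1
    return m

MAX_TAKE = 3

@lru_cache(maxsize=None)
def grundy(n):
    """Nim-value of a pile of n stones; a move takes 1..MAX_TAKE stones."""
    reachable = [grundy(n - t) for t in range(1, min(MAX_TAKE, n) + 1)]
    return mex(reachable)  # mex of the empty list is 0, so grundy(0) == 0

print([grundy(n) for n in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

The printed sequence confirms the analysis in the text: piles whose size is a multiple of four have nimber 0 and are losses for the player to move, and grundy(5) = mex({0, 2, 3}) = 1.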
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Perfluorotributylamine** Perfluorotributylamine: Perfluorotributylamine (PFTBA), also referred to as FC43, is a colorless liquid with the formula N(C4F9)3. The compound consists of three butyl groups connected to an amine center, in which all of the hydrogen atoms have been replaced with fluorine. The compound is produced for the electronics industry, along with other perfluoroalkylamines. The high degree of fluorination significantly reduces the basicity of the central amine due to electron-withdrawing effects. Preparation: It is prepared by electrofluorination of tributylamine using hydrogen fluoride as solvent and source of fluorine: N(C4H9)3 + 27 HF → N(C4F9)3 + 27 H2 Uses: The compound has two commercial uses. It is used as an ingredient in Fluosol, an artificial blood substitute. This application exploits the high solubility of oxygen and carbon dioxide in the solvent, as well as its low viscosity and toxicity. It is also a component of Fluorinert coolant liquids. CPUs of some computers are immersed in this liquid to facilitate cooling. Uses: Niche: The compound is used as a calibrant in gas chromatography when the analytical technique uses mass spectrometry as a detector to identify and quantify chemical compounds in gases or liquids. When undergoing ionization in the mass spectrometer, the compound decomposes in a repeatable pattern to form fragments of specific masses, which can be used to tune the mass response and accuracy of the mass spectrometer. The most commonly used ions are those with approximate masses of 69, 131, 219, 414 and 502 atomic mass units. Safety: Fluorocarbon fluids are generally of very low toxicity, so much so that they have been evaluated as synthetic blood. Environmental impact: It is a greenhouse gas with warming properties more than 7,000 times those of carbon dioxide over a 100-year period, and, as such, is one of the most potent greenhouse gases ever discovered. Its concentration in the atmosphere is approximately 0.18 parts per trillion. The compound can persist in the atmosphere for up to 500 years. Sulfur hexafluoride, however, has a GWP of 23,900, making it more potent still.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trimethylene carbonate** Trimethylene carbonate: Trimethylene carbonate, or 1,3-propylene carbonate, is a 6-membered cyclic carbonate ester. It is a colourless solid that upon heating or catalytic ring-opening converts to poly(trimethylene carbonate) (PTMC). Such polymers are called aliphatic polycarbonates and are of interest for potential biomedical applications. An isomeric derivative is propylene carbonate, a colourless liquid that does not spontaneously polymerize. Preparation: This compound may be prepared from 1,3-propanediol and ethyl chloroformate (a phosgene substitute), or from oxetane and carbon dioxide with an appropriate catalyst: HOC3H6OH + ClCO2C2H5 → C3H6O2CO + C2H5OH + HCl C3H6O + CO2 → C3H6O2CO. This cyclic carbonate undergoes ring-opening polymerization to give poly(trimethylene carbonate), abbreviated PTMC. Medical devices: The polymer PTMC is of commercial interest as a biodegradable polymer with biomedical applications. A block copolymer of glycolic acid and trimethylene carbonate (TMC) is the material of the Maxon suture, a monofilament resorbable suture which was introduced in the mid-1980s. The same material is used in other resorbable medical devices.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crossover thrash** Crossover thrash: Crossover thrash (often abbreviated to crossover) is a fusion genre of thrash metal and hardcore punk. The genre lies on a continuum between heavy metal and hardcore punk. Other genres on the same continuum, such as metalcore and grindcore, may overlap with crossover thrash. Terminological ambiguity: The genre is often confused with thrashcore, which is essentially a faster hardcore punk rather than a more punk-oriented form of metal. Throughout the early and mid 1980s, the term "thrash" was often used as a synonym for hardcore punk (as in the New York Thrash compilation of 1982). The term "thrashcore", used to distinguish acts of the genre from others, was not coined until at least 1993. Many crossover bands, such as D.R.I., began as influential thrashcore bands. The "-core" suffix of "thrashcore" is sometimes used to distinguish it from crossover thrash and thrash metal, the latter of which is often referred to simply as "thrash", which in turn is rarely used to refer to crossover thrash or thrashcore. Thrashcore is occasionally used by the music press to refer to thrash metal-inflected metalcore. History: Crossover thrash evolved when performers in metal began borrowing elements of hardcore punk's music. Void and their 1982 Split LP with fellow D.C. band The Faith are hailed as one of the earliest examples of hardcore/heavy metal crossover, and their chaotic musical approach is often cited as particularly influential. Punk-based metal bands generally evolved into the genre by developing a more technically advanced approach than the average hardcore outfit (which focused on very fast tempos and very brief songs); these bands were more metal-sounding and aggressive than traditional hardcore punk and thrashcore. The initial contact between punk rock and heavy metal involved a "fair amount of mutual loathing. Despite their shared devotion to speed, spite, shredded attire and stomping on distortion pedals, their relationship seemed, at first, unlikely." While Motörhead explored punk in the late '70s, it was UK hardcore that drew "...inspiration from metal's volcanic heart" to create a "bludgeoning tonality and cataclysmic narratives" that "bridged the gulf" between metal and punk; the key band is "UK hardcore's most crucial band: Discharge", which from 1980 to 1983 "challenged prevailing notions of what punk was supposed to sound like, and in doing so revolutionized the prospects of metal." Especially early on, crossover thrash had a strong affinity with skate punk, but gradually became more and more the province of metal audiences. The scene gestated at a Berkeley club called Ruthie's, in 1984. The term "metalcore" was originally used to refer to these crossover groups. As Steven Blush said, "It was natural. The most intense music, after Black Flag and Dead Kennedys, was Slayer and Metallica. Therefore, that's where everybody was going. That turned into a culture war, basically." Hardcore punk groups Corrosion of Conformity, D.R.I., Ludichrist, and Suicidal Tendencies played alongside thrash metal groups like Megadeth, Anthrax, Metallica, Slayer, Exodus, Testament, Nuclear Assault and Overkill. This scene influenced the skinhead wing of New York hardcore, including crossover groups such as Cro-Mags, Murphy's Law, Agnostic Front, and Warzone. In 1984, New Jersey crossover group Hogan's Heroes was formed and played alongside thrash metal groups like Destruction, Death Angel, Forbidden, and Prong. 
In the October 1984 issue of Maximum Rocknroll, famed Metallica LP cover artist Brian "Pushead" Schroeder wrote "You ain't heard this! Blisters with speedcore franticness, mean with whining licks as it kicks into a maniac pace. Well organized melodies that cry out in terrorizing metallic thrash. While some bands are trying to be metal, English Dogs are just the dawning of speedcore!", referring to the EP To the Ends of the Earth. Other prominent crossover thrash groups include Attitude Adjustment, Crumbsuckers, Cryptic Slaughter, Discharge, Ludichrist, Municipal Waste, Nuclear Assault, Stormtroopers of Death, M.O.D., SSD, The Exploited and Leeway.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sugar glass** Sugar glass: Sugar glass (also called candy glass, edible glass, and breakaway glass) is a brittle transparent form of sugar that looks like glass. It can be formed into a sheet that looks like flat glass or an object, such as a bottle or drinking glass. Description: Sugar glass is made by dissolving sugar in water and heating it to at least the "hard crack" stage (approx. 150 °C / 300 °F) in the candy making process. Glucose or corn syrup is used to prevent the sugar from recrystallizing, by getting in the way of the sugar molecules forming crystals. Cream of tartar also helps by turning the sugar into glucose and fructose. Because sugar glass is hygroscopic, it must be used soon after preparation, or it will soften and lose its brittle quality. Description: Sugar glass has been used to simulate glass in movies, photographs, plays and professional wrestling. Other uses: Sugar glass is also used to make sugar sculptures or other forms of edible art. Sugar glass with blue dye was used to represent the methamphetamine in the AMC TV series Breaking Bad. Actor Aaron Paul would eat it on set.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**E–Z notation** E–Z notation: E–Z configuration, or the E–Z convention, is the IUPAC-preferred method of describing the absolute stereochemistry of double bonds in organic chemistry. It is an extension of cis–trans isomer notation (which only describes relative stereochemistry) that can be used to describe double bonds having two, three or four substituents. E–Z notation: Following the Cahn–Ingold–Prelog priority rules (CIP rules), each substituent on a double bond is assigned a priority, then the positions of the higher-priority substituent on each carbon are compared to each other. If the two groups of higher priority are on opposite sides of the double bond (trans to each other), the bond is assigned the configuration E (from entgegen, German: [ɛntˈɡeːɡən], the German word for "opposite"). If the two groups of higher priority are on the same side of the double bond (cis to each other), the bond is assigned the configuration Z (from zusammen, German: [tsuˈzamən], the German word for "together"). E–Z notation: The letters E and Z are conventionally printed in italic type, within parentheses, and separated from the rest of the name with a hyphen. They are always printed as full capitals (not in lowercase or small capitals), but do not constitute the first letter of the name for English capitalization rules. Another example: the CIP rules assign a higher priority to bromine than to chlorine, and a higher priority to chlorine than to hydrogen, which can make the resulting assignments counterintuitive at first glance. E–Z notation: For organic molecules with multiple double bonds, it is sometimes necessary to indicate the alkene location for each E or Z symbol. For example, the chemical name of alitretinoin is (2E,4E,6Z,8E)-3,7-dimethyl-9-(2,6,6-trimethyl-1-cyclohexenyl)nona-2,4,6,8-tetraenoic acid, indicating that the alkenes starting at positions 2, 4, and 8 are E while the one starting at position 6 is Z.
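Once CIP priorities have been assigned, the final E/Z decision is mechanical. The following toy Python sketch (not a cheminformatics tool; the data layout is an assumption for illustration) encodes just that last step, comparing which side of the double bond carries the higher-priority substituent on each carbon:

```python
# Toy E/Z decision rule. Each carbon of the double bond is given as a
# list of (side, cip_rank) pairs, where side is 'up' or 'down' and a
# lower cip_rank means higher CIP priority. Real software would derive
# the ranks from the molecular graph; here they are supplied by hand.

def ez_descriptor(carbon1, carbon2):
    """Return 'Z' if the higher-priority groups are cis, else 'E'."""
    side1 = min(carbon1, key=lambda sp: sp[1])[0]  # side of higher priority on C1
    side2 = min(carbon2, key=lambda sp: sp[1])[0]  # side of higher priority on C2
    return "Z" if side1 == side2 else "E"

# Hypothetical example: C1 bears Br (rank 1) up and Cl (rank 2) down;
# C2 bears F (rank 1) up and H (rank 2) down.
c1 = [("up", 1), ("down", 2)]
c2 = [("up", 1), ("down", 2)]
print(ez_descriptor(c1, c2))  # Z -- higher-priority groups on the same side
```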
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Multiply transitive group action** Multiply transitive group action: A group G acts 2-transitively on a set S if it acts transitively on the set of distinct ordered pairs {(x,y)∈S×S:x≠y}. That is, assuming (without real loss of generality) that G acts on the left of S, for each pair of pairs (x,y),(w,z)∈S×S with x≠y and w≠z, there exists a g∈G such that g(x,y)=(w,z). The group action is sharply 2-transitive if such a g∈G is unique. Multiply transitive group action: A 2-transitive group is a group for which there exists a group action that is 2-transitive and faithful. Similarly, we can define a sharply 2-transitive group. Multiply transitive group action: Equivalently, gx=w and gy=z, since the induced action on the set of distinct pairs is g(x,y)=(gx,gy). The definition works in general with k replacing 2. Such multiply transitive permutation groups can be defined for any natural number k. Specifically, a permutation group G acting on n points is k-transitive if, given two sets of points a1, ... ak and b1, ... bk with the property that all the ai are distinct and all the bi are distinct, there is a group element g in G which maps ai to bi for each i between 1 and k. The Mathieu groups are important examples. Examples: Every group is trivially 1-transitive, by its action on itself by left-multiplication. Let Sn be the symmetric group acting on {1,...,n}; then the action is sharply n-transitive. The group of n-dimensional homothety-translations acts 2-transitively on Rn. The group of n-dimensional projective transforms almost acts sharply (n+2)-transitively on the n-dimensional real projective space RPn. The "almost" is because the (n+2) points must be in general linear position. In other words, the n-dimensional projective transforms act transitively on the space of projective frames of RPn. Classifications of 2-transitive groups: Every 2-transitive group is a primitive group, but not conversely. Every Zassenhaus group is 2-transitive, but not conversely. The solvable 2-transitive groups were classified by Bertram Huppert and are described in the list of transitive finite linear groups. The insoluble groups were classified by Hering (1985) using the classification of finite simple groups and are all almost simple groups.
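For small cases, k-transitivity can be checked by brute force directly from the definition. The following Python sketch (illustrative only, and feasible only for tiny n) verifies that the symmetric group S4 acts 2-transitively, and indeed 4-transitively, on four points:

```python
# Brute-force check of k-transitivity for S_n acting on {0, ..., n-1}.
from itertools import permutations

def is_k_transitive(n, k):
    """True if for every pair of distinct ordered k-tuples (a, b) over
    n points, some permutation g in S_n maps a_i to b_i for all i."""
    points = range(n)
    group = list(permutations(points))            # all of S_n; g[x] is the image of x
    k_tuples = list(permutations(points, k))      # ordered k-tuples of distinct points
    for a in k_tuples:
        for b in k_tuples:
            if not any(all(g[x] == y for x, y in zip(a, b)) for g in group):
                return False
    return True

print(is_k_transitive(4, 2))  # True -- S_4 is 2-transitive
print(is_k_transitive(4, 4))  # True -- S_4 is 4-transitive (in fact sharply so)
```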
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fatty acid metabolism** Fatty acid metabolism: Fatty acid metabolism consists of various metabolic processes involving or closely related to fatty acids, a family of molecules classified within the lipid macronutrient category. These processes can mainly be divided into (1) catabolic processes that generate energy and (2) anabolic processes where they serve as building blocks for other compounds. In catabolism, fatty acids are metabolized to produce energy, mainly in the form of adenosine triphosphate (ATP). When compared to other macronutrient classes (carbohydrates and protein), fatty acids yield the most ATP on an energy per gram basis, when they are completely oxidized to CO2 and water by beta oxidation and the citric acid cycle. Fatty acids (mainly in the form of triglycerides) are therefore the foremost storage form of fuel in most animals, and to a lesser extent in plants. Fatty acid metabolism: In anabolism, intact fatty acids are important precursors to triglycerides, phospholipids, second messengers, hormones and ketone bodies. For example, phospholipids, which are derived from fatty acids, form the phospholipid bilayers out of which all the membranes of the cell are constructed. Phospholipids comprise the plasma membrane and other membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus. In another type of anabolism, fatty acids are modified to form other compounds such as second messengers and local hormones. The prostaglandins made from arachidonic acid stored in the cell membrane are probably the best-known of these local hormones. Fatty acid catabolism: Fatty acids are stored as triglycerides in the fat depots of adipose tissue. Between meals they are released as follows: Lipolysis, the removal of the fatty acid chains from the glycerol to which they are bound in their storage form as triglycerides (or fats), is carried out by lipases. These lipases are activated by high epinephrine and glucagon levels in the blood (or norepinephrine secreted by sympathetic nerves in adipose tissue), caused by declining blood glucose levels after meals, which simultaneously lowers the insulin level in the blood. Fatty acid catabolism: Once freed from glycerol, the free fatty acids enter the blood, which transports them, attached to plasma albumin, throughout the body. Fatty acid catabolism: Long-chain free fatty acids enter metabolizing cells (i.e. most living cells in the body except red blood cells and neurons in the central nervous system) through specific transport proteins, such as the SLC27 family fatty acid transport protein. Red blood cells do not contain mitochondria and are therefore incapable of metabolizing fatty acids; the tissues of the central nervous system cannot use fatty acids, despite containing mitochondria, because long-chain fatty acids (as opposed to medium-chain fatty acids) cannot cross the blood-brain barrier into the interstitial fluids that bathe these cells. Fatty acid catabolism: Once inside the cell, long-chain-fatty-acid—CoA ligase catalyzes the reaction between a fatty acid molecule with ATP (which is broken down to AMP and inorganic pyrophosphate) to give a fatty acyl-adenylate, which then reacts with free coenzyme A to give a fatty acyl-CoA molecule. 
In order for the acyl-CoA to enter the mitochondrion the carnitine shuttle is used: Acyl-CoA is transferred to the hydroxyl group of carnitine by carnitine palmitoyltransferase I, located on the cytosolic faces of the outer and inner mitochondrial membranes. Acyl-carnitine is shuttled inside by a carnitine-acylcarnitine translocase, as a carnitine is shuttled outside. Fatty acid catabolism: Acyl-carnitine is converted back to acyl-CoA by carnitine palmitoyltransferase II, located on the interior face of the inner mitochondrial membrane. The liberated carnitine is shuttled back to the cytosol, as an acyl-CoA is shuttled into the mitochondrial matrix. Beta oxidation, in the mitochondrial matrix, then cuts the long carbon chains of the fatty acids (in the form of acyl-CoA molecules) into a series of two-carbon (acetate) units, which, combined with co-enzyme A, form molecules of acetyl-CoA, which condense with oxaloacetate to form citrate at the "beginning" of the citric acid cycle. It is convenient to think of this reaction as marking the "starting point" of the cycle, as this is when fuel - acetyl-CoA - is added to the cycle, which will be dissipated as CO2 and H2O with the release of a substantial quantity of energy captured in the form of ATP, during the course of each turn of the cycle and subsequent oxidative phosphorylation. Briefly, the steps in beta oxidation are as follows: (1) dehydrogenation by acyl-CoA dehydrogenase, yielding 1 FADH2; (2) hydration by enoyl-CoA hydratase; (3) dehydrogenation by 3-hydroxyacyl-CoA dehydrogenase, yielding 1 NADH + H+; (4) cleavage by thiolase, yielding 1 acetyl-CoA and a fatty acid that has now been shortened by 2 carbons (forming a new, shortened acyl-CoA). This beta oxidation reaction is repeated until the fatty acid has been completely reduced to acetyl-CoA or, in the case of fatty acids with odd numbers of carbon atoms, acetyl-CoA and 1 molecule of propionyl-CoA per molecule of fatty acid. Each beta-oxidative cut of the acyl-CoA molecule eventually yields 5 ATP molecules in oxidative phosphorylation. The acetyl-CoA produced by beta oxidation enters the citric acid cycle in the mitochondrion by combining with oxaloacetate to form citrate. Coupled to oxidative phosphorylation this results in the complete combustion of the acetyl-CoA to CO2 and water. The energy released in this process is captured in the form of 1 GTP and 11 ATP molecules per acetyl-CoA molecule oxidized. This is the fate of acetyl-CoA wherever beta oxidation of fatty acids occurs, except under certain circumstances in the liver. The propionyl-CoA is later converted into succinyl-CoA through biotin-dependent propionyl-CoA carboxylase (PCC) and vitamin B12-dependent methylmalonyl-CoA mutase (MCM), sequentially. Succinyl-CoA is first converted to malate and then to pyruvate, which is transported into the matrix to enter the citric acid cycle. In the liver oxaloacetate can be wholly or partially diverted into the gluconeogenic pathway during fasting, starvation, a low carbohydrate diet, prolonged strenuous exercise, and in uncontrolled type 1 diabetes mellitus. Under these circumstances, oxaloacetate is reduced to malate, which is then removed from the mitochondria of the liver cells to be converted into glucose in the cytoplasm of the liver cells, from where it is released into the blood. In the liver, therefore, oxaloacetate is unavailable for condensation with acetyl-CoA when significant gluconeogenesis has been stimulated by low (or absent) insulin and high glucagon concentrations in the blood.
Under these conditions, acetyl-CoA is diverted to the formation of acetoacetate and beta-hydroxybutyrate. Acetoacetate, beta-hydroxybutyrate, and their spontaneous breakdown product, acetone, are frequently, but confusingly, known as ketone bodies (as they are not "bodies" at all, but water-soluble chemical substances). The ketones are released by the liver into the blood. All cells with mitochondria can take up ketones from the blood and reconvert them into acetyl-CoA, which can then be used as fuel in their citric acid cycles, as no other tissue can divert its oxaloacetate into the gluconeogenic pathway in the way that the liver can. Unlike free fatty acids, ketones can cross the blood–brain barrier and are therefore available as fuel for the cells of the central nervous system, acting as a substitute for glucose, on which these cells normally survive. The occurrence of high levels of ketones in the blood during starvation, a low carbohydrate diet, prolonged heavy exercise, or uncontrolled type 1 diabetes mellitus is known as ketosis, and, in its extreme form, in uncontrolled type 1 diabetes mellitus, as ketoacidosis. Fatty acid catabolism: The glycerol released by lipase action is phosphorylated by glycerol kinase in the liver (the only tissue in which this reaction can occur), and the resulting glycerol 3-phosphate is oxidized to dihydroxyacetone phosphate. The glycolytic enzyme triose phosphate isomerase converts this compound to glyceraldehyde 3-phosphate, which is oxidized via glycolysis, or converted to glucose via gluconeogenesis. Fatty acid catabolism: Fatty acids as an energy source Fatty acids, stored as triglycerides in an organism, are a concentrated source of energy because they contain little oxygen and are anhydrous. The energy yield from a gram of fatty acids is approximately 9 kcal (37 kJ), much higher than the 4 kcal (17 kJ) for carbohydrates. Since the hydrocarbon portion of fatty acids is hydrophobic, these molecules can be stored in a relatively anhydrous (water-free) environment. Carbohydrates, on the other hand, are more highly hydrated. For example, 1 g of glycogen binds approximately 2 g of water, which translates to 1.33 kcal/g (4 kcal/3 g). This means that fatty acids can hold more than six times the amount of energy per unit of stored mass. Put another way, if the human body relied on carbohydrates to store energy, then a person would need to carry 31 kg (67.5 lb) of hydrated glycogen to have the energy equivalent to 4.6 kg (10 lb) of fat. Hibernating animals provide a good example of the utilization of fat reserves as fuel. For example, bears hibernate for about 7 months, and during this entire period, the energy is derived from degradation of fat stores. Migrating birds similarly build up large fat reserves before embarking on their intercontinental journeys. The fat stores of young adult humans average about 10–20 kg, but vary greatly depending on gender and individual disposition. By contrast, the human body stores only about 400 g of glycogen, of which 300 g is locked inside the skeletal muscles and is unavailable to the body as a whole. The 100 g or so of glycogen stored in the liver is depleted within one day of starvation. Thereafter the glucose that is released into the blood by the liver for general use by the body tissues has to be synthesized from the glucogenic amino acids and a few other gluconeogenic substrates, which do not include fatty acids.
Nonetheless, lipolysis releases glycerol, which can enter the pathway of gluconeogenesis. Fatty acid catabolism: Carbohydrate synthesis from glycerol and fatty acids Fatty acids are broken down to acetyl-CoA by means of beta oxidation inside the mitochondria, whereas fatty acids are synthesized from acetyl-CoA outside the mitochondria, in the cytosol. The two pathways are distinct, not only in where they occur, but also in the reactions that occur, and the substrates that are used. The two pathways are mutually inhibitory, preventing the acetyl-CoA produced by beta-oxidation from entering the synthetic pathway via the acetyl-CoA carboxylase reaction. Nor can it be converted to pyruvate, as the pyruvate dehydrogenase complex reaction is irreversible. Instead, the acetyl-CoA produced by the beta-oxidation of fatty acids condenses with oxaloacetate, to enter the citric acid cycle. During each turn of the cycle, two carbon atoms leave the cycle as CO2 in the decarboxylation reactions catalyzed by isocitrate dehydrogenase and alpha-ketoglutarate dehydrogenase. Thus each turn of the citric acid cycle oxidizes an acetyl-CoA unit while regenerating the oxaloacetate molecule with which the acetyl-CoA had originally combined to form citric acid. The decarboxylation reactions occur before malate is formed in the cycle. Only plants possess the enzymes to convert acetyl-CoA into oxaloacetate, from which malate can be formed to ultimately be converted to glucose. However, acetyl-CoA can be converted to acetoacetate, which can decarboxylate to acetone (either spontaneously or catalyzed by acetoacetate decarboxylase). It can then be further metabolized to isopropanol, which is excreted in breath/urine, or by CYP2E1 into hydroxyacetone (acetol). Acetol can be converted to propylene glycol. This converts to pyruvate (by two alternative enzymes), or to propionaldehyde, or to L-lactaldehyde and then L-lactate (the common lactate isomer). Another pathway turns acetol to methylglyoxal, then to pyruvate, or to D-lactaldehyde (via S-D-lactoyl-glutathione or otherwise) and then D-lactate. D-lactate metabolism (to glucose) is slow or impaired in humans, so most of the D-lactate is excreted in the urine; thus D-lactate derived from acetone can contribute significantly to the metabolic acidosis associated with ketosis or isopropanol intoxication. L-Lactate can complete the net conversion of fatty acids into glucose. The first experiment to show conversion of acetone to glucose was carried out in 1951. This and further experiments used carbon isotopic labelling. Up to 11% of the glucose can be derived from acetone during starvation in humans. The glycerol released into the blood during the lipolysis of triglycerides in adipose tissue can only be taken up by the liver. Here it is converted into glycerol 3-phosphate by the action of glycerol kinase, which consumes one molecule of ATP per glycerol molecule phosphorylated. Glycerol 3-phosphate is then oxidized to dihydroxyacetone phosphate, which is, in turn, converted into glyceraldehyde 3-phosphate by the enzyme triose phosphate isomerase. From here the three carbon atoms of the original glycerol can be oxidized via glycolysis, or converted to glucose via gluconeogenesis. Other functions and uses of fatty acids: Intracellular signaling Fatty acids are an integral part of the phospholipids that make up the bulk of the plasma membranes, or cell membranes, of cells.
These phospholipids can be cleaved into diacylglycerol (DAG) and inositol trisphosphate (IP3) through hydrolysis of the phospholipid, phosphatidylinositol 4,5-bisphosphate (PIP2), by the cell membrane bound enzyme phospholipase C (PLC). Other functions and uses of fatty acids: Eicosanoid paracrine hormones Among the products of fatty acid metabolism are the prostaglandins, compounds having diverse hormone-like effects in animals. Prostaglandins have been found in almost every tissue in humans and other animals. They are enzymatically derived from arachidonic acid, a 20-carbon polyunsaturated fatty acid. Every prostaglandin therefore contains 20 carbon atoms, including a 5-carbon ring. They are a subclass of eicosanoids and form the prostanoid class of fatty acid derivatives. The prostaglandins are synthesized in the cell membrane by the cleavage of arachidonate from the phospholipids that make up the membrane. This is catalyzed either by phospholipase A2 acting directly on a membrane phospholipid, or by a lipase acting on DAG (diacyl-glycerol). The arachidonate is then acted upon by the cyclooxygenase component of prostaglandin synthase. This forms a cyclopentane ring roughly in the middle of the fatty acid chain. The reaction also adds 4 oxygen atoms derived from two molecules of O2. The resulting molecule is prostaglandin G2, which is converted by the hydroperoxidase component of the enzyme complex into prostaglandin H2. This highly unstable compound is rapidly transformed into other prostaglandins, prostacyclin and thromboxanes. These are then released into the interstitial fluids surrounding the cells that have manufactured the eicosanoid hormone. Other functions and uses of fatty acids: If arachidonate is acted upon by a lipoxygenase instead of cyclooxygenase, hydroxyeicosatetraenoic acids and leukotrienes are formed. They also act as local hormones. Other functions and uses of fatty acids: Prostaglandins have two derivatives: prostacyclins and thromboxanes. Prostacyclins are powerful locally acting vasodilators and inhibit the aggregation of blood platelets. Through their role in vasodilation, prostacyclins are also involved in inflammation. They are synthesized in the walls of blood vessels and serve the physiological function of preventing needless clot formation, as well as regulating the contraction of smooth muscle tissue. Conversely, thromboxanes (produced by platelet cells) are vasoconstrictors and facilitate platelet aggregation. Their name comes from their role in clot formation (thrombosis). Dietary sources of fatty acids, their digestion, absorption, transport in the blood and storage: A significant proportion of the fatty acids in the body are obtained from the diet, in the form of triglycerides of either animal or plant origin. The fatty acids in the fats obtained from land animals tend to be saturated, whereas the fatty acids in the triglycerides of fish and plants are often polyunsaturated and therefore present as oils. Dietary sources of fatty acids, their digestion, absorption, transport in the blood and storage: These triglycerides cannot be absorbed by the intestine. They are broken down into mono- and di-glycerides plus free fatty acids (but no free glycerol) by pancreatic lipase, which forms a 1:1 complex with a protein called colipase (also a constituent of pancreatic juice), which is necessary for its activity. The activated complex can work only at a water-fat interface.
Therefore, it is essential that fats are first emulsified by bile salts for optimal activity of these enzymes. The digestion products consist of a mixture of tri-, di- and monoglycerides and free fatty acids, which, together with the other fat-soluble contents of the diet (e.g. the fat-soluble vitamins and cholesterol) and bile salts, form mixed micelles in the watery duodenal contents. The contents of these micelles (but not the bile salts) enter the enterocytes (epithelial cells lining the small intestine), where they are resynthesized into triglycerides and packaged into chylomicrons, which are released into the lacteals (the capillaries of the lymph system of the intestines). These lacteals drain into the thoracic duct, which empties into the venous blood at the junction of the left jugular and left subclavian veins on the lower left hand side of the neck. This means that the fat-soluble products of digestion are discharged directly into the general circulation, without first passing through the liver, unlike all other digestion products. The reason for this peculiarity is unknown. Dietary sources of fatty acids, their digestion, absorption, transport in the blood and storage: The chylomicrons circulate throughout the body, giving the blood plasma a milky or creamy appearance after a fatty meal. Lipoprotein lipase on the endothelial surfaces of the capillaries, especially in adipose tissue, but to a lesser extent also in other tissues, partially digests the chylomicrons into free fatty acids, glycerol and chylomicron remnants. The fatty acids are absorbed by the adipocytes, but the glycerol and chylomicron remnants remain in the blood plasma, ultimately to be removed from the circulation by the liver. The free fatty acids released by the digestion of the chylomicrons are absorbed by the adipocytes, where they are resynthesized into triglycerides using glycerol derived from glucose in the glycolytic pathway. These triglycerides are stored, until needed for the fuel requirements of other tissues, in the fat droplet of the adipocyte. Dietary sources of fatty acids, their digestion, absorption, transport in the blood and storage: The liver absorbs a proportion of the glucose from the blood in the portal vein coming from the intestines. After the liver has replenished its glycogen stores (which amount to only about 100 g of glycogen when full) much of the rest of the glucose is converted into fatty acids as described below. These fatty acids are combined with glycerol to form triglycerides, which are packaged into droplets very similar to chylomicrons, but known as very low-density lipoproteins (VLDL). These VLDL droplets are processed in exactly the same manner as chylomicrons, except that the VLDL remnant is known as an intermediate-density lipoprotein (IDL), which is capable of scavenging cholesterol from the blood. This converts IDL into low-density lipoprotein (LDL), which is taken up by cells that require cholesterol for incorporation into their cell membranes or for synthetic purposes (e.g. the formation of the steroid hormones). The remainder of the LDLs is removed by the liver. Adipose tissue and lactating mammary glands also take up glucose from the blood for conversion into triglycerides. This occurs in the same way as in the liver, except that these tissues do not release the triglycerides thus produced as VLDL into the blood.
Adipose tissue cells store the triglycerides in their fat droplets, ultimately to release them again as free fatty acids and glycerol into the blood (as described above), when the plasma concentration of insulin is low, and that of glucagon and/or epinephrine is high. Mammary glands discharge the fat (as cream fat droplets) into the milk that they produce under the influence of the anterior pituitary hormone prolactin. Dietary sources of fatty acids, their digestion, absorption, transport in the blood and storage: All cells in the body need to manufacture and maintain their membranes and the membranes of their organelles. Whether they rely entirely on free fatty acids absorbed from the blood, or are able to synthesize their own fatty acids from blood glucose, is not known. The cells of the central nervous system will almost certainly have the capability of manufacturing their own fatty acids, as these molecules cannot reach them through the blood-brain barrier. However, it is unknown how the essential fatty acids, which mammals cannot synthesize themselves but which are nevertheless important components of cell membranes (with the other functions described above), reach them. Fatty acid synthesis: Much like beta-oxidation, straight-chain fatty acid synthesis occurs via six recurring reactions, until the 16-carbon palmitic acid is produced. In microorganisms such as Escherichia coli, these reactions are performed by fatty acid synthase II (FASII), which in general contains multiple enzymes that act as one complex. FASII is present in prokaryotes, plants, fungi, and parasites, as well as in mitochondria. In animals as well as some fungi such as yeast, these same reactions occur on fatty acid synthase I (FASI), a large dimeric protein that has all of the enzymatic activities required to create a fatty acid. FASI is less efficient than FASII; however, it allows for the formation of more molecules, including "medium-chain" fatty acids via early chain termination. Acyltransferase and transacylase enzymes incorporate fatty acids into phospholipids, triacylglycerols, etc., by transferring fatty acids between an acyl acceptor and donor. They also have the task of synthesizing bioactive lipids as well as their precursor molecules. Once a 16:0 carbon fatty acid has been formed, it can undergo a number of modifications, resulting in desaturation and/or elongation. Elongation, starting with stearate (18:0), is performed mainly in the endoplasmic reticulum by several membrane-bound enzymes. The enzymatic steps involved in the elongation process are principally the same as those carried out by fatty acid synthesis, but the four principal successive steps of the elongation are performed by individual proteins, which may be physically associated. Fatty acid synthesis: Abbreviations: ACP – Acyl carrier protein, CoA – Coenzyme A, NADP – Nicotinamide adenine dinucleotide phosphate. Fatty acid synthesis: Note that during fatty acid synthesis the reducing agent is NADPH, whereas NAD is the oxidizing agent in beta-oxidation (the breakdown of fatty acids to acetyl-CoA). This difference exemplifies a general principle that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. (Thus NADPH is also required for the synthesis of cholesterol from acetyl-CoA, while NADH is generated during glycolysis.) The source of the NADPH is two-fold.
When malate is oxidatively decarboxylated by the NADP+-linked malic enzyme, pyruvate, CO2 and NADPH are formed. NADPH is also formed by the pentose phosphate pathway, which converts glucose into ribose, which can be used in synthesis of nucleotides and nucleic acids, or it can be catabolized to pyruvate. Fatty acid synthesis: Glycolytic end products are used in the conversion of carbohydrates into fatty acids. In humans, fatty acids are formed from carbohydrates predominantly in the liver and adipose tissue, as well as in the mammary glands during lactation. The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol, where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA is carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids. Fatty acid synthesis: Regulation of fatty acid synthesis Acetyl-CoA is converted into malonyl-CoA by acetyl-CoA carboxylase, at which point malonyl-CoA is destined to feed into the fatty acid synthesis pathway. Acetyl-CoA carboxylase is the point of regulation in saturated straight-chain fatty acid synthesis, and is subject to both phosphorylation and allosteric regulation. Regulation by phosphorylation occurs mostly in mammals, while allosteric regulation occurs in most organisms. Allosteric control occurs as feedback inhibition by palmitoyl-CoA and activation by citrate. When there are high levels of palmitoyl-CoA, the final product of saturated fatty acid synthesis, it allosterically inactivates acetyl-CoA carboxylase to prevent a build-up of fatty acids in cells. Citrate activates acetyl-CoA carboxylase when present at high levels, because high citrate levels indicate that there is enough acetyl-CoA to feed into the Krebs cycle and produce energy. High levels of insulin in the blood plasma (e.g. after meals) cause the dephosphorylation and activation of acetyl-CoA carboxylase, thus promoting the formation of malonyl-CoA from acetyl-CoA, and consequently the conversion of carbohydrates into fatty acids, while epinephrine and glucagon (released into the blood during starvation and exercise) cause the phosphorylation of this enzyme, inhibiting lipogenesis in favor of fatty acid oxidation via beta-oxidation. Disorders: Disorders of fatty acid metabolism can be described in terms of, for example, hypertriglyceridemia (an excessive level of triglycerides), or other types of hyperlipidemia. These may be familial or acquired. Disorders: Familial types of disorders of fatty acid metabolism are generally classified as inborn errors of lipid metabolism.
These disorders may be described as fatty acid oxidation disorders or as lipid storage disorders, and are any one of several inborn errors of metabolism that result from enzyme or transport protein defects affecting the ability of the body to oxidize fatty acids in order to produce energy within muscles, liver, and other cell types. When a fatty acid oxidation disorder affects the muscles, it is a metabolic myopathy. Disorders: Moreover, cancer cells can display irregular fatty acid metabolism with regard to both fatty acid synthesis and mitochondrial fatty acid oxidation (FAO), both of which are involved in diverse aspects of tumorigenesis and cell growth.
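As a worked recap of the numbers quoted earlier in this article, the following Python sketch combines the per-step ATP yields from the beta-oxidation discussion with the fat-versus-glycogen storage arithmetic. The function and variable names are illustrative, and the yields follow the article's own (older, textbook P/O) conventions rather than more recent estimates.

```python
def beta_oxidation_atp(n_carbons: int) -> int:
    """Approximate net ATP from completely oxidizing a saturated,
    even-chain fatty acid with n_carbons carbon atoms, using the
    per-step yields quoted in the text."""
    cuts = n_carbons // 2 - 1        # beta-oxidative cuts: 7 for palmitate
    acetyl_coa = n_carbons // 2      # acetyl-CoA units: 8 for palmitate
    atp_from_cuts = cuts * 5         # "5 ATP per beta-oxidative cut"
    atp_per_acetyl = 12              # "1 GTP and 11 ATP" per acetyl-CoA oxidized
    activation_cost = 2              # ATP -> AMP + PPi counts as 2 ATP equivalents
    return atp_from_cuts + acetyl_coa * atp_per_acetyl - activation_cost

print(beta_oxidation_atp(16))        # palmitic acid: 7*5 + 8*12 - 2 = 129

# Storage-density comparison: fat at ~9 kcal/g versus hydrated glycogen
# at ~1.33 kcal/g (4 kcal per 3 g of glycogen plus bound water).
fat_kcal = 4.6 * 1000 * 9.0                    # energy in 4.6 kg of fat
glycogen_kg = fat_kcal / 1.33 / 1000           # hydrated glycogen equivalent
print(round(glycogen_kg))                      # ~31 kg, matching the text
```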
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Type theory with records** Type theory with records: Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems. Syntax: A record type is a set of fields. A field is a pair consisting of a label and a type. Within a record type, field labels are unique. The witness of a record type is a record. A record is a similar set of fields, but fields contain objects instead of types. The object in each field must be of the type declared in the corresponding field in the record type. Basic type: [x : Ind]. Object: [x = a]. Ptype: [x : Ind, y : Ind, c_boy : boy(x), c_dog : dog(y), c_hug : hug(x,y)]. Object: [x = a, y = b, c_boy = p1, c_dog = p2, c_hug = p3], where a and b are individuals (of type Ind), p1 is a proof that a is a boy, etc.
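The witness relation between records and record types can be made concrete with a small executable sketch. The Python below is not from any TTR implementation; representing types as membership-testing functions, modelling proofs as tagged strings, and hard-coding the dependency of the proof fields on x = a and y = b are simplifications made purely for illustration (real TTR ptypes depend on the values of earlier fields).

```python
# Toy model: a record type maps labels to types (here, predicates over
# objects); a record maps labels to objects. A record witnesses a record
# type when every field of the type is present and its object passes
# the declared type's test.

individuals = {"a", "b"}  # hypothetical domain of type Ind

record_type = {
    "x": lambda v: v in individuals,           # x : Ind
    "y": lambda v: v in individuals,           # y : Ind
    "c_boy": lambda v: v == "proof:boy(a)",    # c_boy : boy(x), x = a hard-coded
    "c_dog": lambda v: v == "proof:dog(b)",    # c_dog : dog(y), y = b hard-coded
    "c_hug": lambda v: v == "proof:hug(a,b)",  # c_hug : hug(x,y)
}

record = {
    "x": "a",
    "y": "b",
    "c_boy": "proof:boy(a)",
    "c_dog": "proof:dog(b)",
    "c_hug": "proof:hug(a,b)",
}

def witnesses(rec, rec_type):
    """True iff rec contains every label of rec_type and each object
    is of the type declared in the corresponding field."""
    return all(label in rec and check(rec[label])
               for label, check in rec_type.items())

print(witnesses(record, record_type))      # True
print(witnesses({"x": "a"}, record_type))  # False: required fields missing
```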
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bisoprolol** Bisoprolol: Bisoprolol, sold under the brand name Zebeta among others, is a beta blocker medication used for heart diseases. These include tachyarrhythmias, high blood pressure, chest pain from not enough blood flow to the heart, and heart failure. It is taken by mouth. Common side effects include headache, feeling tired, diarrhea, and swelling in the legs. More severe side effects include worsening asthma, blocking the ability to recognize low blood sugar, and worsening heart failure. There are concerns that use during pregnancy may be harmful to the baby. Bisoprolol is in the beta blocker family of medications and is of the β1 selective type. Bisoprolol is on the World Health Organization's List of Essential Medicines. Bisoprolol is available as a generic medication. In 2020, it was the 267th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses: Bisoprolol is currently used for prevention of cardiovascular events following a heart attack in patients with risk factors for disease progression, in the management of congestive heart failure with reduced ejection fraction, and as a second-line agent for hypertension. Bisoprolol may be beneficial in the treatment of high blood pressure, but it is not recommended as a first-line antihypertensive agent. It can be an adjunct to first-line antihypertensive agents in patients with accompanying comorbidities, for example congestive heart failure, where selected beta blockers can be added in patients who remain mildly to moderately symptomatic despite appropriate doses of an angiotensin-converting-enzyme inhibitor. In cardiac ischemia, the drug is used to reduce the activity of the heart muscle, thereby reducing its oxygen and nutrient demands and allowing its reduced blood supply to still transport sufficient amounts of oxygen and nutrients to meet its needs. Side effects: An overdose of bisoprolol can lead to fatigue, hypotension, hypoglycemia, bronchospasms, and bradycardia. Bronchospasms and hypoglycemia occur because at high doses, the drug can be an antagonist for β2 adrenergic receptors located in the lungs and liver. Bronchospasm occurs due to the blockage of β2 receptors in the lungs. Hypoglycemia occurs due to decreased stimulation of glycogenolysis and gluconeogenesis in the liver via β2 receptors. Side effects: Cautions Non-selective beta-blockers should be avoided in people with asthma or bronchospasm as they may cause exacerbations and worsening of symptoms. β1-selective beta-blockers like bisoprolol have not been shown to cause an increase in asthma exacerbations, and may be cautiously tried in those with controlled, mild-to-moderate asthma with cardiac comorbidities. A 2014 meta-analysis found that, unlike non-selective beta-blockers, β1-selective beta-blockers such as bisoprolol showed only a small impact on lung function, with patients remaining responsive to salbutamol (a β2-agonist) rescue therapy, and endorsed the use of bisoprolol in select patients with controlled asthma. This was supported by a 2020 clinical trial in which bisoprolol had no significant impact on bronchodilation after salbutamol administration. Pharmacology: Mechanism of action Bisoprolol is cardioprotective because it selectively and competitively blocks catecholamine (adrenaline) stimulation of β1 adrenergic receptors (adrenoreceptors), which are mainly found in the heart muscle cells and heart conduction tissue (cardiospecific), but also found in juxtaglomerular cells in the kidney.
Normally, adrenaline and noradrenaline stimulation of the β1 adrenoreceptor activates a signalling cascade (Gs protein and cAMP) which ultimately leads to increased myocardial contractility and increased heart rate of the heart muscle and heart pacemaker, respectively. Bisoprolol competitively blocks the activation of this cascade and so decreases the adrenergic tone/stimulation of the heart muscle and pacemaker cells. Decreased adrenergic tone results in reduced contractility of the heart muscle and a lowered pacemaker rate. Pharmacology: β1-selectivity Bisoprolol's β1-selectivity is especially important in comparison to other, nonselective beta blockers. The effects of the drug are limited to areas containing β1 adrenoreceptors, which is mainly the heart and part of the kidney. Whilst bisoprolol's β1-adrenoceptor selectivity can help patients avoid certain side effects associated with non-selective beta-blocker activity at additional adrenoceptors (α1 and β2), it does not make the drug superior in treating beta-blocker-indicated cardiac conditions such as heart failure, though it could prove beneficial to patients with specific comorbidities. Bisoprolol has a higher degree of β1-selectivity than atenolol, metoprolol and betaxolol, with a selectivity ranging from 11 to 15 times greater for β1 over β2. Nebivolol, however, is approximately 3.5 times more β1-selective. Pharmacology: Renin-angiotensin system Bisoprolol inhibits renin secretion by about 65% and tachycardia by about 30%. Pharmacology: Pharmacokinetics After ingestion, bisoprolol is well absorbed, with a high bioavailability of approximately 90% and a plasma half-life of 10-12 hours. Bisoprolol has both lipid- and water-soluble properties. The plasma protein binding of bisoprolol is approximately 35%, the volume of distribution is 3.5 L/kg and the total clearance is approximately 15 L/h. Bisoprolol is eliminated from the body in two ways: 50% of the substance is converted in the liver to inactive metabolites, which are then excreted by the kidneys; the remaining 50% is eliminated unchanged via the kidneys. Since elimination is split evenly between liver and kidney, no dose adjustment is required in patients with hepatic or renal impairment. Pharmacology: The pharmacokinetics of bisoprolol are linear and independent of age. In patients with chronic heart failure, the plasma level of bisoprolol is higher and the half-life is longer than in healthy subjects when compared across studies. Currently, there is a lack of evidence directly comparing bisoprolol pharmacokinetics between healthy subjects and chronic heart failure subjects. History: Bisoprolol was patented in 1976 and approved for medical use in 1986. It was approved for medical use in the United States in 1992. Brand names: In India, it is sold under the trade name Bisotab and is available in two strengths, 2.5 mg and 5 mg. In Italy, it is sold under the trade name Congescor and is available in six strengths: 1.25 mg, 2.5 mg, 3.75 mg, 5 mg, 7.5 mg and 10 mg. In Germany and Eastern Europe bisoprolol is marketed as Bisoprolol-ratiopharm by Ratiopharm (Teva).
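The pharmacokinetic parameters quoted above lend themselves to a simple one-compartment model. The Python sketch below is illustrative only: the dose, body weight, and instantaneous-absorption assumption are not from the article, and a one-compartment model is itself a simplification. It uses the article's figures of ~90% bioavailability, a 3.5 L/kg volume of distribution, and a 10-12 hour half-life.

```python
import math

# Assumed inputs (not from the article): a 10 mg oral dose, 70 kg adult.
dose_mg = 10.0
body_weight_kg = 70.0

# Parameters quoted in the article:
bioavailability = 0.90          # ~90% oral bioavailability
vd_l_per_kg = 3.5               # volume of distribution, L/kg
half_life_h = 11.0              # midpoint of the 10-12 h range

vd_l = vd_l_per_kg * body_weight_kg
k_el = math.log(2) / half_life_h          # first-order elimination constant, 1/h

# Peak plasma concentration if absorption were instantaneous (a further
# simplification; real oral absorption takes hours):
c0_mg_per_l = dose_mg * bioavailability / vd_l
print(f"C0 ≈ {c0_mg_per_l * 1000:.1f} µg/L")   # ≈ 36.7 µg/L

# Concentration remaining t hours after the dose:
for t in (6, 12, 24):
    c = c0_mg_per_l * math.exp(-k_el * t)
    print(f"t = {t:2d} h: ≈ {c * 1000:.1f} µg/L")
```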
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crowdsourced Testing (company)** Crowdsourced Testing (company): Crowdsourced Testing is a crowdsourcing platform which provides functional, localization, usability and beta testing through crowdsourcing. History: Crowdsourced Testing was launched in 2012 by QA on Request, a traditional quality assurance testing company. They were invited to join Start-Up Chile's fifth cohort in November 2012, and were selected out of 100 as one of the 21 startups to pitch for the Demo Day. It was then that Simon Papineau, Crowdsourced Testing's founder and current CEO, began to split his time between Canada and Chile, as a Chilean team began to grow. In April 2013, Crowdsourced Testing was invited, along with six other companies, to join the Wayra acceleration program. They were invited to join Rackspace's 24k Startup Program in July 2013, and Tech Wildcatters' Globe Start in August 2013. Testing process: After every project, a tester's performance is rated on a scale of 0-10. This score is averaged with their previous scores in order to get their Tester Score. Only testers with scores above 6 are allowed to participate in paid projects. First-time testers who register on the Crowdsourced Testing platform are required to begin with free (non-paid) projects before getting their Tester Score above 6 (see the sketch below). When requesting a testing project, clients are able to choose the devices they want tested, the time spent on each device, and the number of testers. Crowdsourced Testing also offers packs for people who do not know what tests their product requires. Products: In addition to its testing services, Crowdsourced Testing has launched two products designed to help testers during the testing process. Products: Damn Bugs Damn Bugs was launched in March 2013 as a free, web-based bug tracking software. It is only a bug-tracker, and does not claim to act as a test management software. Its features are: a free Chrome browser extension that allows testers to take screenshots and report bugs without having to leave the page that they are on; permission management for each testing session; standardized bug reports; project status reporting charts; email notifications for comments and updates related to your tasks; and an unlimited number of users. Damn Bugs also has a feedback page where users can offer suggestions or voice their concerns. Comments are classified as "Under Review", "Planned", "Started", "Completed" or "Declined". So far, Damn Bugs developers claim to have implemented more than 75 user suggestions. Products: Overlook Launched in May 2015, Overlook is a free, web-based test plan management software. Its aim is to help teams create executable test plans for test-plan driven testing. Its features include creating and executing test plans, and iOS and Android readiness checklists. As with its other product, Damn Bugs, Crowdsourced Testing has launched a forum to collect user suggestions and requests. The team has claimed that they will go through all requests and implement those that make sense.
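The Tester Score mechanism described in the testing process above is a simple running average with a threshold gate. A minimal Python sketch follows; the class and method names are invented for illustration, since the platform's actual implementation is not public.

```python
class Tester:
    """Toy model of the Tester Score: per-project ratings on a 0-10
    scale are averaged, and only testers averaging above 6 qualify
    for paid projects."""

    PAID_THRESHOLD = 6.0

    def __init__(self):
        self.ratings = []

    def rate_project(self, score: float) -> None:
        if not 0 <= score <= 10:
            raise ValueError("ratings are on a 0-10 scale")
        self.ratings.append(score)

    @property
    def tester_score(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    @property
    def eligible_for_paid_projects(self) -> bool:
        return self.tester_score > self.PAID_THRESHOLD

t = Tester()
for s in (7.0, 5.5, 8.0):               # ratings from three free projects
    t.rate_project(s)
print(round(t.tester_score, 2))          # 6.83
print(t.eligible_for_paid_projects)      # True
```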
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Naeviology** Naeviology: Naeviology is a method of divination which looks at the moles, scars, or other bodily marks on a person as a means of telling their future. It peaked in popularity between the 1700s and 1800s. Several scientific papers have tried to automate the process of mole reading. In India this practice is called moleology or moleosophy. There is a related process called Chinese facial mole reading which links mole locations, primarily on the face, with personality traits or future life events; there are smartphone applications which claim to foretell the future using the phone's camera to survey moles.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chromium(III) 2-ethylhexanoate** Chromium(III) 2-ethylhexanoate: Chromium(III) 2-ethylhexanoate, C24H45CrO6, is a coordination complex of chromium and 2-ethylhexanoate. In combination with 2,5-dimethylpyrrole it forms the Phillips selective ethylene trimerisation catalyst (not to be confused with the Phillips catalyst), used in the industrial production of linear alpha olefins, particularly 1-hexene or 1-octene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Logic for Computable Functions** Logic for Computable Functions: Logic for Computable Functions (LCF) is an interactive automated theorem prover developed at Stanford and Edinburgh by Robin Milner and collaborators in the early 1970s, based on the theoretical foundation of the logic of computable functions previously proposed by Dana Scott. Work on the LCF system introduced the general-purpose programming language ML to allow users to write theorem-proving tactics, supporting algebraic data types, parametric polymorphism, abstract data types, and exceptions. Basic idea: Theorems in the system are terms of a special "theorem" abstract data type. The general mechanism of abstract data types of ML ensures that theorems are derived using only the inference rules given by the operations of the theorem abstract type. Users can write arbitrarily complex ML programs to compute theorems; the validity of theorems does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation and the correctness of the ML compiler. Advantages: The LCF approach provides similar trustworthiness to systems that generate explicit proof certificates, but without the need to store proof objects in memory. The Theorem data type can be easily implemented to optionally store proof objects, depending on the system's run-time configuration, so it generalizes the basic proof-generation approach. The design decision to use a general-purpose programming language for developing theorems means that, depending on the complexity of programs written, it is possible to use the same language to write step-by-step proofs, decision procedures, or theorem provers. Disadvantages: Trusted computing base The implementation of the underlying ML compiler adds to the trusted computing base. Work on CakeML resulted in a formally verified ML compiler, alleviating some of these concerns. Disadvantages: Efficiency and complexity of proof procedures Theorem proving often benefits from decision procedures and theorem proving algorithms, whose correctness has been extensively analyzed. A straightforward way of implementing these procedures in an LCF approach requires such procedures to always derive outcomes from the axioms, lemmas, and inference rules of the system, as opposed to directly computing the outcome. A potentially more efficient approach is to use reflection to prove that a function operating on formulas always gives a correct result. Influences: Among subsequent implementations is Cambridge LCF. Later systems simplified the logic to use total instead of partial functions, leading to HOL, HOL Light, and the Isabelle proof assistant that supports various logics. As of 2019, the Isabelle proof assistant still contains an implementation of an LCF logic, Isabelle/LCF.
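The "theorems as an abstract data type" idea can be demonstrated in miniature outside ML. The Python sketch below is a deliberately tiny, hypothetical kernel for implication-only propositional logic, not LCF's actual logic or API: the only ways to obtain a Theorem are the axiom and inference-rule functions, so any Theorem value is derivable by construction, however complex the program that produced it. (Python cannot enforce the abstraction as strictly as ML's type system does, which is exactly the guarantee the text attributes to ML.)

```python
class Theorem:
    """Values of this type represent derived theorems. Direct construction
    is blocked by a private token; only the rule functions below build one."""
    _key = object()

    def __init__(self, conclusion, key=None):
        if key is not Theorem._key:
            raise TypeError("Theorems can only be built by inference rules")
        self.conclusion = conclusion  # formulas are nested ("->", p, q) tuples

    def __repr__(self):
        return f"|- {self.conclusion}"

def axiom_k(p, q):
    """Axiom schema K: |- p -> (q -> p)."""
    return Theorem(("->", p, ("->", q, p)), key=Theorem._key)

def modus_ponens(thm_imp, thm_p):
    """From |- p -> q and |- p, derive |- q."""
    assert isinstance(thm_imp, Theorem) and isinstance(thm_p, Theorem)
    op, p, q = thm_imp.conclusion
    assert op == "->" and p == thm_p.conclusion, "premises do not match"
    return Theorem(q, key=Theorem._key)

t_p = axiom_k("A", "B")               # |- P, where P = A -> (B -> A)
t_imp = axiom_k(t_p.conclusion, "C")  # |- P -> (C -> P)
print(modus_ponens(t_imp, t_p))       # |- C -> P, derived rather than asserted
# Theorem("A")  # would raise TypeError: not built by an inference rule
```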
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Double-lumen endobronchial tube** Double-lumen endobronchial tube: A double-lumen endotracheal tube (also called double-lumen endobronchial tube or DLT) is a type of endotracheal tube which is used in tracheal intubation during thoracic surgery and other medical conditions to achieve selective, one-sided ventilation of either the right or the left lung. Indications: There are several conditions that may make one-sided lung ventilation necessary. Absolute indications include separation of the right from the left lung to avoid spillage of blood or pus from an infected or bleeding side to the unaffected side. Relative indications include the collapse of one lung and the selective ventilation of the remaining lung in order to facilitate exposure of the anatomical structures to be operated on in thoracic surgeries, such as the repair of a thoracic aortic aneurysm, pneumonectomy or lobectomy. Development and description: A DLT is made up of two small-lumen endotracheal tubes of unequal length fixed side by side. The shorter tube ends in the trachea while the longer one is placed in either the left or right bronchus in order to selectively ventilate the left or right lung respectively. The first double-lumen tube, used for bronchospirometry and later for one-lung anaesthesia in humans, was introduced by Carlens in 1949. Modifications to the original Carlens tube have been introduced by White, Robertshaw and others. The most commonly used DLTs today are the Carlens and the Robertshaw tubes. These allow single-lung ventilation while the other lung is collapsed to make thoracic surgery easier or possible. This may be necessary so as to facilitate the surgeon's view and access to relevant structures within the thoracic cavity. The deflated lung is re-inflated as surgery finishes to check for leakages or other injuries. These tubes are typically coaxial, with two separate channels and two separate openings. They incorporate an endotracheal lumen which terminates in the trachea and an endobronchial lumen, the distal tip of which is positioned 1–2 cm into the right or left mainstem bronchus. Development and description: Proper placement of DLTs requires considerable clinical experience, and various techniques for their insertion have been developed. A small simulator is available to help in the training of Carlens tube rotation maneuvers. Placement has been found to be easier with the aid of fiber-optic equipment such as a bronchoscope. Currently, flexible fiberoptic bronchoscopy examination is recommended before and during placement, and at the conclusion of the use of DLTs. Alternatives: Other methods of achieving one-sided lung ventilation are the Univent tube, which has a single tracheal lumen and blocker, and other endobronchial blockers. The approach of ventilating each lung via a separate ventilator is called the DuoVent approach. This system operates by connecting both ventilators to a master control unit, allowing for synchrony between the two ventilators.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Discrete orthogonal polynomials** Discrete orthogonal polynomials: In mathematics, a sequence of discrete orthogonal polynomials is a sequence of polynomials that are pairwise orthogonal with respect to a discrete measure. Examples include the discrete Chebyshev polynomials, Charlier polynomials, Krawtchouk polynomials, Meixner polynomials, dual Hahn polynomials, Hahn polynomials, and Racah polynomials. If the measure has finite support, then the corresponding sequence of discrete orthogonal polynomials has only a finite number of elements. The Racah polynomials give an example of this. Definition: Consider a discrete measure μ on some set S = {s_0, s_1, …} with weight function ω(x). A family of orthogonal polynomials {p_n(x)} is called discrete if the polynomials are orthogonal with respect to ω (resp. μ), i.e. ∑_{x∈S} p_n(x) p_m(x) ω(x) = κ_n δ_{n,m}, where δ_{n,m} is the Kronecker delta. Remark: Any discrete measure is of the form μ = ∑_i a_i δ_{s_i}, so one can define a weight function by ω(s_i) = a_i. Literature: Baik, Jinho; Kriecherbauer, T.; McLaughlin, K. T.-R.; Miller, P. D. (2007), Discrete orthogonal polynomials. Asymptotics and applications, Annals of Mathematics Studies, vol. 164, Princeton University Press, ISBN 978-0-691-12734-7, MR 2283089
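The defining orthogonality sum is easy to verify numerically. The following Python sketch builds an orthonormal polynomial family on the uniform discrete measure over {0, 1, ..., 5} by Gram-Schmidt (recovering the discrete Chebyshev polynomials up to normalization); it is a standard illustration of the definition, not code from the reference above.

```python
import numpy as np

N = 6                          # finite support s_0, ..., s_5
x = np.arange(N, dtype=float)  # support points
w = np.ones(N)                 # uniform weights ω(s_i) = 1

def inner(p, q):
    """The discrete inner product ∑_{x∈S} p(x) q(x) ω(x), with
    polynomials represented by their values on the support."""
    return np.sum(p * q * w)

# Gram-Schmidt on the monomials 1, x, ..., x^(N-1). A finite support of
# N points admits at most N orthogonal polynomials, as noted in the text.
polys = []
for n in range(N):
    p = x ** n
    for q in polys:                  # subtract projections onto the earlier,
        p = p - inner(p, q) * q      # already-normalized polynomials
    polys.append(p / np.sqrt(inner(p, p)))

gram = np.array([[inner(p, q) for q in polys] for p in polys])
print(np.allclose(gram, np.eye(N), atol=1e-8))  # True: ∑ p_n p_m ω = δ_{n,m}
```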
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Terminal illness** Terminal illness: Terminal illness or end-stage disease is a disease that cannot be cured or adequately treated and is expected to result in the death of the patient. This term is more commonly used for progressive diseases such as cancer, dementia or advanced heart disease than for injury. In popular use, it indicates a disease that will progress until death with near absolute certainty, regardless of treatment. A patient who has such an illness may be referred to as a terminal patient, terminally ill or simply as being terminal. There is no standardized life expectancy for a patient to be considered terminal, although it is generally months or less. Life expectancy for terminal patients is a rough estimate given by the physician based on previous data and does not always reflect true longevity. An illness which is lifelong but not fatal is a chronic condition. Terminal illness: Terminal patients have options for disease management after diagnosis. Examples include caregiving, continued treatment, palliative and hospice care, and physician-assisted suicide. Decisions regarding management are made by the patient and their family, although medical professionals may offer recommendations of services available to terminal patients. Lifestyle after diagnosis varies depending on management decisions and the nature of the disease, and there may be restrictions depending on the condition of the patient. Terminal patients may experience depression or anxiety associated with impending death, and family and caregivers may struggle with psychological burdens. Psychotherapeutic interventions may alleviate some of these burdens and are often incorporated into palliative care. Because terminal patients are aware of their impending deaths, they have time to prepare for end-of-life care, such as advance directives and living wills, which have been shown to improve end-of-life care. While death cannot be avoided, patients can strive to die a death seen as good. Management: By definition, there is not a cure or adequate treatment for terminal illnesses. However, some kinds of medical treatments may be appropriate anyway, such as treatment to reduce pain or ease breathing. Some terminally ill patients stop all debilitating treatments to reduce unwanted side effects. Others continue aggressive treatment in the hope of an unexpected success. Still others reject conventional medical treatment and pursue unproven treatments such as radical dietary modifications. Patients' choices about different treatments may change over time. Palliative care is normally offered to terminally ill patients, regardless of their overall disease management style, if it seems likely to help manage symptoms such as pain and improve quality of life. Hospice care, which can be provided at home or in a long-term care facility, additionally provides emotional and spiritual support for the patient and loved ones. Some complementary approaches, such as relaxation therapy, massage, and acupuncture, may relieve some symptoms and other causes of suffering. Management: Caregiving Terminal patients often need a caregiver, who could be a nurse, licensed practical nurse or a family member. Caregivers can help patients receive medications to reduce pain and control symptoms of nausea or vomiting. They can also assist the individual with daily living activities and movement.
Caregivers provide assistance with food and psychological support and ensure that the individual is comfortable. The patient's family may have questions, and most caregivers can provide information to help put their minds at ease. Doctors generally do not provide estimates, for fear of instilling false hope or obliterating an individual's hope. In most cases, the caregiver works along with physicians and follows professional instructions. Caregivers may call the physician or a nurse if the individual: experiences excessive pain; is in distress or having difficulty breathing; has difficulty passing urine or is constipated; has fallen and appears hurt; is depressed and wants to harm themselves; refuses to take prescribed medications, raising ethical concerns best addressed by a person with more extensive formal training; or if the caregiver does not know how to handle the situation. Most caregivers become the patient's listeners and let the individual express fears and concerns without judgment. Caregivers reassure the patient and honor all advance directives. Caregivers respect the individual's need for privacy and usually hold all information confidential. Management: Palliative care Palliative care focuses on addressing patients' needs after disease diagnosis. While palliative care is not disease treatment, it addresses patients' physical needs, such as pain management, offers emotional support, caring for the patient psychologically and spiritually, and helps patients build support systems that can help them get through difficult times. Palliative care can also help patients make decisions and come to understand what they want regarding their treatment goals and quality of life. Palliative care is an attempt to improve patients' quality of life and comfort, and also provide support for family members and carers. Additionally, it lowers hospital admission costs. However, needs for palliative care are often unmet, whether due to lack of government support or to the possible stigma associated with palliative care. For these reasons, the World Health Assembly recommends development of palliative care in health care systems. Palliative care and hospice care are often confused, and they have similar goals. However, hospice care is specifically for terminal patients, while palliative care is more general and offered to patients who are not necessarily terminal. Management: Hospice care While hospitals focus on treating the disease, hospices focus on improving patient quality of life until death. A common misconception is that hospice care hastens death because patients "give up" fighting the disease. However, patients in hospice care often live the same length of time as patients in the hospital. A study of 3850 liver cancer patients found that patients who received hospice care, and those who did not, survived for the same amount of time. In fact, a study of 3399 adult lung cancer patients showed that patients who received hospice care actually survived longer than those who did not. Additionally, in both of these studies, patients receiving hospice care had significantly lower healthcare expenditures. Hospice care allows patients to spend more time with family and friends. Since patients are in the company of other hospice patients, they have an additional support network and can learn to cope together. Hospice patients are also able to live at peace away from a hospital setting; they may live at home with a hospice provider or at an inpatient hospice facility.
Management: Medications for terminal patients Terminal patients experiencing pain, especially cancer-related pain, are often prescribed opioids to relieve suffering. The specific medication prescribed, however, will differ depending on severity of pain and disease status. There exist inequities in availability of opioids to terminal patients, especially in countries where opioid access is limited. A common symptom that many terminal patients experience is dyspnea, or difficulty with breathing. To ease this symptom, doctors may also prescribe opioids to patients. Some studies suggest that oral opioids may help with breathlessness. However, due to lack of consistent reliable evidence, it is currently unclear whether they truly work for this purpose. Depending on the patient's condition, other medications will be prescribed accordingly. For example, if patients develop depression, antidepressants will be prescribed. Anti-inflammatory and anti-nausea medications may also be prescribed. Management: Continued treatment Some terminal patients opt to continue extensive treatments in hope of a miracle cure, whether by participating in experimental treatments and clinical trials or seeking more intense treatment for the disease. Rather than to "give up fighting," patients spend thousands more dollars to try to prolong life by a few more months. What these patients often do give up, however, is quality of life at the end of life by undergoing intense and often uncomfortable treatment. A meta-analysis of 34 studies including 11,326 patients from 11 countries found that less than half of all terminal patients correctly understood their disease prognosis, or the course of their disease and likelihood of survival. This could influence patients to pursue unnecessary treatment for the disease due to unrealistic expectations. Management: Transplant For patients with end-stage kidney failure, studies have shown that transplants increase the quality of life and decrease mortality in this population. In order to be placed on the organ transplant list, patients are referred and assessed based on criteria that range from current comorbidities to potential for organ rejection post transplant. Initial screening measures include: blood tests, pregnancy tests, serologic tests, urinalysis, drug screening, imaging, and physical exams. For patients who are interested in liver transplantation, patients with acute liver failure have the highest priority over patients with only cirrhosis. Acute liver failure patients will present with worsening symptoms of somnolence or confusion (hepatic encephalopathy) and thinner blood (increased INR) due to the liver's inability to make clotting factors. Some patients could experience portal hypertension, hemorrhages, and abdominal swelling (ascites). The Model for End-Stage Liver Disease (MELD) score is often used to help providers decide and prioritize candidates for transplant. Management: Physician-assisted suicide Physician-assisted suicide (PAS) is highly controversial and legal in only a few countries. In PAS, physicians, with voluntary written and verbal consent from the patient, give patients the means to die, usually through lethal drugs. The patient then chooses to "die with dignity," deciding on their own time and place to die. Reasons as to why patients choose PAS differ.
Factors that may play into a patient's decision include future disability and suffering, lack of control over death, impact on family, healthcare costs, insurance coverage, personal beliefs, religious beliefs, and much more. PAS may be referred to in many different ways, such as aid in dying, assisted dying, death with dignity, and many more. These often depend on the organization and the stance it takes on the issue. It is referred to here as PAS for consistency. Management: In the United States, PAS or medical aid in dying is legal in select states, including Oregon, Washington, Montana, Vermont, and New Mexico, and there are groups both in favor of and against legalization. Some groups favor PAS because they do not believe they will have control over their pain, because they believe they will be a burden on their family, and because they do not want to lose autonomy and control over their own lives, among other reasons. They believe that allowing PAS is an act of compassion. While some groups believe in personal choice over death, others raise concerns regarding insurance policies and potential for abuse. According to Sulmasy et al., the major non-religious arguments against physician-assisted suicide are quoted as follows: (1) "it offends me", suicide devalues human life; (2) slippery slope, the limits on euthanasia gradually erode; (3) "pain can be alleviated", palliative care and modern therapeutics more and more adequately manage pain; (4) physician integrity and patient trust, participating in suicide violates the integrity of the physician and undermines the trust patients place in physicians to heal and not to harm. There are also arguments that there are enough protections in the law to avoid the slippery slope. For example, the Death with Dignity Act in Oregon includes waiting periods, multiple requests for lethal drugs, a psychiatric evaluation in the case of possible depression influencing decisions, and the patient personally swallowing the pills to ensure the decision is voluntary. Physicians and medical professionals also have differing views on PAS. Some groups, such as the American College of Physicians (ACP), the American Medical Association (AMA), the World Health Organization, American Nurses Association, Hospice Nurses Association, American Psychiatric Association, and more, have issued position statements against its legalization. The ACP's argument concerns the nature of the doctor-patient relationship and the tenets of the medical profession. They state that instead of using PAS to control death: "through high-quality care, effective communication, compassionate support, and the right resources, physicians can help patients control many aspects of how they live out life's last chapter." Other groups, such as the American Medical Students Association, the American Public Health Association, the American Medical Women's Association, and more, support PAS as an act of compassion for the suffering patient. In many cases, the argument on PAS is also tied to proper palliative care. The International Association for Hospice and Palliative Care issued a position statement arguing against considering legalizing PAS unless comprehensive palliative care systems in the country were in place. It could be argued that with proper palliative care, the patient would experience fewer intolerable symptoms, physical or emotional, and would not choose death over these symptoms.
Palliative care would also ensure that patients receive proper information about their disease prognosis, so as not to make decisions about PAS without complete and careful consideration. Medical care: Many aspects of medical care are different for terminal patients compared to patients in the hospital for other reasons. Medical care: Doctor–patient relationships Doctor–patient relationships are crucial in any medical setting, and especially so for terminal patients. There must be an inherent trust in the doctor to provide the best possible care for the patient. In the case of terminal illness, there is often ambiguity in communication with the patient about their condition. While a terminal prognosis is often a grave matter, doctors do not wish to quash all hope, for doing so could unnecessarily harm the patient's mental state and have unintended consequences. However, being overly optimistic about outcomes can leave patients and families devastated when negative results arise, as is often the case with terminal illness. Medical care: Mortality predictions Often, a patient is considered terminally ill when his or her estimated life expectancy is six months or less, under the assumption that the disease will run its normal course based on previous data from other patients. The six-month standard is arbitrary, and the best available estimates of longevity may be incorrect. Though a given patient may properly be considered terminal, this is not a guarantee that the patient will die within six months. Similarly, a patient with a slowly progressing disease, such as AIDS, may not be considered terminally ill if the best estimate of longevity is greater than six months. However, this does not guarantee that the patient will not die unexpectedly early. In general, physicians slightly overestimate the survival time of terminally ill cancer patients, so that, for example, a person expected to live for about six weeks would be likely to die around four weeks. A recent systematic review on palliative patients in general, rather than specifically cancer patients, states the following: "Accuracy of categorical estimates in this systematic review ranged from 23% up to 78% and continuous estimates over-predicted actual survival by, potentially, a factor of two." There was no evidence that any specific type of clinician was better at making these predictions. Medical care: Healthcare spending Healthcare during the last year of life is costly, especially for patients who used hospital services often at the end of life. In fact, according to Langton et al., there were "exponential increases in service use and costs as death approached." Many dying terminal patients are also brought to the emergency department (ED) at the end of life when treatment is no longer beneficial, raising costs and using limited space in the ED. While there are often claims about "disproportionate" spending of money and resources on end-of-life patients, data have not proven this type of correlation. The cost of healthcare for end-of-life patients is 13% of annual healthcare spending in the U.S. However, of the group of patients with the highest healthcare spending, end-of-life patients made up only 11%, meaning the most expensive spending does not come mostly from terminal patients. Many recent studies have shown that palliative care and hospice options, as an alternative, are much less expensive for end-of-life patients. Psychological impact: Coping with impending death is universally difficult to process. 
Patients may experience grief, fear, loneliness, depression, and anxiety, among many other possible responses. Terminal illness can also make patients more prone to psychological illnesses such as depression and anxiety disorders; insomnia is a common symptom of these. It is important for loved ones to show their support for the patient during these times and to listen to his or her concerns. People who are terminally ill may not always come to accept their impending death. For example, a person who finds strength in denial may never reach a point of acceptance or accommodation and may react negatively to any statement that threatens this defense mechanism. Psychological impact: Impact on patient Depression is relatively common among terminal patients, and its prevalence increases as patients become sicker. Depression causes quality of life to go down, and a sizable portion of patients who request assisted suicide are depressed. These negative emotions may be heightened by lack of sleep and pain as well. Depression can be treated with antidepressants and/or therapy, but doctors often do not realize the extent of terminal patients' depression. Because depression is common among terminal patients, the American College of Physicians recommends regular assessments for depression for this population and the appropriate prescription of antidepressants. Anxiety disorders are also relatively common for terminal patients as they face their mortality. Patients may feel distressed when thinking about what the future may hold, especially when considering the future of their families as well. It is important to note, however, that some palliative medications may contribute to anxiety. Psychological impact: Coping for patients Caregivers may listen to the concerns of terminal patients to help them reflect on their emotions. Different forms of psychotherapy and psychosocial intervention, which can be offered with palliative care, may also help patients think about and overcome their feelings. According to Block, "most terminally ill patients benefit from an approach that combines emotional support, flexibility, appreciation of the patient's strengths, a warm and genuine relationship with the therapist, elements of life-review, and exploration of fears and concerns." Impact on family Terminal patients' families often also suffer psychological consequences. If not well equipped to face the reality of their loved one's illness, family members may develop depressive symptoms and even have increased mortality. Taking care of sick family members may also cause stress, grief, and worry. Additionally, the financial burden of medical treatment may be a source of stress. Parents of terminally ill children face additional challenges beyond these mental health stressors, including difficulty balancing caregiving with maintaining employment. Many report feeling as if they have to "do it all" by balancing care for their chronically ill child, limiting absence from work, and supporting their family members. Children of terminally ill parents often experience a role reversal in which they become the caretakers of their adult parents. In taking on the burden of caring for their sick parent and assuming the responsibilities the parent can no longer accomplish, many children also experience significant declines in academic performance. Psychological impact: Coping for family Discussing the anticipated loss and planning for the future may help family members accept and prepare for the patient's death. 
Interventions may also be offered for anticipatory grief. In the case of more serious consequences such as depression, a more serious intervention or therapy is recommended. Upon the death of someone who is terminally ill, many family members who served as caregivers are likely to experience declines in their mental health. Grief counseling and grief therapy may also be recommended for family members after a loved one's death. Dying: When dying, patients often worry about their quality of life towards the end, including emotional and physical suffering. In order for families and doctors to understand clearly what the patient wants, it is recommended that patients, doctors, and families all convene and discuss the patient's decisions before the patient becomes unable to decide. Dying: Advance directives At the end of life, especially when patients are unable to make decisions regarding treatment on their own, it often falls to family members and doctors to decide what they believe the patients would have wanted regarding their deaths, which is a heavy burden and hard for family members to predict. An estimated 25% of American adults have an advance directive, meaning the majority of Americans leave these decisions to be made by family, which can lead to conflict and guilt. Although it may be a difficult subject to broach, it is important to discuss the patient's plans for how far to continue treatment should they become unable to decide. This must be done while the patient is still able to make the decisions, and takes the form of an advance directive. The advance directive should be updated regularly as the patient's condition changes so as to reflect the patient's wishes. Some of the decisions that advance directives may address include receiving fluids and nutrition support, getting blood transfusions, receiving antibiotics, resuscitation (if the heart stops beating), and intubation (if the patient stops breathing). Having an advance directive can improve end-of-life care, and many research studies and meta-analyses strongly recommend that patients discuss and create an advance directive with their doctors and families. Dying: Do-not-resuscitate One of the options of care that patients may discuss with their families and medical providers is the do-not-resuscitate (DNR) order. This means that if the patient's heart stops, CPR and other methods to restore heartbeat would not be performed. This is the patient's choice to make and can depend on a variety of reasons, whether based on personal beliefs or medical concerns. DNR orders can be medically and legally binding depending on the applicable jurisdiction. Decisions like these should be indicated in the advance directive so that the patient's wishes can be carried out to improve end-of-life care. Dying: Symptoms near death A variety of symptoms become more apparent when a patient is nearing death. Recognizing these symptoms and knowing what will come may help family members prepare. During the final few weeks, symptoms will vary largely depending on the patient's disease. During the final hours, patients usually reject food and water and sleep more, choosing not to interact with those around them. Their bodies may behave more irregularly, with changes in breathing, sometimes with longer pauses between breaths, irregular heart rate, low blood pressure, and coldness in the extremities. 
It is important to note, however, that symptoms will vary per patient. Dying: Good death Patients, healthcare workers, and recently bereaved family members often describe a "good death" in terms of effective choices made in a few areas: assurance of effective pain and symptom management; education about death and its aftermath, especially as it relates to decision-making; and completion of any significant goals, such as resolving past conflicts. In the last hours of life, palliative sedation may be recommended by a doctor or requested by the patient to ease the symptoms of death until they die. Palliative sedation is not intended to prolong life or hasten death; it is merely meant to relieve symptoms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Peeps** Peeps: Peeps are a marshmallow confection marketed since 1953 in the United States and Canada in the shape of chicks, bunnies, and other animals, as well as holiday shapes, produced by Pennsylvania-headquartered Just Born Quality Confections. Originally promoted primarily at Easter, Peeps have subsequently been marketed as "Always in Season", and have expanded to Halloween, Christmas and Valentine's Day. Since 2014 the confection has been available year-round with the introduction of Peeps Minis. Peeps ingredients include sugar, corn syrup, gelatin, food dyes and salt. History: Peeps are produced by Just Born, a candy manufacturer founded in Bethlehem, Pennsylvania, by Sam Born (1891–1959), an immigrant from Vinnitsa in the Russian Empire (now in Ukraine). In 1953, Just Born acquired the Rodda Candy Company and its marshmallow chick line, and replaced the painstaking process of hand-forming the chicks with mass production. When founder Sam Born displayed a sign for his freshly made candy, he titled it "Just Born", playing off his last name and the fact that he made his candy fresh daily. According to Mary Bellis, with the newly purchased company Just Born was soon the "largest marshmallow candy manufacturer in the world." Just Born began producing other shapes in the 1960s, following seasonal themes. Twenty years later, the Marshmallow Peeps Bunny was released as a popular year-round shape of the candy. The yellow chicks were the original form of the candy (hence their name), but the company then introduced other colors and, eventually, the myriad shapes in which they are now produced. Peeps were manufactured in additional colors, such as lavender starting in 1995 and blue in 1998; prior to that, they had been produced only in the traditional colors of yellow, pink and white. New flavors such as vanilla, strawberry, and chocolate were introduced between 1999 and 2002. In 2009, Just Born expanded the Peeps product line further by introducing Peeps Lip Balm in four flavors: grape, strawberry, vanilla, and cotton candy. Just Born has also come out with various other accessories; items such as nail polish, wrist bands, umbrellas, plush toys, golf gloves, earrings, and necklaces are produced and sold online and in retail stores. Other companies have produced items based on the popular Peeps candy. Peeps micro-bead pillows, which conform to one's shape, were made by Kaboodle, which promises that "they'll last a lot longer than their edible counterpart!" Ranging from infant to adult sizes, Peeps Halloween costumes can also be found on the shelves of several costume stores. The first Peeps & Co. store opened in November 2009 in National Harbor, Maryland, in Prince George's County. Peeps & Company retail stores were later opened in Minnesota (shut down as of 2019 due to low profits) and Pennsylvania. In 2014, Peeps Minis were introduced and were intended to be available year-round. Contests and competitions: A "Peeps Eating" contest is held each year at National Harbor in front of the Peeps & Company store. The 2017 winner, Matt Stonie of California, ate 255 Peeps in five minutes. The first such event was arranged by Shawn Sparks in 1994 and had only six participants. Dave Smith started an annual Peep Off in Sacramento after contacting a participant in the first Peep Off. Another contest in Maryland asks that participants create a diorama of a culturally important scene from the modern era, featuring a number of Peeps. 
The winner gets two free inflatable life jackets. Contests and competitions: Several newspapers hold annual contests in which readers submit photos of dioramas featuring Peeps. The St. Paul Pioneer Press was the first paper to hold such a contest. Similar contests are put on by the Chicago Tribune and the Seattle Times. These contests frequently correspond with the Easter holiday. MIT also has a yearly Peeps contest. The Washington Post held an annual contest until 2017, when it was discontinued; the smaller Washington City Paper introduced its contest in its place. Contests and competitions: The Racine Art Museum holds the Annual International Peeps Art Exhibition each April. Anyone can enter the contest, which is centered on the theme "peep-powered work of art". The following are other contests held in various states. Peeps jousting consists of putting two Marshmallow Chicks into the microwave and seeing which one expands the most and thereby deforms the other. "Peepza" is a dessert pizza made with Peeps. According to Fox News, blogs have also been created entitled "101 Fun Ways to Torture a Peep." Alleged indestructibility: Peeps are sometimes jokingly described as "indestructible". In 1999, scientists at Emory University jokingly performed experiments on batches of Peeps to see how easily they could be dissolved, burned or otherwise disintegrated, using such agents as cigarette smoke, boiling water and liquid nitrogen. In addition to discussing whether Peeps migrate or evolve, they claimed that the eyes of the confectionery "wouldn't dissolve in anything". One website claims that Peeps are insoluble in acetone, water, diluted sulfuric acid, and sodium hydroxide (the site also claims that the Peeps experimental subjects sign release forms). Concentrated sulfuric acid seems to have effects similar to the expected effects of sulfuric acid on sugar. This debate featured in an episode of the sitcom Malcolm in the Middle ("Traffic Jam"), in which Francis, insisting the "Quacks" (as they were called) would dissolve in his stomach rather than expand, takes up the dare to eat 100 of them and does so, but gets very sick in the process. Alleged indestructibility: As marshmallow ages while exposed to air, it dehydrates, becoming "stale" and slightly crunchy. According to Just Born, 25%–30% of their customers prefer eating Peeps stale. Public relations: Barry Church, a football player for the Dallas Cowboys, was unable to attend training camp due to an unexpected root canal caused by eating Jolly Ranchers. Just Born offered Church a season's supply of their product, marshmallows being a lot softer on the teeth. Recipes using Peeps: Several recipes and creative ideas for altering Peeps have been invented. Fox News Magazine published an article in 2013 including several recipes from various creators, including Peeps s'mores, Peeps pancakes, home-made chocolate-covered Peeps, Peeps marshmallow chocolate chip cookies, Peeps brownies, Peeps popcorn, Peeps frosting, Peeps Krispie Treats, and Peeps syrup. Peeps can also be used as a marshmallow topper for hot chocolate. A recipe for "Peepshi" involves placing a Peep onto a Rice Krispie Treat and wrapping it in a Fruit by the Foot, to create a single "Peepshi roll" in the style of a sushi roll. In April 2017, several internet and Twitter postings and TV news stories claimed 'outrage' that Peeps were being used as a pizza topping. In March 2021, Just Born and PepsiCo (maker of Pepsi products) announced a "limited edition" of "Peeps marshmallow cola" soft drinks. 
Film adaptation: On April 22, 2014, Adam Rifkin acquired the feature film and TV rights to the classic candies to build a franchise around them. Then, on April 5, 2021, it was announced that Wonder Street had acquired the rights to the candies, with David Goldblum writing and producing alongside Christine and Mark Holder. The film's plot centers on a ragtag group of Peeps characters who set out on a cross-country journey in order to attend Peepsfest, an annual brand celebration in Pennsylvania.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Macaulay representation of an integer** Macaulay representation of an integer: Given positive integers $n$ and $d$, the $d$-th Macaulay representation of $n$ is an expression for $n$ as a sum of binomial coefficients: $$n = \binom{c_d}{d} + \binom{c_{d-1}}{d-1} + \cdots + \binom{c_2}{2} + \binom{c_1}{1}.$$ Here, $c_1, \ldots, c_d$ is a uniquely determined, strictly increasing sequence of nonnegative integers known as the Macaulay coefficients. For any two positive integers $n_1$ and $n_2$, $n_1$ is less than $n_2$ if and only if the sequence of Macaulay coefficients for $n_1$ comes before the sequence of Macaulay coefficients for $n_2$ in lexicographic order. Macaulay coefficients are also known as the combinatorial number system.
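The coefficients can be computed greedily: working from index d down to 1, take the largest c whose binomial coefficient still fits into the remainder. Below is a minimal sketch in Python (the function name is ours, not a standard API):

```python
from math import comb

def macaulay_representation(n, d):
    """Greedy computation of the d-th Macaulay representation of n.

    Returns [c_d, c_{d-1}, ..., c_1] such that
    n = C(c_d, d) + C(c_{d-1}, d-1) + ... + C(c_1, 1),
    with c_d > c_{d-1} > ... > c_1 >= 0.
    """
    coeffs = []
    remaining = n
    for i in range(d, 0, -1):
        # Pick the largest c with C(c, i) <= remaining.
        c = i - 1                      # C(i-1, i) == 0, a safe starting point
        while comb(c + 1, i) <= remaining:
            c += 1
        coeffs.append(c)
        remaining -= comb(c, i)
    return coeffs

# Example: the 3rd Macaulay representation of 8 is C(4,3) + C(3,2) + C(1,1).
assert macaulay_representation(8, 3) == [4, 3, 1]
```

The greedy choice is what forces the coefficients to be strictly decreasing from c_d down to c_1, which in turn gives the uniqueness and the lexicographic-order property stated above.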
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trailer (computing)** Trailer (computing): In information technology, a trailer or footer refers to supplemental data (metadata) placed at the end of a block of data being stored or transmitted, which may contain information for the handling of the data block, or simply mark the block's end. In data transmission, the data following the end of the header and preceding the start of the trailer is called the payload or body. Trailer (computing): It is vital that trailer composition follow a clear and unambiguous specification or format, to allow for parsing. If a trailer is not removed properly, or if part of the payload is mistakenly removed as though it were a trailer, the data block is corrupted. Depending on the protocol, a trailer may carry handling information for the block, such as a checksum or frame check sequence used to detect transmission errors; addressing information, such as a message's destination, is normally carried in the header instead. Examples: In data transfer, the OSI model's data link layer adds a trailer at the end of frames as part of data encapsulation.
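To make the parsing requirement concrete, here is a minimal sketch of a hypothetical framing scheme in which a fixed-size, four-byte CRC-32 trailer both marks the block's end and lets the receiver detect corruption; the function names and layout are invented and do not correspond to any particular protocol:

```python
import struct
import zlib

def add_trailer(payload: bytes) -> bytes:
    """Append a 4-byte big-endian CRC-32 trailer to the payload."""
    crc = zlib.crc32(payload)
    return payload + struct.pack(">I", crc)

def strip_trailer(frame: bytes) -> bytes:
    """Remove and verify the trailer; a wrong split corrupts the payload."""
    payload, trailer = frame[:-4], frame[-4:]
    (expected,) = struct.unpack(">I", trailer)
    if zlib.crc32(payload) != expected:
        raise ValueError("trailer checksum mismatch: frame corrupted")
    return payload

frame = add_trailer(b"hello, world")
assert strip_trailer(frame) == b"hello, world"
```

Because the trailer here has a fixed, specified size, the receiver always knows exactly how many bytes to remove, which is the unambiguity the paragraph above calls for.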
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Methyl ricinoleate** Methyl ricinoleate: Methyl ricinoleate is a clear, viscous fluid that is used as a surfactant, cutting fluid additive, lubricant, and plasticizer. It is a plasticizer for cellulosic resins, polyvinyl acetate, and polystyrene. It is a type of fatty acid methyl ester synthesized from castor oil and methyl alcohol.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stochastic volatility jump** Stochastic volatility jump: In mathematical finance, the stochastic volatility jump (SVJ) model is suggested by Bates. This model fits the observed implied volatility surface well. The model is a Heston process for stochastic volatility with an added Merton log-normal jump. It assumes the following correlated processes: $$dS = \mu S\,dt + \sqrt{\nu}\,S\,dZ_1 + (e^{\alpha + \delta\varepsilon} - 1)\,S\,dq$$ $$d\nu = \lambda(\nu - \bar{\nu})\,dt + \eta\sqrt{\nu}\,dZ_2$$ $$\operatorname{corr}(dZ_1, dZ_2) = \rho, \qquad \operatorname{prob}(dq = 1) = \lambda\,dt$$ where $S$ is the price of the security, $\mu$ is the constant drift (i.e. expected return), $t$ represents time, $Z_1$ is a standard Brownian motion, and $q$ is a Poisson counter with density $\lambda$.
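As a quick illustration of how these dynamics can be simulated, the following is a minimal Euler-discretization sketch; all parameter values are invented, the variance drift is written in the mean-reverting Heston form λ(ν̄ − ν), and the full-truncation device max(ν, 0) is one common way to keep the discretized variance nonnegative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 252                           # one year of daily steps
dt = T / n
mu, lam_v, vbar, eta, rho = 0.05, 1.5, 0.04, 0.3, -0.7   # assumed values
lam_j, alpha, delta = 0.5, -0.05, 0.1     # jump intensity and size params

S, v = 100.0, 0.04                        # initial price and variance
for _ in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()  # corr = rho
    jump = 0.0
    if rng.random() < lam_j * dt:                    # prob(dq = 1) = lambda dt
        eps = rng.standard_normal()
        jump = np.exp(alpha + delta * eps) - 1.0     # log-normal jump size
    vpos = max(v, 0.0)                               # full truncation
    S += mu * S * dt + np.sqrt(vpos * dt) * S * z1 + jump * S
    v += lam_v * (vbar - vpos) * dt + eta * np.sqrt(vpos * dt) * z2

print(f"terminal price: {S:.2f}, terminal variance: {v:.4f}")
```

Pricing applications would average the discounted payoff over many such paths; this sketch only shows the path dynamics themselves.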
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PCP theorem** PCP theorem: In computational complexity theory, the PCP theorem (also known as the PCP characterization theorem) states that every decision problem in the NP complexity class has probabilistically checkable proofs (proofs that can be checked by a randomized algorithm) of constant query complexity and logarithmic randomness complexity (i.e., using a logarithmic number of random bits). The PCP theorem says that for some universal constant K, for every n, any mathematical proof for a statement of length n can be rewritten as a different proof of length poly(n) that is formally verifiable with 99% accuracy by a randomized algorithm that inspects only K letters of that proof. PCP theorem: The PCP theorem is the cornerstone of the theory of computational hardness of approximation, which investigates the inherent difficulty in designing efficient approximation algorithms for various optimization problems. It has been described by Ingo Wegener as "the most important result in complexity theory since Cook's theorem" and by Oded Goldreich as "a culmination of a sequence of impressive works […] rich in innovative ideas". Formal statement: The PCP theorem states that NP = PCP[O(log n), O(1)], where PCP[r(n), q(n)] is the class of problems for which a probabilistically checkable proof of a solution can be given, such that the proof can be checked in polynomial time using r(n) bits of randomness and by reading q(n) bits of the proof, correct proofs are always accepted, and incorrect proofs are rejected with probability at least 1/2. Here n is the length in bits of the description of a problem instance. Note further that the verification algorithm is non-adaptive: the choice of which bits of the proof to check depends only on the random bits and the description of the problem instance, not on the actual bits of the proof. PCP and hardness of approximation: An alternative formulation of the PCP theorem states that the maximum fraction of satisfiable constraints of a constraint satisfaction problem is NP-hard to approximate within some constant factor. Formally, for some constants q and α < 1, the following promise problem (Lyes, Lno) is an NP-hard decision problem: Lyes = {Φ: all constraints in Φ are simultaneously satisfiable} Lno = {Φ: every assignment satisfies fewer than an α fraction of Φ's constraints}, where Φ is a constraint satisfaction problem (CSP) over a Boolean alphabet with at most q variables per constraint. PCP and hardness of approximation: The connection to the class PCP mentioned above can be seen by noticing that checking a constant number of bits q in a proof can be seen as evaluating a constraint in q Boolean variables on those bits of the proof. Since the verification algorithm uses O(log n) bits of randomness, it can be represented as a CSP as described above with poly(n) constraints. The other characterisation of the PCP theorem then guarantees the promise condition with α = 1/2: if the NP problem's answer is yes, then every constraint (which corresponds to a particular value for the random bits) has a satisfying assignment (an acceptable proof); otherwise, any proof should be rejected with probability at least 1/2, which means any assignment must satisfy fewer than 1/2 of the constraints (which means it will be accepted with probability lower than 1/2). Therefore, an algorithm for the promise problem would be able to solve the underlying NP problem, and hence the promise problem must be NP-hard. 
PCP and hardness of approximation: As a consequence of this theorem, it can be shown that the solutions to many natural optimization problems, including maximum boolean formula satisfiability, maximum independent set in graphs, and the shortest vector problem for lattices, cannot be approximated efficiently unless P = NP. This can be done by reducing the problem of approximating a solution to such problems to a promise problem of the above form. These results are sometimes also called PCP theorems because they can be viewed as probabilistically checkable proofs for NP with some additional structure. Proof: A proof of a weaker result, NP ⊆ PCP[n³, 1], is given in one of the lectures of Dexter Kozen. History: The PCP theorem is the culmination of a long line of work on interactive proofs and probabilistically checkable proofs. The first theorem relating standard proofs and probabilistically checkable proofs is the statement that NEXP ⊆ PCP[poly(n), poly(n)], proved by Babai, Fortnow & Lund (1990). Origin of the initials The notation PCP_{c(n), s(n)}[r(n), q(n)] is explained at probabilistically checkable proof. The notation is that of a function that returns a certain complexity class. See the explanation mentioned above. The name of this theorem (the "PCP theorem") probably comes either from "PCP" meaning "probabilistically checkable proof", or from the notation mentioned above (or both). History: First theorem [in 1990] Subsequently, the methods used in this work were extended by Babai, Lance Fortnow, Levin, and Szegedy in 1991 (Babai et al. 1991), by Feige, Goldwasser, Lund, Safra, and Szegedy (1991), and by Arora and Safra in 1992 (Arora & Safra 1992) to yield a proof of the PCP theorem by Arora, Lund, Motwani, Sudan, and Szegedy in 1998 (Arora et al. 1998). History: The 2001 Gödel Prize was awarded to Sanjeev Arora, Uriel Feige, Shafi Goldwasser, Carsten Lund, László Lovász, Rajeev Motwani, Shmuel Safra, Madhu Sudan, and Mario Szegedy for work on the PCP theorem and its connection to hardness of approximation. In 2005 Irit Dinur discovered a significantly simpler proof of the PCP theorem, using expander graphs; she received the 2019 Gödel Prize for it. History: Quantum analog In 2012, Thomas Vidick and Tsuyoshi Ito published a result that showed a "strong limitation on the ability of entangled provers to collude in a multiplayer game". This could be a step toward proving the quantum analogue of the PCP theorem, since when the result was reported in the media, professor Dorit Aharonov called it "the quantum analogue of an earlier paper on multiprover interactive proofs" that "basically led to the PCP theorem". In 2018, Thomas Vidick and Anand Natarajan proved a games variant of the quantum PCP theorem under randomized reduction. It states that QMA ⊆ MIP*[log(n), 1, 1/2], where MIP*[f(n), c, s] is a complexity class of multi-prover quantum interactive proof systems with f(n)-bit classical communication, completeness c and soundness s. They also showed that the Hamiltonian version of the quantum PCP conjecture, namely that a local Hamiltonian problem with constant promise gap c − s is QMA-hard, implies the games quantum PCP theorem. History: The NLTS conjecture is a fundamental unresolved obstacle and precursor to a quantum analog of the PCP theorem.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Redlasso** Redlasso: Redlasso is a broadcast media website which allows users to search, clip, and share licensed television and radio content. Initially envisioned as a video clip search engine, the company currently seeks to help publishers "extend the life of their perishable content in a secure and controllable platform, while giving users the ability to share the content they are interested in". History: Redlasso initially refused to suspend its operations after receiving a cease-and-desist letter from CBS, NBC and Fox. Responding to a letter from the networks demanding that Redlasso stop hosting unlicensed clips of their content, Redlasso indicated that it would not stop because it believed its activity was legally permissible. On July 27, 2008, NBC Universal, Inc., Fox News Network, LLC and Fox Television Stations, Inc. filed a copyright infringement suit against Redlasso in the United States District Court for the Southern District of New York. Notwithstanding its prior announcement that it would continue its operations and make broadcast clips available to the public, Redlasso discontinued its service two days after the networks' complaint was filed. On October 22, 2008, the court entered final judgment permanently enjoining Redlasso's service. On March 25, 2009, Fox Television Stations (FTS) and Redlasso entered into an agreement that gives the online broadcast media center the rights to syndicate content from the group's local television news programs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Coastal trading vessel** Coastal trading vessel: Coastal trading vessels, also known as coasters or skoots, are shallow-hulled ships used for trade between locations on the same island or continent. Their shallow hulls mean that they can get through reefs where deeper-hulled seagoing ships (drawing 26–28 feet) usually cannot, but as a result they are not optimized for the large waves found on the open ocean. Coasters can load and unload cargo in shallow ports. For European inland waterways, beam is regulated to 33.49 m. World War II: During World War II there was a demand for coasters to support troops around the world. Type N3 and Type C1 were designations for small cargo ships built for the United States Maritime Commission before and during World War II; both were used for close-to-shore and short cargo runs. The government of the United Kingdom used Empire F-type Empire ships as merchant ships for coastal shipping. UK seamen called these "CHANTs", possibly because they had the same hull form as Channel Tankers (CHANT); initially all the tankers were sold to foreign owners, so there was no conflict in nomenclature. The USA and UK both used coastal tankers as well: the UK used Empire coaster tankers and T1 tankers. Many coasters carried some armament, such as a 5-inch (127 mm) stern gun, a 3-inch (76.2 mm) bow anti-aircraft gun and an Oerlikon 20 mm anti-aircraft gun; armament was removed after the war. After the war many of the ships were sold to private companies all around the world.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Patterned media** Patterned media: Patterned media (also known as bit-patterned media or BPM) is a potential future hard disk drive technology to record data in magnetic islands (one bit per island), as opposed to current hard disk drive technology where each bit is stored in 20–30 magnetic grains within a continuous magnetic film. The islands would be patterned from a precursor magnetic film using nanolithography. It is one of the proposed technologies to succeed perpendicular recording due to the greater storage densities it would enable. BPM was introduced by Toshiba in 2010. Comparison with existing HDD technology: In existing hard disk drives, data is stored in a thin magnetic film. This film is deposited so that it consists of isolated (weakly exchange-coupled) grains of material of around 8 nm diameter. One bit of data consists of around 20–30 grains that are magnetized in the same direction (either "up" or "down", with respect to the plane of the disk). One method of increasing storage density has been to reduce the average grain volume. However, the energy barrier for thermal switching is proportional to the grain volume. With existing materials, further reductions in the grain volume would result in data loss occurring spontaneously due to superparamagnetism (see the illustrative estimate below). Comparison with existing HDD technology: In patterned media, the thin magnetic film is first deposited so that there is strong exchange coupling between the grains. Using nanolithography, it is then patterned into magnetic islands. The strong exchange coupling means that the energy barrier is now proportional to the island volume, rather than the volume of individual grains within the island. Therefore, storage density increases can be achieved by patterning islands of increasingly small diameter, while maintaining thermal stability. Patterned media is predicted to enable areal densities up to 20–300 Tbit/in2 (3.1–46.5 Tbit/cm2), as opposed to the 1 Tbit/in2 (160 Gbit/cm2) limit that exists with current HDD technology. Comparison with existing HDD technology: Differences in read/write head control strategies In existing HDDs, data bits are ideally written on concentric circular tracks. This process is different in bit-patterned media recording, where data should be written on tracks with predetermined shapes, which are created by lithography (see below) on the disk. The trajectories that are required to be followed by the servo system in patterned media recording are characterized by a set of "servo tracks" existing on the disk. Deviation of a servo track from an ideal circular shape is called "repeatable runout" (RRO). Therefore, the servo controller in bit-patterned media recording has to follow the RRO, which is unknown at the time of design, and as a result the servo control methodologies used for conventional drives cannot be applied. Patterned media recording has some specific challenges in terms of servo control design: the RRO profile is unknown. Comparison with existing HDD technology: The RRO frequency spectrum can spread beyond the bandwidth of the servo system; therefore, it will be amplified by the feedback controller. The RRO spectrum contains many harmonics of the spindle frequency (e.g. ~200 harmonics) that should be attenuated, which increases the computational burden in the controller. The RRO profile changes from track to track (i.e. it is varying). 
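To make the thermal-stability argument above concrete: a grain's magnetization relaxes on the Néel–Arrhenius timescale τ = τ₀·exp(KV/k_BT), so retention collapses exponentially as grain volume V shrinks. The following is a rough, illustrative estimate only; the anisotropy constant K and attempt time τ₀ are assumed round values, not figures from the article:

```python
import math

# Illustrative Neel-Arrhenius estimate: tau = tau0 * exp(K*V / (kB*T)).
KB = 1.380649e-23     # Boltzmann constant, J/K
TAU0 = 1e-9           # attempt time, s (assumed)
K = 6e5               # anisotropy energy density, J/m^3 (assumed)
T = 300.0             # room temperature, K

for d_nm in (8.0, 6.0, 4.0):                      # grain diameters
    v = (math.pi / 6) * (d_nm * 1e-9) ** 3        # spherical grain volume, m^3
    barrier = K * v / (KB * T)                    # energy barrier in units of kT
    tau = TAU0 * math.exp(barrier)                # expected relaxation time, s
    print(f"{d_nm:.0f} nm grain: KV/kT = {barrier:5.1f}, tau ~ {tau:.1e} s")
```

Halving the grain diameter divides the barrier by eight, which is why patterned media ties the barrier to the whole island's volume instead of the individual grain volume.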
Methods of patterned media fabrication: Ion beam lithography In preliminary research, one of the processes investigated for creating prototypes was ion beam proximity lithography. This uses stencil masks to produce patterns in an ion-sensitive material (resist), which are then transferred to magnetic material. The stencil mask contains a thin free-standing silicon nitride membrane in which openings are formed. The pattern to be generated is first formed on a substrate that contains a photoresist using electron beam lithography. Next, the substrate is used to transfer the given pattern onto the nitride membrane (stencil mask) using plasma etching. Creating satisfactory substrates requires maintaining the size uniformity of the openings transferred to the mask during the fabrication (etching) process. Many factors contribute to achieving and maintaining size uniformity in the mask, such as the pressure, temperature, energy (amount of voltage), and power used when etching. To etch uniform patterns correctly under these parameters, the substrate can be used as a template to fabricate stencil masks of silicon nitride through ion beam proximity lithography. The stencil mask can then be used as a prototype to create patterned media. Methods of patterned media fabrication: Directed self-assembly of block copolymer films In 2014, Ricardo Ruiz of Hitachi Global Storage Technologies wrote in a briefing note for an upcoming conference that "the most promising solution to the lithographic challenge can be found in directed self-assembly of block copolymer films which has recently evolved as a viable technique to achieve sub-20nm lithography in time for BPM technology".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jacob's ladder (nautical)** Jacob's ladder (nautical): The term Jacob's ladder, used on a ship, applies to two kinds of rope ladders. The first is a flexible hanging ladder. It consists of vertical ropes or chains supporting horizontal rungs, historically round and wooden. Today, flat-runged flexible ladders are also called Jacob's ladders. The name is commonly used without the apostrophe (Jacobs ladder). Jacob's ladder (nautical): They are used to allow access over the side of ships, and as a result pilot ladders are often incorrectly referred to as Jacob's ladders. A pilot ladder has specific regulations on step size, spacing and the use of spreaders; it is the use of spreaders that distinguishes a pilot ladder from a Jacob's ladder. Spreaders are long treads that extend well past the vertical ropes to stop the ladder from twisting about its long axis (possible when a ship rolls and the ladder is no longer in contact with the ship's side), which could trap the person between the ship's side and the ladder. When not being used, the ladder is stowed away (usually rolled up) rather than left hanging. On late 19th-century warships this kind of ladder would replace the normal fixed ladders on deck during battle: in the days before a battle, the fixed ladders and railings would be removed and replaced with Jacob's ladders and ropes, to prevent them from blocking lines of sight or turning into shrapnel when hit by enemy shells. Jacob's ladder (nautical): The second applies to a kind of ladder found on square-rigged ships. To climb above the lower mast to the topmast and above, sailors must get around the top, a platform projecting from the mast. Although on many ships the only way round was the overhanging futtock shrouds, modern-day tall ships often provide an easier vertical ladder from the ratlines as well. This is the Jacob's ladder. Jacob's ladder (nautical): While they were a popular way of boarding a vessel or carrying out shipside maintenance during the era of wooden ships, and even as recently as the 1950s, their use today on board modern merchant ships is minimal due to obvious safety issues. Today, Jacob's ladders are used only to board lifeboats and liferafts and as a draft ladder.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ramona passive sensor** Ramona passive sensor: Ramona was the second-generation Czechoslovak electronic support measures (ESM) system that uses measurements of the time difference of arrival (TDOA) of pulses at three or four sites to accurately detect and track airborne emitters by multilateration. History: Ramona's designation was KRTP-81 and it carried the NATO reporting name Soft Ball. The designation was derived from the Czech phrase "Komplet radiotechnického průzkumu", meaning "radiotechnical reconnaissance set". A later upgraded version was designated KRTP-81M. Ramona was deployed in 1979 and could semi-automatically track 20 targets simultaneously. It superseded Kopáč. Appearance: Each receiver comprised a large spherical radome mounted on top of a 25 m fixed mast. This radome, made of identical segments of polyurethane foam, contained the radio antennas and the microwave components, intermediate frequency preamplifiers and the two-way communications equipment for communicating between the central and side sites. At first glance the system bore a striking resemblance to typical Eastern European water towers. Mode of operation: The deployed system typically comprised a central site (containing the signal processing equipment and an ESM receiver) and two or three side sites containing only an ESM receiver. The side sites relay the signals received to the central site over a point-to-point microwave link. The central site uses the known propagation delay from the side sites to estimate the TDOA of the pulses at each site. The TDOA of a pulse between one side site and the central site locates the target on a hyperboloid. A second side site provides a second TDOA and hence a second hyperboloid. The intersection of these two hyperboloids places the target on a line, providing a 2D measurement of the target's location (no height); see the numerical sketch below. Mode of operation: Ramona operated over the frequency range of 0.8–18 GHz and provided surveillance over a sector of approximately 100 degrees. System deployment was complex and took between 4 and 12 hours. The system was transportable using thirteen Tatra T138 trucks. Exports: 17 Ramona systems and 14 upgraded Ramona-M systems were built. Of these, 14 Ramona and 10 upgraded systems were exported to the Soviet Union, and the system with serial number 104 was deployed by the Soviet Union in North Korea. Other systems were exported to the German Democratic Republic. Syria received four (three of them Ramona-M) between 1981 and 1994. One of these systems was deployed to Djebel Baruk in Lebanon; this site was first attacked by the Israeli Air Force and then occupied by the IDF during June 1982. Literature: Jiří Hofman, Jan Bauer: Tajemství radiotechnického pátrače Tamara [The Secret of the Radiotechnical Sensor Tamara], 2003, ISBN 80-86645-02-9, in Czech. Describes three generations of the sensors: PRP 1 (1964), Ramona (1979) and Tamara (1989). Jiří Hofman worked in the development of the sensors. Peter Emmett: Silent Trackers: The Specter of Passive Surveillance in the Information Age, in Air Power, summer 2002 issue.
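Returning to the mode of operation: in a 2D simplification, each TDOA constrains the emitter to one hyperbola, and the emitter lies where the curves intersect. Below is a minimal numerical sketch of that intersection using a least-squares solver; the site coordinates, emitter position, and timings are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

C = 3e8                                   # propagation speed, m/s
sites = np.array([[0.0, 0.0],             # central site
                  [20e3, 0.0],            # side site 1
                  [0.0, 20e3]])           # side site 2
emitter_true = np.array([35e3, 25e3])     # assumed ground truth

# Simulated TDOAs between each side site and the central site.
ranges = np.linalg.norm(sites - emitter_true, axis=1)
tdoas = (ranges[1:] - ranges[0]) / C

def residuals(p):
    # Each residual is the mismatch between predicted and measured TDOA,
    # i.e. the distance of the candidate point p from one hyperbola.
    r = np.linalg.norm(sites - p, axis=1)
    return (r[1:] - r[0]) / C - tdoas

# Solve the hyperbola intersection numerically from a rough initial guess.
est = least_squares(residuals, x0=np.array([10e3, 10e3])).x
print(est)   # approximately [35000. 25000.]
```

With a third side site the same least-squares formulation gains a redundant measurement, which in practice helps average out timing noise.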
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flash Core Module** Flash Core Module: IBM FlashCore Modules (FCM) are solid-state computer data storage modules using PCI Express attachment and the NVMe command set. The raw storage capacities are 4.8 TB, 9.6 TB, 19.2 TB and 38.4 TB. The FlashCore modules support hardware self-encryption and real-time inline hardware data compression without performance impact. They are used in selected arrays from the IBM FlashSystem family. History: On September 17, 2007, Texas Memory Systems (TMS) announced the RamSan-500, the world's first enterprise-class flash-based solid state disk (SSD). The flash modules were designed from the ground up by Texas Memory Systems using proprietary form factors, physical connectivity, a hard-decision ECC algorithm, and a flash translation layer (FTL) contained completely inside the SSD. The flash controllers used a hardware-only data path that enabled lower latency than any commodity controllers could achieve. This product marked the beginning of development of the RamSan all-flash arrays (AFA) and hybrid DRAM and flash arrays, which included a custom-designed flash management and storage infrastructure management suite implemented in both software and hardware. TMS aggressively developed six more generations of flash controllers (for a total of seven generations) using SLC NAND flash and adopting MLC NAND flash for the later generations. These flash controllers were offered in a variety of configurations and form factors that included embedded PowerPC processors, FPGAs, and daughter cards with additional flash nodes. History: Over 15 TMS products were offered utilizing these flash controllers, including four PCIe drives (RamSan-10/20/70/80) that could be installed in off-the-shelf servers. History: TMS was eventually acquired by IBM in 2012. On January 16, 2014, IBM announced the FlashSystem 840 product, which was the first FlashSystem designed entirely by IBM post-acquisition of TMS. IBM branded the flash controller technology IBM MicroLatency technology, and touted how the technology lowered data access times from milliseconds to microseconds. On February 19, 2015, IBM announced the FlashSystem 900 and V9000 products and re-branded the flash controller technology as IBM FlashCore technology, describing it as the suite of innovations and capabilities that can enable FlashSystem to deliver better performance than enterprise disk systems. The flash modules themselves continued to be branded IBM MicroLatency Modules. This version of the technology supported Micron's MLC flash chip technology. This was also the first generation of FlashCore, and the first enterprise AFA, to offer at-speed, inline hardware compression and decompression. With the announcement of the FlashSystem 9100 on July 10, 2018, FlashCore technology was re-implemented in a standard 2.5-inch U.2 NVMe SSD form factor and rebranded as FlashCore Modules (FCM). This marked the first time that the original technology developed by TMS was packaged in a way that conformed to an industry specification and was interchangeable with industry-standard SSDs used inside an AFA. Technology: IBM FlashCore Modules utilize an FPGA and NAND flash memory chips from off-the-shelf vendors to implement the entire data path in hardware. Each FCM contains a single FPGA with an NVMe gateway and multi-core ARM processors. Other major components include DRAM, MRAM, and NAND flash. 
Technology: As with all FlashCore technology, the FTL is contained completely inside the FCM, and the data path includes at-speed, inline hardware compression and decompression. The controller design for the IBM FCM uses techniques such as health binning, heat segregation, read voltage shifting, and hard-decision error correction codes to avoid re-reads and lower write amplification, providing consistently low latency. There are currently three generations of FCM: FCM1 - U.2 NVMe PCIe gen 3, TLC NAND flash, available in 3 capacities: 4.8TBu / 21.99TBe, 9.6TBu / 21.99TBe, and 19.2TBu / 43.98TBe. FCM2 - U.2 NVMe PCIe gen 3, QLC NAND flash, available in 4 capacities: 4.8TBu / 21.99TBe, 9.6TBu / 21.99TBe, 19.2TBu / 43.98TBe, and 38.4TBu / 87.96TBe. At introduction, FCM2 was the industry's largest-capacity enterprise SSD as well as the first enterprise SSD to use QLC NAND flash exclusively. FCM3 - U.2 NVMe PCIe gen 3 and gen 4, QLC NAND flash, available in 4 capacities: 4.8TBu / 21.99TBe, 9.6TBu / 21.99TBe, 19.2TBu / 43.98TBe, and 38.4TBu / 87.96TBe. This version of FCM is a performance- and infrastructure-optimized enterprise QLC SSD. Technology: The two larger capacities double compressor performance and increase decompressor performance by over 50%. Using the latest FPGA technology, the larger capacities gain gen 4 PCIe and a clock-speed bump for the ARM cores. All capacities include an optimized infrastructure for a more efficient data path with a reduced component count. In April 2017, IBM's flash portfolio represented more than 380 patents.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OpenPKG** OpenPKG: OpenPKG is an open-source package management system for Unix. It is based on the well-known RPM system and allows easy and unified installation of packages onto common Unix platforms (Solaris, Linux and FreeBSD). The project was launched by Ralf S. Engelschall in November 2000, and as of June 2005 it offered more than 880 freely available packages.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**L-lysine 6-monooxygenase (NADPH)** L-lysine 6-monooxygenase (NADPH): In enzymology, an L-lysine 6-monooxygenase (NADPH) (EC 1.14.13.59) is an enzyme that catalyzes the chemical reaction L-lysine + NADPH + H+ + O2 ⇌ N6-hydroxy-L-lysine + NADP+ + H2O. The 4 substrates of this enzyme are L-lysine, NADPH, H+, and O2, whereas its 3 products are N6-hydroxy-L-lysine, NADP+, and H2O. L-lysine 6-monooxygenase (NADPH): This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor (the oxygen incorporated need not be derived from O2). The systematic name of this enzyme class is L-lysine,NADPH:oxygen oxidoreductase (6-hydroxylating). This enzyme is also called lysine N6-hydroxylase. It participates in lysine degradation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ARM Cortex-A7** ARM Cortex-A7: The ARM Cortex-A7 MPCore is a 32-bit microprocessor core licensed by ARM Holdings implementing the ARMv7-A architecture, announced in 2011. Overview: It has two target applications: first, as a smaller, simpler, and more power-efficient successor to the Cortex-A8; second, in the big.LITTLE architecture, combining one or more A7 cores with one or more Cortex-A15 cores into a heterogeneous system. To do this it is fully feature-compatible with the A15. Key features of the Cortex-A7 core are: Partial dual-issue, in-order microarchitecture with an 8-stage pipeline NEON SIMD instruction set extension VFPv4 floating-point unit Thumb-2 instruction set encoding Jazelle RCT Hardware virtualization Large Physical Address Extension (LPAE) Integrated level 2 cache (0–1 MB) 1.9 DMIPS/MHz Typical clock speed 1.5 GHz Chips: Several systems-on-chips (SoC) have implemented the Cortex-A7 core, including: Allwinner A20 (dual-core A7 + Mali-400 MP2 GPU) Allwinner A31 (quad-core A7 + PowerVR SGX544MP2 GPU) Allwinner A83T (octa-core A7 + PowerVR SGX544 GPU) Allwinner H3 (quad-core A7 + Mali-400 MP2 GPU) Broadcom BCM23550 quad-core HSPA+ multimedia processor Broadcom BCM2836 (quad-core A7 + VideoCore IV GPU), designed specifically for the Raspberry Pi 2 NXP Semiconductors (formerly Freescale) QorIQ Layerscape LS1 (dual-core A7) Freescale i.MX 6 UltraLite HiSilicon K3V3, big.LITTLE architecture with dual-core Cortex-A7 and dual-core Cortex-A15; uses an ARM Mali-T658 GPU. Chips: Marvell PXA1088 (quad-core A7 + Vivante GC1000) Mediatek MT6570 (dual-core A7 + ARM Mali-400MP1 GPU) Mediatek MT6572 (dual-core A7 + ARM Mali-400MP1 GPU) Mediatek MT6580 (quad-core A7 + ARM Mali-400MP2 GPU) Mediatek MT6582 (quad-core A7 + ARM Mali-400MP2 GPU) Mediatek MT6589 (quad-core A7 + Imagination Technologies PowerVR SGX544 GPU) Mediatek MT6592 (octa-core A7 + ARM Mali-450MP4 GPU) MStar MSB2531A (ARM Cortex-A7, 32-bit, 800 MHz) Qualcomm Snapdragon 200 and Snapdragon 400 MSM8212 and MSM8612, MSM8226, MSM8626 and MSM8926 (quad-core A7 + Adreno 305 GPU) Samsung Exynos 5 Octa (5410), big.LITTLE architecture with quad-core Cortex-A7 and quad-core Cortex-A15; uses an Imagination Technologies PowerVR SGX544MP3 GPU. Chips: Samsung Exynos 5 Octa (5420), big.LITTLE architecture with quad-core Cortex-A7 and quad-core Cortex-A15; uses an ARM Mali-T628MP6 GPU. STMicroelectronics STM32MP13x (single-core A7) STMicroelectronics STM32MP15x (dual-core A7 + M4 + Vivante GPU) ASPEED AST2600 BMC (dual-core A7 + M4)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Joanne Hort** Joanne Hort: Joanne Hort is a New Zealand food science academic and, as of 2019, a full professor at Massey University, where she holds the Fonterra Riddet Chair in Consumer and Sensory Science. Academic career: After a 1997 PhD at Sheffield Hallam University titled 'Cheddar cheese: Its texture, chemical composition and rheological properties', Hort moved to the University of Nottingham, rising to full professor. Hort then moved to Massey University, where she currently (2019) teaches. Hort's research focuses on the taste and texture of foods, particularly dairy products. Selected works: Kemp, Sarah E., Tracey Hollowood, and Joanne Hort. Sensory Evaluation: A Practical Handbook. John Wiley & Sons, 2011. Hort, Joanne, and Geoff Le Grys. "Developments in the textural and rheological properties of UK Cheddar cheese during ripening." International Dairy Journal 11, no. 4-7 (2001): 475–481. Bayarri, Sara, Andrew J. Taylor, and Joanne Hort. "The role of fat in flavor perception: effect of partition and viscosity in model emulsions." Journal of Agricultural and Food Chemistry 54, no. 23 (2006): 8862–8868. Hort, Joanne, and Tracey Ann Hollowood. "Controlled continuous flow delivery system for investigating taste–aroma interactions." Journal of Agricultural and Food Chemistry 52, no. 15 (2004): 4834–4843. Marciani, Luca, Johann C. Pfeiffer, Joanne Hort, Kay Head, Debbie Bush, Andy J. Taylor, Robin C. Spiller, Sue Francis, and Penny A. Gowland. "Improved methods for fMRI studies of combined taste and aroma stimuli." Journal of Neuroscience Methods 158, no. 2 (2006): 186–194.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Virtual memory** Virtual memory: In computing, virtual memory, or virtual storage, is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory". The computer's operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities, utilizing, e.g., disk storage, to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer. Virtual memory: The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, the ability to share memory used by libraries between processes, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging or segmentation. Properties: Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing. Properties: Memory virtualization can be considered a generalization of the concept of virtual memory. Usage: Virtual memory is an integral part of a modern computer architecture; implementations usually require hardware support, typically in the form of a memory management unit built into the CPU. While not necessary, emulators and virtual machines can employ hardware support to increase performance of their virtual memory implementations. Older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid-1980s (e.g., DOS), generally have no virtual memory functionality, though notable exceptions for mainframes of the 1960s include: the Atlas Supervisor for the Atlas; THE multiprogramming system for the Electrologica X8 (software-based virtual memory without hardware support); MCP for the Burroughs B5000; MTS, TSS/360 and CP/CMS for the IBM System/360 Model 67; Multics for the GE 645; and the Time Sharing Operating System for the RCA Spectra 70/46. During the 1960s and early '70s, computer memory was very expensive. The introduction of virtual memory provided an ability for software systems with large memory demands to run on computers with less real memory. The savings from this provided a strong incentive to switch to virtual memory for all systems. The additional capability of providing virtual address spaces added another level of security and reliability, thus making virtual memory even more attractive to the marketplace. Usage: Most modern operating systems that support virtual memory also run each process in its own dedicated address space. 
Each program thus appears to have sole access to the virtual memory. However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory. Usage: Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable traps that may produce unwanted and unpredictable delays in response to input, especially if the trap requires that data be read into main memory from secondary memory. The hardware to translate virtual addresses to physical addresses typically requires a significant chip area to implement, and not all chips used in embedded systems include that hardware, which is another reason some of those systems do not use virtual memory. History: In the 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use. To allow for multiprogramming and multitasking, many early systems divided memory between multiple programs without virtual memory, such as early models of the PDP-10 via registers. History: A claim that the concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation, does not stand up to careful scrutiny. The computer proposed by Güntsch (but never built) had an address space of 10^5 words which mapped exactly onto the 10^5 words of the drums, i.e. the addresses were real addresses and there was no form of indirect mapping, a key feature of virtual memory. What Güntsch did invent was a form of cache memory, since his high-speed memory was intended to contain a copy of some blocks of code or data taken from the drums. Indeed, he wrote (as quoted in translation): "The programmer need not respect the existence of the primary memory (he need not even know that it exists), for there is only one sort of addresses (sic) by which one can program as if there were only one storage." This is exactly the situation in computers with cache memory, one of the earliest commercial examples of which was the IBM System/360 Model 85. In the Model 85 all addresses were real addresses referring to the main core store. A semiconductor cache store, invisible to the user, held the contents of parts of the main store in use by the currently executing program. This is exactly analogous to Güntsch's system, designed as a means to improve performance rather than to solve the problems involved in multiprogramming. History: The first true virtual memory system was that implemented at the University of Manchester to create a one-level storage system as part of the Atlas Computer. It used a paging mechanism to map the virtual addresses available to the programmer onto the real memory that consisted of 16,384 words of primary core memory with an additional 98,304 words of secondary drum memory. 
The addition of virtual memory to the Atlas also eliminated a looming programming problem: planning and scheduling data transfers between main and secondary memory and recompiling programs for each change of size of main memory. The first Atlas was commissioned in 1962, but working prototypes of paging had been developed by 1959. In 1961, the Burroughs Corporation independently released the first commercial computer with virtual memory, the B5000, with segmentation rather than paging. IBM developed the concept of hypervisors in their CP-40 and CP-67, and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction as part of 370-XA on the 3081, and VM/XA versions of VM to exploit it. Before virtual memory could be implemented in mainstream operating systems, many problems had to be addressed. Dynamic address translation required expensive and difficult-to-build specialized hardware; initial implementations slowed down access to memory slightly. There were worries that new system-wide algorithms utilizing secondary storage would be less effective than previously used application-specific algorithms. By 1969, the debate over virtual memory for commercial computers was over; an IBM research team led by David Sayre showed that their virtual memory overlay system consistently worked better than the best manually controlled systems. Throughout the 1970s, the IBM 370 series running their virtual-storage based operating systems provided a means for business users to migrate multiple older systems into fewer, more powerful mainframes that had improved price/performance. The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the 1970s, other minicomputers implemented virtual memory, notably VAX models running VMS. History: Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor, but its segment swapping technique scaled poorly to larger segment sizes. The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling the page fault exception to chain with other exceptions without a double fault. However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation. Paged virtual memory: Nearly all current implementations of virtual memory divide a virtual address space into pages, blocks of contiguous virtual memory addresses. Pages on contemporary systems are usually at least 4 kilobytes in size; systems with large virtual address ranges or large amounts of real memory generally use larger page sizes. Paged virtual memory: Page tables: Page tables are used to translate the virtual addresses seen by the application into physical addresses used by the hardware to process instructions; the hardware that handles this translation is often known as the memory management unit. Each entry in the page table holds a flag indicating whether the corresponding page is in real memory or not. If it is in real memory, the page table entry will contain the real memory address at which the page is stored. When a reference is made to a page by the hardware, if the page table entry for the page indicates that it is not currently in real memory, the hardware raises a page fault exception, invoking the paging supervisor component of the operating system.
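To make the translation just described concrete, here is a minimal sketch in C of a single-level page-table lookup. It is illustrative only, not any particular system's implementation; the 4 KiB page size, the pte_t layout, and the translate() helper are assumptions introduced for the example.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative parameters: 4 KiB pages, 20-bit virtual page numbers. */
#define PAGE_SIZE  4096u
#define PAGE_SHIFT 12u
#define NUM_PAGES  (1u << 20)

/* Hypothetical page-table entry: a present flag plus a frame number. */
typedef struct {
    bool     present; /* is the page resident in real memory? */
    uint32_t frame;   /* physical frame number, if present */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Translate a virtual address into a physical one via the page table.
 * Returns false where real hardware would raise a page fault exception. */
static bool translate(uint32_t vaddr, uint32_t *phys)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;     /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* offset within the page */

    if (!page_table[vpn].present)
        return false; /* page fault: paging supervisor takes over */

    *phys = (page_table[vpn].frame << PAGE_SHIFT) | offset;
    return true;
}

int main(void)
{
    page_table[3].present = true; /* map virtual page 3 ... */
    page_table[3].frame   = 42;   /* ... to physical frame 42 */

    uint32_t phys;
    if (translate(3u * PAGE_SIZE + 0x123u, &phys))
        printf("physical address: 0x%x\n", phys); /* prints 0x2a123 */
    else
        printf("page fault\n");
    return 0;
}
```

A real MMU performs this lookup in hardware, typically through multi-level tables and a translation lookaside buffer, so that most translations avoid extra memory accesses.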
Paged virtual memory: Systems can have, e.g., one page table for the whole system, separate page tables for each address space or process, or separate page tables for each segment; similarly, systems can have, e.g., no segment table, one segment table for the whole system, separate segment tables for each address space or process, or separate segment tables for each region in a tree of region tables for each address space or process. If there is only one page table, different applications running at the same time use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces, and concurrent applications with separate page tables redirect to different real addresses. Paged virtual memory: Some earlier systems with smaller real memory sizes, such as the SDS 940, used page registers instead of page tables in memory for address translation. Paged virtual memory: Paging supervisor: This part of the operating system creates and manages page tables and lists of free page frames. In order to ensure that there will be enough free page frames to quickly resolve page faults, the system may periodically steal allocated page frames, using a page replacement algorithm such as least recently used (LRU). Stolen page frames that have been modified are written back to auxiliary storage before they are added to the free queue. On some systems the paging supervisor is also responsible for managing translation registers that are not automatically loaded from page tables. Paged virtual memory: Typically, a page fault that cannot be resolved results in an abnormal termination of the application. However, some systems allow the application to have exception handlers for such errors. The paging supervisor may handle a page fault exception in several different ways, depending on the details: If the virtual address is invalid, the paging supervisor treats it as an error. Paged virtual memory: If the page is valid and the page information is not loaded into the MMU, the page information will be stored into one of the page registers. If the page is uninitialized, a new page frame may be assigned and cleared. If there is a stolen page frame containing the desired page, that page frame will be reused. For a fault due to a write attempt into a read-protected page, if it is a copy-on-write page then a free page frame will be assigned and the contents of the old page copied; otherwise it is treated as an error. If the virtual address is a valid page in a memory-mapped file or a paging file, a free page frame will be assigned and the page read in. In most cases, there will be an update to the page table, possibly followed by purging the Translation Lookaside Buffer (TLB), and the system restarts the instruction that caused the exception. If the free page frame queue is empty, the paging supervisor must free a page frame using the same page replacement algorithm used for page stealing. Paged virtual memory: Pinned pages: Operating systems have memory areas that are pinned (never swapped to secondary storage). Other terms used are locked, fixed, or wired pages. For example, interrupt mechanisms rely on an array of pointers to their handlers, such as those for I/O completion and page faults. If the pages containing these pointers or the code that they invoke were pageable, interrupt handling would become far more complex and time-consuming, particularly in the case of page fault interruptions.
Hence, some part of the page table structures is not pageable. Paged virtual memory: Some pages may be pinned for short periods of time, others may be pinned for long periods of time, and still others may need to be permanently pinned. For example: The paging supervisor code and drivers for secondary storage devices on which pages reside must be permanently pinned, as otherwise paging would not work because the necessary code would not be available. Paged virtual memory: Timing-dependent components may be pinned to avoid variable paging delays. Paged virtual memory: Data buffers that are accessed directly by peripheral devices that use direct memory access or I/O channels must reside in pinned pages while the I/O operation is in progress, because such devices and the buses to which they are attached expect to find data buffers located at physical memory addresses; regardless of whether the bus has a memory management unit for I/O, transfers cannot be stopped if a page fault occurs and then restarted once the page fault has been processed. For example, the data could come from a measurement sensor, and real-time data lost because of a page fault cannot be recovered. In IBM's operating systems for System/370 and successor systems, the term is "fixed", and such pages may be long-term fixed, short-term fixed, or unfixed (i.e., pageable). System control structures are often long-term fixed (measured in wall-clock time, i.e., time measured in seconds rather than in fractions of one second), whereas I/O buffers are usually short-term fixed (usually for significantly less than wall-clock time, possibly tens of milliseconds). Indeed, the OS has a special facility for "fast fixing" these short-term fixed data buffers (fixing performed without resorting to a time-consuming Supervisor Call instruction). Paged virtual memory: Multics used the term "wired". OpenVMS and Windows refer to pages temporarily made nonpageable (as for I/O buffers) as "locked", and simply "nonpageable" for those that are never pageable. The Single UNIX Specification also uses the term "locked" in the specification for mlock(), as do the mlock() man pages on many Unix-like systems (a brief usage sketch is given at the end of this article). Paged virtual memory: Virtual-real operation: In OS/VS1 and similar OSes, some parts of system memory are managed in "virtual-real" mode, called "V=R". In this mode every virtual address corresponds to the same real address. This mode is used for interrupt mechanisms, for the paging supervisor and page tables in older systems, and for application programs using non-standard I/O management. For example, IBM's z/OS has three modes (virtual-virtual, virtual-real and virtual-fixed). Paged virtual memory: Thrashing: When paging and page stealing are used, a problem called "thrashing" can occur, in which the computer spends an excessive amount of time transferring pages to and from a backing store, slowing down useful work. A task's working set is the minimum set of pages that should be in memory in order for it to make useful progress. Thrashing occurs when there is insufficient memory available to store the working sets of all active programs. Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can help. Another solution is to reduce the number of active tasks on the system. This reduces demand on real memory by swapping out the entire working set of one or more processes.
Paged virtual memory: A system thrashing is often the result of a sudden spike in page demand from a small number of running programs. Swap-token is a lightweight and dynamic thrashing protection mechanism. The basic idea is to set a token in the system, which is randomly given to a process that has page faults when thrashing happens. The process that has the token is given the privilege of allocating more physical memory pages to build its working set, which is expected to let it finish its execution quickly and release the memory pages to other processes. A time stamp is used to hand the token over from one process to another. The first version of swap-token was implemented in Linux 2.6. The second version, called preempt swap-token, is also in Linux 2.6. In this updated swap-token implementation, a priority counter is set for each process to track the number of swapped-out pages. The token is always given to the process with high priority, i.e., a high number of swapped-out pages. The length of the time stamp is not a constant but is determined by the priority: the higher the number of swapped-out pages of a process, the longer its time stamp will be. Segmented virtual memory: Some systems, such as the Burroughs B5500, use segmentation instead of paging, dividing virtual address spaces into variable-length segments. A virtual address here consists of a segment number and an offset within the segment. The Intel 80286 supports a similar segmentation scheme as an option, but it is rarely used. Segmentation and paging can be used together by dividing each segment into pages; systems with this memory structure, such as Multics and IBM System/38, are usually paging-predominant, with segmentation providing memory protection. In the Intel 80386 and later IA-32 processors, the segments reside in a 32-bit linear, paged address space. Segments can be moved in and out of that space, and pages there can "page" in and out of main memory, providing two levels of virtual memory; few if any operating systems do so, instead using only paging. Early non-hardware-assisted x86 virtualization solutions combined paging and segmentation because x86 paging offers only two protection domains whereas a VMM, guest OS or guest application stack needs three. The difference between paging and segmentation systems is not only about memory division; segmentation is visible to user processes, as part of memory model semantics. Hence, instead of memory that looks like a single large space, it is structured into multiple spaces. Segmented virtual memory: This difference has important consequences; a segment is not a page with variable length or a simple way to lengthen the address space. Segmentation can provide a single-level memory model in which there is no differentiation between process memory and the file system: a process's potential address space consists only of a list of segments (files) mapped into it. This is not the same as the mechanisms provided by calls such as mmap and Win32's MapViewOfFile, because inter-file pointers do not work when mapping files into semi-arbitrary places. In Multics, a file (or a segment from a multi-segment file) is mapped into a segment in the address space, so files are always mapped at a segment boundary. A file's linkage section can contain pointers for which an attempt to load the pointer into a register or make an indirect reference through it causes a trap.
The unresolved pointer contains an indication of the name of the segment to which the pointer refers and an offset within the segment; the handler for the trap maps the segment into the address space, puts the segment number into the pointer, changes the tag field in the pointer so that it no longer causes a trap, and returns to the code where the trap occurred, re-executing the instruction that caused the trap. This eliminates the need for a linker completely and works even when different processes map the same file into different places in their private address spaces. Address space swapping: Some operating systems provide for swapping entire address spaces, in addition to whatever facilities they have for paging and segmentation. When this occurs, the OS writes those pages and segments currently in real memory to swap files. In a swap-in, the OS reads back the data from the swap files but does not automatically read back pages that had been paged out at the time of the swap-out operation. Address space swapping: IBM's MVS, from OS/VS2 Release 2 through z/OS, provides for marking an address space as unswappable; doing so does not pin any pages in the address space. This can be done for the duration of a job by entering the name of an eligible main program in the Program Properties Table with an unswappable flag. In addition, privileged code can temporarily make an address space unswappable using a SYSEVENT Supervisor Call instruction (SVC); certain changes in the address space properties require that the OS swap it out and then swap it back in, using SYSEVENT TRANSWAP. Swapping does not necessarily require memory management hardware if, for example, multiple jobs are swapped in and out of the same area of storage.
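As a concrete illustration of the mlock() page-locking interface mentioned in the pinned-pages discussion above, the following minimal C sketch pins a buffer into real memory with the POSIX mlock() call, as might be done for a buffer used by device I/O; the buffer size and the I/O scenario are assumptions for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h> /* mlock, munlock */

#define BUF_SIZE 4096 /* assumed buffer size for the example */

int main(void)
{
    /* A buffer we want the kernel never to page out, e.g. while a
     * device is transferring data into it. */
    char *buf = malloc(BUF_SIZE);
    if (buf == NULL)
        return 1;

    if (mlock(buf, BUF_SIZE) != 0) { /* pin the pages holding buf */
        perror("mlock");             /* may fail, e.g. over RLIMIT_MEMLOCK */
        free(buf);
        return 1;
    }

    /* The pages are now resident and non-pageable, so touching the
     * buffer cannot trigger a page fault. */
    memset(buf, 0, BUF_SIZE);

    munlock(buf, BUF_SIZE); /* make the pages pageable again */
    free(buf);
    return 0;
}
```

Note that mlock() operates on the whole pages containing the given range, and that systems typically cap the amount of memory an unprivileged process may lock.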
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fast blue optical transient** Fast blue optical transient: In astronomy, a fast blue optical transient (FBOT), or more technically a luminous fast blue optical transient (LFBOT), is an explosion event, similar to supernovae and gamma-ray bursts, whose optical luminosity lies between those of the two but which rises and decays faster and has its spectrum concentrated in the blue range. It is caused by a very high-energy astrophysical process that is not yet understood but is thought to be a type of supernova, with events occurring at no more than 0.1% of the typical supernova rate. FBOTs newly reported in 2023 include AT2022tsd (the "Tasmanian Devil") and AT2023fhn (the "Finch", or the "Fawn").
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hollow-cathode lamp** Hollow-cathode lamp: A hollow-cathode lamp (HCL) is a type of cold cathode lamp used in physics and chemistry as a spectral line source (e.g. for atomic absorption spectrometers) and as a frequency tuner for light sources such as lasers. An HCL takes advantage of the hollow cathode effect, which causes conduction at a lower voltage and with more current than in a cold cathode lamp that does not have a hollow cathode. An HCL usually consists of a glass tube containing a cathode, an anode, and a buffer gas (usually a noble gas). A large voltage across the anode and cathode will cause the buffer gas to ionize, creating a plasma. The buffer gas ions will then be accelerated into the cathode, sputtering off atoms from the cathode. Both the buffer gas and the sputtered cathode atoms will in turn be excited by collisions with other atoms and particles in the plasma. As these excited atoms decay to lower states, they emit photons. These photons can then excite the atoms in the sample, which release their own photons that are used to generate data. An HCL can also be used to tune light sources to a specific atomic transition by making use of the optogalvanic effect, which is a result of direct or indirect photoionization. By shining the light source into the HCL, one can excite or even eject electrons (directly photoionize) from the atoms inside the lamp, so long as the light source includes frequencies corresponding to the right atomic transitions. Indirect photoionization can then occur when an electron collision with the excited atom ejects an atomic electron. Hollow-cathode lamp: A + hν → A*; A* + e− → A+ + 2e−, where A is an atom, hν a photon, A* an atom in an excited state, and e− an electron. The newly created ions cause an increase in the current across the cathode/anode and a resulting change in the voltage, which can then be measured. Hollow-cathode lamp: To tune the light source to a specific transition frequency, a tuning parameter (often the driving current) of the light source is varied. By looking for a resonance on a data plot of the voltage signal versus the source tuning parameter, the light source can be tuned to the desired frequency; a sketch of this peak-finding step follows below. This is often aided by use of a lock-in circuit. Hollow-cathode lamp: The power supply current range is 0 to 25 mA, with a 600 V ignition voltage followed by 300 V sustained operation.
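To illustrate the tuning procedure described above, here is a minimal C sketch that scans recorded (driving current, voltage) samples for the resonance. The sample data, the find_resonance() helper, and the peak-picking criterion are illustrative assumptions; a real setup would use a lock-in amplifier rather than a naive peak search.

```c
#include <stdio.h>
#include <math.h>

/* Given sampled (tuning parameter, voltage) pairs from an optogalvanic
 * scan, return the parameter value where the voltage deviates most from
 * its mean: a crude stand-in for the resonance a lock-in circuit finds. */
static double find_resonance(const double *param, const double *volt, int n)
{
    double mean = 0.0;
    for (int i = 0; i < n; i++)
        mean += volt[i];
    mean /= n;

    int best = 0;
    for (int i = 1; i < n; i++)
        if (fabs(volt[i] - mean) > fabs(volt[best] - mean))
            best = i;
    return param[best];
}

int main(void)
{
    /* Hypothetical scan: a flat baseline with a bump near 7.0 mA. */
    double current_mA[] = { 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0 };
    double voltage_V[]  = { 0.02, 0.01, 0.03, 0.15, 0.40, 0.12, 0.02 };

    printf("resonance near %.1f mA\n",
           find_resonance(current_mA, voltage_V, 7));
    return 0;
}
```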
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2-hydroxyethylphosphonate dioxygenase** 2-hydroxyethylphosphonate dioxygenase: 2-hydroxyethylphosphonate dioxygenase (EC 1.13.11.72, HEPD, phpD (gene)) is an enzyme with systematic name 2-hydroxyethylphosphonate:O2 1,2-oxidoreductase (hydroxymethylphosphonate forming). This enzyme catalyses the following chemical reaction: 2-hydroxyethylphosphonate + O2 ⇌ hydroxymethylphosphonate + formate. 2-Hydroxyethylphosphonate dioxygenase contains non-heme Fe(II).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dustbin category** Dustbin category: The term "dustbin category" is sometimes used to describe a category that includes people or things that might be heterogeneous, only loosely related or poorly understood. It has been used in discussion of law, linguistics, medicine, sociology and other disciplines. For example: Some patients' symptoms do not fit well with any recognised category and there is a danger these may be forced into a 'dustbin' category such as 'depression, not otherwise specified.'
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hyperthermic intrathoracic chemotherapy** Hyperthermic intrathoracic chemotherapy: Hyperthermic intrathoracic chemotherapy (HITOC) is part of a surgical strategy employed in the treatment of various pleural malignancies. The pleura in this situation could be considered to include the surface linings of the chest wall, lungs, mediastinum, and diaphragm. HITOC is the chest counterpart of HIPEC. Traditionally used in the treatment of malignant mesothelioma, a primary malignancy of the pleura, this modality has recently been evaluated in the treatment of secondary pleural malignancies (e.g. thymic tumors, secondary pleural carcinosis). Hyperthermic intrathoracic chemotherapy: Metastasis to the pleural surface from any primary tumor represents Stage IV disease, which in general carries an extremely poor prognosis. In addition, in highly selected situations the pleura can be involved by local spread or "seeding" from thoracic tumors such as lung, esophageal, and thymic cancers. Secondary pleural malignancies include metastases from distant primary tumors including breast, colon, ovarian, uterine and renal cell carcinoma, among others, as well as certain sarcomas and pseudomyxoma peritonei. Hyperthermic intrathoracic chemotherapy: Treatment options for such advanced disease are limited to systemic chemotherapy, radiation, and supportive care measures. These may include management of shortness of breath due to recurrent, symptomatic malignant pleural effusions. However, the surgical removal of large pleural deposits with infusion of hyperthermic chemotherapy may offer significant survival and symptomatic benefit for patients in this disease category. The rationale for this approach is the simultaneous utilization of three different antineoplastic strategies: surgical resection, chemotherapy, and hyperthermia. Hyperthermic intrathoracic chemotherapy: The goal of surgical cytoreduction is to remove all gross disease, including tumors that are in resectable areas of the lung or other structures and any large pleural nodules. After complete resection of visible disease, the chest cavity is perfused with hyperthermic chemotherapy with the goal of treating microscopic or minimally visible disease. The chemotherapy bathes the inside of the chest in concentrations that are very effective against the cancer cells but without the level of toxicity that could occur if the chemotherapy were given through the bloodstream. Hyperthermic intrathoracic chemotherapy: The increased heat of the chemotherapy perfusion can itself injure the cancer cells and makes the chemotherapy more effective. Diseases treated: Thymoma and thymic carcinoma: These tumors, which arise from the thymus gland in the upper part of the chest overlying the heart, can seed the pleural surfaces in addition to invading the lung and other structures. Mesothelioma: A benign (noncancerous) or malignant (cancerous) tumor affecting the lining of the chest or abdomen. Exposure to asbestos particles in the air increases the risk of developing malignant mesothelioma. Diseases treated: Lung cancer: Usually, when a lung cancer spreads to the pleural surface, the cancer has also spread to distant sites, making the HITOC procedure unlikely to control the disease. In very highly selected situations there may be seeding of the chest that is contained and possible to treat with HITOC. There are other intra-abdominal malignancies that may cross the diaphragm and cause disease in the chest that could potentially be helped by HITOC.
Some examples would include: Pseudomyxoma peritonei: A build-up of mucus in the peritoneal cavity. The mucus may come from ruptured ovarian cysts, the appendix, or from other abdominal tissues, and mucus-secreting cells may attach to the peritoneal lining and continue to secrete mucus. Diseases treated: Ovarian carcinoma: Cancer that forms in tissues of the ovary. Most ovarian cancers are either ovarian epithelial carcinomas (cancer that begins in the cells on the surface of the ovary) or malignant germ cell tumors (cancer that begins in egg cells). Mucinous appendiceal carcinoma: A type of cancer that begins in cells that line the appendix and produce mucin (the main component of mucus). Low-grade sarcomas: Sarcoma is a cancer of the bone, cartilage, fat, muscle, blood vessels, or other connective or supportive tissue. Low-grade refers to cancerous and precancerous growths with cells that look nearly normal under a microscope and are less likely to grow and spread quickly than cells in high-grade cancerous or precancerous growths.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Revolutionary wave** Revolutionary wave: A revolutionary wave or revolutionary decade is a series of revolutions occurring in various locations within a similar time-span. In many cases, past revolutions and revolutionary waves have inspired current ones, or an initial revolution has inspired other concurrent "affiliate revolutions" with similar aims. The causes of revolutionary waves have become the subjects of study by historians and political philosophers, including Robert Roswell Palmer, Crane Brinton, Hannah Arendt, Eric Hoffer, and Jacques Godechot. Writers and activists, including Justin Raimondo and Michael Lind, have used the phrase "revolutionary wave" to describe discrete revolutions happening within a short time-span. Typology: Mark N. Katz identified six forms of revolution: rural revolution; urban revolution; coup d'état (e.g. Egypt, 1952); revolution from above (e.g. Mao's Great Leap Forward of 1958); revolution from without (e.g. the Allied invasions of Italy, 1944, and Germany, 1945); and revolution by osmosis (e.g. the gradual Islamization of several countries). These categories are not mutually exclusive; the Russian revolution of 1917 began with an urban revolution to depose the Czar, followed by rural revolution, followed by the Bolshevik coup in November. Katz also cross-classified revolutions as follows: central revolutions, in countries, usually great powers, which play a leading role in a revolutionary wave (e.g. the USSR, Nazi Germany, Iran since 1979); aspiring revolutions, which follow the central revolution; subordinate or puppet revolutions; and rival revolutions (e.g. communist Yugoslavia, and China after 1969). Central and subordinate revolutions may support each other militarily, as for example the USSR, Cuba, Angola, Ethiopia, Nicaragua and other Marxist regimes did in the 1970s and 1980s. A further dimension of Katz's typology is that revolutions are either against something (anti-monarchy, anti-dictatorial, anti-capitalist, anti-communist, anti-democratic) or for something (pro-fascism, pro-liberalism, pro-communism, pro-nationalism, etc.). In the latter cases, a transition period is often necessary to decide on the direction taken. Periodisation: There is no consensus on a complete list of revolutionary waves. In particular, scholars disagree on how similar the ideologies of different events should be in order for them to be grouped as part of a single wave, and over what period a wave can be considered to be taking place; for example, Mark N. Katz discussed a "Marxist-Leninist wave" lasting from 1917 to 1991 and a "fascist wave" from 1922 to 1945, but limits an "anti-communist wave" to just the 1989 to 1991 period. Pre-19th century: Republican waves in Rome (509 BCE), Athens (508 BCE), and Carthage (480 BCE). The Second Reformation (1566–1609), including the Revolt of the Netherlands and the Second and Third Wars of Religion in France. Jihadist wars in Western Africa in the 16th century. The Thirty Years' War (1618–1648), including Calvinist uprisings and the Huguenot Wars in France. The Atlantic Revolutions occurring at the end of the 18th century, including the American Revolution (1776), the French Revolution (1789), the Haitian Revolution (1791), the Batavian Revolution (1795) and the Irish Rebellion of 1798.
19th century: The Latin American wars of independence, including the various Spanish American wars of independence of 1810–1826, often seen as inspired at least in part by the American and French Revolutions in their liberal Enlightenment ideology and aims, are counted as the second part of the Atlantic wave. The Revolutions of 1820, as well as the Decembrist revolt of 1825 in Russia and the Greek War of Independence. The Revolutions of 1830, such as the July Revolution in France, the Belgian Revolution, and the November Uprising against Russian rule in Poland. The Revolutions of 1848 throughout Europe, following the February Revolution in France. The early 1850s saw the Taiping Rebellion in China, the Great Revolt in India and the Eureka Rebellion in Australia. In the 1860s, Italian unification, the German unification wars, the Spanish Revolution of 1868, the US Civil War (sometimes referred to as the 'Second American Revolution'), the Meiji Restoration in Japan, and the Chinese Taiping Rebellion, followed in 1870–71 by the collapse of the French Second Empire and its replacement by the French Third Republic. The Royal Titles Act of 1876, establishing imperial rule in India; the Anglo-Egyptian War of 1882; the founding of the Italian Empire in 1882; the Third Anglo-Burmese War of 1885, unifying British rule in Burma; the Scramble for Africa from 1885; and the founding of French Indochina in 1886. The Great Eastern Crisis, including the Herzegovina uprising, the April Uprising, the Razlovtsi insurrection and the Cretan Revolt. 20th century: The Revolutions of 1905–11 in the aftermath of the Russo-Japanese War, including the Russian Revolution of 1905, the Argentine Revolution of 1905, the Persian Constitutional Revolution, the Young Turk Revolution, the Greek Goudi coup, the Monegasque Revolution, the 5 October 1910 revolution in Portugal, the Mexican Revolution, and the Xinhai Revolution in China, involved nationalism, constitutionalism, modernization, and/or republicanism targeting autocracy and traditionalism. 20th century: The Revolutions of 1917–1923 in the aftermath of World War I, including the Russian Revolution and the emergence of an international communist party alliance in the Soviet-led Comintern (the beginning of the Marxist revolutionary wave); the collapse of the German, Austro-Hungarian and Ottoman empires and the resultant founding of Yugoslavia, Czechoslovakia and independent Poland and Austria; the first protest of the Indian independence movement organized by Mohandas Karamchand Gandhi; the Kemalist revolution in Turkey; the Arab revolt; the Easter Rising and the Irish Free State; as well as other nationalist, populist and socialist uprisings and protests worldwide. 20th century: The fascist revolutionary wave, beginning in Italy in 1922, also including the 28 May 1926 coup d'état in Portugal, Japan from 1931, Germany from 1933, Greece from 1936, and the Spanish Civil War. World War II revolutions (1943–1949), including the Greek Civil War, the French Resistance, the Yugoslav Resistance, and Soviet takeovers in Eastern Europe.
20th century: The Indochina Wars were communist revolutions in East and Southeast Asia, including the Indonesian National Revolution of 1945; all were associated with the Marxist revolutionary wave. The decolonisation of Africa brought waves of revolution across the continent, cresting in the 1970s, including the communist revolutions and pro-Soviet military coups in Somalia, Congo-Brazzaville, Benin and Ethiopia, and the fight of the communist parties allied under CONCP against the Portuguese Empire in the Portuguese Colonial War. 20th century: The Arab nationalist movement: revolutions occurred in Egypt, 1952; Syria, 1958; Iraq, 1958; Algeria, 1962; North Yemen, 1962; and Sudan and Libya, 1969. The central regime in this case was Egypt, inspired especially by Gamal Abdel Nasser. 20th century: Following Nikita Khrushchev's "Secret Speech" denouncing Stalin in February 1956, a wave of political upheavals swept through the Eastern Bloc. In Poland, a workers' uprising in Poznań led to major political changes later that year, as the longtime Stalinist old guard of the Polish United Workers' Party was forced out of power in favor of a new, more independent-minded Communist leadership. Pro-reform movements in Hungary, inspired in part by the Polish upheavals, soon erupted into the Hungarian Revolution of 1956, a major popular uprising against the Soviet-backed regime in Budapest that was brutally crushed. There was also a nascent pro-reform movement in Romania that was suppressed. 20th century: The Black Power movement and the civil rights movement organized successful protests against government and private discrimination. Continuing unrest in African-American communities led to the multi-city riots during the "Long, hot summer of 1967" and the various 1968 riots following the assassination of Martin Luther King Jr. In Trinidad the Black Power Revolution was successful. 20th century: The protests of 1968 saw youth movements worldwide supporting opposition to U.S. involvement in the Vietnam War and other left-wing causes; the worldwide counterculture of the 1960s and the New Left inspired protest and revolution in both the communist and capitalist worlds, including the Prague Spring, Mao's Cultural Revolution in China, and the May 1968 protests in France; the latter led to the Werner Report on European monetary union. 20th century: The Carnation Revolution in Portugal (April 25, 1974) put an end to the oldest dictatorship in Western Europe (it had lasted 48 years). The Central American crisis saw a socialist movement take power in the Nicaraguan Revolution and leftist popular uprisings in El Salvador and Guatemala. 20th century: A decade of religious fundamentalist revolutions, mostly from 1977 to 1987, including the Shia Islamic Iranian revolution of 1979; revisionist Zionism, neo-Zionism, and the 1977 first Likud government in Israel; the Christian right and Christian Zionism movements, mostly in the US; and the Hindutva Janata Party, later the BJP, in India, founded in 1977. In the 1980s: Al Qaeda, founded in 1988; Hamas, founded in 1987; the Islamic Unity of Afghanistan Mujahideen, founded in 1981 or 1985; and Lashkar-e-Taiba, founded in Pakistan in 1987. The modern version of the Taliban began in 1994.
20th century: The Revolutions of 1989 and the dissolution of the Soviet Union by the end of 1991, which ended the Marxist revolutionary wave, resulting in Russia and 14 other countries declaring their independence from the Soviet Union: Armenia, Azerbaijan, Belarus, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Moldova, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan. Communism was soon abandoned by other countries, including Afghanistan, Albania, Angola, Benin, Bulgaria, Cambodia, Congo-Brazzaville, Czechoslovakia, East Germany, Ethiopia, Hungary, Mongolia, Mozambique, Poland, Romania, Somalia, South Yemen, and Yugoslavia. Apartheid South Africa, Yugoslavia, and Czechoslovakia also collapsed in the early 1990s. 20th century: The Pink Tide in Latin America, starting in 1999 and lasting into the late 2000s. 21st century: The colour revolutions were various related movements that developed in several societies of the former Soviet Union and the Balkans during the early 2000s. 21st century: Between 2009 and 2014, there were revolutions or mass protests in the Arab world, Iceland, Madagascar, Ireland, Iran, Thailand, Kyrgyzstan, Greece, Spain, Chile, the Maldives, California, China, Israel, Azerbaijan, Armenia, Rojava, Mexico, Canada, the UK, Romania, Turkey, France, Ukraine, Venezuela, Burkina Faso and Hong Kong. This period also saw the Occupy movement form in the West and the autodefensas in Mexico. 21st century: The Arab Winter is a violent mass reaction following the Arab Spring, characterized by resurgent authoritarianism, dictatorships, and Islamic extremism in the Middle East since 2014. 21st century: Late 2019 and 2020 saw a significant wave of protest movements in Hong Kong, Catalonia, Lebanon, Chile, Algeria, Bolivia, Haiti, Iraq, Ecuador, Montenegro, Serbia, Bulgaria, Indonesia, Albania, Sudan, Venezuela, the United States, Kyrgyzstan, Nigeria, Argentina, Iran and Cuba, as well as the Yellow Vests movement in various European countries. The causes are varied, spanning corruption, austerity, electoral fraud, inequality and democratic backsliding. Central themes in many of these protests include economic and racial equality and widespread resentment against the economic and political elite, as well as opposition to the COVID-19 lockdowns and related measures. In Marxism: Marxists see revolutionary waves as evidence that a world revolution is possible. For Rosa Luxemburg, "The most precious thing… in the sharp ebb and flow of the revolutionary waves is the proletariat's spiritual growth. The advance, by leaps and bounds, of the intellectual stature of the proletariat affords an inviolable guarantee of its further progress in the inevitable economic and political struggles ahead." Potential revolutionary waves: Mark Katz theorises that Buddhism (in Sri Lanka, Thailand, Indochina, Burma, Tibet) and Confucianism (to replace Marxism in China and promote unity with Chinese in Taiwan, Hong Kong, Singapore, Malaysia) might be the revolutionary waves of the future. In the past, these religions have been passively acquiescent to secular authority, but so was Islam until recently. Katz also suggests that nationalisms such as Pan-Turanianism (in Turkey, Central Asia, Xinjiang, parts of Russia), 'Pan-native Americanism' (in Ecuador, Peru, Bolivia, Paraguay) and Pan-Slavism (in Russia, Ukraine, Belarus) could also form revolutionary waves.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ZoomInfo** ZoomInfo: ZoomInfo Technologies Inc. is a software and data company which provides data on companies and business individuals. Its main product is a commercial search engine specialized in contact and business information. From the internet and other sources, the company collects contact and other information about individuals, companies and other business entities, such as departments. It maintains profiles for these subjects and makes them available to its clients as a service, for a fee. History: In 2007, DiscoverOrg was founded by Henry Schuck and Kirk Brown. In February 2019, it acquired its competitor, Zoom Information, Inc., and rebranded as ZoomInfo. DiscoverOrg's CEO Henry Schuck, CFO Cameron Hyzer, and Chief Revenue Officer Chris Hays kept their roles. Zoom Information was established in 2000 as Eliyon Technologies by founders Yonatan Stern and Michel Decary, and in August 2017 was acquired by Great Hill Partners, a private equity firm, for $240 million. On June 4, 2020, ZoomInfo became a publicly traded company on the Nasdaq Global Select Market under the ticker symbol "ZI". Acquisitions: In 2017, as DiscoverOrg, the company acquired RainKing, and in 2018, NeverBounce and Datanyze. In 2019, ZoomInfo acquired Komiko, and in 2020, Clickagy and EverString Technology. In 2021, ZoomInfo acquired Insent, Chorus.ai, and RingLead.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded