**IBM A2** IBM A2: The IBM A2 is an open-source, massively multicore-capable and multithreaded 64-bit Power ISA processor core designed by IBM to the Power ISA v.2.06 specification. Processors based on the A2 core range from a 2.3 GHz version with 16 cores consuming 65 W to a less powerful four-core version consuming 20 W at 1.4 GHz. Design: The A2 core is designed for customization and embedded use in system-on-chip devices, and was developed following IBM's game console processor designs: the Xbox 360 processor and the Cell processor for the PlayStation 3. Design: A2I The A2I is a 4-way simultaneously multithreaded core which implements the 64-bit Power ISA v.2.06 Book III-E embedded platform specification, with support for the embedded hypervisor features. It was designed for many-core implementations focused on high throughput and many simultaneous threads, and was written in VHDL. The core has 4×32 64-bit general-purpose registers (GPRs) with full support for both little- and big-endian byte ordering, 16 KB + 16 KB instruction and data caches, and four-way multithreading. Design: It has a fine-grained branch prediction unit (BPU) with eight 1024-entry branch history tables. The L1 caches comprise a 16 KB 8-way set-associative data cache and a 16 KB 4-way set-associative instruction cache. The core executes a simple in-order pipeline capable of issuing two instructions per cycle: one to the 6-stage arithmetic logic unit (ALU) and one to the optional auxiliary execution unit (AXU). Design: It includes a memory management unit but no floating-point unit (FPU). Such facilities are handled by the AXU, which supports any number of standardized or customized macros, such as floating-point units, vector units, DSPs, media accelerators and other units with instruction sets and registers not part of the Power ISA.
The core has a system interface unit used to connect to other on-die cores, with a 256-bit interface for data writes and a 128-bit interface for instruction and data reads, running at full core speed. Design: A2O The A2O is a slightly more modern version, written in Verilog and implementing Power ISA v.2.07 Book III-E. It is optimized for single-core performance and designed to reach 3 GHz on a 45 nm process. The A2O differs from its sibling in that it is only two-way multithreaded, has 32 KB + 32 KB data and instruction L1 caches, and is capable of out-of-order execution. Design: When the A2O was released, no actual products had used it. OpenSource: In the second half of 2020 IBM released the A2I and A2O cores under a Creative Commons license and published the VHDL and Verilog code on GitHub. The intention was to add them to the OpenPOWER Foundation's offerings of free and open processor cores. As the A2 was designed in 2010, A2I and A2O are not compliant with Power ISA 3.0 or 3.1, which is mandatory for OpenPOWER cores. It is IBM's wish for the cores to be updated so they comply with the newer versions of the ISA. Products: PowerEN The PowerEN (Power Edge of Network), or "wire-speed processor", is designed as a hybrid between a regular networking processor, which does switching and routing, and a typical server processor, which manipulates and packages data. It was revealed on February 8, 2010, at ISSCC 2010. Products: Each chip uses the A2I core and has 8 MB of cache as well as a multitude of task-specific engines alongside the general-purpose processors, such as XML, cryptography, compression and regular expression accelerators, each with MMUs of their own, four 10 Gigabit Ethernet ports and two PCIe lanes. Up to four chips can be linked in an SMP system without any additional support chips. The chips are extremely complex according to Charlie Johnson, chief architect at IBM, and use 1.43 billion transistors on a die size of 428 mm², fabricated on a 45 nm process.
Products: Blue Gene/Q The Blue Gene/Q processor is an 18-core chip using the A2I core running at 1.6 GHz, with special features for fast thread context switching, a quad SIMD floating-point unit, a 5D torus chip-to-chip network and 2 GB/s external I/O. The cores are linked by a crossbar switch, running at half core speed, to a 32 MB eDRAM L2 cache. The L2 cache is multi-versioned and supports transactional memory and speculative execution. A Blue Gene/Q chip has two DDR3 memory controllers running at 1.33 GHz, supporting up to 16 GB of RAM. It uses 16 cores for computing and one core for operating system services; this 17th core takes care of interrupts, asynchronous I/O, MPI flow control, and RAS functionality. The 18th core is a spare in case one of the other cores is permanently damaged (for instance in manufacturing), and is shut down in functional operation. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm, delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, and draws about 55 watts. The chip has a die size of 19×19 mm (359.5 mm²) and uses 1.47 billion transistors.
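The quoted peak figure follows directly from the core count, the clock, and the FPU width. A minimal sketch of the arithmetic, assuming the quad SIMD floating-point unit sustains fused multiply-adds, i.e. 8 flops per cycle per core (the function name is ours, for illustration only):

```python
# Peak FLOPS for Blue Gene/Q: 16 compute cores at 1.6 GHz, each with a
# quad (4-wide) SIMD FPU doing fused multiply-adds = 8 flops/cycle/core.
def peak_gflops(cores, ghz, flops_per_cycle):
    """Theoretical peak throughput in GFLOPS."""
    return cores * ghz * flops_per_cycle

print(peak_gflops(16, 1.6, 8))  # 204.8
```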
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Excitation (magnetic)** Excitation (magnetic): In electromagnetism, excitation is the process of generating a magnetic field by means of an electric current. An electric generator or electric motor consists of a rotor spinning in a magnetic field. The magnetic field may be produced by permanent magnets or by field coils. In the case of a machine with field coils, a current must flow in the coils to generate (excite) the field, otherwise no power is transferred to or from the rotor. Field coils yield the most flexible form of magnetic flux regulation and de-regulation, but at the expense of a flow of electric current. Hybrid topologies exist which incorporate both permanent magnets and field coils in the same configuration. Flexible excitation of a rotating electrical machine is achieved either by brushless excitation techniques or by the injection of current through carbon brushes (static excitation). Excitation in generators: For a machine using field coils, as is the case in most large generators, the field must be established by a current in order for the generator to produce electricity. Although some of the generator's own output can be used to maintain the field once it starts up, an external source of current is needed for starting the generator. In any case, it is important to be able to control the field, since this maintains the system voltage. Excitation in generators: Amplifier principle Except for permanent magnet generators, a generator produces output voltage proportional to the magnetic flux, which is the sum of the flux from the magnetization of the structure and the flux proportional to the field produced by the excitation current. If there is no excitation current, the flux is tiny and the armature voltage is almost nil. The field current controls the generated voltage, allowing a power system's voltage to be regulated and counteracting the increased voltage drop in the armature winding conductors as the armature current rises.
In a system with multiple generators and a constant system voltage, the current and power delivered by an individual generator are regulated by the field current. A generator is thus a current-to-voltage, or transimpedance, amplifier. To avoid damage from progressively larger over-corrections, the field current must be adjusted more slowly than the effect of the adjustment propagates through the power system. Excitation in generators: Separate excitation For large, or older, generators, it is usual for a separate exciter dynamo to be powered in parallel with the main power generator. This is a small permanent-magnet or battery-excited dynamo that produces the field current for the larger generator. Excitation in generators: Self excitation Modern generators with field coils are usually self-excited; i.e., some of the power output from the rotor is used to power the field coils. The rotor iron retains a degree of residual magnetism when the generator is turned off. The generator is started with no load connected; the initial weak field induces a weak current in the rotor coils, which in turn creates an initial field current, increasing the field strength, thus increasing the induced current in the rotor, and so on in a feedback process until the machine "builds up" to full voltage. Excitation in generators: Starting Self-excited generators must be started without any external load attached. An external load would sink the electrical power from the generator before the capacity to generate electrical power could build up. Excitation in generators: Variants Multiple versions of self-excitation exist: a shunt, the simplest design, uses the main winding for the excitation power; an excitation boost system (EBS) is a shunt design with a separate small generator added to temporarily provide an energy boost when the main coil voltage drops (for example, due to a fault).
The boost generator is not rated for permanent operation. An auxiliary winding is not connected to the main one and thus is not subject to voltage changes caused by changes in the load. Excitation in generators: Field flashing If the machine does not have enough residual magnetism to build up to full voltage, a provision is usually made to inject current into the field coil from another source. This may be a battery, a house unit providing direct current, or rectified current from a source of alternating current power. Since this initial current is required for a very short time, it is called field flashing. Even small portable generator sets may occasionally need field flashing to restart. Excitation in generators: The critical field resistance is the maximum field-circuit resistance, for a given speed, at which the shunt generator will excite. The shunt generator will build up voltage only if the field-circuit resistance is less than the critical field resistance, which corresponds to the tangent to the open-circuit characteristic of the generator at the given speed. Excitation in generators: Brushless excitation Brushless excitation creates the magnetic flux on the rotor of electrical machines without the need for carbon brushes. It is typically used to reduce regular maintenance costs and the risk of brush fire. It was developed in the 1950s as a result of advances in high-power semiconductor devices. The concept uses a rotating diode rectifier on the shaft of the synchronous machine to harvest induced alternating voltages and rectify them to feed the generator field winding. Brushless excitation has historically lacked fast flux de-regulation, which was a major drawback, but new solutions have emerged: modern rotating circuitry incorporates active de-excitation components on the shaft, extending the passive diode bridge.
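The build-up behaviour and the critical field resistance can be illustrated with a toy simulation. This is a hypothetical sketch, not a model of any real machine: the open-circuit characteristic (OCC) is approximated by a saturating tanh curve, and its initial slope (200 V/A here) plays the role of the critical field resistance, so voltage builds up only when the field-circuit resistance is below that value.

```python
import math

def build_up_voltage(r_field, slope=200.0, e_max=240.0, e_residual=5.0, steps=2000):
    """Iterate the self-excitation feedback loop:
    field current I = E / r_field, generated EMF E = OCC(I),
    starting from the residual-magnetism voltage.
    The OCC is a saturating curve with initial slope `slope` (V per A)."""
    e = e_residual
    for _ in range(steps):
        i_field = e / r_field
        e = e_max * math.tanh(slope * i_field / e_max) + e_residual
    return e

# Critical field resistance ~ initial slope of the OCC (200 ohms here):
low = build_up_voltage(r_field=100.0)   # below critical: builds up toward e_max
high = build_up_voltage(r_field=400.0)  # above critical: stays near residual
```

With the field resistance at 100 Ω the loop settles near full voltage; at 400 Ω it stalls close to the residual level, matching the tangent criterion stated above.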
Moreover, recent developments in high-performance wireless communication have enabled fully controlled topologies on the shaft, such as thyristor rectifiers and chopper interfaces. Sources: Noland, Jonas Kristiansen; Nuzzo, Stefano; Tessarolo, Alberto; Alves, Erick Fernando (2019). "Excitation System Technologies for Wound-Field Synchronous Machines: Survey of Solutions and Evolving Trends". IEEE Access. 7: 109699–109718. doi:10.1109/ACCESS.2019.2933493. eISSN 2169-3536. S2CID 201065415.
**Quantum ergodicity** Quantum ergodicity: In quantum chaos, a branch of mathematical physics, quantum ergodicity is a property of the quantization of classical mechanical systems that are chaotic in the sense of exponential sensitivity to initial conditions. Quantum ergodicity states, roughly, that in the high-energy limit, the probability distributions associated to energy eigenstates of a quantized ergodic Hamiltonian tend to a uniform distribution in the classical phase space. This is consistent with the intuition that the flows of ergodic systems are equidistributed in phase space. By contrast, classical completely integrable systems generally have periodic orbits in phase space, and this is exhibited in a variety of ways in the high-energy limit of the eigenstates: typically, some form of concentration occurs in the semiclassical limit ℏ → 0. The model case of a Hamiltonian is the geodesic Hamiltonian on the cotangent bundle of a compact Riemannian manifold. The quantization of the geodesic flow is given by the fundamental solution of the Schrödinger equation, exp(itΔ), where Δ denotes the square root of the Laplace–Beltrami operator. The quantum ergodicity theorem of Shnirelman (1974), Zelditch, and Yves Colin de Verdière states that for a compact Riemannian manifold whose unit tangent bundle is ergodic under the geodesic flow, the probability density associated to the nth eigenfunction of the Laplacian tends weakly to the uniform distribution on the unit cotangent bundle as n → ∞, in a subset of the natural numbers of natural density equal to one. Quantum ergodicity can be formulated as a non-commutative analogue of classical ergodicity (T. Sunada). Quantum ergodicity: Since a classically chaotic system is also ergodic, almost all of its trajectories eventually explore uniformly the entire accessible phase space.
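The equidistribution statement has a standard symbolic form on the base manifold (a simplified, configuration-space version of the theorem above; the full statement lifts to observables on the unit cotangent bundle):

```latex
% Quantum ergodicity (Shnirelman, Zelditch, Colin de Verdiere):
% if the geodesic flow of a compact Riemannian manifold (M, g) is
% ergodic, there is a density-one subset S of the natural numbers
% such that, for the L^2-normalized Laplace eigenfunctions \varphi_n,
\lim_{\substack{n \to \infty \\ n \in S}}
  \int_A |\varphi_n(x)|^2 \, d\operatorname{vol}(x)
  \;=\; \frac{\operatorname{vol}(A)}{\operatorname{vol}(M)}
% for every measurable A \subset M whose boundary has measure zero.
```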
Thus, when translating the concept of ergodicity to the quantum realm, it is natural to assume that the eigenstates of a quantum chaotic system fill the quantum phase space evenly (up to random fluctuations) in the semiclassical limit ℏ → 0. The quantum ergodicity theorems of Shnirelman, Zelditch, and Yves Colin de Verdière prove that the expectation value of an operator converges in the semiclassical limit to the corresponding microcanonical classical average. However, the quantum ergodicity theorem leaves open the possibility of eigenfunctions becoming sparse, with serious holes as ℏ → 0, leaving large but not macroscopic gaps on the energy manifolds in phase space. In particular, the theorem allows the existence of a subset of macroscopically nonergodic states, which on the other hand must approach zero measure, i.e., the contribution of this set goes towards zero percent of all eigenstates as ℏ → 0. For example, the theorem does not exclude quantum scarring, as the phase-space volume of the scars also gradually vanishes in this limit. A quantum eigenstate is scarred by a periodic orbit if its probability density on the classical invariant manifolds near, and all along, that periodic orbit is systematically enhanced above the classical, statistically expected density along that orbit. In simplified terms, a quantum scar refers to an eigenstate whose probability density is enhanced in the neighborhood of a classical periodic orbit when the corresponding classical system is chaotic. In conventional scarring, the responsible periodic orbit is unstable. The instability is a decisive point that separates quantum scars from the more trivial finding that the probability density is enhanced near stable periodic orbits due to Bohr's correspondence principle. The latter can be viewed as a purely classical phenomenon, whereas in the former quantum interference is important.
On the other hand, in perturbation-induced quantum scarring, some of the high-energy eigenstates of a locally perturbed quantum dot contain scars of short periodic orbits of the corresponding unperturbed system. Even though similar in appearance to ordinary quantum scars, these scars have a fundamentally different origin: in this type of scarring, there are no periodic orbits in the perturbed classical counterpart, or they are too unstable to cause a scar in the conventional sense. Conventional and perturbation-induced scars are both a striking visual example of classical-quantum correspondence and of a quantum suppression of chaos (see the figure). In particular, scars are a significant correction to the assumption that the eigenstates of a classically chaotic Hamiltonian are merely featureless and random. In some sense, scars can be considered an eigenstate counterpart to how short periodic orbits provide corrections to the universal random matrix theory eigenvalue statistics.
**Tufted cell** Tufted cell: Tufted cells are found within the olfactory glomeruli. They receive input from the receptor cells of the olfactory epithelium found in areas of the nose able to sense smell. Tufted cell: Both tufted cells and mitral cells are projection neurons. Projection neurons send the signals from the glomeruli deeper into the brain. The actual signal sent through these projection cells has been sharpened or filtered by a process called lateral inhibition. Both the periglomerular cells and the granule cells contribute to lateral inhibition. Projection neurons therefore transmit a sharpened olfactory signal to the deeper parts of the brain. Tufted cells project onto the anterior piriform cortex.
**Thermofax** Thermofax: Thermo-Fax (very often Thermo fax) is 3M's trademarked name for a photocopying technology introduced in 1950. It was a form of thermographic printing and an example of a dry silver process. It was a significant advance, as no chemicals were required other than those contained in the copy paper itself. A thin sheet of heat-sensitive copy paper was placed on the original document to be copied and exposed to infrared energy. Wherever the image on the original contained carbon, it absorbed the infrared energy and heated; the heated image then transferred heat to the heat-sensitive paper, producing a blackened copy of the original. Model 12: The first commercially available Thermofax machine was the Model 12. The 'layup' of the original and the copy paper was placed on a stationary glass platen, and an infrared lamp and reflector assembly moved beneath the glass, radiating upwards. The layup was held in position by a lid with an inflatable rubber bladder that was latched down by the user. Model 17: In subsequent versions, beginning with the Model 17, the layup was fed into a slot and continuously exposed as it passed the lamp and reflector. The Model 17 and its successors were table-top machines, approximately the size of a typewriter from the same era. Q system: A variation of this technology was a billing system called the Q System, typically used by medical and dental offices. A 'master', composed of a sheet of heavy backing paper and a thin sheet of ruled paper attached to it at the top edge, was created for each patient. Billing entries were then made in pencil on the thin sheet for each patient visit. To create a billing copy, a sheet of heat-sensitive paper was inserted between the backing and the entry sheet and passed through the Thermofax machine, the Model 47 being the most commonly used.
Transparencies: As copying technology advanced, Thermofax machines were subsequently marketed as a method of producing transparencies (viewgraphs) for overhead projector presentations. A sheet of heat-sensitive clear stock was placed on top of the original and passed through a Thermofax, producing a black image on the clear stock. This application saw common usage well into the 1980s, and specialized uses thereafter. Modern uses: As of 2009, Thermofax machines were still widely used by artists. In addition to making copies, Thermofax machines can be used to make a "spirit master" for spirit duplicator machines. Tattoo artists use these spirit masters as tattoo stencils, to quickly and accurately mark the outlines of a tattoo on the skin of the person to be tattooed using a transfer solution. Textile and printmaking artists use these machines for creating silk screens in several seconds by running a piece of Riso film through with a photocopied image. Modern uses: Riso film is a Japanese silk-screen product composed of a Saran-type plastic bonded to a screen mesh of various sizes. When the Riso film is exposed to the infrared bulb inside the machine, the Saran plastic emulsion side opens up wherever there is ink toner on the photocopy. Paint and other mediums can then be screened once the film is mounted on a frame. The imaging barrel inside the Thermofax is 8.5" wide, but the film can be of any length. These modern uses have kept up the demand for most models of Thermofax machines. Modern uses: The Model 45EGA was manufactured with an electrical defect that requires a conversion kit to be installed for safe use of the machine. The 45EGA models that were not converted are still considered fire hazards. Disadvantages: The Thermofax process was temperamental. The coated paper tended to curl, and being heat-sensitive, copies were not archival. The darkness setting was tricky to adjust, and drifted as the machine warmed up.
The darkness often varied, with some portions of the text too light and others too dark. Since the heat absorption of the ink does not necessarily correlate with its visible appearance, there were occasional idiosyncrasies; some inks that looked nearly black to the eye might not copy at all, and an exposure setting that worked well for some originals might require a change to make usable copies with another. Cost comparison: Thermofax copies were inexpensive. One business book asserts that research conducted by Xerox before introducing their copier came to the conclusion that "nobody would pay 5¢ for a plain-paper copy when they could get a Thermofax copy for a cent-and-a-half." Fortunately, "Xerox ignored the research." Contemporary references: Contemporary references to the Thermofax process: "They did have—what did they call that brown stuff? Thermofax, right. That's the first copying machine and they didn't look like anything at all. They were brown and they faded." "Marjorie Spock had invested in one of the earliest models of thermofax machines, which she kept in her basement. It was a crude affair that continually overheated, belching smoke and vile-smelling fumes from odd sprockets and sending out scorched brown paper, sometimes completely burned and only barely legible at best." "The only thing we had then, was what they called a thermofax machine, which was very strange. It was on a very bad tissue paper kind of thing and a very obscure image. But we were desperate and it was the only way to make copies." "If a typewritten or printed page was placed flat on an illuminated screen and covered with a chemically treated sheet of pinkish paper, it would duplicate on the treated paper when an air-cushioned rubber mat was brought down over it and a strong light turned on underneath." Cultural references: Thermofax is the name of a dragon in the text adventure Wishbringer by Infocom.
The Lord of the Rings parody Bored of the Rings, written by National Lampoon founders Henry N. Beard and Douglas C. Kenney, dubs the magical horse Thermofax (instead of Shadowfax) ridden by 'Goodgulf Greyteeth' (also known as Gandalf Greyhame).
**Aceclofenac** Aceclofenac: Aceclofenac is a nonsteroidal anti-inflammatory drug (NSAID) analog of diclofenac. It is used for the relief of pain and inflammation in rheumatoid arthritis, osteoarthritis and ankylosing spondylitis. It was patented in 1983 and approved for medical use in 1992. Side effects: Aceclofenac should not be given to people with porphyria or to breast-feeding mothers, and is not recommended for children. It should be avoided near term in a pregnant woman because of the risk of premature closure of the ductus arteriosus, leading to fetal hydrops in the neonate. Chemistry: Aceclofenac (C16H13Cl2NO4), chemically [2-(2,6-dichlorophenylamino)phenyl]acetoxyacetic acid, is a crystalline powder with a molecular weight of 354.19. It is practically insoluble in water but has good permeability. It is metabolized in human hepatocytes and human microsomes to [2-(2',6'-dichloro-4'-hydroxyphenylamino)phenyl]acetoxyacetic acid as the major metabolite, which is then further conjugated. According to the Biopharmaceutics Classification System (BCS), drug substances are classified into four classes based on their solubility and permeability. Aceclofenac falls under BCS Class II as a poorly soluble, highly permeable drug. Aceclofenac works by inhibiting the action of cyclooxygenase (COX), which is involved in the production of the prostaglandins (PGs) responsible for pain, swelling, inflammation and fever. The incidence of gastric ulcerogenicity of aceclofenac has been reported to be significantly lower than that of other frequently prescribed NSAIDs: for instance, 2-fold lower than naproxen, 4-fold lower than diclofenac, and 7-fold lower than indomethacin. Society and culture: Economics Aceclofenac is available in Hungary as a prescription-only medicine. The cost of the drug is low, around US$0.14 per 100 mg tablet (as of 2019). Brand names Aceclofenac is available in Europe and CIS countries.
Known trade names include: Acecgen (Generics UK), Aflamin, Airtal/Biofenac (Gedeon Richter Plc.), AklofEP (ExtractumPharma) and Flemac (Aramis Pharma).
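The BCS grouping mentioned above maps two yes/no axes (solubility, permeability) onto four classes. A minimal illustrative sketch, with the function name and boolean flags invented here for illustration rather than taken from any standard API:

```python
def bcs_class(high_solubility, high_permeability):
    """Biopharmaceutics Classification System: four classes
    derived from two boolean axes (solubility, permeability)."""
    if high_solubility and high_permeability:
        return "I"    # high solubility, high permeability
    if not high_solubility and high_permeability:
        return "II"   # poorly soluble, highly permeable (e.g. aceclofenac)
    if high_solubility and not high_permeability:
        return "III"  # highly soluble, poorly permeable
    return "IV"       # poorly soluble, poorly permeable

print(bcs_class(high_solubility=False, high_permeability=True))  # II
```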
**Quantitative electroencephalography** Quantitative electroencephalography: Quantitative electroencephalography (qEEG or QEEG) is a field concerned with the numerical analysis of electroencephalography (EEG) data and associated behavioral correlates. Details: Techniques used in digital signal analysis are extended to the analysis of the EEG. These include wavelet analysis and Fourier analysis, with new focus on shared activity between rhythms, including phase synchrony (coherence, phase lag) and magnitude synchrony (comodulation/correlation, and asymmetry). Details: The analog signal comprises a microvoltage time series of the EEG, sampled digitally at rates adequate to over-sample the signal (following the Nyquist principle of exceeding twice the highest frequency being detected). Modern EEG amplifiers use adequate sampling to resolve the EEG across the traditional medical band from DC to 70 or 100 Hz, using sample rates of 250/256, 500/512, to over 1000 samples per second, depending on the intended application. Details: QEEG can be performed with open-source toolboxes such as EEGLAB or the Neurophysiological Biomarker Toolbox. Several QEEG products have received Class 2 FDA medical device clearance, and the method has received some medical acceptance for use in epilepsy patients. However, QEEG has not been endorsed by the American Academy of Neurology or the American Clinical Neurophysiology Society. Fourier analysis of EEG: The Fourier transform decomposes the EEG time series into a voltage-by-frequency spectral graph commonly called the "power spectrum", with power being the square of the EEG magnitude, and magnitude being the integral average of the amplitude of the EEG signal, measured from (+) peak to (−) peak, across the time sampled, or epoch.
The epoch length determines the frequency resolution of the Fourier analysis: a 1-second epoch provides 1 Hz resolution (plus/minus 0.5 Hz), and a 4-second epoch provides ¼ Hz, or plus/minus 0.125 Hz, resolution. Wavelet analysis of EEG: A wavelet is a time-frequency transformation that allows analysis of EEG signals with a time localization that is not possible with Fourier analysis: X(a, b) = (1/√a) ∫−∞∞ Ψ̄((t − b)/a) x(t) dt, where a is the scaling and b the time shift. Uses: QEEG has been accepted for diagnostic evaluation in some areas, such as cerebrovascular disorders, encephalopathy, dementia and epilepsy, though it is yet to be accepted in other clinical areas, such as diagnosing mild traumatic brain injury or psychiatric disorders. The use of qEEG techniques in clinical and research investigations is ongoing. QEEG has also been utilized to provide neurofeedback, a form of biofeedback in which electrical activity in the brain is monitored by a computer program that modulates visual or auditory stimuli; these stimuli, in turn, are designed to be controlled by the user.
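The epoch-length/resolution relationship can be checked with a toy power spectrum. This is an illustrative sketch with a synthetic signal, not a qEEG pipeline: a 2-second epoch sampled at 256 Hz yields 0.5 Hz bins, so a 10 Hz alpha-band sine lands exactly in bin 20, computed here with a plain DFT and no external libraries.

```python
import math

fs, epoch_s = 256, 2.0          # sample rate (Hz) and epoch length (s)
n = int(fs * epoch_s)           # samples per epoch
df = 1.0 / epoch_s              # frequency resolution: 0.5 Hz per bin

# Synthetic EEG-like signal: a 10 Hz "alpha" sine of 20 microvolts.
x = [20.0 * math.sin(2 * math.pi * 10.0 * k / fs) for k in range(n)]

def power_at_bin(x, m):
    """Power (scaled magnitude squared) of DFT bin m."""
    re = sum(v * math.cos(2 * math.pi * m * k / len(x)) for k, v in enumerate(x))
    im = -sum(v * math.sin(2 * math.pi * m * k / len(x)) for k, v in enumerate(x))
    return (re * re + im * im) / len(x) ** 2

peak_bin = max(range(n // 2), key=lambda m: power_at_bin(x, m))
print(peak_bin * df)            # 10.0 -> the alpha peak at 10 Hz
```

A 4-second epoch of the same signal would instead give 0.25 Hz bins, with the peak moving to bin 40, which is the resolution trade-off described above.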
**Zofenoprilat** Zofenoprilat: Zofenoprilat is an angiotensin-converting enzyme inhibitor, and is the free sulfhydryl active metabolite of zofenopril.
**Almost simple group** Almost simple group: In mathematics, a group is said to be almost simple if it contains a non-abelian simple group and is contained within the automorphism group of that simple group – that is, if it fits between a (non-abelian) simple group and its automorphism group. In symbols, a group A is almost simple if there is a (non-abelian) simple group S such that S ≤ A ≤ Aut(S). Examples: Trivially, non-abelian simple groups and their full automorphism groups are almost simple, but proper examples exist, meaning almost simple groups that are neither simple nor a full automorphism group. For n = 5 or n ≥ 7, the symmetric group Sn is the automorphism group of the simple alternating group An, so Sn is almost simple in this trivial sense. For n = 6 there is a proper example, as S6 sits properly between the simple A6 and Aut(A6), due to the exceptional outer automorphism of A6. Two other groups, the Mathieu group M10 and the projective general linear group PGL2(9), also sit properly between A6 and Aut(A6). Properties: The full automorphism group of a non-abelian simple group is a complete group (the conjugation map is an isomorphism to the automorphism group), but proper subgroups of the full automorphism group need not be complete. Structure: By the Schreier conjecture, now generally accepted as a corollary of the classification of finite simple groups, the outer automorphism group of a finite simple group is a solvable group. Thus a finite almost simple group is an extension of a solvable group by a simple group.
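The definition and the A6 example can be condensed in symbols (restating the facts above; the index 4 reflects the Klein four outer automorphism group of A6):

```latex
% A is almost simple iff  S \le A \le \operatorname{Aut}(S)
% for some non-abelian simple group S.
% Proper examples strictly between A_6 and its automorphism group:
A_6 \;<\; S_6,\ \mathrm{PGL}_2(9),\ M_{10} \;<\; \operatorname{Aut}(A_6),
\qquad [\operatorname{Aut}(A_6) : A_6] = 4 .
```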
**Llama hiking** Llama hiking: Llama hiking, also known as llama trekking or llama caravanning, is an activity where llamas accompany people on hiking and walking trips, including eco-tourism. Expeditions can last from as little as a few hours to several days. For longer trips the llamas often carry up to three days' trekking supplies or cargo in purpose-built pack saddles, so the people with them can carry as little as a day backpack. Treks accompanied by the closely related alpaca are also offered. Llamas have padded feet similar to those of a dog, which let them easily traverse steep and rocky paths while being gentler on the ground than horse hooves. They can also use narrower paths, reducing disturbance to vegetation. Llamas have both a thick undercoat and a woolly topcoat, which protect them from the cold, and a three-compartment stomach helps them cope with poor-quality food sources. A llama can carry about 25% of its body weight with no problem, so an average animal of 300 pounds (136 kg) can carry around 75 pounds (34 kg) of equipment in its packs. Llamas' excellent endurance, combined with their biddable and peaceful natures, makes them suitable pack animals for hill walking, unlike more stubborn animals such as mules. People who might not usually participate in endurance walking go llama hiking, including couples looking for some romantic time together, and walking parties that include disabled children. History: Llamas have been used by the people of the Andes mountains as pack animals for hundreds of years. Recently, llama hiking has become popular in countries outside South America. Since the second half of the 20th century, large numbers of llamas have been brought to the US and Canada. In southern California a few avocado farmers have used llamas to carry large loads of the fruit down steep hills. In the UK, llamas can act as companions for a relaxing stroll, like walking a dog.
Back in 1997, the British newspaper The Independent described llama hiking as an exotic import from California, but it has since become more commonplace. In Wales during the 2020 COVID-19 pandemic, llamas previously used for llama hiking were used to help deliver food and cheer up lonely people unable to get out during the lockdown. Therapeutic walks: Alpacas and llamas are suitable for close contact with children due to their docile and friendly natures. Parents have reported children being willing to walk twice as far when accompanied by llamas. Llama walks are a popular therapeutic activity for disabled children. Autistic children can respond especially well to llamas; sometimes the expressive eyes of the animal help them make their first sustained eye contact. According to author Kay Frydenborg, the good results llamas have with children may in part be due to how pleasant they are to touch.
**Institute for Physico-Medical Research** Institute for Physico-Medical Research: The Institute for Physico-Medical Research (Polish: Instytut Badań Fizykomedycznych, IBF) is a Polish research unit that emerged in 1991 as the Research and Development Department of Primax Medic Research, Innovative and Development Company Ltd. It cooperates with specialists from renowned scientific and clinical institutions and has a Scientific Council. Its activities cover developing innovative technologies used in medicine and biotechnology. Main Specializations of Scientific Activities: The main aim of the institute's specialists is conducting research and publishing its results. This concerns in particular the evaluation and analysis of: the distribution of electric potentials which appear during heart activity and the application of these measurements to cardiac diagnosis; the influence of inhomogeneous static magnetic fields (NCMF) on living organisms, especially on particular ailments and diseases; and the development of the content and production technology of medicines, cosmetics, nutraceuticals and dietary supplements. A new method of analysing cardiac electrical activity, SATRO-ECG, has been developed, based on the individual SFHAM model. This method is used in the non-invasive diagnosis of heart diseases and facilitates analysing the processes occurring during depolarization of the myocardium. Main Specializations of Scientific Activities: The specialists of the Institute have developed a revascular therapy using nutraceuticals and dietary supplements, ready to be implemented alongside the SATRO-ECG method in health care systems. The spatial distribution of the inhomogeneous constant magnetic field (NCMF) presently used in magnetic products has also been calculated. 
Based on individual solutions developed in the Institute, "The Domestic Magneto-Therapy Program as an Initiative to Global Public Health" was compiled and presented at the Forum of Projects for Public Health of the UN. Publications: Leoński, W. (1996). "Quantum and classical dynamics for a pulsed nonlinear oscillator". Physica A: Statistical Mechanics and Its Applications. Elsevier BV. 233 (1–2): 365–378. Bibcode:1996PhyA..233..365L. doi:10.1016/s0378-4371(96)00250-6. ISSN 0378-4371. Leoński, W. (1996-10-01). "Fock states in a Kerr medium with parametric pumping". Physical Review A. American Physical Society (APS). 54 (4): 3369–3372. Bibcode:1996PhRvA..54.3369L. doi:10.1103/physreva.54.3369. ISSN 1050-2947. PMID 9913860. Chumakov, S M; Kozierowski, M (1996). "Dicke model: quantum nonlinear dynamics and collective phenomena". Quantum and Semiclassical Optics: Journal of the European Optical Society Part B. IOP Publishing. 8 (4): 775–803. Bibcode:1996QuSOp...8..775C. doi:10.1088/1355-5111/8/4/003. ISSN 1355-5111. Leoński, W. (1997-05-01). "Finite-dimensional coherent-state generation and quantum-optical nonlinear oscillator models". Physical Review A. American Physical Society (APS). 55 (5): 3874–3878. Bibcode:1997PhRvA..55.3874L. doi:10.1103/physreva.55.3874. ISSN 1050-2947. Publications: Janicki JS. Analiza EKG z uwzględnieniem procesów fizycznych zachodzących w mięśniu sercowym [ECG analysis taking into account the physical processes occurring in the heart muscle]. Folia Cardiologica, vol. 11, p. 13, Zakopane 2004. Janicki J. Wpływ gradientowego pola magnetycznego na organizm człowieka [The influence of a gradient magnetic field on the human body]. Acta Bio-Optica et Informatica Medica 4/2008, vol. 14, pp. 300–301. Janicki JS. Podstawy zastosowania gradientowego pola magnetycznego w rehabilitacji [Fundamentals of the application of a gradient magnetic field in rehabilitation]. Rehabilitacja w praktyce, 1/2009, p. 15. Janicki, J.S.; Leoński, W.; Jagielski, J. (2009). "Partial potentials of selected cardiac muscle regions and heart activity model based on single fibres". Medical Engineering & Physics. Elsevier BV. 31 (10): 1276–1282. doi:10.1016/j.medengphy.2009.08.007. 
ISSN 1350-4533. PMID 19762270. Publications: Janicki JS, Leoński W, Jagielski J, Sobieszczańska M, Chąpiński M, Janicki Ł. Single Fibre Based Heart Activity Model (SFHAM) Based QRS-Waves Synthesis. In: Sobieszczańska M, Jagielski J, Macfarlane PW, editors. Electrocardiology 2009. JAKS Publishing Company; 2010. p. 81–86. ISBN 978-83-928209-5-6. Janicki JS, Leoński W, Jagielski J, Sobieszczańska M, Leońska JG. Implementation of SFHAM in Coronary Heart Disease Diagnosis. In: Sobieszczańska M, Jagielski J, Macfarlane PW, editors. Electrocardiology 2009. JAKS Publishing Company; 2010. p. 197–201. ISBN 978-83-928209-5-6.
**Debye length** Debye length: In plasmas and electrolytes, the Debye length λD (Debye radius or Debye–Hückel screening length) is a measure of a charge carrier's net electrostatic effect in a solution and how far its electrostatic effect persists. With each Debye length the charges are increasingly electrically screened and the electric potential decreases in magnitude by a factor of 1/e. A Debye sphere is a volume whose radius is the Debye length. The Debye length is an important parameter in plasma physics, electrolytes, and colloids (DLVO theory). The corresponding Debye screening wave vector kD = 1/λD for particles of density n and charge q at a temperature T is given by kD² = 4πnq²/(kBT) in Gaussian units. Expressions in MKS units will be given below. The analogous quantities at very low temperatures (T → 0) are known as the Thomas–Fermi length and the Thomas–Fermi wave vector. They are of interest in describing the behaviour of electrons in metals at room temperature. Debye length: The Debye length is named after the Dutch-American physicist and chemist Peter Debye (1884-1966), a Nobel laureate in chemistry. Physical origin: The Debye length arises naturally in the thermodynamic description of large systems of mobile charges. In a system of N different species of charges, the j-th species carries charge qj and has concentration nj(r) at position r. According to the so-called "primitive model", these charges are distributed in a continuous medium that is characterized only by its relative static permittivity, εr. This distribution of charges within this medium gives rise to an electric potential Φ(r) that satisfies Poisson's equation: ∇²Φ(r) = −(1/ε)(Σj qjnj(r) + ρext(r)), where ε ≡ εrε0, ε0 is the electric constant, and ρext is a charge density external (logically, not spatially) to the medium. 
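Before continuing the derivation, the Gaussian-units screening wave vector defined above is easy to evaluate numerically. A minimal Python sketch; the density and temperature in the usage line are illustrative assumptions, not values from the text:

```python
import math

K_B_CGS = 1.380649e-16  # Boltzmann constant in erg/K (Gaussian/cgs units)

def debye_wave_vector(n, q, T):
    """Debye screening wave vector k_D = sqrt(4*pi*n*q^2/(k_B*T)) in cm^-1,
    for particle density n (cm^-3), charge q (statC), temperature T (K)."""
    return math.sqrt(4.0 * math.pi * n * q**2 / (K_B_CGS * T))

def debye_length(n, q, T):
    """Debye length lambda_D = 1/k_D in cm."""
    return 1.0 / debye_wave_vector(n, q, T)

# Illustrative values: an electron gas with n = 1e10 cm^-3 at T = 1e4 K.
E_STATC = 4.80320425e-10  # elementary charge in statcoulombs
print(debye_length(1e10, E_STATC, 1e4))  # roughly 6.9e-3 cm
```

This reproduces the familiar cgs rule of thumb λD ≈ 6.9 √(T/n) cm for electrons.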
Physical origin: The mobile charges not only contribute to establishing Φ(r) but also move in response to the associated Coulomb force, −qj∇Φ(r). If we further assume the system to be in thermodynamic equilibrium with a heat bath at absolute temperature T, then the concentrations of discrete charges, nj(r), may be considered to be thermodynamic (ensemble) averages and the associated electric potential to be a thermodynamic mean field. Physical origin: With these assumptions, the concentration of the j-th charge species is described by the Boltzmann distribution, nj(r) = nj0 exp(−qjΦ(r)/kBT), where kB is the Boltzmann constant and nj0 is the mean concentration of charges of species j. Identifying the instantaneous concentrations and potential in the Poisson equation with their mean-field counterparts in the Boltzmann distribution yields the Poisson–Boltzmann equation: ∇²Φ(r) = −(1/ε)(Σj qjnj0 exp(−qjΦ(r)/kBT) + ρext(r)). Solutions to this nonlinear equation are known for some simple systems. Solutions for more general systems may be obtained in the high-temperature (weak coupling) limit, qjΦ(r) ≪ kBT, by Taylor expanding the exponential: exp(−qjΦ(r)/kBT) ≈ 1 − qjΦ(r)/kBT. This approximation yields the linearized Poisson–Boltzmann equation, which is also known as the Debye–Hückel equation: ∇²Φ(r) = (Σj nj0qj²/kBT)Φ(r)/ε − (1/ε)(Σj nj0qj + ρext(r)). The second term on the right-hand side vanishes for systems that are electrically neutral. The term in parentheses, divided by ε, has the units of an inverse length squared and by dimensional analysis leads to the definition of the characteristic length scale λD = √(εkBT/Σj nj0qj²) that is commonly referred to as the Debye–Hückel length. As the only characteristic length scale in the Debye–Hückel equation, λD sets the scale for variations in the potential and in the concentrations of charged species. All charged species contribute to the Debye–Hückel length in the same way, regardless of the sign of their charges. 
For an electrically neutral system, the Poisson equation becomes ∇²Φ(r) = Φ(r)/λD² − ρext(r)/ε. To illustrate Debye screening, the potential produced by an external point charge ρext = Qδ(r) is Φ(r) = (Q/(4πε r)) e^(−r/λD). The bare Coulomb potential is exponentially screened by the medium over a distance of the Debye length: this is called Debye screening or shielding (screening effect). The Debye–Hückel length may be expressed in terms of the Bjerrum length λB as λD = 1/√(4πλB Σj nj0zj²), where zj = qj/e is the integer charge number that relates the charge on the j-th ionic species to the elementary charge e. In a plasma: For a weakly collisional plasma, Debye shielding can be introduced in a very intuitive way by taking into account the granular character of such a plasma. Let us imagine a sphere about one of its electrons, and compare the number of electrons crossing this sphere with and without Coulomb repulsion. With repulsion, this number is smaller. Therefore, according to Gauss's theorem, the apparent charge of the first electron is smaller than in the absence of repulsion. The larger the sphere radius, the larger the number of deflected electrons, and the smaller the apparent charge: this is Debye shielding. Since the global deflection of particles includes the contributions of many others, the density of the electrons does not change, in contrast with the shielding at work next to a Langmuir probe (Debye sheath). Ions make a similar contribution to shielding, because of the attractive Coulomb deflection of charges with opposite signs. In a plasma: This intuitive picture leads to an effective calculation of Debye shielding (see section II.A.2 of ). The assumption of a Boltzmann distribution is not necessary in this calculation: it works for whatever particle distribution function. The calculation also avoids approximating weakly collisional plasmas as continuous media. 
An N-body calculation reveals that the bare Coulomb acceleration of a particle by another one is modified by a contribution mediated by all other particles, a signature of Debye shielding (see section 8 of ). When starting from random particle positions, the typical time-scale for shielding to set in is the time for a thermal particle to cross a Debye length, i.e. the inverse of the plasma frequency. Therefore, in a weakly collisional plasma, collisions play an essential role by bringing about a cooperative self-organization process: Debye shielding. This shielding is important for obtaining a finite diffusion coefficient in the calculation of Coulomb scattering (Coulomb collision). In a plasma: In a non-isothermal plasma, the temperatures for electrons and heavy species may differ while the background medium may be treated as the vacuum (εr = 1), and the Debye length is λD = √((ε0kB/qe²)/(ne/Te + Σj zj²nj/Ti)), where λD is the Debye length, ε0 is the permittivity of free space, kB is the Boltzmann constant, qe is the charge of an electron, Te and Ti are the temperatures of the electrons and ions, respectively, ne is the density of electrons, and nj is the density of atomic species j, with positive ionic charge zjqe. Even in a quasineutral cold plasma, where the ion contribution virtually seems to be larger due to the lower ion temperature, the ion term is actually often dropped, giving λD = √(ε0kBTe/(neqe²)), although this is only valid when the mobility of ions is negligible compared to the process's timescale. In a plasma: Typical values In space plasmas where the electron density is relatively low, the Debye length may reach macroscopic values, such as in the magnetosphere, solar wind, interstellar medium and intergalactic medium. 
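The electron-only expression with the ion term dropped can be sketched numerically. A minimal Python example; the solar-wind-like density and temperature in the usage line are illustrative assumptions, not values from the text:

```python
import math

EPS0 = 8.8541878128e-12   # permittivity of free space, F/m
K_B  = 1.380649e-23       # Boltzmann constant, J/K
Q_E  = 1.602176634e-19    # elementary charge, C

def electron_debye_length(n_e, T_e):
    """Electron Debye length lambda_D = sqrt(eps0*kB*Te/(ne*qe^2)) in metres,
    for electron density n_e (m^-3) and electron temperature T_e (K).
    Valid only when the ion term can be dropped, as discussed in the text."""
    return math.sqrt(EPS0 * K_B * T_e / (n_e * Q_E**2))

# Illustrative space-plasma values: n_e = 1e7 m^-3 and T_e = 1e5 K give a
# macroscopic Debye length of several metres, as the text describes.
print(electron_debye_length(1e7, 1e5))
```

For laboratory plasmas with much higher densities the same function returns micrometre-scale lengths, illustrating how strongly λD depends on density.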
See the table here below: In an electrolyte solution: In an electrolyte or a colloidal suspension, the Debye length for a monovalent electrolyte is usually denoted with the symbol κ−1: κ−1 = √(ε0εrkBT/(2e²I)), where I is the ionic strength of the electrolyte in number/m³ units, ε0 is the permittivity of free space, εr is the dielectric constant, kB is the Boltzmann constant, T is the absolute temperature in kelvins, and e is the elementary charge; or, for a symmetric monovalent electrolyte, κ−1 = √(εrε0RT/(2F²C0)), where R is the gas constant, F is the Faraday constant, and C0 is the electrolyte concentration in molar units (M or mol/L). Alternatively, κ−1 = 1/√(8πλBNAI/10²⁴), where λB is the Bjerrum length of the medium in nm, I is the ionic strength in molar units, and the factor 10²⁴ derives from transforming unit volume from cubic dm to cubic nm. In an electrolyte solution: For deionized water at room temperature, at pH = 7, κ−1 ≈ 1 μm. At room temperature (20 °C or 70 °F), one can consider in water the relation κ−1(nm) = 0.304/√(I(M)), where κ−1 is expressed in nanometres (nm) and I is the ionic strength expressed in molar (M or mol/L). There is a method of estimating an approximate value of the Debye length in liquids using conductivity, which is described in ISO Standard, and the book. In semiconductors: The Debye length has become increasingly significant in the modeling of solid state devices as improvements in lithographic technologies have enabled smaller geometries. The Debye length of semiconductors is given by LD = √(εkBT/(q²Ndop)), where ε is the dielectric constant, kB is the Boltzmann constant, T is the absolute temperature in kelvins, q is the elementary charge, and Ndop is the net density of dopants (either donors or acceptors). When doping profiles exceed the Debye length, majority carriers no longer behave according to the distribution of the dopants. Instead, a measure of the profile of the doping gradients provides an "effective" profile that better matches the profile of the majority carrier density. In semiconductors: In the context of solids, the Thomas–Fermi screening length may be required instead of the Debye length.
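As a numerical cross-check of the electrolyte formulas above, here is a minimal Python sketch that computes κ−1 for a symmetric monovalent electrolyte from the SI expression; the water permittivity εr = 78.4 and T = 298.15 K are assumed values for room-temperature water, not from the text:

```python
import math

EPS0 = 8.8541878128e-12   # permittivity of free space, F/m
K_B  = 1.380649e-23       # Boltzmann constant, J/K
Q_E  = 1.602176634e-19    # elementary charge, C
N_A  = 6.02214076e23      # Avogadro constant, 1/mol

def debye_length_electrolyte(c0_molar, eps_r=78.4, T=298.15):
    """kappa^-1 in metres for a symmetric monovalent (1:1) electrolyte.
    c0_molar: electrolyte concentration in mol/L.
    Uses kappa^-1 = sqrt(eps0*eps_r*kB*T/(2*e^2*I)) with the ionic
    strength converted to number/m^3 (I = c0 for a 1:1 salt)."""
    sum_nz2 = 2.0 * c0_molar * 1000.0 * N_A   # sum of n_i*z_i^2, in 1/m^3
    return math.sqrt(EPS0 * eps_r * K_B * T / (Q_E**2 * sum_nz2))

# 0.1 M 1:1 salt in water: about 0.96 nm, matching 0.304/sqrt(0.1) nm
# from the rule of thumb in the text.
print(debye_length_electrolyte(0.1) * 1e9)
```

Note the inverse-square-root scaling: quadrupling the concentration halves the Debye length.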
**Aerial bundled cable** Aerial bundled cable: Aerial bundled cables (also aerial bundled conductors or simply ABC) are overhead power lines using several insulated phase conductors bundled tightly together, usually with a bare neutral conductor. This contrasts with the traditional practice of using uninsulated conductors separated by air gaps. This variation of bundled conductors utilizes the same principles as overhead power lines, except that the conductors are close together to the point of touching, and each conductor is surrounded by an insulating layer (except for the neutral line). Aerial bundled cable: The main objections to the traditional design are that the multiple conductors are considered unappealing, and external forces (such as high winds) can cause them to touch and short circuit. The resultant sparks have been a cause of bushfires in drier climates. In the UK, where some supplies to rural properties have been converted from a TT earthing system to PME/MEN, concerns have been expressed that the lower conductor alone may be broken (by a high vehicle or falling tree, for example) while the upper phase conductors remain intact. This is a potentially dangerous fault condition. With ABC, a simultaneous disconnection of all conductors is more likely. Aerial bundled cable: In moister climates, tree growth is a significant problem for overhead power lines. ABC will not arc over if touched by tree branches. Although persistent rubbing is still a problem, tree-trimming costs can be reduced. Areas with large trees and branches falling on lines are a problem for ABC, as the line degrades over time. The very large strain forces can crack and break the insulation, leading to short-circuit failures, which can in turn cause ground fires from dripping molten insulation. 
Low voltage ABC has already been developed in several countries across the globe and promises to be cheaper, safer, more reliable, require less tree clearing and pruning, be more aesthetic, be less labor-intensive, require less maintenance and eliminate bushfires initiated by conductor clashing. Advantages: Relative immunity to short circuits caused by external forces (wind, fallen branches), unless they abrade the insulation. Can stand in close proximity to trees/buildings and will not generate sparks if touched. Little to no tree trimming is necessary. Simpler installation, as crossbars and insulators are not required. Ease of erection and stringing, less labor-intensive, fewer construction resources needed. More aesthetically appealing. Can be installed in a narrower right-of-way. At junction poles, insulating bridging wires are needed to connect non-insulated wires at either side. ABC can dispense with one of these splices. Less risk of a neutral-only break from tree or vehicle damage, increasing safety with TN-C-S systems. Significantly improved safety for linespersons, particularly when working on live conductors. Electricity theft is made harder, and more obvious to detect. Less maintenance and fewer line inspections required. Improved reliability in comparison with both bare conductor overhead systems and underground systems. Insulated conductors prevent accidental contact, and supply can be maintained temporarily in the event of a suspension system collapse. Disadvantages: Additional cost for the cable itself. Insulation degrades due to sun exposure, though the critical insulation between the wires is somewhat shielded from the sun. Shorter spans and more poles due to increased weight. Can lead to much longer repair times for installations in hilly areas, as the much higher line weights require bigger and more specialized equipment to repair. 
Older installations are known to cause fires in areas where large falling trees or branches regularly cause breaks in lines or in insulation, leading to short circuits, which can then lead to burning insulation dripping to the ground and starting ground fires. Other failure modes include punctures, electrical tracking, and erosion. International usage: Australia ABC has been introduced into Australian power systems progressively since 1983, partly in response to bushfires sparked by old wires touching. In some bushfire-prone areas, though, older ABC installations are now creating fires, particularly at points where the cables have been damaged or have degraded over time. In the Dandenong Ranges area of Victoria in 2014, medium-voltage (11-22 kV) ABC was being replaced with underground cable due to the high failure rates of HV ABC, degraded cable, the cost of repairs and maintenance, and bushfire risk; life expectancy was just 10 years, when the original life was expected to be approximately 30 years. Ireland Low voltage ABC lines were first installed on the rural Irish distribution networks in 1981. It is not known where ABC was first installed. Pakistan K-Electric first introduced ABC in the Gulshan-e-Iqbal area of Karachi, with a pilot project completed in March 2014. Following a 90% reduction in theft losses, the decision was made to roll out the new cabling across the entire K-Electric distribution network of Karachi. Sri Lanka Low voltage ABC lines are commonly installed in urban distribution systems.
**Windows 10 version 1607** Windows 10 version 1607: Windows 10 Anniversary Update (also known as version 1607 and codenamed "Redstone 1") is the second major update to Windows 10 and the first in a series of updates under the Redstone codenames. It carries the build number 10.0.14393. This update, as the name implies, celebrates the first anniversary of Windows 10; it was released one year after Windows 10's original launch. PC version history: The first preview was released on December 16, 2015. The final release was made available to Windows Insiders on July 18, 2016, followed by a public release on August 2. This release of Windows 10 is supported for users of the Current Branch (CB), Current Branch for Business (CBB) and Long-Term Servicing Branch (LTSB).
**WPFR** WPFR: WPFR may refer to: WPFR (AM), a radio station (1480 AM) licensed to serve Terre Haute, Indiana, United States; WEHP (FM), a radio station (93.7 FM) licensed to serve Clinton, Indiana, which held the call signs WPFR from 1997 to 2000 and WPFR-FM from 2000 to 2023; WIBQ, a radio station (1300 AM) licensed to serve Terre Haute, Indiana, which held the call sign WPFR from 1983 to 1987; WBOW, a radio station (102.7 FM) licensed to serve Terre Haute, Indiana, which held the call sign WPFR from 1961 to 1992.
**Crystal radio** Crystal radio: A crystal radio receiver, also called a crystal set, is a simple radio receiver, popular in the early days of radio. It uses only the power of the received radio signal to produce sound, needing no external power. It is named for its most important component, a crystal detector, originally made from a piece of crystalline mineral such as galena. This component is now called a diode. Crystal radio: Crystal radios are the simplest type of radio receiver and can be made with a few inexpensive parts, such as a wire for an antenna, a coil of wire, a capacitor, a crystal detector, and earphones (because a crystal set has insufficient power for a loudspeaker). However, they are passive receivers, while other radios use an amplifier powered by current from a battery or wall outlet to make the radio signal louder. Thus, crystal sets produce rather weak sound and must be listened to with sensitive earphones, and can receive only stations within a limited range of the transmitter. The rectifying property of a contact between a mineral and a metal was discovered in 1874 by Karl Ferdinand Braun. Crystals were first used as a detector of radio waves in 1894 by Jagadish Chandra Bose, in his microwave optics experiments. They were first used as a demodulator for radio communication reception in 1902 by G. W. Pickard. Crystal radios were the first widely used type of radio receiver, and the main type used during the wireless telegraphy era. Sold and homemade by the millions, the inexpensive and reliable crystal radio was a major driving force in the introduction of radio to the public, contributing to the development of radio as an entertainment medium with the beginning of radio broadcasting around 1920. Around 1920, crystal sets were superseded by the first amplifying receivers, which used vacuum tubes. 
With this technological advance, crystal sets became obsolete for commercial use but continued to be built by hobbyists, youth groups, and the Boy Scouts, mainly as a way of learning about the technology of radio. They are still sold as educational devices, and there are groups of enthusiasts devoted to their construction. Crystal radios receive amplitude modulated (AM) signals, although FM designs have been built. They can be designed to receive almost any radio frequency band, but most receive the AM broadcast band. A few receive shortwave bands, but strong signals are required. The first crystal sets received wireless telegraphy signals broadcast by spark-gap transmitters at frequencies as low as 20 kHz. History: Crystal radio emerged from a long, partly obscure chain of discoveries in the late 19th century that gradually evolved into more and more practical radio receivers in the early 20th century. The earliest practical use of crystal radio was to receive Morse code radio signals transmitted from spark-gap transmitters by early amateur radio experimenters. As electronics evolved, the ability to send voice signals by radio caused a technological explosion around 1920 that evolved into today's radio broadcasting industry. History: Early years Early radio telegraphy used spark gap and arc transmitters as well as high-frequency alternators running at radio frequencies. The coherer was the first means of detecting a radio signal. It, however, lacked the sensitivity to detect weak signals. History: In the early 20th century, various researchers discovered that certain metallic minerals, such as galena, could be used to detect radio signals. Bengali physicist Jagadish Chandra Bose was the first to use a crystal as a radio wave detector, using galena detectors to receive microwaves starting around 1894. In 1901, Bose filed for a U.S. 
patent for "A Device for Detecting Electrical Disturbances" that mentioned the use of a galena crystal; this was granted in 1904, #755840. On August 30, 1906, Greenleaf Whittier Pickard filed a patent for a silicon crystal detector, which was granted on November 20, 1906. A crystal detector includes a crystal, a thin wire or metal probe that contacts the crystal, and the stand or enclosure that holds those components in place. The most common crystal used is a small piece of galena; pyrite was also often used, as it was a more easily adjusted and stable mineral, and quite sufficient for urban signal strengths. Several other minerals also performed well as detectors. Another benefit of crystals was that they could demodulate amplitude modulated signals. This device brought radiotelephones and voice broadcast to a public audience. Crystal sets represented an inexpensive and technologically simple method of receiving these signals at a time when the embryonic radio broadcasting industry was beginning to grow. History: 1920s and 1930s In 1922 the (then named) United States Bureau of Standards released a publication entitled Construction and Operation of a Simple Homemade Radio Receiving Outfit. This article showed how almost any family having a member who was handy with simple tools could make a radio and tune into weather, crop prices, time, news and the opera. This design was significant in bringing radio to the general public. NBS followed that with a more selective two-circuit version, Construction and Operation of a Two-Circuit Radio Receiving Equipment With Crystal Detector, which was published the same year and is still frequently built by enthusiasts today. History: At the beginning of the 20th century, radio had little commercial use, and radio experimentation was a hobby for many people. Some historians consider the autumn of 1920 to be the beginning of commercial radio broadcasting for entertainment purposes. 
Pittsburgh station KDKA, owned by Westinghouse, received its license from the United States Department of Commerce just in time to broadcast the Harding-Cox presidential election returns. In addition to reporting on special events, broadcasts to farmers of crop price reports were an important public service in the early days of radio. History: In 1921, factory-made radios were very expensive. Since less-affluent families could not afford to own one, newspapers and magazines carried articles on how to build a crystal radio with common household items. To minimize the cost, many of the plans suggested winding the tuning coil on empty pasteboard containers such as oatmeal boxes, which became a common foundation for homemade radios. History: Crystodyne In early 1920s Russia, Oleg Losev was experimenting with applying voltage biases to various kinds of crystals for the manufacturing of radio detectors. The result was astonishing: with a zincite (zinc oxide) crystal he gained amplification. This was a negative resistance phenomenon, decades before the development of the tunnel diode. After the first experiments, Losev built regenerative and superheterodyne receivers, and even transmitters. History: A crystodyne could be produced under primitive conditions; it could be made in a rural forge, unlike vacuum tubes and modern semiconductor devices. However, this discovery was not supported by the authorities and was soon forgotten; no device was produced in mass quantity beyond a few examples for research. "Foxhole radios" In addition to mineral crystals, the oxide coatings of many metal surfaces act as semiconductors (detectors) capable of rectification. Crystal radios have been improvised using detectors made from rusty nails, corroded pennies, and many other common objects. 
History: When Allied troops were halted near Anzio, Italy, during the spring of 1944, powered personal radio receivers were strictly prohibited, as the Germans had equipment that could detect the local oscillator signal of superheterodyne receivers. Crystal sets lack power-driven local oscillators, hence they could not be detected. Some resourceful soldiers constructed "crystal" sets from discarded materials to listen to news and music. One type used a blue steel razor blade and a pencil lead for a detector. The lead point touching the semiconducting oxide coating (magnetite) on the blade formed a crude point-contact diode. By carefully adjusting the pencil lead on the surface of the blade, they could find spots capable of rectification. The sets were dubbed "foxhole radios" by the popular press, and they became part of the folklore of World War II. History: In some German-occupied countries during WW2 there were widespread confiscations of radio sets from the civilian population. This led determined listeners to build their own clandestine receivers, which often amounted to little more than a basic crystal set. Anyone doing so risked imprisonment or even death if caught, and in most of Europe the signals from the BBC (or other allied stations) were not strong enough to be received on such a set. History: "Rocket Radio" In the late 1950s, the compact "rocket radio", shaped like a rocket, typically imported from Japan, was introduced, and gained moderate popularity. It used a piezoelectric crystal earpiece (described later in this article), a ferrite core to reduce the size of the tuning coil (also described later), and a small germanium fixed diode, which did not require adjustment. To tune in stations, the user moved the rocket nosepiece, which, in turn, moved a ferrite core inside a coil, changing the inductance in a tuned circuit. 
Earlier crystal radios suffered severely reduced Q, and hence selectivity, due to the electrical load of the earphone or earpiece. Furthermore, with its efficient earpiece, the "rocket radio" did not require a large antenna to gather enough signal. With much higher Q, it could typically tune in several strong local stations, while an earlier radio might only receive one station, possibly with other stations heard in the background. History: For listening in areas where an electric outlet was not available, the "rocket radio" served as an alternative to the vacuum tube portable radios of the day, which required expensive, heavy batteries. Children could hide "rocket radios" under the covers to listen to radio when their parents thought they were sleeping. Children could take the radios to public swimming pools and listen to radio when they got out of the water, clipping the ground wire to a chain-link fence surrounding the pool. The rocket radio was also used as an emergency radio, because it did not require batteries or an AC outlet. History: The rocket radio was available in several rocket styles, as well as other styles that featured the same basic circuit. Transistor radios had become available at the time, but were expensive. Once those radios dropped in price, the rocket radio declined in popularity. History: Later years While it never regained the popularity and general use that it enjoyed at its beginnings, the crystal radio circuit is still used. The Boy Scouts have kept the construction of a radio set in their program since the 1920s. A large number of prefabricated novelty items and simple kits could be found through the 1950s and 1960s, and many children with an interest in electronics built one. History: Building crystal radios was a craze in the 1920s, and again in the 1950s. Recently, hobbyists have started designing and building examples of the early instruments. 
Much effort goes into the visual appearance of these sets as well as their performance. Annual crystal radio 'DX' contests (long distance reception) and building contests allow these set owners to compete with each other and form a community of interest in the subject. Basic principles: A crystal radio can be thought of as a radio receiver reduced to its essentials. It consists of at least these components: An antenna in which electric currents are induced by radio waves. Basic principles: A resonant circuit (tuned circuit) which selects the frequency of the desired radio station from all the radio signals received by the antenna. The tuned circuit consists of a coil of wire (called an inductor) and a capacitor connected together. The circuit has a resonant frequency, and allows radio waves at that frequency to pass through to the detector while largely blocking waves at other frequencies. Either the coil or the capacitor (or both) is adjustable, allowing the circuit to be tuned to different frequencies. In some circuits a capacitor is not used and the antenna serves this function, as an antenna that is shorter than a quarter-wavelength of the radio waves it is meant to receive is capacitive. Basic principles: A semiconductor crystal detector that demodulates the radio signal to extract the audio signal (modulation). The crystal detector functions as a square law detector, demodulating the radio frequency alternating current to its audio frequency modulation. The detector's audio frequency output is converted to sound by the earphone. Early sets used a "cat whisker detector" consisting of a small piece of crystalline mineral such as galena with a fine wire touching its surface. The crystal detector was the component that gave crystal radios their name. Modern sets use modern semiconductor diodes, although some hobbyists still experiment with crystal or other detectors. Basic principles: An earphone to convert the audio signal to sound waves so they can be heard.
The low power produced by a crystal receiver is insufficient to power a loudspeaker, hence earphones are used. As a crystal radio has no power supply, the sound power produced by the earphone comes solely from the transmitter of the radio station being received, via the radio waves captured by the antenna. The power available to a receiving antenna decreases with the square of its distance from the radio transmitter. Even for a powerful commercial broadcasting station, if it is more than a few miles from the receiver the power received by the antenna is very small, typically measured in microwatts or nanowatts. In modern crystal sets, signals as weak as 50 picowatts at the antenna can be heard. Crystal radios can receive such weak signals without using amplification only due to the great sensitivity of human hearing, which can detect sounds with an intensity of only 10⁻¹⁶ W/cm². Therefore, crystal receivers have to be designed to convert the energy from the radio waves into sound waves as efficiently as possible. Even so, they are usually only able to receive stations within distances of about 25 miles for AM broadcast stations, although the radiotelegraphy signals used during the wireless telegraphy era could be received at hundreds of miles, and crystal receivers were even used for transoceanic communication during that period. Design: Commercial passive receiver development was abandoned with the advent of reliable vacuum tubes around 1920, and subsequent crystal radio research was primarily done by radio amateurs and hobbyists. Many different circuits have been used. The following sections discuss the parts of a crystal radio in greater detail. Design: Antenna The antenna converts the energy in the electromagnetic radio waves to an alternating electric current in the antenna, which is connected to the tuning coil.
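The inverse-square relationship described above can be illustrated with a short calculation. This is a minimal sketch under a free-space assumption; the reference power and distances below are made-up illustrative values, not figures from the article:

```python
def received_power(p_ref_w: float, d_ref_m: float, d_m: float) -> float:
    """Scale the power an antenna captures by the inverse square of the
    distance from the transmitter (free-space assumption; real ground-wave
    propagation over terrain decays faster than this)."""
    return p_ref_w * (d_ref_m / d_m) ** 2

# Hypothetical example: an antenna that captures 1 microwatt at 1 km from
# the transmitter captures only 40 nanowatts at 5 km.
p = received_power(1e-6, 1_000, 5_000)
```

Quintupling the distance cuts the captured power by a factor of 25, which is why a crystal set's usable range is so short compared with an amplified receiver.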
Since, in a crystal radio, all the power comes from the antenna, it is important that the antenna collect as much power from the radio wave as possible. The larger an antenna, the more power it can intercept. Antennas of the type commonly used with crystal sets are most effective when their length is close to a multiple of a quarter-wavelength of the radio waves they are receiving. Since the length of the waves used with crystal radios is very long (AM broadcast band waves are 182–566 m or 597–1857 ft. long) the antenna is made as long as possible, from a long wire, in contrast to the whip antennas or ferrite loopstick antennas used in modern radios. Design: Serious crystal radio hobbyists use "inverted L" and "T" type antennas, consisting of hundreds of feet of wire suspended as high as possible between buildings or trees, with a feed wire attached in the center or at one end leading down to the receiver. However, more often, random lengths of wire dangling out windows are used. A popular practice in early days (particularly among apartment dwellers) was to use existing large metal objects, such as bedsprings, fire escapes, and barbed wire fences as antennas. Design: Ground The wire antennas used with crystal receivers are monopole antennas which develop their output voltage with respect to ground. The receiver thus requires a connection to ground (the earth) as a return circuit for the current. The ground wire was attached to a radiator, water pipe, or a metal stake driven into the ground. In early days if an adequate ground connection could not be made a counterpoise was sometimes used. A good ground is more important for crystal sets than it is for powered receivers, as crystal sets are designed to have a low input impedance needed to transfer power efficiently from the antenna. A low resistance ground connection (preferably below 25 Ω) is necessary because any resistance in the ground reduces available power from the antenna. 
In contrast, modern receivers are voltage-driven devices, with high input impedance, hence little current flows in the antenna/ground circuit. Also, mains powered receivers are grounded adequately through their power cords, which are in turn attached to the earth by way of a well established ground. Design: Tuned circuit The tuned circuit, consisting of a coil and a capacitor connected together, acts as a resonator, similar to a tuning fork. Electric charge, induced in the antenna by the radio waves, flows rapidly back and forth between the plates of the capacitor through the coil. The circuit has a high impedance at the desired radio signal's frequency, but a low impedance at all other frequencies. Hence, signals at undesired frequencies pass through the tuned circuit to ground, while the signal at the desired frequency is passed on to the detector (diode) and heard in the earpiece. The frequency of the station received is the resonant frequency f of the tuned circuit, determined by the capacitance C of the capacitor and the inductance L of the coil: f = 1/(2π√(LC)). The circuit can be adjusted to different frequencies by varying the inductance (L), the capacitance (C), or both, "tuning" the circuit to the frequencies of different radio stations. In the lowest-cost sets, the inductor was made variable via a spring contact pressing against the windings that could slide along the coil, thereby introducing a larger or smaller number of turns of the coil into the circuit, varying the inductance. Alternatively, a variable capacitor is used to tune the circuit. Some modern crystal sets use a ferrite core tuning coil, in which a ferrite magnetic core is moved into and out of the coil, thereby varying the inductance by changing the magnetic permeability (this eliminated the less reliable mechanical contact). The antenna is an integral part of the tuned circuit and its reactance contributes to determining the circuit's resonant frequency.
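The resonance formula above can be checked with plausible crystal-set component values. The 240 µH coil and 365 pF variable capacitor below are assumed, illustrative values (365 pF was a common maximum for broadcast-band variable capacitors), not figures taken from the article:

```python
import math

def resonant_frequency(l_henry: float, c_farad: float) -> float:
    """Resonant frequency of an LC tuned circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

# With the capacitor at its 365 pF maximum, a 240 uH coil resonates near the
# bottom of the AM broadcast band; reducing the capacitance tunes the
# circuit upward in frequency.
f_low = resonant_frequency(240e-6, 365e-12)  # roughly 538 kHz
```

Reducing C (or L) raises f, which is exactly what sliding the spring contact along the coil, or rotating the variable capacitor, accomplishes mechanically.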
Antennas usually act as a capacitance, as antennas shorter than a quarter-wavelength have capacitive reactance. Many early crystal sets did not have a tuning capacitor, and relied instead on the capacitance inherent in the wire antenna (in addition to significant parasitic capacitance in the coil) to form the tuned circuit with the coil. Design: The earliest crystal receivers did not have a tuned circuit at all, and just consisted of a crystal detector connected between the antenna and ground, with an earphone across it. Since this circuit lacked any frequency-selective elements besides the broad resonance of the antenna, it had little ability to reject unwanted stations, so all stations within a wide band of frequencies were heard in the earphone (in practice the most powerful usually drowns out the others). It was used in the earliest days of radio, when only one or two stations were within a crystal set's limited range. Design: Impedance matching An important principle used in crystal radio design to transfer maximum power to the earphone is impedance matching. The maximum power is transferred from one part of a circuit to another when the impedance of one circuit is the complex conjugate of that of the other; this implies that the two circuits should have equal resistance. However, in crystal sets, the impedance of the antenna-ground system (around 10–200 ohms) is usually lower than the impedance of the receiver's tuned circuit (thousands of ohms at resonance), and also varies depending on the quality of the ground attachment, length of the antenna, and the frequency to which the receiver is tuned.Therefore, in improved receiver circuits, in order to match the antenna impedance to the receiver's impedance, the antenna was connected across only a portion of the tuning coil's turns. This made the tuning coil act as an impedance matching transformer (in an autotransformer connection) in addition to providing the tuning function. 
The antenna's low resistance was increased (transformed) by a factor equal to the square of the turns ratio (the ratio of the coil's total turns to the number of turns across which the antenna was connected), to match the resistance across the tuned circuit. In the "two-slider" circuit, popular during the wireless era, both the antenna and the detector circuit were attached to the coil with sliding contacts, allowing (interactive) adjustment of both the resonant frequency and the turns ratio. Alternatively a multiposition switch was used to select taps on the coil. These controls were adjusted until the station sounded loudest in the earphone. Design: Problem of selectivity One of the drawbacks of crystal sets is that they are vulnerable to interference from stations near in frequency to the desired station. Often two or more stations are heard simultaneously. This is because the simple tuned circuit does not reject nearby signals well; it allows a wide band of frequencies to pass through, that is, it has a large bandwidth (low Q factor) compared to modern receivers, giving the receiver low selectivity. The crystal detector worsened the problem because of its relatively low resistance: it "loaded" the tuned circuit, drawing significant current, damping the oscillations, and reducing the Q factor, so the circuit allowed through a broader band of frequencies. In many circuits, the selectivity was improved by connecting the detector and earphone circuit to a tap across only a fraction of the coil's turns. This reduced the impedance loading of the tuned circuit, as well as improving the impedance match with the detector. Design: Inductive coupling In more sophisticated crystal receivers, the tuning coil is replaced with an adjustable air core antenna coupling transformer which improves the selectivity by a technique called loose coupling.
This consists of two magnetically coupled coils of wire, one (the primary) attached to the antenna and ground and the other (the secondary) attached to the rest of the circuit. The current from the antenna creates an alternating magnetic field in the primary coil, which induces a current in the secondary coil; this current is then rectified and powers the earphone. Each of the coils functions as a tuned circuit: the primary resonates with the capacitance of the antenna (or sometimes another capacitor), and the secondary resonates with the tuning capacitor. Both the primary and secondary are tuned to the frequency of the station. The two circuits interact to form a resonant transformer. Design: Reducing the coupling between the coils, by physically separating them so that less of the magnetic field of one intersects the other, reduces the mutual inductance, narrows the bandwidth, and results in much sharper, more selective tuning than that produced by a single tuned circuit. However, the looser coupling also reduces the power of the signal passed to the secondary circuit. The transformer was made with adjustable coupling, to allow the listener to experiment with various settings to gain the best reception. Design: One design common in early days, called a "loose coupler", consisted of a smaller secondary coil inside a larger primary coil. The smaller coil was mounted on a rack so it could be slid linearly in or out of the larger coil. If radio interference was encountered, the smaller coil would be slid further out of the larger, loosening the coupling, narrowing the bandwidth, and thereby rejecting the interfering signal. Design: The antenna coupling transformer also functioned as an impedance matching transformer, allowing a better match of the antenna impedance to the rest of the circuit.
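The impedance-transforming behavior of such a tapped coil or coupling transformer follows the square of the turns ratio. A minimal sketch, in which the 50 Ω antenna resistance and the turn counts are assumed, illustrative values:

```python
def reflected_impedance(z_ohm: float, n_total: int, n_tap: int) -> float:
    """Impedance seen across the full winding when a load or source of
    z_ohm is connected across n_tap of the coil's n_total turns.
    The transformation factor is the square of the turns ratio."""
    return z_ohm * (n_total / n_tap) ** 2

# A 50-ohm antenna tapped across 10 turns of a 90-turn coil appears as
# 50 * (90/10)**2 = 4050 ohms across the whole tuned circuit, much closer
# to the tuned circuit's resonant impedance than the raw antenna.
z = reflected_impedance(50, 90, 10)
```

Moving the antenna tap up or down the coil therefore trades off impedance match against loading, which is why the sliders and tap switches described above had to be adjusted by ear for the loudest signal.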
One or both of the coils usually had several taps which could be selected with a switch, allowing adjustment of the number of turns of that transformer and hence the "turns ratio". Coupling transformers were difficult to adjust, because the three adjustments, the tuning of the primary circuit, the tuning of the secondary circuit, and the coupling of the coils, were all interactive, and changing one affected the others. Design: Crystal detector The crystal detector demodulates the radio frequency signal, extracting the modulation (the audio signal which represents the sound waves) from the radio frequency carrier wave. In early receivers, a type of crystal detector often used was a "cat whisker detector". The point of contact between the wire and the crystal acted as a semiconductor diode. The cat whisker detector constituted a crude Schottky diode that allowed current to flow better in one direction than in the opposite direction. Modern crystal sets use modern semiconductor diodes. The crystal functions as an envelope detector, rectifying the alternating current radio signal to a pulsing direct current, the peaks of which trace out the audio signal, so it can be converted to sound by the earphone, which is connected to the detector. Design: The rectified current from the detector has radio frequency pulses from the carrier frequency in it, which are blocked by the high inductive reactance of the coils in early earphones and do not pass through them well. Hence, a small capacitor called a bypass capacitor is often placed across the earphone terminals; its low reactance at radio frequency bypasses these pulses around the earphone to ground. In some sets the earphone cord had enough capacitance that this component could be omitted. Only certain sites on the crystal surface functioned as rectifying junctions, and the device was very sensitive to the pressure of the crystal-wire contact, which could be disrupted by the slightest vibration.
Therefore, a usable contact point had to be found by trial and error before each use. The operator dragged the wire across the crystal surface until a radio station or "static" sounds were heard in the earphones. Alternatively, some radios (circuit, right) used a battery-powered buzzer attached to the input circuit to adjust the detector. The spark at the buzzer's electrical contacts served as a weak source of static, so when the detector began working, the buzzing could be heard in the earphones. The buzzer was then turned off, and the radio tuned to the desired station. Design: Galena (lead sulfide) was the most common crystal used, but various other types of crystals were also used, the most common being iron pyrite (fool's gold, FeS2), silicon, molybdenite (MoS2), silicon carbide (carborundum, SiC), and a zincite-bornite (ZnO-Cu5FeS4) crystal-to-crystal junction trade-named Perikon. Crystal radios have also been improvised from a variety of common objects, such as blue steel razor blades and lead pencils, rusty needles, and pennies. In these, a semiconducting layer of oxide or sulfide on the metal surface is usually responsible for the rectifying action. In modern sets, a semiconductor diode is used for the detector, which is much more reliable than a crystal detector and requires no adjustments. Germanium diodes (or sometimes Schottky diodes) are used instead of silicon diodes, because their lower forward voltage drop (roughly 0.3 V compared to 0.6 V) makes them more sensitive. All semiconductor detectors function rather inefficiently in crystal receivers, because the voltage input to the detector is too low to produce much difference between the stronger conduction in the forward direction and the weaker conduction in the reverse direction. To improve the sensitivity of some of the early crystal detectors, such as silicon carbide, a small forward bias voltage was applied across the detector by a battery and potentiometer.
The bias moves the diode's operating point higher on the detection curve, producing more signal voltage at the expense of less signal current (higher impedance). There is a limit to the benefit this produces, depending on the other impedances of the radio; the improvement comes from shifting the DC operating point to a more favorable voltage-current point (impedance) on the junction's I-V curve. The battery did not power the radio; it only provided the biasing voltage, which required little power. Design: Earphones The requirements for earphones used in crystal sets are different from earphones used with modern audio equipment. They have to be efficient at converting the electrical signal energy to sound waves, while most modern earphones sacrifice efficiency in order to gain high fidelity reproduction of the sound. In early homebuilt sets, the earphones were the most costly component. Design: The early earphones used with wireless-era crystal sets had moving iron drivers that worked in a way similar to the horn loudspeakers of the period. Each earpiece contained a permanent magnet about which was a coil of wire forming an electromagnet. Both magnetic poles were close to a steel diaphragm. When the audio signal from the radio passed through the electromagnet's windings, the current created a varying magnetic field that augmented or diminished the field of the permanent magnet. This varied the force of attraction on the diaphragm, causing it to vibrate. The vibrations of the diaphragm push and pull on the air in front of it, creating sound waves. Standard headphones used in telephone work had a low impedance, often 75 Ω, and required more current than a crystal radio could supply.
Therefore, the type used with crystal set radios (and other sensitive equipment) was wound with more turns of finer wire, giving it a high impedance of 2000–8000 Ω. Modern crystal sets use piezoelectric crystal earpieces, which are much more sensitive and also smaller. They consist of a piezoelectric crystal with electrodes attached to each side, glued to a light diaphragm. When the audio signal from the radio set is applied to the electrodes, it causes the crystal to vibrate, vibrating the diaphragm. Crystal earphones are designed as ear buds that plug directly into the ear canal of the wearer, coupling the sound more efficiently to the eardrum. Their resistance is much higher (typically megohms) so they do not greatly "load" the tuned circuit, allowing increased selectivity of the receiver. The piezoelectric earphone's higher resistance, in parallel with its capacitance of around 9 pF, creates a filter that allows the passage of low frequencies, but blocks the higher frequencies. In that case a bypass capacitor is not needed (although in practice a small one of around 0.68 to 1 nF is often used to help improve quality), but instead a 10–100 kΩ resistor must be added in parallel with the earphone's input. Although the low power produced by crystal radios is typically insufficient to drive a loudspeaker, some homemade 1960s sets have used one, with an audio transformer to match the low impedance of the speaker to the circuit. Similarly, modern low-impedance (8 Ω) earphones cannot be used unmodified in crystal sets because the receiver does not produce enough current to drive them. They are sometimes used by adding an audio transformer to match their impedance with the higher impedance of the driving antenna circuit.
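The matching transformer mentioned above can be sized with the usual impedance-ratio rule: the required turns ratio is the square root of the impedance ratio. A small sketch, in which the 20 kΩ detector-side impedance is an assumed, illustrative figure rather than one stated in the article:

```python
import math

def matching_turns_ratio(z_source_ohm: float, z_load_ohm: float) -> float:
    """Primary-to-secondary turns ratio of a transformer that matches a
    source impedance to a load impedance: n = sqrt(Z_source / Z_load)."""
    return math.sqrt(z_source_ohm / z_load_ohm)

# Matching modern 8-ohm earphones to an assumed 20-kilohm detector-side
# impedance calls for a step-down transformer of about 50:1.
n = matching_turns_ratio(20_000, 8)
```

The same rule explains the loudspeaker experiments: a speaker's few ohms must be stepped up by a large ratio before the crystal set's high-impedance output can drive it at all.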
Use as a power source: A crystal radio tuned to a strong local transmitter can be used as a power source for a second amplified receiver of a distant station that cannot be heard without amplification. There is a long history of unsuccessful attempts and unverified claims to recover the power in the carrier of the received signal itself. Traditional crystal sets use half-wave rectifiers. As AM signals have a modulation factor of only 30% by voltage at peaks, no more than 9% of received signal power (P = U²/R) is actual audio information, and 91% is just rectified DC voltage. The 30% figure is the standard used for radio testing, and is based on the average modulation factor for speech. Properly designed and managed AM transmitters can be run at 100% modulation on peaks without causing distortion or "splatter" (excess sideband energy that radiates outside the intended signal bandwidth). Given that the audio signal is unlikely to be at peak all the time, the ratio of energy is, in practice, even greater. Considerable effort was made to convert this DC voltage into sound energy. Some earlier attempts include a one-transistor amplifier in 1966. Sometimes efforts to recover this power are confused with other efforts to produce more efficient detection. This history continues today with designs as elaborate as an "inverted two-wave switching power unit".
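The carrier-versus-sideband power split underlying these percentages follows from the standard textbook AM power relation (a general formula, not one stated in the article): for modulation index m, the total sideband power is m²/2 times the carrier power, and only the sidebands carry audio information.

```python
def sideband_power_fraction(m: float) -> float:
    """Fraction of total transmitted AM power carried in the sidebands
    (the part conveying audio information) for modulation index m.
    Carrier power is normalized to 1; total sideband power is m**2 / 2."""
    sidebands = m**2 / 2
    return sidebands / (1 + sidebands)

# At 100% modulation (m = 1.0), a third of the power is in the sidebands;
# at the 30% speech-average figure, the fraction falls to roughly 4.3%.
full = sideband_power_fraction(1.0)
speech = sideband_power_fraction(0.3)
```

Either way, the carrier dominates the received power, which is what motivated the attempts described above to recover the rectified DC rather than discard it.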
**Apple IIc** Apple IIc: The Apple IIc, the fourth model in the Apple II series of personal computers, is Apple Computer's first endeavor to produce a portable computer. The result was a 7.5 lb (3.4 kg) notebook-sized version of the Apple II that could be transported from place to place — a portable alternative and complement to the Apple IIe. The c in the name stood for compact, referring to the fact it was essentially a complete Apple II computer setup (minus display and power supply) squeezed into a small notebook-sized housing. While sporting a built-in floppy drive and new rear peripheral expansion ports integrated onto the main logic board, it lacks the internal expansion slots and direct motherboard access of earlier Apple II models, making it a closed system like the Macintosh. However, that was the intended direction for this model — a more appliance-like machine, ready to use out of the box, requiring no technical know-how or experience to hook up and therefore attractive to first-time users. History: The Apple IIc was released on April 24, 1984, during an Apple-held event called Apple II Forever. With that motto, Apple proclaimed the new machine was proof of the company's long-term commitment to the Apple II series and its users, despite the recent introduction of the Macintosh. The IIc was also seen as the company's response to the new IBM PCjr, and Apple hoped to sell 400,000 by the end of 1984. While essentially an Apple IIe computer in a smaller case, it was not a successor, but rather a portable version to complement it. One Apple II machine would be sold for users who required the expandability of slots, and another for those wanting the simplicity of a plug and play machine with portability in mind. History: The machine introduced Apple's Snow White design language, notable for its case styling and a modern look designed by Hartmut Esslinger which became the standard for Apple equipment and computers for nearly a decade. 
The Apple IIc introduced a unique off-white coloring known as "Fog", chosen to enhance the Snow White design style. The IIc and some peripherals were the only Apple products to use the "Fog" coloring. While relatively lightweight and compact, the Apple IIc was not a true portable, as it lacked a built-in battery and display. History: Codenames for the machine while under development included Lollie, ET, Yoda, Teddy, VLC, IIb, IIp. Overview of features: Improving the IIe Technically the Apple IIc was an Apple IIe in a smaller case, more portable and easier to use but also less expandable. The IIc used the CMOS-based 65C02 microprocessor, which added 27 new instructions to the 6502 but was incompatible with programs that used the 6502's undocumented "illegal" opcodes. (Apple stated that the Apple IIc was compatible with 90–95% of the 10,000 software packages available for the Apple II series.) The new ROM firmware allowed Applesoft BASIC to recognize lowercase characters and work better with an 80-column display, and fixed several bugs from the IIe ROM. In terms of video, the text display added 32 unique character symbols called "MouseText" which, when placed side by side, could display simple icons, windows and menus to create a graphical user interface completely out of text, similar in concept to IBM code page 437 or PETSCII's box-drawing characters. A year later, the Apple IIe would benefit from these improvements in the form of a four-chip upgrade called the Enhanced IIe. Overview of features: Built-in cards and ports The equivalent of five expansion cards were built into the Apple IIc motherboard: an Extended 80 Column Card, two Apple Super Serial Cards, a Mouse Card, and a floppy disk drive controller card. This meant the Apple IIc had 128 KB RAM, 80-column text, and Double-Hi-Resolution graphics built-in and available right out of the box, unlike its older sibling, the Apple IIe.
It also meant less of a need for slots, as the most popular peripheral add-on cards were already built in, ready for devices to be plugged into the rear ports of the machine. The built-in cards were mapped to phantom slots so software from slot-based Apple II models would know where to find them (i.e. mouse to virtual slot 4, serial cards to slots 1 and 2, floppy to slot 6, and so on). The entire Apple Disk II Card, used for controlling floppy drives, had been shrunk down into a single chip called the "IWM", which stood for Integrated Woz Machine. Overview of features: In the rear of the machine were its expansion ports, mostly providing access to its built-in cards. The standard DE-9 joystick connector doubled as a mouse interface, compatible with the same mice used by the Lisa and early Macintosh computers. Two serial ports were provided primarily to support a printer and modem, and a floppy port connector supported a single external 5.25-inch drive (and later "intelligent" devices such as 3.5-inch drives and hard disks). A Video Expansion port provided rudimentary signals for add-on adapters but, alone, could not directly generate a video signal (Apple produced an LCD and an RF modulator for this port; the latter shipped with early IIcs). A port connector tied into an internal 12 V power converter allowed batteries to be attached; this is where the included external power supply (dubbed "brick on a leash" by users) plugged in. The same composite video port found on earlier Apple II models remained present; however, gone were the cassette ports and internal DIP-16 game port. Overview of features: Built-in accessories and keyboard The Apple IIc had a built-in 5.25-inch floppy drive (140 KB) along the right side of the case, the first Apple II model to include such a feature. Along the left side of the case was a dial to control the volume of the internal speaker, along with a 1⁄8-inch monaural audio jack for headphones or an external speaker.
A fold-out carrying handle doubled as a way to prop up the back end of the machine to angle the keyboard for typing, if desired. Overview of features: The keyboard layout mirrored that of the Apple IIe; however, the "Reset" key had been moved above the "Esc" key. Two toggle switches were also located in the same area: an "80/40"-column switch for (specially written) software to detect which text video mode to start up in, and a "Keyboard" switch to select between QWERTY and Dvorak layout, or between US and national layout on non-American machines. The keyboard itself was built into the front half of the case, much like a notebook computer, and early models had a rubber mat placed beneath the keycaps which acted as a liquid spill guard. Reception: Although Apple predicted that it would sell 100,000 IIc computers per month, it sold an average of 100,000 per year over four years; even the unsuccessful PCjr outsold it during each computer's first year on the market. The IIe was much more popular than the IIc because of its greater expandability, but Apple almost stopped production of the IIe because of the IIc's expected popularity, causing a shortage of the former and a glut of the latter. While noting its lack of an internal modem and inability to use expansion cards such as the popular Z-80 SoftCard, BYTE in May 1984 described the Apple IIc as a "head-to-head [competitor] with the IBM PCjr" for novice computer users. Creative Computing agreed, stating in July 1984 that "This war will have no clear winner. Apple fans will buy the IIc, and IBM fans will buy the PCjr. I believe the Apple II will live forever", with the IIc as the "final transmutation" of the Apple II series because it was about as small as a computer with a full-sized keyboard and 5 1/4" drive could be. The magazine said in December 1984 that the IIe and IIc were the best home computers with prices above $500, with the IIc better for those using word processing and business software.
Specifications:
Microprocessor: 65C02 running at 1.023 MHz; 8-bit data bus
Memory: 128 KB RAM built-in; 32 KB ROM built-in (16 KB ROM in original); expandable from 128 KB to 1 MB (only through non-conventional methods in the original)
Video: 40- and 80-column text, with 24 lines; Low-Resolution: 40 × 48 (16 colors); High-Resolution: 280 × 192 (6 colors); Double-Low-Resolution: 80 × 48 (16 colors); Double-High-Resolution: 560 × 192 (16 colors)
Audio: built-in speaker (1-bit toggling); user-adjustable volume (manual dial control)
Built-in storage: slim-line internal 5.25-inch floppy drive (140 KB, single-sided)
Internal connectors: Memory Expansion Card connector (34-pin), only available on the ROM 3 motherboard and higher (original IIc: none)
Specialized chip controllers: IWM (Integrated Woz Machine) for floppy drives; dual 6551 ACIA chips for serial I/O
External connectors: joystick/mouse (DE-9); printer, serial-1 (DIN-5); modem, serial-2 (DIN-5); Video Expansion Port (D-15); floppy drive SmartPort (D-19); 15-volt DC input (DIN-7, male); NTSC composite video output (RCA connector); audio-out (1⁄8-inch mono phone jack)
Revisions: The Apple IIc was in production from April 1984 to August 1988, and during this time accrued some minor changes. These modifications included three new ROM updates, a bug-fix correction to the original motherboard, a newly revised motherboard, and a slight cosmetic change to the external appearance of the machine. The ROM revision of a specific Apple IIc can be determined by entering Applesoft BASIC and typing the command PRINT PEEK(64447), which returns a value indicating the particular ROM version. Revisions: Original IIc (ROM version '255') The initial ROM, installed in machines produced during the first year and a half of production, was 16 KB in size. The only device which could be connected to the disk port was (one) external 5.25-inch floppy drive; software could be booted from this external drive by typing the command PR#7.
The serial port did not mask incoming linefeed characters or support the XON/XOFF protocol, unlike all later firmware revisions. There was no self-test diagnostic present in this ROM; holding down the solid-Apple key during cold boot merely cycled unusual patterns on screen, which served no useful purpose or indication of the machine's health. Revisions: Serial port timing fix The original Apple IIc motherboard (manufactured between April and November 1984) derived the timing for its two serial ports from a 74LS161 TTL logic chip. It was later found that this method's timing was 3% slower than the specified minimum requirement, which caused some third-party (i.e. non-Apple) modems and printers operating at 1200 bits per second or faster to function improperly. Slower serial devices operating at 300 bits per second or less were unaffected, as were some faster devices which could tolerate the deviation. The solution to ensure all devices were compatible was to replace the TTL chip with an oscillator during manufacture. Apple would swap affected motherboards for users who could prove they had an incompatible serial device (e.g. a third-party 1200-baud modem which presented problems; not all did). The problem did not affect all owners; whether it appeared depended on the specific device connected. Revisions: UniDisk 3.5 support (ROM version '0') This update, introduced in November 1985, came in the form of an upgrade to the ROM firmware, which doubled in size from 16 KB to 32 KB. The new ROM supported "intelligent" devices such as the Apple UniDisk 3.5-inch (800 KB) floppy drive and SmartPort-based hard disks, in addition to an external 5.25-inch floppy drive. A new self-test diagnostic was provided for testing built-in RAM and other signs of logic faults. The Mini-Assembler, absent since the days of the Apple II Plus, made a return, and new Monitor "Step" and "Trace" commands were added as well. 
The upgraded ROM added rudimentary support for an external AppleTalk networking device which was yet to be developed. When attempting to boot virtual slot 7, users would encounter the message "APPLETALK OFFLINE". The IIc, however, had no built-in networking capabilities, and no external device was ever released. The upgrade consisted of a single chip swap (and a trivial motherboard modification), which Apple provided free only to persons who purchased a UniDisk 3.5 drive. A small sticker with an icon of a 3.5-inch floppy diskette was placed next to the existing 5.25-inch diskette icon above the floppy drive port, indicating the machine had been upgraded. Revisions: Memory Expansion IIc (ROM version '3') Introduced in September 1986 simultaneously with the Apple IIGS, this model introduced a new motherboard, new keyboard and new color scheme. The original Apple IIc had no expansion options and required third-party cards (e.g. from Legend Industries) to perform various hardware tricks. This could be done by removing the CPU and MMU chips and inserting a special board into these sockets, which then used bank switching to expand memory up to 1 megabyte of RAM. This was similar to the function of the slots in the original Apple II and II+, as well as the auxiliary slot in the Apple IIe. The new motherboard added a 34-pin socket for plugging in memory cards directly, which allowed for the addressing of up to 1 megabyte (MB) of memory using Slinky-type memory cards. The onboard chip count was reduced from 16 memory chips (64K×1) to four (64K×4). The new firmware removed the code for the cancelled AppleTalk networking device and replaced it with support for memory cards. With the unsupported AppleTalk functionality removed, memory expansion now occupied virtual slot 4, and mouse support moved to slot 7. The new keyboard no longer had the rubber anti-spill mat and offered generally more tactile and responsive keys that felt more "clicky". 
At the same time, the color of the keyboard, floppy drive latch, and power supply cords changed from beige to light grey, which matched the new Platinum color scheme of the Apple IIGS. The case style, however, remained Snow White. Owners of the previous IIc model were entitled to a free motherboard upgrade if they purchased one of Apple's IIc memory expansion boards (they did not receive the new keyboard or the cosmetic changes). Revisions: Memory Expansion fix (ROM version '4') In January 1988, a new ROM firmware update was issued to address bugs in the new memory-expandable IIc. Changes included better detection of installed RAM chips, correction of a problem when using the serial modem port in terminal mode, and a bug fix for keyboard buffering. The ROM upgrade was available free of charge only to owners of the memory expansion IIc. This was the final change to the Apple IIc; it would be superseded that September by the Apple IIc Plus (identified as ROM version '5'). International versions: Like the Apple IIe before it, the Apple IIc keyboard differed depending on what region of the world it was sold in. Sometimes the differences were very minor, such as extra local language characters and symbols printed on certain keycaps (e.g. French accented characters on the Canadian IIc such as "à", "é", "ç", etc., or the British pound "£" symbol on the UK IIc), while other times the layout and shape of keys greatly differed (e.g. European IIcs). To access the local character set, the user depressed the "Keyboard" switch above the keyboard, which would instantly switch text video from the US character set to the local set. The Dvorak keyboard layout was not available on international IIcs—the switch had been intended to select between international keyboards; the Dvorak layout was merely added to give the switch a function on US IIcs. 
In some countries these localized IIcs also supported 50 Hz PAL video and the different 220/240-volt power of that region by means of a different external power supply — this was a very simple change, since the IIc had an internal 12-volt power converter. The international versions replaced any English legends printed on the case (specifically the "keyboard" toggle switch and the "Power" and "Disk Use" drive-activity labels) with graphical icons that could be universally understood. Add-on accessories: Portability enhancements At the time of the Apple IIc's release, Apple announced an optional black-and-white (1-bit) LCD screen designed specifically for the machine, called the Apple Flat Panel Display. While it was welcomed as a means of making the IIc more portable, it did not integrate well as a portable solution: it did not attach in a secure or permanent manner and could not fold over face down. Instead, it sat atop the machine and connected via ribbon cable to a somewhat bulky rear port connector. Its main shortcoming was very poor contrast and a lack of backlighting, making it very difficult to view without a strong external light source. The display itself had an odd aspect ratio as well, making graphics look vertically squashed. A third-party company would later introduce a work-alike LCD screen called the C-Vue, which looked and functioned very much like Apple's product, albeit with a reportedly slight improvement in viewability. Consequently, both sold poorly and had a very short market life span, making these displays fairly uncommon (and, as a result, extremely rare today). Add-on accessories: Third parties also offered external rechargeable battery units for the Apple IIc (e.g. the Prairie Power Portable System available from Roger Coats) with up to eight hours per charge or longer. Although they helped make the machine more of a true portable, they were nonetheless bulky and heavy, and added more pieces that would have to be carried. 
Adapter cables were sold as well that allowed the Apple IIc to plug into an automobile's DC cigarette-lighter socket. Add-on accessories: To help transport the Apple IIc and its accessory pieces, Apple sold a nylon carrying case with a shoulder strap that had compartments for the computer, its external power supply, and the cables. It had enough room to squeeze in one of the above-mentioned LCD display units. The case was grey in color with a stitched-on Apple logo in the upper right corner. Add-on accessories: Expansion capabilities While the Apple IIc had many built-in features to offer, many users wanted to extend the machine's capabilities beyond what Apple provided. This proved difficult, since the IIc was a closed system that was initially designed with no expansion capabilities in mind; however, many companies figured out ingenious ways of squeezing enhancements inside the tiny case. Real-time clocks, memory expansion, and coprocessors were popular, and some companies even managed to combine all three into a single add-on board. Typically, in order to add these options, key chips on the motherboard were pulled and moved onto the expansion board offering the new features, and the board was then placed into the empty sockets. While sometimes a tight squeeze, this trickery worked quite well and, most importantly, offered users a way to expand memory—something Apple did not itself support until the Memory Expansion IIc model was introduced. Add-on accessories: Some companies devised a method for squeezing in an entire CPU accelerator product by placing all the specialized circuitry (i.e. cache and logic) into one tall chip that outright replaced the 40-pin 65C02 microprocessor, speeding up the machine to 4–10 MHz. Notable examples are the Zip Chip and Rocket Chip. 
Add-on accessories: Although the IIc lacked a SCSI or IDE interface, external hard drives were produced by third parties that connected through the floppy SmartPort as an innovative alternative connection method (e.g. ProApp, Chinook). While these specialized hard drives were relatively slow, due to the nature of how data was transferred through an interface designed primarily for floppy drives, they did allow for true mass storage. The CDrive, however, mounted internally and was very fast due to its direct connection to the CPU. Other innovations that used existing expansion ports led to add-on speech and music synthesis products by means of external devices that plugged into the IIc's serial ports. Three such popular devices were the Mockingboard-D, Cricket and Echo IIc. Applied Engineering offered an ever-expanding and improving line of "Z-Ram" internal memory expansion boards, which also included a Z-80 CPU for running CP/M software. Add-on accessories: General accessories For those wishing to use the Apple IIc as a standard desktop machine, Apple sold the Monitor IIc, a 9 in (23 cm) monochrome CRT display with an elevated stand. The Color Monitor IIc, a 14 in (36 cm) color composite monitor, followed in 1985. A mouse was another popular add-on, especially since it required no interface card, unlike earlier Apples, and simply plugged directly into the back of the machine (MousePaint, a clone of MacPaint, shipped with the IIc's mouse). An external 5.25-inch floppy drive, matching the style of the IIc, was also made available. Later, 3.5-inch floppy storage became an option with the "intelligent" UniDisk 3.5, which contained its own miniature computer (CPU, RAM, firmware) to overcome the issue of using a high-speed floppy drive on a 1 MHz machine.
**Myers–Briggs Type Indicator** Myers–Briggs Type Indicator: In personality typology, the Myers–Briggs Type Indicator (MBTI) is an introspective self-report questionnaire indicating differing psychological preferences in how people perceive the world and make decisions. It enjoys popularity despite being widely regarded as pseudoscience by the scientific community. The test attempts to assign a binary value to each of four categories: introversion or extraversion, sensing or intuition, thinking or feeling, and judging or perceiving. One letter from each category is taken to produce a four-letter test result, such as "ISTJ" or "ENFP". The MBTI was constructed by two Americans: Katharine Cook Briggs and her daughter Isabel Briggs Myers, who were inspired by the book Psychological Types by Swiss psychiatrist Carl Jung. Isabel Myers was particularly fascinated by the concept of introversion, and she typed herself as an INFP. However, she felt the book was too complex for the general public, and therefore she tried to organize the Jungian cognitive functions to make it more accessible. Most of the research supporting the MBTI's validity has been produced by the Center for Applications of Psychological Type, an organization run by the Myers–Briggs Foundation, and published in the center's own journal, the Journal of Psychological Type (JPT), raising questions of independence, bias, and conflict of interest. Though the MBTI resembles some psychological theories, it has been criticized as pseudoscience and is not widely endorsed by academic researchers in the psychology field. The indicator exhibits significant scientific (psychometric) deficiencies, including poor validity, poor reliability, measuring categories that are not independent, and not being comprehensive. History: Briggs began her research into personality in 1917. Upon meeting her future son-in-law, she observed marked differences between his personality and that of other family members. 
Briggs embarked on a project of reading biographies, and subsequently developed a typology wherein she proposed four temperaments: meditative (or thoughtful), spontaneous, executive, and social. After the publication in 1923 of an English translation of Carl Jung's book Psychological Types (first published in German as Psychologische Typen in 1921), Briggs recognized that Jung's theory resembled, but went far beyond, her own. Briggs's four types were later identified as corresponding to the IXXXs (Introverts: "meditative"), EXXPs (Extraverts & Prospectors: "spontaneous"), EXTJs (Extraverts, Thinkers & Judgers: "executive") and EXFJs (Extraverts, Feelers & Judgers: "social"). Her first publications were two articles describing Jung's theory, in the journal New Republic in 1926 ("Meet Yourself Using the Personality Paint Box") and in 1928 ("Up From Barbarism"). After extensively studying the work of Jung, Briggs and her daughter extended their interest in human behavior into efforts to turn the theory of psychological types to practical use. Although Myers graduated from Swarthmore College in political science in 1919, neither Myers nor Briggs was formally educated in the discipline of psychology, and both were self-taught in the field of psychometric testing. Myers therefore apprenticed herself to Edward N. Hay (1891–1958), the head personnel officer for a large Philadelphia bank. From Hay, Myers learned rudimentary test construction, scoring, validation, and statistical methods. Briggs and Myers began creating their indicator during World War II (1939–1945) in the belief that a knowledge of personality preferences would help women entering the industrial workforce for the first time to identify the sorts of war-time jobs that would be the "most comfortable and effective" for them. 
The Briggs Myers Type Indicator Handbook, published in 1944, was re-published as "Myers–Briggs Type Indicator" in 1956. Myers' work attracted the attention of Henry Chauncey, head of the Educational Testing Service, a private assessment organization. Under these auspices, the first MBTI "manual" was published in 1962. The MBTI received further support from Donald W. MacKinnon, head of the Institute of Personality and Social Research at the University of California, Berkeley; W. Harold Grant, a professor at Michigan State University and Auburn University; and Mary H. McCaulley of the University of Florida. The publication of the MBTI was transferred to Consulting Psychologists Press in 1975, and the Center for Applications of Psychological Type was founded as a research laboratory. After Myers' death in May 1980, Mary McCaulley updated the MBTI manual, and the second edition was published in 1985. The third edition appeared in 1998. History: Format and administration In 1987, an advanced scoring system was developed for the MBTI. From this was developed the Type Differentiation Indicator (TDI), a scoring system for the longer MBTI, Form J, which includes the 290 items written by Myers that had survived her previous item analyses. It yields 20 subscales (five under each of the four dichotomous preference scales), plus seven additional subscales for a new "comfort-discomfort" factor (which parallels, though does not perfectly measure, the NEO-PI factor of neuroticism). This factor's scales indicate a sense of overall comfort and confidence versus discomfort and anxiety. They also load onto one of the four type dimensions: guarded-optimistic (T/F), defiant-compliant (T/F), carefree-worried (T/F), decisive-ambivalent (J/P), intrepid-inhibited (E/I), leader-follower (E/I), and proactive-distractible (J/P). Also included is a composite of these called "strain". There are also scales for type-scale consistency and comfort-scale consistency. 
Reliability of 23 of the 27 TDI subscales is greater than 0.50, "an acceptable result given the brevity of the subscales". In 1989, a scoring system was developed for only the 20 subscales for the original four dichotomies. This was initially known as "Form K" or "the Expanded Analysis Report". This tool is now called the MBTI Step II. Form J or the TDI included the items (derived from Myers' and McCaulley's earlier work) necessary to score what became known as Step III. (The 1998 MBTI Manual reported that the two instruments were one and the same.) Step III was developed in a joint project involving the following organizations: the Myers–Briggs Company, the publisher of all the MBTI works; the Center for Applications of Psychological Type (CAPT), which holds all of Myers' and McCaulley's original work; and the MBTI Trust headed by Katharine and Peter Myers. CAPT advertised Step III as addressing type development and the use of "perception and judgment" by respondents. Concepts: The MBTI is based on the influential theory of psychological types proposed by Swiss psychiatrist Carl Jung in 1921, who had speculated that people experience the world using four principal psychological functions—sensation, intuition, feeling, and thinking—and that one of these four functions is dominant for a person most of the time. The four categories are introversion/extraversion, sensing/intuition, thinking/feeling, judging/perceiving. According to the MBTI, each person is said to have one preferred quality from each category, producing 16 unique types. The MBTI emphasizes the value of naturally occurring differences. "The underlying assumption of the MBTI is that we all have specific preferences in the way we construe our experiences, and these preferences underpin our interests, needs, values, and motivation." The MBTI Manual states that the indicator "is designed to implement a theory; therefore, the theory must be understood to understand the MBTI". 
Fundamental to the MBTI is the hypothesis of psychological types as originally developed by Carl Jung. Jung proposed the existence of two dichotomous pairs of cognitive functions: the "rational" (judging) functions, thinking and feeling, and the "irrational" (perceiving) functions, sensation and intuition. Jung believed that for every person, each of the functions is expressed primarily in either an introverted or extraverted form. Based on Jung's original concepts, Briggs and Myers developed their own theory of psychological type, described below, on which the MBTI is based. However, although psychologist Hans Eysenck called the MBTI a moderately successful quantification of Jung's original principles as outlined in Psychological Types, he also said, "[The MBTI] creates 16 personality types which are said to be similar to Jung's theoretical concepts. I have always found difficulties with this identification, which omits one half of Jung's theory (he had 32 types, by asserting that for every conscious combination of traits there was an opposite unconscious one). Obviously, the latter half of his theory does not admit of questionnaire measurement, but to leave it out and pretend that the scales measure Jungian concepts is hardly fair to Jung." In any event, both models remain hypothetical, with no controlled scientific studies supporting either Jung's original concept of type or the Myers–Briggs variation. Concepts: Differences from Jung Jung did not see the types (such as intro- and extraversion) as dualistic, but rather as tendencies: both are innate and have the potential to balance. Jung's typology theories postulated a sequence of four cognitive functions (thinking, feeling, sensation, and intuition), each having one of two polar tendencies (extraversion or introversion), giving a total of eight dominant functions. The MBTI is based on these eight hypothetical functions, although with some differences in expression from Jung's model. 
While the Jungian model offers empirical evidence for the first three dichotomies, it was Briggs and Myers who added the judgment-perception preference. The most notable addition of Myers' and Briggs' ideas to Jung's original thought is their concept that a given type's fourth letter (J or P) indicates a person's most preferred extraverted function, which is the dominant function for extraverted types and the auxiliary function for introverted types. Jung hypothesized that the dominant function acts alone in its preferred world: exterior for extraverts and interior for introverts. The remaining three functions, he suggested, operate in the opposite orientation. Some MBTI practitioners, however, regard this concept as a category error with little empirical backing relative to findings supported by correlational evidence; even so, it remains part of Myers' and Briggs' extrapolation of their original theory despite being discounted. Jung's hypothesis can be summarized as: if the dominant cognitive function is introverted, then the other functions are extraverted, and vice versa. The MBTI Manual summarizes Jung's work on balance in psychological type as follows: "There are several references in Jung's writing to the three remaining functions having an opposite attitudinal character. For example, in writing about introverts with thinking dominant ... Jung commented that the counterbalancing functions have an extraverted character." Using the INTP type as an example, the orientation according to Jung would be as follows: dominant introverted thinking; auxiliary extraverted intuition; tertiary introverted sensing; inferior extraverted feeling. Type dynamics and development Jung's typological model regards psychological type as similar to left- or right-handedness: people are either born with, or develop, certain preferred ways of perceiving and deciding. 
The MBTI sorts some of these psychological differences into four opposite pairs, or "dichotomies", with a resulting 16 possible psychological types. None of these are considered to be "better" or "worse"; however, Briggs and Myers theorized that people innately "prefer" one overall combination of type differences. In the same way that writing with the left hand is difficult for a right-hander, so people tend to find using their opposite psychological preferences more difficult, though they can become more proficient (and therefore behaviorally flexible) with practice and development. Concepts: The 16 types are typically referred to by an abbreviation of four letters – the initial letters of each of their four type preferences (except in the case of intuition, which uses the abbreviation "N" to distinguish it from introversion). For instance: ENTJ: extraversion (E), intuition (N), thinking (T), judgment (J); ISFP: introversion (I), sensing (S), feeling (F), perception (P). These abbreviations are applied to all 16 types. Concepts: The interaction of two, three, or four preferences is known as "type dynamics". Although type dynamics has received little or no empirical support to substantiate its viability as a scientific theory, Myers and Briggs asserted that for each of the 16 four-preference types, one function is the most dominant and is likely to be evident earliest in life. A secondary or auxiliary function typically becomes more evident (differentiated) during teenaged years and provides balance to the dominant. In normal development, individuals tend to become more fluent with a third, tertiary function during mid-life, while the fourth, inferior function remains least consciously developed. 
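The construction of the 16 four-letter abbreviations from the four dichotomies can be shown with a brief sketch (Python is assumed here purely for illustration):

```python
from itertools import product

# The four dichotomies; "N" stands for intuition to avoid
# a clash with "I" for introversion.
DICHOTOMIES = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

# Choosing one letter from each pair yields the 16 possible type codes.
TYPES = ["".join(combo) for combo in product(*DICHOTOMIES)]

assert len(TYPES) == 16
assert "ENTJ" in TYPES and "ISFP" in TYPES
```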
The inferior function is often considered to be more associated with the unconscious, being most evident in situations such as high stress (sometimes referred to as being "in the grip" of the inferior function). However, the use of type dynamics is disputed: in the conclusion of various studies on the subject of type dynamics, James H. Reynierse writes, "Type dynamics has persistent logical problems and is fundamentally based on a series of category mistakes; it provides, at best, a limited and incomplete account of type related phenomena"; and "type dynamics relies on anecdotal evidence, fails most efficacy tests, and does not fit the empirical facts". His studies clearly showed that the descriptions and workings of type dynamics do not fit the real behavior of people. He suggests discarding type dynamics entirely, because it hinders rather than helps the understanding of personality. The presumed order of functions 1 to 4 occurred in only one out of 540 test results. Concepts: Four dichotomies The four pairs of preferences or "dichotomies" are shown in the adjacent table. Concepts: The terms used for each dichotomy have specific technical meanings relating to the MBTI, which differ from their everyday usage. For example, people who prefer judgment over perception are not necessarily more "judgmental" or less "perceptive", nor does the MBTI instrument measure aptitude; it simply indicates a preference for one pole over the other. Someone reporting a high score for extraversion over introversion cannot be correctly described as more extraverted: they simply have a clear preference. Concepts: Point scores on each of the dichotomies can vary considerably from person to person, even among those with the same type. However, Isabel Myers considered the direction of the preference (for example, E vs. I) to be more important than the degree of the preference (for example, very clear vs. slight). 
The expression of a person's psychological type is more than the sum of the four individual preferences. The preferences interact through type dynamics and type development. Concepts: Attitudes: extraversion/introversion Myers–Briggs literature uses the terms extraversion and introversion as Jung first used them. Extraversion means literally outward-turning and introversion, inward-turning. These specific definitions differ somewhat from the popular usage of the words. Extraversion is the spelling used in MBTI publications. The preferences for extraversion and introversion are often called "attitudes". Briggs and Myers recognized that each of the cognitive functions can operate in the external world of behavior, action, people, and things ("extraverted attitude") or the internal world of ideas and reflection ("introverted attitude"). The MBTI assessment sorts for an overall preference for one or the other. Concepts: People who prefer extraversion draw energy from action: they tend to act, then reflect, then act further. If they are inactive, their motivation tends to decline. To rebuild their energy, extraverts need breaks from time spent in reflection. Conversely, those who prefer introversion "expend" energy through action: they prefer to reflect, then act, then reflect again. To rebuild their energy, introverts need quiet time alone, away from activity. An extravert's flow is directed outward toward people and objects, whereas the introvert's is directed inward toward concepts and ideas. Contrasting characteristics between extraverted and introverted people include: Extraverts are action-oriented, while introverts are thought-oriented. Concepts: Extraverts seek breadth of knowledge and influence, while introverts seek depth of knowledge and influence. Extraverts often prefer more frequent interaction, while introverts prefer more substantial interaction. 
Extraverts recharge and get their energy from spending time with people, while introverts recharge and get their energy from spending time alone; each consumes energy through the opposite process. Concepts: Functions: sensing/intuition and thinking/feeling Jung identified two pairs of psychological functions: two perceiving functions, sensation (usually called sensing in MBTI writings) and intuition, and two judging functions, thinking and feeling. According to Jung's typology model, each person uses one of these four functions more dominantly and proficiently than the other three; however, all four functions are used at different times depending on the circumstances. Because each function can manifest in either an extraverted or an introverted attitude, Jung's model includes eight combinations of functions and attitudes, four of which are largely conscious and four unconscious. John Beebe created a model that combines ideas of archetypes and the dialogical self with functions, each function viewed as performing the role of an archetype within an internal dialog. Sensing and intuition are the information-gathering (perceiving) functions. They describe how new information is understood and interpreted. People who prefer sensing are more likely to trust information that is in the present, tangible, and concrete: that is, information that can be understood by the five senses. They tend to distrust hunches, which seem to come "out of nowhere". They prefer to look for details and facts. For them, the meaning is in the data. On the other hand, those who prefer intuition tend to trust information that is less dependent upon the senses, that can be associated with other information (either remembered or discovered by seeking a wider context or pattern). They may be more interested in future possibilities. For them, the meaning is in the underlying theory and principles which are manifested in the data. Thinking and feeling are the decision-making (judging) functions. 
The thinking and feeling functions are both used to make rational decisions, based on the data received from their information-gathering functions (sensing or intuition). Those who prefer thinking tend to decide things from a more detached standpoint, measuring the decision by what seems reasonable, logical, causal, consistent, and matching a given set of rules. Those who prefer feeling tend to come to decisions by associating or empathizing with the situation, looking at it 'from the inside' and weighing the situation to achieve, on balance, the greatest harmony, consensus and fit, considering the needs of the people involved. Thinkers usually have trouble interacting with people who are inconsistent or illogical, and tend to give very direct feedback to others. They are concerned with the truth and view it as paramount. As noted already, people who prefer thinking do not necessarily, in the everyday sense, "think better" than their feeling counterparts; the opposite preference is considered an equally rational way of coming to decisions (and, in any case, the MBTI assessment is a measure of preference, not ability). Similarly, those who prefer feeling do not necessarily have "better" emotional reactions than their thinking counterparts. Concepts: Dominant function According to Jung, people use all four cognitive functions. However, one function is generally used in a more conscious and confident way. This dominant function is supported by the secondary (auxiliary) function, and to a lesser degree the tertiary function. The fourth and least conscious function is always the opposite of the dominant function. Myers called this inferior function the "shadow." The four functions operate in conjunction with the attitudes (extraversion and introversion). Each function is used in either an extraverted or introverted way. 
A person whose dominant function is extraverted intuition, for example, uses intuition very differently from someone whose dominant function is introverted intuition. Concepts: Lifestyle preferences: judging/perception Myers and Briggs added another dimension to Jung's typological model by identifying that people also have a preference for using either the judging function (thinking or feeling) or their perceiving function (sensing or intuition) when relating to the outside world (extraversion). Concepts: Myers and Briggs held that types with a preference for judging show the world their preferred judging function (thinking or feeling). So, TJ types tend to appear to the world as logical and FJ types as empathetic. According to Myers, judging types like to "have matters settled". Those types who prefer perception show the world their preferred perceiving function (sensing or intuition). So, SP types tend to appear to the world as concrete and NP types as abstract. According to Myers, perceptive types prefer to "keep decisions open". For extraverts, the J or P indicates their dominant function; for introverts, the J or P indicates their auxiliary function. Introverts tend to show their dominant function outwardly only in matters "important to their inner worlds". For example, because the ENTJ type is extraverted, the J indicates that the dominant function is the preferred judging function (extraverted thinking). The ENTJ type introverts the auxiliary perceiving function (introverted intuition). The tertiary function is sensing and the inferior function is introverted feeling. Because the INTJ type is introverted, however, the J instead indicates that the auxiliary function is the preferred judging function (extraverted thinking). The INTJ type introverts the dominant perceiving function (introverted intuition). The tertiary function is feeling and the inferior function is extraverted sensing. 
Criticism: Despite its popularity, the MBTI has been widely regarded as pseudoscience by the scientific community. The validity (statistical validity and test validity) of the MBTI as a psychometric instrument has been the subject of much criticism. Media reports have called the test "pretty much meaningless" and "one of the worst personality tests in existence". The psychologist Adam Grant has been especially vocal against the MBTI, calling it "the fad that won't die" in a Psychology Today article. Psychometric specialist Robert Hogan wrote: "Most personality psychologists regard the MBTI as little more than an elaborate Chinese fortune cookie..." It has been estimated that between a third and a half of the published material on the MBTI has been produced for the special conferences of the Center for the Application of Psychological Type (which provide the training in the MBTI, and are funded by sales of the MBTI) or as papers in the Journal of Psychological Type (which is edited and supported by Myers–Briggs advocates and by sales of the indicator). It has been argued that this reflects a lack of critical scrutiny, and many of the studies that endorse the MBTI are methodologically weak or unscientific. A 1996 review by Gardner and Martinko concluded: "It is clear that efforts to detect simplistic linkages between type preferences and managerial effectiveness have been disappointing. Indeed, given the mixed quality of research and the inconsistent findings, no definitive conclusion regarding these relationships can be drawn." The test has been described as one of many self-discovery "fads" and has been likened to horoscopes, as both rely on the Barnum effect, flattery, and confirmation bias, leading participants to personally identify with descriptions that are somewhat desirable, vague, and widely applicable. At present, the MBTI is not ready to be adopted in counseling. 
Criticism: Little evidence for dichotomies As previously stated in the Myers–Briggs Type Indicator § Four dichotomies section, Isabel Myers considered the direction of the preference (for example, E vs. I) to be more important than the degree of the preference. Statistically, this would mean that scores on each MBTI scale would show a bimodal distribution with most people scoring near the ends of the scales, thus dividing people into either, e.g., an extraverted or an introverted psychological type. However, most studies have found that scores on the individual scales were actually distributed in a centrally peaked manner, similar to a normal distribution, indicating that the majority of people were actually in the middle of the scale and were thus neither clearly introverted nor extraverted. Most personality traits do show a normal distribution of scores from low to high, with about 15% of people at the low end, about 15% at the high end and the majority of people in the middle ranges. But in order for the MBTI to be scored, a cut-off line is used at the middle of each scale and all those scoring below the line are classified as a low type and those scoring above the line are given the opposite type. Thus, psychometric assessment research fails to support the concept of type, but rather shows that most people lie near the middle of a continuous curve. Criticism: Although we do not conclude that the absence of bimodality necessarily proves that the MBTI developers' theory-based assumption of categorical "types" of personality is invalid, the absence of empirical bimodality in IRT-based research of MBTI scores does indeed remove a potentially powerful line of evidence that was previously available to "type" advocates to cite in defense of their position. 
Criticism: Little evidence for "dynamic" type stack Some MBTI supporters argue that the application of type dynamics to the MBTI (e.g., where inferred "dominant" or "auxiliary" functions like Se / "Extraverted Sensing" or Ni / "Introverted Intuition" are presumed to exist) is a logical category error with little empirical evidence backing it. Instead, they argue that the MBTI's validity as a psychometric tool is highest when each type category is viewed independently as a dichotomy. Criticism: Validity and utility The content of the MBTI scales is problematic. In 1991, a National Academy of Sciences committee reviewed data from MBTI research studies and concluded that only the I-E scale has high correlations with comparable scales of other instruments and low correlations with instruments designed to assess different concepts, showing strong validity. In contrast, the S-N and T-F scales show relatively weak validity. The 1991 review committee concluded that at the time there was "not sufficient, well-designed research to justify the use of the MBTI in career counseling programs". This study based its measurement of validity on "criterion-related validity (i.e. does the MBTI predict specific outcomes related to interpersonal relations or career success/job performance?)." The committee stressed the discrepancy between the popularity of the MBTI and the research results, stating, "the popularity of this instrument in the absence of proven scientific worth is troublesome." There is insufficient evidence to make claims about utility, particularly of the four-letter type derived from a person's responses to the MBTI items. Criticism: Lack of objectivity The accuracy of the MBTI depends on honest self-reporting. Unlike some personality questionnaires, such as the 16PF Questionnaire, the Minnesota Multiphasic Personality Inventory, or the Personality Assessment Inventory, the MBTI does not use validity scales to assess exaggerated or socially desirable responses. 
As a result, individuals motivated to do so can fake their responses. One study found a weak but statistically significant correlation between the MBTI judging scale and the Eysenck Personality Questionnaire lie scale, suggesting that more socially conformant individuals are more likely to be classified as judging according to the MBTI. If respondents "fear they have something to lose, they may answer as they assume they should." However, the MBTI ethical guidelines state, "It is unethical and in many cases illegal to require job applicants to take the Indicator if the results will be used to screen out applicants." The intent of the MBTI is to provide "a framework for understanding individual differences, and... a dynamic model of individual development". Criticism: Terminology The terminology of the MBTI has been criticized as being very "vague and general", so as to allow any kind of behavior to fit any personality type, which may result in the Barnum effect, where people give a high rating to a positive description that supposedly applies specifically to them. Others argue that while the MBTI type descriptions are brief, they are also distinctive and precise. Some theorists, such as David Keirsey, have expanded on the MBTI descriptions, providing even greater detail. For instance, Keirsey's descriptions of his four temperaments, which he correlated with the sixteen MBTI personality types, show how the temperaments differ in terms of language use, intellectual orientation, educational and vocational interests, social orientation, self-image, personal values, social roles, and characteristic hand gestures. Criticism: Factor analysis Researchers have reported that the JP and the SN scales correlate with one another. One factor-analytic study of college-aged students (N = 1291) found six different factors instead of the four purported dimensions, thereby raising doubts as to the construct validity of the MBTI. 
Criticism: Correlates According to Hans Eysenck: The main dimension in the MBTI is called E-I, or extraversion-introversion; this is mostly a sociability scale, correlating quite well with the MMPI social introversion scale (negatively) and the Eysenck Extraversion scale (positively). Unfortunately, the scale also has a loading on neuroticism, which correlates with the introverted end. Thus introversion correlates roughly (i.e., averaging values for males and females) −.44 with dominance, +.37 with abasement, +.46 with counselling readiness, −.52 with self-confidence, −.36 with personal adjustment, and −.45 with empathy. The failure of the scale to disentangle Introversion and Neuroticism (there is no scale for neurotic and other psychopathological attributes in the MBTI) is its worst feature, only equalled by the failure to use factor analysis in order to test the arrangement of items in the scale. Criticism: Reliability The test-retest reliability of the MBTI tends to be low. Large numbers of people (between 39% and 76% of respondents) obtain different type classifications when retaking the indicator after only five weeks. A 2013 Fortune Magazine article titled "Have we all been duped by the Myers-Briggs Test" wrote: The interesting – and somewhat alarming – fact about the MBTI is that, despite its popularity, it has been subject to sustained criticism by professional psychologists for over three decades. One problem is that it displays what statisticians call low "test-retest reliability." So if you retake the test after only a five-week gap, there's around a 50% chance that you will fall into a different personality category compared to the first time you took the test. Criticism: A second criticism is that the MBTI mistakenly assumes that personality falls into mutually exclusive categories. ... 
The consequence is that the scores of two people labelled "introverted" and "extraverted" may be almost exactly the same, but they could be placed into different categories since they fall on either side of an imaginary dividing line. Criticism: Within each dichotomy scale, as measured on Form G, about 83% of categorizations remain the same when people are retested within nine months, and around 75% when retested after nine months. About 50% of people re-administered the MBTI within nine months remain the same overall type, and 36% remain the same type after more than nine months. For Form M (the most current form of the MBTI instrument), the MBTI Manual reports that these scores are higher. In one study, when people were asked to compare their preferred type to that assigned by the MBTI assessment, only half of people chose the same profile. It has been argued that criticisms regarding the MBTI mostly come down to questions regarding the validity of its origins, not questions regarding the validity of the MBTI's usefulness. Others argue that the MBTI can be a reliable measurement of personality, and "like all measures, the MBTI yields scores that are dependent on sample characteristics and testing conditions". Statistics: A 1973 study of university students in the United States found the INFP type was the most common type among students studying the fine arts and art education subjects, with 36% of fine arts students and 26% of art education students being INFPs. A 1973 study of the personality types of teachers in the United States found Intuitive-Perceptive types (ENFP, INFP, ENTP, INTP) were over-represented in teachers of subjects such as English, social studies and art, as opposed to science and mathematics, which featured more sensing (S) and judging (J) types. A questionnaire of 27,787 high school students suggested INFP students among them showed a significant preference for art, English, and music subjects. 
Utility: Isabel Myers claimed that the proportion of different personality types varied by choice of career or course of study. However, researchers examining the proportions of each type within varying professions report that the proportion of MBTI types within each occupation is close to that within a random sample of the population. Some researchers have expressed reservations about the relevance of type to job satisfaction, as well as concerns about the potential misuse of the instrument in labeling people.The Myers–Briggs Company, then known as Consulting Psychologists Press (and later CPP), became the exclusive publisher of the MBTI in 1975. They call it "the world's most widely used personality assessment", with as many as two million assessments administered annually. The Myers-Briggs Company and other proponents state that the indicator meets or exceeds the reliability of other psychological instruments.Although some studies claim support for validity and reliability, other studies suggest that the MBTI "lacks convincing validity data" and that it is pseudoscience.The MBTI has poor predictive validity of employees' job performance ratings. As noted above under Precepts and ethics, the MBTI measures preferences, not ability. The use of the MBTI as a predictor of job success is expressly discouraged in the Manual. It is argued that the MBTI only continues to be popular because many people are qualified to administer it, it is not difficult to understand, and there are many supporting books, websites and other sources which are readily available to the general public. Correlations with other instruments: Keirsey temperaments David Keirsey developed the Keirsey Temperament Sorter after learning about the MBTI system, though he traces four "temperaments" back to Ancient Greek traditions. He maps these temperaments to the Myers–Briggs groupings SP, SJ, NF, and NT. He also gives each of the 16 MBTI types a name, as shown in the below table. 
Correlations with other instruments: Big Five McCrae and Costa based their Five Factor Model (FFM) on Goldberg's Big Five theory. McCrae and Costa present correlations between the MBTI scales and the Big Five personality constructs measured, for example, by the NEO-PI-R. The five purported personality constructs have been labeled: extraversion, openness, agreeableness, conscientiousness, and neuroticism (emotional instability), although there is not universal agreement on the Big Five theory and the related Five-Factor Model (FFM). The following correlations are based on the results from 267 men and 201 women as part of a longitudinal study of aging. Correlations with other instruments: These correlations refer to the second letter shown, i.e., the table shows that I and P have negative correlations with extraversion and conscientiousness, respectively, while F and N have positive correlations with agreeableness and openness, respectively. These results suggest that the four MBTI scales can be incorporated within the Big Five personality trait constructs, but that the MBTI lacks a measure for the emotional stability dimension of the Big Five (though the TDI, discussed above, has addressed that dimension). Emotional stability (or neuroticism) is a predictor of depression and anxiety disorders. Correlations with other instruments: These findings led McCrae and Costa to conclude that "correlational analyses showed that the four MBTI indices did measure aspects of four of the five major dimensions of normal personality. The five-factor model provides an alternative basis for interpreting MBTI findings within a broader, more commonly shared conceptual framework." However, "there was no support for the view that the MBTI measures truly dichotomous preferences or qualitatively distinct types; instead, the instrument measures four relatively independent dimensions." 
In popular culture: During the COVID-19 pandemic, MBTI testing became highly popular among young South Koreans, who were using it in an attempt to find compatible dating partners. The craze led to a rise in MBTI-themed products, including beers, music playlists and computer games. One survey reported that by December 2021, nearly half of the population had taken an MBTI personality test. The MBTI also became an issue in South Korea's presidential election.
**Z-Stoff** Z-Stoff: Z-Stoff ([t͡sɛt ʃtɔf], "substance Z") was a name for calcium permanganate or sodium permanganate mixed in water. It was normally used as a catalyst for T-Stoff (high-test peroxide) in military rocket programs of Nazi Germany during World War II. Z-Stoff was used in the cold engine of the Messerschmitt Me 163 A airplane; in the earlier, self-contained HWK 109-500 Starthilfe RATO booster motor for crewed aircraft (usually in pairs or multiples of two for such uses); and in a smaller derivation of the Starthilfe unit, the HWK 109-507 booster engine used with the Henschel Hs 293 anti-ship guided missile. T-Stoff decomposed by Z-Stoff was commonly used by the German military in World War II to generate steam for powering fuel pumps in airplanes and rockets. Z-Stoff: The reaction produces manganese dioxide, which tends to clog the steam generators. Later generations of the Walter rocket engine used a solid catalyst instead of the aqueous solution.
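The underlying chemistry is permanganate-catalyzed peroxide decomposition. As a sketch (written here for the sodium salt; the calcium salt behaves analogously, and the exact stoichiometry in the Walter units is not given above), the net steam-generating reaction and the reduction that deposits the clogging manganese dioxide can be written:

```latex
% Net, catalyzed decomposition of T-Stoff (hot steam and oxygen):
2\,\mathrm{H_2O_2} \;\longrightarrow\; 2\,\mathrm{H_2O} + \mathrm{O_2}

% Reduction of the permanganate, leaving the MnO2 deposit:
2\,\mathrm{NaMnO_4} + 3\,\mathrm{H_2O_2} \;\longrightarrow\;
    2\,\mathrm{MnO_2} + 2\,\mathrm{NaOH} + 3\,\mathrm{O_2} + 2\,\mathrm{H_2O}
```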
**Double-ended queue** Double-ended queue: In computer science, a double-ended queue (abbreviated to deque, pronounced deck, like "cheque") is an abstract data type that generalizes a queue, for which elements can be added to or removed from either the front (head) or the back (tail). It is also often called a head-tail linked list, though properly this refers to a specific data structure implementation of a deque (see below). Naming conventions: Deque is sometimes written dequeue, but this use is generally deprecated in technical literature or technical writing because dequeue is also a verb meaning "to remove from a queue". Nevertheless, several libraries and some writers, such as Aho, Hopcroft, and Ullman in their textbook Data Structures and Algorithms, spell it dequeue. John Mitchell, author of Concepts in Programming Languages, also uses this terminology. Distinctions and sub-types: This differs from the queue abstract data type or first-in-first-out list (FIFO), where elements can only be added to one end and removed from the other. This general data class has some possible sub-types: An input-restricted deque is one where deletion can be made from both ends, but insertion can be made at one end only. An output-restricted deque is one where insertion can be made at both ends, but deletion can be made from one end only. Both the basic and most common list types in computing, queues and stacks, can be considered specializations of deques, and can be implemented using deques. Operations: The basic operations on a deque are enqueue and dequeue on either end. Also generally implemented are peek operations, which return the value at that end without dequeuing it. The names of these operations vary between languages and libraries. Implementations: There are at least two common ways to efficiently implement a deque: with a modified dynamic array or with a doubly linked list. 
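These basic operations can be illustrated with Python's collections.deque (discussed under Language support below); all six calls below run in O(1):

```python
from collections import deque

d = deque()
d.append(2)        # enqueue at the back (tail)
d.append(3)
d.appendleft(1)    # enqueue at the front (head)

assert d[0] == 1   # peek at the front without dequeuing
assert d[-1] == 3  # peek at the back without dequeuing

first = d.popleft()  # dequeue from the front -> 1
last = d.pop()       # dequeue from the back  -> 3
```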
Implementations: The dynamic array approach uses a variant of a dynamic array that can grow from both ends, sometimes called an array deque. These array deques have all the properties of a dynamic array, such as constant-time random access, good locality of reference, and inefficient insertion/removal in the middle, with the addition of amortized constant-time insertion/removal at both ends, instead of just one end. Three common implementations are: Storing deque contents in a circular buffer, and only resizing when the buffer becomes full. This decreases the frequency of resizings. Implementations: Allocating deque contents from the center of the underlying array, and resizing the underlying array when either end is reached. This approach may require more frequent resizings and waste more space, particularly when elements are only inserted at one end. Storing contents in multiple smaller arrays, allocating additional arrays at the beginning or end as needed. Indexing is implemented by keeping a dynamic array containing pointers to each of the smaller arrays. Implementations: Purely functional implementation Double-ended queues can also be implemented as a purely functional data structure.: 115  Two versions of the implementation exist. The first one, called a 'real-time deque', is presented below. It allows the queue to be persistent with operations in O(1) worst-case time, but requires lazy lists with memoization. The second one, with neither lazy lists nor memoization, is presented at the end of the section. Its amortized time is O(1) if persistence is not used, but the worst-case time complexity of an operation is O(n), where n is the number of elements in the double-ended queue. Implementations: Let us recall that, for a list l, |l| denotes its length, that NIL represents an empty list and CONS(h, t) represents the list whose head is h and whose tail is t. 
The functions drop(i, l) and take(i, l) return the list l without its first i elements, and the first i elements of l, respectively. Or, if |l| < i, they return the empty list and l respectively. Implementations: Real-time deques via lazy rebuilding and scheduling A double-ended queue is represented as a sextuple (len_front, front, tail_front, len_rear, rear, tail_rear) where front is a linked list which contains the front of the queue, of length len_front. Similarly, rear is a linked list which represents the reverse of the rear of the queue, of length len_rear. Furthermore, it is assured that |front| ≤ 2|rear|+1 and |rear| ≤ 2|front|+1 - intuitively, this means that both the front and the rear contain between a third minus one and two thirds plus one of the elements. Finally, tail_front and tail_rear are tails of front and of rear; they allow scheduling the moments at which some lazy operations are forced. Note that, when a double-ended queue contains n elements in the front list and n elements in the rear list, then the inequality invariant remains satisfied after i insertions and d deletions when (i+d) ≤ n/2. That is, at most n/2 operations can happen between each rebalancing. Implementations: Let us first give an implementation of the various operations that affect the front of the deque - cons, head and tail. Those implementations do not necessarily respect the invariant. In a second step, we explain how to modify a deque which does not satisfy the invariant into one which satisfies it. However, they use the invariant, in that if the front is empty then the rear has at most one element. The operations affecting the rear of the list are defined similarly by symmetry. It remains to explain how to define a method balance that rebalances the deque if insert' or tail broke the invariant. The methods insert and tail can be defined by first applying insert' and tail' and then applying balance. 
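Stripped of laziness and scheduling, the core two-list idea can be sketched as follows. This is only a minimal illustration: it uses Python lists in place of linked lists, the class and method names are ours, and the balance criterion is simplified to "rebalance when one side runs dry" rather than the |front| ≤ 2|rear|+1 invariant above.

```python
class TwoListDeque:
    """Deque as a pair of lists: `front` stores the front half with the
    queue's head at front[-1]; `rear` stores the back half with the
    queue's tail at rear[-1]. Queue order is front[::-1] + rear."""

    def __init__(self):
        self.front = []
        self.rear = []

    def push_front(self, x):
        self.front.append(x)

    def push_back(self, x):
        self.rear.append(x)

    def _balance(self):
        # The "balance" step: move half of the non-empty side over,
        # reversing it so both representations stay consistent.
        if not self.front:
            k = (len(self.rear) + 1) // 2
            self.front = self.rear[:k][::-1]
            self.rear = self.rear[k:]
        elif not self.rear:
            k = (len(self.front) + 1) // 2
            self.rear = self.front[:k][::-1]
            self.front = self.front[k:]

    def pop_front(self):
        if not self.front:
            self._balance()
        if not self.front:
            raise IndexError("pop from empty deque")
        return self.front.pop()

    def pop_back(self):
        if not self.rear:
            self._balance()
        if not self.rear:
            raise IndexError("pop from empty deque")
        return self.rear.pop()
```

Because each rebalancing reverses only half of one side, the reversal cost can be charged against the pushes that built that side, giving O(1) amortized time per operation; without laziness, however, a single operation can still cost O(n), as noted below.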
Implementations: Here rotateDrop(front, i, rear) returns the concatenation of front and of drop(i, rear). That is, front' = rotateDrop(front, ceil_half_len, rear) puts into front' the content of front and the content of rear that is not already in rear'. Since dropping n elements takes O(n) time, we use laziness to ensure that elements are dropped two by two, with two drops being done during each tail' and each insert' operation. The function rotateRev(front, middle, rear) returns the front, followed by the middle reversed, followed by the rear. This function is also defined using laziness to ensure that it can be computed step by step, with one step executed during each insert' and tail' and taking constant time. It uses the invariant that |rear|-2|front| is 2 or 3. Here ++ is the function concatenating two lists. Implementations: Implementation without laziness Note that, without the lazy part of the implementation, this would be a non-persistent implementation of a queue in O(1) amortized time. In this case, the lists tail_front and tail_rear could be removed from the representation of the double-ended queue. Language support: Ada's containers provide the generic packages Ada.Containers.Vectors and Ada.Containers.Doubly_Linked_Lists, for the dynamic array and linked list implementations, respectively. C++'s Standard Template Library provides the class templates std::deque and std::list, for the multiple-array and linked list implementations, respectively. As of Java 6, Java's Collections Framework provides a new Deque interface that provides the functionality of insertion and removal at both ends. It is implemented by classes such as ArrayDeque (also new in Java 6) and LinkedList, providing the dynamic array and linked list implementations, respectively. However, ArrayDeque, contrary to its name, does not support random access. 
JavaScript's Array prototype and Perl's arrays have native support for both removing (shift and pop) and adding (unshift and push) elements at both ends. Python 2.4 introduced the collections module with support for deque objects. It is implemented using a doubly linked list of fixed-length subarrays. As of PHP 5.3, PHP's SPL extension contains the 'SplDoublyLinkedList' class that can be used to implement deque data structures. Previously, to make a deque structure, the array functions array_shift/unshift/pop/push had to be used instead. Language support: GHC's Data.Sequence module implements an efficient, functional deque structure in Haskell. The implementation uses 2–3 finger trees annotated with sizes. There are other (fast) possibilities to implement purely functional (thus also persistent) double-ended queues (most using heavily lazy evaluation). Kaplan and Tarjan were the first to implement optimal confluently persistent catenable deques. Their implementation was strictly purely functional in the sense that it did not use lazy evaluation. Okasaki simplified the data structure by using lazy evaluation with a bootstrapped data structure and degrading the performance bounds from worst-case to amortized. Kaplan, Okasaki, and Tarjan produced a simpler, non-bootstrapped, amortized version that can be implemented either using lazy evaluation or more efficiently using mutation in a broader but still restricted fashion. Mihaesau and Tarjan created a simpler (but still highly complex) strictly purely functional implementation of catenable deques, and also a much simpler implementation of strictly purely functional non-catenable deques, both of which have optimal worst-case bounds. Language support: Rust's std::collections includes VecDeque, which implements a double-ended queue using a growable ring buffer. Complexity: In a doubly linked list implementation, and assuming no allocation/deallocation overhead, the time complexity of all deque operations is O(1). 
Additionally, the time complexity of insertion or deletion in the middle, given an iterator, is O(1); however, the time complexity of random access by index is O(n). In a growing array, the amortized time complexity of all deque operations is O(1). Additionally, the time complexity of random access by index is O(1); but the time complexity of insertion or deletion in the middle is O(n). Applications: One example where a deque can be used is the work stealing algorithm. This algorithm implements task scheduling for several processors. A separate deque with threads to be executed is maintained for each processor. To execute the next thread, the processor gets the first element from the deque (using the "remove first element" deque operation). If the current thread forks, it is put back to the front of the deque ("insert element at front") and a new thread is executed. When one of the processors finishes execution of its own threads (i.e. its deque is empty), it can "steal" a thread from another processor: it gets the last element from the deque of another processor ("remove last element") and executes it. The work stealing algorithm is used by Intel's Threading Building Blocks (TBB) library for parallel programming.
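The scheduling loop described above can be sketched in a few lines of Python. This is a deterministic toy model of work stealing, not TBB's implementation; the function and variable names are ours.

```python
from collections import deque

def run_all(worker_queues):
    """Toy work-stealing scheduler over a list of per-worker deques.
    Each worker takes tasks from the front of its own deque, and steals
    from the back of another worker's deque when its own is empty.
    Returns the (worker, task) pairs in execution order."""
    executed = []
    while any(worker_queues):
        for i, dq in enumerate(worker_queues):
            if dq:
                executed.append((i, dq.popleft()))      # own work: front
            else:
                victim = next((d for d in worker_queues if d), None)
                if victim is not None:
                    executed.append((i, victim.pop()))  # steal: back
    return executed
```

For example, with worker 0 holding three tasks and worker 1 holding one, worker 1 finishes early and then steals worker 0's last task from the back of its deque, so every task is still executed exactly once.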
**Adherens junction** Adherens junction: Adherens junctions (or zonula adherens, intermediate junction, or "belt desmosome") are protein complexes that occur at cell–cell junctions and cell–matrix junctions in epithelial and endothelial tissues, usually more basal than tight junctions. An adherens junction is defined as a cell junction whose cytoplasmic face is linked to the actin cytoskeleton. They can appear as bands encircling the cell (zonula adherens) or as spots of attachment to the extracellular matrix (focal adhesion). Adherens junction: Adherens junctions uniquely disassemble in uterine epithelial cells to allow the blastocyst to penetrate between epithelial cells. A similar cell junction in non-epithelial, non-endothelial cells is the fascia adherens. It is structurally the same, but appears in ribbonlike patterns that do not completely encircle the cells. One example is in cardiomyocytes. Proteins: Adherens junctions are composed of the following proteins: cadherins, a family of transmembrane proteins that form homodimers in a calcium-dependent manner with cadherin molecules on adjacent cells; p120 (sometimes called delta catenin), which binds the juxtamembrane region of the cadherin; γ-catenin or gamma-catenin (plakoglobin), which binds the catenin-binding region of the cadherin; and α-catenin or alpha-catenin, which binds the cadherin indirectly via β-catenin or plakoglobin and links the actin cytoskeleton with the cadherin. Significant protein dynamics are thought to be involved. Models: Adherens junctions were, for many years, thought to share the characteristic of anchoring cells through their cytoplasmic actin filaments. Adherens junctions may serve as a regulatory module to maintain the actin contractile ring with which they are associated in microscopic studies.
**Bundle (mathematics)** Bundle (mathematics): In mathematics, a bundle is a generalization of a fiber bundle dropping the condition of a local product structure. The requirement of a local product structure rests on the bundle having a topology. Without this requirement, more general objects can be considered bundles. For example, one can consider a bundle π: E → B with E and B sets. It is no longer true that the preimages π−1(x) must all look alike, unlike fiber bundles, where the fibers must all be homeomorphic (and, in the case of vector bundles, isomorphic). Definition: A bundle is a triple (E, p, B) where E, B are sets and p : E → B is a map. Definition: E is called the total space, B is the base space of the bundle, and p is the projection. This definition of a bundle is quite unrestrictive. For instance, the empty function defines a bundle. Nonetheless it serves well to introduce the basic terminology, and every type of bundle has the basic ingredients above, with restrictions on E, p, B, and usually there is additional structure. Definition: For each b ∈ B, p−1(b) is the fibre or fiber of the bundle over b. A bundle (E*, p*, B*) is a subbundle of (E, p, B) if B* ⊂ B, E* ⊂ E and p* = p|E*. A cross section is a map s : B → E such that p(s(b)) = b for each b ∈ B, that is, s(b) ∈ p−1(b). Examples: If E and B are smooth manifolds and p is smooth, surjective and in addition a submersion, then the bundle is a fibered manifold. Here and in the following examples, the smoothness condition may be weakened to continuous or sharpened to analytic, or it could be anything reasonable, like continuously differentiable (C1), in between. If for each two points b1 and b2 in the base, the corresponding fibers p−1(b1) and p−1(b2) are homotopy equivalent, then the bundle is a fibration. 
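The latitude the definition allows can be seen in a small worked example (our choice, not from the text): the fibres of a bundle need not look alike at all.

```latex
E = \{(x, y) \in \mathbb{R}^2 : 0 \le y \le |x|\}, \qquad
B = \mathbb{R}, \qquad p(x, y) = x .

% The fibre over b is an interval of length |b|:
p^{-1}(b) = \{b\} \times [0, |b|],

% which degenerates to a single point when b = 0, so the fibres are not
% all homeomorphic: (E, p, B) is a bundle but not a fiber bundle.

% A cross section:
s(b) = (b, 0), \quad \text{since } p(s(b)) = b \text{ for every } b \in B .
```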
Examples: If for each two points b1 and b2 in the base, the corresponding fibers p⁻¹(b1) and p⁻¹(b2) are homeomorphic, and in addition the bundle satisfies certain conditions of local triviality outlined in the pertaining linked articles, then the bundle is a fiber bundle. Usually there is additional structure, e.g. a group structure or a vector space structure, on the fibers besides a topology. Then it is required that the homeomorphism is an isomorphism with respect to that structure, and the conditions of local triviality are sharpened accordingly. Examples: A principal bundle is a fiber bundle endowed with a right group action with certain properties. One example of a principal bundle is the frame bundle. If for each two points b1 and b2 in the base, the corresponding fibers p⁻¹(b1) and p⁻¹(b2) are vector spaces of the same dimension, then the bundle is a vector bundle if the appropriate conditions of local triviality are satisfied. The tangent bundle is an example of a vector bundle. Bundle objects: More generally, bundles or bundle objects can be defined in any category: in a category C, a bundle is simply an epimorphism π: E → B. If the category is not concrete, then the notion of a preimage of the map is not necessarily available. Therefore these bundles may have no fibers at all, although for sufficiently well-behaved categories they do; for instance, for a category with pullbacks and a terminal object 1, the points of B can be identified with morphisms p: 1 → B and the fiber of p is obtained as the pullback of p and π. The category of bundles over B is a subcategory of the slice category (C↓B) of objects over B, while the category of bundles without fixed base object is a subcategory of the comma category (C↓C), which is also the functor category C², the category of morphisms in C. Bundle objects: The category of smooth vector bundles is a bundle object over the category of smooth manifolds in Cat, the category of small categories. 
The functor taking each manifold to its tangent bundle is an example of a section of this bundle object.
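The set-level definitions above (total space, projection, fiber, cross section) can be sketched concretely. The following toy example uses made-up finite sets purely for illustration:

```python
# A bundle is a triple (E, p, B): E the total space, B the base space,
# p : E -> B the projection. All sets and names here are illustrative.
E = {("a", 0), ("a", 1), ("b", 0)}   # total space
B = {"a", "b"}                        # base space

def p(e):
    """Projection onto the base."""
    return e[0]

def fiber(b):
    """The fiber over b is the preimage p^-1(b)."""
    return {e for e in E if p(e) == b}

def is_section(s):
    """A cross section s : B -> E must satisfy p(s(b)) = b for every b."""
    return all(p(s(b)) == b for b in B)

section = {"a": ("a", 1), "b": ("b", 0)}.get
# The fibers need not "look alike": fiber("a") has two elements,
# while fiber("b") has only one.
```

This illustrates why the definition is so unrestrictive: nothing forces the fibers over different base points to be related in any way.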
**Canadian Family Physician** Canadian Family Physician: Canadian Family Physician (French: Le Médecin de famille canadien) is a monthly peer-reviewed open-access medical journal published by the College of Family Physicians of Canada. It provides continuing medical education for family physicians and other primary care clinicians. The journal publishes original articles presenting a family medicine perspective on clinical medicine through approaches to common clinical conditions and evidence-based clinical reviews intended to assist family physicians in patient care. Most articles are published in both English and French. The journal was established in 1967 and the editor-in-chief is Nicholas Pimlott (University of Toronto). Abstracting and indexing: The journal is abstracted and indexed in multiple bibliographic databases. According to the Journal Citation Reports, its 2020 impact factor is 3.275.
**Clinic (music)** Clinic (music): A musical clinic is an informal meeting with a guest musician, where a small-to-medium-sized audience asks the musician about styles and techniques, and about how to improve their own skills. The musician might perform an entire piece, or demonstrate certain techniques for the audience to observe. The objective is for the audience to learn from the guest musician. A musical clinic can apply to any type of musical instrument, music or player. The clinics are often held at musical instrument stores.
**Aeromagnetic survey** Aeromagnetic survey: An aeromagnetic survey is a common type of geophysical survey carried out using a magnetometer aboard or towed behind an aircraft. The principle is similar to a magnetic survey carried out with a hand-held magnetometer, but allows much larger areas of the Earth's surface to be covered quickly for regional reconnaissance. The aircraft typically flies in a grid-like pattern, with height and line spacing determining the resolution of the data (and the cost of the survey per unit area). Method: As the aircraft flies, the magnetometer measures and records the total intensity of the magnetic field at the sensor, which is a combination of the desired magnetic field generated in the Earth as well as tiny variations due to the temporal effects of the constantly varying solar wind and the magnetic field of the survey aircraft. By subtracting the solar, regional, and aircraft effects, the resulting aeromagnetic map shows the spatial distribution and relative abundance of magnetic minerals (most commonly the iron oxide mineral magnetite) in the upper levels of the Earth's crust. Because different rock types differ in their content of magnetic minerals, the magnetic map allows a visualization of the geological structure of the upper crust in the subsurface, particularly the spatial geometry of bodies of rock and the presence of faults and folds. This is particularly useful where bedrock is obscured by surface sand, soil or water. Aeromagnetic data was once presented as contour plots, but is now more commonly expressed as thematic (colored) and shaded computer-generated pseudo-topography images. The apparent hills, ridges and valleys are referred to as aeromagnetic anomalies. A geophysicist can use mathematical modeling to infer the shape, depth and properties of the rock bodies responsible for the anomalies. 
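The reduction step described above, subtracting temporal (diurnal), regional, and aircraft effects from the measured total field, can be sketched as follows. All field values (in nT) are made up for illustration; real processing uses base-station records and a reference field model such as the IGRF:

```python
# Sketch of basic aeromagnetic data reduction (hypothetical values).
def residual_anomaly(total_nT, diurnal_nT, regional_nT, aircraft_nT=0.0):
    """Residual anomaly = measured total field minus temporal (diurnal),
    regional (reference-field), and aircraft corrections."""
    return total_nT - diurnal_nT - regional_nT - aircraft_nT

anomaly = residual_anomaly(
    total_nT=48512.4,     # magnetometer reading at the sensor
    diurnal_nT=12.4,      # base-station record of temporal variation
    regional_nT=48350.0,  # reference field value at this location
    aircraft_nT=5.0,      # heading-dependent aircraft correction
)
```

The residual values along each flight line are then gridded to produce the anomaly map described in the text.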
Method: Airplanes are normally used for high-level reconnaissance surveys in gentle terrain, and helicopters are used in mountainous terrain or where more detail is required. History: Aeromagnetic surveys were first performed in World War II to detect submarines using a Magnetic Anomaly Detector attached to an aircraft. This method is still widely used by military maritime patrol aircraft. Uses: Aeromagnetic surveys are widely used to aid in the production of geological maps and are also commonly used during mineral exploration and petroleum exploration. Some mineral deposits are associated with an increase or decrease in the abundance of magnetic minerals, and occasionally the sought-after commodity may itself be magnetic (e.g. iron ore deposits), but often the elucidation of the subsurface structure of the upper crust is the most valuable contribution of the aeromagnetic data. It has also been used to find buried fault zones that are prone to damaging earthquakes. Unexploded ordnance: Aeromagnetic surveys are also used to perform reconnaissance mapping of unexploded ordnance. The aircraft is typically a helicopter, as the sensors must be close to the ground (relative to mineral exploration) to be effective. Electromagnetic methods are also used for this purpose. UAV aeromagnetic survey: Recent developments in aeromagnetic surveying include the use of drones. The market for unmanned aerial systems is developing rapidly, so the arrival of these technologies in niches such as geophysical surveying was inevitable. UAVs have proven to be especially useful for mineral exploration, detection and identification. It is also possible to detect unexploded ordnance using a drone-mounted magnetometer.
**Molybdenum disilicide** Molybdenum disilicide: Molybdenum disilicide (MoSi2, or molybdenum silicide), an intermetallic compound and a silicide of molybdenum, is a refractory ceramic with primary use in heating elements. It has moderate density, a melting point of 2030 °C, and is electrically conductive. At high temperatures it forms a passivation layer of silicon dioxide, protecting it from further oxidation. The thermal stability of MoSi2, alongside its high emissivity, makes this material, together with WSi2, attractive for use as a high-emissivity coating in heat shields for atmospheric entry. MoSi2 is a gray metallic-looking material with a tetragonal crystal structure (alpha-modification); its beta-modification is hexagonal and unstable. It is insoluble in most acids but soluble in nitric acid and hydrofluoric acid. Molybdenum disilicide: While MoSi2 has excellent resistance to oxidation and a high Young's modulus at temperatures above 1000 °C, it is brittle at lower temperatures. Also, above 1200 °C it loses creep resistance. These properties limit its use as a structural material, but they may be offset by using it together with another material as a composite. Molybdenum disilicide and MoSi2-based materials are usually made by sintering. Plasma spraying can be used for producing its dense monolithic and composite forms; material produced this way may contain a proportion of β-MoSi2 due to its rapid cooling. Molybdenum disilicide: Molybdenum disilicide heating elements can be used for temperatures up to 1800 °C, in electric furnaces used in laboratory and production environments for the production of glass, steel, electronics and ceramics, and in the heat treatment of materials. While the elements are brittle, they can operate at high power without aging, and their electrical resistivity does not increase with operation time. Their maximum operating temperature has to be lowered in atmospheres with low oxygen content due to breakdown of the passivation layer. 
Molybdenum disilicide: Other ceramic materials used for heating elements include silicon carbide, barium titanate, and lead titanate composite materials. Molybdenum disilicide is used in microelectronics as a contact material. It is often used as a shunt over polysilicon lines to increase their conductivity and increase signal speed.
**Rational Rhapsody** Rational Rhapsody: Rational Rhapsody, a modeling environment based on UML, is a visual development environment for systems engineers and software developers creating real-time or embedded systems and software. Rational Rhapsody uses graphical models to generate software applications in various languages including C, C++, Ada, Java and C#. Rational Rhapsody: Developers use Rational Rhapsody to understand and elaborate requirements, create model designs using industry-standard languages (UML, SysML, AUTOSAR, DoDAF, MODAF, UPDM), validate functionality early in development, and automate delivery of highly structured products. Rational Rhapsody Model Manager (the previous implementation, Design Manager, is being deprecated) is a web-based application that stakeholders, developers, and other team members use to collaborate on the design of products, software, and systems. The product contains a server that hosts model designs which have been developed in Rational Rhapsody. A client extension component included with Rational Rhapsody allows users to connect to a Design Manager server. After connecting to the server, models can be moved into project areas with specific modelling domains based on the industry-standard languages supported by Rational Rhapsody. Rhapsody Model Manager also integrates with the Rational solution for Collaborative Lifecycle Management (CLM). In this environment, artifacts can be associated with other lifecycle resources such as requirements (the Doors Next Generation application), change requests and change sets of sources (the Team Concert application), and quality assurance test cases (the Quality Manager application). Global Configuration control allows different teams and different projects to interact in a synchronized setup that integrates deliveries and baselines within each of the tools in the CLM solution. History: Rhapsody was first released in 1996 by Israeli software company I-Logix Inc. 
Rhapsody was developed as an object-oriented tool for modeling and executing statecharts, based on work done by David Harel at the Weizmann Institute of Science, who was the first to develop the concept of hierarchical, parallel, and broadcasting statecharts.In 2006, I-Logix's shareholders sold the company to Swedish software company Telelogic AB. Rhapsody became a Rational Software product after the acquisition of Telelogic AB in 2008, like all former Telelogic products. Since the rebranding, Rational Rhapsody has been integrated with the IBM Rational Systems and Software Engineering Solution. History: Rational Rhapsody Design Manager was first released in June, 2011 by IBM. In December 2011, the product was integrated as a design component in IBM Rational Solution for Collaborative Lifecycle Management (CLM).
**Persistent current** Persistent current: In physics, persistent current refers to a perpetual electric current that does not require an external power source. Such a current is impossible in normal electrical devices, since all commonly used conductors have a non-zero resistance, and this resistance would rapidly dissipate any such current as heat. However, in superconductors and some mesoscopic devices, persistent currents are possible and observed due to quantum effects. In resistive materials, persistent currents can appear in microscopic samples due to size effects. Persistent currents are widely used in the form of superconducting magnets. In magnetized objects: In electromagnetism, all magnetizations can be seen as microscopic persistent currents. By definition, a magnetization M can be replaced by its corresponding microscopic form, which is an electric current density: J = ∇ × M. This current is a bound current, not having any charge accumulation associated with it since it is divergenceless. What this means is that any permanently magnetized object, for example a piece of lodestone, can be considered to have persistent electric currents running throughout it (the persistent currents are generally concentrated near the surface). The converse is also true: any persistent electric current is divergence-free, and can therefore be represented instead by a magnetization. Therefore, in the macroscopic Maxwell's equations, it is purely a choice of mathematical convenience whether to represent persistent currents as magnetization or vice versa. In the microscopic formulation of Maxwell's equations, however, M does not appear, and so any magnetizations must instead be represented by bound currents. In superconductors: In superconductors, charge can flow without any resistance. 
It is possible to make pieces of superconductor with a large built-in persistent current, either by creating the superconducting state (cooling the material) while charge is flowing through it, or by changing the magnetic field around the superconductor after creating the superconducting state. This principle is used in superconducting electromagnets to generate sustained high magnetic fields that only require a small amount of power to maintain. The persistent current was first identified by H. Kamerlingh Onnes, and attempts to set a lower bound on their duration have reached values of over 100,000 years. In resistive conductors: Surprisingly, it is also possible to have tiny persistent currents inside resistive metals that are placed in a magnetic field, even in metals that are nominally "non-magnetic". The current is the result of a quantum mechanical effect that influences how electrons travel through metals, and arises from the same kind of motion that allows the electrons inside an atom to orbit the nucleus forever. In resistive conductors: This type of persistent current is a mesoscopic low temperature effect: the magnitude of the current becomes appreciable when the size of the metallic system is reduced to the scale of the electron quantum phase coherence length and the thermal length. Persistent currents decrease with increasing temperature and will vanish exponentially above a temperature known as the Thouless temperature. This temperature scales as the inverse of the circuit diameter squared. Consequently, it has been suggested that persistent currents could flow up to room temperature and above in nanometric metal structures such as metal (Au, Ag,...) nanoparticles. This hypothesis has been offered for explaining the singular magnetic properties of nanoparticles made of gold and other metals. 
Unlike with superconductors, these persistent currents do not appear at zero magnetic field, as the current fluctuates symmetrically between positive and negative values; the magnetic field breaks that symmetry and allows a nonzero average current. In resistive conductors: Although the persistent current in an individual ring is largely unpredictable due to uncontrolled factors like the disorder configuration, it has a slight bias so that an average persistent current appears even for an ensemble of conductors with different disorder configurations. This kind of persistent current was first predicted to be experimentally observable in micrometer-scale rings in 1983 by Markus Büttiker, Yoseph Imry, and Rolf Landauer. Because the effect requires the phase coherence of electrons around the entire ring, the current cannot be observed when the ring is interrupted by an ammeter, and thus the current must be measured indirectly through its magnetization. In resistive conductors: In fact, all metals exhibit some magnetization in magnetic fields due to a combination of the de Haas–van Alphen effect, core diamagnetism, Landau diamagnetism, and Pauli paramagnetism, which all appear regardless of the shape of the metal. In resistive conductors: The additional magnetization from persistent current becomes strong with a connected ring shape, and for example would disappear if the ring were cut. Experimental evidence of the observation of persistent currents was first reported in 1990 by a research group at Bell Laboratories using a superconducting resonator to study an array of copper rings. Subsequent measurements using superconducting resonators and extremely sensitive magnetometers known as superconducting quantum interference devices (SQUIDs) produced inconsistent results. 
In resistive conductors: In 2009, physicists at Stanford University using a scanning SQUID and at Yale University using microelectromechanical cantilevers reported measurements of persistent currents in nanoscale gold and aluminum rings respectively that both showed a strong agreement with the simple theory for non-interacting electrons. In resistive conductors: "These are ordinary, non-superconducting metal rings, which we typically think of as resistors, yet these currents will flow forever, even in the absence of an applied voltage." The 2009 measurements both reported greater sensitivity to persistent currents than previous measurements and made several other improvements to persistent current detection. The scanning SQUID's ability to change the position of the SQUID detector relative to the ring sample allowed for a number of rings to be measured on one sample chip and better extraction of the current signal from background noise. The cantilever detector's mechanical detection technique made it possible to measure the rings in a clean electromagnetic environment over a large range of magnetic field and also to measure a number of rings on one sample chip.
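The bound-current relation J = ∇ × M from the magnetization section can be checked numerically. This finite-difference sketch (grid size and magnetization values are made up for illustration) shows that for a uniformly magnetized cylinder the bound current vanishes in the interior and concentrates near the surface, as stated in the text:

```python
import numpy as np

n, L = 64, 2.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# Uniform magnetization M = Mz * z-hat inside a cylinder of radius 1:
Mz = (X**2 + Y**2 < 1.0).astype(float)

# In-plane components of J = curl(M): Jx = dMz/dy, Jy = -dMz/dx
Jx = np.gradient(Mz, dx, axis=1)
Jy = -np.gradient(Mz, dx, axis=0)
Jmag = np.hypot(Jx, Jy)

# Deep inside the magnet the bound current is zero; it is nonzero
# only in a thin shell near the cylinder surface.
interior = Jmag[X**2 + Y**2 < 0.5]
```

The divergence of a curl is identically zero, which is why this bound current carries no charge accumulation.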
**Ornithodoros** Ornithodoros: Ornithodoros is a genus in the soft-bodied tick family, Argasidae. Physiology: The opening between the midgut and hindgut has been lost, making the ticks unable to pass digestive waste products out of their bodies. Taxonomy: The Linnean name derives from ornithos (Greek: ὄρνιθος) and doros (Greek: δῶρον), meaning "bird" and "gift", respectively. The genus contains numerous species; Carios erraticus was previously placed in this genus, as Ornithodoros erraticus.
**Mutton flaps** Mutton flaps: Mutton flaps, or breast of lamb, are an inexpensive cut of meat from a sheep. Mutton flaps: Consisting of a sheep's lower rib meat, mutton flaps are considered a low-quality cut in Western countries, unlike pork and beef ribs. They have been described there as a "tough, scraggy meat" if not properly prepared. Their high fat content has also contributed to their unpopularity in many Western countries, although they are widely used as döner meat in Europe. Mutton flaps are a staple in the South Pacific, where their high fat content has been linked with the development of obesity problems. In 2000, Fiji banned their import. On July 1, 2020, Tonga banned the import of mutton flaps from New Zealand, claiming their consumption plays a major role in increasing obesity among the population. Method of cooking: In Indonesia, a similar cut of meat called breast of goat is cooked by cutting it into pieces and grilling using skewers. This dish, called sate kronyos, is especially popular in Bantul, Yogyakarta.
**Flogo** Flogo: A Flogo (portmanteau of floating and logo), or foam balloon, is a stable mass of lighter-than-air soap bubbles formed into a specific shape. They are not balloons, as they have no envelope, but consist merely of a condensed grouping of soap bubbles filled with a mixture of helium and air. They are shaped by being molded through a die inserted in the top of the generating machine. Flogo: It is possible to create foam balloons with a diameter of more than 1 meter. Identical foam balloons can be manufactured with the same machine in quick repetition. Flogos are most frequently used for "skyvertising" or aerial advertising purposes, since they can be manufactured easily in the form of corporate or team logos. In principle, wind conditions in the lower atmosphere can be easily monitored with Flogos. Foam balloons are not stable long-term, but decay after some hours. Nevertheless, they can reach heights of several kilometers.
**Dirichlet boundary condition** Dirichlet boundary condition: In the mathematical study of differential equations, the Dirichlet (or first-type) boundary condition is a type of boundary condition, named after Peter Gustav Lejeune Dirichlet (1805–1859). When imposed on an ordinary or a partial differential equation, it specifies the values that a solution needs to take along the boundary of the domain. Dirichlet boundary condition: In finite element method (FEM) analysis, the essential or Dirichlet boundary condition is defined via the weighted-integral form of a differential equation. The dependent unknown u, in the same form as the weight function w appearing in the boundary expression, is termed a primary variable, and its specification constitutes the essential or Dirichlet boundary condition. The question of finding solutions to such equations is known as the Dirichlet problem. In applied sciences, a Dirichlet boundary condition may also be referred to as a fixed boundary condition. Examples: ODE For an ordinary differential equation, for instance y″(x) + y(x) = 0, the Dirichlet boundary conditions on the interval [a, b] take the form y(a) = α and y(b) = β, where α and β are given numbers. PDE For a partial differential equation, for example ∇²y(x) + y(x) = 0, where ∇² denotes the Laplace operator, the Dirichlet boundary conditions on a domain Ω ⊂ Rⁿ take the form y(x) = f(x) for all x ∈ ∂Ω, where f is a known function defined on the boundary ∂Ω. Applications For example, the following would be considered Dirichlet boundary conditions: In mechanical engineering and civil engineering (beam theory), where one end of a beam is held at a fixed position in space. In heat transfer, where a surface is held at a fixed temperature. In electrostatics, where a node of a circuit is held at a fixed voltage. In fluid dynamics, the no-slip condition for viscous fluids states that at a solid boundary, the fluid will have zero velocity relative to the boundary. 
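A minimal numerical illustration (the toy problem below is an assumption for this sketch, not from the article): solving u″ = 0 on [0, 1] by finite differences, where the Dirichlet conditions u(0) = 0 and u(1) = 1 pin the boundary values directly:

```python
import numpy as np

n = 11
x = np.linspace(0.0, 1.0, n)

A = np.zeros((n, n))
b = np.zeros(n)
# Discrete Laplacian rows for the interior points:
# u'' ~ (u[i-1] - 2 u[i] + u[i+1]) / h^2 = 0
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

# Dirichlet boundary conditions fix the unknowns at the boundary nodes:
A[0, 0], b[0] = 1.0, 0.0          # u(0) = 0
A[n - 1, n - 1], b[n - 1] = 1.0, 1.0  # u(1) = 1

u = np.linalg.solve(A, b)
# For u'' = 0 with these conditions the exact solution is u(x) = x.
```

A Neumann or mixed condition would instead modify the boundary rows to constrain the derivative, which is what distinguishes the boundary-condition types mentioned below.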
Other boundary conditions: Many other boundary conditions are possible, including the Cauchy boundary condition and the mixed boundary condition. The latter is a combination of the Dirichlet and Neumann conditions.
**Automatic clustering algorithms** Automatic clustering algorithms: Automatic clustering algorithms are algorithms that can perform clustering without prior knowledge of data sets. In contrast with other cluster analysis techniques, automatic clustering algorithms can determine the optimal number of clusters even in the presence of noise and outlier points. Centroid-based: Given a set of n objects, centroid-based algorithms create k partitions based on a dissimilarity function, such that k ≤ n. A major problem in applying this type of algorithm is determining the appropriate number of clusters for unlabeled data. Therefore, most research in clustering analysis has been focused on the automation of the process. Centroid-based: Automated selection of k in a k-means clustering algorithm, one of the most used centroid-based clustering algorithms, is still a major problem in machine learning. The most accepted solution to this problem is the elbow method. It consists of running k-means clustering on the data set for a range of values of k, calculating the sum of squared errors for each, and plotting them in a line chart. If the chart looks like an arm, the best value of k will be at the "elbow". Another method that modifies the k-means algorithm for automatically choosing the optimal number of clusters is the G-means algorithm. It was developed from the hypothesis that a subset of the data follows a Gaussian distribution. Thus, k is increased until each k-means center's data is Gaussian. This algorithm only requires the standard statistical significance level as a parameter and does not set limits for the covariance of the data. Connectivity-based (hierarchical clustering): Connectivity-based clustering or hierarchical clustering is based on the idea that objects have more similarities to other nearby objects than to those further away. Therefore, the generated clusters from this type of algorithm will be the result of the distance between the analyzed objects. 
Connectivity-based (hierarchical clustering): Hierarchical models can either be divisive, where partitions are built from the entire data set available, or agglomerative, where each partition begins with a single object and additional objects are added to the set. Although hierarchical clustering has the advantage of allowing any valid metric to be used as the defined distance, it is sensitive to noise and fluctuations in the data set and is more difficult to automate. Connectivity-based (hierarchical clustering): Methods have been developed to improve and automate existing hierarchical clustering algorithms, such as an automated version of single-linkage hierarchical cluster analysis (HCA). This computerized method bases its success on a self-consistent outlier reduction approach followed by the building of a descriptive function which permits defining natural clusters. Discarded objects can also be assigned to these clusters. Essentially, one need not resort to external parameters to identify natural clusters. Information gathered from HCA, automated and reliable, can be summarized in a dendrogram with the number of natural clusters and the corresponding separation, an option not found in classical HCA. This method includes the following two steps: removal of outliers (applied in many filtering applications), and an optional classification allowing expansion of clusters with the whole set of objects. BIRCH (balanced iterative reducing and clustering using hierarchies) is an algorithm used to perform connectivity-based clustering for large data sets. It is regarded as one of the fastest clustering algorithms, but it is limited because it requires the number of clusters as an input. Therefore, new algorithms based on BIRCH have been developed in which there is no need to provide the cluster count from the beginning, but that preserve the quality and speed of the clusters. 
The main modification is to remove the final step of BIRCH, where the user had to input the cluster count, and to improve the rest of the algorithm, referred to as tree-BIRCH, by optimizing a threshold parameter from the data. In this resulting algorithm, the threshold parameter is calculated from the maximum cluster radius and the minimum distance between clusters, which are often known. This method proved to be efficient for data sets of tens of thousands of clusters. Beyond that amount, however, a supercluster splitting problem arises. To address it, other algorithms have been developed, such as MDB-BIRCH, which reduces supercluster splitting while remaining relatively fast. Density-based: Unlike partitioning and hierarchical methods, density-based clustering algorithms are able to find clusters of any arbitrary shape, not only spheres. Density-based: The density-based clustering algorithm uses autonomous machine learning that identifies patterns regarding geographical location and distance to a particular number of neighbors. It is considered autonomous because a priori knowledge of what a cluster is is not required. This type of algorithm provides different methods to find clusters in the data. The fastest method is DBSCAN, which uses a defined distance to differentiate between dense groups of information and sparser noise. Moreover, HDBSCAN can self-adjust by using a range of distances instead of a specified one. Lastly, the method OPTICS creates a reachability plot based on the distance from neighboring features to separate noise from clusters of varying density. Density-based: These methods still require the user to provide the cluster center and cannot be considered automatic. The Automatic Local Density Clustering Algorithm (ALDC) is an example of the new research focused on developing automatic density-based clustering. 
ALDC works out the local density and distance deviation of every point, thus expanding the difference between the potential cluster center and other points. This expansion allows the machine to work automatically. The machine identifies cluster centers and assigns the remaining points to their closest neighbor of higher density. In the automation of data density to identify clusters, research has also focused on artificially generating the algorithms. For instance, Estimation of Distribution Algorithms guarantee the generation of valid algorithms via a directed acyclic graph (DAG), in which nodes represent procedures (building blocks) and edges represent possible execution sequences between two nodes. Building blocks determine the EDA's alphabet or, in other words, any generated algorithm. In experimental results, artificially generated clustering algorithms are compared to DBSCAN, a manually designed algorithm.
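The DBSCAN idea mentioned under density-based clustering, a defined distance eps separating dense groups from sparse noise, can be sketched minimally. This toy one-dimensional implementation and its data are illustrative only, not the optimized algorithm used in practice:

```python
# Minimal DBSCAN sketch: points within eps of at least min_pts points
# (including themselves) are core points; clusters grow from them,
# and points reachable from no core point are labeled noise (-1).
def dbscan(points, eps, min_pts):
    labels = [None] * len(points)          # None = unvisited
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points) if abs(points[i] - q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # noise (may later join a cluster edge)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point: attach, don't expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:       # core point: expand the cluster
                queue.extend(more)
    return labels

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 9.9]  # two dense groups + one outlier
labels = dbscan(pts, eps=0.5, min_pts=2)
```

Note that eps and min_pts must still be supplied; this is exactly the kind of parameter that HDBSCAN and the automatic methods discussed above try to remove.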
**Worldly cardinal** Worldly cardinal: In mathematical set theory, a worldly cardinal is a cardinal κ such that the rank Vκ is a model of Zermelo–Fraenkel set theory. Relationship to inaccessible cardinals: By Zermelo's theorem on inaccessible cardinals, every inaccessible cardinal is worldly. By Shepherdson's theorem, inaccessibility is equivalent to the stronger statement that (Vκ, Vκ+1) is a model of second-order Zermelo–Fraenkel set theory. Being worldly and being inaccessible are not equivalent; in fact, the smallest worldly cardinal has countable cofinality and is therefore a singular cardinal. The following are in strictly increasing order, where ι is the least inaccessible cardinal:
The least worldly κ.
The least worldly κ and λ (κ < λ, and likewise below) with Vκ and Vλ satisfying the same theory.
The least worldly κ that is a limit of worldly cardinals (equivalently, a limit of κ worldly cardinals).
The least worldly κ and λ with Vκ ≺Σ2 Vλ (this is higher than even a κ-fold iteration of the previous item).
The least worldly κ and λ with Vκ ≺ Vλ.
The least worldly κ of cofinality ω1 (corresponds to the extension of the previous item to a chain of length ω1).
The least worldly κ of cofinality ω2 (and so on).
The least κ > ω with Vκ satisfying replacement for the language augmented with the (Vκ, ∈) satisfaction relation.
The least κ inaccessible in Lκ(Vκ); equivalently, the least κ > ω with Vκ satisfying replacement for formulas in Vκ in the infinitary logic L∞,ω.
The least κ with a transitive model M ⊂ Vκ+1 extending Vκ satisfying Morse–Kelley set theory. (not a worldly cardinal)
The least κ with Vκ having the same Σ2 theory as Vι.
The least κ with Vκ and Vι having the same theory.
The least κ with Lκ(Vκ) and Lι(Vι) having the same theory. (not a worldly cardinal)
The least κ with Vκ and Vι having the same Σ2 theory with real parameters. (not a worldly cardinal)
The least κ with Vκ ≺Σ2 Vι.
The least κ with Vκ ≺ Vι.
The least infinite κ with Vκ and Vι satisfying the same L∞,ω statements that are in Vκ.
The least κ with a transitive model M ⊂ Vκ+1 extending Vκ and satisfying the same sentences with parameters in Vκ as Vι+1 does.
The least inaccessible cardinal ι.
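For reference, the defining condition and Shepherdson's characterisation of inaccessibility can be written side by side (standard notation summarising the text above):

```latex
% kappa is worldly: the rank-initial segment satisfies first-order ZFC.
\kappa \text{ is worldly} \iff V_\kappa \models \mathrm{ZFC}
% Shepherdson: kappa is inaccessible iff second-order ZFC holds one level up,
% a strictly stronger condition (the least worldly cardinal is singular).
\kappa \text{ is inaccessible} \iff (V_\kappa, V_{\kappa+1}) \models \mathrm{ZFC}_2
```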
**Singular homology** Singular homology: In algebraic topology, singular homology refers to the study of a certain set of algebraic invariants of a topological space X, the so-called homology groups Hn(X). Singular homology: Intuitively, singular homology counts, for each dimension n, the n-dimensional holes of a space. Singular homology is a particular example of a homology theory, which has now grown to be a rather broad collection of theories. Of the various theories, it is perhaps one of the simpler ones to understand, being built on fairly concrete constructions (see also the related theory simplicial homology). Singular homology: In brief, singular homology is constructed by taking maps of the standard n-simplex to a topological space, and composing them into formal sums, called singular chains. The boundary operation – mapping each n-dimensional simplex to its (n−1)-dimensional boundary – induces the singular chain complex. The singular homology is then the homology of the chain complex. The resulting homology groups are the same for all homotopy equivalent spaces, which is the reason for their study. These constructions can be applied to all topological spaces, and so singular homology is expressible as a functor from the category of topological spaces to the category of graded abelian groups. Singular simplices: A singular n-simplex in a topological space X is a continuous function (also called a map) σ from the standard n-simplex Δn to X, written σ:Δn→X. This map need not be injective, and there can be non-equivalent singular simplices with the same image in X. Singular simplices: The boundary of σ, denoted as ∂nσ, is defined to be the formal sum of the singular (n − 1)-simplices represented by the restriction of σ to the faces of the standard n-simplex, with an alternating sign to take orientation into account. (A formal sum is an element of the free abelian group on the simplices. 
The basis for the group is the infinite set of all possible singular simplices. The group operation is "addition", and the sum of simplex a with simplex b is usually simply designated a + b, but a + a = 2a, and so on. Every simplex a has a negative −a.) Thus, if we designate σ by its vertices [p0, p1, …, pn] = [σ(e0), σ(e1), …, σ(en)], corresponding to the vertices ek of the standard n-simplex Δn (which of course does not fully specify the singular simplex produced by σ), then ∂nσ = ∂n[p0, p1, …, pn] = ∑k=0..n (−1)^k [p0, …, pk−1, pk+1, …, pn] = ∑k=0..n (−1)^k σ|[e0, …, ek−1, ek+1, …, en] is a formal sum of the faces of the simplex image designated in a specific way. (That is, a particular face has to be the restriction of σ to a face of Δn, which depends on the order in which its vertices are listed.) Thus, for example, the boundary of σ = [p0, p1] (a curve going from p0 to p1) is the formal sum (or "formal difference") [p1] − [p0]. Singular chain complex: The usual construction of singular homology proceeds by defining formal sums of simplices, which may be understood to be elements of a free abelian group, and then showing that we can define a certain group, the homology group of the topological space, involving the boundary operator. Singular chain complex: Consider first the set of all possible singular n-simplices σn(X) on a topological space X. This set may be used as the basis of a free abelian group, so that each singular n-simplex is a generator of the group. This set of generators is of course usually infinite, frequently uncountable, as there are many ways of mapping a simplex into a typical topological space. The free abelian group generated by this basis is commonly denoted as Cn(X). Elements of Cn(X) are called singular n-chains; they are formal sums of singular simplices with integer coefficients. Singular chain complex: The boundary ∂ is readily extended to act on singular n-chains. The extension, called the boundary operator, written as ∂n : Cn → Cn−1, is a homomorphism of groups.
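The alternating-sign boundary formula can be checked mechanically by representing a simplex as a vertex tuple and a formal sum as a coefficient dictionary; a minimal sketch (the helper names are illustrative):

```python
def boundary(simplex):
    """Boundary of a simplex named by its vertex tuple: the alternating
    formal sum of its faces, returned as a dict face -> coefficient."""
    chain = {}
    for k in range(len(simplex)):
        face = simplex[:k] + simplex[k + 1:]          # drop the k-th vertex
        chain[face] = chain.get(face, 0) + (-1) ** k  # alternating sign
    return chain

def boundary_of_chain(chain):
    """Extend the boundary linearly to formal sums, dropping zero terms."""
    out = {}
    for simplex, coeff in chain.items():
        for face, sign in boundary(simplex).items():
            out[face] = out.get(face, 0) + coeff * sign
    return {f: c for f, c in out.items() if c != 0}

# The boundary of the curve [p0, p1] is the formal difference [p1] - [p0]:
assert boundary(("p0", "p1")) == {("p1",): 1, ("p0",): -1}
# And the boundary of a boundary vanishes, the key fact behind d o d = 0:
assert boundary_of_chain(boundary(("p0", "p1", "p2"))) == {}
```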
The boundary operator, together with the Cn, forms a chain complex of abelian groups, called the singular complex. It is often denoted as (C∙(X), ∂∙) or more simply C∙(X). The kernel of the boundary operator is Zn(X) = ker(∂n), and is called the group of singular n-cycles. The image of the boundary operator is Bn(X) = im(∂n+1), and is called the group of singular n-boundaries. Singular chain complex: It can also be shown that ∂n ∘ ∂n+1 = 0, implying Bn(X) ⊆ Zn(X). The n-th homology group of X is then defined as the factor group Hn(X) = Zn(X)/Bn(X). The elements of Hn(X) are called homology classes. Homotopy invariance: If X and Y are two topological spaces with the same homotopy type (i.e. are homotopy equivalent), then Hn(X) ≅ Hn(Y) for all n ≥ 0. This means homology groups are homotopy invariants, and therefore topological invariants. In particular, if X is a connected contractible space, then all its homology groups are 0, except H0(X) ≅ Z. A proof of the homotopy invariance of singular homology groups can be sketched as follows. A continuous map f: X → Y induces a homomorphism f♯ : Cn(X) → Cn(Y). It can be verified immediately that ∂f♯ = f♯∂, i.e. f♯ is a chain map, which descends to homomorphisms on homology f∗ : Hn(X) → Hn(Y). We now show that if f and g are homotopically equivalent, then f∗ = g∗. From this it follows that if f is a homotopy equivalence, then f∗ is an isomorphism. Let F : X × [0, 1] → Y be a homotopy that takes f to g. On the level of chains, define a homomorphism P : Cn(X) → Cn+1(Y) that, geometrically speaking, takes a basis element σ: Δn → X of Cn(X) to the "prism" P(σ): Δn × I → Y. The boundary of P(σ) can be expressed as ∂P(σ) = f♯(σ) − g♯(σ) − P(∂σ). So if α in Cn(X) is an n-cycle, then f♯(α) and g♯(α) differ by a boundary: f♯(α) − g♯(α) = ∂P(α), i.e. they are homologous. This proves the claim.
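Over the rationals, the definition Hn = Zn/Bn reduces to linear algebra: the n-th Betti number is bn = dim ker ∂n − rank ∂n+1. A minimal sketch (the helper `betti_numbers` is an assumption; integer torsion, which would need Smith normal form, is ignored here):

```python
import numpy as np

def betti_numbers(c0_dim, boundaries):
    """Betti numbers b_n = dim ker(d_n) - rank(d_{n+1}) over the rationals.

    c0_dim          : rank of C_0
    boundaries[n-1] : matrix of d_n : C_n -> C_{n-1}; maps beyond the
                      list (and d_0) are treated as zero maps.
    """
    ranks = [np.linalg.matrix_rank(m) for m in boundaries]
    dims = [c0_dim] + [m.shape[1] for m in boundaries]
    betti = []
    for n in range(len(dims)):
        ker = dims[n] - (ranks[n - 1] if n >= 1 else 0)  # dim ker d_n
        img = ranks[n] if n < len(ranks) else 0          # rank d_{n+1}
        betti.append(ker - img)
    return betti

# A circle modelled as a triangle's edges: 3 vertices, 3 edges,
# d_1 sends the edge [v_i, v_j] to v_j - v_i (columns are edges).
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])
assert betti_numbers(3, [d1]) == [1, 1]   # one component, one 1-dim hole
```

The simplicial model is used only because its chain groups are finite-rank; the singular chain groups themselves are far too large to put in a matrix, which is exactly why one computes homology through such models.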
Homology groups of common spaces: The table below shows the k-th homology groups Hk(X) of n-dimensional real projective spaces RPn, complex projective spaces CPn, a point, spheres Sn (n ≥ 1), and a 3-torus T3 with integer coefficients. Functoriality: The construction above can be defined for any topological space, and is preserved by the action of continuous maps. This generality implies that singular homology theory can be recast in the language of category theory. In particular, the homology group can be understood to be a functor from the category of topological spaces Top to the category of abelian groups Ab. Functoriality: Consider first that X ↦ Cn(X) is a map from topological spaces to free abelian groups. This suggests that Cn(X) might be taken to be a functor, provided one can understand its action on the morphisms of Top. Now, the morphisms of Top are continuous functions, so if f : X → Y is a continuous map of topological spaces, it can be extended to a homomorphism of groups f∗ : Cn(X) → Cn(Y) by defining f∗(∑i ai σi) = ∑i ai (f ∘ σi), where σi : Δn → X is a singular simplex, and ∑i ai σi is a singular n-chain, that is, an element of Cn(X). This shows that Cn is a functor Cn : Top → Ab from the category of topological spaces to the category of abelian groups. Functoriality: The boundary operator commutes with continuous maps, so that ∂n f∗ = f∗ ∂n. This allows the entire chain complex to be treated as a functor. In particular, this shows that the map X ↦ Hn(X) is a functor Hn : Top → Ab from the category of topological spaces to the category of abelian groups. By the homotopy axiom, one has that Hn is also a functor, called the homology functor, acting on hTop, the quotient homotopy category: Hn : hTop → Ab. Functoriality: This distinguishes singular homology from other homology theories, wherein Hn is still a functor, but is not necessarily defined on all of Top.
In some sense, singular homology is the "largest" homology theory, in that every homology theory on a subcategory of Top agrees with singular homology on that subcategory. On the other hand, the singular homology does not have the cleanest categorical properties; such a cleanup motivates the development of other homology theories such as cellular homology. Functoriality: More generally, the homology functor is defined axiomatically, as a functor on an abelian category, or, alternately, as a functor on chain complexes, satisfying axioms that require a boundary morphism that turns short exact sequences into long exact sequences. In the case of singular homology, the homology functor may be factored into two pieces, a topological piece and an algebraic piece. The topological piece is given by C∙:Top→Comp which maps topological spaces as X↦(C∙(X),∂∙) and continuous functions as f↦f∗ . Here, then, C∙ is understood to be the singular chain functor, which maps topological spaces to the category of chain complexes Comp (or Kom). The category of chain complexes has chain complexes as its objects, and chain maps as its morphisms. Functoriality: The second, algebraic part is the homology functor Hn:Comp→Ab which maps C∙↦Hn(C∙)=Zn(C∙)/Bn(C∙) and takes chain maps to maps of abelian groups. It is this homology functor that may be defined axiomatically, so that it stands on its own as a functor on the category of chain complexes. Homotopy maps re-enter the picture by defining homotopically equivalent chain maps. Thus, one may define the quotient category hComp or K, the homotopy category of chain complexes. Coefficients in R: Given any unital ring R, the set of singular n-simplices on a topological space can be taken to be the generators of a free R-module. That is, rather than performing the above constructions from the starting point of free abelian groups, one instead uses free R-modules in their place. All of the constructions go through with little or no change. 
The result of this is Hn(X; R), which is now an R-module. Of course, it is usually not a free module. The usual homology group is regained by noting that Hn(X; Z) = Hn(X) when one takes the ring to be the ring of integers. The notation Hn(X; R) should not be confused with the nearly identical notation Hn(X, A), which denotes the relative homology (below). Coefficients in R: The universal coefficient theorem provides a mechanism to calculate the homology with R coefficients in terms of homology with usual integer coefficients, using the short exact sequence 0 ⟶ Hn(X; Z) ⊗ R ⟶ Hn(X; R) ⟶ Tor1(Hn−1(X; Z), R) ⟶ 0, where Tor is the Tor functor. Of note, if R is torsion-free, then Tor1(G, R) = 0 for any G, so the above short exact sequence reduces to an isomorphism between Hn(X; Z) ⊗ R and Hn(X; R). Relative homology: For a subspace A ⊂ X, the relative homology Hn(X, A) is understood to be the homology of the quotient of the chain complexes, that is, Hn(X, A) = Hn(C∙(X)/C∙(A)), where the quotient of chain complexes is given by the short exact sequence 0 ⟶ C∙(A) ⟶ C∙(X) ⟶ C∙(X)/C∙(A) ⟶ 0. Reduced homology: The reduced homology of a space X, annotated as H~n(X), is a minor modification to the usual homology which simplifies expressions of some relationships and fulfils the intuition that all homology groups of a point should be zero. Reduced homology: For the usual homology defined on a chain complex: ⋯ ⟶∂n+1 Cn ⟶∂n Cn−1 ⟶∂n−1 ⋯ ⟶∂2 C1 ⟶∂1 C0 ⟶∂0 0. To define the reduced homology, we augment the chain complex with an additional Z between C0 and zero: ⋯ ⟶∂n+1 Cn ⟶∂n Cn−1 ⟶∂n−1 ⋯ ⟶∂2 C1 ⟶∂1 C0 ⟶ε Z ⟶ 0, where ε(∑i ni σi) = ∑i ni. This can be justified by interpreting the empty set as a "(−1)-simplex", which means that C−1 ≃ Z. The reduced homology groups are now defined by H~n(X) = ker(∂n)/im(∂n+1) for positive n, and H~0(X) = ker(ε)/im(∂1). For n > 0, Hn(X) = H~n(X), while for n = 0, H0(X) = H~0(X) ⊕ Z. Cohomology: By dualizing the homology chain complex (i.e. applying the functor Hom(−, R), R being any ring) we obtain a cochain complex with coboundary map δ.
The cohomology groups of X are defined as the homology groups of this complex; in a quip, "cohomology is the homology of the co [the dual complex]". Cohomology: The cohomology groups have a richer, or at least more familiar, algebraic structure than the homology groups. Firstly, they form a differential graded algebra as follows: the graded set of groups form a graded R-module; this can be given the structure of a graded R-algebra using the cup product; the Bockstein homomorphism β gives a differential. There are additional cohomology operations, and the cohomology algebra has additional structure mod p (as before, the mod p cohomology is the cohomology of the mod p cochain complex, not the mod p reduction of the cohomology), notably the Steenrod algebra structure. Betti homology and cohomology: Since the number of homology theories has become large (see Category:Homology theory), the terms Betti homology and Betti cohomology are sometimes applied (particularly by authors writing on algebraic geometry) to the singular theory, as giving rise to the Betti numbers of the most familiar spaces such as simplicial complexes and closed manifolds. Extraordinary homology: If one defines a homology theory axiomatically (via the Eilenberg–Steenrod axioms), and then relaxes one of the axioms (the dimension axiom), one obtains a generalized theory, called an extraordinary homology theory. These originally arose in the form of extraordinary cohomology theories, namely K-theory and cobordism theory. In this context, singular homology is referred to as ordinary homology.
**Kramers' theorem** Kramers' theorem: In quantum mechanics, the Kramers' degeneracy theorem states that for every energy eigenstate of a time-reversal symmetric system with half-integer total spin, there is another eigenstate with the same energy related by time reversal. In other words, the degeneracy of every energy level is an even number if it has half-integer spin. The theorem is named after Dutch physicist H. A. Kramers. Kramers' theorem: In theoretical physics, time reversal symmetry is the symmetry of physical laws under a time reversal transformation: T: t ↦ −t. Kramers' theorem: If the Hamiltonian operator commutes with the time-reversal operator, that is [H, T] = 0, then, for every energy eigenstate |n⟩, the time-reversed state T|n⟩ is also an eigenstate with the same energy. These two states are sometimes called a Kramers pair. In general, this time-reversed state may be identical to the original one, but that is not possible in a half-integer spin system: since time reversal reverses all angular momenta, reversing a half-integer spin cannot yield the same state (the magnetic quantum number is never zero). Mathematical statement and proof: In quantum mechanics, the time reversal operation is represented by an antiunitary operator T: H → H acting on a Hilbert space H. If it happens that T² = −1, then we have the following simple theorem: If T: H → H is an antiunitary operator acting on a Hilbert space H satisfying T² = −1 and v is a vector in H, then Tv is orthogonal to v. Proof: By the definition of an antiunitary operator, ⟨Tu, Tw⟩ = ⟨w, u⟩, where u and w are vectors in H.
Replacing u = Tv and w = v, and using that T² = −1, we get −⟨v, Tv⟩ = ⟨T²v, Tv⟩ = ⟨v, Tv⟩, which implies that ⟨v, Tv⟩ = 0. Consequently, if a Hamiltonian H is time-reversal symmetric, i.e. it commutes with T, then all its energy eigenspaces have even degeneracy, since applying T to an arbitrary energy eigenstate |n⟩ gives another energy eigenstate T|n⟩ that is orthogonal to the first one. The orthogonality property is crucial, as it means that the two eigenstates |n⟩ and T|n⟩ represent different physical states. If, on the contrary, they were the same physical state, then T|n⟩ = e^{iα}|n⟩ for an angle α ∈ R, which would imply T²|n⟩ = T(e^{iα}|n⟩) = e^{−iα}T|n⟩ = e^{−iα}e^{iα}|n⟩ = +|n⟩, contradicting T² = −1. To complete the Kramers degeneracy theorem, we just need to prove that the time-reversal operator T acting on a half-odd-integer spin Hilbert space satisfies T² = −1. This follows from the fact that the spin operator S represents a type of angular momentum and, as such, should reverse direction under T: S → T⁻¹ST = −S. Mathematical statement and proof: Concretely, an operator T that has this property is usually written as T = e^{−iπSy}K, where Sy is the spin operator in the y direction and K is the complex conjugation map in the Sz spin basis. Since iSy has real matrix components in the Sz basis, T² = e^{−iπSy}K e^{−iπSy}K = e^{−i2πSy}K² = (−1)^{2S}. Hence, for half-odd-integer spins S = 1/2, 3/2, …, we have T² = −1. This is the same minus sign that appears when one does a full 2π rotation on systems with half-odd-integer spins, such as fermions.
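For spin 1/2 this can be checked numerically: with Sy = σy/2, the operator e^(−iπSy) works out to the real matrix [[0, −1], [1, 0]], and both T² = −1 and the orthogonality of a Kramers pair follow. A small illustrative sketch (numpy; the name `time_reversal` is an assumption):

```python
import numpy as np

# For spin 1/2, S_y = sigma_y / 2 and exp(-i*pi*S_y) = -i*sigma_y, which is
# the real matrix below; K is complex conjugation in the S_z basis.
U = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def time_reversal(v):
    """T v = exp(-i*pi*S_y) K v for a spin-1/2 state vector v."""
    return U @ np.conj(v)

v = np.array([0.6 + 0.3j, 0.2 - 0.7j])   # an arbitrary (unnormalised) state
Tv = time_reversal(v)
assert np.allclose(time_reversal(Tv), -v)   # T^2 = -1
assert np.isclose(np.vdot(v, Tv), 0.0)      # the Kramers pair is orthogonal
```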
Consequences: The energy levels of a system with an odd total number of fermions (such as electrons, protons and neutrons) remain at least doubly degenerate in the presence of purely electric fields (i.e. no external magnetic fields). It was first discovered in 1930 by H. A. Kramers as a consequence of the Breit equation. As shown by Eugene Wigner in 1932, it is a consequence of the time reversal invariance of electric fields, and follows from an application of the antiunitary T-operator to the wavefunction of an odd number of fermions. The theorem is valid for any configuration of static or time-varying electric fields. Consequences: For example, the hydrogen (H) atom contains one proton and one electron, so that the Kramers theorem does not apply. Indeed, the lowest (hyperfine) energy level of H is nondegenerate, although a generic system might have degeneracy for other reasons. The deuterium (D) isotope on the other hand contains an extra neutron, so that the total number of fermions is three, and the theorem does apply. The ground state of D contains two hyperfine components, which are twofold and fourfold degenerate.
**Special use airspace** Special use airspace: Special use airspace (SUA) is an area designated for operations of a nature such that limitations may be imposed on aircraft not participating in those operations. Often these operations are of a military nature. The designation of SUAs identifies for other users the areas where such activity occurs, provides for segregation of that activity from other users, and allows charting to keep airspace users informed of potential hazards. Most SUAs are depicted on aeronautical charts, and the FAA maintains a page showing the current status of most SUAs. Special use airspace: Special use airspace includes: restricted airspace, prohibited airspace, military operations areas (MOA), warning areas, alert areas, temporary flight restrictions (TFR), national security areas, and controlled firing areas, typically up to FL180, or 18,000 ft above sea level. In addition, there is often an Air Traffic Control Assigned Airspace (ATCAA) from FL180 through FL600 in which ATC plans for military operations. ATCAAs are generally not depicted on charts because flights above FL180 are subject to mandatory instrument flight rules operation, requiring continuous contact with ATC. Alert areas may contain a high volume of pilot training or an unusual type of aerial activity. MOAs (located over land), warning areas (located over domestic or international waters, or both), and training routes contain high volumes of military activity. Special use airspace: Flights within restricted areas are allowed only with specific FAA clearance and may be subject to restrictions, while in prohibited areas flights are forbidden except in emergency situations. Flying in MOAs or warning areas is allowed for non-military aircraft without clearance, but can be hazardous.
**Iodine-123** Iodine-123: Iodine-123 (123I) is a radioactive isotope of iodine used in nuclear medicine imaging, including single-photon emission computed tomography (SPECT) or SPECT/CT exams. The isotope's half-life is 13.2230 hours; the decay by electron capture to tellurium-123 emits gamma radiation with a predominant energy of 159 keV (this is the gamma primarily used for imaging). In medical applications, the radiation is detected by a gamma camera. The isotope is typically applied as iodide-123, the anionic form. Production: Iodine-123 is produced in a cyclotron by proton irradiation of xenon in a capsule. Xenon-124 absorbs a proton and immediately loses a neutron and proton to form xenon-123, or else loses two neutrons to form caesium-123, which decays to xenon-123. The xenon-123 formed by either route then decays to iodine-123, and is trapped on the inner wall of the irradiation capsule under refrigeration, then eluted with sodium hydroxide in a halogen disproportionation reaction, similar to collection of iodine-125 after it is formed from xenon by neutron irradiation (see article on 125I for more details). Production:
124Xe (p,pn) 123Xe → 123I
124Xe (p,2n) 123Cs → 123Xe → 123I
Iodine-123 is usually supplied as [123I]-sodium iodide in 0.1 M sodium hydroxide solution, at 99.8% isotopic purity. 123I for medical applications has also been produced at Oak Ridge National Laboratory by proton cyclotron bombardment of 80% isotopically enriched tellurium-123:
123Te (p,n) 123I
Decay: The detailed decay mechanism is electron capture (EC) to form an excited state of the nearly-stable nuclide tellurium-123 (its half-life is so long that it is considered stable for all practical purposes).
This excited state of 123Te produced is not the metastable nuclear isomer 123mTe (the decay of 123I does not involve enough energy to produce 123mTe), but rather is a lower-energy nuclear isomer of 123Te that immediately gamma decays to ground state 123Te at the energies noted, or else (13% of the time) decays by internal conversion electron emission (127 keV), followed by an average of 11 Auger electrons emitted at very low energies (50-500 eV). The latter decay channel also produces ground-state 123Te. Especially because of the internal conversion decay channel, 123I is not an absolutely pure gamma-emitter, although it is sometimes clinically assumed to be one.The Auger electrons from the radioisotope have been found in one study to do little cellular damage, unless the radionuclide is directly incorporated chemically into cellular DNA, which is not the case for present radiopharmaceuticals which use 123I as the radioactive label nuclide. The damage from the more penetrating gamma radiation and 127 keV internal conversion electron radiation from the initial decay of 123Te is moderated by the relatively short half-life of the isotope. Medical applications: 123I is the most suitable isotope of iodine for the diagnostic study of thyroid diseases. The half-life of approximately 13.2 hours is ideal for the 24-hour iodine uptake test and 123I has other advantages for diagnostic imaging thyroid tissue and thyroid cancer metastasis. The energy of the photon, 159 keV, is ideal for the NaI (sodium iodide) crystal detector of current gamma cameras and also for the pinhole collimators. It has much greater photon flux than 131I. It gives approximately 20 times the counting rate of 131I for the same administered dose, while the radiation burden to the thyroid is far less (1%) than that of 131I. Moreover, scanning a thyroid remnant or metastasis with 123I does not cause "stunning" of the tissue (with loss of uptake), because of the low radiation burden of this isotope. 
For the same reasons, 123I is never used for thyroid cancer or Graves' disease treatment; that role is reserved for 131I. Medical applications: 123I is supplied as sodium iodide (NaI), sometimes in basic solution in which it has been dissolved as the free element. This is administered to a patient by ingestion in capsule form, by intravenous injection, or (less commonly, due to the problems involved in a spill) in a drink. The iodine is taken up by the thyroid gland, and a gamma camera is used to obtain functional images of the thyroid for diagnosis. Quantitative measurements of the thyroid can be performed to calculate the iodine uptake (absorption) for the diagnosis of hyperthyroidism and hypothyroidism. Medical applications: Dosing can vary; 7.5–25 megabecquerels (200–680 μCi) is recommended for thyroid and total-body imaging, while an uptake test may use 3.7–11.1 MBq (100–300 μCi). One study indicates that a given dose can effectively produce the effects of a higher dose, owing to impurities in the preparation. The dose of radioiodine 123I is typically tolerated by individuals who cannot tolerate contrast media containing larger concentrations of stable iodine, such as those used in CT scan, intravenous pyelogram (IVP) and similar diagnostic imaging procedures. Iodine is not an allergen. Medical applications: 123I is also used as a label in other imaging radiopharmaceuticals, e.g. metaiodobenzylguanidine (MIBG) and ioflupane. Precautions: Removal of radioiodine contamination can be difficult, and the use of a decontaminant made specifically for radioactive iodine removal is advised. Two common products designed for institutional use are Bind-It and I-Bind. General-purpose radioactive decontamination products are often unusable for iodine, as they may only spread or volatilize it.
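As an illustration of the 13.2230-hour half-life quoted earlier, the exponential decay law N(t)/N0 = 2^(−t/T½) gives the fraction of activity remaining at the 24-hour uptake test; a minimal sketch:

```python
import math

HALF_LIFE_HOURS = 13.2230  # half-life of iodine-123 quoted above

def remaining_fraction(t_hours):
    """Fraction of 123I activity left after t hours: N(t)/N0 = 2**(-t/T_half)."""
    return 2.0 ** (-t_hours / HALF_LIFE_HOURS)

# One half-life leaves exactly half the activity.
assert math.isclose(remaining_fraction(HALF_LIFE_HOURS), 0.5)
# Roughly 28% of the administered activity remains at the 24-hour uptake scan.
assert 0.27 < remaining_fraction(24.0) < 0.30
```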
**Hygroscopy** Hygroscopy: Hygroscopy is the phenomenon of attracting and holding water molecules, via either absorption or adsorption, from the surrounding environment, which is usually at normal or room temperature. If water molecules become suspended among the substance's molecules, adsorbing substances can become physically changed, e.g., changing in volume, boiling point, viscosity or some other physical characteristic or property of the substance. For example, a finely dispersed hygroscopic powder, such as a salt, may become clumpy over time due to collection of moisture from the surrounding environment. Hygroscopy: Deliquescent materials are sufficiently hygroscopic that they absorb so much water that they become liquid and form an aqueous solution. Hygroscopy is essential for many plant and animal species' attainment of hydration, nutrition, reproduction and/or seed dispersal. Biological evolution created hygroscopic solutions for water harvesting, filament tensile strength, bonding and passive motion, natural solutions now being considered for future biomimetics. Etymology and pronunciation: The word hygroscopy uses combining forms of hygro- and -scopy. Unlike any other -scopy word, it no longer refers to a viewing or imaging mode. It did begin that way, with the word hygroscope referring in the 1790s to measuring devices for humidity level. These hygroscopes used materials, such as certain animal hairs, that appreciably changed shape and size when they became damp. Such materials were then said to be hygroscopic because they were suitable for making a hygroscope. Eventually, though, the word hygroscope ceased to be used for any such instrument in modern usage. But the word hygroscopic (tending to retain moisture) lived on, and thus also hygroscopy (the ability to do so). Nowadays an instrument for measuring humidity is called a hygrometer (hygro- + -meter). History: Early hygroscopy literature began circa 1880.
Studies by Victor Jodin (Annales Agronomiques, October 1897) focused on the biological properties of hygroscopicity. He noted that pea seeds, both living and dead (without germinative capacity), responded similarly to atmospheric humidity, their weight increasing or decreasing in relation to hygrometric variation. History: Marcellin Berthelot viewed hygroscopicity from the physical side, as a physico-chemical process. Berthelot's principle of reversibility, briefly, that water dried from plant tissue could be restored hygroscopically, was published in "Recherches sur la desiccation des plantes et des tissues végétaux; conditions d'équilibre et de réversibilité" (Annales de Chimie et de Physique, April 1903). Léo Errera viewed hygroscopicity from the perspectives of the physicist and the chemist. His memoir "Sur l'Hygroscopicité comme cause de l'action physiologique à distance" (Recueil de l'Institut Botanique Léo Errera, Université de Bruxelles, tome vi., 1906) provided a definition of hygroscopy that remains valid to this day. Hygroscopy is "exhibited in the most comprehensive sense, as displayed (a) in the condensation of the water-vapour of the air on the cold surface of a glass; (b) in the capillarity of hair, wool, cotton, wood shavings, etc.; (c) in the imbibition of water from the air by gelatine; (d) in the deliquescence of common salt; (e) in the absorption of water from the air by concentrated sulphuric acid; (f) in the behaviour of quicklime".
Overview: Hygroscopic substances include cellulose fibers (such as cotton and paper), sugar, caramel, honey, glycerol, ethanol, wood, methanol, sulfuric acid, many fertilizer chemicals, many salts (like calcium chloride) and bases (like sodium hydroxide), and a wide variety of other substances. If a compound dissolves in water, then it is considered to be hydrophilic. Zinc chloride and calcium chloride, as well as potassium hydroxide and sodium hydroxide (and many different salts), are so hygroscopic that they readily dissolve in the water they absorb: this property is called deliquescence. Not only is sulfuric acid hygroscopic in concentrated form, but its solutions are hygroscopic down to concentrations of 10% v/v or below. A hygroscopic material will tend to become damp and cakey when exposed to moist air (such as the salt inside salt shakers during humid weather). Overview: Because of their affinity for atmospheric moisture, desirable hygroscopic materials might require storage in sealed containers. Some hygroscopic materials, e.g., sea salt and sulfates, occur naturally in the atmosphere and serve as cloud seeds, cloud condensation nuclei (CCNs). Being hygroscopic, their microscopic particles provide an attractive surface for moisture vapour to condense and form droplets. Modern-day human cloud seeding efforts began in 1946. When added to foods or other materials for the express purpose of maintaining moisture content, hygroscopic materials are known as humectants. Overview: Materials and compounds exhibit different hygroscopic properties, and this difference can lead to detrimental effects, such as stress concentration in composite materials.
The volume of a particular material or compound is affected by ambient moisture and may be considered its coefficient of hygroscopic expansion (CHE) (also referred to as CME, or coefficient of moisture expansion) or the coefficient of hygroscopic contraction (CHC)—the difference between the two terms being a difference in sign convention. Overview: Differences in hygroscopy can be observed in plastic-laminated paperback book covers—often, in a suddenly moist environment, the book cover will curl away from the rest of the book. The unlaminated side of the cover absorbs more moisture than the laminated side and increases in area, causing a stress that curls the cover toward the laminated side. This is similar to the function of a thermostat's bimetallic strip. Inexpensive dial-type hygrometers make use of this principle using a coiled strip. Deliquescence is the process by which a substance absorbs moisture from the atmosphere until it dissolves in the absorbed water and forms a solution. Deliquescence occurs when the vapour pressure of the solution that is formed is less than the partial pressure of water vapour in the air. Overview: While some similar forces are at work here, it is different from capillary attraction, a process where glass or other solid substances attract water, but are not changed in the process (e.g., water molecules do not become suspended between the glass molecules). Deliquescence: Deliquescence, like hygroscopy, is also characterized by a strong affinity for water and tendency to absorb moisture from the atmosphere if exposed to it. Unlike hygroscopy, however, deliquescence involves absorbing sufficient water to form an aqueous solution. Most deliquescent materials are salts, including calcium chloride, magnesium chloride, zinc chloride, ferric chloride, carnallite, potassium carbonate, potassium phosphate, ferric ammonium citrate, ammonium nitrate, potassium hydroxide, and sodium hydroxide. 
Owing to their very high affinity for water, these substances are often used as desiccants, which is also an application for concentrated sulfuric and phosphoric acids. Some deliquescent compounds are used in the chemical industry to remove water produced by chemical reactions (see drying tube). Biology: Hygroscopy appears in both plant and animal kingdoms, the latter benefiting via hydration and nutrition. Some amphibian species secrete a hygroscopic mucus that harvests moisture from the air. Orb web building spiders produce hygroscopic secretions that preserve the stickiness and adhesion force of their webs. One aquatic reptile species is able to travel beyond aquatic limitations, onto land, due to its hygroscopic integument. Biology: Plants benefit from hygroscopy via hydration and reproduction, as demonstrated by examples of convergent evolution. Hygroscopic movement (hygrometrically activated movement) is integral in fertilization, seed/spore release, dispersal and germination. The phrase "hygroscopic movement" originated in 1904's "Vorlesungen Über Pflanzenphysiologie", translated in 1907 as "Lectures on Plant Physiology" (Ludwig Jost and R.J. Harvey Gibson, Oxford, 1907). When movement becomes larger scale, affected plant tissues are colloquially termed hygromorphs. Hygromorphy is a common mechanism of seed dispersal, as the movement of dead tissues responds to hygrometric variation, e.g. spore release from the fertile margins of Onoclea sensibilis. Movement occurs when plant tissue matures, dies and desiccates, its cell walls drying and shrinking, and also when humidity re-hydrates plant tissue, its cell walls swelling and expanding. The direction of the resulting force depends upon the architecture of the tissue and is capable of producing bending, twisting or coiling movements. Biology: Hygroscopic hydration examples Air plants (Tillandsia spp.) are epiphytes that use their degenerated, non-nutritive roots to anchor upon rocks or other plants.
Hygroscopic leaves absorb their necessary moisture from humidity in the air. The collected water molecules are transported from leaf surfaces to an internal storage network via osmotic pressure, with capacity sufficient for the plant's growing requirements. The file snake (Acrochordus granulatus), from a family known as completely aquatic, has hygroscopic skin that serves as a water reservoir, retarding desiccation and allowing it to travel out of water. Another example is the sticky capture silk found in spider webs, e.g. from the orb-weaver spider (Larinioides cornutus). This spider, as is typical, coats its threads with a self-made hydrogel, an aggregate blend of glycoproteins, low molecular mass organic and inorganic compounds (LMMCs), and water. The LMMCs are hygroscopic, and thus so is the glue; its moisture-absorbing properties draw on environmental humidity to keep the capture silk soft and tacky. The waxy monkey tree frog (Phyllomedusa sauvagii) and the Australian green tree frog (Litoria caerulea) benefit from two hygroscopically-enabled hydration processes: transcutaneous uptake of condensation on their skin and reduced evaporative water loss due to the condensed water film barrier covering their skin. Condensation volume is enhanced by the hygroscopic secretions they wipe across their granular skin. Some toads use hygroscopic secretions to reduce evaporative water loss, Anaxyrus sp. being an example. The venomous secretion from its parotoid gland also includes hygroscopic glycosaminoglycans. When the toad wipes this protective secretion on its body, its skin becomes moistened by the surrounding environmental humidity, considered an aid in water balance. Red clover (Trifolium pratense), white clover (Trifolium repens), yellow bush lupine (Lupinus arboreus) and several members of the legume family have a hygroscopic hilar valve (hilum) that controls seed embryo moisture levels.
The saguaro (Carnegiea gigantea), another eudicot species, also has hygroscopic seeds shown to imbibe up to 20% atmospheric moisture, by weight. Functionally, the hilar valve allows water vapor to enter or exit to ensure viability, while blocking liquid water. If, however, humidity levels gradually rise to a high enough level, the hilar valve remains open, allowing liquid water passage for germination. Physiologically, the inner and outer epidermides have independent hilar valve control. The outer epidermis has columnar-shaped cells, annularly arranged about the hilum. These counter palisade cells, being hygroscopic, respond to external humidity by swelling and closing the hilar valve during high humidity, preventing water absorption into the seed. Conversely, they shrivel, opening the valve during low humidity, allowing the seed to expel excess moisture. The inner epidermis, inside the seed's impermeable integument, has palisade epidermis cells, a second annularly arranged hygroscopic layer attuned to the embryo's moisture level. There exists a moisture tension between inner and outer palisade cells. For the hilum to close, this moisture tension needs to exceed some minimum level (14-25% for these species). While the hilar valve is open (i.e., at low outer humidity), if the humidity suddenly increases, the moisture tension reaches that protective threshold and the hilum closes, preventing moisture (liquid water) from entering. If, however, the outer humidity rises gradually, implying suitable growing conditions, the moisture tension level doesn't immediately exceed the threshold, keeping the hilum open and enabling the gradual moisture entry necessary for imbibition. Biology: Hygroscopic-assisted propagation examples Typical of hygroscopic movement are plant tissues with "closely packed long (columnar) parallel thick-walled cells (that) respond by expanding longitudinally when exposed to humidity and shrinking when dried (Reyssat et al., 2009)".
Cell orientation, pattern structure (annular, planar, bi-layered or tri-layered) and the effects of the opposite surface's cell orientation control the hygroscopic reaction. Moisture-responsive seed encapsulations rely on valves opening when exposed to wetting or drying; discontinuous tissue structures provide such predetermined breaking points (sutures), often implemented via reduced cell wall thickness or seams within bi- or tri-layered structures. Graded distributions varying in density and/or cell orientation focus hygroscopic movement, frequently observed as biological actuators (a hinge function); e.g. pinecones (Pinus spp.), the ice plant (Aizoaceae spp.) and the wheat awn (Triticum spp.), described below. Biology: Hygroscopic bi-layered cell arrays act as a capitulum hinge in some plants, Xerochrysum bracteatum and Syngonanthus elegans being examples. The "hygroscopic bending of involucral bracts surrounding a capitulum ... contributes to flower protection and pollination" and assists dispersion by protecting delicate pappi filaments from entanglement or destruction by precipitation, e.g. Taraxacum (dandelions). In nature these involucral bracts have a diurnal rhythm. The whorl of hygroscopic bracts bends outward, exposing the capitulum (see illustration) during the day, then inward, closing it at night, as the relative humidity shifts in response to the daily temperature change. Bracts are scarious, the hinge and blade composed of "exclusively dead cells (Nishikawa et al., 2008)", allowing the hygroscopically activated bracts to function from flowering through achene dispersal. Physiologically, the bract's lower section is the source of the hinge-like function, "comprised of sclerenchyma-like abaxial (inner petal) tissue, parenchyma and adaxial epidermis (outer petal tissue)..." Bract cell wall composition is rather uniform but its cells gradually change in orientation.
The bract's hygroscopic bending is due to the differing cell orientations of its inner and outer epidermides, causing adaxial–abaxial force gradients between opposing sides that change with moisture; thus, the aggregate hygrometric force, in whorl unison, controls the capitulum's repetitive opening and closing. Some trees and shrubs in fire-prone regions evolved a dual-stage hygroscopic dispersal; an initial thermo-sensitive enabling (extreme heat or fire), then a serotinous hygroresponsive seed release. Examples are the woody fruits of Myrtaceae (e.g. Eucalyptus species plurimae, Melaleuca spp.) and Proteaceae (e.g. Hakea spp., Banksia spp., Xylomelum spp.) and the woody cones of Pinaceae (e.g. Pinus spp.) and the cypress family (Cupressaceae), e.g. the giant sequoia (Sequoiadendron giganteum). Typical in lodgepole pine (Pinus contorta), Eucalyptus, and Banksia are resin-sealed seed encapsulations that require the heat of fire to physically melt the resin, enabling serotinous seed release. Such seed encapsulations may "reduce seed loss or damage from granivores, desiccation, and fire (Moya et al., 2008; Talluto & Benkman, 2014; Lamont et al., 2016, 2020)." The similarity of dual-stage dispersal techniques between different clades, angiosperms and gymnosperms, "can be interpreted as a result of convergent evolution (e.g. Clarke et al., 2013)". Banksia attenuata, typical of Banksia spp., has a seed-bearing follicle composed of a bi-layer hygroscopic cell network. The woody follicle is thermo-sensitive, then hygroresponsive; serotinous humidity opening the ventral suture and exposing seed when germination conditions are favorable. Physiologically, the heat-sensitive follicle valves of Banksia spp. are sealed by a wax (resin) layer, released by high ambient temperatures (fire), "thereby facilitating opening (e.g. Huss et al., 2018)." The follicle mesocarp consists of high density branched fiber bundles; the endocarp, low density parallel fibers.
A suture is caused by differential hygroscopic movements between layers, their microfibril structures having a large angle disparity (microfibril angle (MFA) γ = 75–90°). Pine cone scales (Pinaceae spp.) employ a hygromorphic hinge for their seed release. Physiology involves a bi-layered structure of closely packed long parallel thick-walled cells. Fiber alignments within layers are non-uniform, varying longitudinally, producing different microfibril angles (MFAs) of 30° and 74° between layers over the span of the scale. The region of greatest MFA, the hinge knuckle, is a small region near the scale and midrib (central stem) union. In mature pine cones the outer scale layer is the controlling tissue, its long thick-walled cells responding longitudinally to environmental humidity. Distortion occurs in the knuckle region as movement of the outer layer overtakes that of the more passive inner scale layer, forcing the scale to bend or flex. The remainder of the scale is hygroscopically passive, though it amplifies apex displacement via its length and geometry; e.g. bending the scale closed with hydration or flexing it open with dehydration, releasing seed. Flowering plants of the Asteraceae family have hygroscopically-influenced dispersion, coordinating anemochory (wind dispersal) with favorable environmental conditions, common in the Asteraceae genera Erigeron, Leontodon, Senecio, Sonchus and Taraxacum. As an example, the flight-enabling pappus of the common dandelion achene undergoes binary morphing (opened or closed) of its whisker-like filaments, in unison with chorused responses of the remaining achenes. Pappus movement is controlled via a hygroscopic actuator in the apical plate, at the beak's top, the locus for all the achene's filaments. High humidity causes each pappus to close, contracting its radially patterned structure, reducing its area and the likelihood of wind current dispersal.
For any achenes that are released, flight dynamics of the reduced pappus dramatically limit dispersal range. The hygroscopic actuator's responsiveness to changes in relative humidity (RH) is predictable and repeatable; e.g. the pappi of Centaurea imperialis remain closed at ≥ 78% RH and open completely at ≤ 75% RH. During more favorable lower humidity conditions, pappi fully expand and wind current allochory is re-enabled. The orchid tree (Bauhinia variegata) depends upon hygro-responsive twisting for its dispersal. Its seed pod contains two hygroscopic sclerenchyma fibre layers, nearly orthogonal, joining at the valves. During dehiscence the large 90° microfibril angle between endocarp layers, combined with dual sided shrinkage, results in opposing helical torques that force a suture at the weakest point, the seed case valves; their opening releases seed. Some plants synchronize the opening of their mature seed capsule with active rainfall, termed hygrochasy. This dispersal technique is frequently observed in the arid regions of southern and eastern Africa, the Israeli desert, parts of North America and Somalia, and is believed to have evolved to offer higher survival rates in arid environs. Hygrochasy is commonly associated with family Aizoaceae spp., the ice plant, as > 98% of its species utilize post-wetting dehiscence; such dispersal is also observed in family Plantaginaceae with the alpine Veronica of New Zealand, evolving in the last 9 Myr. Common to all seed capsules are triangular circumferentially-arranged hygroscopic keels (valves) covering the seeds. These protective valves mechanically open only when hydrated with liquid water. Each keel (5 for Delosperma nakurense (Engl.) Herre) is composed of cellulosic lattice tissue that swells with hydration, opening within minutes. The enlarged cells force straightening of an inherent desiccated fold in the keel, the hygroscopic hinge, near the keel's union with the capsule perimeter.
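The relative-humidity thresholds quoted above for Centaurea imperialis pappi (closed at ≥ 78% RH, fully open at ≤ 75% RH) imply a simple hysteresis: between the two thresholds, the previous state persists. A minimal two-state sketch in Python (the function and its threshold defaults are illustrative, not from the source):

```python
def update_pappus_state(state: str, rh_percent: float,
                        close_at: float = 78.0, open_at: float = 75.0) -> str:
    """Toy hysteresis model of pappus opening/closing versus relative humidity.

    Thresholds follow the Centaurea imperialis figures quoted in the text:
    pappi close at >= 78% RH and open fully at <= 75% RH; in the band
    between the two thresholds the previous state is retained.
    """
    if rh_percent >= close_at:
        return "closed"
    if rh_percent <= open_at:
        return "open"
    return state  # within the 75-78% RH band, keep the previous state

# Step a pappus through a changing humidity sequence.
state = "open"
for rh in [70, 76, 80, 77, 74]:
    state = update_pappus_state(state, rh)
    print(rh, state)
```

Note that at 76-77% RH the printed state differs depending on whether the pappus was last open or closed, which is the hysteresis the quoted thresholds describe.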
Fully opened, the keel pivots over 150°, upward then backward, exposing seed compartments, one beneath each valve, separated by septa, all resting upon the capsule floor. Seeds are visible, but restrained by the cup-like ring created by the encircling keels. The final requirement for dispersal is rainfall, or sufficient moisture, to flush seed from this barrier, colloquially termed the splash cup. Seed that overflows or splashes from the cup is dispersed to the nearby ground. Any remaining seed will be preserved when the keels desiccate, hygroscopically shrink, and restore to their natural folded, closed state. The hygromorphic process is reversible and repeatable; neglected seed has subsequent dispersal opportunity via future rainfalls. The seeds of some flowering herbs and grasses have hygroscopic appendages (awns) that bend with changes in humidity, enabling them to disperse over the ground, termed herpochory. The awn will thrust (or twist) when the seed is released, its motion dependent upon plant physiology. Subsequent hygrometric changes cause movements to repeat, thrusting (or twisting), pushing the seed into the ground. Two angiosperm families have similar methods of dispersal, though the method of implementation varies within each family: Geraniaceae family examples are the common stork's-bill (Erodium cicutarium) and geraniums (Pelargonium sp.); Poaceae family, Needle-and-Thread (Hesperostipa comata) and wheat (Triticum spp.). All rely upon a bi-layered parallel fiber hygroscopic cell physiology to control the awn's movement for dispersal and self-burial of seeds. Alignment of cellulose fibrils in the awn's controlling cell wall determines direction of movement. If fiber alignments are tilted (non-parallel venation), a helix develops and awn movement becomes twisting (coiling) instead of bending; e.g.
coiling occurs in awns of Erodium and Hesperostipa. Some plants use hygroscopic movements for ballochory (self-dispersal), active ballists forcibly ejecting their seeds; e.g. species of geranium, violet, wood sorrel, witch hazel, touch-me-not (Impatiens), and acanthus. Rupturing of the Bauhinia purpurea seed pod reportedly propels its seed up to 15 metres distance. Engineering properties: Hygroscopicity is a general term used to describe a material's ability to absorb moisture from the environment. There is no standard quantitative definition of hygroscopicity, so generally the qualification of hygroscopic and non-hygroscopic is determined on a case-by-case basis. For example, pharmaceuticals that pick up more than 5% by mass, between 40 and 90% relative humidity at 25 °C, are described as hygroscopic, while materials that pick up less than 1% under the same conditions are regarded as non-hygroscopic. The amount of moisture held by hygroscopic materials is usually proportional to the relative humidity. Tables containing this information can be found in many engineering handbooks and are also available from suppliers of various materials and chemicals. Engineering properties: Hygroscopy also plays an important role in the engineering of plastic materials. Some plastics, e.g. nylon, are hygroscopic while others are not. Polymers: Many engineering polymers are hygroscopic, including nylon, ABS, polycarbonate, cellulose, carboxymethyl cellulose, and poly(methyl methacrylate) (PMMA, plexiglas, perspex). Other polymers, such as polyethylene and polystyrene, do not normally absorb much moisture, but are able to carry significant moisture on their surface when exposed to liquid water. Type-6 nylon (a polyamide) can absorb up to 9.5% of its weight in moisture. Applications in baking: The differing hygroscopic properties of substances are often exploited in baking to achieve differences in moisture content and, hence, crispiness.
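The pharmaceutical rule of thumb described above (more than 5% mass gain between 40% and 90% RH at 25 °C counts as hygroscopic, less than 1% as non-hygroscopic) can be sketched as a simple classifier. The "intermediate" label for the unnamed 1-5% band is an assumption, since the text leaves that range unclassified:

```python
def classify_hygroscopicity(mass_gain_pct: float) -> str:
    """Rule-of-thumb classification for pharmaceuticals, based on mass
    gained between 40% and 90% RH at 25 degrees C, as described in the text.
    The 'intermediate' label for the 1-5% band is an assumption.
    """
    if mass_gain_pct > 5.0:
        return "hygroscopic"
    if mass_gain_pct < 1.0:
        return "non-hygroscopic"
    return "intermediate"

# Type-6 nylon's ~9.5% moisture uptake (mentioned below) lands well
# above the 5% threshold.
print(classify_hygroscopicity(9.5))
```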
Different varieties of sugars are used in different quantities to produce a crunchy, crisp cookie (UK: biscuit) versus a soft, chewy cake. Sugars such as honey, brown sugar, and molasses are examples of sweeteners used to create moister and chewier cakes. Research: Several hygroscopic approaches to harvesting atmospheric moisture have been demonstrated and require further development to assess their potential as viable water sources. Experiments with fog collection, in select environs, duplicated the hydrophilic surfaces and hygroscopic surface wetting observed in tree frog hydration (biomimicry). Subsequent material optimizations developed artificial hydrophilic surfaces with collection rates of 25 mg H2O/(cm2 h), more than twice the collection rate of tree frogs under comparable conditions, i.e. 100% RH (relative humidity). Research: Another approach performs at lower relative humidities (15-30% RH) but also has environmental limitations; a sustainable biomass source is necessary. Super hygroscopic polymer films composed of biomass and hygroscopic salts are able to condense moisture from atmospheric humidity. By implementing rapid sorption-desorption kinetics and operating 14-24 cycles per day, this technique produced an equivalent water yield of 5.8-13.3 L kg−1 of sustainable raw materials, demonstrating the potential for low-cost, scalable atmospheric water harvesting. Hygroscopic glues are candidates for commercial development. The most common cause of synthetic glue failure at high humidity is attributed to water lubricating the contact area, impacting bond quality. Hygroscopic glues may allow more durable adhesive bonds by absorbing (pulling) interfacial environmental moisture away from the glue-substrate boundary. Integrating hygroscopic movement into smart building designs and systems is frequently mentioned, e.g. self-opening windows. Such movement is appealing, an adaptive, self-shaping response that requires no external force or energy.
However, the capabilities of current material choices are limited. Biomimetic designs of hygromorphic wood composites and hygro-actuated building systems have been modeled and evaluated. Research: Hygrometric response time, precise shape changes and durability are lacking. Most currently available hygro-actuated composites are inferior and exhibit fatigue failure well before that seen in nature, e.g. in pine cone scales, indicating that a better understanding of the plants' biological structures is needed. Materials composed of fluid-responsive active bilayer systems that can direct planned conformational hygromorphing are necessary. Current composites require undesirable trade-offs between hygromorphic response time and mechanical stability, which must also be balanced with changing environmental stimuli.
**Interruption (speech)** Interruption (speech): An interruption is a speech action when one person breaks in to interject while another person is talking. Linguists, social psychologists, anthropologists, and sociologists are among the social scientists who have studied and identified patterns of interruption that may differ by gender, social status, race/ethnicity, culture, and political orientation. Turn-taking and overlaps: Harvey Sacks, the sociologist who launched the field of conversation analysis, worked with linguist Emanuel Schegloff and Gail Jefferson in the 1970s to analyze how turn-taking was organized in speech events such as everyday conversations. Speech events are organized so that only one person speaks at a time and to provide for orderly ways to change speakers. Sacks et al. thought that the process of turn-taking is subconscious. Overlaps occur when two or more speakers talk simultaneously. Types of interruptions: Communication analyst Julia A. Goldberg uses conversation analysis to define three types of conversational interruptions. Relationally neutral interruptions are interjections by the listener that seek to repair, repeat, or clarify something the speaker just said. During this type of interruption, the interrupter does not intend to exert power over the speaker, or to establish rapport with the speaker. The act of interruption itself is understood as neutral in this instance. Another type of interruption defined by Goldberg is the power interruption, where the interrupter breaks in and cuts off the speaker as a way to display some social power. Power interruptions are understood as acts of conflict and competition, and are viewed as rude, hostile, disrespectful, and/or uncaring about the speaker and/or what the speaker is saying.
A rapport interruption is designed to display mutuality and generally conveys the impression that the interrupter understands and empathizes with the speaker and/or the content of the speech, and is interpreted as collaborative and cooperative. Power interruptions are also analyzed by Zimmerman and West, sociologists who note that the people who seek to be socially dominant exert their power over others through interrupting their speech. Zimmerman and West also analyzed how sex roles shape interruption patterns. Types of interruptions: Why participants interrupt each other is a question that sociolinguists, and sometimes psychologists, investigate. A speaker may begin to say something only to have someone else interrupt, either to finish the sentence for them or to hold the floor and introduce another idea without giving others an opportunity to finish what they want to say. This can be frustrating even when the first speaker's sentence or thought goes along with the interrupter's. Gender and interruption patterns: Since the late 1970s, social scientists have studied the effect gender has on interruption patterns and other components of verbal communication. The findings of these studies are mixed, with some finding gender differences, while others did not. Among those that found gender differences are sociologists Don Zimmerman and Candace West, who used male dominance theory to claim that men interrupted women to assert their social dominance over women. Zimmerman and West's work discovered that interruptions were more evenly distributed in conversations involving same-sex speakers, while in cross-sex interactions, men were much more likely to interrupt women. Zhao and Gantz analyzed fictional TV shows to claim that male characters used disruptive interruptions more than female characters, while female characters more often used cooperative interruptions.
They note, however, that the apparent gender differences in interruption patterns are affected by differences in social status among the TV characters. Goldberg notes that when conversational context and content are analyzed, interruptions may be seen as power displays, rapport displays, or as neutral acts that may or may not be shaped by the gender of the speaker. Linguist Makri-Tsilipakou discovered that men and women use "simultaneous speech" at about the same rate, but the sexes differ as to their interpretation of the meaning of the interruption. Women use simultaneous speech as a sign of support and agreement, while men use it either as support for the other's speech or to dissent from other speakers or from their viewpoint. Drass, a social psychologist, found that gender identity, as separate from biological sex, was an important variable, with persons who were more male-identified being more likely to interrupt than persons who were more female-identified. Gender and interruption patterns: Conversely, a study by Murray and Covelli used Zimmerman and West's coding strategies on their own dataset of conversations to find that women interrupted men more often than men interrupted women. According to James and Clarke, this pattern is especially evident in conversational situations where women felt more expertise, and thus may have felt that their interruptions were more legitimate. Gender and interruption patterns: Manterrupting The term manterrupting was coined in early 2015 by Jessica Bennett in an article that appeared in Time. Bennett defines the term as "[u]nnecessary interruption of a woman by a man." During the 2016 American presidential debates, the term was applied to candidate Donald Trump, who interrupted Hillary Clinton dozens of times during the first and second debates. Status and interruption patterns: Interruptions work as a status-organizing cue. 
In other words, conversational participants use cues such as perceptions of prestige, power, social class, gender, race and age, to organize small-group hierarchies. Interruption patterns differ by social status, with persons of higher social status, such as belonging to a social group who has more prestige or power, interrupting persons with lower status. Jacobi and Schweers analyzed transcripts of oral arguments made before the U.S. Supreme Court to find that senior justices interrupted their junior colleagues more frequently than the reverse. Kollock et al. studied conversations among couples, including male couples, female couples, and mixed sex couples. They found that partners who were considered to have more social power interrupted their partners more often, regardless of the gender composition of the dyad. In TV shows, characters who are lower in the status hierarchy are scripted to display a "sense of defiance" that allows them to interrupt more aggressively than persons who hold a mid-level status. A study of interviews between physicians and patients found that physicians, who are considered to hold higher status than their patients in terms of prestige, are much more likely to interrupt their patients, regardless of the sex of the patient or the physician. Patients interrupted senior physicians at a lower rate than they interrupted doctors who were in training, indicating that the senior physicians are regarded as having a higher status than their junior colleagues. In contrast, a study of physician-patient interactions among six different statuses, from low to high, indicated that patients tended to interrupt physicians more than the reverse, and that high and low status physicians did not differ in the number of times that they interrupted their patients. 
This study, by Irish and Hall, noted that status thus appears to be less of an indicator of the likelihood of interruptions among physicians and patients. In addition to social status affecting interruption patterns, interruptions also affect social status. In a study of mixed-sex and same-sex dyads, Farley discovered that the interrupters gained social status after they interrupted, while those who were interrupted lost social status. This study also found that people who interrupted also lost in terms of likeability. Status and interruption patterns: Of note, culture is influential in communication; participants of the same culture may share the same beliefs regarding how to act when interacting with each other. Successful interaction depends on a shared understanding of behavioral norms, and cross-cultural disparities in turn-taking are a potential source of communication problems. Race/ethnicity and interruption patterns: Don Zimmerman and Candace West also claim in their study that whites interrupt blacks as a strategy to exert their power and dominance. Cultural differences: Interruptions, and how people interpret interruptions, differ by culture and language. Makri-Tsilpakou notes that some languages and cultures have higher tolerance for simultaneous talk, and that interpretations of interruptions may differ depending on cultural context. Political orientation: Political orientation, e.g. where a person falls on the conservative to liberal political continuum, also shapes the likelihood that people will interrupt others or will be interrupted themselves. Jacobi and Schweers, in their study of transcripts of oral arguments made before the U.S. Supreme Court, found that conservative justices and advocates interrupt more often than liberals.
**Heparin-induced thrombocytopenia** Heparin-induced thrombocytopenia: Heparin-induced thrombocytopenia (HIT) is the development of thrombocytopenia (a low platelet count), due to the administration of various forms of heparin, an anticoagulant. HIT predisposes to thrombosis (the abnormal formation of blood clots inside a blood vessel). When thrombosis is identified the condition is called heparin-induced thrombocytopenia and thrombosis (HITT). HIT is caused by the formation of abnormal antibodies that activate platelets, which release microparticles that activate thrombin, leading to thrombosis. If someone receiving heparin develops new or worsening thrombosis, or if the platelet count falls, HIT can be confirmed with specific blood tests. The treatment of HIT requires stopping heparin treatment, and both protection from thrombosis and choice of an agent that will not reduce the platelet count any further. Several alternatives are available for this purpose; mainly used are danaparoid, fondaparinux, argatroban, and bivalirudin. While heparin was discovered in the 1930s, HIT was not reported until the 1960s. Signs and symptoms: Heparin may be used for both prevention and the treatment of thrombosis. It exists in two main forms: an "unfractionated" form that can be injected under the skin (subcutaneously) or through an intravenous infusion, and a "low molecular weight" form that is generally given subcutaneously. Commonly used low molecular weight heparins are enoxaparin, dalteparin, nadroparin and tinzaparin. In HIT, the platelet count in the blood falls below the normal range, a condition called thrombocytopenia. However, it is generally not low enough to lead to an increased risk of bleeding. Most people with HIT, therefore, do not experience any symptoms.
Typically, the platelet count falls 5–14 days after heparin is first given; if someone has received heparin in the previous three months, the fall in platelet count may occur sooner, sometimes within a day. The most common symptom of HIT is enlargement or extension of a previously diagnosed blood clot, or the development of a new blood clot elsewhere in the body. This may take the form of clots either in arteries or veins, causing arterial or venous thrombosis, respectively. Examples of arterial thrombosis are stroke, myocardial infarction ("heart attack"), and acute leg ischemia. Venous thrombosis may occur in the leg or arm in the form of deep vein thrombosis (DVT) and in the lung in the form of a pulmonary embolism (PE); the latter usually originates in the leg, but migrates to the lung. In those receiving heparin through an intravenous infusion, a complex of symptoms ("systemic reaction") may occur when the infusion is started. These include fever, chills, high blood pressure, a fast heart rate, shortness of breath, and chest pain. This happens in about a quarter of people with HIT. Others may develop a skin rash consisting of red spots. Mechanism: The administration of heparin can cause the development of HIT antibodies, suggesting that heparin may act as a hapten and may thus be targeted by the immune system. In HIT, the immune system forms antibodies against heparin when it is bound to a protein called platelet factor 4 (PF4). These antibodies are usually of the IgG class and their development usually takes about five days. However, those who have been exposed to heparin in the last few months may still have circulating IgG, as IgG-type antibodies generally continue to be produced even when their precipitant has been removed. This is similar to immunity against certain microorganisms, with the difference that the HIT antibody does not persist more than three months.
HIT antibodies have been found in individuals with thrombocytopenia and thrombosis who had no prior exposure to heparin, but the majority are found in people who are receiving heparin. The IgG antibodies form a complex with heparin and PF4 in the bloodstream. The tail of the antibody then binds to the FcγIIa receptor, a protein on the surface of the platelet. This results in platelet activation and the formation of platelet microparticles, which initiate the formation of blood clots; the platelet count falls as a result, leading to thrombocytopenia. In addition, the reticuloendothelial system (mostly the spleen) removes the antibody-coated platelets, further contributing to the thrombocytopenia. Mechanism: Formation of PF4-heparin antibodies is common in people receiving heparin, but only a proportion of these develop thrombocytopenia or thrombosis. This has been referred to as an "iceberg phenomenon". Diagnosis: HIT may be suspected if blood tests show a falling platelet count in someone receiving heparin, even if the heparin has already been discontinued. Professional guidelines recommend that people receiving heparin have a complete blood count (which includes a platelet count) on a regular basis. However, not all people with a falling platelet count while receiving heparin turn out to have HIT. The timing and severity of the thrombocytopenia, the occurrence of new thrombosis, and the presence of alternative explanations all determine the likelihood that HIT is present. A commonly used score to predict the likelihood of HIT is the "4 Ts" score, introduced in 2003. A score of 0–8 points is generated; if the score is 0–3, HIT is unlikely. A score of 4–5 indicates intermediate probability, while a score of 6–8 makes it highly likely.
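The threshold mapping just described can be sketched as a small helper. This is only an illustration: the function name is ours, and computing the underlying 0–8 score from the four clinical "T" criteria is not covered here.

```python
def four_ts_category(score: int) -> str:
    """Map a 4Ts score (0-8) to a pretest probability of HIT.

    Thresholds as stated in the text: 0-3 low (HIT unlikely),
    4-5 intermediate, 6-8 high.
    """
    if not 0 <= score <= 8:
        raise ValueError("4Ts score must be between 0 and 8")
    if score <= 3:
        return "low"
    if score <= 5:
        return "intermediate"
    return "high"

print(four_ts_category(2))  # low
print(four_ts_category(7))  # high
```
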
Those with a high score may need to be treated with an alternative drug while more sensitive and specific tests for HIT are performed, whereas those with a low score can safely continue receiving heparin, as the likelihood that they have HIT is extremely low. In an analysis of the reliability of the 4Ts score, a low score had a negative predictive value of 0.998, while an intermediate score had a positive predictive value of 0.14 and a high score a positive predictive value of 0.64; intermediate and high scores, therefore, warrant further investigation. Diagnosis: The first screening test in someone suspected of having HIT is aimed at detecting antibodies against heparin-PF4 complexes. This is typically an enzyme-linked immunosorbent assay (ELISA). The ELISA test, however, detects all circulating antibodies that bind heparin-PF4 complexes, and may also falsely identify antibodies that do not cause HIT. Therefore, those with a positive ELISA are tested further with a functional assay. This test uses platelets and serum from the patient; the platelets are washed and mixed with serum and heparin. The sample is then tested for the release of serotonin, a marker of platelet activation. If this serotonin release assay (SRA) shows high serotonin release, the diagnosis of HIT is confirmed. The SRA test is difficult to perform and is usually only done in regional laboratories. If someone has been diagnosed with HIT, some recommend routine Doppler sonography of the leg veins to identify deep vein thromboses, as these are very common in HIT. Treatment: Because HIT predisposes strongly to new episodes of thrombosis, simply discontinuing the heparin administration is insufficient. Generally, an alternative anticoagulant is needed to suppress the thrombotic tendency while the generation of antibodies stops and the platelet count recovers.
To make matters more complicated, the other most commonly used anticoagulant, warfarin, should not be used in HIT until the platelet count is at least 150 × 10⁹/L, because there is a very high risk of warfarin necrosis in people with HIT who have low platelet counts. Warfarin necrosis is the development of skin gangrene in those receiving warfarin or a similar vitamin K antagonist. If the patient was receiving warfarin at the time HIT is diagnosed, the activity of warfarin is reversed with vitamin K. Transfusing platelets is discouraged, as there is a theoretical risk that this may worsen the risk of thrombosis; the platelet count is rarely low enough to be the principal cause of significant hemorrhage. Various nonheparin agents are used as alternatives to heparin therapy to provide anticoagulation in those with strongly suspected or proven HIT: danaparoid, fondaparinux, bivalirudin, and argatroban. Not all agents are available in all countries, and not all are approved for this specific use. For instance, argatroban has only recently been licensed in the United Kingdom, and danaparoid is not available in the United States. Fondaparinux, a factor Xa inhibitor, is commonly used off label for HIT treatment in the United States. According to a systematic review, people with HIT treated with lepirudin showed a relative risk reduction in clinical outcomes (death, amputation, etc.) of 0.52 and 0.42 when compared with patient controls. In addition, people treated with argatroban for HIT showed a relative risk reduction in the above clinical outcomes of 0.20 and 0.18. Lepirudin production stopped on May 31, 2012. Epidemiology: Up to 8% of patients receiving heparin are at risk of developing HIT antibodies, but only 1–5% of those on heparin will progress to HIT with thrombocytopenia, and subsequently one-third of them may develop arterial or venous thrombosis. After vascular surgery, 34% of patients receiving heparin developed HIT antibodies without clinical symptoms.
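A back-of-envelope illustration of the rates just quoted, for a purely hypothetical cohort of 100,000 heparin-treated patients (the figures below only restate the 1–5% HIT rate and the one-third thrombosis fraction from the text):

```python
cohort = 100_000                          # hypothetical heparin-treated patients
hit_rate_low, hit_rate_high = 0.01, 0.05  # 1-5% progress to HIT
thrombosis_fraction = 1 / 3               # about one-third of HIT cases thrombose

low = cohort * hit_rate_low * thrombosis_fraction
high = cohort * hit_rate_high * thrombosis_fraction
print(round(low), round(high))  # roughly 333 to 1667 expected thrombosis cases
```
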
The exact number of cases of HIT in the general population is unknown. What is known is that women receiving heparin after a recent surgical procedure, particularly cardiothoracic surgery, have a higher risk, while the risk is very low in women just before and after giving birth. Some studies have shown that HIT is less common in those receiving low molecular weight heparin. History: While heparin was introduced for clinical use in the late 1930s, new thrombosis in people treated with heparin was not described until 1957, when vascular surgeons reported the association. The fact that this phenomenon occurred together with thrombocytopenia was reported in 1969; prior to this time, platelet counts were not routinely performed. A 1973 report established HIT as a diagnosis, as well as suggesting that its features were the result of an immune process. Initially, various theories existed about the exact cause of the low platelets in HIT. Gradually, evidence accumulated on the exact underlying mechanism. In 1984–1986, John G. Kelton and colleagues at McMaster University Medical School developed the laboratory tests that could be used to confirm or exclude heparin-induced thrombocytopenia. Treatment was initially limited to aspirin and warfarin, but the 1990s saw the introduction of a number of agents that could provide anticoagulation without a risk of recurrent HIT. Older terminology distinguishes between two forms of heparin-induced thrombocytopenia: type 1 (a mild, nonimmune-mediated and self-limiting fall in platelet count) and type 2, the form described above. Currently, the term HIT is used without a modifier to describe the immune-mediated severe form. In 2021, a condition resembling HIT but without heparin exposure was described to explain unusual post-vaccination embolic and thrombotic events after the Oxford–AstraZeneca COVID-19 vaccine. It is a rare adverse event (1:1 million to 1:100,000) resulting from COVID-19 vaccines (particularly adenoviral vector vaccines).
This is also known as Thrombosis with Thrombocytopenia Syndrome or TTS.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cycling (ice hockey)** Cycling (ice hockey): In ice hockey, cycling is an offensive strategy that moves the puck along the boards in the offensive zone to create a scoring chance by making defenders tired or moving them out of position.
**Skeletochronology** Skeletochronology: Skeletochronology is a technique used to determine the chronological ages of individual vertebrates by counting lines of arrested annual growth, also known as LAGs, within skeletal tissues. Within annual bone growth, there are broad and narrow lines: broad lines represent the growth period and narrow lines represent a growth pause. These narrow lines characterise one growth year, making them suitable for determining the age of the specimen. Not all bones grow at the same rate, and the growth rate of an individual bone changes over a lifetime, so periodic growth marks can take irregular patterns; these indicate significant chronological events in an individual's life. The use of bone as a biomaterial is useful in investigating structure-property relationships. In addition to current research in skeletochronology, the ability of bone to adapt and change its structure in response to the external environment provides potential for further research in bone histomorphometry. Amphibians and reptiles are commonly age-determined using this method because they undergo discrete annual activity cycles such as winter dormancy or metamorphosis; however, it cannot be used for all species of bony animals. The different environmental and biological factors that influence bone growth and development can become a barrier to determining age, as a complete record may be rare. Method: The extraction and study of bone tissue varies depending on the taxa involved and the amount of material available. However, skeletochronology focuses best on LAGs that encircle the entire shaft in a ring form and have a regular pattern of deposition. These growths show a repeated pattern, 'described mathematically as a time series'. The tissues are sectioned using a microtome and stained with haematoxylin, then viewed under a microscope.
The analysis is frequently performed on dry bones, with the additional application of alcohol or frozen preservation if needed, as the aim is to enhance the optical contrast that results from different physical responses to light. It is important to consider potential problems when selecting particular bones to study. If there is weak optical contrast, counting the arrested growth rings is difficult and often inaccurate. There is also the possible presence of additional growth marks that are created to supplement weaker areas of growth. In these circumstances, alternative bones must be considered that may present more accurate data. Another case is the doubling of lines of arrested growth, where two closely adjacent twin lines can be seen. However, when this pattern is widespread across several age classes in a species, the twin LAGs can be counted as a single year's growth. The most common issue to arise is the destruction of bone by biological processes, most frequently in mammals and birds; this causes age to be significantly underestimated. Over the lifespan of an individual, bone is constantly being reconstructed as specialised cells remove and deposit bone, leading to a constant renewal of the bone material. The continuous resorption and deposition leaves gaps in the record of growth, and missing bone tissue can occur at any stage of a vertebrate's life cycle; 'complete specimens that allow precise identification are extremely rare'. Therefore, to account for any missing bone tissue in a specimen, retrocalculation of skeletal age must be performed.
Method: Three approaches can be identified in retrocalculation: 1) retrocalculation of skeletal age, which involves identifying the major and minor axes of the bone's cross-section; the circumferences of bones are calculated using Ramanujan's formula C = π[3(a + b) − √((a + 3b)(3a + b))]; 2) retrocalculation through arithmetic estimation, which requires sampling several parts of other bones and making an estimate of the amount of missing tissue; 3) retrocalculation by superimposition in an ontogenetic series, which requires a complete growth record for one individual so that its histological cross-sections can be overlaid and reconstructed on another individual.
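Ramanujan's formula above, which approximates the circumference of an ellipse (here, a bone's elliptical cross-section) from its semi-axes a and b, can be checked directly. This is a sketch; the function name is ours, not from the source.

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Ramanujan's approximation C = pi * (3(a + b) - sqrt((a + 3b)(3a + b)))
    for an ellipse with semi-axes a and b."""
    return math.pi * (3 * (a + b) - math.sqrt((a + 3 * b) * (3 * a + b)))

# Sanity check: for a circle (a == b == r) the formula reduces exactly to 2*pi*r.
print(ellipse_circumference(1.0, 1.0))  # ~6.28318 (i.e. 2*pi)
```
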
**Computational number theory** Computational number theory: In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to diophantine equations, and explicit methods in arithmetic geometry. Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program. Software packages: the Magma computer algebra system, SageMath, the Number Theory Library (NTL), PARI/GP, and the Fast Library for Number Theory (FLINT).
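As a minimal illustration of two of the tasks named above, primality testing and integer factorization, here is a naive trial-division sketch (the packages listed use far faster algorithms, such as Miller–Rabin tests and sieve-based factorization):

```python
def is_prime(n: int) -> bool:
    """Naive trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def factorize(n: int) -> list:
    """Prime factorization of n >= 2 by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(is_prime(97))   # True
print(factorize(84))  # [2, 2, 3, 7]
```
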
**Zinc transporter ZIP6** Zinc transporter ZIP6: Zinc transporter ZIP6 is a protein that in humans is encoded by the SLC39A6 gene.Zinc is an essential cofactor for hundreds of enzymes. It is involved in protein, nucleic acid, carbohydrate, and lipid metabolism, as well as in the control of gene transcription, growth, development, and differentiation. SLC39A6 belongs to a subfamily of proteins that show structural characteristics of zinc transporters (Taylor and Nicholson, 2003).[supplied by OMIM]
**NGC 103** NGC 103: NGC 103 is a small open cluster. It is partially visible in an 8" amateur telescope under moderately light polluted skies. It is roughly 4600 light-years from the Sun.
**Slovak declension** Slovak declension: Slovak, like most Slavic languages and Latin, is an inflected language, meaning that the endings (and sometimes also the stems) of most words (nouns, adjectives, pronouns and numerals) change depending on the given combination of the grammatical gender, the grammatical number and the grammatical case of the particular word in the particular sentence: a) Gender: There are four grammatical genders in Slovak: animate masculine, inanimate masculine, feminine, and neuter. In popular description, the first two genders are often covered under a common masculine gender. Almost all Slovak nouns and adjectives, as well as some pronouns and numerals, can be categorized into one of these genders. Exceptions are pluralia tantum (Vianoce – Christmas, though there are rules for deriving the gender), words that are drifting into another gender and are currently neuter (knieža – prince), and masculine animals, which are animate in the singular and mostly inanimate in the plural. Slovak declension: b) Number: As in English, Slovak has singular and plural nouns. Morphological traces of the ancient Indo-European dual number remain, but are no longer a separate grammatical category. Slovak declension: A particular case is associated with each of three distinct groups of numerals used with nouns: 1 (one) – nominative case singular, for example jeden dub (one oak); 2, 3, 4 – nominative case plural, for example dva duby (two oaks); 0, 5 and more – genitive case plural, for example päť dubov (five [of] oaks). c) Morphological cases: the nominative case (N) = the subject; the basic form of the word; answers the question Who / What; for example father (sg), fathers (pl) the genitive case (G) = (1) in English "of x" or "x's"; answers the questions Of whom / Of what; for example father's (sg.
), fathers' (pl); (2) is used after the prepositions bez (without), blízko (near), do (to, into), doprostred (in(to) the middle of), mimo (out(side) of), miesto (instead of), okolo (around), od (from), podľa (according to), pomimo (next to, around), pomocou (by means of), pozdĺž (along), u (at), uprostred (in the middle of), vedľa (next to, adjacent to), vnútri (in, inside of), vyše (above), z (out of, from), *za (behind) the dative case (D) = (1) in English "to x"; answers the question To whom / To what; for example to the father (sg), to the fathers (pl); (2) is used after the prepositions k (to, towards), kvôli (because of), napriek (in spite of), naproti (facing, opposing), oproti ((facing, opposing)), voči (facing, against) the accusative case (A) = (1) the direct object; answers the question Whom / What; for example [I see the] father (sg), fathers (pl); (2) is used after the prepositions: cez(through), *medzi (between, among), *na (on, at), *nad (above), *po (after, for), *o (about, on), *pod (under), pre (for, because of), *pred (before, in front of), *v (in, on), vzhľadom na (regarding, concerning), *za (behind, for) the locative case (L) = used after the prepositions *na (on), *po (after), *o (about, on), pri (at, next to), *v (in, on) the instrumental case (I) = (1) in English "by (means of) x"; answers the question By (means of) whom / By (means of) what; for example [written] by the father; (2) is used after the prepositions: *medzi (between, among), *nad (above), *pod (under), *pred (before, in front of), s (with), *za (behind, at the back of) The (syntactic) vocative case (V) is not morphologically marked anymore in modern Slovak (unlike in modern Czech). Today the (syntactic) vocative is realised by the (morphological) nominative case, just like in English, German and many other languages. However, the ancient vocative declensions have survived (mostly in conserved, archaic words or language, e.g. 
in fairy tales, folklore, or in an ironic sense) in some words, some examples: syn (son) – V: synku, brat (brother) – V: bratu, bratku, chlapec (boy, knave) – V: chlapče, švagor (brother-in-law) – V: švagre or N, kmotor (godparent) – V: kmotre or N, chlap (man, male) – V: chlape, priateľ (friend) – V: priateľu or N, pán (mister, lord) – V: pane or N, majster (master artist) – V: majstre or N, boh (god) – V: bože, mama (mum, mother) – V: mamo, mami, and were retrofitted (with the help of Czech influence) to some more words, like šéf (chief, boss) – V: šéfe. There is a dispute among some Slovak linguists over whether to include the vocative among the grammatical categories, with declension (mostly) equal to the nominative, or to unify it with the nominative case category. The morphological vocative is used only for the above restricted number of words and, in addition, only in some contexts (such as many dialects, which still use the vocative case). Note, however, that there is no dispute that the syntactic vocative exists in Slovak. Slovak schools have taught for at least 30 years that the vocative is no longer a grammatical category in use; however, its use in the past is often mentioned. The Slovak Encyclopedia of Linguistics (1993) explicitly says: the vocative is nowadays replaced by the nominative. However, the Slovak National Corpus explicitly includes the vocative as a separate case in its morphological analysis and corpus tagset. There is also a different form of morphological vocative emerging in spoken language, used with some familiar forms of personal names (Paľo – Pali, Jano, Jana – Jani, Zuza – Zuzi) and familiar forms of kinship words, such as mama – mami (mum, mother), oco – oci (dad, father), tata, tato – tati (dad, daddy), baba, babka – babi (gran, granny, grandmother). This usage is very similar to the "new Russian vocative" (Маш', Петь', мам'), but it is not accepted into the standardised codified language.
This could have developed out of proper names that were formed using the Hungarian diminutive suffix -i and that are used in spoken Slovak, and therefore is often homonymous with nominative (semi-)diminutive forms of the names. Another possibility is influence from Czech (from common bilingual TV during Czechoslovakia), where Jani / Zuzi as well as mami / tati / babi is part of Common Czech. Legend: "ends in" in the following refers to the ending in the nominative singular (N sg), unless stated differently; Soft consonants are: all consonants with the diacritic mark ˇ (for example š, ľ) + c, dz, j. Hard and neutral consonants are all the remaining consonants; For masculine nouns, adjectives, pronouns and numerals it is necessary to distinguish between animate and inanimate ones. An animate noun is a person (for example father, Peter) and an inanimate noun is any other noun (for example table, fear, democracy). Animals are usually viewed as persons only in sg. For the animate nouns, the G is identical with the A (both in sg. and in pl.), and for the inanimate nouns, the N is identical with the A (both in sg. and in pl.). Animate/Inanimate adjectives, pronouns and numerals are those referring to an animate/inanimate noun respectively (for example in "my father" the "my" is animate, because father is animate); sg = singular, pl = plural; N, G, D, A, L, I are abbreviations of grammatical cases (see above). Nouns: For each gender, there are four basic declension paradigms (that is declension models). Note that many nouns (especially those following the paradigm chlap) have different endings than those of the paradigms in one or more grammatical cases. They are neither defined, nor listed in the following. The complete number of different paradigms for nouns is somewhere around 200. A very small number of foreign nouns are not declined (that is the stem and ending never change). 
Nouns: The Masculine Gender There is also a 5th paradigm for foreign nouns ending in -i, -y, -e, -í, -é, -ě, -ä (for example pony, kuli, Tököli, Goethe, Krejčí, abbé, Poupě) and foreign personal names ending in -ü, -ö (for example Jenö), which goes as follows: Sg: N: pony, G: ponyho, D: ponymu, A: ponyho, L and I: ponym; Pl: like hrdina. Masculine animal nouns are declined like chlap in the singular, but in the plural usually like dub (if they end in a hard or neutral consonant) or like stroj (otherwise). Nouns: Notes on chlap: For the nouns ending in a vowel (for example -o, -u) the vowel is not part of the stem, but the ending in N sg: for example dedo has G / D sg... deda / dedovi etc. (not *dedoa / *dedoovi etc.); many nouns lose an e / o / i from the stem in all cases except N sg (for example vrabec – vrabca); in some short nouns, the -e- changes its position in all cases except N sg (for example žnec – ženca); some nouns ending in -k / -ch change their final /k/ or /ch/ into /c/ and /s/, respectively, in N pl (for example žiak – žiaci); words ending in -h use the N pl ending of hrdina instead (such as vrah – vrahovia, súdruh – súdruhovia); most Latin and Greek nouns ending in -us, -as, -es lose it in all cases except N sg (for example génius – génia; but for example fiškus – fiškusa). Notes on hrdina: Notes on dub: many nouns lose e / o / i / í / ie / á from the stem in all cases except N sg and A sg (for example výmysel – výmysla, chrbát – chrbta, ohníček – ohníčka, dnešok – dneška, ocot – octa); some Greek and Latin nouns in -us, -es, -os lose the -us / -es / -os in all cases except N sg and A sg (e.g. komunizmus – komunizmu; but e.g.
autobus – autobusu, cirkus – cirkusu); some Slovak words lose the acute or the i / u from a diphthong in all cases except N sg and A sg (for example mráz – mraza, chlieb – chleba, vietor – vetra (here along with loss of o), stôl – stola, bôr – bora); in G pl, some nouns change the a / e / i / o / u (without an acute or a preceding i) in the stem to á / ie / í / ô / ú (raz – ráz, Vojany – Voján, Krompachy – Krompách, Žabokreky – Žabokriek, Poniky – Poník, sloha – slôh) or in some cases to ia / iu (for example čas – čias, Margecany – Margecian), unless the rhythmical rule prevents it, i.e. the preceding syllable in the stem already contains a vowel with an acute or a diphthong (for example Hájniky – Hájnik); in L sg, nouns ending in g / k / h have -u rather than -e. Notes on stroj: many nouns lose the e / o / i / í / ie / á in all cases except N sg and A sg (for example marec – marca, delenec – delenca, veniec – venca, deň – dňa, stupeň – stupňa, lakeť – lakťa); some nouns lose the acute or the i/u from a diphthong in all cases except N sg and A sg (for example dážď – dažďa, nôž – noža); in G pl, geographical names in pl. (plurale tantum) change the a / e / i / o / u (without an acute or a preceding i) in the stem to á / é / í / ó / ú (for example Tlmače – Tlmáč) or in some cases to ia / ie / iu / ô (for example Ladce – Ladiec), unless the rhythmical rule prevents it, i.e. the preceding syllable in the stem already contains an acute or a diphthong. Nouns: The Feminine Gender There is also a 5th paradigm for feminine nouns ending in -ná or -ovná (for example princezná), where the singular and N pl and A pl are like pekná (see under adjectives) and the remaining plural is like žena. In the G pl, there are changes in the stem: if the noun ends in -vowel + ná, then this vowel receives an acute (for example švagriná – švagrín), but otherwise -ie- is inserted (for example princezná – princezien).
Nouns: There is also a 6th paradigm for the feminine nouns ending in -ea (idea, Kórea), which goes like žena, except that D sg and L sg are idei, and G pl is ideí without change in the stem. Nouns: Notes on žena: The following nouns are declined like ulica instead of žena: večera, rozopra, konopa, Hybe and (the plurale tantum) dvere; In the G pl of some nouns, an ie / e / o / á / ô is inserted in the last syllable of the stem (for example hra – hier, čipka – čipiek/čipôk, karta – kariet/karát, kvapka – kvapiek/kvapák/kvapôk, vojna – vojen, látka – látok); In the G pl of some nouns, in the last syllable of the stem the a / i / y / u / ä / e / o / syllabic r / syllabic l (without an acute or a preceding i) is changed into á (or ia) / í / ý / ú / ia / ie / ô / ŕ / ĺ respectively (sila – síl, skala – skál, chyba – chýb, ruka – rúk, fakulta – fakúlt, päta – piat, slza – sĺz, črta – čŕt, brzda – bŕzd). Notes on ulica: In the G pl of some nouns ie is inserted (for example jedľa – jedieľ, sukňa – sukieň); In the G pl of some nouns, in the last syllable of the stem the a / i / y / u / e / o / syllabic r (without an acute or a preceding i) is changed into á (or ia) / í / ý / ú / ie / ô / ŕ respectively (for example ulica – ulíc, sudkyňa – sudkýň, Krkonoše – Krkonôš, košeľa – košieľ, guľa – gúľ, hoľa – hôľ, fľaša – fliaš). Notes on dlaň: The following nouns are declined like dlaň, not like kosť: obec, päsť, čeľusť; The following feminine nouns are not declined like dlaň, but like kosť: jar, zver, chuť, ortuť, pamäť, smrť, pleť, sneť, rukoväť, smeť, púť, spleť, svojeť, reč, seč, meď, soľ, hluš, myš, voš, lož, bel, Sereď, Sibír, Budapešť, Bukurešť, Lešť and a few other nouns. The words myseľ, chuť, raž, tvár, hneď can be declined like dlaň or like kosť in the singular, but only like dlaň in the plural. The word hrsť is declined like dlaň in the singular, but like kosť in the plural.
The word pamäť is declined like kosť when it refers to human memory, but like dlaň when it refers to computer memory; most nouns in -eň lose -e- in all cases except N sg and A sg (for example úroveň – úrovne).Notes on kosť: see the first two notes under dlaň; some nouns lose -e-/-o- in all cases except N sg and A sg (for example ves – vsi, lož – lži, cirkev – cirkvi). Nouns: The Neuter Gender For (any) neuter nouns ending in -vowel+um/on (for example štúdium, ganglion) there is actually a 5th paradigm (štúdium), which is declined like mesto except that the -um- / -on- is omitted in all cases except N sg and A sg., L sg ends in -u (štúdiu), and G pl in -í (štúdií). Nouns: Notes on mesto: Latin and Greek neuter nouns ending in consonant + -um/-on (for example fórum, epiteton) are declined like mesto, except that the -um/-on is omitted in all cases except N sg and A sg (for example, N sg and A sg: publikum, G sg: publika, D sg: publiku etc.); in the G pl of some nouns, an ie/ e / o / á / (rarely é) is inserted in the last syllable of the stem (for example clo – ciel, mydlo –mydiel, zvieratko – zvieratiek, jedlo – jedál, vrecko – vrecák/vreciek, vlákno – vláken/vlákien, číslo – čísel / čísiel, lajno – lajen, lýtko – lýtok, teliesko – teliesok; in the G pl of some nouns, in the last syllable of the stem, the a / i / y / u / ä / e / o / syllabic r / syllabic l (without an acute or a preceding i) is changed into á / í / ý / ú / ia / ie / ô / ŕ / ĺ respectively (kladivo – kladív, zrno – zŕn).Notes on srdce: In the G pl of some nouns, an ie/e is inserted in the last syllable of the stem (for example citoslovce – citosloviec, okience – okienec, vajce – vajec); In the G pl of some nouns, in the last syllable of the stem the a / i / y / u / ä / e / o / syllabic r / syllabic l (without an acute or a preceding i) is changed into á / í / ý / ú / ia / ie / ô / ŕ / ĺ respectively (plece – pliec, srdce – sŕdc, slnce – sĺnc).Notes on vysvedčenie: Notes on dievča: The -a- at the 
beginning of all endings is replaced by ä after a labial consonant, i.e. p/b/m/f/v (for example žriebä – žriebäťa – žriebäťu...); Most nouns can take both the -at- endings and the -enc- endings in the plural (for example dievča, húsa, bábä), some nouns, however, take only the -at- endings (for example knieža, zviera, mláďa) and some nouns only the -enc- endings (for example kura). The following nouns do not take the -en- in the alternative plural endings: prasa (N pl prasatá/prasce, G pl prasiat/prasiec), teľa, šteňa. Adjectives: Paradigms Pekný This paradigm is used for adjectives ending in a hard or neutral consonant + ý [in masculine] Cudzí This paradigm is used for adjectives ending in a soft consonant + í [in masculine] (including the comparative and superlative, see below); Forms: They are like those of pekný, but within the endings (that is, in what follows after pekn-) always replace ý by í, é by ie, á by ia, and ú by iu, e.g.: pekný – cudzí, pekné(ho) – cudzie(ho), pekný(m) – cudzí(m), pekná – cudzia, peknú – cudziu. Adjectives: Otcov This paradigm is used for adjectives ending in -ov / -in, for example otcov ("father's"), matkin ("mother's"). All of them are possessive adjectives (adjectives in -ov are derived from masculine nouns, adjectives in -in – from feminine nouns). Adjectives: The Comparative and Superlative The comparative is formed by replacing the adjective ending -ý/y/i/í by -ejší or -ší. There are exact rules for the choice between these two endings and there are several irregular comparatives. Examples: Regular: hrozný – hroznejší, bohatý – bohatší… Irregular: veľký – väčší, malý – menší, dobrý – lepší, zlý – horší, pekný – krajší, čierny – černejší, blízky – bližší, ďaleký – ďalší, hlboký – hlbší… The comparative forms are declined like cudzí. Adjectives: The superlative (that is biggest, most difficult etc.) is formed as follows: naj+comparative. Examples: pekný – krajší – najkrajší, hrozný – hroznejší – najhroznejší...
The comparative and superlative of adverbs (which, by the way, end in -o, -e or -y in the basic form) is formed by simply replacing the -(ej)ší from the adjective by -(ej)šie (for example: pekne – krajšie – najkrajšie, hrozne – hroznejšie – najhroznejšie, teplo – teplejšie – najteplejšie, pomaly – pomalšie – najpomalšie). Pronouns: Personal pronouns There is also the reflexive pronoun sa, which is declined as follows: N: –, G: seba, D: sebe / si, A: seba/sa, L: sebe, I: sebou Notes: the long forms mňa, teba, seba, mne, tebe, sebe in G, D and A are used after prepositions (for example pre mňa) or when emphasized, especially always at the beginning of the sentence (for example Vidíš len seba., Teba vidím.); the forms jeho, jemu in G, D and A are used when emphasized, especially always at the beginning of the sentence (for example Vidím jeho. Jeho vidím = It is him that I see); the forms in n- (that is neho, nemu, nej, ňu, nich, nim, ne) are used after prepositions (for example pre neho (masc.)); the forms -ňho (or -ň), -ňmu, -ň can be used alternatively after the prepositions do, pre, na, za, o, po, do, u (for example pre neho (masc.) = preňho = preň); the special form -eň can be used alternatively (for neuter nouns obligatorily) after the prepositions nad, ponad, cez, pod, popod, pred, popred (for example nad neho (masc.) = nadeň). Pronouns: Demonstrative Pronouns like ten (that, the) are declined: tamten (that one), henten (that one), tento (this one), tenže (the same)... like adjectives are declined: for example istý (certain, same), každý (each), iný (other), taký / onaký (such), všetok (all), sám (-self), onen (that one), and žiaden = žiadny (no one)... 
Pronouns: Interrogative (and Relative) and Indefinite pronouns who: N: kto – G: koho – D: komu – A: koho – L: kom – I: kým [always masculine animate]; what: N: čo – G: čoho – D: čomu – A: čo – L: čom – I: čím [always neuter]. Like kto/čo are declined: nikto (nobody), niekto / dakto (someone), niečo / dačo (something), hocikto (who ever), nič (nothing), ktosi (someone), čosi (something)... Pronouns: Like adjectives are declined: čí (whose), niečí / dačí / hocičí (someone's), ničí (no one's), ktorý (which), aký (what, which), nejaký / dajaký (some), nijaký / niktorý (no), čísi (someone's), číkoľvek (whose ever), akýsi (some), ktorýsi (some), ktorýkoľvek (which ever)... Possessive pronouns The following are the first person pronouns. Pronouns: Like môj (my) are declined: tvoj (your (sg.)) and svoj (one's own), except that the o never changes into ô (for example tvoj – tvojho...); náš (our) and váš (your (plural)), except that the -ô- in môj corresponds to an -á-, and an -o- in môj corresponds to an -a- here (for example náš – G: nášho – L: našom). Not declined are: jeho (his), jej (her), ich (their). Numerals: Cardinal Numerals Paradigms jeden (one): declined like the adjective pekný; Changes for compound numerals in jeden: not declined; see Compound Numerals. dva (two): N: dvaja (masc. animate); dva (masc. inanimate); dve (otherwise) – G: dvoch – D: dvom – A: dvoch (masc. animate); dva (masc. inanimate); dve (otherwise) – L: dvoch – I: dvoma; Changes for compound numerals in dva: N: dvaja / dva (masc. animate); dva (otherwise), A: dvoch / dva (masc. animate); dva (otherwise). Also declined like dva: obidva / oba (both), and (with the above changes) the second part of the compound numerals 32, 42... 92, if they are declined (see Compound Numerals). tri (three): N: traja (masc. animate); tri (otherwise) – G: troch – D: trom – A: troch (masc. animate); tri (otherwise) – L: troch – I: troma / tromi. Numerals: Changes for compound numerals in tri, štyri: N: traja / tri (masc.
animate); tri (otherwise), A: troch / tri (masc. animate); tri (otherwise). Also declined like tri: štyri (4), and (with the above changes) the second part of the compound numerals 23, 33, 43… 93; 24, 34, 44… 94, if they are declined (see Compound Numerals). päť (five): N: piati / päť (masc. animate); päť (otherwise) – G: piatich – D: piatim – A: piatich / päť (masc. animate); päť (otherwise) – L: piatich – I: piatimi. Also declined like päť: the numerals šesť (6) to devätnásť (19), and 20, 30, 40, 50, 60, 70, 80, 90, and the second part of the compound numerals 25–29, 35–39 ... 95–99, if they are declined (see Compound Numerals). 100, 200, 300... 900; 1000, 2000, 3000... 9000: not declined, but 1000 can be declined like päť. Numerals: Compound Numerals if they end in -jeden (for example 21, 101): not declined; otherwise: 2 alternatives: not declined or declined; if they are declined, then each number making up the numeral is declined according to its own paradigm (for example 23 chlapov: dvadsiatich troch chlapov). Ordinal Numerals They are declined like adjectives (paradigms pekný and cudzí). Note: Ordinal numerals are formed by adding adjective endings to the (slightly modified) cardinal numbers, for example: 5: päť – 5th: piaty, 20: dvadsať – 20th: dvadsiaty.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vint** Vint: Vint is a Russian card game, similar to both bridge and whist, and is sometimes referred to as Russian whist. Vint means "screw" in Russian, and the name is given to the game because the four players, each in turn, propose, bid and overbid each other until one, having bid higher than the others care to follow, makes the trump, and his vis-à-vis plays as his partner. The game spread to Finland, where it evolved into Skruuvi, which also features a kitty and misère contracts. Description of vint: Vint has many similarities to rubber bridge: The cards have the same rank. The score for tricks is entered under the line, and points for slam, honors, and penalties for undertricks above the line. The bidding is similar to bridge: one bids the number of tricks and the trump suit or no trump. During the progress of the bidding and declaring, opportunity is taken by the players to indicate by their calls their strength in the various suits and the high cards they hold, so that, when the playing begins, the position of the best cards and the strength of the different hands can often be fairly accurately estimated. Unlike bridge, in Vint there is no dummy, all taken tricks count toward a game (that is, the tricks taken by the defenders as well as the tricks taken by the declarer side, including overtricks, regardless of whether the contract was made or not), and the bidding ends after eight consecutive passes (everyone passes twice, including the player who made the last bid). The value of a trick depends on the level of the contract: in higher contracts the value of a trick is higher. Description of vint: The card play follows the standard whist formula. One must follow suit, but if unable to do so, one can play any card. The trick is won by the highest trump, if there are trumps in the trick, otherwise by the highest card of the suit led. The winner of the trick starts the next one. Points are awarded also for honours.
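The trick-resolution rule just described (the highest trump wins; otherwise the highest card of the suit led) can be sketched directly. The card encoding and the function name here are illustrative assumptions, not notation from the game's literature:

```python
# Cards are (suit, rank) tuples; tricks are lists in order of play.
RANKS = {r: i for i, r in enumerate(
    ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"])}

def trick_winner(cards, trump):
    """Return the index (in play order) of the card that wins the trick."""
    led_suit = cards[0][0]
    def strength(card):
        suit, rank = card
        if suit == trump:
            return (2, RANKS[rank])   # any trump beats every non-trump
        if suit == led_suit:
            return (1, RANKS[rank])   # otherwise highest of the suit led
        return (0, RANKS[rank])       # off-suit discards cannot win
    return max(range(len(cards)), key=lambda i: strength(cards[i]))

# With hearts as trump, even the lowest trump beats the ace of the suit led:
print(trick_winner([("S", "A"), ("S", "K"), ("H", "2"), ("S", "Q")], "H"))  # 2
```

In a no trump contract the same function works with `trump=None`, since no card's suit matches and only the suit led can win.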
In a no trump declaration aces count only as honors; in a suit declaration both the aces and the five next highest cards. Development The game emerged during the latter half of the 19th century. In primitive forms, known as Siberian Vint, the value of the trick depended on the level of the contract and the trump suit. Later, this was simplified so that the value of the trick depended only on the level of the contract. Description of vint: Towards the end of the 19th century, the kitty was added to the game. The highest bidder took a kitty of 4 cards into his hand and gave one card to each of the other players before the card play started. Towards the end of the 19th century, the card exchange mechanism used in Skruuvi was also born. The highest bidder took the kitty into his hand and gave 4 cards to his partner, who, in turn, gave one card to each of the other players. This enabled the declarer side to arrange strongly shaped hands for themselves, which led to higher contracts. Description of vint: The first rule book for bridge, dated 1886, was Biritch, or Russian Whist, written by John Collinson, an English financier working in Ottoman Constantinople (now Istanbul). It and his subsequent letter to The Saturday Review dated May 28, 1906, document the origin of Biritch as being the Russian community in Constantinople. The word biritch is thought to be a transliteration of the Russian word Бирюч (бирчий, бирич), the occupation of a herald or announcer. There are references to Vint in classical Russian literature, notably in the short stories of Anton Chekhov, and in Cancer Ward by Aleksandr Solzhenitsyn. The composer Tchaikovsky was a very enthusiastic player. Skruuvi: Skruuvi is a Finnish variant of Vint, and it became common in Finland while the country was a part of Russia. The rules of Skruuvi diverged slowly from Vint, and they were codified in the 1940s in the books Skruuviopas by the pseudonym O.L. and Uusi täydellinen skruuvipelin ohjekirja by the pseudonym E.N. Maalari.
Skruuvi is still played in Finland as a niche hobby, whereas Vint is no longer played in Russia. Helsingin Suomalainen Klubi still organizes an annual Toro Skruuvi tournament in honour of Arvo Ylppö, an enthusiastic Skruuvi player. Skruuvi: Skruuvi uses a bidding system similar to bridge, but the emphasis of the bidding system is more on signifying individual high cards, similar to slam-investigating cue bids in bridge. Skruuvi: Differences from vint In Skruuvi, as described by E.N. Maalari, there is a kitty of four cards that the declarer side gets after bidding, and the game involves some exchange of cards so that everyone ends up with 13 cards. After the exchange of cards, the bidding continues, but only the members of the declarer side are allowed to participate. The trick-taking play occurs after this second bidding round. Skruuvi: In addition to the Vint-style scoring, the declarer side gets a bonus for a made contract that depends on the level of the contract. In Skruuvi, the non-declarer side may also double by knocking on the table before the card play starts. It is also possible to bid misääri, a contract where the aim is to avoid tricks. In a round-pass situation a forced misääri is played. Since the exchange of cards favours the declarer side, final contracts in Skruuvi are rather high, at a level of four or higher. In some circles undoubled contracts of four odd tricks, and sometimes also undoubled contracts of five odd tricks, are judged made without playing out the hand. After a rubber has been played in Skruuvi, four end games (called Kotka) are played without a kitty. In the end games the bidding starts at a level of six (small slam level), and the exchange of cards highly favours the declarer side. Skruuvi: A typical skruuvi night consists of three matches, where a match consists of a rubber of ordinary skruuvi and four kotkas. Between the matches, the seats are changed so that everyone plays as a partner of everyone else.
The partnerships may be temporarily broken if the players make certain special bids, bolshevik or mussolini. In these contracts the declarer plays alone against everyone else: in bolshevik a grand slam misääri, and in mussolini a grand slam no trump. Skruuvi: Later developments The scoring system of classical Skruuvi, as played in the first half of the 20th century, was notoriously complicated, with scoring for games, made contracts, taken tricks (or avoided tricks in misääri), various honours, penalties for failed contracts, penalties for aces taken in tricks during a misääri contract and, of course, special scoring for bolshevik and mussolini. Skruuvi: Since the 1950s, at Helsingin Suomalainen Klubi, the scoring system has been streamlined. Bonuses for honors and the concept of playing a rubber have been dropped altogether. A match consists of four hands of ordinary skruuvi, four hands of kotka and four hands of bolshevik. According to earlier rules, it was possible to bid a bolshevik, but the club rules made it mandatory for everyone to bid bolshevik once during a twelve-hand match. Only slam contracts and doubled contracts are actually played out; other contracts are judged made without actual card play. Skruuvi: Another modern variant consists of eight hands, four hands of ordinary skruuvi and four hands of kotka. Points are awarded only for made contracts, avoided tricks in forced misääri, penalties for undertricks and penalties for aces taken in misääri. In this variant, all the hands are played out, but the minimum allowed final contract is five odd tricks. Famous players: Minna Canth, Arvo Ylppö
**Relapsing polychondritis** Relapsing polychondritis: Relapsing polychondritis is a multi-systemic condition characterized by repeated episodes of inflammation and deterioration of cartilage. The often painful disease can cause joint deformity and be life-threatening if the respiratory tract, heart valves, or blood vessels are affected. The exact mechanism is poorly understood, but it is thought to be related to an immune-mediated attack on particular proteins that are abundant in cartilage. Relapsing polychondritis: The diagnosis is reached on the basis of the symptoms and supported by investigations such as blood tests and sometimes other investigations. Treatment may involve symptomatic treatment with painkillers or anti-inflammatory medications, and more severe cases may require suppression of the immune system. Signs and symptoms: Though any cartilage in the body may be affected in persons with relapsing polychondritis, in many cases the disease affects several areas while sparing others. The disease may be variable in its signs and symptoms, resulting in a difficult diagnosis which may lead to delayed recognition for months, years or decades. Joint symptoms are often one of the first signs of the disease, with cartilage inflammation initially absent in nearly half the cases. Associated diseases There are several other overlapping diseases associated with RP that should also be taken into account. About one-third of people with RP have an associated autoimmune disease, vasculitis or hematologic disorder. Systemic vasculitis is the most common association with RP, followed by rheumatoid arthritis and systemic lupus erythematosus. The following table displays the main diseases in association with RP. Signs and symptoms: Cartilage inflammation Cartilage inflammation (technically known as chondritis) that is relapsing is very characteristic of the disease and is required for the diagnosis of RP.
These recurrent episodes of inflammation over the course of the disease may result in breakdown and loss of cartilage. The signs and symptoms of cartilage inflammation in various parts of the body will be described first. Signs and symptoms: Ear Inflammation of the cartilage of the ear is a specific symptom of the disease and affects most people. It is present in about 20% of persons with RP at presentation and in 90% at some point. Both ears are often affected, but the inflammation may alternate between either ear during a relapse. It is characteristic for the entire outer part of the ear except the earlobe to be swollen, red, or less often purplish, warm and painful to light touch. The inflammation of the ear usually lasts a few days or more, rarely a few weeks, and then resolves spontaneously and recurs at various intervals. Because of the loss of cartilage, after several flares a cauliflower ear deformity may result. The outer part of the ear may be either floppy or hardened by calcifications of the scar tissue that replaces the cartilage. These cauliflower ear deformities occur in about 10% of persons with RP. Signs and symptoms: Nose The inflammation of the cartilage of the nose involves the bridge of the nose and is often less marked than in the ears. Statistics show that this clinical manifestation is present in 15% of persons with RP and occurs at some point in 65% of persons with RP. Nasal obstruction is not a common feature. Atrophy may eventually develop secondarily during the disease; this appears gradually and is not easily noticed. This can result in collapse of the nasal septum with saddle-nose deformity, which is painless but irreversible. Signs and symptoms: Respiratory tract Inflammation occurs in the laryngeal, tracheal and bronchial cartilages. These sites are involved in 10% of persons with RP at presentation and 50% over the course of this autoimmune disease, and involvement is more common among females.
The involvement of the laryngotracheobronchial cartilages may be severe and life-threatening; it causes one-third of all deaths among persons with RP. Signs and symptoms: Laryngeal chondritis is manifested as pain above the thyroid gland and, more importantly, as dysphonia with a hoarse voice or transient aphonia. Because this disease is relapsing, recurrent laryngeal inflammation may result in laryngomalacia or permanent laryngeal stenosis with inspiratory dyspnea that may require emergency tracheotomy as a temporary or permanent measure. Tracheobronchial involvement may or may not be accompanied by laryngeal chondritis and is potentially the most severe manifestation of RP. The symptoms consist of dyspnea, wheezing, a nonproductive cough, and recurrent, sometimes severe, lower respiratory tract infections. Signs and symptoms: Obstructive respiratory failure may develop as the result of either permanent tracheal or bronchial narrowing or chondromalacia with expiratory collapse of the tracheobronchial tree. Endoscopy, intubation, or tracheotomy has been shown to hasten death. Ribs Involvement of the rib cartilages results in costochondritis. Symptoms include chest wall pain or, less often, swelling of the involved cartilage. The involvement of the ribs is seen in 35% of persons with RP but is rarely the first symptom. Other manifestations Relapsing polychondritis may affect many different organ systems of the body. At first, some people with the disease may have only nonspecific symptoms such as fever, weight loss, and malaise. Joint The second most common clinical finding of this disease is joint pain with or without arthritis, after chondritis. Signs and symptoms: All synovial joints may be affected. At presentation, around 33% of people have joint symptoms that involve polyarthralgia and/or polyarthritis or oligoarthritis that affects various parts of the body and often appears to be episodic, asymmetric, migratory and non-deforming.
The most common sites of involvement are the metacarpophalangeal joints, proximal interphalangeal joints and knees, followed by the ankles, wrists, metatarsophalangeal joints and the elbows. Any involvement of the axial skeleton is considered to be very rare. Tests for rheumatoid factor are negative in affected persons with RP, unless there is a co-morbidity with RA. Less often it has been reported that persons may experience arthralgia, monoarthritis, or chronic polyarthritis that mimics rheumatoid arthritis, leading to a difficult diagnosis for this disease. The appearance of erosions and destruction, however, is exceedingly rare and this may point instead to rheumatoid arthritis as a cause. Diseases and inflammation of tendons have been reported in small numbers of people with RP. During the course of the disease, around 80% of people develop joint symptoms. Signs and symptoms: Eye Involvement of the eye is rarely the initial symptom but develops in 60% of persons with RP. The most common forms of ocular involvement are usually mild and often consist of unilateral or bilateral episcleritis and/or scleritis, which is often anterior and may be lingering or relapsing. Necrotizing scleritis is exceedingly rare. Less often, conjunctivitis occurs. Signs and symptoms: There are also other ocular manifestations that occur in persons with RP; these include keratoconjunctivitis sicca, peripheral keratitis (rarely with ulcerations), anterior uveitis, retinal vasculitis, proptosis, lid edema, keratoconus, retinopathy, iridocyclitis and ischemic optic neuritis that can lead to blindness. Cataract is also reported, in relation to either the disease or to glucocorticoid exposure. Neurological The involvement of the peripheral or central nervous system is relatively rare and only occurs in 3% of persons affected with RP, and is sometimes seen in relation with concomitant vasculitis.
The most common neurological manifestations are palsies of the cranial nerves V and VII. Hemiplegia, ataxia, myelitis and polyneuropathy have also been reported in the scientific literature. Very rare neurological manifestations include aseptic meningitis, meningoencephalitis, stroke, focal or generalized seizures and intracranial aneurysm. Magnetic resonance imaging of the brain shows multifocal areas of enhancement consistent with cerebral vasculitis in some cases. Signs and symptoms: Kidney The involvement of the kidney can be caused by primary renal parenchymal lesions, an underlying vasculitis, or another associated autoimmune disease. Actual kidney involvement is quite rare; elevated creatinine levels are reported in approximately 10% of people with RP, and abnormalities in urinalysis in 26%. Involvement of the kidney often indicates a worse prognosis, with a 10-year survival rate of 30%. Signs and symptoms: The most common histopathologic finding is mild mesangial proliferation, followed by focal and segmental necrotizing glomerulonephritis with crescents. Other abnormalities that are found include glomerulosclerosis, IgA nephropathy and interstitial nephritis. Immunofluorescence studies most often reveal faint deposits of C3, IgG or IgM, primarily in the mesangium. Constitutional symptoms These symptoms can consist of asthenia, fever, anorexia, and weight loss. They mostly occur during a severe disease flare. Others Skin and mucous membranes: 20 to 30% of people with relapsing polychondritis have skin involvement, including aphthous ulcers, genital ulcers, and a number of non-specific skin rashes including erythema nodosum, livedo reticularis, hives, and erythema multiforme. Cardiovascular system: Relapsing polychondritis may cause inflammation of the aorta. It can also cause leaky heart valves (aortic valve regurgitation in 4 to 10%, mitral valve regurgitation in 2%).
Causes: Relapsing polychondritis is an autoimmune disease in which the body's immune system begins to attack and destroy the cartilage tissues in the body. It has been postulated that both cell-mediated immunity and humoral immunity are responsible. Reasons for disease onset are not known, but there is no direct evidence of a genetic predisposition to developing relapsing polychondritis. However, there are cases where multiple members of the same family have been diagnosed with this illness, and studies indicate that some genetic contribution to susceptibility is likely. Diagnosis: There is no specific test for relapsing polychondritis. Some people may exhibit abnormal lab results while others may have completely normal labs even during active flares. Diagnosis: Diagnostic criteria There are several clinical criteria used to diagnose this disease. McAdam et al. introduced the clinical criteria for RP in 1976. These clinical criteria were later expanded by Damiani et al. in 1979 and finally modified by Michet et al. in 1986. See the following table for these diagnostic clinical criteria and the number of conditions required for an official diagnosis. Diagnosis: Laboratory findings Patients presenting with acute episodes often have high levels of inflammatory markers such as erythrocyte sedimentation rate (ESR) or C-reactive protein (CRP). Patients often have cartilage-specific antibodies present during acute relapsing polychondritis episodes. Antinuclear antibody reflexive panel, rheumatoid factor, and antiphospholipid antibodies are tests that may assist in the evaluation and diagnosis of autoimmune connective-tissue diseases. Imaging studies FDG positron emission tomography (PET) may be useful to detect the condition early. Other imaging studies including MRI, CT scans, and X-rays may reveal inflammation and/or damaged cartilage, facilitating diagnosis.
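The criteria-based diagnosis described above is essentially a counting rule, which can be illustrated with a toy sketch. Since the referenced table is not reproduced here, the six criterion names below are the ones commonly attributed to McAdam et al. (with diagnosis when at least three are present); treat both the list and the threshold as an illustrative assumption, not a clinical tool.

```python
# Commonly cited McAdam criteria (assumed here; the source table is omitted).
MCADAM_CRITERIA = [
    "bilateral auricular chondritis",
    "nonerosive seronegative inflammatory polyarthritis",
    "nasal chondritis",
    "ocular inflammation",
    "respiratory tract chondritis",
    "audiovestibular damage",
]

def meets_mcadam(findings, threshold=3):
    """Count how many of the criteria appear among the findings."""
    return sum(c in findings for c in MCADAM_CRITERIA) >= threshold

print(meets_mcadam({"nasal chondritis", "ocular inflammation",
                    "bilateral auricular chondritis"}))  # True
```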
Special tests Biopsy Biopsy of the cartilage tissue (for example, ear) may show tissue inflammation and destruction, and may help with the diagnosis. Biopsy of cartilage in patients with relapsing polychondritis may demonstrate chondrolysis, chondritis, and perichondritis. Pulmonary function tests It is useful to do a full set of pulmonary function tests, including inspiratory and expiratory flow-volume loops. Patterns consistent with either extrathoracic or intrathoracic obstruction (or both) may occur in this disease. Pulmonary function tests (flow-volume loops) provide a useful noninvasive means of quantifying and following the degree of extrathoracic airway obstruction in relapsing polychondritis. Differential diagnosis A differential diagnosis should be considered for the following main RP manifestations. Treatment: There are no prospective randomized controlled trials studying therapies for relapsing polychondritis. Evidence for the efficacy of treatments is based on many case reports and series of small groups of patients. There are case reports that non-steroidal anti-inflammatories are effective for mild disease and that corticosteroids are effective for treatment of severe relapsing polychondritis. There are multiple case reports that dapsone is effective in doses from 25 mg/day to 200 mg/day. Corticosteroid-sparing medications such as azathioprine or methotrexate may be used to minimize steroid doses and limit the side effects of steroids. For severe disease, cyclophosphamide is often given in addition to high dose intravenous steroids. Prognosis: Many individuals have mild symptoms, which recur infrequently, while others may have persistent problems that become debilitating or life-threatening. Epidemiology: Relapsing polychondritis occurs as often in men as in women. In a Mayo Clinic series, the annual incidence was about 3.5 cases per million. The highest incidence is between the ages of 40 and 50 years, but it may occur at any age.
History: In 1923, Rudolf Jaksch von Wartenhorst first discovered relapsing polychondritis while working in Prague and initially named it Polychondropathia. His patient was a 32-year-old male brewer who presented with fever and asymmetric polyarthritis, and whose ears and nose showed swelling, deformity and pain. Biopsy of nasal cartilage revealed loss of the cartilage matrix and a hyperplastic mucous membrane. Jaksch von Wartenhorst considered this an undescribed degenerative disorder of cartilage and named it Polychondropathia. He even took his patient's occupation into consideration and related the cause to excessive alcohol intake. Since then, the disease has received many names. The following table shows the history of the nomenclature of relapsing polychondritis. The current name, Relapsing Polychondritis (RP), was introduced by Pearson and his colleagues in 1960 to emphasize the episodic course of the disease. Research: There has been little research on neurological problems related to RP. If these cartilage structures get inflamed, they could press against nerves and cause a variety of problems seen in RP, such as peripheral neuropathy.
**Scarpa's shoe** Scarpa's shoe: Scarpa's shoe was an 18th-century mechanical device developed to treat clubfoot. It never became widely accepted. It was designed by Antonio Scarpa, an Italian anatomist and surgeon.
**Mirela Delibegovic** Mirela Delibegovic: Mirela Delibegovic is a British pharmacologist/biochemist who is Dean for Industrial Engagement in Research & Knowledge Transfer and Director of the Aberdeen Cardiovascular and Diabetes Centre. She holds a Personal Chair in Diabetes Physiology and Signalling at the Institute of Medical Sciences at the University of Aberdeen. During the COVID-19 pandemic, Delibegovic used artificial intelligence to develop technologies that would allow mass-screening for coronavirus disease 2019. Early life and education: Delibegovic is from Tuzla in Bosnia and Herzegovina. She grew up during the Bosnian War, which forced her family apart. In the early nineties, she moved to Scotland and finished her secondary school education at George Heriot's School in Edinburgh. Delibegovic studied pharmacology at the University of Edinburgh. In the final year of her undergraduate degree Delibegovic moved to Essex, where she did her undergraduate final year project at GlaxoSmithKline on novel anti-diabetes drugs. She completed her doctoral research with Prof Dame Patricia Cohen at the University of Dundee Medical Research Council Protein Phosphorylation Unit. Here she studied the way that enzymes such as protein phosphatase 1 influence diabetes development. She was supported by a Royal Society studentship. She has said that she was interested in diabetes because of family history and the prevalence of Type 2 diabetes in Bosnia and Herzegovina. During her doctoral research, Delibegovic worked closely with pharmaceutical companies to translate her research to the real world. In 2003 she was awarded an American Heart Association personal fellowship to study the role of PTPN1 in glucose homeostasis at Harvard Medical School in Boston, USA. She spent four years in Boston, working with Prof Benjamin Neel on mouse models of insulin resistance.
Research and career: In 2007 Delibegovic returned to the United Kingdom, where she was awarded a Research Councils UK 5-year tenure track fellowship to investigate obesity and ageing at the University of Aberdeen. She was made Professor in Diabetes Physiology in 2015, at the age of 38. Her research has focussed on the PTP1B phosphatase, the molecular mechanisms that cause diabetes, and the relationship between diabetes and Alzheimer's disease. She has demonstrated that PTP1B can be used for targeted treatments, reaching the cells of specific organs without causing side effects. In 2017 Delibegovic demonstrated a novel pharmaceutical, Trodusquemine, that could be used to treat type 2 diabetes and breast cancer. She went on to show that a single dose of Trodusquemine, the PTP1B inhibitor, could be used to reverse the effects of atherosclerosis. During the COVID-19 pandemic, Delibegovic, in collaboration with the SME Vertebrate Antibodies Ltd and NHS Grampian, obtained funding from the Scottish Government Chief Scientist Office to develop a diagnostic test that could support mass screening for coronavirus disease. Her long-term aim was to use artificial intelligence to identify which parts of the severe acute respiratory syndrome coronavirus 2 activated the body's immune system. At the time, other coronavirus disease tests available in the United Kingdom would not support rapid deployment, and several were unreliable. In May 2020, the tests developed by Delibegovic and her team were still in development; by March 2021, they were complete and available.
Awards and honours: 2011 Royal Society of Edinburgh Young Academy of Scotland 2018 Wellcome Trust Prize for Outstanding Achievement in Public Engagement in the Biomedical Sciences 2018 "Super Zena" Award 2022 elected Fellow of the Royal Society of Edinburgh (FRSE) Selected publications: Bence, Kendra K.; Delibegovic, Mirela; Xue, Bingzhong; Gorgun, Cem Z.; Hotamisligil, Gokhan S.; Neel, Benjamin G.; Kahn, Barbara B. (2006). "Neuronal PTP1B regulates body weight, adiposity and leptin action". Nature Medicine. 12 (8): 917–924. doi:10.1038/nm1435. ISSN 1546-170X. PMID 16845389. S2CID 10654045. Delibegovic, Mirela; Zimmer, Derek; Kauffman, Caitlin; Rak, Kimberly; Hong, Eun-Gyoung; Cho, You-Ree; Kim, Jason K.; Kahn, Barbara B.; Neel, Benjamin G.; Bence, Kendra K. (2009-03-01). "Liver-Specific Deletion of Protein-Tyrosine Phosphatase 1B (PTP1B) Improves Metabolic Syndrome and Attenuates Diet-Induced Endoplasmic Reticulum Stress". Diabetes. 58 (3): 590–599. doi:10.2337/db08-0913. ISSN 0012-1797. PMC 2646057. PMID 19074988. Delibegovic, Mirela; Bence, Kendra K.; Mody, Nimesh; Hong, Eun-Gyoung; Ko, Hwi Jin; Kim, Jason K.; Kahn, Barbara B.; Neel, Benjamin G. (2007-11-01). "Improved Glucose Homeostasis in Mice with Muscle-Specific Deletion of Protein-Tyrosine Phosphatase 1B". Molecular and Cellular Biology. 27 (21): 7727–7734. doi:10.1128/MCB.00959-07. ISSN 0270-7306. PMC 2169063. PMID 17724080. Personal life: Delibegovic met her husband whilst a graduate student at the University of Dundee.
**Sample mean and covariance** Sample mean and covariance: The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables. Sample mean and covariance: The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as an estimator for the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases. Sample mean and covariance: The term "sample mean" can also be used to refer to a vector of average values when the statistician is looking at the values of several variables in the sample, e.g. the sales, profits, and employees of a sample of Fortune 500 companies. In this case, there is not just a sample variance for each variable but a sample variance-covariance matrix (or simply covariance matrix) showing also the relationship between each pair of variables. This would be a 3×3 matrix when 3 variables are being considered. The sample covariance is useful in judging the reliability of the sample means as estimators and is also useful as an estimate of the population covariance matrix. 
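The claim above that the standard error falls as the sample grows can be checked numerically. Here `standard_error` is a hypothetical helper implementing the usual formula s/√n with only the standard library:

```python
import statistics

def standard_error(sample):
    """Standard error of the sample mean: sample std dev over sqrt(n)."""
    return statistics.stdev(sample) / len(sample) ** 0.5

# Repeating the same values keeps the spread roughly constant while
# multiplying the number of observations, so the SE shrinks by about sqrt(8):
small = [4, 7, 5, 9, 6, 5, 8, 4]
large = small * 8   # illustrative: 8x the observations, similar spread
print(standard_error(small) > standard_error(large))  # True
```

The repetition trick is only an illustration of the 1/√n factor; real samples would of course draw fresh observations.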
Sample mean and covariance: Due to their ease of calculation and other desirable characteristics, the sample mean and sample covariance are widely used in statistics to represent the location and dispersion of the distribution of values in the sample, and to estimate the values for the population. Definition of the sample mean: The sample mean is the average of the values of a variable in a sample, which is the sum of those values divided by the number of values. Using mathematical notation, if a sample of N observations on variable X is taken from the population, the sample mean is $\bar{X} = \frac{1}{N}\sum_{i=1}^{N} X_i$. Definition of the sample mean: Under this definition, if the sample (1, 4, 1) is taken from the population (1, 1, 3, 4, 0, 2, 1, 0), then the sample mean is $\bar{x} = (1+4+1)/3 = 2$, as compared to the population mean of $12/8 = 1.5$. Even if a sample is random, it is rarely perfectly representative, and other samples would have other sample means even if the samples were all from the same population. The sample (2, 1, 0), for example, would have a sample mean of 1. Definition of the sample mean: If the statistician is interested in K variables rather than one, each observation having a value for each of those K variables, the overall sample mean consists of K sample means for individual variables. Let $x_{ij}$ be the ith independently drawn observation (i = 1, ..., N) on the jth random variable (j = 1, ..., K). These observations can be arranged into N column vectors, each with K entries, with the K×1 column vector giving the ith observations of all variables being denoted $\mathbf{x}_i$ (i = 1, ..., N). Definition of the sample mean: The sample mean vector $\bar{\mathbf{x}}$ is a column vector whose jth element $\bar{x}_j$ is the average value of the N observations of the jth variable: $\bar{x}_j = \frac{1}{N}\sum_{i=1}^{N} x_{ij}, \quad j = 1, \ldots, K$.
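The scalar definition above can be sketched in a few lines of Python (the function name `sample_mean` is our own, for illustration):

```python
# Sample mean: the sum of the sampled values divided by their count.
def sample_mean(xs):
    return sum(xs) / len(xs)

# The sample (1, 4, 1) from the text versus its full population.
sample = [1, 4, 1]
population = [1, 1, 3, 4, 0, 2, 1, 0]

print(sample_mean(sample))      # 2.0
print(sample_mean(population))  # 1.5
```

As the text notes, the sample mean (2) is an estimate of, but generally differs from, the population mean (1.5).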
Thus, the sample mean vector contains the average of the observations for each variable, and is written $\bar{\mathbf{x}} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_i = \begin{bmatrix} \bar{x}_1 \\ \vdots \\ \bar{x}_j \\ \vdots \\ \bar{x}_K \end{bmatrix}$. Definition of sample covariance: The sample covariance matrix is a K-by-K matrix $Q = [q_{jk}]$ with entries $q_{jk} = \frac{1}{N-1}\sum_{i=1}^{N}(x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)$, where $q_{jk}$ is an estimate of the covariance between the jth variable and the kth variable of the population underlying the data. In terms of the observation vectors, the sample covariance is $Q = \frac{1}{N-1}\sum_{i=1}^{N}(\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^{\mathrm{T}}$. Alternatively, the observation vectors can be arranged as the columns of a matrix $F = [\mathbf{x}_1\ \mathbf{x}_2\ \ldots\ \mathbf{x}_N]$, which is a matrix of K rows and N columns. Definition of sample covariance: Here, the sample covariance matrix can be computed as $Q = \frac{1}{N-1}(F - \bar{\mathbf{x}}\,\mathbf{1}_N^{\mathrm{T}})(F - \bar{\mathbf{x}}\,\mathbf{1}_N^{\mathrm{T}})^{\mathrm{T}}$, where $\mathbf{1}_N$ is an N by 1 vector of ones. If the observations are arranged as rows instead of columns, so that $\bar{\mathbf{x}}$ is now a 1×K row vector and $M = F^{\mathrm{T}}$ is an N×K matrix whose column j is the vector of N observations on variable j, then applying transposes in the appropriate places yields $Q = \frac{1}{N-1}(M - \mathbf{1}_N\bar{\mathbf{x}})^{\mathrm{T}}(M - \mathbf{1}_N\bar{\mathbf{x}})$. Definition of sample covariance: Like covariance matrices for random vectors, sample covariance matrices are positive semi-definite. To prove it, note that for any matrix A the matrix $A^{\mathrm{T}}A$ is positive semi-definite. Furthermore, a covariance matrix is positive definite if and only if the rank of the $\mathbf{x}_i - \bar{\mathbf{x}}$ vectors is K. Unbiasedness: The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector $\mathbf{X}$, a row vector whose jth element (j = 1, ..., K) is one of the random variables. The sample covariance matrix has $N-1$ in the denominator rather than $N$ due to a variant of Bessel's correction: in short, the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation since it is defined in terms of all observations.
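The entrywise covariance formula can be sketched in pure Python (function names and the example data are illustrative, not from the source):

```python
# Sample mean vector, then the unbiased sample covariance matrix
# Q = [q_jk], with N-1 in the denominator, computed entry by entry.
def sample_mean_vector(obs):
    n, k = len(obs), len(obs[0])
    return [sum(x[j] for x in obs) / n for j in range(k)]

def sample_cov_matrix(obs):
    n, k = len(obs), len(obs[0])
    xbar = sample_mean_vector(obs)
    return [[sum((x[j] - xbar[j]) * (x[m] - xbar[m]) for x in obs) / (n - 1)
             for m in range(k)] for j in range(k)]

# Three observations (N = 3) of two variables (K = 2).
obs = [(1.0, 2.0), (4.0, 6.0), (1.0, 1.0)]
print(sample_mean_vector(obs))  # [2.0, 3.0]
print(sample_cov_matrix(obs))   # [[3.0, 4.5], [4.5, 7.0]]
```

Note that the result is symmetric ($q_{jk} = q_{kj}$), as a covariance matrix must be.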
If the population mean $\operatorname{E}(X)$ is known, the analogous unbiased estimate $q_{jk} = \frac{1}{N}\sum_{i=1}^{N}(x_{ij} - \operatorname{E}(X_j))(x_{ik} - \operatorname{E}(X_k))$, using the population mean, has $N$ in the denominator. This is an example of why in probability and statistics it is essential to distinguish between random variables (upper case letters) and realizations of the random variables (lower case letters). Unbiasedness: The maximum likelihood estimate of the covariance, $q_{jk} = \frac{1}{N}\sum_{i=1}^{N}(x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)$, for the Gaussian distribution case has $N$ in the denominator as well. The ratio of 1/N to 1/(N − 1) approaches 1 for large N, so the maximum likelihood estimate approximately equals the unbiased estimate when the sample is large. Distribution of the sample mean: For each random variable, the sample mean is a good estimator of the population mean, where a "good" estimator is defined as being efficient and unbiased. Of course the estimator will likely not be the true value of the population mean since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is a random variable, not a constant, and consequently has its own distribution. For a random sample of N observations on the jth random variable, the sample mean's distribution itself has mean equal to the population mean $\operatorname{E}(X_j)$ and variance equal to $\sigma_j^2/N$, where $\sigma_j^2$ is the population variance. Distribution of the sample mean: The arithmetic mean of a population, or population mean, is often denoted μ. The sample mean $\bar{x}$ (the arithmetic mean of a sample of values drawn from the population) makes a good estimator of the population mean, as its expected value is equal to the population mean (that is, it is an unbiased estimator). The sample mean is a random variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution.
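The N versus N − 1 denominators can be checked numerically; the sketch below uses a `ddof` ("delta degrees of freedom") parameter in the spirit of common statistics libraries, an illustrative convention rather than anything from the source:

```python
# Variance with a configurable denominator: ddof=1 gives the unbiased
# estimate (denominator N-1), ddof=0 the maximum likelihood estimate (N).
def variance(xs, ddof):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - ddof)

xs = [1.0, 4.0, 1.0]
print(variance(xs, ddof=1))  # 3.0 (unbiased)
print(variance(xs, ddof=0))  # 2.0 (MLE, i.e. (N-1)/N times the unbiased value)
```

For N = 3 the ratio is 2/3; as N grows the two estimates converge, as stated above.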
For a random sample of n independent observations, the expected value of the sample mean is $\operatorname{E}(\bar{x}) = \mu$ and the variance of the sample mean is $\operatorname{var}(\bar{x}) = \frac{\sigma^2}{n}$. Distribution of the sample mean: If the samples are not independent, but correlated, then special care has to be taken in order to avoid the problem of pseudoreplication. If the population is normally distributed, then the sample mean is normally distributed as follows: $\bar{x} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right)$. If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and $\sigma^2/n < +\infty$. This is a consequence of the central limit theorem. Weighted samples: In a weighted sample, each vector $\mathbf{x}_i$ (each set of single observations on each of the K random variables) is assigned a weight $w_i \geq 0$. Without loss of generality, assume that the weights are normalized: $\sum_{i=1}^{N} w_i = 1$. (If they are not, divide the weights by their sum.) Then the weighted mean vector $\bar{\mathbf{x}}$ is given by $\bar{\mathbf{x}} = \sum_{i=1}^{N} w_i \mathbf{x}_i$, and the elements $q_{jk}$ of the weighted covariance matrix $Q$ are $q_{jk} = \frac{1}{1 - \sum_{i=1}^{N} w_i^2}\sum_{i=1}^{N} w_i (x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)$. If all weights are the same, $w_i = 1/N$, the weighted mean and covariance reduce to the (unbiased) sample mean and covariance mentioned above. Criticism: The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location, and interquartile range (IQR) for dispersion. Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean.
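The weighted formulas can likewise be sketched for a single variable (function names are our own). With equal normalized weights $w_i = 1/N$, the correction factor $1/(1 - \sum_i w_i^2)$ turns the weighted sum into the familiar $1/(N-1)$ estimate:

```python
# Weighted mean and weighted variance with the 1/(1 - sum(w_i^2))
# correction, for one variable. Weights are assumed normalized (sum to 1).
def weighted_mean(xs, ws):
    return sum(w * x for w, x in zip(ws, xs))

def weighted_var(xs, ws):
    m = weighted_mean(xs, ws)
    correction = 1.0 - sum(w * w for w in ws)
    return sum(w * (x - m) ** 2 for w, x in zip(ws, xs)) / correction

xs = [1.0, 4.0, 1.0]
ws = [1 / 3] * 3                       # equal, normalized weights
print(round(weighted_var(xs, ws), 6))  # 3.0, the unbiased sample variance
```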
**Resident monitor** Resident monitor: In computing, a resident monitor is a type of system software program that was used in many early computers from the 1950s to 1970s. It can be considered a precursor to the operating system. The name is derived from a program which is always present in the computer's memory, thus being "resident". Because memory was very limited on those systems, the resident monitor was often little more than a stub that would gain control at the end of a job and load a non-resident portion to perform required job cleanup and setup tasks. Resident monitor: On a general-use computer using punched card input, the resident monitor governed the machine before and after each job control card was executed, loaded and interpreted each control card, and acted as a job sequencer for batch processing operations. The resident monitor could clear memory from the last used program (with the exception of itself), load programs, search for program data and maintain standard input-output routines in memory. Similar system software layers were typically in use in the early days of the later minicomputers and microcomputers before they gained the power to support full operating systems. Current use: Resident monitor functionality is present in many embedded systems, boot loaders, and various embedded command lines. The original functions present in all resident monitors are augmented with present-day functions dealing with boot-time hardware, disks, Ethernet, wireless controllers, etc. Typically, these functions are accessed using a serial terminal or a physical keyboard and display, if attached. Such a resident monitor is frequently called a debugger, boot loader, command-line interface (CLI), etc. The original term for a serial-accessed or terminal-accessed resident monitor is not frequently used, although the functionality has remained the same and has been augmented.
Current use: Typical functions of a resident monitor include examining and editing RAM and/or ROM (including flash EEPROM) and sometimes special function registers, the ability to jump into code at a specified address, the ability to call code at a given address, the ability to fill an address range with a constant such as 0x00, and several others. More advanced functions include local disassembly to processor assembly language instructions, and even assembly and writing into flash memory from code typed by the operator. Also, code can be downloaded and uploaded from various sources, and some advanced monitors support a range of network protocols to do so, as well as formatting and reading FAT and other filesystems, typically from flash memory on USB or CompactFlash buses. Current use: For embedded processors, many "in-circuit debuggers" with a software-only mode use resident monitor concepts and functions that are frequently accessed by a GUI IDE. They are not different from the traditional serial-line-accessed resident monitor command lines, but users are not aware of this. At the latest, developers and advanced users will discover these low-level embedded resident monitor functions when writing low-level API code on a host to communicate with an embedded target for debugging and running code test cases. Current use: Several current microcontrollers have resident serial monitors or extended boot loaders available as options to be used by developers. Many are open source. Some examples are PAULMON2, AVR DebugMonitor and the Bamo128 Arduino boot loader and monitor. In general, most current resident monitors for embedded computing can be compiled according to various memory constraints, from small and minimalistic to large, filling up to 25% of the code space available on an AVR ATmega328 processor with 32 kilobytes of flash memory, for example.
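To make the flavor of these commands concrete, here is a toy sketch in Python over a simulated memory array; the single-letter command syntax is an invented illustration, not any particular monitor's interface:

```python
# Toy resident-monitor-style command interpreter: examine, write, and
# fill commands acting on a simulated 256-byte memory. Addresses and
# values are given in hexadecimal, as is typical for such monitors.
memory = bytearray(256)

def monitor(cmd):
    parts = cmd.split()
    if parts[0] == "e":                      # e <addr>: examine one byte
        return memory[int(parts[1], 16)]
    if parts[0] == "w":                      # w <addr> <byte>: write a byte
        memory[int(parts[1], 16)] = int(parts[2], 16)
    elif parts[0] == "f":                    # f <start> <end> <byte>: fill range
        lo, hi, val = (int(p, 16) for p in parts[1:4])
        for addr in range(lo, hi + 1):
            memory[addr] = val

monitor("f 00 0f 00")   # fill 0x00-0x0F with the constant 0x00
monitor("w 0a 42")      # edit: write 0x42 at address 0x0A
print(monitor("e 0a"))  # 66 (0x42)
```

A real monitor would read such commands from a serial line and layer jump/call, disassembly, and upload/download commands on top of this basic read-eval loop.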
Current use: In many cases resident monitors can be a step up from "printf debugging" and are very helpful when developing on a budget that does not allow a proper hardware in-circuit debugger (ICD) to be used.
**Mask** Mask: A mask is an object normally worn on the face, typically for protection, disguise, performance, or entertainment, and often employed for rituals and rites. Masks have been used since antiquity for both ceremonial and practical purposes, as well as in the performing arts and for entertainment. They are usually worn on the face, although they may also be positioned for effect elsewhere on the wearer's body. Mask: In art history, especially sculpture, "mask" is the term for a face without a body that is not modelled in the round (which would make it a "head"), but for example appears in low relief. Etymology: The word "mask" appeared in English in the 1530s, from Middle French masque "covering to hide or guard the face", derived in turn from Italian maschera, from Medieval Latin masca "mask, specter, nightmare". This word is of uncertain origin, perhaps from Arabic maskharah مَسْخَرَۃٌ "buffoon", from the verb sakhira "to ridicule". However, it may also come from Provençal mascarar "to black (the face)" (or the related Catalan mascarar, Old French mascurer). This in turn is of uncertain origin – perhaps from a Germanic source akin to English "mesh", but perhaps from mask- "black", a borrowing from a pre-Indo-European language. One German author claims the word "mask" is originally derived from the Spanish más que la cara (literally, "more than the face" or "added face"), which evolved to "máscara", while the Arabic "maskharat" – referring to the buffoonery which is possible only by disguising the face – would be based on these Spanish roots. Other related forms are Hebrew masecha = "mask"; Arabic maskhara مَسْخَرَ = "he ridiculed, he mocked", and masakha مَسَخَ = "he transformed" (transitive). History: The use of masks in rituals or ceremonies is a very ancient human practice across the world, although masks can also be worn for protection, in hunting, in sports, in feasts, or in wars – or simply used as ornamentation.
Some ceremonial or decorative masks were not designed to be worn. Although the religious use of masks has waned, masks are sometimes used in drama therapy or psychotherapy. One of the challenges in anthropology is finding the precise derivation of human culture and early activities; the invention and use of the mask is only one area of unsolved inquiry. The use of masks dates back several millennia. It is conjectured that the first masks may have been used by primitive people to associate the wearer with some kind of unimpeachable authority, such as a deity, or to otherwise lend credence to the person's claim on a given social role. History: The earliest known anthropomorphic artwork is circa 30,000–40,000 years old. The use of masks is demonstrated graphically at some of these sites. Insofar as masks involved the use of war-paint, leather, vegetative material, or wooden material, such masks failed to be preserved; however, they are visible in paleolithic cave drawings, of which dozens have been preserved. At the Neanderthal Roche-Cotard site in France, a flintstone likeness of a face was found that is approximately 35,000 years old, but it is not clear whether it was intended as a mask. In the Greek bacchanalia and the Dionysus cult, which involved the use of masks, the ordinary controls on behaviour were temporarily suspended, and people cavorted in merry revelry outside their ordinary rank or status. René Guénon claims that in the Roman saturnalia festivals, the ordinary roles were often inverted. Sometimes a slave or a criminal was temporarily granted the insignia and status of royalty, only to be killed after the festival ended. The Carnival of Venice, in which all are equal behind their masks, dates back to 1268 AD.
The use of carnivalesque masks in the Jewish Purim festivities probably originated in the late 15th century, although some Jewish authors claim it has always been part of Judaic tradition. The North American Iroquois tribes used masks for healing purposes (see False Face Society). In the Himalayas, masks functioned above all as mediators of supernatural forces. Yup'ik masks could be small 3-inch (7.6 cm) finger masks, but also 10-kilogram (22 lb) masks hung from the ceiling or carried by several people. Masks have been created with plastic surgery for mutilated soldiers. Masks in various forms – sacred, practical, or playful – have played a crucial historical role in the development of understandings about "what it means to be human", because they permit the imaginative experience of "what it is like" to be transformed into a different identity (or to affirm an existing social or spiritual identity). Not all cultures have known the use of masks, but most of them have. Masks in performance: Throughout the world, masks are used for their expressive power as a feature of masked performance – both ritually and in various theatre traditions. The ritual and theatrical definitions of mask usage frequently overlap and merge but still provide a useful basis for categorisation. The image of juxtaposed Comedy and Tragedy masks is widely used to represent the Performing Arts, and specifically drama. Masks in performance: In many dramatic traditions including the theatre of ancient Greece, the classical Noh drama of Japan (14th century to present), the traditional Lhamo drama of Tibet, Talchum in Korea, and the Topeng dance of Indonesia, masks were or are typically worn by all the performers, with several different types of mask used for different types of character. Masks in performance: In Ancient Rome, the word persona meant 'a mask'; it also referred to an individual who had full Roman citizenship.
A citizen could demonstrate his or her lineage through imagines, death masks of the ancestors. These were wax casts kept in a lararium, the family shrine. Rites of passage, such as initiation of young members of the family, or funerals, were carried out at the shrine under the watch of the ancestral masks. At funerals, professional actors would wear these masks to perform deeds of the lives of the ancestors, thus linking the role of mask as a ritual object and in theatre. Masks in performance: Masks are a familiar and vivid element in many folk and traditional pageants, ceremonies, rituals, and festivals, and are often of an ancient origin. The mask is normally a part of a costume that adorns the whole body and embodies a tradition important to the religious and/or social life of the community as a whole or a particular group within the community. Masks are used almost universally and maintain their power and mystery both for their wearers and their audience. The continued popularity of wearing masks at carnival, and for children at parties and for festivals such as Halloween, are good examples. Nowadays these are usually mass-produced plastic masks, often associated with popular films, television programmes, or cartoon characters – they are, however, reminders of the enduring power of pretence and play and the power and appeal of masks. Ritual masks: Ritual masks occur throughout the world, and although they tend to share many characteristics, highly distinctive forms have developed. The function of the masks may be magical or religious; they may appear in rites of passage or as a make-up for a form of theatre. Equally, masks may disguise a penitent or preside over important ceremonies; they may help mediate with spirits, or offer a protective role to the members of a society who use their powers.
Biologist Jeremy Griffith has suggested that ritual masks, as representations of the human face, are extremely revealing of the two fundamental aspects of the human psychological condition: firstly, the repression of a cooperative, instinctive self or soul; and secondly, the extremely angry state of the unjustly condemned conscious thinking egocentric intellect. In parts of Australia, giant totem masks cover the body. Ritual masks: Africa There are a wide variety of masks used in Africa. In West Africa, masks are used in masquerades that form part of religious ceremonies enacted to communicate with spirits and ancestors. Examples are the masquerades of the Yoruba, Igbo, and Edo cultures, including Egungun Masquerades and Northern Edo Masquerades. The masks are usually carved with extraordinary skill and variety by artists who will usually have received their training as an apprentice to a master carver – frequently it is a tradition that has been passed down within a family through many generations. Such an artist holds a respected position in tribal society because of the work that he or she creates, embodying not only complex craft techniques but also spiritual/social and symbolic knowledge. African masks are also used in the Mas or Masquerade of the Caribbean Carnival. Ritual masks: Djolé (also known as Jolé or Yolé) is a mask-dance of the Temne people in Sierra Leone. Males wear the mask, although it depicts a female. Ritual masks: Many African masks represent animals. Some African tribes believe that the animal masks can help them communicate with the spirits who live in forests or open savannas. People of Burkina Faso known as the Bwa and Nuna call to the spirit to stop destruction. The Dogon of Mali have complex religions that also have animal masks. Their three main cults use seventy-eight different types of masks. Most of the ceremonies of the Dogon culture are secret, although the antelope dance is shown to non-Dogons.
The antelope masks are rough rectangular boxes with several horns coming out of the top. The Dogons are expert agriculturists and the antelope symbolizes a hard-working farmer. Another culture that has a very rich agricultural tradition is the Bamana people of Mali. The antelope (called Chiwara) is believed to have taught man the secrets of agriculture. Although the Dogon and Bamana people both believe the antelope symbolises agriculture, they interpret elements of the masks differently. To the Bamana people, swords represent the sprouting of grain. Ritual masks: Masks may also indicate a culture's ideal of feminine beauty. The masks of the Punu of Gabon have highly arched eyebrows, almost almond-shaped eyes, and a narrow chin. The raised strip running from both sides of the nose to the ears represents jewellery. A dark black hairstyle tops the mask off. The whiteness of the face represents the whiteness and beauty of the spirit world. Only men wear the masks and perform the dances with high stilts, despite the fact that the masks represent women. One of the most beautiful representations of female beauty is the Idia's Mask of Benin in present-day Edo State of Nigeria. It is believed to have been commissioned by a king of Benin in memory of his mother. To honor his dead mother, the king wore the mask on his hip during special ceremonies. The Senoufo people of the Ivory Coast represent tranquility by making masks with eyes half-shut and lines drawn near the mouth. The Temne of Sierra Leone use masks with small eyes and mouths to represent humility and humbleness. They represent wisdom by making a bulging forehead. Other masks that have exaggerated long faces and broad foreheads symbolize the soberness of one's duty that comes with power. War masks are also popular. The Grebo of the Ivory Coast and Liberia carve masks with round eyes to represent alertness and anger, with the straight nose representing unwillingness to retreat.
Ritual masks: Today, the qualities of African art are beginning to be more understood and appreciated. However, most African masks are now being produced for the tourist trade. Although they often show skilled craftsmanship, they nearly always lack the spiritual character of the traditional tribal masks. Ritual masks: Oceania The variety and beauty of the masks of Melanesia are almost as highly developed as in Africa. It is a culture where ancestor worship is dominant and religious ceremonies are devoted to ancestors. Inevitably, many of the mask types relate to use in these ceremonies and are linked with the activities of secret societies. The mask is regarded as an instrument of revelation, giving form to the sacred. This is often accomplished by linking the mask to an ancestral presence, and thus bringing the past into the present. Ritual masks: As a culture of scattered islands and peninsulas, Melanesian mask forms have developed in a highly diversified fashion, with a great deal of variety in their construction and aesthetic. In Papua New Guinea, six-metre-high totem masks are placed to protect the living from spirits; whereas the duk-duk and tubuan masks of New Guinea are used to enforce social codes by intimidation. They are conical masks, made from cane and leaves. Ritual masks: North America Arctic Coastal groups have tended towards simple religious practice but a highly evolved and rich mythology, especially concerning hunting. In some areas, annual shamanic ceremonies involved masked dances, and these strongly abstracted masks are arguably the most striking artifacts produced in this region. Ritual masks: Inuit groups vary widely and do not share a common mythology or language. Not surprisingly, their mask traditions are also often different, although their masks are often made out of driftwood, animal skins, bones, and feathers.
In some areas, Inuit women use finger masks during storytelling and dancing. Pacific Northwest Coastal indigenous groups were generally highly skilled woodworkers. Their masks were often masterpieces of carving, sometimes with movable jaws, or a mask within a mask, and parts moved by pulling cords. The carving of masks was an important feature of wood craft, along with many other features that often combined the utilitarian with the symbolic, such as shields, canoes, poles, and houses. Ritual masks: Woodland tribes, especially in the North-East and around the Great Lakes, cross-fertilized culturally with one another. The Iroquois made spectacular wooden ‘false face’ masks, used in healing ceremonies and carved from living trees. These masks appear in a great variety of shapes, depending on their precise function. Ritual masks: Pueblo craftsmen produced impressive work for masked religious ritual, especially the Hopi and Zuni. The kachinas, god/spirits, frequently take the form of highly distinctive and elaborate masks that are used in ritual dances. These are usually made of leather with appendages of fur, feathers, or leaves. Some cover the face, some the whole head, and they are often highly abstracted forms. Navajo masks appear to be inspired by the Pueblo prototypes. In more recent times, masking is a common feature of Mardi Gras traditions, most notably in New Orleans. Costumes and masks (originally inspired by masquerade balls) are frequently worn by krewe members on Mardi Gras Day. Laws against concealing one's identity with a mask are suspended for the day. Ritual masks: Latin America Distinctive styles of masks began to emerge in pre-Hispanic America about 1200 BC, although there is evidence of far older mask forms. In the Andes, masks were used to dress the faces of the dead. These were originally made of fabric, but later burial masks were sometimes made of beaten copper or gold, and occasionally of clay.
For the Aztecs, human skulls were prized as war trophies, and skull masks were not uncommon. Masks were also used as part of court entertainments, possibly combining political with religious significance. Ritual masks: In post-colonial Latin America, pre-Columbian traditions merged with Christian rituals, and syncretic masquerades and ceremonies, such as All Souls/Day of the Dead, developed, despite efforts of the Church to stamp out the indigenous traditions. Masks remain an important feature of popular carnivals and religious dances, such as The Dance of the Moors and Christians. Mexico, in particular, retains a great deal of creativity in the production of masks, encouraged by collectors. Wrestling matches, where it is common for the participants to wear masks, are very popular, and many of the wrestlers can be considered folk heroes. For instance, the popular wrestler El Santo continued wearing his mask after retirement, revealed his face briefly only in old age, and was buried wearing his silver mask. Ritual masks: Asia China In China, masks are thought to have originated in ancient religious ceremonies. Images of people wearing masks have been found in rock paintings along the Yangtze. Later mask forms bring together myths and symbols from shamanism and Buddhism. Ritual masks: Shigong dance masks were used in shamanic rituals to thank the gods, while nuo dance masks protected from bad spirits. Wedding masks were used to pray for good luck and a lasting marriage, and "Swallowing Animal" masks were associated with protecting the home and symbolised the "swallowing" of disaster. Opera masks were used in a basic "common" form of opera performed without a stage or backdrops. These led to the colourful facial patterns seen in today's Peking opera. Ritual masks: India/Sri Lanka/Indo-China Masked characters, usually divinities, are a central feature of Indian dramatic forms, many based on depicting the epics Mahabharata and Ramayana.
Countries that have had strong Indian cultural influences – Cambodia, Burma, Indonesia, Thailand, and Laos – have developed the Indian forms, combined with local myths, and developed their own characteristic styles. Ritual masks: The masks are usually highly exaggerated and formalised, and share an aesthetic with the carved images of monstrous heads that dominate the facades of Hindu and Buddhist temples. These faces or Kirtimukhas, 'Visages of Glory', are intended to ward off evil and are associated with the animal world as well as the divine. During ceremonies, these visages are given active form in the great mask dramas of the South and South-eastern Asian region. Ritual masks: Indonesia In Indonesia, the mask dance predates Hindu-Buddhist influences. It is believed that the use of masks is related to the cult of the ancestors, which considered dancers the interpreters of the gods. Native Indonesian tribes such as the Dayak have the masked Hudoq dance that represents nature spirits. In Java and Bali, masked dance is commonly called topeng and demonstrates Hindu influences, as it often features epics such as the Ramayana and Mahabharata. The native story of Panji is also popular in topeng masked dance. Indonesian topeng dance styles are widely distributed, such as topeng Bali, Cirebon, Betawi, Malang, Yogyakarta, and Solo. Ritual masks: Japan Japanese masks are part of a very old and highly sophisticated and stylized theatrical tradition. Although the roots are in prehistoric myths and cults, they have developed into refined art forms. The oldest masks are the gigaku. The form no longer exists, and was probably a type of dance presentation. The bugaku developed from this – a complex dance-drama that used masks with moveable jaws. Ritual masks: The nō or noh mask evolved from the gigaku and bugaku, and nō plays are acted entirely by men. The masks are worn throughout very long performances and are consequently very light. The nō mask is the supreme achievement of Japanese mask-making.
Nō masks represent gods, men, women, madmen, and devils, and each category has many sub-divisions. Kyōgen are short farces with their own masks, which accompany the tragic nō plays. Kabuki is the theatre of modern Japan, rooted in the older forms, but in this form masks are replaced by painted faces. Ritual masks: Korea Korean masks have a long tradition associated with shamanism and later with ritual dance. Korean masks were used in war, on both soldiers and their horses; ceremonially, for burial rites in jade and bronze and for shamanistic ceremonies to drive away evil spirits; to remember the faces of great historical figures in death masks; and in the arts, particularly in ritual dances and in courtly and theatrical plays. The present uses are as miniature masks for tourist souvenirs, or on mobile phones, where they hang as good-luck talismans. Ritual masks: Middle East Theatre in the Middle East, as elsewhere, was initially of a ritual nature, dramatising human relationships with nature, the deities, and other human beings. It grew out of sacred rites of myths and legends performed by priests and lay actors at fixed times and often in fixed locations. Folk theatre – mime, mask, puppetry, farce, juggling – had a ritual context in that it was performed at religious festivals or rites of passage such as days of naming, circumcisions, and marriages. Over time, some of these contextual ritual enactments became divorced from their religious meaning and were performed throughout the year. Some 2,500 years ago, kings and commoners alike were entertained by dance and mime accompanied by music, where the dancers often wore masks, a vestige of an earlier era when such dances were enacted as religious rites. According to George Goyan, this practice evoked that of Roman funeral rites, where masked actor-dancers represented the deceased with motions and gestures mimicking those of the deceased while singing the praise of their lives (see Masks in Performance above).
Ritual masks: Europe The oldest representations of masks in Europe are animal masks, such as the cave paintings of Lascaux in the Dordogne in southern France. Such masks survive in the alpine regions of Austria and Switzerland, and may be connected with hunting or shamanism. Masks are used throughout Europe in modern times, and are frequently integrated into regional folk celebrations and customs. Old masks are preserved and can be seen in museums and other collections, and much research has been undertaken into the historical origins of masks. Most probably represent nature spirits, and as a result many of the associated customs are seasonal. The original significance would have survived only until the introduction of Christianity, which incorporated many of the customs into its own traditions. Their meanings were also changed in the process so that, for example, old gods and goddesses originally associated with the celebrations were demonised and viewed as mere devils, or were subjugated to the Abrahamic God. Ritual masks: Many of the masks and characters used in European festivals belong to the contrasting categories of the 'good', or 'idealised beauty', set against the 'ugly' or 'beastly' and grotesque. This is particularly true of the Germanic and Central European festivals. Another common type is the Fool, sometimes considered to be the synthesis of the two contrasting types, Handsome and Ugly. Masks also tend to be associated with New Year and Carnival festivals. Ritual masks: The debate about the meaning of these and other mask forms continues in Europe, where monsters, bears, wild men, harlequins, hobby horses, and other fanciful characters appear in carnivals throughout the continent. It is generally accepted that the masks, noise, colour, and clamour are meant to drive away the forces of darkness and winter, and open the way for the spirits of light and the coming of spring.
Sardinia preserves the traditions of the Mamuthones e Issohadores of Mamoiada; the Boes e Merdules of Ottana; the Thurpos of Orotelli; and S'Urtzu, Su 'Omadore and Sos Mamutzones of Samugheo. Ritual masks: Another tradition of European masks developed, more self-consciously, from court and civic events, or entertainments managed by guilds and co-fraternities. These grew out of the earlier revels and had become evident by the 15th century in places such as Rome and Venice, where they developed as entertainments to enliven towns and cities. Thus the Maundy Thursday carnival in St Mark's Square in Venice, attended by the Doge and aristocracy, also involved the guilds, including a guild of maskmakers. There is evidence of 'commedia dell'arte'-inspired Venetian masks, and by the late 16th century the Venetian Carnival began to reach its peak and eventually lasted a whole 'season' from January until Lent. By the 18th century, it was already a tourist attraction, Goethe saying that he was ugly enough not to need a mask. The carnival was repressed during the Napoleonic Republic, although its costumes and masks, aping the 18th-century heyday, were revived in the 1980s. It appears other cities in central Europe were influenced by the Venetian model. Ritual masks: During the Reformation, many of these carnival customs began to die out in Protestant regions, although they seem to have survived in Catholic areas despite the opposition of the ecclesiastical authorities. So by the 19th century, the carnivals of the relatively wealthy bourgeois town communities, with elaborate masques and costumes, existed side by side with the ragged and essentially folkloric customs of the rural areas. Although these civic masquerades and their masks may have retained elements drawn from popular culture, the survival of carnival in the 19th century was often a consequence of a self-conscious 'folklore' movement that accompanied the rise of nationalism in many European countries.
Nowadays, during carnival in the Netherlands, masks are often replaced with face paint for comfort. Ritual masks: At the beginning of the new century, on 19 August 2004, the Bulgarian archaeologist Georgi Kitov discovered a 673 g gold mask in the burial mound "Svetitsata" near Shipka, Central Bulgaria. It is a very fine piece of workmanship made out of massive 23 karat gold, unlike other masks discovered in the Balkans (of which three are in the Republic of Macedonia and two in Greece). It is now kept in the National Archaeological Museum in Sofia, and is considered to be the mask of a Thracian king, presumably Teres. Masks in theatre: Masks play a key part within world theatre traditions, particularly non-western theatre forms. They also continue to be a vital force within contemporary theatre, and their usage takes a variety of forms. Masks in theatre: In many cultural traditions, the masked performer is a central concept and is highly valued. In the western tradition, actors in Ancient Greek theatre wore masks, as they do in traditional Japanese Noh drama. In some Greek masks, the wide and open mouth of the mask contained a brass megaphone enabling the voice of the wearer to be projected into the large auditoria. In medieval Europe, masks were used in mystery and miracle plays to portray allegorical creatures, and the performer representing God frequently wore a gold or gilt mask. During the Renaissance, masques and ballet de cour developed – courtly masked entertainments that continued as part of ballet conventions until the late eighteenth century. The masked characters of the Commedia dell'arte included the ancestors of the modern clown. In contemporary western theatre, the mask is often used alongside puppetry to create a theatre that is essentially visual, rather than verbal, and many of its practitioners have been visual artists.
Masks in theatre: Masks are an important part of many theatre forms throughout world cultures, and their usage in theatre has often developed from, or continues to be part of, old, highly sophisticated, stylized theatrical traditions. Contemporary theatre Masks and puppets were often incorporated into the theatre work of European avant-garde artists from the turn of the nineteenth century. Alfred Jarry, Pablo Picasso, Oskar Schlemmer, other artists of the Bauhaus School, as well as surrealists and Dadaists, experimented with theatre forms and masks in their work. In the 20th century, many theatre practitioners, such as Meyerhold, Edward Gordon Craig, Jacques Copeau, and others in their lineage, attempted to move away from Naturalism. They turned to sources such as Oriental Theatre (particularly Japanese Noh theatre) and commedia dell'arte, both of which forms feature masks prominently. Masks in theatre: Edward Gordon Craig (1872–1966) in A Note on Masks (1910) proposed the virtues of using masks over the naturalism of the actor. Craig was highly influential, and his ideas were taken up by Brecht, Cocteau, Genet, Eugene O'Neill – and later by Arden, Grotowski, Brook, and others who "attempted to restore a ritualistic if not actually religious significance to theatre". Copeau, in his attempts to "Naturalise" actors, decided to use masks to liberate them from their "excessive awkwardness". In turn, Copeau's work with masks was taken on by his students, including Etienne Decroux, and later, via Jean Daste, by Jacques Lecoq. Lecoq, having worked as movement director at the Piccolo Teatro in Italy, was influenced by the Commedia tradition. Lecoq met Amleto Sartori, a sculptor, and they collaborated on reviving the techniques of making traditional leather Commedia masks. Later, developing Copeau's "noble mask", Lecoq would ask Sartori to make him the masque neutre (the neutral mask).
For Lecoq, masks became an important training tool, the neutral mask being designed to facilitate a state of openness in the student-performers, moving gradually on to character and expressive masks, and finally to "the smallest mask in the world", the clown's red nose. One highly important feature of Lecoq's use of the mask was not so much its visual impact on stage as how it changed the performer's movement on stage. It was a body-based approach to mask work, rather than a visually led one. Lecoq's pedagogy has been hugely influential for theatre practitioners in Europe working with mask and has been exported widely across the world. This work with masks also relates to performing with portable structures and puppetry. Students of Lecoq have continued using masks in their work after leaving the school, such as in John Wright's Trestle Theatre. Masks in theatre: In America, mask-work was slower to arrive, but the Guerrilla Theatre movement, typified by groups such as the San Francisco Mime Troupe and Bread and Puppet Theatre, took advantage of it. Influenced by modern dance, modern mime, Commedia dell'arte and Brecht, such groups took to the streets to perform highly political theatre. Peter Schumann, the founder of Bread and Puppet theatre, made particular use of German Carnival masks. Bread and Puppet inspired other practitioners around the world, many of whom used masks in their work. In the US and Canada, these companies include In the Heart of the Beast Puppet and Mask Theater of Minneapolis; Arm-of-the Sea Theatre from New York State; Snake Theater from California; and Shadowland Theatre of Toronto, Ontario. These companies, and others, have a strong social agenda, and combine masks, music and puppetry to create a visual theatrical form. Another route masks took into American Theatre was via dancer/choreographers such as Mary Wigman, who had been using masks in dance and had emigrated to America to flee the Nazi regime.
Masks in theatre: In Europe, Schumann's influence combined with the early avant-garde artists to encourage groups such as Moving Picture Mime Show and Welfare State (both in the UK). These companies had a big influence on the next generation of groups working in visual theatre, including IOU and Horse and Bamboo Theatre, who create a theatre in which masks are used along with puppets, film and other visual forms, with an emphasis on the narrative structure. Functional masks: Masks are also familiar as pieces of kit associated with practical functions, usually protective. There has been a proliferation of such masks recently, but there is a long history of protective armour and even medical masks to ward off plague. The contrast with performance masks is not always clear-cut. Ritual and theatrical masks themselves can be considered to be practical, and protective masks in a sports context in particular are often designed to enhance the appearance of the wearer. Functional masks: Medical Some masks are used for medical purposes: Oxygen mask, a piece of medical equipment that assists breathing. Anesthetic mask. Burn mask, a piece of medical equipment that protects the burn tissue from contact with other surfaces, and minimises the risk of infection. Surgical mask, a piece of medical equipment that helps to protect both the surgeon and patient from acquiring infection from each other. Face shield, to protect a medical professional from bodily fluids. Pocket mask or CPR mask, used to safely deliver rescue breaths during a cardiac arrest or respiratory arrest. Cloth face mask, an alternative to a surgical mask for reducing the spread of infectious agents. Protective Protective masks are pieces of kit or equipment worn on the head and face to afford protection to the wearer, and today usually have these functions: Providing a supply of air or filtering the outside air (respirators and dust masks).
Functional masks: Protecting the face against flying objects or dangerous environments, while allowing vision. In Roman gladiatorial tournaments masks were sometimes used. From archaeological evidence it is clear that these were not only protective but also helped make the wearer appear more intimidating. In medieval Europe and in Japan soldiers and samurai wore similarly ferocious-looking protective armour, extending to face-masks. In the 16th century, the Visard was worn by women to protect from sunburn. Today this function is attributed to thin balaclavas. Functional masks: In sport the protective mask will often have a secondary function to make the wearer appear more impressive as a competitor. Before strong transparent materials such as polycarbonate were invented, visors to protect the face had to be opaque with small eyeslits, and were a sort of mask, as often in mediaeval suits of armour, and (for example) Old Norse grímr meant "mask or visor". Disguise Masks are sometimes used to avoid recognition. As a disguise the mask acts as a form of protection for the wearer who wishes to assume a role or task without being identified by others. Robbers and other criminal perpetrators may wear masks as a means of concealing their faces and thus identities from their victims and from law enforcement. Occasionally a witness for the prosecution appears in court in a mask to avoid being recognized by associates of the accused. Functional masks: Participants in a black bloc at protests usually wear masks, often bandannas, to avoid recognition, and to try to protect against any riot control agents used. Masks are also used to prevent recognition while showing membership of a group: Masks are used by penitents in ceremonies to disguise their identity in order to make the act of penitence more selfless. The Semana Santa parades throughout Spain and in Hispanic or Catholic countries throughout the world are examples of this, with their cone-shaped masks known as capirote.
Functional masks: Masks are used by vigilante groups. The cone-shaped mask in particular is identified with the Ku Klux Klan in a self-conscious effort to combine the hiding of personal identity with the promotion of a powerful and intimidating image. Members of the group Anonymous frequently wear masks (usually Guy Fawkes masks, best known from V for Vendetta) when they attend protests. While the niqāb usually shows membership of some Islamic community, its purpose is not to hinder recognition, although it falls under some anti-mask laws such as the French ban on face covering. Occupational Beaked masks containing herbs in the beak were worn in early modern Europe by plague doctors to try to ward off the Black Death. Filter mask, a piece of safety equipment. Full-face diving mask as part of self-contained breathing apparatus for divers and others; some let the wearer talk to others through a built-in communication device. Respirator (gas or particulate mask), a mask worn on the face to protect the body from airborne pollutants and toxic materials, and fine particulate matter or infectious particles. Oxygen mask worn by high-altitude pilots, or used in medicine to deliver oxygen, anesthetic, or other gases to patients. Welding mask to protect the welder's face and eyes from the brightness and sparks created during welding. Sports American football helmet face mask. Balaclava, also known as a "ski mask", to protect the face against cold air. Baseball catcher's mask. Diving mask, an item of diving equipment that allows scuba divers, free-divers, and snorkelers to see clearly underwater. Fencing mask. Goaltender mask, a mask worn by an ice or field hockey goaltender to protect the head and face from injury. Hurling helmets were made mandatory in 2010, and have a wire mask on the front to protect the player's face. Kendo, a mask called Men is used in this Japanese sword-fighting martial art. Paintball mask.
Functional masks: Visor (ice hockey). An interesting example of a sports mask that confounds the protective function is the wrestling mask, a mask most widely used in the Mexican/Latin lucha libre style of wrestling. In modern lucha libre, masks are colourfully designed to evoke the images of animals, gods, ancient heroes, and other archetypes. The mask is considered "sacred" to some degree, placing its role closer to the ritual and performance function. Punitive: Masks are sometimes used to punish the wearer either by signalling their humiliation or causing direct suffering: Particularly uncomfortable types, such as an iron mask like the Scold's bridle, serve as devices for humiliation, corporal punishment or torture. Masks were used to alienate and silence prisoners in Australian jails in the late 19th century. They were made of white cloth and covered the face, leaving only the eyes visible. Use of masks is also common in BDSM practices. Fashion: Decorative masks may be worn as part of a costume outside of ritual or ceremonial functions. This is often described as a masque, and relates closely to carnival styles. For example, attendees of a costume party will sometimes wear masks as part of their costumes. Several artists in the 20th and 21st centuries, such as Isamaya Ffrench and Damselfrau, create masks as wearable art. Fashion: Wrestling masks are used most widely in Mexican and Japanese wrestling. A wrestler's mask is usually related to a wrestler's persona (for example, a wrestler known as 'The Panda' might wear a mask with a panda's facial markings). Often, wrestlers will put their masks on the line against other wrestlers' masks, titles or an opponent's hair. While in Mexico and Japan masks are a sign of tradition, they are looked down upon in the United States and Canada. Fashion: Several bands and performers, notably members of the groups Slipknot, Mental Creepers and Gwar, and the guitarist Buckethead, wear masks when they perform on stage.
Several other groups, including Kiss, Alice Cooper, and Dimmu Borgir, simulate the effect with facepaint. Hollywood Undead also wear masks but often remove them mid-performance. Leather-working, steampunk, and other methods and themes are occasionally used to create artisanal gas masks. In works of fiction: Masks have been used in many horror films to conceal the identity of the killer. Notable examples include Jason Voorhees of the Friday the 13th series, the Jigsaw Killer from Saw, Ghostface of the Scream series, and Michael Myers of the Halloween series. Other types: A "buccal mask" is a mask that covers only the cheeks (hence the adjective "buccal") and mouth. A death mask is a mask either cast from or applied to the face of a recently deceased person. A "facial" (short for facial mask) is a temporary mask, not solid, used in cosmetics or as therapy for skin treatment. A "life mask" is a plaster cast of a face, used as a model for making a painting or sculpture. An animal roleplay mask is used by people to create a more animal-like image in fetish role play.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Breaking the bank** Breaking the bank: In gaming, breaking the bank refers to a player winning a critical sum of money from the casino. The literal, extremely rare, situation of breaking the bank is winning more than the house has on hand. The term can also be used for the act of winning more chips than there are at the table. Another situation, portrayed in fiction, is one where a gambler wins more money than the casino owns, forcing the casino out of business, and winning the casino itself as a prize. In blackjack, card counting can facilitate a winning streak that eventually breaks the bank. Mark Bowden reports in The Atlantic that blackjack player Don Johnson broke the bank in 2011, winning nearly $6 million at Atlantic City's Tropicana casino after previously taking the Borgata for $5 million and Caesars for $4 million. The Tropicana refused to continue playing with Johnson on the terms the casino had negotiated after Johnson won $5.8 million, the Borgata cut Johnson off at $5 million, and the dealer at Caesars refused to fill Johnson's chip tray once his earnings topped $4 million. Johnson had reportedly negotiated terms with the Tropicana that included a hand-shuffled six-deck shoe; the right to split and double down on up to four hands at once; and a "soft 17", whittling the house edge down to one-fourth of 1 percent. In effect, Johnson was playing a 50–50 game against the house, and with the 20% "loss rebate", Johnson was risking only 80 cents of every dollar he played. In 2005, British investor Paul Newey nearly broke the bank at Birmingham's Genting Casino Star City, where he won £3 million and forced owner Stanley Leisure to issue a profit warning and caused the casino's value to decline by 12%. Breaking the bank occurs only if the house fails to cap the total amount payable on a winning bet in a way that bears some reasonable relationship to the total amount of money in play.
In contrast, parimutuel betting by its very nature does impose such a cap, and hence the bank cannot be broken.
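Bowden's description of Johnson's terms lends itself to a quick back-of-the-envelope check. The sketch below is a deliberately simplified per-bet model (in reality the 20% loss rebate applied to aggregate session losses rather than to each wager, so the true advantage was smaller; the function and figures are illustrative, not from the article):

```python
# Simplified per-bet model of an even-money game with a house edge and a
# loss rebate: a winning dollar pays $1, while a losing dollar costs only
# (1 - rebate) because part of the loss is refunded.
def expected_value_per_dollar(house_edge, rebate):
    p_win = 0.5 - house_edge / 2   # win probability in an even-money game
    p_lose = 1.0 - p_win
    return p_win * 1.0 - p_lose * (1.0 - rebate)

# A 0.25% house edge with a 20% loss rebate, per the negotiated terms.
ev = expected_value_per_dollar(house_edge=0.0025, rebate=0.20)
print(f"EV per $1 wagered: {ev:+.4f}")
```

Even this crude model shows why the rebate mattered: with rebate=0.0 the same function gives a small negative expectation, while with the rebate it turns positive.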
**Speed calling** Speed calling: Speed calling is a service that allows telephone subscribers to assign one- or two-digit speed calling codes by dialing a change speed calling list access code, a feature code, and a new telephone number. Thereafter, subscribers need only use the assigned speed code to reach the desired party rather than dial the long phone number. This service became commonplace in the 1970s with the spread of Stored Program Control exchanges capable of implementing the required databases. It remains useful in installations with many extensions where programming each telephone set would be arduous. Speed calling subscriptions have largely been replaced by the introduction of telephone handsets which incorporate a local version of speed dial. Speed calling: Speed Calling allows subscribers to program shortcuts for telephone numbers to dial them quickly with just one or two digits. Both Speed Calling 8 and Speed Calling 30 require a subscription from the local telephone company. Speed Calling 8 allows subscribers to assign a telephone number to each of the digits 2 through 9, a total of eight numbers. To program a number, dial *74, followed by the digit 2 through 9 to assign the number to, followed by the full telephone number as it would normally be dialed. Then, to dial the number in the future, dial the digit 2 through 9 followed by #. Speed calling: Speed Calling 30 allows subscribers to assign a telephone number to each of the numbers 20 through 49, a total of 30 numbers. To program a number, dial *75, followed by the number 20 through 49 to assign the number to, followed by the full telephone number as it would normally be dialed. Then, to dial the number in the future, dial the number 20 through 49 followed by #.
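The *74/*75 programming procedure amounts to maintaining a lookup table from speed codes to full numbers in the exchange. This is an illustrative sketch only (the class and method names are invented for the example, not drawn from any real switch's software):

```python
# Minimal model of Speed Calling 8 (*74, codes 2-9) and
# Speed Calling 30 (*75, codes 20-49).
class SpeedCalling:
    FEATURE_RANGES = {"*74": range(2, 10), "*75": range(20, 50)}

    def __init__(self):
        self.codes = {}  # speed code -> full telephone number

    def program(self, feature_code, speed_code, number):
        """Store a number under a speed code, e.g. dialing *74 2 555-0100."""
        valid = self.FEATURE_RANGES.get(feature_code)
        if valid is None or speed_code not in valid:
            raise ValueError("invalid feature code / speed code combination")
        self.codes[speed_code] = number

    def dial(self, speed_code):
        """Dialing the speed code followed by # yields the stored number."""
        return self.codes[speed_code]

sc = SpeedCalling()
sc.program("*74", 2, "555-0100")    # one-digit code
sc.program("*75", 20, "555-0199")   # two-digit code
print(sc.dial(2), sc.dial(20))
```

The subscriber-visible behaviour is just the two maps: a programming action (`*74`/`*75` plus code plus number) updates the table, and a short dial string followed by # reads it back.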
**MINOS** MINOS: The Main Injector Neutrino Oscillation Search (MINOS) was a particle physics experiment designed to study the phenomenon of neutrino oscillations, first discovered by the Super-Kamiokande (Super-K) experiment in 1998. Neutrinos produced by the NuMI ("Neutrinos at Main Injector") beamline at Fermilab near Chicago were observed at two detectors, one very close to where the beam is produced (the near detector), and another much larger detector 735 km away in northern Minnesota (the far detector). MINOS: The MINOS experiment started detecting neutrinos from the NuMI beam in February 2005. On 30 March 2006, the MINOS collaboration announced that the analysis of the initial data, collected in 2005, was consistent with neutrino oscillations, with oscillation parameters consistent with Super-K measurements. MINOS received the last neutrinos from the NuMI beamline at midnight on 30 April 2012. It was upgraded to MINOS+, which started taking data in 2013. The experiment was shut down on June 29, 2016, and the far detector has been dismantled and removed. Detectors: There are two detectors in the experiment. Detectors: The near detector is similar to the far detector in design, but smaller in size, with a mass of 980 tons (t). It is located at Fermilab, a few hundred meters away from the graphite target which the protons interact with, and approximately 100 meters underground. The commissioning of the near detector was completed in December 2004, after which it was fully operational. Detectors: The far detector has a mass of 5.4 kt. It is located in the Soudan mine in northern Minnesota at a depth of 716 meters. The far detector was fully operational from summer 2003, and took cosmic ray and atmospheric neutrino data from early in its construction. Both MINOS detectors are steel-scintillator sampling calorimeters made out of alternating planes of magnetized steel and plastic scintillators.
The magnetic field causes the path of a muon produced in a muon neutrino interaction to bend, making it possible to distinguish interactions with neutrinos from those with antineutrinos. This feature of the MINOS detectors allows MINOS to search for CPT-violation with atmospheric neutrinos and anti-neutrinos. Neutrino beam: To produce the NuMI beamline, 120 GeV Main Injector proton pulses hit a water-cooled graphite target. The resulting interactions of protons with the target material produce pions and kaons, which are focused by a system of magnetic horns. The neutrinos from subsequent decays of pions and kaons form the neutrino beam. Most of these are muon neutrinos, with a small electron neutrino contamination. Neutrino interactions in the near detector are used to measure the initial neutrino flux and energy spectrum. Because they are weakly interacting and therefore usually pass through matter, the vast majority of the neutrinos travel through the near detector and the 734 km of rock, then through the far detector and off into space. On the way toward Soudan, about 20% of the muon neutrinos oscillate into other flavors. Physics goals and results: MINOS measures the difference in neutrino beam composition and energy distribution in the near and far detectors with the aim of producing precision measurements of the neutrino squared mass difference and mixing angle. In addition, MINOS looks for the appearance of electron neutrinos in the far detector, and will either measure or set a limit on the oscillation probability of muon neutrinos into electron neutrinos. Physics goals and results: On 29 July 2006, the MINOS collaboration published a paper giving their initial measurements of oscillation parameters as judged from muon neutrino disappearance. 
These are: Δm²₂₃ = 2.74 +0.44/−0.26 × 10⁻³ eV²/c⁴ and sin²(2θ₂₃) > 0.87 (68% confidence limit). In 2008, MINOS released a further result using over twice the previous data (3.36×10²⁰ protons-on-target; this includes the first data set). This was the most precise measurement of Δm² to date. The results are: Δm²₂₃ = 2.43 ± 0.13 × 10⁻³ eV²/c⁴ and sin²(2θ₂₃) > 0.90 (90% confidence limit). In 2011, the above results were updated again, using a more than doubled data sample (an exposure of 7.25×10²⁰ protons on target) and improved analysis methodology. The results are: Δm²₂₃ = 2.32 +0.12/−0.08 × 10⁻³ eV²/c⁴ and sin²(2θ₂₃) > 0.90 (90% confidence limit). In 2010 and 2011, MINOS reported results according to which there is a difference in the disappearance, and consequently the masses, between antineutrinos and neutrinos, which would violate CPT symmetry. Physics goals and results: However, after additional data were evaluated in 2012, MINOS reported that this gap had closed and no excess remained. Cosmic ray results from the MINOS far detector have shown that there is a strong correlation between the high energy cosmic rays measured and the temperature of the stratosphere. This was the first time that daily variations in secondary cosmic rays from an underground muon detector were shown to be associated with planetary-scale meteorological phenomena in the stratosphere, such as sudden stratospheric warming as well as the change in seasons. The MINOS far detector is also able to observe a reduction in cosmic rays caused by the Sun and the Moon. Physics goals and results: Time of flight of neutrinos In 2007, an experiment with the MINOS detectors found the speed of 3 GeV neutrinos to be 1.000051(29) c at 68% confidence level, and at 99% confidence level a range between 0.999976 c and 1.000126 c.
The central value was higher than the speed of light; however, the uncertainty was great enough that the result also did not rule out speeds less than or equal to that of light at this high confidence level. After the detectors for the project were upgraded in 2012, MINOS corrected their initial result and found agreement with the speed of light, with a difference in arrival times between neutrinos and light of −0.0006% (±0.0012%). Further measurements were planned.
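To get a rough feel for the oscillation parameters quoted above, the standard two-flavour vacuum formula P(νμ→νμ) = 1 − sin²(2θ)·sin²(1.27·Δm²[eV²]·L[km]/E[GeV]) can be evaluated with the MINOS baseline and best-fit values. This is a textbook monoenergetic approximation, not the collaboration's full spectral fit:

```python
import math

def survival_probability(dm2_ev2, sin2_2theta, length_km, energy_gev):
    """Two-flavour muon-neutrino survival probability in vacuum."""
    phase = 1.27 * dm2_ev2 * length_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# MINOS 2011 best-fit |Δm²| with maximal mixing, 735 km baseline, 3 GeV.
p = survival_probability(2.32e-3, 1.0, 735.0, 3.0)
print(f"P(numu survives) at 3 GeV: {p:.3f}")
```

At a single energy the probability swings well above and below the roughly 20% average disappearance quoted earlier, which is an average over the full NuMI beam spectrum.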
**Conservativity** Conservativity: In formal semantics, conservativity is a proposed linguistic universal which states that any determiner D must obey the equivalence D(A,B) ↔ D(A,A∩B). For instance, the English determiner "every" can be seen to be conservative by the equivalence of the following two sentences, schematized in generalized quantifier notation. Conservativity: Every aardvark bites. ⇝ every(A,B) Every aardvark is an aardvark that bites. ⇝ every(A,A∩B) Conceptually, conservativity can be understood as saying that the elements of B which are not elements of A are not relevant for evaluating the truth of the determiner phrase as a whole. For instance, the truth of the first sentence above does not depend on which biting non-aardvarks exist. Conservativity is significant to semantic theory because there are many logically possible determiners which are not attested as denotations of natural language expressions. For instance, consider the imaginary determiner shmore, defined so that shmore(A,B) is true iff |A| > |B|. If there are 50 biting aardvarks, 50 non-biting aardvarks, and millions of non-aardvark biters, shmore(A,B) will be false but shmore(A,A∩B) will be true. Some potential counterexamples to conservativity have been observed, notably the English expression "only". This expression has been argued not to be a determiner, since it can stack with bona fide determiners and can combine with non-nominal constituents such as verb phrases. Conservativity: Only some aardvarks bite. This aardvark will only [VP bite playfully.] Different analyses have treated conservativity as a constraint on the lexicon, as a structural constraint arising from the architecture of the syntax-semantics interface, and as a constraint on learnability.
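The contrast between "every" and the imaginary "shmore" can be verified mechanically on finite sets. The sketch below mirrors the aardvark example (the set sizes and element names are illustrative):

```python
# Determiners modelled as relations between the restrictor set A
# and the scope set B.
def every(A, B):
    return A <= B            # conservative: every A is a B

def shmore(A, B):
    return len(A) > len(B)   # imaginary, non-conservative determiner

biting_aardvarks = {f"aardvark{i}" for i in range(50)}
quiet_aardvarks = {f"quiet{i}" for i in range(50)}
other_biters = {f"biter{i}" for i in range(1000)}  # biting non-aardvarks

A = biting_aardvarks | quiet_aardvarks   # the aardvarks
B = biting_aardvarks | other_biters      # the things that bite

print(every(A, B), every(A, A & B))    # the two sides always agree
print(shmore(A, B), shmore(A, A & B))  # here the two sides disagree
```

Replacing B by A∩B never changes the verdict of `every`, but flips the verdict of `shmore` in this scenario, which is exactly what the conservativity universal rules out for natural-language determiners.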
**Η set** Η set: In mathematics, an η set (eta set) is a type of totally ordered set introduced by Hausdorff (1907, p. 126, 1914, chapter 6 section 8) that generalizes the order type η of the rational numbers. Definition: If α is an ordinal, then an η_α set is a totally ordered set in which, for any two subsets X and Y of cardinality less than ℵ_α, if every element of X is less than every element of Y, then there is some element greater than all elements of X and less than all elements of Y. Examples: The only non-empty countable η_0 set (up to isomorphism) is the ordered set of rational numbers. Suppose that κ = ℵ_α is a regular cardinal and let X be the set of all functions f from κ to {−1,0,1} such that if f(α) = 0 then f(β) = 0 for all β > α, ordered lexicographically. Then X is an η_α set. The union of all these sets is the class of surreal numbers. A dense totally ordered set without endpoints is an η_α set if and only if it is ℵ_α-saturated. Properties: Any η_α set X is universal for totally ordered sets of cardinality at most ℵ_α, meaning that any such set can be embedded into X. For any given ordinal α, any two η_α sets of cardinality ℵ_α are isomorphic (as ordered sets). An η_α set of cardinality ℵ_α exists if ℵ_α is regular and Σ_{β<α} 2^{ℵ_β} ≤ ℵ_α.
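In symbols, for a totally ordered set (X, <), Hausdorff's condition can be written as follows (a direct restatement of the prose definition above):

```latex
X \text{ is an } \eta_\alpha \text{ set} \iff
\forall A, B \subseteq X \,\Bigl[
  |A|, |B| < \aleph_\alpha \;\wedge\;
  (\forall a \in A)(\forall b \in B)\; a < b
  \;\Longrightarrow\;
  (\exists z \in X)(\forall a \in A)(\forall b \in B)\; a < z < b
\Bigr]
```

Note that A or B may be empty: taking B = ∅ shows an η_α set has no greatest element (and A = ∅ shows it has no least element), while nonempty A and B give density.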
**Causal system** Causal system: In control theory, a causal system (also known as a physical or nonanticipative system) is a system where the output depends on past and current inputs but not future inputs—i.e., the output y(t₀) depends only on the input x(t) for values of t ≤ t₀. The idea that the output of a function at any time depends only on past and present values of input is defined by the property commonly referred to as causality. A system that has some dependence on input values from the future (in addition to possible dependence on past or current input values) is termed a non-causal or acausal system, and a system that depends solely on future input values is an anticausal system. Note that some authors have defined an anticausal system as one that depends solely on future and present input values or, more simply, as a system that does not depend on past input values. Classically, nature or physical reality has been considered to be a causal system. Physics involving special relativity or general relativity requires more careful definitions of causality, as described elaborately in Causality (physics). Causal system: The causality of systems also plays an important role in digital signal processing, where filters are constructed so that they are causal, sometimes by altering a non-causal formulation to remove the lack of causality so that it is realizable. For more information, see causal filter. Causal system: For a causal system, the impulse response of the system must use only the present and past values of the input to determine the output. This requirement is a necessary and sufficient condition for a system to be causal, regardless of linearity. Note that similar rules apply to either discrete or continuous cases. By this definition of requiring no future input values, systems must be causal to process signals in real time.
Mathematical definitions: Definition 1: A system mapping x to y is causal if and only if, for any pair of input signals x1(t), x2(t) and any choice of t0 such that x1(t) = x2(t) for all t < t0, the corresponding outputs satisfy y1(t) = y2(t) for all t < t0. Definition 2: Suppose h(t) is the impulse response of any system H described by a linear constant-coefficient differential equation. The system H is causal if and only if h(t) = 0 for all t < 0; otherwise it is non-causal. Examples: The following examples are for systems with an input x and output y. Examples of causal systems: a memoryless system, y(t) = 1 − x(t)cos(ωt); an autoregressive filter, y(t) = ∫0∞ x(t−τ)e^(−βτ) dτ. Examples of non-causal (acausal) systems: y(t) = ∫−∞∞ sin(t+τ)x(τ) dτ; a central moving average, yn = (1/2)xn−1 + (1/2)xn+1. Examples of anti-causal systems: y(t) = ∫0∞ x(t+τ) dτ; look-ahead, yn = xn+1.
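The distinction between the causal and non-causal averages above can be checked numerically. A small illustrative sketch (not from the article): a two-point causal average uses only present and past samples, while the central moving average yn = (1/2)xn−1 + (1/2)xn+1 needs a future sample, so its output changes before a future input change occurs.

```python
# Sketch: checking causality numerically. A system is causal if outputs up to
# time t0 depend only on inputs up to t0 (Definition 1 above).

def causal_avg(x):
    # y[n] = (x[n] + x[n-1]) / 2 -- uses only present and past inputs
    return [(x[n] + (x[n-1] if n > 0 else 0.0)) / 2 for n in range(len(x))]

def central_avg(x):
    # y[n] = (x[n-1] + x[n+1]) / 2 -- uses a future input, hence non-causal
    return [((x[n-1] if n > 0 else 0.0) +
             (x[n+1] if n + 1 < len(x) else 0.0)) / 2 for n in range(len(x))]

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.0, 2.0, 3.0, 9.0, 5.0]   # differs from x1 only at n = 3 (the "future")

# Outputs for n < 3 agree for the causal system...
assert causal_avg(x1)[:3] == causal_avg(x2)[:3]
# ...but the central moving average already differs at n = 2:
assert central_avg(x1)[2] != central_avg(x2)[2]
```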
**XMDF (E-book format)** XMDF (E-book format): XMDF (ever-eXtending Mobile Document Format) is a file format for viewing electronic books. It was originally developed by Sharp Corporation for its Zaurus platform. It is primarily used in Japan.
**Autecology** Autecology: Autecology is an approach in ecology that seeks to explain the distribution and abundance of species by studying the interactions of individual organisms with their environments. An autecological approach differs from ecosystem ecology, community ecology (synecology) and population ecology (demecology) in its greater recognition of the species-specific adaptations of individual animals, plants or other organisms, and of environmental over density-dependent influences on species distributions. Autecological theory relates the species-specific requirements and environmental tolerances of individuals to the geographic distribution of the species, with individuals tracking suitable conditions and having the capacity for migration at at least one stage in their life cycles. Autecology has a strong grounding in evolutionary theory, including the theory of punctuated equilibrium and the recognition concept of species. History: Autecology was pioneered by German field botanists in the late 19th century. During the 20th century, autecology continued to exist mainly as a descriptive science rather than one with supporting theory, and the most notable proponents of an autecological approach, Herbert Andrewartha and Charles Birch, avoided the term autecology when referring to species-focused ecological investigation with emphasis on density-independent processes. Part of the problem with deriving a theoretical structure for autecology is that individual species are unique in their life history and behaviour, making it difficult to draw broad generalisations across them without losing the crucial information that is gained by studying biology at a species level. Progress has been made in more recent times with Paterson's recognition concept of species and the concept of habitat tracking by organisms. The most recent attempt at deriving a theoretical structure for autecology was published in 2014 by ecologists Gimme Walter and Rob Hengeveld.
Basic theory: Recognition concept Autecological theory is focused on species as the most important unit of biological organisation, as individuals across all populations of a particular species share species-specific adaptations that influence their ecology. This particularly relates to reproduction, as individuals of a sexual species share unique adaptations (e.g. courtship songs, pheromones) for recognising potential mates, and share a fertilisation mechanism that differs from those in all other species. This recognition concept of species differs from the biological species concept (or isolation concept), which defines species by cross-mating sterility; in allopatric speciation such sterility is merely a consequence of adaptive change in a new species' fertilisation mechanism to suit a different environment. Basic theory: Environmental matching Individuals from across a species' range tend to be relatively uniform in terms of their dietary and habitat requirements and the range of environmental conditions they can tolerate. These differ from those of other species. Individuals of a species likewise share specific sensory adaptations for recognising suitable habitat. Seasonal changes and variability in climate mean that the spatial and/or temporal distribution of suitable habitat for a species also varies. In response, organisms track suitable conditions, for example by migrating in order to remain within suitable habitat, for which there is evidence in the fossil record. By determining the requirements and tolerances of a particular species, it is possible to predict how individuals of that species will respond to specific environmental changes. Basic theory: Population sizes and replacement level reproduction Autecological theory predicts that populations will reproduce at around replacement level unless a period of environmental change causing unusually high or low survival causes the population to grow or shrink before restabilising at replacement level again.
Population numbers may be reduced by the introduction of new predation pressure, such as with poor fisheries management or the introduction of a biological control agent to control an invasive species, such that the net reproductive rate, R0, drops below replacement level. The species being preyed upon in each case may stabilise at a lower population density where it is more difficult for individuals of the higher trophic level to locate the prey species, but at this point relieving predation tends to make little difference to population size, as individuals continue to reproduce around replacement level, as they did at a higher density prior to the introduction of the higher trophic level. Applications: Pest management Pests are animals or other agents that cause economic damage to cultivated crops. Pest management refers to the techniques and methods applied to control or minimize the damage done to crops by pests; it may include chemical, mechanical, biological or integrated approaches. To apply any type of effective management programme, it is of utmost importance to know the particular pest species in detail. In particular, studying the ecology of the pest provides necessary clues to its management. Applications: Biological control Conservation autecology Knowledge of species-level interactions, tolerances and habitat requirements is valuable for the conservation of an endangered plant or animal species by ensuring its particular ecological requirements are met. Links to other fields: With its focus on the individual organism, autecology has mechanistic links to several other biological fields, including ethology, evolution, genetics and physiology.
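The replacement-level prediction above can be sketched as simple per-generation arithmetic. This is an illustrative toy model (the numbers and function are hypothetical, not from the article): the net reproductive rate R0 acts as a multiplier per generation, R0 = 1 is replacement level, and a temporary period of R0 below 1 shifts the population to a lower level at which it then holds steady.

```python
# Toy sketch of replacement-level reproduction (illustrative numbers only).
# R0 = 1.0 means each generation exactly replaces itself.

def project(n0, rates):
    """Project a population of size n0 through a sequence of per-generation R0 values."""
    n = n0
    for r0 in rates:
        n *= r0
    return n

# At replacement level, population size is unchanged generation after generation.
assert project(1000, [1.0] * 10) == 1000

# A temporary drop below replacement (e.g. new predation pressure) lowers the
# population, which then restabilises at the new, lower level once R0 returns to 1.
n = project(1000, [0.8, 0.8] + [1.0] * 10)
assert n == 1000 * 0.8 * 0.8
```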
**Tactical beacon** Tactical beacon: A tactical beacon, or TACBE as it is often known, has been used by British and American armed forces (usually by Special Forces) for many years. Its primary role is to contact planes or helicopters which may be overhead or nearby. The system's main use is as a distress signal, sending out a constant danger message to AWACS planes. However, it can also be used as a short-range communications device with local aircraft. Using it indicates that one is in danger and needs help.
**Zero Install** Zero Install: Zero Install is a means of distributing and packaging software for multiple operating systems (Unix-like systems including Linux and macOS, and Windows). Software: Rather than the normal method of downloading a software package, extracting it, and installing it before it can be used (with the accompanying use of destructive updates and privilege escalation), packages distributed using Zero Install only need to be run. The first time software is accessed, it is downloaded from the Internet and cached; subsequently, the software is accessed from the cache. Inside the cache, each application unpacks into its own directory, as in Application Directory systems. Software: The system is intended to be used alongside a distribution's native package manager. Software: Two advantages of Zero Install over more popular packaging systems are that it is cross-platform and that no root password is needed to install software; packages are installed into locations writable by the user rather than into system locations that require administrator access. Thus, package installation affects only the user installing it, which makes it possible for all users to install and run new software. Software: Moreover, the EBox sandbox can be used on top of Zero Install to securely install software and run it in a restricted environment. Among the disadvantages of Zero Install is the fact that applications often need a rewrite for this packager (e.g., no absolute paths may be in use, among other requirements). The quality of Zero Install repository content varies and may contain unmaintained software. Other uses of the term: Other uses of the term "Zero Install" exist which are unrelated to a specific software project. Other uses of the term: PaperCut software describes a "zero install strategy" for Windows networks, which involves configuring multiple terminals, via methods such as a group policy, to run the client executable directly off a single share.
This enables automatic updates for workstations in line with the server and avoids multiple, separate installation processes. The Yarn package manager describes a zero-install as a philosophy that seeks to limit failures by limiting the usage of Yarn commands and therefore limiting the number of opportunities for things to go wrong.
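The download-once-then-run-from-cache model described above can be sketched in a few lines. This is a hypothetical illustration (the paths, names and feed URL are invented; the real Zero Install implementation differs): the first access fetches the implementation into a per-user cache, and later runs resolve straight from the cache, so no root access is ever needed.

```python
# Sketch of Zero Install's run-from-cache model (hypothetical names and
# layout; the real implementation differs).

import os
import tempfile

CACHE = tempfile.mkdtemp(prefix="zeroinstall-demo-")  # stands in for a per-user cache

downloads = []  # record of simulated network fetches

def fetch(feed_url):
    """Pretend to download the implementation named by feed_url."""
    downloads.append(feed_url)
    return "binary-for-" + feed_url.rsplit("/", 1)[-1]

def resolve(feed_url):
    """Return the cached implementation, downloading only on first use."""
    path = os.path.join(CACHE, feed_url.replace("/", "_"))
    if not os.path.exists(path):            # first run: fetch and cache
        with open(path, "w") as f:
            f.write(fetch(feed_url))
    with open(path) as f:                   # later runs: cache hit, no network
        return f.read()

url = "example.org/apps/editor"
first = resolve(url)
second = resolve(url)
assert first == second == "binary-for-editor"
assert downloads == [url]   # only one network fetch despite two runs
```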
**Quarter dollar** Quarter dollar: The term "quarter dollar" refers to a quarter-unit of several currencies that are named "dollar". One dollar ($1) is normally divided into a subsidiary currency of 100 cents, so a quarter dollar is equal to 25 cents. These quarter dollars (a.k.a. quarters) are issued either as coins or as banknotes. Although more than a dozen countries have their own unique dollar currency, not all of them use quarters. This article only includes quarters that were intended for circulation, those that add up to units of dollars, and those in the form of a coin.
**Access Database Engine** Access Database Engine: The Access Database Engine (also Office Access Connectivity Engine or ACE and formerly Microsoft Jet Database Engine, Microsoft JET Engine or simply Jet) is a database engine on which several Microsoft products have been built. The first version of Jet was developed in 1992, consisting of three modules which could be used to manipulate a database. Access Database Engine: JET stands for Joint Engine Technology. Microsoft Access and Visual Basic use or have used Jet as their underlying database engine. However, it has been superseded for general use, first by Microsoft Desktop Engine (MSDE), then later by SQL Server Express. For larger database needs, Jet databases can be upgraded (or, in Microsoft parlance, "up-sized") to Microsoft's flagship SQL Server database product. Access Database Engine: A five-billion-record MS Jet (Red) database with compression and encryption turned on requires about one terabyte of disk storage space, typically comprising hundreds of *.mdb files. Architecture: Jet, being part of a relational database management system (RDBMS), allows the manipulation of relational databases. It offers a single interface that other software can use to access Microsoft databases and provides support for security, referential integrity, transaction processing, indexing, record and page locking, and data replication. In later versions, the engine has been extended to run SQL queries, store character data in Unicode format, create database views and allow bi-directional replication with Microsoft SQL Server. Architecture: There are three modules to Jet: One is the Native Jet ISAM Driver, a dynamic link library (DLL) that can directly manipulate Microsoft Access database files (MDB) using a (random access) file system API.
Another one of the modules contains the ISAM Drivers, DLLs that allow access to a variety of Indexed Sequential Access Method ISAM databases, among them xBase, Paradox, Btrieve and FoxPro, depending on the version of Jet. The final module is the Data Access Objects (DAO) DLL. DAO provides an API that allows programmers to access JET databases using any programming language. Architecture: Locking Jet allows multiple users to access the database concurrently. To prevent that data from being corrupted or invalidated when multiple users try to edit the same record or page of the database, Jet employs a locking policy. Any single user can modify only those database records (that is, items in the database) to which the user has applied a lock, which gives exclusive access to the record until the lock is released. In Jet versions before version 4, a page locking model is used, and in Jet 4, a record locking model is employed. Microsoft databases are organized into data "pages", which are fixed-length (2 kB before Jet 4, 4 kB in Jet 4) data structures. Data is stored in "records" of variable length that may take up less or more than one page. The page locking model works by locking the pages, instead of individual records, which though less resource-intensive also means that when a user locks one record, all other records on the same page are collaterally locked. As a result, no other user can access the collaterally locked records, even though no user is accessing them and there is no need for them to be locked. In Jet 4, the record locking model eliminates collateral locks, so that every record that is not in use is available. Architecture: There are two mechanisms that Microsoft uses for locking: pessimistic locking, and optimistic locking. With pessimistic locking, the record or page is locked immediately when the lock is requested, while with optimistic locking, the locking is delayed until the edited record is saved. 
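The two locking mechanisms just described can be illustrated with the optimistic case, where the lock is deferred until save time. This is a generic sketch of the version-check pattern, not Jet's actual API (the `Record` class and `save` function are invented for illustration):

```python
# Sketch of optimistic locking (general pattern; not Jet's actual API): the
# record stays unlocked while being edited, and the save step fails if
# another writer committed first.

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

def save(record, new_value, version_seen):
    """Commit an edit only if nobody updated the record since we read it."""
    if record.version != version_seen:
        return False          # conflict: someone else saved first
    record.value = new_value
    record.version += 1
    return True

rec = Record("draft")
v = rec.version               # users A and B both read the record; no lock taken
assert save(rec, "A's edit", v)       # A saves first: succeeds
assert not save(rec, "B's edit", v)   # B saves with a stale version: fails
assert rec.value == "A's edit"
```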
Conflicts are less likely to occur with optimistic locking, since the record is locked only for a short period of time. However, with optimistic locking one cannot be certain that the update will succeed because another user could lock the record first. With pessimistic locking, the update is guaranteed to succeed once the lock is obtained. Other users must wait until the lock is released in order to make their changes. Lock conflicts, which either require the user to wait, or cause the request to fail (usually after a timeout) are more common with pessimistic locking. Architecture: Transaction processing Jet supports transaction processing for database systems that have this capability. (ODBC systems have one-level transaction processing, while several ISAM systems like Paradox do not support transaction processing.) A transaction is a series of operations performed on a database that must be done together — this is known as atomicity and is one of the ACID (Atomicity, Consistency, Isolation, and Durability), concepts considered to be the key transaction processing features of a database management system. For transaction processing to work (until Jet 3.0), the programmer needed to begin the transaction manually, perform the operations needed to be performed in the transaction, and then commit (save) the transaction. Until the transaction is committed, changes are made only in memory and not actually written to disk.[1] Transactions have a number of advantages over independent database updates. One of the main advantages is that transactions can be abandoned if a problem occurs during the transaction. This is called rolling back the transaction, or just rollback, and it restores the state of the database records to precisely the state before the transaction began. Transactions also permit the state of the database to remain consistent if a system failure occurs in the middle of a sequence of updates required to be atomic. 
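The atomicity and rollback behaviour described here can be sketched concretely. The example below uses Python's sqlite3 as a stand-in for Jet (Jet's own API differs; the table and values are invented for illustration): either every update in the transaction is written, or the rollback restores the exact pre-transaction state.

```python
# Atomicity sketch with sqlite3 standing in for Jet.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 50)")
con.commit()

try:
    with con:  # begins a transaction; commits on success, rolls back on error
        con.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'a'")
        raise RuntimeError("failure mid-transaction (e.g. a crash)")
except RuntimeError:
    pass

# The partial update was rolled back: 'a' still has its original balance.
balance = con.execute("SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0]
assert balance == 100
```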
There is no chance that only some of the updates will end up written to the database; either all will succeed, or the changes will be discarded when the database system restarts. With ODBC's in-memory policy, transactions also allow for many updates to a record to occur entirely within memory, with only one expensive disk write at the end. Architecture: Implicit transactions were supported in Jet 3.0. These are transactions that are started automatically after the last transaction was committed to the database. Implicit transactions in Jet occurred when an SQL DML statement was issued. However, it was found that this had a negative performance impact in 32-bit Windows (Windows 95, Windows 98), so in Jet 3.5 Microsoft removed implicit transactions when SQL DML statements were made. Architecture: Data integrity Jet enforces entity integrity and referential integrity. Jet will by default prevent any change to a record that breaks referential integrity, but Jet databases can instead use propagation constraints (cascading updates and cascading deletes) to maintain referential integrity. Architecture: Jet also supports "business rules" (also known as "constraints"), or rules that apply to any column to enforce what data might be placed into the table or column. For example, a rule might be applied that does not allow a date to be entered into a date_logged column that is earlier than the current date and time, or a rule might be applied that forces people to enter a positive value into a numeric only field. Architecture: Security Access to Jet databases is done on a per user-level. The user information is kept in a separate system database, and access is controlled on each object in the system (for instance by table or by query). In Jet 4, Microsoft implemented functionality that allows database administrators to set security via the SQL commands CREATE, ADD, ALTER, DROP USER and DROP GROUP. 
These commands are a subset of the ANSI SQL 92 standard, and they also apply to the GRANT/REVOKE commands. When Jet 2 was released, security could also be set programmatically through DAO. Architecture: Queries Queries are the mechanisms that Jet uses to retrieve data from the database. They can be defined in Microsoft QBE (Query By Example), through the Microsoft Access SQL Window or through Access Basic's Data Access Objects (DAO) language. These are then converted to an SQL SELECT statement. The query is then compiled — this involves parsing the query (which involves syntax checking and determining the columns to query in the database table), converting it into an internal Jet query object format, and tokenizing and organising it into a tree-like structure. In Jet 3.0 onwards these are then optimised using the Microsoft Rushmore query optimisation technology. The query is then executed and the results passed back to the application or user who requested the data. Architecture: Jet passes the data retrieved for the query in a dynaset. This is a set of data that is dynamically linked back to the database. Instead of having the query result stored in a temporary table, where the data cannot be updated directly by the user, the dynaset allows the user to view and update the data contained in the dynaset. Thus, if a university lecturer queries all students who received a distinction in their assignment and finds an error in a student's record, they would only need to update the data in the dynaset, which would automatically update the student's database record without the need to send a specific update query after storing the query results in a temporary table. History: Jet originally started in 1992 as an underlying data access technology that came from a Microsoft internal database product development project, code-named Cirrus. Cirrus was developed from a pre-release version of Visual Basic code and was used as the database engine of Microsoft Access.
Tony Goodhew, who worked for Microsoft at the time, says "It would be reasonably accurate to say that up until that stage Jet was more the name of the team that was assigned to work on the DB engine modules of Access rather than a component team. For VB [Visual Basic] 3.0 they basically had to tear it out of Access and graft it onto VB. That's why they've had all those Jet/ODBC problems in VB 3.0." Jet became more componentised when Access 2.0 was released, because the Access ODBC developers used parts of the Jet code to produce the ODBC driver. A retrofit was provided that allowed Visual Basic 3.0 users to use the updated Jet issued in Access 2.0. Jet 2.0 was released as several dynamic link libraries (DLLs) that were utilised by application software, such as Microsoft's Access database. DLLs in Windows are "libraries" of common code that can be used by more than one application—by keeping code that more than one application uses under a common library, which each of these applications can use independently, code maintenance is reduced and the functionality of applications increases, with less development effort. Jet 2.0 comprised three DLLs: the Jet DLL, the Data Access Objects (DAO) DLL and several external ISAM DLLs. The Jet DLL determined what sort of database it was accessing, and how to perform what was requested of it. If the data source was an MDB file (a Microsoft Access format) then it would directly read and write the data to the file. If the data source was external, then it would call on the correct ODBC driver to perform its request. The DAO DLL was a component that programmers could use to interface with the Jet engine, and was mainly used by Visual Basic and Access Basic programmers. The ISAM DLLs were a set of modules that allowed Jet to access three ISAM-based databases: xBase, Paradox and Btrieve.[2] Jet 2.0 was replaced with Jet 2.1, which used the same database structure but different locking strategies, making it incompatible with Jet 2.0.
History: Jet 3.0 included many enhancements, including a new index structure that reduced storage size and the time that was taken to create indices that were highly duplicated, the removal of read locks on index pages, a new mechanism for page reuse, a new compacting method for which compacting the database resulted in the indices being stored in a clustered-index format, a new page allocation mechanism to improve Jet's read-ahead capabilities, improved delete operations that speeded processing, multithreading (three threads were used to perform read ahead, write behind, and cache maintenance), implicit transactions (users did not have to instruct the engine to start manually and commit transactions to the database), a new sort engine, long values (such as memos or binary data types) were stored in separate tables, and dynamic buffering (whereby Jet's cache was dynamically allocated at start up and had no limit and which changed from a first in, first out (FIFO) buffer replacement policy to a least recently used (LRU) buffer replacement policy). Jet 3.0 also allowed for database replication. History: Jet 3.0 was replaced by Jet 3.5, which uses the same database structure, but different locking strategies, making it incompatible with Jet 3.0. Jet 4.0 gained numerous additional features and enhancements. 
History: These included: Unicode character storage support, along with an NT sorting method that was also implemented in the Windows 95 version; changes to data types to be more like SQL Server's (LongText or Memo; Binary; LongBinary; Date/Time; Real; Float4; IEEESingle; Double; Byte or Tinyint; Integer or its synonyms Smallint, Integer2, and Short; LongInteger or its synonyms Int, Integer, and Counter; Currency or Money; Boolean; and GUID); a new decimal data type; indexable Memo fields; compressible data types; SQL enhancements to make Jet conform more closely to ANSI SQL-92; finer-grained security; views support; procedure support; invocation and termination (committing or rolling back) of transactions; enhanced table creation and modification; referential integrity support; connection control (connected users remain connected, but once disconnected they cannot reconnect, and new connections cannot be made; this is useful for database administrators to gain control of the database); a user list, which allows administrators to determine who is connected to the database; record-level locking (previous versions only supported page locking); and bi-directional replication with MS SQL Server. Microsoft Access versions from Access 2000 to Access 2010 included an "Upsizing Wizard" which could "upsize" (upgrade) a Jet database to "an equivalent database on SQL Server with the same table structure, data, and many other attributes of the original database". Reports, queries, macros and security were not handled by this tool, meaning that some manual modifications might have been needed if the application was heavily reliant on these Jet features. A standalone version of the Jet 4 database engine was a component of Microsoft Data Access Components (MDAC), and was included in every version of Windows from Windows 2000 on. The Jet database engine was only 32-bit and did not run natively under 64-bit versions of Windows.
This meant that native 64-bit applications (such as the 64-bit versions of SQL Server) could not access data stored in MDB files through ODBC, OLE DB, or any other means, except through intermediate 32-bit software (running in WoW64) that acted as a proxy for the 64-bit client. With version 2007 onwards, Access includes an Office-specific version of Jet, initially called the Office Access Connectivity Engine (ACE), but which is now called the Access Database Engine (however, MS-Access consultants and VBA developers who specialize in MS-Access are more likely to refer to it as "the ACE Database Engine"). This engine was backward-compatible with previous versions of the Jet engine, so it could read and write (.mdb) files from earlier Access versions. It introduced a new default file format, (.accdb), that brought several improvements to Access, including complex data types such as multivalue fields, the attachment data type and history tracking in memo fields. It also brought security changes and encryption improvements and enabled integration with Microsoft Windows SharePoint Services 3.0 and Microsoft Office Outlook 2007. It can be obtained separately. The engine in Microsoft Access 2010 discontinued support for Access 1.0, Access 2.0, Lotus 1-2-3 and Paradox files. A 64-bit version of Access 2010 and its ACE Driver/Provider was introduced, which in essence provides a 64-bit version of Jet. The driver is not part of the Windows operating system, but is available as a redistributable. The engine in Microsoft Access 2013 discontinued support for Access 95, Access 97 and xBase files, and it also discontinued support for replication. Version 1608 of Microsoft Access 2016 restored support for xBase files, and Version 1703 introduced a Large Number data type. From a data access technology standpoint, Jet is considered a deprecated technology by Microsoft, but Microsoft continues to support ACE as part of Microsoft Access.
Compatibility: Microsoft provides the JET drivers for Microsoft Windows only, and third-party software support for JET databases is almost exclusively found on Windows. However, there are open-source projects that enable working with JET databases on other platforms, including Linux; notable examples are MDB Tools, its much-extended Java port named Jackcess, and UCanAccess.
**Konqueror** Konqueror: Konqueror is a free and open-source web browser and file manager that provides web access and file-viewer functionality for file systems (such as local files, files on a remote FTP server and files in a disk image). It forms a core part of the KDE Software Compilation. Developed by volunteers, Konqueror can run on most Unix-like operating systems. The KDE community licenses and distributes Konqueror under GNU GPL-2.0-or-later. Konqueror: The name "Konqueror" echoes a colonization paradigm to reference the two primary competitors at the time of the browser's first release: "first comes the Navigator, then Explorer, and then the Konqueror". Konqueror: It also follows the KDE naming convention: the names of most KDE programs begin with the letter K. Konqueror first appeared with version 2 of KDE on October 23, 2000. It replaced its predecessor, KFM (KDE file manager). With the release of KDE 4, Dolphin replaced Konqueror as the default KDE file manager, but the KDE community continues to maintain Konqueror as the default KDE web browser. Major supported protocols: Konqueror can utilize all KIOslaves installed on the user's system. Some examples include: FTP and SFTP/SSH browser; Samba (Microsoft file-sharing) browser; HTTP browser; IMAP mail client; ISO (CD image) viewer; VNC viewer. A complete list is available in the KDE Info Center's Protocols section. User interface: Konqueror supports a tabbed document interface and split views, wherein a window can contain multiple documents in tabs. Multiple document interfaces are not supported; however, it is possible to recursively divide a window to view multiple documents simultaneously, or simply open another window. User interface: Konqueror's user interface is somewhat reminiscent of Microsoft's Internet Explorer, though it is more customizable. It works extensively with "panels", which can be rearranged or added.
For example, one could have an Internet bookmarks panel on the left side of the browser window, and by clicking a bookmark, the respective web page would be viewed in the larger panel to the right. Alternatively, one could display a hierarchical list of folders in one panel and the content of the selected folder in another. Panels are quite flexible and can even include, among other KParts (components), a console window, a text editor, or a media player. Panel configurations can be saved, and there are some default configurations. (For example, "Midnight Commander" displays a screen split into two panels, where each one contains a folder, Web site, or file view.) Navigation functions (back, forward, history, etc.) are available during all operations. Most keyboard shortcuts can be remapped using a graphical configuration, and navigation can be conducted through an assignment of letters to nodes on the active file by pressing the control key. The address bar has extensive autocompletion support for local directories, past URLs, and past search terms. Web browser: Konqueror has been developed as an autonomous web browser project. It uses KHTML as its browser engine, which is compliant with HTML and supports JavaScript, Java applets, CSS, SSL, and other relevant open standards. An alternative layout engine, kwebkitpart, is available from Extragear. While KHTML is the default web-rendering engine, Konqueror is a modular application and other rendering engines are available. In particular, the WebKitPart component, using the KHTML-derived WebKit engine, has seen a lot of support in the KDE 4 series. However, the KHTML rendering backend contains unique features, such as the ability to save a full archive of any given webpage into a single file with the ".war" extension.
Web browser: Konqueror integrates several customizable search services which can be accessed by entering the service's abbreviation code (for example, gg: for Google, or wp: for Wikipedia) followed by the search term(s). Users can add their own search services; for instance, to retrieve English Wikipedia articles, a shortcut may be added with the URL http://en.wikipedia.org/wiki/Special:Search?search=\{@}&go=Go. KHTML's rendering speed is on par with that of competing browsers, but sites with customized JavaScript are often problematic due to KHTML's much smaller mind- and market-share, resulting in fewer JavaScript features built into the JS engine. Kubuntu's 10.10 Maverick Meerkat release switched the default browser from Konqueror to rekonq and added a Firefox installer. Kubuntu subsequently switched from rekonq to Firefox with the release of 14.04 Trusty Tahr. File manager: Konqueror also allows browsing the local directory hierarchy—either by entering locations in the address bar, or by selecting items in the file browser window. It allows browsing in different views, which differ in their usage of icons and layout. Files can also be executed, viewed, copied, moved, and deleted. The user can also open an embedded version of Konsole, via KDE's KParts technology, in which they can directly execute shell commands. In addition to the Konsole KPart, Konqueror can also use a Filelight KPart to view a radial diagram of the user's filesystem. File manager: Although this functionality has not been removed from Konqueror, as of KDE 4, Dolphin has replaced Konqueror as the default file manager. Dolphin can – like Konqueror – divide each window or tab into multiple panes. Konqueror makes more powerful use of this feature, allowing as many vertically and horizontally divided panes as desired.
Each can link to different content or even remote locations, so that Konqueror becomes a powerful graphical tool to manage content on multiple servers all in one window, "dragging and dropping" files between locations. File viewer: Using the KParts object model, Konqueror executes components that are capable of viewing (and sometimes editing) specific filetypes and embeds their client area directly into the Konqueror panel in which the respective files have been opened. This makes it possible to, for example, view an OpenDocument (via Calligra) or PDF document directly within Konqueror. Any application that implements the KParts model correctly can be embedded in this fashion. File viewer: KParts can also be used to embed certain types of multimedia content into HTML pages; for example, the KMPlayer KPart enables Konqueror to show embedded video on web pages. KIO: In addition to browsing files and web sites, Konqueror utilizes KIO plugins to extend its capabilities well beyond those of other browsers and file managers. It uses components of KIO, the KDE I/O plugin system, to access different protocols such as HTTP and FTP (support for these is built-in), WebDAV, SMB (Windows shares), SFTP and FISH (a handy replacement for the latter when the SFTP subsystem is disabled on the remote host). KIO: Similarly, Konqueror can use KIO plugins (called IOslaves) to access ZIP files and other archives, to process ed2k links (edonkey/emule), or even to browse audio CDs ("audiocd:/") and rip them via drag-and-drop. Likewise, the "man:" and "info:" IOslaves can be used to fetch man and info formatted documentation. Konqueror Embedded: An embedded-systems version, Konqueror Embedded, is also available. Unlike the full version of Konqueror, Embedded Konqueror is purely a web browser. It does not require KDE or even the X Window System.
A single static library, it is designed to be as small as possible while providing all necessary functions of a web browser, such as support for HTML 4, CSS, JavaScript, cookies, and SSL. Download manager: KGet is a free download manager for KDE and is part of the KDE Network package. By default it is the download manager used by Konqueror, but it can also be used with Mozilla Firefox and Chromium-based web browsers as well as rekonq. KGet was featured by Tux Magazine and Free Software Magazine. Download manager: History On KDE 3, KGet 0.8.x supported HTTP/FTP downloads. On KDE Software Compilation 4, KGet 2 was released; it supported bandwidth throttling, segmentation, multi-threading, and the BitTorrent protocol. Features Downloading files from FTP, HTTP(S) and BitTorrent sources. Pausing and resuming of downloading files, as well as the ability to restart a download. Gives information about current and pending downloads. Embedding into the system tray of the host system. Integration with the KDE Konqueror and Rekonq web browsers. Metalink support; Metalinks contain multiple URLs for downloads, along with checksums and other information. Automatically tags downloaded files with download information (such as the download URL) using Nepomuk. Download from multiple servers to speed up download time (segmented file transfer).
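The segmented file transfer mentioned in the feature list above boils down to splitting a file into byte ranges that can be fetched in parallel, for example via HTTP Range requests, from one or several mirrors. The segment_ranges() helper below is a hypothetical sketch of that arithmetic, not KGet's actual code:

```python
# Illustrative sketch of segmented file transfer: the file is split into
# inclusive byte ranges that could be requested from multiple servers
# in parallel (e.g. "Range: bytes=0-3"). segment_ranges() is a
# hypothetical helper, not part of KGet.
def segment_ranges(size: int, segments: int) -> list[tuple[int, int]]:
    """Return inclusive (start, end) byte-range pairs covering `size` bytes."""
    base, extra = divmod(size, segments)
    ranges, start = [], 0
    for i in range(segments):
        length = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((start, start + length - 1))
        start += length
    return ranges

# A 10-byte file split across 3 servers:
print(segment_ranges(10, 3))  # [(0, 3), (4, 6), (7, 9)]
```

Each segment can then be downloaded independently and the pieces written into the target file at their respective offsets, which is also what makes pausing and resuming individual segments straightforward.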
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Propane-1,2,3-tricarboxylic acid** Propane-1,2,3-tricarboxylic acid: Propane-1,2,3-tricarboxylic acid, also known as tricarballylic acid, carballylic acid, and β-carboxyglutaric acid, is a tricarboxylic acid. The compound is an inhibitor of the enzyme aconitase and therefore interferes with the Krebs cycle. Esters of propane-1,2,3-tricarboxylic acid are found in natural products such as the mycotoxins fumonisins B1 and B2 and AAL toxin TA, and in macrocyclic inhibitors of Ras farnesyl-protein transferase (FPTase) such as actinoplanic acid. Propane-1,2,3-tricarboxylic acid: Propane-1,2,3-tricarboxylic acid can be synthesized in two steps from fumaric acid. Mechanism of the inhibition of aconitase: Aconitase normally catalyses, via the intermediate aconitic acid, the interconversion of citric acid and isocitric acid. Propane-1,2,3-tricarboxylic acid is well suited to bind to aconitase, as it lacks only the hydroxyl group in comparison to citric acid. However, the hydroxyl group is essential for proceeding from citric acid to aconitic acid, so the enzyme is unable to complete the reaction with propane-1,2,3-tricarboxylic acid.
**PCTI Solutions** PCTI Solutions: PCTI Solutions is a provider of electronic document management and transfer solutions designed specifically for healthcare organisations in the UK. Docman: Docman is used by over 6,000 GP practices to manage documents electronically. The software is used by every GP practice in NHS Scotland and is cited in the Good Practice Guidelines for General Practice. EDT Hub: EDT Hub is used by over 40 NHS Trusts to send documents electronically from NHS Secondary Care Trusts to NHS Primary Care Trusts. More recently the solution has been chosen by NHS Scotland for a national roll-out, as reported in The Guardian. EDT Hub has the ability to link multiple secondary care organisations with multiple primary care organisations, and saves the NHS money because it reduces paper consumption and printing costs whilst speeding up delivery and reducing the risk of document loss. Documents typically handled by EDT Hub include discharge summaries, discharge letters, encounter reports, radiology reports, outpatient clinic letters, and out-of-hours reports. This product is notable as one of the first of its kind to be granted NHS Interoperability Toolkit Accreditation.
**Acoustic scale** Acoustic scale: In music, the acoustic scale, overtone scale, Lydian dominant scale (Lydian ♭7 scale), or Mixolydian ♯4 scale is a seven-note synthetic scale. It is the fourth mode of the ascending melodic minor scale. It differs from the major scale in having an augmented fourth and a minor seventh scale degree. The term "acoustic scale" is sometimes used to describe a particular mode of this seven-note collection (e.g. the specific ordering C–D–E–F♯–G–A–B♭) and is sometimes used to describe the collection as a whole (e.g. including orderings such as E–F♯–G–A–B♭–C–D). History: In traditional music, the overtone scale persists in the music of the peoples of South Siberia, especially in Tuvan music. Overtone singing and the sound of the Jew's harp are naturally rich in overtones, but melodies performed on the igil (a bowed instrument distantly related to the violin) and plucked string instruments such as the doshpuluur or the chanzy also often follow the overtone scale, sometimes with pentatonic slices. The acoustic scale appears sporadically in nineteenth-century music, notably in the works of Franz Liszt and Claude Debussy. It also plays a role in the music of twentieth-century composers, including Igor Stravinsky, Béla Bartók, and Karol Szymanowski, who was influenced by folk music from the Polish Highlands. The acoustic scale is also remarkably common in the music of Nordeste, the northeastern region of Brazil (see Escala nordestina). It plays a major role in jazz harmony, where it is used to accompany dominant seventh chords built on the first scale degree. The term "acoustic scale" was coined by Ernő Lendvai in his analysis of the music of Béla Bartók. Construction: The name "acoustic scale" refers to the resemblance to the eighth through 14th partials in the harmonic series. Starting on C1, the harmonic series is C1, C2, G2, C3, E3, G3, B♭3*, C4, D4, E4, F↑4*, G4, A♭4*, B♭4*, B4, C5 ...
The partials from C4 through B♭4 (the eighth through the 14th) spell out an acoustic scale on C4. However, in the harmonic series, the notes marked with asterisks are out of tune: F↑4* is almost exactly halfway between F♮4 and F♯4, A♭4* is closer to A♭4 than to A♮4, and B♭4* is too flat to be generally accepted as part of an equal-tempered scale. Construction: The acoustic scale may be formed from a major triad (C E G) with an added minor seventh and raised fourth (B♭ and F♯, drawn from the overtone series) and major second and major sixth (D and A). Lendvai described the use of the "acoustic system" accompanying the acoustic scale in Bartók's music, since it entails structural characteristics such as symmetrically balanced sections, especially periods, in contrast with his use of the golden ratio. In Bartók's music, the acoustic scale is characterized in various ways, including as diatonic, dynamic, tense, and triple- or other odd-metered, as opposed to the music structured by the Fibonacci sequence, which is chromatic, static, relaxed, and duple-metered. Another way to regard the acoustic scale is that it occurs as a mode of the melodic minor scale starting on the fourth degree. Hence, the acoustic scale starting on D is D, E, F♯, G♯, A, B, C, D, containing the familiar sharpened F and G of A melodic minor. The F♯ turns the D minor tetrachord into a major tetrachord, and the G♯ turns it Lydian. Therefore, many occurrences of this scale in jazz may be regarded as unsurprising; it shows up in modal improvisation and composition over harmonic progressions which invite use of the melodic minor.
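The construction described above, the acoustic scale as the fourth mode of the ascending melodic minor scale, can be checked with a short script. The note-name table and helpers are illustrative; the sketch ignores enharmonic spelling, so A♯ stands in for B♭:

```python
# The ascending melodic minor scale as semitone steps
# (e.g. A melodic minor: A B C D E F# G# A).
MELODIC_MINOR = [2, 1, 2, 2, 2, 2, 1]

def mode(steps, degree):
    """Rotate an interval pattern to start on the given scale degree (1-based)."""
    i = degree - 1
    return steps[i:] + steps[:i]

# Fourth mode of melodic minor = acoustic scale.
ACOUSTIC = mode(MELODIC_MINOR, 4)        # [2, 2, 2, 1, 2, 1, 2]

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def spell(root, steps):
    """Return the pitch classes of the scale built on `root` with `steps`."""
    pitch, out = NOTE_NAMES.index(root), [root]
    for s in steps[:-1]:                  # the last step returns to the octave
        pitch = (pitch + s) % 12
        out.append(NOTE_NAMES[pitch])
    return out

print(spell("C", ACOUSTIC))  # ['C', 'D', 'E', 'F#', 'G', 'A', 'A#']  (A# = B♭)
print(spell("D", ACOUSTIC))  # ['D', 'E', 'F#', 'G#', 'A', 'B', 'C']
```

The second line reproduces the D acoustic scale given in the text (D, E, F♯, G♯, A, B, C), confirming that rotating A melodic minor to its fourth degree yields the Lydian dominant collection.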
**Get a wiggle on** Get a wiggle on: Get a wiggle on is a 19th-century English-language idiom and colloquial expression which means to move quickly or hurry. Etymology: In 1891 Wilson's Photographic Magazine published "The American Psalm of Life", which began, "Get a wiggle on, my lad, Don't walk at a funeral pace..." By 1919 the phrase was also used in a song: "Get a wiggle on, get a wiggle on, Don't stand there with a giggle-on." By the 1920s the term had found its way into the American language as slang. History: The Cambridge Dictionary defines the phrase as meaning to hurry up. Get a wiggle on is both an English-language idiom and a colloquial expression. The phrase has been in use since 1891 and is still being used in the 21st century. The phrase is also slang in Australia, and it appears in the Aussie Slang Dictionary.
**SYS (command)** SYS (command): In computing, sys is a command used in many operating system command-line shells and also in Microsoft BASIC. DOS, Windows, etc.: SYS is an external command of Seattle Computer Products 86-DOS, Microsoft MS-DOS, IBM PC DOS, Digital Research FlexOS, IBM/Toshiba 4690 OS, PTS-DOS, Itautec/Scopus Tecnologia SISNE plus, and Microsoft Windows 9x operating systems. It is used to make an already formatted medium bootable. It installs, into the first logical sector of the volume, a boot sector capable of booting the operating system. Further, it copies the principal DOS system files, that is, the DOS-BIOS (IO.SYS or IBMBIO.COM) and the DOS kernel (MSDOS.SYS or IBMDOS.COM), into the root directory of the target. Due to restrictions in the implementation of the boot loaders in the boot sector and in DOS's IO system, these two files must reside in the first two directory entries and be stored at the beginning of the data area under MS-DOS and PC DOS. Depending on the version, the whole files or only a varying number of sectors of the DOS-BIOS (down to only three sectors in modern issues of DOS) will have to be stored in one consecutive part. SYS will try to physically rearrange other files on the medium in order to make room for these files in their required locations. This is why SYS needs to bypass the filesystem driver in the running operating system. Other DOS derivatives such as DR-DOS do not have any such restrictions imposed by the design of their boot loaders; therefore, under these systems, SYS will install a DR-DOS boot sector, which is capable of mounting the filesystem, and can then simply copy the two system files into the root directory of the target. DOS, Windows, etc.: SYS will also copy the command line shell (COMMAND.COM) into the root directory. The command can be applied to hard drives and floppy disks to repair or create a boot sector.
DOS, Windows, etc.: Although an article on Microsoft's website says the SYS command was introduced in MS-DOS version 2.0, this is incorrect. SYS actually existed in 86-DOS 0.3 already. According to The MS-DOS Encyclopedia, the command was licensed to IBM as part of the first version of MS-DOS, and as such it was part of MS-DOS/PC DOS from the very beginning (IBM PC DOS 1.0 and MS-DOS 1.25). DOS, Windows, etc.: DR DOS 6.0 includes an implementation of the SYS command. Syntax: The command syntax is SYS [drive1:][path] drive2:, where [drive1:][path] is the location of the system files and drive2: is the drive to which the files will be copied. Example: SYS A: transfers the system files from the current drive to a formatted disk in drive A:. Microsoft BASIC: SYS is also a command in Microsoft BASIC used to execute a machine language program in memory. The command took the form SYS n, where n is the memory location where the executable code starts; for example, on the Commodore 64, SYS 64738 jumps to the machine's reset routine in ROM. Home computer platforms typically publicised dozens of entry points to built-in routines (such as Commodore's KERNAL) that were used by programmers and users to access functionality not easily accessible through BASIC.
**José A. Carrillo** José A. Carrillo: José Antonio Carrillo de la Plata (pronounced [xoˈse anˈtonjo kaˈriʎo ðe la ˈplata]; born 29 December 1969) is a Spanish mathematician primarily known for his contributions to applied partial differential equations, numerical analysis, many-particle systems and kinetic theory. His works make use of methods from functional analysis, calculus of variations, optimal transport, gradient descent and entropy methods. Currently he is Professor of Nonlinear Partial Differential Equations at the Mathematical Institute of the University of Oxford, where he started in April 2020. He is also Tutorial Fellow in Applied Mathematics at The Queen's College, Oxford. He was previously Chair in Applied and Numerical Analysis at the Department of Mathematics, Imperial College London, United Kingdom, a position he held from October 2012 until March 2020. He served as Chair of the Applied Mathematics Committee of the European Mathematical Society during the period 2014–2017. He was formerly ICREA Research Professor at the Universitat Autònoma de Barcelona, Spain, during the period 2003–2012, and held positions at the University of Granada, Spain, and the University of Texas at Austin, United States during the years 1992–2003. He was recognized with the SeMA prize (2003) and the GAMM Richard von Mises prize (2006) for young researchers. He was a holder of the Royal Society Wolfson Fellowship during the period 2012–2017. José A. Carrillo: He was recognized as a Highly Cited Researcher in the years 2015, 2016, 2017, 2018, 2019 and 2020. In 2018, he was elected to the European Academy of Sciences, and nominated Changjiang Visiting Scholar (2018–2021) by the Ministry of Education of the People's Republic of China.
In 2019, he was elected a fellow of the Society for Industrial and Applied Mathematics for his outstanding contributions to applied mathematics in complex particle dynamics and his service to the Applied Mathematics Community of the European Mathematical Society. He was elected a foreign member of the Spanish Royal Academy of Sciences in 2021 and a Member of the Academia Europaea in 2023. He received the 2022 Echegaray Medal, awarded by the Spanish Royal Academy of Sciences in recognition of an exceptional scientific career. Research and career: José A. Carrillo was born in 1969 in Granada, Spain, where he completed a Bachelor in Mathematics (1992) and a Bachelor in Computer Science (1992) at the University of Granada. He obtained his Ph.D. degree in Mathematics at the University of Granada under the supervision of Prof. Juan Soler for his dissertation entitled "Estudio de soluciones débiles del sistema de Vlasov-Poisson-Fokker-Planck" ("Study of weak solutions to the Vlasov-Poisson-Fokker-Planck system"). Research and career: Mathematical work He has authored more than 200 mathematical research papers centered around partial differential equations. He has made contributions to the understanding of the asymptotic and qualitative behavior of solutions to nonlinear diffusion equations, particularly through the use of entropy methods and gradient flow techniques. He obtained results on the exponential convergence to equilibrium of the porous medium equation by using an analogue of the Bakry-Émery method. He has also worked on kinetic and diffusive models in mathematical biology describing chemotaxis, flocking and swarming behavior of interacting agents, as well as on computational neuroscience. In particular, he made contributions to the complete classification of the asymptotic behavior of solutions to the Patlak-Keller-Segel model for the aggregation of cells in biological systems.
He was appointed as Changjiang Visiting Scholar by the Ministry of Education of the People's Republic of China for 2018–2021. Research and career: Awards and honors SeMA (Sociedad Española de Matemática Aplicada) Young Researcher Prize, 2003. Richard von Mises Prize of the International Association of Applied Mathematics and Mechanics (GAMM), 2006. Royal Society Wolfson Research Merit Award, 2012. Student Academic Choice Award (SACA) for Best Supervision, Imperial College London, 2016. Highly Cited Researcher in the years 2015, 2016, 2017, 2018, 2019 and 2020. Fellow of the Society for Industrial and Applied Mathematics, 2019. Foreign Member of the Spanish Royal Academy of Sciences, 2021. Echegaray Medal, awarded by the Spanish Royal Academy of Sciences, 2022. Member of the Academia Europaea, 2023. Science policy He has participated in the Committee of Science Policy of the Sociedad de Científicos Españoles en el Reino Unido (Spanish Society of Researchers in the UK) SRUK/CERU. He contributed towards the Science Policy Report of SRUK/CERU for the Spanish General Elections of 2015.
**DNA damage-binding protein** DNA damage-binding protein: DNA damage-binding protein or UV-DDB is a protein complex that is responsible for repair of UV-damaged DNA. This complex is composed of two protein subunits, a large subunit DDB1 (p127) and a small subunit DDB2 (p48). When cells are exposed to UV radiation, DDB1 moves from the cytosol to the nucleus and binds to DDB2, thus forming the UV-DDB complex. This complex formation is highly favorable, as demonstrated by UV-DDB's binding preference for, and high affinity to, UV lesions in DNA. The complex functions in nucleotide excision repair, recognising UV-induced (6-4) pyrimidine-pyrimidone photoproducts and cyclobutane pyrimidine dimers. Structure: The helical domain at the N-terminus of DDB2 binds to UV-damaged DNA with high affinity to form the UV-DDB complex. This helical binding interaction allows the protein to bind immediately after detecting UV-damaged DNA; DNA binds to DDB2 only when damaged by UV radiation. Binding with high affinity to the helical domain of DDB2 in the dimer form, UV-DDB, is facilitated by the N-terminal alpha-helical paddle and beta wings of the DDB2 subunit. Together, the alpha-helical fold and the beta-wing loops form a "winged helix" motif. The dimerized complex acts as a scaffold for DNA damage repair pathways and allows other proteins to detect, interact with, and repair UV-damaged DNA. DDB2: DDB2 is part of the CUL4A–RING ubiquitin ligase (CRL4) complex. It was previously thought that DDB2 acts only to recognize lesions in UV-damaged DNA; it has since been found that DDB2 also plays a role in promoting chromatin unfolding. This role is independent of DDB2's role in the CRL4 complex. Damage sensor role: UV-DDB is not only responsible for the repair of damaged DNA, it can also function as a damage sensor. In base excision repair, UV-DDB stimulates OGG1 and APE1 activities.
During DNA damage, the proteins OGG1 and APE1 encounter difficulty in repairing lesions in DNA wrapped around a nucleosome. Additionally, histones make the DNA inaccessible because of the way they coil and wrap DNA into chromatin. UV-DDB plays a role in identifying the damaged sites within the chromatin, thereby allowing access to base excision repair proteins. When UV-DDB is recruited to these damaged sites, it recognizes the OGG1–AP DNA complex and further accelerates the turnover of the glycosylases.
**Nalidixic acid** Nalidixic acid: Nalidixic acid (tradenames Nevigramon, NegGram, Wintomylon and WIN 18,320) is the first of the synthetic quinolone antibiotics. In a technical sense, it is a naphthyridone, not a quinolone: its ring structure is a 1,8-naphthyridine nucleus that contains two nitrogen atoms, unlike quinoline, which has a single nitrogen atom. Synthetic quinolone antibiotics were discovered by George Lesher and coworkers as a byproduct of chloroquine manufacture in the 1960s; nalidixic acid itself was used clinically, starting in 1967. Nalidixic acid is effective primarily against Gram-negative bacteria, with minor anti-Gram-positive activity. In lower concentrations, it acts in a bacteriostatic manner; that is, it inhibits growth and reproduction. In higher concentrations, it is bactericidal, meaning that it kills bacteria instead of merely inhibiting their growth. Nalidixic acid: It has historically been used for treating urinary tract infections caused, for example, by Escherichia coli, Proteus, Shigella, Enterobacter, and Klebsiella. It is no longer clinically used for this indication in the USA, as less toxic and more effective agents are available. The marketing authorization for nalidixic acid has been suspended throughout the EU. It is also used in research as a tool for studying the regulation of bacterial division. It selectively and reversibly blocks DNA replication in susceptible bacteria. Nalidixic acid and related antibiotics inhibit a subunit of DNA gyrase and topoisomerase IV and induce formation of cleavage complexes. It also inhibits the nicking-closing activity on the subunit of DNA gyrase that releases the positive binding stress on the supercoiled DNA. Adverse effects: Hives, rash, intense itching, or fainting soon after a dose may be a sign of anaphylaxis. Common adverse effects include rash, itchy skin, blurred or double vision, halos around lights, changes in color vision, nausea, vomiting, and diarrhea.
Nalidixic acid may also cause convulsions and hyperglycemia, photosensitivity reactions, and sometimes haemolytic anaemia, thrombocytopenia or leukopenia. Increased intracranial pressure has occasionally been reported, particularly in infants and young children. Overdose: In case of overdose, the patient experiences headache, visual disturbances, balance disorders, mental confusion, metabolic acidosis and seizures. Spectrum of bacterial susceptibility and resistance: Aeromonas hydrophila, Clostridium and Haemophilus are generally susceptible to nalidixic acid, while other bacteria such as Bifidobacteria, Lactobacillus, Pseudomonas and Staphylococcus are resistant. Salmonella enterica serovar Typhimurium strain ATCC14028 acquires nalidixic acid resistance when the gyrB gene is mutated (strain IR715).
**First-person shooter engine** First-person shooter engine: A first-person shooter engine is a video game engine specialized for simulating 3D environments for use in a first-person shooter video game. First-person refers to the view where the players see the world from the eyes of their characters. Shooter refers to games which revolve primarily around wielding firearms and killing other entities in the game world, either non-player characters or other players. First-person shooter engine: The development of FPS graphics engines is characterized by a steady increase in technologies, with occasional breakthroughs. Attempts at defining distinct generations lead to arbitrary choices of what constitutes a highly modified version of an 'old engine' and what is a new engine. First-person shooter engine: The classification is complicated as game engines blend old and new technologies. Features considered advanced in a new game one year become the expected standard the next year. Games with a combination of both older and newer features are the norm. For example, Jurassic Park: Trespasser (1998) introduced physics to the FPS genre, which did not become common until around 2002. Red Faction (2001) featured a destructible environment, something still not common in engines years later. Timeline: 1970s and 1980s: Early FPS graphics engines Game rendering for this early generation of FPSs was already from the first-person perspective and already involved shooting things; however, the graphics were mostly made up of vector graphics. Timeline: There are two possible claimants for the first FPS, Maze War and Spasim. Maze War was developed in 1973 and involved a single player making his way through a maze of corridors rendered using a fixed perspective. Multiplayer capabilities, where players attempted to shoot each other, were added later and were networked in 1974. Spasim was originally developed in 1974 and involved players moving through a wire-frame 3D universe.
Spasim could be played by up to 32 players on the PLATO network. Developed in-house by Incentive Software, the Freescape engine is considered to be one of the first proprietary 3D engines to be used for computer games, although the engine was not used commercially outside of Incentive's own titles. The first game to use this engine was the puzzle game Driller in 1987. Timeline: Early 1990s: Wireframes to 2.5D worlds and textures Games of this generation are often regarded as Doom clones. They were not capable of full 3D rendering, but used 2.5D ray-casting techniques to draw the environment and sprites to draw enemies instead of 3D models. However, these games began to use textures to render the environment instead of simple wire-frame models or solid colors. Timeline: Hovertank 3D, from id Software, was the first to use this technique in 1990, but was still not using textures, a capability which was added shortly after in Catacomb 3D (1991), and then in the Wolfenstein 3D engine, which was later used for several other games. Catacomb 3D was also the first game to show the player's hand on-screen, furthering the player's immersion in the character's role. The Wolfenstein 3D engine was still very primitive: it did not apply textures to the floor and ceiling, the ray casting restricted walls to a fixed height, and levels were all on the same plane.
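The 2.5D ray-casting idea described above can be sketched as follows: for each screen column a ray is marched through a 2D grid map until it hits a wall cell, and the wall's on-screen height is derived from the hit distance. The map, constants, and simple fixed-step march below are invented for illustration; real engines of the Wolfenstein 3D era used a faster grid-stepping (DDA) traversal, but the principle is the same:

```python
import math

# Invented 5x5 grid map: '#' is a wall cell, '.' is open floor.
MAP = ["#####",
       "#...#",
       "#.#.#",
       "#...#",
       "#####"]

def cast(px, py, angle, step=0.01, max_dist=10.0):
    """March a ray from (px, py) until it enters a wall cell; return the distance."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

def column_height(dist, screen_h=200):
    """Perspective: apparent wall height is inversely proportional to distance."""
    return min(screen_h, int(screen_h / max(dist, 1e-6)))

# Looking straight right from inside the room: the wall is ~2.5 units away.
d = cast(1.5, 1.5, 0.0)
print(round(d, 2), column_height(d))
```

Because the map is a flat 2D grid and only the horizontal hit distance matters, walls all have a fixed height and levels lie on a single plane, exactly the restrictions the text attributes to the Wolfenstein 3D engine.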
Timeline: Doom's success spawned several games using the same engine or similar techniques, giving them the name Doom clones. The Build engine, used in Duke Nukem 3D (1996), later removed some of the limitations of id Tech 1, for instance adding support for room-over-room geometry by stacking sectors on top of one another, but the underlying techniques remained the same. Timeline: Mid 1990s: 3D models, beginnings of hardware acceleration In the mid-1990s, game engines recreated true 3D worlds with arbitrary level geometry. Instead of sprites, the engines used simply-textured polygonal objects (single-pass texturing, no lighting details). Timeline: FromSoftware released King's Field, a full polygon free-roaming first-person real-time action title for the Sony PlayStation in December 1994. Sega's 32X release Metal Head was a first-person shooter mecha simulation game that used fully texture-mapped, 3D polygonal graphics. A year prior, Exact released the Sharp X68000 computer game Geograph Seal, a fully 3D polygonal first-person shooter that employed platform game mechanics and had most of the action take place in free-roaming outdoor environments rather than the corridor labyrinths of Wolfenstein 3D. The following year, Exact released its successor for the PlayStation console, Jumping Flash!, which used the same game engine but adapted it to place more emphasis on the platforming rather than the shooting. The Jumping Flash! series continued to use the same engine. Dark Forces, released in 1995 by LucasArts, has been regarded as one of the first "true 3-D" first-person shooter games. Its engine, the Jedi Engine, was one of the first engines to support an environment in three dimensions: areas can exist next to each other in all three planes, including on top of each other (such as stories in a building). Though most of the objects in Dark Forces are sprites, the game does include support for textured 3D-rendered objects.
Another game regarded as one of the first true 3D first-person shooters is Parallax Software's 1994 shooter Descent. The Quake engine (Quake, 1996) used fewer animated sprites and used true 3D geometry and lighting, using elaborate techniques such as z-buffering to speed up the rendering. Quake was also the first true-3D game to use a special map design system to preprocess and pre-render the 3D environment: the 3D environment in which the game took place (referred to for the first time as a "map") was simplified during the creation of the map to reduce the processing required when playing the game. Timeline: Static lightmaps and 3D light sources were also added in the BSP files storing the levels, allowing for more realistic lighting. The first graphics processing units (GPUs) appeared in the late 1990s, but many games still supported software rendering at that time. id Tech 2 (Quake II, 1997) was one of the first engines to take advantage of hardware-accelerated graphics (id Software later reworked Quake to add OpenGL support to the game). GoldSrc, the engine derived from the Quake engine by Valve for Half-Life (1998), added Direct3D support and a skeletal framework to better render the NPCs, and also greatly improved the NPCs' artificial intelligence (AI) compared to the Quake engine. Timeline: Late 1990s: Full 32-bit color, and GPUs become standard This period saw the introduction of the first video cards with Transform, clipping, and lighting (T&L). The first card with this innovative technology was the GeForce 256. This card was superior to what 3dfx had to offer at the time, namely the Voodoo3, which fell short only because of its lack of T&L. Companies such as Matrox with their G400, and S3 with their Savage4, were forced to withdraw from the 3D gaming market during this time period. One year later, ATI released their Radeon 7200, a true competing graphics card line.
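The z-buffering mentioned above, one of the techniques the Quake engine used to speed up true-3D rendering, keeps, for every pixel, the depth of the nearest fragment drawn so far and rejects anything farther away. A toy sketch with an invented 4x4 framebuffer:

```python
# Toy illustration of z-buffering: per-pixel depth test on a tiny
# "framebuffer". The 4x4 buffer and the fragments are invented.
W, H = 4, 4
depth = [[float("inf")] * W for _ in range(H)]  # z-buffer, initially "infinitely far"
color = [["."] * W for _ in range(H)]           # color buffer

def draw_fragment(x, y, z, c):
    """Write the fragment only if it is nearer than what the z-buffer holds."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

draw_fragment(1, 1, 5.0, "A")   # far wall
draw_fragment(1, 1, 2.0, "B")   # nearer object overwrites it
draw_fragment(1, 1, 9.0, "C")   # even farther fragment is rejected

print(color[1][1])  # B
```

The point of the technique is that polygons can be rasterized in any order: visibility is resolved per pixel by the depth comparison rather than by sorting whole surfaces.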
Timeline: While all games of this period supported 16-bit color, many were adopting 32-bit color (really 24-bit color with an 8-bit alpha channel) as well. Soon, many benchmark sites began touting 32-bit as a standard. The Unreal Engine, used in a large number of FPS games since its release, was an important milestone at the time. It used the Glide API, specifically developed for 3dfx GPUs, instead of OpenGL. Probably the biggest reason for its popularity was that the engine architecture and the inclusion of a scripting language made it easy to mod. One other improvement of Unreal compared to the previous generation of engines was its networking technology, which greatly improved the scalability of the engine in multiplayer. id Tech 3, first used for Quake III Arena, improved on its predecessor by allowing much more complex and smoother animations to be stored. It also had improved lighting and shadowing and introduced shaders and curved surfaces. Timeline: Early 2000s: Increasing detail, outdoor environments, and rag-doll physics New graphics hardware provided new capabilities, allowing new engines to add various new effects, such as particle effects or fog, as well as increase texture and polygon detail. Many games featured large outdoor environments, vehicles, and rag-doll physics. Timeline: Average video hardware requirements: a GPU with hardware T&L such as the DirectX 7.0 GeForce 2 or Radeon 7200 was typically required. The next-generation GeForce 3 or Radeon 8500 were recommended due to their more efficient architecture, though their DirectX 8.0 vertex and pixel shaders were of little use. A handful of games still supported DirectX 6.0 chipsets such as RIVA TNT2 and Rage 128, and software rendering (with an integrated Intel GMA), though it was apparent that even a powerful CPU could not compensate for the lack of hardware T&L.
Timeline: Game engines originally developed for the PC platform, like the Unreal Engine 2.0, started to be adapted for sixth-generation consoles like the PlayStation 2 or GameCube, which now had the computing power to handle graphics-intensive video games. Mid 2000s: Lighting and pixel shaders, physics The new generation of graphics chips allowed pixel shader-based textures, bump mapping, and lighting and shadowing technologies to become common. Shader technologies included HLSL (for DirectX), GLSL (for OpenGL), and Cg. Timeline: This resulted in the obsolescence of DirectX 7.0 graphics chips such as the widespread GeForce 2 and Radeon 7200, as well as DirectX 6.0 chipsets such as the RIVA TNT2 and Rage 128, and integrated on-board graphics accelerators. Until this generation of games, a powerful CPU was able to somewhat compensate for an older video card. Average Video Hardware requirements: the minimum was a GeForce 3 or Radeon 8500; strongly recommended was the GeForce FX or Radeon 9700 (or other cards with Pixel Shader 2.x support). The Radeon 9700 demonstrated that anti-aliasing (AA) and/or anisotropic filtering (AF) could be fully usable options, even in the newest and most demanding titles of the time, and resulted in the widespread acceptance of AA and AF as standard features. AA and AF had been supported by many earlier graphics chips, but carried a heavy performance hit, so most gamers opted not to enable these features. Timeline: With these new technologies, game engines featured seamlessly integrated indoor/outdoor environments, used shaders for more realistic animations (characters, water, weather effects, etc.), and generally increased realism. The fact that the GPU took over some of the tasks previously done by the CPU, and more generally the increasing processing power available, made it possible to add realistic physics effects to games, for example with the inclusion of the Havok physics engine in most video games.
Physics had already been added to a video game in 1998 with Jurassic Park: Trespasser, but the limited hardware capabilities at the time, and the absence of middleware like Havok to handle physics, had made it a technical and commercial failure. id Tech 4, first used for Doom 3 (2004), used entirely dynamic per-pixel lighting, whereas previously, 3D engines had relied primarily on pre-calculated per-vertex lighting or lightmaps and Gouraud shading. The shadow volume approach used in Doom 3 permitted more realistic lighting and shadows, but this came at a price, as it could not render soft shadows, and the engine was primarily suited to indoor environments. This was later rectified to work with vast outdoor spaces with the introduction of MegaTexture technology in the id Tech 4 engine. Timeline: The same year, Valve released Half-Life 2, powered by their new Source engine. This new engine was notable in that, among other things, it had very realistic facial animations for NPCs, including what was described as an impressive lip-syncing technology. Late 2000s: The approach to photorealism Further improvements in GPUs, like Shader Model 3 and Shader Model 4, made possible by new graphics chipsets such as the GeForce 7 or Radeon X1xxx series, allowed for improvements in graphic effects. Timeline: Developers of this era of 3D engines often tout their increasingly photorealistic quality. Around the same time, esports were beginning to gain attention. These engines include realistic shader-based materials with predefined physics, environments with procedural and vertex shader-based objects (vegetation, debris, human-made objects such as books or tools), procedural animation, cinematographic effects (depth of field, motion blur, etc.), high-dynamic-range rendering, and unified lighting models with soft shadowing and volumetric lighting.
Timeline: However, most of the engines capable of these effects are evolutions of engines from the previous generation, such as Unreal Engine 3, the Dunia Engine, CryEngine 2, and id Tech 5 (which was used for Rage and makes use of the new virtual texturing technology). The first games using Unreal Engine 3 were released in November 2006, and the first game to use CryEngine 2 (Crysis) was released in 2007. Early 2010s: Graphic technique mixes Further improvements in GPUs, like Shader Model 5, made possible by new graphics chipsets such as the GeForce 400 series or Radeon HD 5000 series and later, allowed for improvements in graphic effects, such as dynamic displacement mapping and tessellation. As of 2010, two evolutions of major existing engines had been released: Unreal Engine 3 with DirectX 11 support, which powered the Samaritan demo (and was used for Batman: Arkham City, Batman: Arkham Knight, and other DX11-based UE3 games), and CryEngine 3, which powers Crysis 2 and 3. Timeline: Few companies had discussed future plans for their engines; id Tech 6, the eventual successor to id Tech 5, was an exception. Preliminary information about this engine, which was still in the early phases of development, tended to show that id Software was looking toward a direction where ray tracing and classic raster graphics would be mixed. However, according to John Carmack, hardware capable of running id Tech 6 did not yet exist. The first title using the engine, Doom, was released in mid 2016. Timeline: In September 2015, Valve released Source 2 in an update to Dota 2.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Urine flow rate** Urine flow rate: Urine flow rate or urinary flow rate is the volumetric flow rate of urine during urination. It is a measure of the quantity of urine excreted in a specified period of time (per second or per minute). It is measured with uroflowmetry, a type of flow measurement. Urine flow rate: The letters "V" (for volume) and "Q" (a conventional symbol for flow rate) are both used as symbols for urine flow rate. The V often has a dot (overdot), that is, V̇ ("V-dot"). Qmax indicates the maximum flow rate. Qmax is used as an indicator for the diagnosis of an enlarged prostate. A lower Qmax may indicate that the enlarged prostate puts pressure on the urethra, partially occluding it. Uroflowmetry is performed by urinating into a special urinal, toilet, or disposable device that has a measuring device built in. The average rate changes with age. Clinical usage: Changes in the urine flow rate can be indicative of kidney, prostate or other renal disorders. Similarly, by measuring urine flow rate, it is possible to calculate the clearance of metabolites that are used as clinical markers for disease. The urinary flow rate in males with benign prostatic hyperplasia is influenced, although not statistically significantly, by voiding position. In a meta-analysis of the influence of voiding position on urodynamics in males, males with this condition showed an improvement of 1.23 ml/s in the sitting position. Healthy, young males were not influenced by changing voiding position.
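The relationship between voided volume and flow rate described above can be sketched in a few lines of Python. This is purely an illustrative calculation, not part of any uroflowmetry standard; the function names and sample data are hypothetical:

```python
def flow_curve(volumes_ml, dt_s=1.0):
    """Instantaneous flow rate (ml/s) from cumulative voided-volume samples
    taken every dt_s seconds, as a uroflowmeter would record them."""
    return [(v1 - v0) / dt_s for v0, v1 in zip(volumes_ml, volumes_ml[1:])]

def qmax(volumes_ml, dt_s=1.0):
    """Maximum flow rate (Qmax, ml/s) over the whole void."""
    return max(flow_curve(volumes_ml, dt_s))

# Hypothetical cumulative volumes (ml) sampled once per second:
samples = [0, 10, 30, 60, 80, 90]
peak = qmax(samples)  # steepest 1-second rise: (60 - 30) / 1 = 30 ml/s
```

The flow curve is simply the time derivative of the cumulative volume, and Qmax is its peak; a real uroflowmeter samples far more densely and smooths the signal before reporting Qmax.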
**Glucose-6-phosphate translocase** Glucose-6-phosphate translocase: Glucose-6-phosphate exchanger SLC37A4, also known as glucose-6-phosphate translocase, is an enzyme that in humans is encoded by the SLC37A4 gene. It consists of three subunits, each of which is a vital component of the multi-enzyme glucose-6-phosphatase complex (G6Pase). This important enzyme complex is located within the membrane of the endoplasmic reticulum, and catalyzes the terminal reactions in both glycogenolysis and gluconeogenesis. The G6Pase complex is most abundant in liver tissue, but is also present in the kidney, small intestine, and pancreatic islets, and at a lower concentration in the gallbladder. The G6Pase complex is highly involved in the regulation of homeostasis and blood glucose levels. Within this framework of glucose regulation, the translocase components are responsible for transporting the substrates and products across the endoplasmic reticulum membrane, resulting in the release of free glucose into the bloodstream. Structure: Glucose-6-phosphate translocase is a transmembrane protein providing a selective channel between the endoplasmic reticulum lumen and the cytosol. The enzyme is made up of three separate transporting subunits referred to as G6PT1 (subunit 1), G6PT2 (subunit 2) and G6PT3 (subunit 3). While the hydrolyzing component of the G6Pase complex is located on the side of the membrane on which it acts, namely facing the lumen, the translocases are all integral membrane proteins, as required to perform their function as cross-membrane transporters. The translocases are spatially located on either side of the active site of the hydrolyzing component within the membrane, which allows the greatest speed and facility of the reaction. Mechanism: Each of the translocase subunits performs a specific function in the transport of substrates and products, and finally the release of glucose (which will eventually reach the bloodstream), as a step in glycogenolysis or gluconeogenesis.
G6PT1 transports glucose-6-phosphate from the cytosol into the lumen of the endoplasmic reticulum, where it is hydrolyzed by the catalytic subunit of G6Pase. After hydrolysis, glucose and inorganic phosphate are transported back into the cytosol by G6PT2 and G6PT3, respectively. While the exact chemistry of the enzyme remains unknown, studies have shown that the mechanism of the enzyme complex is highly dependent upon the membrane structure. For instance, the Michaelis constant of the enzyme for glucose-6-phosphate decreases significantly upon membrane disruption. The originally proposed mechanism of the G6Pase system involved a relatively unspecific hydrolase, suggesting that G6PT1 alone provides the high specificity for the overall reaction by selective transport into the lumen, where hydrolysis occurs. Supporting evidence for this proposed mechanism includes the marked decrease in substrate specificity of hydrolysis upon membrane degradation. Mechanism: Figure 1 illustrates the role of G6P-translocase within the G6Pase complex. Inhibitors: Many inhibitors of glucose-6-phosphate translocase of novel, semi-synthetic or natural origin are known and of medical importance. Genetic algorithms for synthesizing novel inhibitors of G6PT1 have been developed and utilized in drug discovery. Inhibitors of G6PT1 are the most studied, as this subunit catalyzes the rate-limiting step in glucose production through gluconeogenesis or glycogenolysis, and without its function these two processes could not occur. This inhibition holds great potential in drug development (discussed in "Medical and disease relevance"). Small-molecule inhibitors such as mercaptopicolinic acid and diazobenzene sulfonate have some degree of inhibitory potential for G6PT1 but systematically lack specificity in inhibition, rendering them poor drug candidates. Since the late 1990s, natural products have been increasingly studied as potent and specific inhibitors of G6PT1.
Prominent examples of natural inhibitors include mumbaistatin and analogs, kodaistatin (harvested from extracts of Aspergillus terreus) and chlorogenic acid. Other natural product inhibitors of G6PT1 are found in the fungus Chaetomium carinthiacum, Bauhinia magalandra leaves, and Streptomyces bacteria. Medical and disease relevance: 1) Excessive activity of G6PT1 may contribute to the development of diabetes. Diabetes mellitus type 2 is a disease characterized by chronically elevated blood glucose levels, even when fasting. The rapidly rising prevalence of type 2 diabetes, along with its strong correlation to heart disease and other health complications, has rendered it an area of intense research with an urgent need for treatment options. Studies monitoring blood glucose levels in rabbits revealed that the activity of G6Pase, and therefore G6PT1, is increased in specimens with diabetes. This strong correlation with type 2 diabetes makes the G6Pase complex, and G6PT1 in particular, an appealing drug target for control of blood glucose levels, as its inhibition would directly prevent the release of free glucose into the bloodstream. It is possible that this mechanism of inhibition could be developed into a treatment for diabetes. 2) The absence of a functional G6PT1 enzyme causes glycogen storage disease type Ib, commonly referred to as von Gierke disease, in humans. A common symptom of this disease is a build-up of glycogen in the liver and kidney, causing enlargement of the organs. 3) G6PT1 activity contributes to the survival of cells during hypoxia, which enables tumor cell growth and proliferation.
**EIF2S3** EIF2S3: Eukaryotic translation initiation factor 2 subunit 3 (eIF2γ) is a protein that in humans is encoded by the EIF2S3 gene. Function: Eukaryotic translation initiation factor 2 (eIF2) functions in the early steps of protein synthesis by forming a ternary complex with GTP and initiator tRNA and binding to a 40S ribosomal subunit. eIF2 is composed of three subunits, alpha (α), beta (β), and gamma (γ, this article), with the protein encoded by this gene representing the gamma subunit.
**MayaVi** MayaVi: MayaVi is a scientific data visualizer written in Python, which uses VTK and provides a GUI via Tkinter. MayaVi, developed by Prabhu Ramachandran, is free and distributed under the BSD License. It is cross-platform and runs on any platform where both Python and VTK are available (almost any Unix, Mac OS X, or Windows). MayaVi is pronounced as a single name, "Ma-ya-vee", meaning "magical" in Sanskrit. The code of MayaVi has nothing in common with that of Autodesk Maya or the Vi text editor. The latest version of MayaVi, called Mayavi2, is a component of the Enthought suite of scientific Python programs. It differs from the original MayaVi by its strong focus on making not only an interactive program, but also a reusable component for 3D plotting in Python. Although it exposes a slightly different interface and API than the original MayaVi, it now has more features. Major features:
- visualizes computational grids and scalar, vector, and tensor data
- an easy-to-use GUI
- can be imported as a Python module from other Python programs or can be scripted from the Python interpreter
- supports volume visualization of data via texture and ray cast mappers
- support for any VTK dataset using the VTK data format
- support for PLOT3D data
- multiple datasets can be used simultaneously
- provides a pipeline browser, with which objects in the VTK pipeline can be browsed and edited
- imports simple VRML and 3D Studio scenes
- custom modules and data filters can be added
- exporting to PostScript files, PPM/BMP/TIFF/JPEG/PNG images, Open Inventor, Geomview OOGL, VRML files, Wavefront .obj files, or RenderMan RIB files
**Oxygen balance** Oxygen balance: Oxygen balance (OB, OB%, or Ω) is an expression that is used to indicate the degree to which an explosive can be oxidized, to determine if an explosive molecule contains enough oxygen to fully oxidize the other atoms in the explosive. For example, fully oxidized carbon forms carbon dioxide, hydrogen forms water, sulfur forms sulfur dioxide, and metals form metal oxides. A molecule is said to have a positive oxygen balance if it contains more oxygen than is needed and a negative oxygen balance if it contains less oxygen than is needed. An explosive with a negative oxygen balance will lead to incomplete combustion, which commonly produces carbon monoxide, a toxic gas. Explosives with negative or positive oxygen balance are commonly mixed with other energetic materials that are either oxygen positive or negative, respectively, to increase the explosive's power. For example, TNT is an oxygen-negative explosive and is commonly mixed with oxygen-positive energetic materials or fuels to increase its power. Calculating oxygen balance: The procedure for calculating oxygen balance in terms of 100 grams of the explosive material is to determine the number of moles of oxygen that are excess or deficient for 100 grams of the compound:

OB% = (−1600 / mol. wt. of compound) × (2X + (Y/2) + M − Z)

where X = number of atoms of carbon, Y = number of atoms of hydrogen, Z = number of atoms of oxygen, and M = number of atoms of metal (metallic oxide produced). Calculating oxygen balance: In the case of TNT (C6H2(NO2)3CH3): molecular weight = 227.1, X = 7 (number of carbon atoms), Y = 5 (number of hydrogen atoms), Z = 6 (number of oxygen atoms), M = 0 (no metal atoms). Therefore, OB% = (−1600/227.1) × (14 + 2.5 + 0 − 6) = −73.97% for TNT. Examples of materials with negative oxygen balance are e.g. nitromethane (−39%), trinitrotoluene (−74%), aluminium powder (−89%), sulfur (−100%), or carbon (−266.7%). Examples of materials with positive oxygen balance are e.g.
ammonium nitrate (+20%), ammonium perchlorate (+34%), potassium chlorate (+39.2%), sodium chlorate (+45%), potassium nitrate (+47.5%), tetranitromethane (+49%), lithium perchlorate (+60%), or nitroglycerine (+3.5%). Ethylene glycol dinitrate has an oxygen balance of zero, as does the theoretical compound trinitrotriazine. Oxygen balance and power: Because sensitivity, brisance, and strength are properties resulting from a complex explosive chemical reaction, a simple relationship such as oxygen balance cannot be depended upon to yield universally consistent results. When using oxygen balance to predict properties of one explosive relative to another, it is to be expected that one with an oxygen balance closer to zero will be the more brisant, powerful, and sensitive; however, many exceptions to this rule do exist.One area in which oxygen balance can be applied is in the processing of mixtures of explosives. The family of explosives called amatols are mixtures of ammonium nitrate and TNT. Ammonium nitrate has an oxygen balance of +20% and TNT has an oxygen balance of −74%, so it would appear that the mixture yielding an oxygen balance of zero would also result in the best explosive properties. In actual practice a mixture of 80% ammonium nitrate and 20% TNT by weight yields an oxygen balance of +1%, the best properties of all mixtures, and an increase in strength of 30% over TNT.
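The oxygen-balance formula and the amatol mixture discussed above can be checked numerically. A minimal Python sketch (the function names are our own, chosen for illustration):

```python
def oxygen_balance(mol_wt, x, y, z, m=0):
    """Oxygen balance (OB%) per 100 g of compound.
    x, y, z, m: number of atoms of carbon, hydrogen, oxygen, and metal."""
    return -1600.0 / mol_wt * (2 * x + y / 2 + m - z)

def mixture_ob(components):
    """Mass-weighted oxygen balance of a mixture: [(mass_fraction, OB%), ...]."""
    return sum(frac * ob for frac, ob in components)

# TNT: C7H5N3O6, molecular weight 227.1
ob_tnt = oxygen_balance(227.1, x=7, y=5, z=6)  # ~ -73.97%

# 80/20 amatol by weight: ammonium nitrate (+20%) with TNT (-74%)
ob_amatol = mixture_ob([(0.80, 20.0), (0.20, -74.0)])  # ~ +1.2%, i.e. about +1%
```

The mixture calculation is a simple mass-weighted average of the component balances, which reproduces the article's figure of roughly +1% for 80/20 amatol.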
**Parade of Homes** Parade of Homes: The Parade of Homes is a branded showcase of new and remodeled homes held in several regions throughout the United States. Alternatively known as the Tour of Homes in some locales, it is often presented by the local Home Builders Association ("HBA") or Building Industry Association ("BIA"). HBA and BIA tours under the Parade of Homes banner showcase both newly constructed and remodeled homes. Homes generally include single-family homes, condominiums, duplexes, and townhomes. Parade of Homes: The nation's first home-tour organization was established in Minnesota in 1948 by the Builders Association of the Twin Cities (now known as Housing First Minnesota). The Parade of Homes presented by Housing First Minnesota is the largest in the nation. The event runs twice a year, once in the spring and once in the fall, with participation peaking at 1,259 home entries in a single event in 2006. The first United States trademark for "Parade of Homes" was registered by the Home Builders Association of Fort Wayne, Inc. and was transferred to Housing First Minnesota in 2010. Housing First Minnesota holds multiple Minnesota and Wisconsin state and federal trademarks for the Parade of Homes. The Salt Lake Home Builders Association held the first Parade of Homes in the United States, in 1946. Some locations allow tours for free; other locations require an admission ticket to view the homes. Some locations offer awards for the best homes.
**Operating speed** Operating speed: The operating speed of a road is the speed at which motor vehicles generally operate on that road. Operating speed: The precise definition of "operating speed", however, is open to debate. Some sources, such as the AASHTO, have changed their definitions recently to match the common use of the word. In 1994, the AASHTO Green Book defined the operating speed as "the highest overall speed at which a driver can travel on a given highway under favorable weather conditions and under prevailing traffic conditions without at any time exceeding the safe speed as determined by the design speed on a section-by-section basis," a definition which a majority of US states still use. In July 2001, however, the AASHTO revised their definition for the new edition of the Green Book and defined it as "the speed at which drivers are observed operating their vehicles during free-flow conditions."
**Bailout valve** Bailout valve: A bailout valve in underwater diving is a valve that switches the breathing gas supply from the primary source to an emergency source. In rebreather diving, a bailout valve is a valve in the mouthpiece that switches from closed to open circuit. In surface-supplied diving, a bailout valve is a valve on the helmet, full-face mask, or bailout block on the diving harness that switches from surface-supplied gas to scuba gas.
**Post-polio syndrome** Post-polio syndrome: Post-polio syndrome (PPS, poliomyelitis sequelae) is a group of latent symptoms of poliomyelitis (polio), occurring in about 25–40% of survivors (more recent data suggest over 80%). These symptoms are caused by the damaging effects of the viral infection on the nervous system. Symptoms typically occur 15 to 30 years after an initial acute paralytic attack. Symptoms include decreasing muscular function or acute weakness with pain and fatigue. The same symptoms may also occur years after a nonparalytic polio (NPP) infection. Post-polio syndrome: The precise mechanism that causes PPS is unknown. It shares many features with chronic fatigue syndrome, but unlike that disorder it tends to be progressive and can cause loss of muscle strength. Treatment is primarily limited to adequate rest, conservation of available energy, and supportive measures, such as leg braces and energy-saving devices such as powered wheelchairs, analgesia (pain relief), and sleep aids. Signs and symptoms: After a period of prolonged stability, individuals who had been infected with and recovered from polio begin to experience new signs and symptoms, characterised by muscular atrophy (decreased muscle mass), weakness, pain, and fatigue in limbs that were originally affected or in limbs that did not seem to have been affected at the time of the initial polio illness. PPS is a very slowly progressing condition marked by periods of stability followed by new declines in the ability to carry out usual daily activities. Most patients become aware of their decreased capacity to carry out daily routines due to significant changes in mobility and decreasing upper limb function and lung capability. Fatigue is often the most disabling symptom; even slight exertion often produces disabling fatigue and can also intensify other symptoms.
Problems breathing or swallowing, sleep-related breathing disorders, such as sleep apnea, and decreased tolerance for cold temperatures are other notable symptoms.Increased activity during healthy years between the original infection and onset of PPS can amplify the symptoms. Thus, contracting polio at a young age can result in particularly disabling PPS symptoms.A possible early occurring and long-lasting sign is a slight jitter exhibited in handwriting. Mechanism: Numerous theories have been proposed to explain post-polio syndrome. Despite this, no absolutely defined causes of PPS are known. The most widely accepted theory of the mechanism behind the disorder is "neural fatigue". A motor unit is a nerve cell (or neuron) and the muscle fibers it activates. Poliovirus attacks specific neurons in the brainstem and the anterior horn cells of the spinal cord, generally resulting in the death of a substantial fraction of the motor neurons controlling skeletal muscles. In an effort to compensate for the loss of these neurons, surviving motor neurons sprout new nerve terminals to the orphaned muscle fibers. The result is some recovery of movement and the development of enlarged motor units.The neural fatigue theory proposes that the enlargement of the motor neuron fibers places added metabolic stress on the nerve cell body to nourish the additional fibers. After years of use, this stress may be more than the neuron can handle, leading to the gradual deterioration of the sprouted fibers, and eventually, the neuron itself. This causes muscle weakness and paralysis. Restoration of nerve function may occur in some fibers a second time, but eventually, nerve terminals malfunction and permanent weakness occurs. When these neurons no longer carry on sprouting, fatigue occurs due to the increasing metabolic demand of the nervous system. The normal aging process also may play a role. 
Denervation and reinnervation are going on, but the reinnervation process has an upper limit where the reinnervation cannot compensate for the ongoing denervation, and loss of motor units takes place. What disturbs the denervation-reinnervation equilibrium and causes peripheral denervation, though, is still unclear. With age, most people experience a decrease in the number of spinal motor neurons. Because polio survivors have already lost a considerable number of motor neurons, further age-related loss of neurons may contribute substantially to new muscle weakness. The overuse and underuse of muscles also may contribute to muscle weakness.Another theory is that people who have recovered from polio lose remaining healthy neurons at a faster rate than normal. However, little evidence exists to support this idea. Finally, the initial polio infection is thought to cause an autoimmune reaction, in which the body's immune system attacks normal cells as if they were foreign substances. Again, compared to neural fatigue, the evidence supporting this theory is quite limited. Diagnosis: Diagnosis of PPS can be difficult, since the symptoms are hard to separate from complications due to the original polio infection, and from the normal infirmities of aging. No laboratory test for post-polio syndrome is known, nor are any other specific diagnostic criteria. Three important criteria are recognized, including previous diagnosis of polio, long interval after recovery, and gradual onset of weakness.In general, PPS is a diagnosis of exclusion whereby other possible causes of the symptoms are eliminated. Neurological examination aided by other laboratory studies can help to determine what component of a neuromuscular deficit occurred with polio and what components are new and to exclude all other possible diagnoses. Objective assessment of muscle strength in PPS patients may not be easy. 
Changes in muscle strength are determined in specific muscle groups using various muscle scales that quantify strength, such as the Medical Research Council (MRC) scale. Magnetic resonance imaging, neuroimaging, and electrophysiological studies, muscle biopsies, or spinal fluid analysis may also be useful in establishing a PPS diagnosis. Management: PPS treatment concerns comfort (relieving pain via analgesics) and rest (via use of mechanisms to make life easier, such as a powered wheelchair) and is generally palliative. No therapies that reverse the damage are known. Fatigue is usually the most disabling symptom. Energy conservation can significantly reduce fatigue episodes. This can be achieved by lifestyle changes, such as additional (daytime) sleep, reducing workload, and weight loss in cases of obesity. Some patients require lower-limb orthotics to reduce energy usage. Medications for fatigue, such as amantadine and pyridostigmine, are ineffective in the management of PPS. Muscle strength and endurance training are more important in managing the symptoms of PPS than the ability to perform enduring aerobic activity. Management should focus on treatments such as hydrotherapy and developing other routines that encourage strength, but do not affect fatigue levels. A recent trend is the use of intravenous immunoglobulin, which has yielded promising albeit modest results, but as of 2010 the evidence was insufficient to recommend it as a treatment. PPS increasingly stresses the musculoskeletal system from progressive muscular atrophy. In a review of 539 PPS patients, 80% reported pain in muscles and joints and 87% had fatigue. Joint instability can cause appreciable pain and should be adequately treated with painkillers.
Directed activity, such as decreasing mechanical stress with braces and adaptive equipment, is recommended.Because PPS can fatigue facial muscles, as well as cause dysphagia (difficulty swallowing), dysarthria (difficulty speaking) or aphonia (inability to produce speech), persons may become malnourished from difficulty eating. Compensatory routines can help relieve these symptoms, such as eating smaller portions at a time and sitting down whilst eating. PPS with respiratory involvement requires exceptional therapy management, such as breathing exercises and chest percussion to expel secretions (clearing of the lungs) on a periodic basis (monitored via stethoscope). Failure to properly assess PPS with respiratory involvement can increase the risk of overlooking an aspiration pneumonia (a life-threatening infection of the lower respiratory tract, especially so if not caught early on). Severe cases may require permanent ventilation or tracheostomy. Sleep apnoea may also occur. Other management strategies that show improvement include smoking cessation, treatment of other respiratory diseases, and vaccination against respiratory infections such as influenza. Prognosis: In general, PPS is not life-threatening. The major exception is patients left with severe residual respiratory difficulties, who may experience new severe respiratory impairment. Compared to control populations, PPS patients lack any elevation of antibodies against the poliovirus, and because no poliovirus is excreted in the feces, it is not considered a recurrence of the original polio. Further, no evidence has shown that the poliovirus can cause a persistent infection in humans. PPS has been confused with amyotrophic lateral sclerosis (ALS), which progressively weakens muscles. PPS patients do not have an elevated risk of ALS.No sufficient longitudinal studies have been conducted on the prognosis of PPS, but speculations have been made by several physicians based on experience. 
Fatigue and mobility usually return to normal over a long period of time. The prognosis also differs depending upon different causes and factors affecting the individual. An overall mortality rate of 25% exists due to possible respiratory paralysis of persons with PPS; otherwise, it is usually not lethal.Prognosis can be abruptly changed for the worse by the use of anesthesia, such as during surgery. Epidemiology: Old data show PPS occurs in roughly 25 to 50% of people who survive a polio infection. However, newer data from countries that have contacted their polio survivors have shown 85% of their polio survivors to have symptoms of post polio syndrome. Typically, it occurs 30–35 years afterwards, but delays between 8 and 71 years have been recorded. The disease occurs sooner in persons with more severe initial infections. Other factors that increase the risk of PPS include increasing length of time since acute poliovirus infection, presence of permanent residual impairment after recovery from the acute illness, and being female. PPS is documented to occur in cases of nonparalytic polio (NPP). One review states late-onset weakness and fatigue occur in 14–42% of NPP patients.
**Anesthetic vaporizer** Anesthetic vaporizer: An anesthetic vaporizer (American English) or anaesthetic vaporiser (British English) is a device generally attached to an anesthetic machine which delivers a given concentration of a volatile anesthetic agent. It works by controlling the vaporization of anesthetic agents from liquid, and then accurately controlling the concentration in which these are added to the fresh gas flow. The design of these devices takes account of varying: ambient temperature, fresh gas flow, and agent vapor pressure. Modern vaporizers: There are generally two types of vaporizers: plenum and drawover. Both have distinct advantages and disadvantages. The dual-circuit gas-vapor blender is a third type of vaporizer used exclusively for the agent desflurane. Modern vaporizers: Plenum vaporizers The plenum vaporizer is driven by positive pressure from the anesthetic machine, and is usually mounted on the machine. The performance of the vaporizer does not change regardless of whether the patient is breathing spontaneously or is mechanically ventilated. The internal resistance of the vaporizer is usually high, but because the supply pressure is constant the vaporizer can be accurately calibrated to deliver a precise concentration of volatile anesthetic vapor over a wide range of fresh gas flows. The plenum vaporizer is an elegant device which works reliably, without external power, for many hundreds of hours of continuous use, and requires very little maintenance. Modern vaporizers: The plenum vaporizer works by accurately splitting the incoming gas into two streams. One of these streams passes straight through the vaporizer in the bypass channel. The other is diverted into the vaporizing chamber. Gas in the vaporizing chamber becomes fully saturated with volatile anesthetic vapor. This gas is then mixed with the gas in the bypass channel before leaving the vaporizer. 
Modern vaporizers: A typical volatile agent, isoflurane, has a saturated vapor pressure of 32 kPa (about 1/3 of an atmosphere). This means that the gas mixture leaving the vaporizing chamber has a partial pressure of isoflurane of 32 kPa. At sea-level (atmospheric pressure is about 101 kPa), this equates conveniently to a concentration of 32%. However, the output of the vaporizer is typically set at 1–2%, which means that only a very small proportion of the fresh gas needs to be diverted through the vaporizing chamber (this proportion is known as the splitting ratio). It can also be seen that a plenum vaporizer can only work one way round: if it is connected in reverse, much larger volumes of gas enter the vaporizing chamber, and therefore potentially toxic or lethal concentrations of vapor may be delivered. (Technically, although the dial of the vaporizer is calibrated in volume percent (e.g. 2%), what it actually delivers is a partial pressure of anesthetic agent (e.g. 2 kPa)). Modern vaporizers: The performance of the plenum vaporizer depends extensively on the saturated vapor pressure of the volatile agent. This is unique to each agent, so it follows that each agent must only be used in its own specific vaporizer. Several safety systems, such as the Fraser-Sweatman system, have been devised so that filling a plenum vaporizer with the wrong agent is extremely difficult. A mixture of two agents in a vaporizer could result in unpredictable performance from the vaporizer. Modern vaporizers: Saturated vapor pressure for any one agent varies with temperature, and plenum vaporizers are designed to operate within a specific temperature range. They have several features designed to compensate for temperature changes (especially cooling by evaporation). They often have a metal jacket weighing about 5 kg, which equilibrates with the temperature in the room and provides a source of heat. 
In addition, the entrance to the vaporizing chamber is controlled by a bimetallic strip, which admits more gas to the chamber as it cools, to compensate for the loss of efficiency of evaporation. Modern vaporizers: The first temperature-compensated plenum vaporizer was the Cyprane 'FluoTEC' Halothane vaporizer, released onto the market shortly after Halothane was introduced into clinical practice in 1956. Drawover vaporizers The drawover vaporizer is driven by negative pressure developed by the patient, and must therefore have a low resistance to gas flow. Its performance depends on the minute volume of the patient: its output drops with increasing minute ventilation. Modern vaporizers: The design of the drawover vaporizer is much simpler: in general it is a simple glass reservoir mounted in the breathing attachment. Drawover vaporizers may be used with any liquid volatile agent (including older agents such as diethyl ether or chloroform, although it would be dangerous to use desflurane). Because the performance of the vaporizer is so variable, accurate calibration is impossible. However, many designs have a lever which adjusts the amount of fresh gas which enters the vaporizing chamber. Modern vaporizers: The drawover vaporizer may be mounted either way round, and may be used in circuits where re-breathing takes place, or inside the circle breathing attachment. Drawover vaporizers typically have no temperature compensating features. With prolonged use, the liquid agent may cool to the point where condensation and even frost may form on the outside of the reservoir. This cooling impairs the efficiency of the vaporizer. One way of minimising this effect is to place the vaporizer in a bowl of water. The relative inefficiency of the drawover vaporizer contributes to its safety. A more efficient design would produce too much anesthetic vapor. 
The output concentration from a drawover vaporizer may greatly exceed that produced by a plenum vaporizer, especially at low flows. For safest use, the concentration of anesthetic vapor in the breathing attachment should be continuously monitored. Despite its drawbacks, the drawover vaporizer is cheap to manufacture and easy to use. In addition, its portable design means that it can be used in the field or in veterinary anesthesia. Modern vaporizers: Dual-circuit gas–vapor blender The third category of vaporizer (the dual-circuit gas–vapor blender) was created specifically for the agent desflurane. Desflurane boils at 23.5 °C, which is very close to room temperature. This means that at normal operating temperatures, the saturated vapor pressure of desflurane changes greatly with only small fluctuations in temperature. As a result, the features of a normal plenum vaporizer are not sufficient to ensure an accurate concentration of desflurane. Additionally, on a very warm day, all the desflurane would boil, and very high (potentially lethal) concentrations of desflurane might reach the patient. Modern vaporizers: A desflurane vaporizer (e.g. the TEC 6 produced by Datex-Ohmeda) is heated to 39 °C and pressurized to 200 kPa (and therefore requires electrical power). It is mounted on the anesthetic machine in the same way as a plenum vaporizer, but its function is quite different. It heats a chamber of liquid desflurane to vaporize it, and injects small amounts of pure desflurane vapor into the fresh gas flow. A transducer senses the fresh gas flow. Modern vaporizers: A warm-up period is required after switching on. The desflurane vaporizer will fail if mains power is lost. Alarms sound if the vaporizer is nearly empty. An electronic display indicates the level of desflurane in the vaporizer. The expense and complexity of the desflurane vaporizer have contributed to the relative lack of popularity of desflurane, although in recent years it is gaining in popularity. 
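The splitting-ratio arithmetic for the plenum vaporizer described above can be sketched in a few lines. The function names and the saturated-vapor mass balance below are our own simplification built from the figures in the text (isoflurane SVP 32 kPa, 101 kPa ambient), not a device specification.

```python
# Illustrative arithmetic for the plenum vaporizer's splitting ratio.
# Assumption: gas leaving the chamber is fully saturated, so each unit
# of carrier gas picks up svp / (ambient - svp) units of vapor.

def output_concentration(chamber_fraction, svp=32.0, ambient=101.0):
    """Vapor fraction delivered when `chamber_fraction` (0..1) of the
    fresh gas passes through the saturated vaporizing chamber."""
    pickup = svp / (ambient - svp)      # vapor added per unit carrier gas
    vapor = chamber_fraction * pickup   # vapor leaving the chamber
    return vapor / (1.0 + vapor)        # fraction of the final mixture

def chamber_fraction_for(target, svp=32.0, ambient=101.0):
    """Splitting ratio needed to deliver `target` (e.g. 0.01 for a 1% dial)."""
    pickup = svp / (ambient - svp)
    return target / (pickup * (1.0 - target))

print(round(chamber_fraction_for(0.01), 4))  # ~0.022: ~2% of fresh gas diverted
print(round(output_concentration(1.0), 3))   # ~0.317: all gas via chamber -> ~32%
```

This reproduces the text's two observations: a 1% dial setting needs only about 2% of the fresh gas diverted through the chamber, while routing all gas through the chamber (the reversed-connection case) delivers the full 32% saturated concentration.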
Historical vaporizers: Historically, ether (the first volatile agent) was first used by John Snow's inhaler (1847) but was superseded by the use of chloroform (1848). Ether then slowly made a revival (1862–1872) with regular use via Curt Schimmelbusch's "mask", a narcosis mask for dripping liquid ether. Now obsolete, it was a mask constructed of wire, and covered with cloth. Historical vaporizers: Pressure and demand from dental surgeons for a more reliable method of administering ether helped modernize its delivery. In 1877, Clover invented an ether inhaler with a water jacket, and by the late 1890s alternatives to ether came to the fore, mainly due to the introduction of spinal anesthesia. Subsequently, this resulted in the decline of ether use (1930–1956) due to the introduction of cyclopropane, trichloroethylene, and halothane. By the 1980s, the anesthetic vaporizer had evolved considerably; subsequent modifications led to a raft of additional safety features such as temperature compensation, a bimetallic strip, a temperature-adjusted splitting ratio and anti-spill measures.
**RNA polymerase V** RNA polymerase V: RNA polymerase V (Pol V), previously known as RNA polymerase IVb, is a multisubunit plant-specific RNA polymerase. It is required for normal function and biogenesis of small interfering RNA (siRNA). Together with RNA polymerase IV (Pol IV), Pol V is involved in an siRNA-dependent epigenetic pathway known as RNA-directed DNA methylation (RdDM), which establishes and maintains heterochromatic silencing in plants. Structure: RNA polymerase V is composed of 12 subunits that are paralogous to RNA polymerase II (Pol II) subunits. Approximately half of these subunits are shared among Pol II, IV, and V. Its two largest subunits, together forming the catalytic site, make up the most conserved region, sharing similarity with eukaryotic and bacterial polymerases. The subunits unique to only Pol IV and V are believed to have arisen from gene duplication events that occurred prior to the evolution of land plants. The structure of Pol V has been studied in a variety of plants, including Arabidopsis thaliana, maize, and cauliflower. Affinity purification has shown significant differences in Pol V composition among these different species. In Arabidopsis, the largest subunit is known as NRPE1. This subunit contains a GW-rich AGO-hook motif that provides the ability to interact with the argonaute protein AGO4, as well as targeting of DNA methylation. While the subunit is unique to Pol V, it does contain a conserved domain common with the largest subunit of Pol IV known as Defective Chloroplasts and Leaves (DeCL), whose function is unknown. The second largest subunit of Pol V, NRPD/E2, is shared with Pol IV. Aside from its catalytic site, Arabidopsis Pol V contains 10 smaller, noncatalytic subunits. Of these, 6 are shared with Pol II and 8 are shared with Pol IV. The fourth and seventh subunits form what is known as the "Stalk" subcomplex, while the fifth and ninth subunits form the "Jaw" subcomplex. 
Function: Pol V transcribes one of the two types of non-coding RNA involved in RdDM. In canonical RdDM, Pol V transcribes a scaffold RNA which base pairs with a 24-nt siRNA bound to AGO4. The AGO-hook motif in Pol V's largest subunit recruits this AGO4 to the site. Pol V transcripts are also necessary for the recruitment of chromatin remodelers to the target site. One such protein is Domains Rearranged Methyltransferase 2 (DRM2), which is believed to be recruited when the AGO4-bound siRNA base pairs with the scaffold. Once proteins are bound to this scaffold RNA, histone modification and DNA methylation may proceed.
**Hull (watercraft)** Hull (watercraft): A hull is the watertight body of a ship, boat, or flying boat. The hull may be open at the top (as in a dinghy), or it may be fully or partially covered with a deck. Atop the deck may be a deckhouse and other superstructures, such as a funnel, derrick, or mast. The line where the hull meets the water surface is called the waterline. General features: There is a wide variety of hull types that are chosen for suitability for different usages, the hull shape being dependent upon the needs of the design. Shapes range from a nearly perfect box in the case of scow barges to a needle-sharp surface of revolution in the case of a racing multihull sailboat. The shape is chosen to strike a balance between cost, hydrostatic considerations (accommodation, load carrying, and stability), hydrodynamics (speed, power requirements, and motion and behavior in a seaway) and special considerations for the ship's role, such as the rounded bow of an icebreaker or the flat bottom of a landing craft. General features: In a typical modern steel ship, the hull will have watertight decks, and major transverse members called bulkheads. There may also be intermediate members such as girders, stringers and webs, and minor members called ordinary transverse frames, frames, or longitudinals, depending on the structural arrangement. The uppermost continuous deck may be called the "upper deck", "weather deck", "spar deck", "main deck", or simply "deck". The particular name given depends on the context—the type of ship or boat, the arrangement, or even where it sails. General features: In a typical wooden sailboat, the hull is constructed of wooden planking, supported by transverse frames (often referred to as ribs) and bulkheads, which are further tied together by longitudinal stringers or ceiling. Often but not always there is a centerline longitudinal member called a keel. 
In fiberglass or composite hulls, the structure may resemble wooden or steel vessels to some extent, or be of a monocoque arrangement. In many cases, composite hulls are built by sandwiching thin fiber-reinforced skins over a lightweight but reasonably rigid core of foam, balsa wood, impregnated paper honeycomb, or other material. General features: Perhaps the earliest proper hulls were built by the Ancient Egyptians, who by 3000 BC knew how to assemble wooden planks into a hull. Hull shapes: Hulls come in many varieties and can have a composite shape (e.g., a fine entry forward and an inverted bell shape aft), but are grouped primarily as follows: Chined and hard-chined. Examples are the flat-bottom (chined), v-bottom, and multi-chine hull (several gentler hard chines, still not smooth). These types have at least one pronounced knuckle throughout all or most of their length. Hull shapes: Moulded, round bilged or soft-chined. These hull shapes all have smooth curves. Examples are the round bilge, semi-round bilge, and s-bottom hull. Planing and displacement hulls Displacement hull: here the hull is supported exclusively or predominantly by buoyancy. Vessels that have this type of hull travel through the water at a limited rate that is defined by the waterline length, except for especially narrow hulls such as sailing multihulls that are less limited this way. Hull shapes: Planing hull: here, the planing hull form is configured to develop positive dynamic pressure so that its draft decreases with increasing speed. The dynamic lift reduces the wetted surface and therefore also the drag. They are sometimes flat-bottomed, sometimes V-bottomed and more rarely, round-bilged. The most common form is to have at least one chine, which makes for more efficient planing and can throw spray down. Planing hulls are more efficient at higher speeds, although they still require more energy to achieve these speeds. 
An effective planing hull must be as light as possible with flat surfaces that are consistent with good sea keeping. Sailboats that plane must also sail efficiently in displacement mode in light winds. Hull shapes: Semi-displacement, or semi-planing: here the hull form is capable of developing a moderate amount of dynamic lift; however, most of the vessel's weight is still supported through buoyancy. Hull shapes: Hull forms At present, the most widely used form is the round bilge hull. With a small payload, such a craft has less of its hull below the waterline, giving less resistance and more speed. With a greater payload, resistance is greater and speed lower, but the hull's outward bend provides smoother performance in waves. As such, the inverted bell shape is a popular form used with planing hulls. Hull shapes: Chined and hard-chined hulls A chined hull does not have a smooth rounded transition between bottom and sides. Instead, its contours are interrupted by sharp angles where predominantly longitudinal panels of the hull meet. The sharper the intersection (the more acute the angle), the "harder" the chine. More than one chine per side is possible. The Cajun "pirogue" is an example of a craft with hard chines. Hull shapes: Benefits of this type of hull include potentially lower production cost and a (usually) fairly flat bottom, making the boat faster at planing. A hard chined hull resists rolling (in smooth water) more than does a hull with rounded bilges: the chine creates turbulence and drag that resist the rolling motion as it moves through the water, whereas the rounded bilge offers less flow resistance around the turn. In rough seas, this can make the boat roll more, as the motion drags first down, then up, on a chine; round-bilge boats are more seakindly in waves as a result. Hull shapes: Chined hulls may have one of three shapes: Flat-bottom chined hulls Multi-chined hulls V-bottom chined hulls. 
Sometimes called hard chine. Each of these chine hulls has its own unique characteristics and use. The flat-bottom hull has high initial stability but high drag. To counter the high drag, hull forms are narrow and sometimes severely tapered at bow and stern. This leads to poor stability when heeled in a sailboat. This is often countered by using heavy interior ballast on sailing versions. They are best suited to sheltered inshore waters. Early racing power boats were fine forward and flat aft. This produced maximum lift and a smooth, fast ride in flat water, but this hull form is easily unsettled in waves. The multi-chine hull approximates a curved hull form. It has less drag than a flat-bottom boat. Multi chines are more complex to build but produce a more seaworthy hull form. They are usually displacement hulls. V or arc-bottom chine boats have a V shape between 6° and 23°. This is called the deadrise angle. The flatter shape of a 6-degree hull will plane with less wind or a lower-horsepower engine but will pound more in waves. The deep V form (between 18 and 23 degrees) is only suited to high-powered planing boats. They require more powerful engines to lift the boat onto the plane but give a faster, smoother ride in waves. Hull shapes: Displacement chined hulls have more wetted surface area, hence more drag, than an equivalent round-hull form, for any given displacement. Hull shapes: Smooth curve hulls Smooth curve hulls are hulls that use, just like the curved hulls, a centreboard or an attached keel. Semi round bilge hulls are somewhat less round. The advantage of the semi-round is that it is a nice middle between the S-bottom and chined hull. Typical examples of a semi-round bilge hull can be found in the Centaur and Laser sailing dinghies. Hull shapes: S-bottom hulls are sailing boat hulls with a midships transverse half-section shaped like an s. 
In the s-bottom, the hull has round bilges and merges smoothly with the keel, and there are no sharp corners on the hull sides between the keel centreline and the sheer line. Boats with this hull form may have a long fixed deep keel, or a long shallow fixed keel with a centreboard swing keel inside. Ballast may be internal, external, or a combination. This hull form was most popular in the late 19th and early to mid 20th centuries. Examples of small sailboats that use this s-shape are the Yngling and Randmeer. Appendages: Control devices such as a rudder, trim tabs or stabilizing fins may be fitted. A keel may be fitted on a hull to increase the transverse stability, directional stability or to create lift. Retractable appendages include centreboards and daggerboards. A forward protrusion below the waterline is called a bulbous bow. These are fitted on some hulls to reduce the wave making resistance drag and thereby increase fuel efficiency. Bulbs fitted at the stern are less common but accomplish a similar task. Terms: Baseline is a level reference line from which vertical distances are measured. Bow is the front part of the hull. Amidships is the middle portion of the vessel in the fore and aft direction. Port is the left side of the vessel when facing the bow from on board. Starboard is the right side of the vessel when facing the bow from on board. Stern is the rear part of the hull. Waterline is an imaginary line circumscribing the hull that matches the surface of the water when the hull is not moving. Metrics: Hull forms are defined as follows: Block measures that define the principal dimensions. They are: Beam or breadth (B) is the width of the hull. (ex: BWL is the maximum beam at the waterline) Draft (d) or (T) is the vertical distance from the bottom of the keel to the waterline. Freeboard (FB) is depth plus the height of the keel structure minus draft. 
Length at the waterline (LWL) is the length from the forwardmost point of the waterline measured in profile to the stern-most point of the waterline. Length between perpendiculars (LBP or LPP) is the length of the summer load waterline from the stern post to the point where it crosses the stem. (see also p/p) Length overall (LOA) is the extreme length from one end to the other. Moulded depth (D) is the vertical distance measured from the top of the keel to the underside of the upper deck at side.Form derivatives that are calculated from the shape and the block measures. They are: Displacement (Δ) is the weight of water equivalent to the immersed volume of the hull. Metrics: Longitudinal centre of buoyancy (LCB) is the longitudinal position of the centroid of the displaced volume, often given as the distance from a point of reference (often midships) to the centroid of the static displaced volume. Note that the longitudinal centre of gravity or centre of the weight of the vessel must align with the LCB when the hull is in equilibrium. Metrics: Longitudinal centre of flotation (LCF) is the longitudinal position of the centroid of the waterplane area, usually expressed as longitudinal distance from a point of reference (often midships) to the centre of the area of the static waterplane. This can be visualized as being the area defined by the water's surface and the hull. Vertical centre of buoyancy (VCB) is the vertical position of the centroid of displaced volume, generally given as a distance from a point of reference (such as the baseline) to the centre of the static displaced volume. Metrics: Volume (V or ∇) is the volume of water displaced by the hull.Coefficients help compare hull forms as well: Block coefficient (Cb) is the volume (V) divided by the LWL × BWL × TWL. If you draw a box around the submerged part of the ship, it is the ratio of the box volume occupied by the ship. 
It gives a sense of how much of the block defined by the LWL, beam (B) & draft (T) is filled by the hull. Full forms such as oil tankers will have a high Cb where fine shapes such as sailboats will have a low Cb. Midship coefficient (Cm or Cx) is the cross-sectional area (Ax) of the slice at midships (or at the largest section for Cx) divided by beam × draft. It displays the ratio of the largest underwater section of the hull to a rectangle of the same overall width and depth as the underwater section of the hull. This defines the fullness of the underbody. A low Cm indicates a cut-away mid-section and a high Cm indicates a boxy section shape. Sailboats have a cut-away mid-section with low Cx whereas cargo vessels have a boxy section with high Cx to help increase the Cb. Prismatic coefficient (Cp) is the volume (V) divided by LWL × Ax. It displays the ratio of the immersed volume of the hull to a volume of a prism with equal length to the ship and cross-sectional area equal to the largest underwater section of the hull (midship section). This is used to evaluate the distribution of the volume of the underbody. A low or fine Cp indicates a full mid-section and fine ends, a high or full Cp indicates a boat with fuller ends. Planing hulls and other high-speed hulls tend towards a higher Cp. Efficient displacement hulls travelling at a low Froude number will tend to have a low Cp. Waterplane coefficient (Cw) is the waterplane area divided by LWL × BWL. The waterplane coefficient expresses the fullness of the waterplane, or the ratio of the waterplane area to a rectangle of the same length and width. A low Cw figure indicates fine ends and a high Cw figure indicates fuller ends. High Cw improves stability as well as handling behavior in rough conditions. Note: Computer-aided design: Use of computer-aided design has superseded paper-based methods of ship design that relied on manual calculations and lines drawing. 
Since the early 1990s, a variety of commercial and freeware software packages specialized for naval architecture have been developed that provide 3D drafting capabilities combined with calculation modules for hydrostatics and hydrodynamics. These may be referred to as geometric modeling systems for naval architecture.
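The form coefficients defined under Metrics can be computed directly from the block measures; a minimal sketch, with tanker-like dimensions invented for illustration:

```python
# Hull-form coefficients as defined in the text. Assumed units:
# lwl/bwl/t are waterline length, beam and draft in metres, `vol` the
# displaced volume in m^3, `ax` the midship section area in m^2, and
# `aw` the waterplane area in m^2. The example numbers are invented.

def block_coefficient(vol, lwl, bwl, t):
    return vol / (lwl * bwl * t)        # Cb: fullness of the bounding box

def midship_coefficient(ax, bwl, t):
    return ax / (bwl * t)               # Cm: fullness of the midship section

def prismatic_coefficient(vol, lwl, ax):
    return vol / (lwl * ax)             # Cp: fore-and-aft volume distribution

def waterplane_coefficient(aw, lwl, bwl):
    return aw / (lwl * bwl)             # Cw: fullness of the waterplane

lwl, bwl, t = 320.0, 58.0, 20.8
vol, ax, aw = 312_000.0, 1_182.0, 16_700.0

cb = block_coefficient(vol, lwl, bwl, t)   # high Cb: full-bodied tanker form
cm = midship_coefficient(ax, bwl, t)       # near 1: boxy midship section
cp = prismatic_coefficient(vol, lwl, ax)
cw = waterplane_coefficient(aw, lwl, bwl)
```

Note the identity Cb = Cp × Cm, since V/(LWL·B·T) = (V/(LWL·Ax)) · (Ax/(B·T)); this is a handy cross-check when working from published coefficients.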
**Lagerdorf Formation** Lagerdorf Formation: The Lagerdorf Formation is a geologic formation in Germany. It preserves fossils dating back to the Cretaceous period.
**T-statistic** T-statistic: In statistics, the t-statistic is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. It is used in hypothesis testing via Student's t-test. The t-statistic is used in a t-test to determine whether to support or reject the null hypothesis. It is very similar to the z-score but with the difference that the t-statistic is used when the sample size is small or the population standard deviation is unknown. For example, the t-statistic is used in estimating the population mean from a sampling distribution of sample means if the population standard deviation is unknown. It is also used along with the p-value when running hypothesis tests, where the p-value tells us how likely results at least as extreme would be under the null hypothesis. Definition and features: Let β̂ be an estimator of parameter β in some statistical model. Then a t-statistic for this parameter is any quantity of the form t_β̂ = (β̂ − β₀) / s.e.(β̂), where β₀ is a non-random, known constant, which may or may not match the actual unknown parameter value β, and s.e.(β̂) is the standard error of the estimator β̂ for β. By default, statistical packages report the t-statistic with β₀ = 0 (these t-statistics are used to test the significance of the corresponding regressor). However, when the t-statistic is needed to test a hypothesis of the form H₀: β = β₀, then a non-zero β₀ may be used. Definition and features: If β̂ is an ordinary least squares estimator in the classical linear regression model (that is, with normally distributed and homoscedastic error terms), and if the true value of the parameter β is equal to β₀, then the sampling distribution of the t-statistic is the Student's t-distribution with (n − k) degrees of freedom, where n is the number of observations, and k is the number of regressors (including the intercept). Definition and features: In the majority of models, the estimator β̂ is consistent for β and is distributed asymptotically normally. 
If the true value of the parameter β is equal to β₀, and the quantity s.e.(β̂) correctly estimates the asymptotic variance of this estimator, then the t-statistic will asymptotically have the standard normal distribution. In some models the distribution of the t-statistic is different from the normal distribution, even asymptotically. For example, when a time series with a unit root is regressed in the augmented Dickey–Fuller test, the test t-statistic will asymptotically have one of the Dickey–Fuller distributions (depending on the test setting). Use: Most frequently, t-statistics are used in Student's t-tests, a form of statistical hypothesis testing, and in the computation of certain confidence intervals. The key property of the t-statistic is that it is a pivotal quantity – while defined in terms of the sample mean, its sampling distribution does not depend on the population parameters, and thus it can be used regardless of what these may be. One can also divide a residual by the sample standard deviation: g(x, X) = (x − X̄) / s to compute an estimate for the number of standard deviations a given sample is from the mean, as a sample version of a z-score, the z-score requiring the population parameters. Use: Prediction Given a normal distribution N(μ, σ²) with unknown mean and variance, the t-statistic of a future observation X_{n+1}, after one has made n observations, is an ancillary statistic – a pivotal quantity (does not depend on the values of μ and σ²) that is a statistic (computed from observations). This allows one to compute a frequentist prediction interval (a predictive confidence interval), via the following t-distribution: (X_{n+1} − X̄_n) / (s_n · √(1 + 1/n)) ∼ T_{n−1}. Use: Solving for X_{n+1} yields the prediction distribution X̄_n + s_n · √(1 + 1/n) · T_{n−1}, from which one may compute predictive confidence intervals – given a probability p, one may compute intervals such that 100p% of the time, the next observation X_{n+1} will fall in that interval. 
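The t-statistic definition and the prediction interval above can be sketched with the standard library alone. The data values are invented, and the 95% critical value for n − 1 = 4 degrees of freedom (≈ 2.776) is taken from a standard t-table rather than computed:

```python
# t = (estimate - beta0) / s.e.(estimate), with the sample mean as the
# estimator and s.e. = s / sqrt(n); then the prediction interval
# mean +/- t_crit * s * sqrt(1 + 1/n) from the pivot above.
import math
import statistics

def t_statistic(sample, beta0=0.0):
    n = len(sample)
    estimate = statistics.mean(sample)
    std_err = statistics.stdev(sample) / math.sqrt(n)  # s / sqrt(n)
    return (estimate - beta0) / std_err

def prediction_interval(sample, t_crit):
    n = len(sample)
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)
    half = t_crit * s * math.sqrt(1 + 1 / n)  # pivot scaled by the critical value
    return mean - half, mean + half

data = [5.1, 4.9, 5.4, 5.0, 5.3]
t = t_statistic(data, beta0=5.0)                     # tests H0: mu = 5.0
low, high = prediction_interval(data, t_crit=2.776)  # 95%, df = 4
```

With these numbers t ≈ 1.51, below the two-sided 5% critical value of 2.776, so H₀: μ = 5.0 would not be rejected; the interval (roughly 4.51 to 5.77) is where a sixth observation should fall 95% of the time.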
History: The term "t-statistic" is abbreviated from "hypothesis test statistic". In statistics, the t-distribution was first derived as a posterior distribution in 1876 by Helmert and Lüroth. The t-distribution also appeared in a more general form as the Pearson Type IV distribution in Karl Pearson's 1895 paper. However, the t-distribution, also known as Student's t-distribution, gets its name from William Sealy Gosset, who first published the result in English in his 1908 paper "The Probable Error of a Mean" (in Biometrika) under the pseudonym "Student", because his employer preferred staff to use pen names when publishing scientific papers. Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples – for example, the chemical properties of barley, where sample sizes might be as few as 3. A second version of the etymology of the term "Student" is that Guinness did not want their competitors to know that they were using the t-test to determine the quality of raw material. Although the term "Student" comes from Gosset's pen name, it was actually through the work of Ronald Fisher that the distribution became well known as "Student's distribution" and "Student's t-test". Related concepts: z-score (standardization): If the population parameters are known, then rather than computing the t-statistic, one can compute the z-score; analogously, rather than using a t-test, one uses a z-test. This is rare outside of standardized testing. Studentized residual: In regression analysis, the standard errors of the estimators at different data points vary (compare the middle versus endpoints of a simple linear regression), and thus one must divide the different residuals by different estimates for the error, yielding what are called studentized residuals.
**Association for Molecular Pathology v. Myriad Genetics, Inc.** Association for Molecular Pathology v. Myriad Genetics, Inc.: Association for Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576 (2013), was a Supreme Court case which decided that "a naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated." However, as a "bizarre conciliatory prize", the Court allowed patenting of complementary DNA, which contains exactly the same protein-coding base pair sequence as the natural DNA, albeit with introns removed. The lawsuit in question challenged the validity of gene patents in the United States, specifically questioning certain claims in issued patents owned or controlled by Myriad Genetics that cover isolated DNA sequences, methods to diagnose propensity to cancer by looking for mutated DNA sequences, and methods to identify drugs using isolated DNA sequences. Prior to the case, the U.S. Patent Office accepted patents on isolated DNA sequences as a composition of matter. Diagnostic claims were already under question through the Supreme Court's prior holdings in Bilski v. Kappos and Mayo v. Prometheus. Drug screening claims were not seriously questioned prior to this case. Association for Molecular Pathology v. Myriad Genetics, Inc.: Notably, the original lawsuit in this case was not filed by a patent owner against a patent infringer, but by a public interest group (the American Civil Liberties Union) on behalf of 20 medical organizations, researchers, genetic counselors, and patients seeking a declaratory judgment. Association for Molecular Pathology v. Myriad Genetics, Inc.: The case was originally heard in the Southern District Court of New York. The District Court ruled that none of the challenged claims were patent eligible. 
The majority opinion called patenting isolated or purified natural products a "lawyer's trick" to circumvent the prohibitions on the direct patenting of products of nature. Myriad then appealed to the United States Court of Appeals for the Federal Circuit (CAFC). The Federal Circuit reversed the district court in part and affirmed in part, ruling that isolated DNA, which does not occur by itself in nature, can be patented, and that the drug screening claims were valid, but that Myriad's diagnostic claims were unpatentable. Unlike the District Court, the CAFC treated the valid gene claims as directed toward compositions of matter rather than toward information. Association for Molecular Pathology v. Myriad Genetics, Inc.: On appeal, the Supreme Court vacated and remanded the case for the Federal Circuit to reconsider the issues in light of Mayo v. Prometheus. On remand, the Federal Circuit held that Mayo v. Prometheus did not affect the outcome of the case, so the American Civil Liberties Union and the Public Patent Foundation filed a petition for certiorari. The Supreme Court granted certiorari and unanimously invalidated Myriad's claims to isolated genes. The Supreme Court held that merely isolating genes that are found in nature does not make them patentable. However, the SCOTUS agreed with the "friend of the court" brief submitted by the USPTO that complementary DNA should be patent eligible, because it does not exist in nature but rather was "engineered by man", even though this decision lacks scientific consistency.
A prominent US biotech patent lawyer commented on the SCOTUS decision: "It is inconsistent to conclude that isolated DNA and naturally occurring DNA are not markedly different because their information content is the same, and at the same time find that cDNA is patent eligible despite having virtually identical information content to naturally occurring mRNA." This decision was not devastating for Myriad Genetics, since the Court only "invalidated five [of its 520] patent claims covering isolated naturally occurring DNA, ... thereby reducing [its] patent estate to 24 patents and 515 patent claims." Myriad continued suing its competitors. However, it was unable to obtain preliminary injunctions per eBay Inc. v. MercExchange, L.L.C., and most of these lawsuits were settled out of court. Background: The global search for a genetic basis for breast and ovarian cancer began in earnest in 1988. In 1990, at a meeting of the American Society of Human Genetics, a team of scientists led by Mary-Claire King of the University of California, Berkeley announced the localization, through linkage analysis, of a gene associated with increased risk for breast cancer (BRCA1) to the long arm of chromosome 17. It was understood at the time that a test for these mutations would be a clinically important prognostic tool. Myriad Genetics was founded in 1994 as a startup company out of the University of Utah by scientists involved in the hunt for the BRCA genes. In August 1994, Mark Skolnick, a founder of Myriad and scientist at the University of Utah, and researchers at Myriad, along with colleagues at the University of Utah, the National Institutes of Health (NIH), and McGill University, published the sequence of BRCA1, which they had isolated. In that same year, the first BRCA1 U.S. patent was filed by the University of Utah, the National Institute of Environmental Health Sciences (NIEHS), and Myriad.
Over the next year, Myriad, in collaboration with the University of Utah, isolated and sequenced the BRCA2 gene, and the first BRCA2 patent was filed in the U.S. by the University of Utah and other institutions in 1995. In 1996, Myriad launched its BRACAnalysis product, which detects certain mutations in the BRCA1 and BRCA2 genes that put women at high risk for breast cancer and ovarian cancer. Myriad's business model has been to exclusively offer diagnostic testing services for the BRCA genes. Investors put money into Myriad on the basis of the premium price that the patents would allow it to charge during the patents' 20-year life. These were the funds that allowed Myriad to rapidly sequence the BRCA2 gene and finalize a robust diagnostic test. The business model meant that Myriad would need to enforce its patents against competitors, which included diagnostic labs at universities, which function very much like for-profit businesses in addition to educating pathologists-in-training. The patents were to expire starting in 2014. In 2012, Myriad, just a startup in 1994, employed about 1,200 people, had revenue of around $500 million, and was a publicly traded company. The USPTO patent examination guidelines in 2001 allowed patenting of DNA sequences: "Like other chemical compounds, DNA molecules are eligible for patents when isolated from their natural state and purified or when synthesized in a laboratory from chemical starting materials. A patent on a gene covers the isolated and purified gene but does not cover the gene as it occurs in nature." Background: About 2,000 isolated human genes had been patented in the United States before this case started. Gene patents have generated a great deal of controversy, especially when their owners or licensees have aggressively enforced them to create exclusivity.
Clinical pathologists have been especially concerned with gene patents, as their medical practice of offering clinical diagnostic services is subject to patent law, unlike the practices of other doctors, which are exempt from patent law. For example, in 1998 the University of Pennsylvania's Genetic Diagnostic Laboratory received cease and desist letters from Myriad on the basis of patent infringement, requesting that its clinical pathologists stop testing patient samples for BRCA. Because of these kinds of legal threats to its members' medical practices, the Association for Molecular Pathology has actively lobbied against the existence of, and exclusive licensing of, gene patents and was the lead plaintiff in this litigation. Background: Relevant case law precedents Prior to Myriad, the question of whether isolating or purifying a product of nature from its natural environment is a patentable invention or a non-patentable discovery had a long history of contradictory judgments in the United States and elsewhere. In the 1889 case Ex parte Latimer, the inventor applied for a patent on fiber derived from the Pinus australis tree. The Commissioner of Patents concluded that although the "alleged invention is unquestionably very valuable," it nonetheless was a natural product and "can no more be the subject of a patent in its natural state when freed from its surroundings than wheat which has been cut by a reaper." However, the Commissioner suggested that "If applicant's process had another final step by which the fiber ... were changed, ... [it would probably be patentable] ... because the natural fiber ... would ... become something new and different from what it is in its natural state." This statement is important because, after over a century of turmoil, US case law returned to exactly this idea.
Background: However, soon after Latimer the US courts decided that molecules (which are claimed as compositions of matter, unlike the natural fibers, which were claimed as articles of manufacture) were patentable if the chemicals were "purified and isolated" from their natural environment. The "purified and isolated" doctrine was used to validate valuable patents on aspirin in 1910, on adrenaline in 1912, on vitamin B12 in 1958, and on prostaglandins in 1970. Although the SCOTUS invalidated a patent on a naturally occurring mixture of bacterial strains in 1948 (see Funk Bros. Seed Co. v. Kalo Inoculant Co.), US courts thought that the "purified and isolated" doctrine still applied to molecules, and that Funk Bros. affected only living things. Background: The 1980 US Supreme Court decision in Diamond v. Chakrabarty opened a floodgate of patents on isolated genes, purified proteins, and cell lines. The practice of patenting isolated genes was affirmed in 1991 by the CAFC in Amgen v. Chugai Pharmaceutical. It is estimated that between 1980 and 2013 the USPTO allowed patent claims on up to 40,000 natural DNA sequences. The SCOTUS had a chance to reverse Amgen in 2006, when it granted a writ of certiorari in LabCorp v. Metabolite, Inc. However, the Court quickly dismissed the writ as improvidently granted. Eventually, the practice of patenting "purified and isolated" products of nature came to an end in 2013, when the SCOTUS announced its decision in Association for Molecular Pathology v. Myriad Genetics. Background: Litigants Along with the AMP (Association for Molecular Pathology) and the University of Pennsylvania, other plaintiffs in the suit included researchers at Columbia, NYU, Emory, and Yale, several patient advocacy groups, and several individual patients. Background: The defendants in the suit were originally Myriad, the trustees of the University of Utah, and the U.S.
Patent and Trademark Office (USPTO), but the USPTO was severed from the case by the district court. The American Civil Liberties Union (ACLU) and the Public Patent Foundation represented the plaintiffs, with attorney Chris Hansen arguing the case. The law firm of Jones Day represented Myriad. Background: Proponents of the validity of these patents argued that recognizing such patents would encourage investment in biotechnology and promote innovation in genetic research by not keeping technology shrouded in secrecy. Opponents argued that these patents would stifle innovation by preventing others from conducting cancer research, would limit options for cancer patients seeking genetic testing, and that the patents are not valid because they relate to genetic information that is not inventive, but is rather produced by nature. Background: Arguments The complaint challenged specific claims on isolated genes, diagnostic methods, and methods to identify drug candidates in seven of Myriad's 23 patents on BRCA1 and BRCA2. The specific claims that were challenged were: claims 1, 2, 5, 6, 7, and 20 of U.S. patent 5,747,282; claims 1, 6, and 7 of U.S. patent 5,837,492; claim 1 of U.S. patent 5,693,473; claim 1 of U.S. patent 5,709,999; claim 1 of U.S. patent 5,710,001; claim 1 of U.S. patent 5,753,441; and claims 1 and 2 of U.S. patent 6,033,857. The plaintiffs wanted these claims declared invalid, arguing that they are not patentable subject matter under §101 of Title 35 of the United States Code: that the isolated genes are unpatentable products of nature, that the diagnostic method claims are mere thought processes that do not yield any real-world transformations, and that the drug screening claims merely describe the basic processes of doing science. This part of U.S. law describes what is patent-eligible: "any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof".
If the invention falls under one of several judicially recognized exclusions, however, including a "naturally occurring article", then it is not patent eligible. The plaintiffs argued that Myriad's use of these patents, and the patents' very existence, restricted research for clinicians and limited scientific progress. They further argued that from a patient's perspective, Myriad's use of the patents not only made it impossible to obtain a second opinion on a patient's genetic predisposition to breast and ovarian cancer, but also kept the cost of BRCA1/2 testing high by preventing competition. Myriad defended its patents, arguing that the USPTO issues patents for genes as "isolated sequences" in the same way it issues patents for any other chemical compound, since the isolation of the DNA sequence renders it different in character from that present in the human body. Myriad argued that its diagnostic tests were patentable subject matter. Decision of the District Court: On March 29, 2010, Judge Robert W. Sweet of the United States District Court for the Southern District of New York declared all of the contested claims invalid. With respect to claims to isolated DNA sequences, Judge Sweet's 152-page decision stated: "DNA's existence in an 'isolated' form alters neither this fundamental quality of DNA as it exists in the body nor the information it encodes. Therefore, the patents at issue directed to 'isolated DNA' containing sequences found in nature are unsustainable as a matter of law and are deemed unpatentable under 35 U.S.C. §101." The decision also found that comparisons of DNA sequences involved in these patents are abstract mental processes under the Federal Circuit's In re Bilski decision, and therefore also not patent eligible, and that the drug screening claims were unpatentable as they merely cover a "basic scientific principle". On June 16, 2010, Myriad filed its Notice of Appeal.
First hearing in the Court of Appeals for the Federal Circuit: Myriad's appeal was granted, and the case was heard in the United States Court of Appeals for the Federal Circuit. Myriad, the defendant-appellant, was supported by at least 15 amicus briefs, and the plaintiff-appellees' position received support from 12 amicus briefs. The Department of Justice provided a surprising and unsolicited brief that in part supported the appellees, suggesting that claims covering isolated naturally occurring human genetic sequences are not properly patentable. Oral arguments were held on April 4, 2011. On July 29, 2011, the Federal Circuit overturned the district court's decision in part, reversing its holdings that isolated DNA sequences and methods for screening cancer therapeutics are patent-ineligible, and affirmed it in part, agreeing that Myriad's claims for comparing DNA sequences are patent-ineligible. Judge Alan Lourie, who wrote the majority ruling, reasoned that isolated DNA is chemically distinct from the natural state of a gene in the body. Judge Lourie cited the Supreme Court case Diamond v. Chakrabarty, which used the test of whether a genetically modified organism was "markedly different" from those found in nature to rule that genetically modified organisms are patent eligible. Thus, he concluded that since Myriad's patents describe DNA sequences that do not alone exist in nature, they are patent eligible. First petition to the Supreme Court: After the Federal Circuit ruling, the Association for Molecular Pathology petitioned for a writ of certiorari to the Supreme Court, asking it to review this case. The Supreme Court granted the writ, and on March 26, 2012, it vacated the Federal Circuit decision and remanded the case back to the Federal Circuit.
In other words, the Supreme Court vacated the original ruling of the Federal Circuit and directed the lower court to re-hear the entire case. These actions were taken in light of the Court's recent decision in Mayo Collaborative Services v. Prometheus Laboratories, Inc., where the Court ruled that certain kinds of claims in medical diagnostics patents, involving natural phenomena, were not patentable. The Supreme Court expected the Federal Circuit to take this precedent into account in its new ruling. Second hearing in the Court of Appeals for the Federal Circuit: On August 16, 2012, the Federal Circuit held its ground, ruling again in a 2–1 decision in favor of Myriad. The new court opinion was nearly identical to the original. The Federal Circuit again reversed the district court's decision on isolated DNA molecules; the Federal Circuit found that such molecules are patent-eligible under § 101 because they are nonnaturally occurring compositions of matter. It also reversed the district court's decision concerning assays to find drugs to treat cancer; the Federal Circuit again found that these assays are patentable. And again, now reinforced by the Mayo decision, the Federal Circuit affirmed the lower court's decision that method claims directed to "comparing" or "analyzing" DNA sequences are patent ineligible. Such claims were held to include no transformative steps and therefore to cover only patent-ineligible abstract, mental steps. Second hearing in the Court of Appeals for the Federal Circuit: With respect to the patentability of isolated genes, the majority opinion stated that the Mayo precedent was not particularly relevant to this case, because it did not deal with the patent eligibility of gene patents. Judge Lourie stated: "The remand of this case for reconsideration in light of Mayo might suggest, as Plaintiffs and certain amici state, that the composition claims are mere reflections of a law of nature.
Respectfully, they are not, any more than any product of man reflects and is consistent with a law of nature." Judge William Bryson wrote a dissent with respect to the non-patentability of isolated DNA sequences, applying the reasoning of the Supreme Court in the Mayo case, with respect to methods involving "natural laws", to products of nature: In Mayo, which involved method claims…the [Supreme] Court found that the method was not directed to patent-eligible subject matter because it contributed nothing "inventive" to the law of nature that lay at the heart of the claimed invention…In concluding that the claims did not add "enough" to the natural laws, the Court was particularly persuaded by the fact that "the steps of the claimed processes…involve well-understood, routine, conventional activity previously engaged in by researchers in the field." Just as a patent involving a law of nature must have an "inventive concept" that does "significantly more than simply describe…natural relations,"… a patent involving a product of nature should have an inventive concept that involves more than merely incidental changes to the naturally occurring product. In cases such as this one, in which the applicant claims a composition of matter that is nearly identical to a product of nature, it is appropriate to ask whether the applicant has done "enough" to distinguish his alleged invention from the similar product of nature. Has the applicant made an "inventive" contribution to the product of nature? Does the claimed composition involve more than "well-understood, routine, conventional" elements? Here, the answer to those questions is no. Second hearing in the Court of Appeals for the Federal Circuit: Neither isolation of the naturally occurring material nor the resulting breaking of covalent bonds makes the claimed molecules patentable…. The functional portion of the composition—the nucleotide sequence—remains identical to that of the naturally occurring gene.
Second petition to the Supreme Court: On September 25, 2012, the American Civil Liberties Union and the Public Patent Foundation filed another petition for certiorari with the Supreme Court with respect to the second Federal Circuit decision. On November 30, 2012, the Supreme Court agreed to hear the plaintiffs' appeal of the Federal Circuit's ruling. Oral arguments were heard before the Supreme Court on April 15, 2013. Decision of the Supreme Court: Justice Clarence Thomas, on June 13, 2013, delivered the opinion of the Court, in which all other members of the Supreme Court joined, except Justice Antonin Scalia, who concurred in part and concurred in the judgment. The majority opinion delivered by Thomas held, "A naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated, but cDNA is patent eligible because it is not naturally occurring." In Part III of the majority opinion, Thomas wrote: It is important to note what is not implicated by this decision. First, there are no method claims before this Court. Had Myriad created an innovative method of manipulating genes while searching for the BRCA1 and BRCA2 genes, it could possibly have sought a method patent. But the processes used by Myriad to isolate DNA at the time of Myriad's patents "were well understood, widely used, and fairly uniform insofar as any scientist engaged in the search for a gene would likely have utilized a similar approach," 702 F. Supp. 2d, at 202–203, and are not at issue in this case. Decision of the Supreme Court: Similarly, this case does not involve patents on new applications of knowledge about the BRCA1 and BRCA2 genes. Judge Bryson aptly noted that, "[a]s the first party with knowledge of the [BRCA1 and BRCA2] sequences, Myriad was in an excellent position to claim applications of that knowledge. Many of its unchallenged claims are limited to such applications." 689 F. 3d, at 1349.
Decision of the Supreme Court: Nor do we consider the patentability of DNA in which the order of the naturally occurring nucleotides has been altered. Scientific alteration of the genetic code presents a different inquiry, and we express no opinion about the application of §101 to such endeavors. We merely hold that genes and the information they encode are not patent eligible under §101 simply because they have been isolated from the surrounding genetic material. Decision of the Supreme Court: In his concurring opinion, which relates to the scientific details in the majority opinion, Scalia wrote: I join the judgment of the Court, and all of its opinion except Part I–A and some portions of the rest of the opinion going into fine details of molecular biology. I am unable to affirm those details on my own knowledge or even my own belief. It suffices for me to affirm, having studied the opinions below and the expert briefs presented here, that the portion of DNA isolated from its natural state sought to be patented is identical to that portion of the DNA in its natural state; and that complementary DNA (cDNA) is a synthetic creation not normally present in nature. Reactions to the decision: Association for Molecular Pathology v. Myriad Genetics was a landmark case on the practice of gene patenting. The District Court's decision was received as an unexpected ruling, because it contradicted the generally accepted practice of gene patents. The Federal Circuit's decision was a return to the status quo, in which the U.S. Patent Office issues patents for isolated gene sequences. However, it still ignited much controversy and interest from the public. The plaintiffs' argument that DNA should be excluded from patent eligibility was widely echoed in popular media.
Jim Dwyer, a reporter for The New York Times, wrote: "But for many people, it is impossible to understand how genes—the traits we inherit from our parents and pass along to our children—could become a company's intellectual property." James Watson, one of the discoverers of the structure of DNA, agreed and submitted a brief in the case. He argued that DNA conveys special genetic information, that human genetic information should not be the private property of anyone, and that developing a patent thicket of gene sequences could prevent easy commercialization of genetic diagnostics. As for the issues emphasized in media coverage of this case, namely the exclusive offering of a diagnostic test and the high price of that test, the real legal force arose from the outcome of other cases, Bilski v. Kappos and Mayo v. Prometheus. These cases rendered most diagnostic claims unpatentable, making it difficult for Myriad's business model (as described above in the Background section) to work going forward: difficult for R&D-driven businesses and investors, and thus potentially bad for patients, as fewer diagnostic tests may be brought to market; but also potentially better for patients, in that prices for tests may be lower and it will be easier to have a test re-done by an alternate lab. The same issue, namely the patentability of the DNA sequence in the BRCA1 gene, was considered in a February 2013 case in the Federal Court of Australia, where the validity of Myriad's patent was upheld. This was also a landmark ruling, and an appeal to the Full Court of the Federal Court of Australia was to be heard in August 2013. The submissions for that appeal were due on June 14, 2013, the day after the U.S. Supreme Court ruling was published, and the appellants in the Australian case stated that the U.S. ruling was referenced within their submission.
In a unanimous decision in October 2015, the High Court of Australia, Australia's final court of appeal, concluded that an isolated nucleic acid, coding for a BRCA1 protein, with specific variations from the norm that are indicative of susceptibility to breast cancer and ovarian cancer was not a "patentable invention".
**Clinical data management system** Clinical data management system: A clinical data management system, or CDMS, is a tool used in clinical research to manage the data of a clinical trial. The clinical trial data gathered at the investigator site in the case report form are stored in the CDMS. To reduce the possibility of errors due to human entry, the systems employ various means to verify the data. Systems for clinical data management can be self-contained or part of the functionality of a CTMS. A CTMS with clinical data management functionality can help with the validation of clinical data, and can also support other important site activities such as building patient registries and assisting in patient recruitment efforts. Classification: CDMSs can be broadly divided into paper-based and electronic data capture systems. Classification: Paper-based systems Case report forms are filled in manually at the site and mailed to the company for which the trial is being performed. The data on the forms are transferred to the CDMS tool through data entry. The most popular method is double data entry, in which two different data entry operators enter the data into the system independently and the system compares both entries. If the entries for a value conflict, the system raises an alert and the discrepancy can be resolved manually. Another method is single data entry. Classification: The data in the CDMS are then transferred for data validation. In these systems, data clarifications from sites during validation are also handled through paper forms, which are printed with the problem description and sent to the investigator site; the site responds by answering on the forms and mailing them back. Classification: Electronic data capture systems In such CDMSs, the investigators upload the data directly to the CDMS, and the data can then be viewed by the data validation staff.
Once the data are uploaded by the site, the data validation team can send electronic alerts to sites if there are any problems. Such systems eliminate paper usage in the validation of clinical trial data. Clinical data management: Once data have been screened for typographical errors, the data can be validated to check for logical errors. An example is a check of the subject's date of birth to ensure that the subject is within the inclusion criteria for the study. These errors are raised for review to determine if there are errors in the data or if clarifications from the investigator are required. Clinical data management: Another function that the CDMS can perform is the coding of data. Currently, coding is generally centered around two areas: adverse event terms and medication names. Because adverse events and medications can be referred to in many different ways, standard dictionaries of these terms can be loaded into the CDMS. The data items containing the adverse event terms or medication names can be linked to one of these dictionaries. The system can check the data in the CDMS and compare them to the dictionaries. Items that do not match can be flagged for further checking. Some systems allow for the storage of synonyms, so that the system can match common abbreviations and map them to the correct term. As an example, ASA (acetylsalicylic acid) could be mapped to aspirin, a common notation. Popular adverse event dictionaries are MedDRA, WHOART, and COSTART, and a popular medication dictionary is the WHO Drug Dictionary. Clinical data management: At the end of the clinical trial, the data set in the CDMS is extracted and provided to statisticians for further analysis. The analysed data are compiled into a clinical study report and sent to the regulatory authorities for approval. Clinical data management: Most drug manufacturing companies use Web-based systems for capturing, managing, and reporting clinical data.
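The two data-management steps described above, a logical edit check and dictionary-based coding with synonym matching, can be sketched as follows. This is a minimal illustration; the age limits, dictionary entries, and function names are hypothetical and not taken from any particular CDMS or coding dictionary:

```python
from datetime import date

def dob_edit_check(dob, enrollment, min_age=18, max_age=65):
    """Logical edit check: is the subject's age at enrollment within
    the (hypothetical) inclusion criteria? Returns True if the value
    passes; a failing value would be raised for review."""
    age = enrollment.year - dob.year - (
        (enrollment.month, enrollment.day) < (dob.month, dob.day))
    return min_age <= age <= max_age

# Toy stand-ins for a loaded coding dictionary and its synonym table.
DICTIONARY = {"aspirin", "ibuprofen"}
SYNONYMS = {"asa": "aspirin", "acetylsalicylic acid": "aspirin"}

def code_term(reported):
    """Map a reported medication name to its dictionary term, resolving
    known synonyms and abbreviations first. Returns None for items that
    do not match and must be flagged for further checking."""
    term = reported.strip().lower()
    term = SYNONYMS.get(term, term)
    return term if term in DICTIONARY else None
```

In a production system a failed check or an uncoded term would raise a data clarification query back to the site rather than simply returning a flag, but the matching logic is the same.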
This not only helps them with faster and more efficient data capture, but also speeds up the process of drug development. In such systems, studies can be set up for each drug trial. Built-in edit checks help remove erroneous data. The system can also be connected to other external systems. For example, RAVE can be connected to an IVRS (Interactive Voice Response System) facility to capture data through direct telephonic interviews of patients. IRT (Interactive Response Technology) systems (IVRS/IWRS) are most commonly associated with the enrollment of a patient in a study, with the system determining the treatment arm the patient will be assigned to and the treatment kit numbers allocated to that arm (if applicable). Besides rather expensive commercial solutions, more and more open source clinical data management systems are becoming available. CDMS implementations are required to comply with the 21 CFR Part 11 federal regulations to be used for FDA-regulated drug trials. Part 11 requirements include audit trails, electronic signatures, and overall system validation.
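As an illustration of the audit-trail requirement mentioned above, a change log records who changed which value, when, from what to what, and why, and is append-only. The following is a minimal sketch under those assumptions; the class and field names are hypothetical and not drawn from Part 11 itself or from any real CDMS:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One line of the audit trail: who changed what, when,
    from which old value to which new value, and why."""
    user: str
    timestamp: str
    field_name: str
    old_value: object
    new_value: object
    reason: str

class AuditedRecord:
    """A data record whose every change is logged, never overwritten."""
    def __init__(self):
        self._data = {}
        self.audit_trail = []  # append-only; entries are never edited

    def set(self, field_name, value, user, reason):
        self.audit_trail.append(AuditEntry(
            user=user,
            timestamp=datetime.now(timezone.utc).isoformat(),
            field_name=field_name,
            old_value=self._data.get(field_name),
            new_value=value,
            reason=reason,
        ))
        self._data[field_name] = value

    def get(self, field_name):
        return self._data.get(field_name)
```

A compliant system would additionally protect the trail from tampering and tie each entry to an authenticated electronic signature; the sketch shows only the record-keeping shape.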
**Braille pattern dots-123456** Braille pattern dots-123456: The Braille pattern dots-123456 ( ⠿ ) is a 6-dot braille cell with all six dots raised, or an 8-dot braille cell with both dots in each of the top three rows raised. It is represented by the Unicode code point U+283F, and in Braille ASCII by the equal sign. Unified Braille: In unified international braille, the braille pattern dots-123456 is used to represent a voiced dental/alveolar fricative or aspirant, such as /ð/, /z/, or /dʰ/ when multiple letters correspond to these values, and is otherwise assigned as needed. Table of unified braille values Other braille: Braille pattern dots-123456 is also used for the tactile feature on Canadian banknotes. Plus dots 7 and 8: Related to Braille pattern dots-123456 are Braille patterns 1234567, 1234568, and 12345678, which are used in 8-dot braille systems, such as Gardner-Salinas and Luxembourgish Braille. Related 8-dot kantenji patterns: In the Japanese kantenji braille, the standard 8-dot Braille patterns 235678, 1235678, 2345678, and 12345678 are the patterns related to Braille pattern dots-123456, since the two additional dots of kantenji patterns 0123456, 1234567, and 01234567 are placed above the base 6-dot cell, instead of below, as in standard 8-dot braille. Kantenji using braille patterns 235678, 1235678, 2345678, or 12345678 This listing includes kantenji using Braille pattern dots-123456 for all 6349 kanji found in JIS C 6226-1978.
Related 8-dot kantenji patterns: - 目 Variants and thematic compounds - め/目 + selector 1 = 自 - め/目 + selector 3 + selector 3 = 睿 - め/目 + selector 4 = 面 - selector 1 + め/目 = 真 - selector 1 + selector 1 + め/目 = 眞 - selector 3 + め/目 = 乂 - selector 4 + め/目 = 牙 - selector 5 + め/目 = 黽 - selector 6 + め/目 = 弗 - 比 + め/目 = 亀 - 数 + め/目 = 百 Compounds of 目 - 日 + め/目 = 冒 - し/巿 + め/目 = 帽 - へ/⺩ + 日 + め/目 = 瑁 - め/目 + 宿 = 見 - 宿 + め/目 = 寛 - へ/⺩ + め/目 = 現 - 龸 + め/目 = 覚 - 龸 + 龸 + め/目 = 覺 - て/扌 + 龸 + め/目 = 撹 - ま/石 + め/目 = 親 - ね/示 + ま/石 + め/目 = 襯 - け/犬 + め/目 = 観 - け/犬 + け/犬 + め/目 = 觀 - 心 + け/犬 + め/目 = 欟 - め/目 + り/分 = 窺 - め/目 + 仁/亻 = 覗 - ね/示 + め/目 = 視 - め/目 + す/発 = 覧 - て/扌 + め/目 + す/発 = 攬 - 心 + め/目 + す/発 = 欖 - い/糹/#2 + め/目 + す/発 = 纜 - め/目 + め/目 + す/発 = 覽 - な/亻 + め/目 + 宿 = 俔 - ま/石 + め/目 + 宿 = 硯 - ち/竹 + め/目 + 宿 = 筧 - む/車 + め/目 + 宿 = 蜆 - 龸 + め/目 + 宿 = 覓 - れ/口 + め/目 + 宿 = 覘 - 仁/亻 + め/目 + 宿 = 覡 - ゆ/彳 + め/目 + 宿 = 覦 - と/戸 + め/目 + 宿 = 覩 - や/疒 + め/目 + 宿 = 覬 - き/木 + め/目 + 宿 = 覲 - め/目 + め/目 + 宿 = 靦 - め/目 + 宿 + む/車 = 覯 - め/目 + め/目 + 宿 = 靦 - ふ/女 + め/目 = 媚 - ろ/十 + め/目 = 直 - な/亻 + め/目 = 値 - ほ/方 + め/目 = 殖 - す/発 + め/目 = 置 - め/目 + き/木 = 植 - つ/土 + ろ/十 + め/目 = 埴 - る/忄 + ろ/十 + め/目 = 悳 - め/目 + ろ/十 + め/目 = 矗 - の/禾 + ろ/十 + め/目 = 稙 - ひ/辶 + め/目 = 遁 - き/木 + め/目 = 相 - ち/竹 + め/目 = 箱 - よ/广 + き/木 + め/目 = 廂 - に/氵 + き/木 + め/目 = 湘 - よ/广 + め/目 = 盾 - ゆ/彳 + め/目 = 循 - き/木 + よ/广 + め/目 = 楯 - そ/馬 + め/目 = 省 - う/宀/#3 + め/目 = 督 - め/目 + ゑ/訁 = 叡 - め/目 + る/忄 = 懸 - め/目 + て/扌 = 攫 - め/目 + ほ/方 = 盲 - め/目 + せ/食 = 眉 - や/疒 + め/目 + せ/食 = 嵋 - め/目 + か/金 = 看 - め/目 + ゐ/幺 = 県 - め/目 + め/目 + ゐ/幺 = 縣 - め/目 + ん/止 = 眠 - め/目 + う/宀/#3 = 眺 - め/目 + や/疒 = 眼 - め/目 + そ/馬 = 着 - め/目 + に/氵 = 睡 - め/目 + む/車 = 睦 - め/目 + こ/子 = 睨 - め/目 + 氷/氵 = 瞥 - め/目 + 龸 = 瞬 - め/目 + ろ/十 = 瞭 - め/目 + ま/石 = 瞳 - め/目 + 日 = 瞼 - め/目 + は/辶 = 瞽 - め/目 + と/戸 = 算 - 日 + 宿 + め/目 = 冐 - う/宀/#3 + め/目 + う/宀/#3 = 鼎 - め/目 + 宿 + く/艹 = 瞿 - め/目 + い/糹/#2 + ゑ/訁 = 矍 - か/金 + め/目 + め/目 = 钁 - て/扌 + 宿 + め/目 = 攪 - に/氵 + 宿 + め/目 = 泪 - め/目 + 宿 + も/門 = 盻 - め/目 + 宿 + う/宀/#3 = 眄 - め/目 + ほ/方 + そ/馬 = 眇 - め/目 + 宿 + 龸 = 眈 - 
め/目 + き/木 + selector 4 = 眛 - め/目 + と/戸 + 仁/亻 = 眤 - め/目 + 宿 + 比 = 眥 - め/目 + 比 + selector 4 = 眦 - め/目 + 龸 + ゐ/幺 = 眩 - め/目 + 宿 + け/犬 = 眷 - め/目 + selector 5 + む/車 = 眸 - め/目 + ゆ/彳 + 宿 = 睇 - め/目 + 宿 + つ/土 = 睚 - め/目 + し/巿 + せ/食 = 睛 - め/目 + た/⽥ + さ/阝 = 睥 - め/目 + 宿 + ま/石 = 睫 - め/目 + と/戸 + 日 = 睹 - め/目 + 宿 + へ/⺩ = 瞎 - め/目 + 龸 + 日 = 瞑 - め/目 + 宿 + め/目 = 瞞 - め/目 + 宿 + め/目 = 瞞 - め/目 + 龸 + つ/土 = 瞠 - め/目 + み/耳 + 氷/氵 = 瞰 - め/目 + を/貝 + き/木 = 瞶 - め/目 + 龸 + selector 1 = 瞹 - め/目 + 宿 + 日 = 瞻 - め/目 + 宿 + そ/馬 = 矇 - め/目 + と/戸 + み/耳 = 矚 - 心 + 龸 + め/目 = 苜 - め/目 + つ/土 + を/貝 = 覿 - め/目 + 宿 + い/糹/#2 = 雎 Compounds of 自 - れ/口 + め/目 = 嗅 - め/目 + 心 = 息 - 火 + め/目 + 心 = 熄 - め/目 + け/犬 = 臭 - も/門 + め/目 + け/犬 = 闃 - め/目 + た/⽥ = 鼻 - れ/口 + め/目 + た/⽥ = 嚊 - ふ/女 + め/目 + た/⽥ = 嬶 - か/金 + め/目 + た/⽥ = 鼾 Compounds of 睿 - に/氵 + 龸 + め/目 = 濬 Compounds of 面 - に/氵 + め/目 + selector 4 = 湎 - い/糹/#2 + め/目 + selector 4 = 緬 - よ/广 + め/目 + selector 4 = 靨 - す/発 + め/目 + selector 4 = 麺 - め/目 + も/門 + selector 2 = 靤 Compounds of 真 and 眞 - る/忄 + め/目 = 慎 - る/忄 + る/忄 + め/目 = 愼 - か/金 + め/目 = 鎮 - か/金 + か/金 + め/目 = 鎭 - め/目 + お/頁 = 顛 - や/疒 + selector 1 + め/目 = 癲 - れ/口 + selector 1 + め/目 = 嗔 - つ/土 + selector 1 + め/目 = 填 - 心 + selector 1 + め/目 = 槙 - め/目 + selector 1 + め/目 = 瞋 - め/目 + せ/食 + selector 1 = 鷆 - め/目 + 龸 + せ/食 = 鷏 Compounds of 乂 - 心 + め/目 = 艾 - め/目 + ぬ/力 = 刈 - く/艹 + め/目 + ぬ/力 = 苅 - め/目 + ね/示 = 刹 - め/目 + し/巿 = 希 - れ/口 + め/目 + し/巿 = 唏 - 日 + め/目 + し/巿 = 晞 - ん/止 + め/目 + し/巿 = 欷 - の/禾 + め/目 + し/巿 = 稀 - せ/食 + め/目 + し/巿 = 鯑 - め/目 + の/禾 = 殺 - め/目 + ⺼ = 肴 - に/氵 + め/目 + ⺼ = 淆 - 囗 + め/目 + の/禾 = 弑 - め/目 + 龸 + ち/竹 = 爻 - め/目 + selector 5 + そ/馬 = 爼 - な/亻 + 宿 + め/目 = 爽 Compounds of 牙 - り/分 + め/目 = 穿 - く/艹 + め/目 = 芽 - え/訁 + め/目 = 訝 - め/目 + さ/阝 = 邪 - め/目 + い/糹/#2 = 雅 - れ/口 + selector 4 + め/目 = 呀 - 氷/氵 + 宿 + め/目 = 冴 - め/目 + た/⽥ + selector 1 = 谺 - め/目 + 宿 + せ/食 = 鴉 Compounds of 黽 - い/糹/#2 + め/目 = 縄 - い/糹/#2 + い/糹/#2 + め/目 = 繩 - む/車 + め/目 = 蝿 - む/車 + 宿 + め/目 = 蠅 - の/禾 + 宿 + め/目 = 龝 Compounds of 弗 - 氷/氵 + め/目 = 沸 - め/目 + を/貝 = 費 - 仁/亻 + め/目 = 仏 - 仁/亻 + 仁/亻 
+ め/目 = 佛 - て/扌 + め/目 = 払 - て/扌 + て/扌 + め/目 = 拂 - と/戸 + selector 6 + め/目 = 髴 - ゆ/彳 + 宿 + め/目 = 彿 - る/忄 + 宿 + め/目 = 怫 - け/犬 + 宿 + め/目 = 狒 Compounds of 亀 - の/禾 + 比 + め/目 = 穐 - 比 + 比 + め/目 = 龜 - も/門 + 比 + め/目 = 鬮 - ほ/方 + 比 + め/目 = 鼇 - 氷/氵 + 比 + め/目 = 鼈 Compounds of 百 - な/亻 + 数 + め/目 = 佰 - ゆ/彳 + 数 + め/目 = 弼 - 心 + 数 + め/目 = 栢 - か/金 + 数 + め/目 = 瓸 - ま/石 + 数 + め/目 = 竡 - の/禾 + 数 + め/目 = 粨 - そ/馬 + 数 + め/目 = 貊 - さ/阝 + 数 + め/目 = 陌 Other compounds - ゐ/幺 + め/目 = 綿 - 心 + 宿 + め/目 = 棉 - ゐ/幺 + 宿 + め/目 = 緜 - か/金 + 宿 + め/目 = 錦 - 囗 + め/目 = 爾 - に/氵 + に/氵 + め/目 = 滿 - に/氵 + め/目 = 満 - る/忄 + に/氵 + め/目 = 懣 - め/目 + へ/⺩ = 璽 - ゆ/彳 + 囗 + め/目 = 彌 - 氷/氵 + 囗 + め/目 = 瀰 - に/氵 + 囗 + め/目 = 濔 - ね/示 + 囗 + め/目 = 禰 - み/耳 + 宿 + め/目 = 蹣 - れ/口 + 宿 + め/目 = 嚏 - れ/口 + う/宀/#3 + め/目 = 嚔 - そ/馬 + 宿 + め/目 = 牝 - ひ/辶 + 囗 + め/目 = 迩
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hsp33** Hsp33: Hsp33 protein is a molecular chaperone, distinguished from all other known chaperones by its mode of functional regulation. Its activity is redox regulated. Hsp33 is a cytoplasmically localized protein with highly reactive cysteines that respond quickly to changes in the redox environment. Oxidizing conditions like H2O2 cause disulphide bonds to form in Hsp33, a process that leads to the activation of its chaperone function.
**Enumerative combinatorics** Enumerative combinatorics: Enumerative combinatorics is an area of combinatorics that deals with the number of ways that certain patterns can be formed. Two examples of this type of problem are counting combinations and counting permutations. More generally, given an infinite collection of finite sets S_i indexed by the natural numbers, enumerative combinatorics seeks to describe a counting function which counts the number of objects in S_n for each n. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The twelvefold way provides a unified framework for counting permutations, combinations and partitions. Enumerative combinatorics: The simplest such functions are closed formulas, which can be expressed as a composition of elementary functions such as factorials, powers, and so on. For instance, as shown below, the number of different possible orderings of a deck of n cards is f(n) = n!. The problem of finding a closed formula is known as algebraic enumeration, and frequently involves deriving a recurrence relation or generating function and using this to arrive at the desired closed form. Enumerative combinatorics: Often, a complicated closed formula yields little insight into the behavior of the counting function as the number of counted objects grows. In these cases, a simple asymptotic approximation may be preferable. A function g(n) is an asymptotic approximation to f(n) if f(n)/g(n) → 1 as n → ∞. In this case, we write f(n) ∼ g(n). Generating functions: Generating functions are used to describe families of combinatorial objects. Let F denote the family of objects and let F(x) be its generating function. Then F(x) = ∑_{n=0}^{∞} f_n x^n, where f_n denotes the number of combinatorial objects of size n. The number of combinatorial objects of size n is therefore given by the coefficient of x^n.
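As a concrete illustration of an asymptotic approximation for the closed formula f(n) = n!, a minimal sketch in Python using Stirling's formula g(n) = √(2πn)(n/e)^n (Stirling's formula is a standard example, not stated in the text; the function name `stirling` is illustrative):

```python
import math

def stirling(n):
    """Stirling's asymptotic approximation: n! ~ sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# f(n)/g(n) -> 1 as n -> infinity; here the ratio approaches 1 from above.
ratios = [math.factorial(n) / stirling(n) for n in (5, 20, 100)]
```

The approximation error shrinks roughly like 1/(12n), so even for n = 100 the ratio is already within a tenth of a percent of 1.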
Some common operations on families of combinatorial objects and their effects on the generating function will now be developed. Generating functions: The exponential generating function is also sometimes used. In this case it would have the form F(x) = ∑_{n=0}^{∞} f_n x^n / n!. Once determined, the generating function yields the information given by the previous approaches. In addition, the various natural operations on generating functions such as addition, multiplication, differentiation, etc., have a combinatorial significance; this allows one to extend results from one combinatorial problem in order to solve others. Generating functions: Union Given two combinatorial families, F and G with generating functions F(x) and G(x) respectively, the disjoint union of the two families (F ∪ G) has generating function F(x) + G(x). Pairs For two combinatorial families as above the Cartesian product (pair) of the two families (F × G) has generating function F(x)G(x). Sequences A (finite) sequence generalizes the idea of the pair as defined above. Sequences are arbitrary Cartesian products of a combinatorial object with itself. Formally: Seq(F) = ε ∪ F ∪ F×F ∪ F×F×F ∪ ⋯ To put the above in words: an empty sequence, or a sequence of one element, or a sequence of two elements, or a sequence of three elements, etc. The generating function would be: 1 + F(x) + [F(x)]^2 + [F(x)]^3 + ⋯ = 1/(1 − F(x)). Combinatorial structures: The above operations can now be used to enumerate common combinatorial objects including trees (binary and plane), Dyck paths and cycles. A combinatorial structure is composed of atoms. For example, with trees the atoms would be the nodes. The atoms which compose the object can either be labeled or unlabeled. Unlabeled atoms are indistinguishable from each other, while labeled atoms are distinct. Therefore, for a combinatorial object consisting of labeled atoms a new object can be formed by simply swapping two or more atoms.
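The union, pair, and sequence constructions above can be sketched as operations on generating functions represented as truncated coefficient lists (a minimal sketch; the names `gf_add`, `gf_mul`, and `gf_seq` are hypothetical, not from the text):

```python
def gf_add(F, G):
    """Disjoint union: coefficientwise sum of two generating functions."""
    n = max(len(F), len(G))
    return [(F[i] if i < len(F) else 0) + (G[i] if i < len(G) else 0)
            for i in range(n)]

def gf_mul(F, G, order):
    """Cartesian product (pair): Cauchy product truncated to `order` terms."""
    H = [0] * order
    for i, a in enumerate(F):
        for j, b in enumerate(G):
            if i + j < order:
                H[i + j] += a * b
    return H

def gf_seq(F, order):
    """Sequence: 1/(1 - F(x)) as a truncated series; needs F[0] == 0."""
    assert F[0] == 0, "Seq requires no object of size 0"
    H = [0] * order
    H[0] = 1
    for n in range(1, order):
        # From H = 1 + F*H:  [x^n] H = sum_k F[k] * H[n-k]
        H[n] = sum(F[k] * H[n - k] for k in range(1, min(n, len(F) - 1) + 1))
    return H
```

For instance, with a single atom of size 1 (F(x) = x), sequences of atoms give 1/(1 − x), whose coefficients are all 1; with atoms of sizes 1 and 2 (F(x) = x + x^2), sequences are counted by the Fibonacci numbers.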
Combinatorial structures: Binary and plane trees Binary and plane trees are examples of an unlabeled combinatorial structure. Trees consist of nodes linked by edges in such a way that there are no cycles. There is generally a node called the root, which has no parent node. In plane trees each node can have an arbitrary number of children. In binary trees, a special case of plane trees, each node can have either two or no children. Let P denote the family of all plane trees. Then this family can be recursively defined as follows: P = {•} × Seq(P). In this case {•} represents the family of objects consisting of one node. This has generating function x. Let P(x) denote the generating function of P. Putting the above description in words: a plane tree consists of a node to which is attached an arbitrary number of subtrees, each of which is also a plane tree. Using the operations on families of combinatorial structures developed earlier, this translates to a recursive generating function: P(x) = x · 1/(1 − P(x)). After solving for P(x): P(x) = (1 − √(1 − 4x))/2. An explicit formula for the number of plane trees of size n can now be determined by extracting the coefficient of x^n: p_n = [x^n] P(x) = [x^n] (1 − √(1 − 4x))/2 = [x^n] 1/2 − (1/2)[x^n] √(1 − 4x) = −(1/2)[x^n] ∑_{k=0}^{∞} C(1/2, k)(−4x)^k = −(1/2) C(1/2, n)(−4)^n = (1/n) C(2n−2, n−1). Note: the notation [x^n] f(x) refers to the coefficient of x^n in f(x). Combinatorial structures: The series expansion of the square root is based on Newton's generalization of the binomial theorem. To get from the fourth to the fifth line, manipulations using the generalized binomial coefficient C(1/2, k) are needed. The expression on the last line is equal to the (n − 1)st Catalan number. Therefore, p_n = c_{n−1}.
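The recursion P(x) = x/(1 − P(x)) can be iterated coefficient by coefficient and checked against the closed form p_n = (1/n) C(2n−2, n−1) (a sketch; the name `plane_tree_counts` is illustrative):

```python
from math import comb

def plane_tree_counts(N):
    """First N coefficients of P(x), where P = x * Seq(P).
    p[n] counts plane trees with n nodes; s[m] = [x^m] 1/(1 - P(x))."""
    p = [0] * (N + 1)
    s = [0] * N
    s[0] = 1
    for n in range(1, N + 1):
        p[n] = s[n - 1]  # [x^n] P = [x^(n-1)] 1/(1 - P)
        if n < N:
            # [x^n] of 1/(1-P) = 1 + P * 1/(1-P), a Cauchy-product term
            s[n] = sum(p[k] * s[n - k] for k in range(1, n + 1))
    return p[1:]

# The recursion reproduces the closed form (1/n) * C(2n-2, n-1):
closed = [comb(2 * n - 2, n - 1) // n for n in range(1, 9)]
```

Both computations give 1, 1, 2, 5, 14, 42, 132, 429 — the Catalan numbers shifted by one index, matching p_n = c_{n−1}.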
**Sofosbuvir/daclatasvir** Sofosbuvir/daclatasvir: Daclatasvir/sofosbuvir (trade names Darvoni, Sovodak) is a two-drug combination for the treatment of hepatitis C. It is given as a single daily pill containing daclatasvir, a viral NS5A inhibitor, and sofosbuvir, a nucleotide inhibitor of the viral RNA polymerase NS5B. It is on the World Health Organization's List of Essential Medicines. Society and culture: This combination is produced by an Iranian company under the trade name of Sovodak. The combination includes 400 mg sofosbuvir and 60 mg daclatasvir and has been used in clinical trials since 2015. Sovodak was approved by the Iranian Food and Drug Administration in October 2015 and is currently marketed in Iran as the treatment of choice for all genotypes of hepatitis C, as recommended by the national Iranian guideline for treating hepatitis C. Research: The similarities between the hepatitis C virus and SARS-CoV-2 have led some researchers to investigate the effectiveness of sofosbuvir/daclatasvir against COVID-19. Three recently published studies have found this combination to be beneficial against COVID-19, although the findings require confirmation by larger studies. In October 2020, a meta-analysis found a significantly lower risk of all-cause mortality with the drug combination when given to hospitalized patients with COVID-19.
**Homework coach** Homework coach: A homework coach is a category of tutor whose mission is to support a student's overall academic success rather than provide remedial instruction in a specific subject. A parent might hire a homework coach when their child is struggling in school, not because they have difficulties with the academic material but because of problems with study skills, organization, executive function skills and motivation. The goal of the coach is to teach the child to become a successful student by learning to plan assignments, organize materials, manage time effectively and, in the case of an ADHD student, learn ways to manage the symptoms of their attention deficit disorder. As such, the role of a homework coach is similar to ADHD coaching but is focused specifically on success in school. Some providers use the term "homework helper" as well as "homework coach." Applicability: A homework coach is indicated for any student whose poor performance in school or college appears to be more related to organization and study skills than to difficulty understanding the instructional material. Such students may show signs of ADHD or executive function disorder. Current statistics published by the Centers for Disease Control show that as many as 11% of school children 4–17 years of age have received an ADHD diagnosis. Among ADHD students, about 33% will not graduate high school with their peers, which is about twice the rate of the non-ADHD student population. By hiring a homework coach, parents hope that the added support in building study skills, helping plan assignments, teaching test-taking strategies and monitoring homework will keep their children on track in school and increase their chances of graduating on time. Effectiveness: Parents generally measure the effectiveness of homework coaching in terms of higher grades and less discord in the household over their student's homework habits.
There are also many anecdotal news stories and case studies about how a homework coach has helped students improve their grades and self-confidence. The intensity and pacing of homework affect a student's stress levels and emotional state, and a sound approach to planning and carrying out assignments allows them to be completed efficiently.
**Ice cream parlor** Ice cream parlor: Ice cream parlors (American English) or ice cream parlours (British English) are places that sell ice cream, gelato, sorbet, and/or frozen yogurt to consumers. Ice cream is typically sold as regular ice cream (also called hard-packed or hard-serve ice cream), and/or soft serve, which is usually dispensed by a machine with a limited number of flavors (e.g., chocolate, vanilla, and "twist", or "zebra", a mix of the two). Ice cream parlors generally offer a number of flavors and items. Parlors often serve ice cream and other frozen desserts in cones, cups or dishes, the latter two to be eaten with a spoon. Some ice cream parlors prepare ice cream desserts such as sundaes (ice cream topped with syrup, whipped cream and other toppings) or milkshakes, or even a blend (known as a Boston shake). History: While the origins of ice cream are often debated, most scholars trace the first ice cream parlor back to France in the 17th century. In 1686, Francesco Procopio del Coltelli opened Paris' first café. The Café Procope, named by its Sicilian founder, introduced gelato to the French public. The dessert was served to its elite guests in small porcelain bowls.Until 1800, ice cream remained a rare and exotic dessert enjoyed mostly by the elite. The introduction of insulated ice houses in 1800, the first ice cream factory in Pennsylvania in 1851, and industrial refrigeration in the 1870s made manufacturing and storing ice cream much simpler. The first ice cream factory was built by Jacob Fussell, a milk dealer who bought dairy products from Philadelphia farmers and sold them in Baltimore. The mass production of ice cream cut the product's cost significantly, making it more popular and more affordable for people of lower classes. History: In the early 1800s, an early form of a U.S. 
ice cream parlor existed in Philadelphia, Pennsylvania, that sold "all kinds of refreshments, as Ice Cream, Syrups, French Cordials, Cakes, Clarets of the best kind, Jellies, etc." According to one source, the first U.S. ice cream parlor opened in New York City in 1790. Product overview: Gelato is a type of Italian ice cream with more milk and less cream than American ice cream. Sorbet is a frozen treat made from fruit, syrup and ice; no milk or cream is used. Frozen yogurt is a common low-fat ice cream alternative with a smooth texture that is similar to soft serve ice cream. All of these frozen products may be sold in ice cream cones, cups, sundaes, and milkshakes. Some parlors may also sell ice cream cakes, ice cream bars and other pre-packaged frozen sweets. In addition to frozen dessert products, some modern ice cream parlors also sell a variety of hot fast foods. Types: Parlors vary in terms of size and environment. Some only have an order window and outside seating, while others have complete indoor facilities. Some parlors have drive-through windows. There are even parlors that combine several of these methods. Some parlors remain open all year round, typically in warmer weather locations and urban areas, while others in colder climates stay open only during warmer months, particularly from March to November. For example, some ice cream parlors in Vienna, Austria close in the winter months. Some ice cream parlors in Moscow, Russia, offer alcoholic beverages along with ice cream. Ice cream parlor chains: Because ice cream parlors are located throughout the world, there are both small, local franchises as well as large, global enterprises. Some of the most notable large, global ice cream parlors include Baskin-Robbins, Ben & Jerry's, Bruster's Ice Cream, Carvel, Cold Stone Creamery, Dairy Queen, Dippin' Dots, Friendly's, and Häagen-Dazs.
Yogurtland, Yogen Früz, and sweetFrog are notable frozen yogurt parlors. Just as the size, style, and selection within each ice cream parlor may differ, so may its renown. Each July in the United States, in honor of National Ice Cream Month, several prominent publications rank the popularity of ice cream parlors throughout the United States. In 2014, Travel + Leisure, National Geographic, Business Insider, Food & Wine, and TripAdvisor published their top-ranked ice cream parlors. Ice cream parlor chains: Travel + Leisure: America's Best Ice Cream Shops National Geographic: Top 10 Places to Eat Ice Cream Business Insider: The 10 Best Ice Cream Shops In The US, According To Pinterest Users Food & Wine: Best Ice Cream Spots in the U.S. TripAdvisor: Best ice cream parlors in the US, ranked by TripAdvisor users
**Narrator (Windows)** Narrator (Windows): Narrator is a screen reader in Microsoft Windows. Developed by Professor Paul Blenkhorn in 2000, the utility made the Windows operating system more accessible for blind and visually impaired users. Overview: Narrator is included with every copy of Microsoft Windows, providing a measure of access to Windows without the need to install additional software, as long as the computer in use includes a sound card and speakers or headphones. Windows 2000 was the first Microsoft operating system released with some degree of accessibility for the blind built in, permitting a blind person to walk up to any such computer and make some use of it immediately. Overview: The Windows 2000 version of Narrator uses SAPI 4 and allows the use of other SAPI 4 voices. The Windows XP version uses the newer SAPI 5, but only allows the use of the default voice, Microsoft Sam, even if other voices have been installed. In Windows Vista and Windows 7, Narrator was updated to use SAPI 5.3 and the Microsoft Anna voice for English. In Windows Ultimate and Windows editions for China, the Microsoft Lili voice for Mandarin Chinese is included. In Windows 10, Narrator is available in English (United States, United Kingdom, and India), French, Italian, German, Japanese, Korean, Mandarin (Chinese Simplified and Chinese Traditional), Cantonese (Chinese Traditional), Spanish (Spain and Mexico), Polish, Russian, and Portuguese (Brazil). A version of Narrator is also included in all Windows Phones, though with far fewer settings than the Narrator for the desktop. Narrator for Windows Phones previously only worked if the phone's language was set to "English (United States)". There are numerous voices included in the Narrator pack, such as Microsoft David, Microsoft Zira, Microsoft Mark, and in earlier editions, Microsoft Hazel. In Windows 11, the Narrator app was redesigned and new natural voices were added.
The redesigned app received a new icon and is available in both light and dark themes.
**Maria (reachability analyzer)** Maria (reachability analyzer): Maria: The Modular Reachability Analyzer is a reachability analyzer for concurrent systems that uses Algebraic System Nets (a high-level variant of Petri nets) as its modelling formalism.