**Daily call sheet** Daily call sheet: A daily call sheet is a filmmaking term for the schedule supervised by the assistant director and crafted by the 2nd assistant director, using the director's shot list, the production schedule and other logistics considerations. It is issued to the cast and crew of a film production to inform them of where and when they should report for a particular day of filming, usually no later than 12 hours before the start of the next work day. Call sheets are a vital part of video production.

Daily call sheet: The start of the day's production schedule is marked by general and individual call times, the times when people are expected to start work on a film set.

Information found on call sheets: Call sheets include other useful information such as contact information (e.g. phone numbers of crew members and other contacts), the schedule for the day, which scenes and script pages are being shot, and the address of the shoot location and parking arrangements. Call sheets also carry information about cast transportation arrangements, parking instructions and safety notes. A section on the front of the call sheet is usually dedicated to reminding department heads of the day's specific needs that go beyond the unit's usual tools and equipment, such as special crane rentals, special effects builds, and props and sets needing to be readied for the day. Call sheets may also provide logistical information regarding the location. It is common to find such items as weather information, sunrise/sunset times, local hospitals, restaurants, dietary restrictions, meal times and quantities, and hardware stores on call sheets.

Information found on call sheets: Historically, call sheets were typed on a typewriter (or handwritten), then copied and delivered by courier or runner. While the history of call sheets is not well documented, the oldest artifacts sold publicly date back to as early as 1941. Modern call sheets are Excel-based and emailed as PDFs as well as printed and distributed on set. The latest generation of call sheets is cloud-based, while emailed PDFs remain the industry norm; paper copies on set became rarer during the COVID-19 pandemic. Call sheets adhere to the Legal paper size format, and film production departments keep Legal-size paper handy on set for printing them.
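The information listed above amounts to a simple record schema. As a rough illustration only, a call sheet could be modeled as a data structure like the following; all field names here are invented for this sketch, not an industry standard:

```python
from dataclasses import dataclass, field
from datetime import time

# Hypothetical call-sheet schema; fields mirror the items described in the
# text above (call times, scenes, location, parking, safety, weather, and
# per-department special needs). Illustrative, not an industry format.
@dataclass
class CallSheet:
    production_title: str
    shoot_date: str                # e.g. "1941-06-02"
    general_call: time             # when the whole unit starts work
    individual_calls: dict[str, time] = field(default_factory=dict)
    scenes: list[str] = field(default_factory=list)   # scenes/script pages
    location_address: str = ""
    parking_notes: str = ""
    safety_notes: str = ""
    weather: str = ""              # forecast, sunrise/sunset times
    nearest_hospital: str = ""
    department_needs: dict[str, str] = field(default_factory=dict)
    # e.g. {"grip": "crane rental", "sfx": "rain rig build"}

sheet = CallSheet("Example Feature", "2024-05-01", time(7, 0))
sheet.individual_calls["lead actor"] = time(6, 0)
print(sheet.general_call)  # 07:00:00
```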
**Journal of Mathematical Psychology** Journal of Mathematical Psychology: The Journal of Mathematical Psychology is a peer-reviewed scientific journal established in 1964. It covers all areas of mathematical and theoretical psychology, including sensation and perception, psychophysics, learning and memory, problem solving, judgment and decision-making, and motivation. It is the official journal of the Society for Mathematical Psychology and is published on their behalf by Elsevier. Abstracting and indexing: The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2017 impact factor of 2.176.
**Baum test** Baum test: The Baum test (also known as the "tree test" or the "Koch test") is a projective test used extensively by psychologists around the world. "Baum" is the German word for tree. The test reflects an individual's personality and underlying emotions through the drawing of a tree, which is then analyzed.

History: In 1952, Karl Koch interpreted patterns according to principles of handwriting analysis after asking the subject to draw a tree. He attributed the method to Emil Jucker, who clinically analyzed the forms of trees. The features analyzed include the size of the tree, its elements (trunk, crown, branches), the ground, and the vitality of the tree. The Baum test is used as a clinical method for assessing personality and expressing conflicts, especially for assessing personality in the developmental age. Nowadays, the Baum test is also used in clinical research, such as diagnosing cognitive disorders.

Process: The first step is to have the participant draw a tree on a sheet of paper. In some cases, participants are also asked to write a short essay about the drawn tree. A psychologist or a psychiatrist then evaluates the various aspects of the drawing, as well as the individual's behavior and comments while completing the test. The evaluation is based on standard criteria and scored from "very immature" to "very mature", while the essay is graded as advanced, normal, or backward.

Indications: J. H. Plokker suggested that the type of tree an individual draws relates to the structure of the psyche or unconscious itself, or that it symbolizes one's personality, since it can project self-image. Koch and Jucker focused on interpreting the parts of trees. Their analysis runs as follows (a compact encoding of this list appears in the code sketch below):
Large Baum: indicates self-confidence
Small Baum: indicates a lack of self-confidence
Big trunk: indicates straightforwardness and liveliness
Small trunk: indicates weariness
Deep roots: indicate stability
No/shallow roots: indicate a feeling of exclusion
Big branches: indicate arrogance
No/small branches: indicate unsocial behavior
Large leaves: indicate friendliness and social ability
Small leaves: indicate shyness

Forms of analysis: Two forms of analysis are used to evaluate and interpret the Baum test. The global structure analysis sees the tree as a whole, for example the tree's overall size and location on the paper. The internal structure analysis, introduced by Emil Jucker, focuses on the finer details of the tree. There are 59 detail-oriented aspects of the tree drawing that are used to evaluate an individual's thoughts or feelings, including roots, trunk, branches, crown, leaves, knots, shading, symmetry, and archetypal features.

Research and applications: Uses in cross-cultural psychology. Since drawing is nonverbal, it effectively overcomes language barriers between countries. In 1966, Dennis analyzed children's drawings and found that they are good indicators of group values and cognitive functioning. Though familiarity is one factor in what children choose to depict, they also draw things they value (wishes and desires). He concluded that drawing as a projective technique provides children with a good opportunity to express their personal feelings and their attitudes towards others and the environment. Research conducted in 2007 encouraged people who work with young people to use drawing as a child-centered procedure and evaluation tool, while expecting a level of subjectivity in the interpretation process.
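The Koch/Jucker indications listed earlier form a simple feature-to-interpretation mapping. As a rough illustration, purely for encoding the list and not as a validated scoring instrument, it can be written as a lookup table:

```python
# Lookup-table sketch of the Koch/Jucker indications listed above;
# illustrative encoding only, not a clinical scoring tool.
BAUM_INTERPRETATIONS = {
    ("tree", "large"): "self-confidence",
    ("tree", "small"): "lack of self-confidence",
    ("trunk", "big"): "straightforwardness and liveliness",
    ("trunk", "small"): "weariness",
    ("roots", "deep"): "stability",
    ("roots", "none/shallow"): "feeling of exclusion",
    ("branches", "big"): "arrogance",
    ("branches", "none/small"): "unsocial behavior",
    ("leaves", "large"): "friendliness and social ability",
    ("leaves", "small"): "shyness",
}

def interpret(feature: str, rating: str) -> str:
    """Return the textbook interpretation for an observed drawing feature."""
    return BAUM_INTERPRETATIONS.get((feature, rating), "no standard interpretation")

print(interpret("roots", "deep"))  # -> stability
```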
Research and applications: Uses in personality psychology. This projective test is also used to reflect human personality constructs, unlike introspective, self-report questionnaires. It represents the unconscious side of personality, assessed through the response to a stimulus (drawing the tree). According to Ursula, the size of the tree and the width of the trunk symbolize "a sense of self-expression and the amount of mental energy". The position of the tree symbolizes "how one perceives and relates to the mental space and time in which one lives."

Uses in the diagnosis of mental disorders. A substantial body of research provides evidence for the Baum test. Roberto and his colleagues studied the Baum test with a group of patients with mild cognitive impairment (MCI) and controls. Comparing the patients' tree drawings with those of the control group, they found that the trunk-to-crown ratio of trees drawn by MCI patients is greater than that of controls, while their trees are significantly smaller. This indicates an inverse relationship with the ability to use language, which is also supported by previous studies. One study on Alzheimer's disease also showed differences in drawing patterns. By finding that trees drawn by MCI patients differ from those drawn by healthy subjects, with a progressive differentiation by degree of cognitive impairment, it suggested that the Baum test could help in diagnosing cognition-related diseases.

Research and applications: Another recent application of the Baum test is in diagnosing depressive disorders, where statistically significant differences in canopy widths have been found. It may also help identify characteristics of eating disorders; researchers measured the height and width of the trunk and the crown, along with details of how the drawings were produced.

Advantages and limitations: The advantages of the Baum test are that it can be administered quickly (5–10 minutes), is suitable for both individual and group testing, and offers the clinician an opportunity to observe the patient's motor skills. In addition, as a nonverbal tool for psychodiagnosis, it provides personality information for psychotherapy while being unlikely to traumatize the subject. However, researchers have pointed out that, like other projective tests, the Baum test lacks scientific evidence supporting its analysis; the methods of analysis depend on individual subjective judgement. Additionally, the test is typically not used with patients with very low IQs because their drawings tend to be quite meager.
**Axo-axonic synapse** Axo-axonic synapse: An axo-axonic synapse is a type of synapse formed by one neuron projecting its axon terminals onto another neuron's axon. Axo-axonic synapses were found and described more recently than the other, more familiar types of synapses, such as axo-dendritic and axo-somatic synapses. The spatio-temporal properties of neurons are altered by the type of synapse formed between them. Unlike the other types, the axo-axonic synapse does not contribute towards triggering an action potential in the postsynaptic neuron. Instead, it affects the probability of neurotransmitter release in response to any action potential passing through the axon of the postsynaptic neuron. Thus, axo-axonic synapses appear to be very important for the brain in achieving specialized neural computation.

Axo-axonic synapse: Axo-axonic synapses are found throughout the central nervous system, including in the hippocampus, cerebral cortex and striatum in mammals; in the neuromuscular junctions of crustaceans; and in the visual circuitry of dipterans. Axo-axonic synapses can induce either inhibitory or excitatory effects in the postsynaptic neuron. A classic example of the role of axo-axonic synapses is causing inhibitory effects on motoneurons in the spinal somatic reflex arc. This phenomenon is known as presynaptic inhibition.

Background: Complex interconnections of neurons form neural networks, which are responsible for various types of computation in the brain. Neurons receive inputs mainly through dendrites, which play a role in spatio-temporal computation, leading to the firing of an action potential which subsequently travels through the axon to the synaptic terminals. Based on their locations, synapses can be classified into various kinds, such as axo-dendritic, axo-somatic, and axo-axonic synapses. The prefix indicates the part of the presynaptic neuron (i.e., 'axo-' for axons), and the suffix the location where the synapse is formed on the postsynaptic neuron (i.e., '-dendritic' for dendrites, '-somatic' for the cell body and '-axonic' for synapses on axons). Synapse location governs the role of that synapse in a network of neurons. In axo-dendritic synapses, presynaptic activity affects spatio-temporal computation in the postsynaptic neuron by altering the electrical potential in the dendritic branch, whereas the axo-somatic synapse affects the probability of firing an action potential in the postsynaptic neuron by causing inhibitory or excitatory effects directly at the cell body. Whereas these other types of synapses modulate postsynaptic neural activity, axo-axonic synapses exert subtle effects on network-level neural information transfer. In such synapses, activity in the presynaptic neuron does not change the membrane potential (i.e., depolarize or hyperpolarize) of the postsynaptic cell body, because the presynaptic neuron projects directly onto the axon of the postsynaptic neuron. Thus, the axo-axonic synapse mainly affects the probability of neurotransmitter vesicle release in response to an action potential firing in the postsynaptic neuron. Unlike other kinds of synapses, the axo-axonic synapse manipulates the effect of a postsynaptic neuron's firing on the neurons further downstream in the network. Owing to this mechanism, most of these synapses are inhibitory, yet a few show excitatory effects in postsynaptic neurons.
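The functional distinction described above can be caricatured in a few lines of code: rather than changing whether the postsynaptic neuron fires, an axo-axonic input scales the probability that its firing releases neurotransmitter. This toy model is illustrative only; the multiplicative form and all parameter names are assumptions for the sketch, not a published model:

```python
# Toy model of presynaptic inhibition at an axo-axonic synapse.
# Assumption: inhibition suppresses vesicle release multiplicatively.
def release_probability(p_baseline: float, axo_axonic_drive: float,
                        gain: float = 0.8) -> float:
    """Effective vesicle-release probability when the postsynaptic axon fires.

    p_baseline       -- release probability with no axo-axonic input (0..1)
    axo_axonic_drive -- activity of the axo-axonic (presynaptic) terminal (0..1)
    gain             -- maximal fractional suppression; a negative gain would
                       model the rarer excitatory (facilitating) case
    """
    p = p_baseline * (1.0 - gain * axo_axonic_drive)
    return min(max(p, 0.0), 1.0)

# Example: the postsynaptic neuron still fires its action potential, but
# strong axo-axonic drive cuts the chance that the spike releases transmitter.
print(release_probability(0.5, 0.0))  # 0.5, no axo-axonic input
print(release_probability(0.5, 1.0))  # 0.1 with full drive (toy numbers)
```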
History: The first direct evidence of the existence of axo-axonic synapses was provided by E. G. Gray in 1962. Gray produced electron microscopy photographs of axo-axonic synapses formed on the terminals of muscle afferents involved in the spinal somatic reflex arc in slices of a cat's spinal cord. Later, Gray coined the term 'axo-axonic' after getting photographic confirmation from as many as twelve axo-axonic synapses. Within the next two years, scientists found axo-axonic synapses in various other places in the nervous systems of different animals, such as the retina of cats and pigeons, the lateral geniculate nucleus of monkeys, the olfactory bulb of mice, and various lobes of the octopus brain. This further confirmed the existence of axo-axonic synapses in brains across animal phyla.

History: Prior to the discovery of axo-axonic synapses, physiologists had predicted the possibility of such mechanisms as early as 1935, following their observations of electrophysiological recordings and quantal analysis of brain segments. They had observed inhibitory responses in postsynaptic motoneurons in slice preparations of the monosynaptic reflex arc. During simultaneous recordings from presynaptic and postsynaptic neurons, the physiologists could not make sense of the infrequent inhibition observed in the postsynaptic neuron, with no membrane potential changes in the presynaptic neuron. At the time, this phenomenon was known as "presynaptic inhibitory action", a term proposed by Karl Frank in 1959 and later well summarized by John Eccles in his book. After Gray's finding of the axo-axonic synapse in 1962, scientists confirmed that this phenomenon was in fact due to the axo-axonic synapses present in the reflex arc. More recently, in 2006, researchers discovered the first evidence of excitatory effects caused by an axo-axonic synapse. They found that GABAergic neurons project onto the axons of pyramidal cells in the cerebral cortex to form axo-axonic synapses and elicit excitatory effects in cortical microcircuits.

Function: Below are the brain locations where axo-axonic synapses are found in different animals.

Function: Cerebellar cortex. The axo-axonic synapse in the cerebellar cortex originally appeared in one of the drawings of Santiago Ramón y Cajal in his book published in 1909. Later, using electron microscopy, it was confirmed that the basket cell axon projects onto the axon hillock of Purkinje cells in the cerebellar cortex in cats and other mammals, forming axo-axonic synapses. The first electrophysiological characterization of an axo-axonic synapse formed on Purkinje cells was done in 1963, where the presynaptic basket cell axons were found to inhibit the terminal output of postsynaptic Purkinje cells through the axo-axonic synapse. Network-level study revealed that the granule cells (whose axons form the parallel fibers), which activated the Purkinje cells, also activated the basket cells, which subsequently inhibited the effect of the Purkinje cells on the downstream network.

Function: Cerebral cortex. Axo-axonic synapses are found in the visual cortex (in V1 and V2) in mammals, and have been well studied in cats, rats and primates such as monkeys. The synapse is formed on the initial segments of the axons of pyramidal cells in several layers of the visual cortex. The projecting neurons for these synapses come from various parts of the central nervous system and neocortex. Similarly, axo-axonic synapses are found in the motor cortex, in the subiculum and in the piriform cortex.
In the striate cortex, as Golgi's method and electron microscopy revealed, as many as five axo-axonic synapses are formed onto a single pyramidal cell. In the cerebral cortex, inhibitory axo-axonic synapses may play a widespread role in network-level activity by enabling synchronized firing of pyramidal cells, essentially by modulating the threshold for output of these cells. These synapses are also found on the initial segments of axons of pyramidal cells in the somatosensory cortex and in the primary olfactory cortex, where they are found to be of the inhibitory kind. Studying the locations of axo-axonic synapses in the primary olfactory cortex, researchers have suggested that they may play a critical role in synchronizing oscillations in the piriform cortex (in the olfactory cortex), which aids olfaction. Axo-axonic synapses are also found in the hippocampus. These synapses are formed mainly on principal cells in stratum oriens and stratum pyramidale, and rarely in stratum radiatum; they commonly receive projections from GABAergic local interneurons. The horizontal interneurons, which show a laminar distribution of dendrites and receive direct synaptic inputs from CA1 pyramidal cells, are involved in axo-axonic synapses in the hippocampus. Thus, in general, these studies indicate that axo-axonic synapses can provide a basic mechanism of information processing in the cerebral cortex.

Function: Basal ganglia. Microscopy studies in the striatum previously suggested a rare occurrence of axo-axonic synapses in individual sections, but extrapolations from topological data suggest much higher counts of such synapses in the striatum, where a therapeutic role for axo-axonic synapses in treating schizophrenia had previously been postulated. In one study, the authors examined 4,811 synapses in rat striatum sections, of which 15 were found to be axo-axonic. These axo-axonic synapses are formed by dopaminergic inhibitory interneurons (on the presynaptic side) projecting onto the axons of glutamatergic cortico-striatal fibers in the rat striatum.

Function: Brainstem. Axo-axonic synapses are found in the spinal trigeminal nucleus in the brainstem. Electron microscopy studies of the kitten brainstem quantified synaptogenesis of axo-axonic synapses in the spinal trigeminal nucleus at different developmental ages. The authors identified the synapses by counting vesicles released into the synaptic cleft, which can be observed in the micrographs. Axo-axonic contacts were shown to increase consistently throughout the developmental period, from the age of 3 hours to the age of 27 days in kittens. The highest rate of synaptogenesis occurs during the first 3 to 6 days, by the end of which the kitten's spinal trigeminal nucleus has nearly half of the axo-axonic synapses present in adult cats. Later, between 16 and 27 days of age, there is another surge of axo-axonic synaptogenesis. Axo-axonic synapses have also been observed in the solitary nucleus (also known as the nucleus of the solitary tract), uniquely in its commissural portion, in neuroanatomical studies that used 5-hydroxydopamine to label them. These axo-axonic synapses are formed on baroreceptor terminals by presynaptic adrenergic fibers, and are proposed to play a role in the baroreflex.

Function: Spinal cord. Axo-axonic synapses are found in the mammalian spinal reflex arc and in the substantia gelatinosa of Rolando (SGR).
In the spinal cord, axo-axonic synapses are formed on the terminals of sensory neurons by presynaptic inhibitory interneurons. These synapses were first studied using intracellular recordings from spinal motoneurons in cats, and have been shown to cause presynaptic inhibition. This appears to be a common mechanism in the spinal cord, in which GABAergic interneurons inhibit presynaptic activity in sensory neurons and thereby control activity in motor neurons, enabling selective control of muscles. In efforts to quantify the occurrence of axo-axonic synapses in the SGR region in rats, 54 such synapses were found among a total of 6,045 synapses examined. These 54 axo-axonic synapses were shown to have either agranular vesicles or large granular vesicles.

Function: Vestibular system. Axo-axonic synapses are found in the lateral vestibular nucleus in rats. They are formed by the small axons of interneurons onto the axon terminals of large axons, upstream of the main dendritic stem. Interestingly, the authors claimed that axo-axonic synapses, which are abundant in rats, are absent from the lateral vestibular nucleus in cats. They note that the types of axon terminals identified and described in cats are all found in rats, but the reverse is not true, because the axons forming the axo-axonic synapses are missing in cats. These synapses are proposed to enable complex neural computation for the vestibular reflex in rats.

Function: Hindbrain. Axo-axonic synapses are found on the Mauthner cells in goldfish. The axon hillock and initial axon segments of Mauthner cells receive terminals from extremely fine unmyelinated fibers, which cover the axon hillock with helical projections. These helical projections around Mauthner cells are also known as the axon cap. The difference between the axo-axonic synapses and other synapses on Mauthner cells is that synapses on the dendrites and soma receive myelinated fibers, while the axons receive unmyelinated fibers. Mauthner cells are big neurons involved in fast escape reflexes in fish. Thus, these axo-axonic synapses could selectively disable the escape network by controlling the effect of Mauthner cells on the neural network further downstream. Study of the morphological variation of the axo-axonic synapses at the axon hillock of Mauthner cells suggests that, evolutionarily, these synapses are more recent than the Mauthner cells themselves. The response to startle can be mapped phylogenetically, which confirms that basal actinopterygian fish, with little to no axo-axonic synapses on their Mauthner cells, show a worse escape response than fish with such synapses.

Function: Neuromuscular junction. Inhibitory axo-axonic synapses are found in crustacean neuromuscular junctions and have been widely studied in crayfish. Axo-axonic synapses are formed on the excitatory axons (the postsynaptic side) by motor neurons on the presynaptic side. The motor neuron that serves as the common inhibitor in crab limb closers and limb accessory flexors forms axo-axonic synapses in addition to its neuromuscular junctions with the muscles in crayfish. These synapses were first observed in 1967, when they were found to cause presynaptic inhibition in the leg muscles of crayfish and crabs. Subsequent studies found that the number of axo-axonic synapses varies with the location of the leg muscles relative to the nervous system; for instance, proximal regions have three times as many axo-axonic synapses as central regions.
These synapses are proposed to function by limiting neurotransmitter release for controlled leg movements.

Clinical Significance: An example of the physiological role of axo-axonic synapses, formed by GABAergic inhibitory interneurons on the axons of granule cells, is in eliciting the spontaneous seizures that are a key symptom of intractable epilepsy. The presynaptic inhibitory interneurons, which can be labeled by cholecystokinin and GAT-1, are found to modulate the granule cells' spike output. The same granule cells subsequently project excitatory mossy fibers to pyramidal neurons in the hippocampal CA3 region.

Clinical Significance: One of the two leading theories for the pathoetiology of schizophrenia is the glutamate theory. Glutamate is a neurotransmitter well studied for its role in learning and memory, and also in brain development during the prenatal period and childhood. Studies of the rat striatum found inhibitory axo-axonic synapses formed on the glutamatergic cortico-striatal fibers, and their authors proposed that these axo-axonic synapses in the striatum could be responsible for inhibiting the glutamatergic neurons. Additionally, these dopaminergic synapses are proposed to cause hyperdopaminergic activity and become neurotoxic for the postsynaptic glutamatergic neurons. This is proposed as a possible mechanism for the glutamate dysfunction observed in schizophrenia.

Development: A study of the spinal cord in mice suggests that the sensory Ig/Caspr4 complex is involved in the formation of axo-axonic synapses on proprioceptive afferents. These synapses are formed through the projection of GABAergic interneurons onto sensory neurons, upstream of the motor neurons. In the axo-axonic synapse, expressing the NB2 (Contactin5)/Caspr4 coreceptor complex in postsynaptic neurons, along with expressing NrCAM/CHL1 in presynaptic interneurons, results in increased numbers of such synapses forming in the spinal cord. Also, knocking out NB2 from the sensory neurons reduced the number of axo-axonic synapses formed by GABAergic interneurons, which suggests the necessity and role of NB2 in the synaptogenesis of axo-axonic synapses.
**Solar-powered pump** Solar-powered pump: Solar-powered pumps run on electricity generated by photovoltaic (PV) panels or on the radiated thermal energy available from collected sunlight, as opposed to grid electricity- or diesel-run water pumps. Generally, solar-powered pumps consist of a solar panel array, a solar charge controller, a DC water pump, a fuse box/breakers, electrical wiring, and a water storage tank. The operation of solar-powered pumps is more economical, mainly due to lower operation and maintenance costs, and has less environmental impact than pumps powered by an internal combustion engine. Solar pumps are useful where grid electricity is unavailable or impractical and alternative sources (in particular wind) do not provide sufficient energy.

Components: A PV solar-powered pump system has three main parts: one or more solar panels, a controller, and a pump. The solar panels make up most (up to 80%) of the system's cost.[1] The size of the PV system is directly dependent on the size of the pump, the amount of water that is required, and the solar irradiance available.

Components: The purpose of the controller is twofold. Firstly, it matches the output power that the pump receives with the input power available from the solar panels. Secondly, a controller usually provides low- or high-voltage protection, whereby the system is switched off if the voltage is too low or too high for the operating voltage range of the pump. This increases the service life of the pump, thus reducing the need for maintenance. Other ancillary functions include automatically shutting down the system when the water source level is low or when the storage tank is full, regulating water output pressure, blending power input between the solar panels and an alternate power source such as the grid or an engine-powered generator, and remotely monitoring and managing the system through an online portal offered as a cloud service by the manufacturer.

Components: Solar pump motors can run on alternating current (AC) or direct current (DC). DC motors are used for small to medium applications up to about 4 kW rating, and are suitable for applications such as garden fountains, landscaping, drinking water for livestock, or small irrigation projects. Since DC systems tend to have higher overall efficiency than AC pumps of a similar size, costs are reduced, as smaller solar panels can be used.

Components: Finally, if an AC solar pump is used, an inverter is necessary to change the DC power from the solar panels into AC for the pump. The supported power range of inverters extends from 0.15 to 55 kW, which allows them to be used for larger irrigation systems. The panels and inverter must be sized accordingly, though, to accommodate the inrush characteristic of an AC motor. To aid in proper sizing, leading manufacturers provide proprietary sizing software tested by third-party certifying companies. The sizing software may include the projected monthly water output, which varies with seasonal changes in insolation.

Water pumping: Solar-powered water pumps can deliver drinking water, water for livestock, or irrigation water. Solar water pumps may be especially useful in small-scale or community-based irrigation, as large-scale irrigation requires large volumes of water that in turn require a large solar PV array.
As the water may only be required during some parts of the year, a large PV array would provide excess energy that is not necessarily required, making the system inefficient unless an alternative use for the energy can be found.

Water pumping: Solar PV water pumping systems are used for irrigation and drinking water in India. Most of the pumps are fitted with a 2.0-3.7 kW motor that receives energy from a 4.8 kWp PV array. The 3.7 kW systems can deliver about 124,000 liters of water per day from a total of 50 meters of suction head and 70 meters of dynamic head. By 30 August 2016, a total of 120,000 solar PV water pumping systems had been installed around the world. Energy storage in the form of stored water is better for solar water pumps than energy storage in batteries, because no intermediate transformation of one form of energy into another is needed. The most common pump mechanisms used are centrifugal pumps, multistage pumps, borehole pumps, and helical pumps. Fluid-dynamics concepts such as pressure versus head, pump heads, pump curves, system curves, and net suction head are important for the successful design and deployment of solar-powered pumps (a rough sizing sketch appears at the end of this entry).

Oil and gas: To combat negative publicity related to the environmental impacts of fossil fuels, including fracking, the oil and gas industry is embracing solar-powered pumping systems. Many oil and gas wells require the accurate injection (metering) of various chemicals under pressure to sustain their operation and to improve extraction rates. Historically, these chemical injection pumps (CIPs) have been driven by gas reciprocating motors using the pressure of the well's gas and exhausting the raw gas into the atmosphere. Solar-powered electrical pumps (solar CIPs) can reduce these greenhouse gas emissions. Solar arrays (PV cells) not only provide a sustainable power source for the CIPs, but can also provide electricity to run remote SCADA-type diagnostics, with remote control and satellite/cell communications from very remote locations to a desktop or notebook monitoring computer.

Stirling engine: Instead of generating electricity to turn a motor, sunlight can be concentrated on the heat exchanger of a Stirling engine and used to drive a pump mechanically. This dispenses with the cost of solar panels and electrical equipment. In some cases, the Stirling engine may be suitable for local fabrication, eliminating the difficulty of importing equipment. One form of Stirling engine is the fluidyne engine, which operates directly on the pumped fluid as a piston. Fluidyne solar pumps have been studied since 1987. At least one manufacturer has conducted tests with a Stirling solar-powered pump.
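As a rough illustration of the sizing concepts mentioned above, a back-of-the-envelope PV array estimate can be built from the standard hydraulic-power relation P = ρgQH. All efficiency, sun-hour, and derating figures below are assumptions for the sketch, not vendor data or values from the text:

```python
# Back-of-the-envelope PV sizing for a solar water pump.
# Hydraulic power: P = rho * g * Q * H (water density * gravity * flow * head).
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pv_array_watts(flow_m3_per_day: float, total_head_m: float,
                   pumping_hours: float = 8.0,    # assumed daily pumping window
                   pump_efficiency: float = 0.6,  # assumed wire-to-water efficiency
                   derating: float = 0.9) -> float:  # assumed PV derating factor
    """Rough PV array size (watts peak) to deliver a daily water volume."""
    flow_m3_per_s = flow_m3_per_day / (pumping_hours * 3600.0)
    hydraulic_w = RHO * G * flow_m3_per_s * total_head_m
    electrical_w = hydraulic_w / pump_efficiency
    return electrical_w / derating

# Example: 124 m^3/day against 70 m of dynamic head (figures from the text).
print(round(pv_array_watts(124.0, 70.0)))  # ~5475 W, i.e. ~5.5 kWp
```

Under these assumed efficiencies the estimate lands in the same ballpark as the 4.8 kWp arrays cited above, which is all a first-pass sizing like this is meant to do; real designs use pump curves and measured site insolation.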
**Propallylonal** Propallylonal: Propallylonal (trade names Nostal, Quietal, Ibomal) is a barbiturate derivative invented in the 1920s. It has sedative, hypnotic and anticonvulsant properties, and is still rarely prescribed as a sleeping medication in some Eastern European countries.
**The Proxmire Amendment** The Proxmire Amendment: The Proxmire Amendments were a series of legislative provisions that prohibit the Food and Drug Administration from monitoring and limiting the potency of vitamins and minerals found in dietary supplements. The Proxmire Amendment also established that food supplements could not be classified as drugs, making their sale possible without a prescription from a doctor. According to one study, "dietary supplements fall into the following categories: vitamins, minerals, herbs or other botanicals, amino acids, animal-derived products, hormones and hormone analogs, enzymes, and concentrates, metabolites, constituents, or extracts of these." Anyone wishing to purchase them may use as much or as little as they desire. Dietary supplements can be used to increase productivity, treat illness, support mental health in conditions such as depression and anxiety, enhance mental abilities, build muscle, or lose weight, among many other uses. William Proxmire, a senator for Wisconsin, was instrumental in the passage of the Proxmire Amendment. The Proxmire Amendment is also known as the Rogers-Proxmire Amendment of 1976 and the Vitamins and Minerals Amendments. This amendment became section 411 of the Federal Food, Drug, and Cosmetic Act.

Supplement Definition and Background: Dietary supplements are vitamins, minerals, proteins, herbs, enzymes and fish oils that people can consume to ensure that their bodies get enough nutrients to be healthy. They are often used as replacements for, or even alternative treatments to, conventional treatment of illness. Because of the ingredients in some dietary supplements, they may be classified as food, since these supplements are used for adding taste, for aromatic purposes, or for adding nutrition to other food. As a result of being classified as food, they do not require Food and Drug Administration approval before being sold on the market. Plants and herbs were used as medicines even in ancient times. After patent medicines became popular in the 1800s and demand grew, more companies emerged making false claims for "medications" with secret ingredients not released to the public. Such misinformation continues in the false advertising of the effects and purposes of dietary supplements. The public was, and in some cases still is, ignorant of the dangers and potentially harmful effects of megadosing or over-consuming specific vitamins and minerals. Much confusion stems from the fact that dietary and vitamin supplements are not considered drugs, but they are also not considered ordinary food, thus exempting them from regulation and approval by the Food and Drug Administration. There are some cases when taking supplements may be necessary for health reasons, for example during pregnancy, or when a person is experiencing a severe deficiency of a specific vitamin or mineral. Before consuming dietary supplements, however, one should always consult a doctor to ensure safe use and effects. When supplements are taken unnecessarily or in large amounts, the result can be toxic reactions, sickness, and in some cases death. Many people are unaware of the potential dangers of dietary supplements because they are often marketed as harmless and helpful.

History: The dietary supplement industry has accused the Food and Drug Administration of regulatory bias. In 1976, the Food and Drug Administration attempted to restrict the purchase and contents of dietary supplements.
The Food and Drug Administration attempted to use evidence that certain nutrients could be unhealthy if taken in unregulated amounts, but American citizens rebutted with a campaign based largely on letter-writing. The campaign resulted in the Food and Drug Administration's proposed restrictions being withdrawn. The resulting legislation established that the Food and Drug Administration cannot require a prescription from a doctor to buy supplements, and that the contents of supplements cannot be regulated by the government.

Public Opinion: As of 1994, not quite half of Americans took dietary supplements regularly. J. B. Cordaro, president of the Council for Responsible Nutrition, said about the Proxmire Amendment, "Rogers-Proxmire meant the survival of our industry... Without that, the Food and Drug Administration...could have crippled us." Public opinion was that Food and Drug Administration restriction of the supplement industry would have infringed American citizens' rights to their personal healthcare. Some dieticians warn of the dangers of unregulated supplements, saying that they are misleadingly marketed and are actually not as useful as claimed.

Lasting Effects: The Proxmire Amendment barred the Food and Drug Administration from regulating the potency of vitamin and mineral supplements. The mineral and vitamin supplement industry continued to grow in the 1980s, as did reports of illnesses and deaths due to the overconsumption of specific vitamins, minerals, and supplements. Because of the passing of the Proxmire Amendment and the Dietary Supplement Health and Education Act, the dietary supplement industry has had a large increase in revenue and sales, to around $30 billion. By 2015, more than 85,000 supplement products were being sold in stores that did not improve health in any way. Manufacturers of dietary supplements do not have to give full background and overall safety information to the Food and Drug Administration before releasing their products. The Food and Drug Administration has no choice but to try to determine the health risks and benefits of products from the information companies release to the public and from the experiments it can conduct in its own laboratories. As part of the Proxmire Amendment, the Food and Drug Administration does not schedule routine inspections of companies that produce dietary supplements. Instead, it waits until a consumer files a complaint and then oversees an investigation into that specific supplement's risks. From the 1960s to the present, the Food and Drug Administration's relationship with American consumers has changed drastically. In the 1960s, there were not as many options and opinions regarding things such as food, medicine, and hygiene products, among the other things available to purchase in stores. The Food and Drug Administration used to see the consumer as passive and relatively uninformed about their purchases; it was common for consumers to trust what they read without doing any research. This made it necessary to put regulations in place for many things, including dietary supplements.

Current Legislation: After the 1976 Proxmire Amendment was put in place, the Food and Drug Administration continued to push for the regulation of minerals and dietary supplements. In 1993, an attempt was made to lay more restrictions and rules on dietary supplements under another name.
The Food and Drug Administration attempted to call them unapproved food additives and drugs in the hope that it would then be able to regulate them in supplements. However, the public and the media were not receptive to this new labeling of supplements. In response to the pushback from the public, the government put the Dietary Supplement Health and Education Act in place. The Dietary Supplement Health and Education Act provided more regulations on dietary supplements, thereby further limiting the Food and Drug Administration's ability to regulate the ingredients and risks of these products. The Dietary Supplement Health and Education Act required supplements that contained new ingredients to be marked so that consumers were aware. Manufacturers were required to give the Food and Drug Administration the background of why the new ingredients they were adding were deemed safe for public use. The companies, however, could still promote their products before giving the Food and Drug Administration that information. The Dietary Supplement Health and Education Act made it easier for companies, such as J. R. Carlson's Laboratories, to better educate consumers considering buying their products in an effort to improve their personal health. The Proxmire Amendments made it so that the Food and Drug Administration was unable to regulate the contents of supplements, but the Dietary Supplement Health and Education Act modified that and provided guidelines for labeling. However, if a dietary supplement claims to have the same benefits as a drug, it is required to be verified and to go through the same process as a drug to gain market approval. The Food and Drug Administration considers moderate amounts of vitamins and minerals to be generally safe without premarket approval. The information put on safety labels is not the only thing the Food and Drug Administration needs to consider: the composition and quality of products can, in some cases, be more consequential for health than the inherent safety of the listed ingredients. The Food and Drug Administration is restricted from that information, however, and cannot be entirely sure of the composition of certain products. In summary, the Proxmire Amendment and the Dietary Supplement Health and Education Act are still in place; therefore, the Food and Drug Administration's ability to regulate dietary and mineral supplements remains minimal and restricted.
**Pulser pump** Pulser pump: A pulser pump is a gas lift device that uses gravity to pump water to a higher elevation. It has no moving parts.

Operation: A pulser pump makes use of water that flows through pipes and an air chamber from an upper reservoir to a lower reservoir. The intake is a trompe, which uses the water flow to pump air down to a separation chamber; air trapped in the chamber then drives an airlift pump. The top of the pipe that connects the upper reservoir to the air chamber is positioned just below the water surface. As the water drops down the pipe, air is sucked down with it. The air forms a "bubble" near the roof of the air chamber. A narrow riser pipe extends from the air chamber up to the higher elevation to which the water will be pumped.

Operation: Initially the water level will be near the roof of the air chamber. As air accumulates, pressure builds, which pushes water up into the riser pipe. At some point the air bubble will extend below the bottom of the riser pipe, allowing some of the air to escape through the riser and pushing the water already in the pipe up with it. As the air escapes, the water level in the air chamber rises again. The alternating pressure build-up and escape causes a pulsing effect, hence the name: pulser pump.

Operation: The maximum air pressure that can accumulate depends on the height of the water column between the air chamber and the lower reservoir; the deeper the air chamber is positioned, the higher the elevation to which the water can be pumped (an idealized pressure balance is sketched in code at the end of this entry). The depth at which the air chamber can be positioned is limited by the depth to which the flowing water can pull air from the surface of the upper reservoir down to the chamber. This depth partially depends on the speed of the water, which in turn depends on the difference in height between the upper and lower reservoirs.

History: Brian White, a stonemason by profession, claims to have invented the pulser pump in 1987, and put the idea in the public domain. However, Charles H. Taylor invented the hydraulic air compressor before 1910 while living in Montreal, and the working principle of the hydraulic air compressor and the pulser pump is exactly the same. The difference lies in the purpose: the compressor exists to generate compressed air, and expelling the water up to 30 meters high serves only to prevent potentially damaging over-pressure, whereas the primary purpose of the pulser pump is to use the air pressure to expel the water to a higher elevation.
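A minimal sketch of the pressure balance described above, under idealized assumptions (no friction losses, full air entrainment; the function and variable names are illustrative):

```python
# Idealized pressure balance for a pulser pump. The trapped air can reach at
# most the hydrostatic pressure of the water column between the lower
# reservoir's surface and the air chamber; that pressure in turn caps how
# high the riser can push water. Real pumps fall short of this bound.
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def chamber_pressure_pa(chamber_depth_m: float) -> float:
    """Maximum gauge pressure of the trapped air bubble (hydrostatic)."""
    return RHO * G * chamber_depth_m

def max_lift_m(chamber_depth_m: float) -> float:
    """Ideal upper bound on lift: the pressure expressed as meters of water."""
    return chamber_pressure_pa(chamber_depth_m) / (RHO * G)

# A chamber 4 m below the lower reservoir's surface can, ideally, push water
# about 4 m up the riser; losses reduce this in practice.
print(chamber_pressure_pa(4.0))  # ~39240 Pa gauge
print(max_lift_m(4.0))           # 4.0 m, the ideal bound
```

In this idealized balance the lift simply equals the driving column height, which is the point of the text's remark that a deeper chamber allows a higher pumping elevation.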
**Cord circuit** Cord circuit: In telecommunication, a cord circuit is a switchboard circuit in which a plug-terminated cord is used to establish connections manually between user lines or between trunks and user lines. A number of cord circuits are furnished as part of the switchboard position equipment. The cords may be referred to as front cord and rear cord, or trunk cord and station cord. In modern cordless switchboards, the cord-circuit function is switch-operated and may be programmable.

Cord circuit: In early and mid-20th-century telephone exchanges, this supervisory function was performed by a relay set known variously as a junctor circuit or district junctor. Later designs made it a function of the trunk circuit or absorbed it into software.
**Unvanquished (video game)** Unvanquished (video game): Unvanquished is a free and open-source video game. It is a multiplayer first-person shooter and real-time strategy game in which Humans and Aliens fight for domination.

Gameplay: Players fight on an alien or a human team with, respectively, melee and conventional ballistic weaponry. The aim of the game is to destroy the enemy team and the structures that keep them alive, as well as to ensure that one's own team's bases and expansions are maintained. Players earn resources for themselves and their team through aggression. Commenting on gameplay, Lifewire noted: "One particularly fun aspect of Unvanquished is that as insects, players can crawl on the walls and ceilings, adding a new, though perhaps somewhat disorienting, take on game physics".

Development: Unvanquished is a spiritual successor to Tremulous. The gameplay and game resources are under the CC BY-SA 2.5 Creative Commons license, whilst the Daemon engine is under the GPLv3. Development began in the summer of 2011 on SourceForge, with the first alpha version released on February 29, 2012. The game moved to GitHub in 2015. Unvanquished is developed by a team of volunteers who used to release a new alpha on the first Sunday of every month; since the project reached a new stage of development, betas are released less frequently.

Engine: Unvanquished uses the Dæmon engine, born from a merge of the Wolfenstein: Enemy Territory engine (id Tech 3) and the XreaL engine, a merge that was initially named OpenWolf. The Dæmon engine is a fork of an earlier version of the OpenWolf engine; while developing the engine, the Unvanquished team uploaded a clean copy of the source code, dropping the original commit history, and claimed the project as their own. Its development now proceeds along its own path, separate from its predecessors. In 2015, with version 0.42, the Unvanquished developers managed to separate the engine code from the game code by teaming up with developers of Xonotic.

Reception: Michael Larabel of Phoronix.com praised Unvanquished's graphics in July 2012, while the game was still in alpha. Lifewire praised the insect mechanic as an interesting twist, as well as the ease of modding (referring to the level editor). Softpedia reviewed version 0.49 of the game in March 2016 and gave it 3.5 stars. Between 2011 and June 2017, the game was downloaded over 1.3 million times from SourceForge alone.
**Modified Morlet wavelet** Modified Morlet wavelet: The Modified Mexican hat, Modified Morlet and Dark soliton or Darklet wavelets are derived from hyperbolic secant (sech, bright soliton) and hyperbolic tangent (tanh, dark soliton) pulses. These functions are derived intuitively from the solutions of the nonlinear Schrödinger equation in the anomalous and normal dispersion regimes, in a similar fashion to the way the Morlet and Mexican hat wavelets are derived.

Modified Morlet wavelet: The modified Morlet wavelet is defined as $\psi(t) = \cos(\omega_0 t)\,\operatorname{sech}(t)$.
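A minimal numerical sketch of this definition follows; the grid and the choice ω0 = 5 are illustrative, and normalization conventions vary between authors:

```python
import numpy as np

# Modified Morlet wavelet as defined above: psi(t) = cos(omega_0 * t) * sech(t).
def modified_morlet(t: np.ndarray, omega0: float = 5.0) -> np.ndarray:
    return np.cos(omega0 * t) / np.cosh(t)   # sech(t) = 1 / cosh(t)

t = np.linspace(-8.0, 8.0, 1024)
psi = modified_morlet(t)
print(psi[512])  # value near t = 0, where the sech envelope peaks at 1
```

The sech envelope plays the role the Gaussian plays in the standard Morlet wavelet, consistent with the bright-soliton derivation described above.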
**International Standard Name Identifier** International Standard Name Identifier: The International Standard Name Identifier (ISNI) is an identifier system for uniquely identifying the public identities of contributors to media content such as books, television programmes, and newspaper articles. Such an identifier consists of 16 digits, optionally displayed as divided into four blocks. ISNI can be used to disambiguate named entities that might otherwise be confused, and links the data about names that are collected and used in all sectors of the media industries. It was developed under the auspices of the International Organization for Standardization (ISO) as Draft International Standard 27729; the valid standard was published on 15 March 2012. ISO technical committee 46, subcommittee 9 (TC 46/SC 9) is responsible for the development of the standard.

ISNI format: The FAQ of the isni.org website states, "An ISNI is made up of 16 digits, the last character being a check character." An ISNI consists of 15 decimal digits followed by a check character, which may be either a decimal digit or the character "X". The check character is calculated from the preceding 15 digits using the ISO/IEC 7064 MOD 11-2 algorithm (a code sketch of this computation appears after the ORCID paragraph below).

ISNI format: Format without space:
MARC: it was proposed to store the ISNI without spaces, e.g. (isni)1234567899999799
isni.org URLs: no spaces, e.g. http://www.isni.org/isni/000000012146438X (old), https://isni.org/isni/000000012146438X (current), https://isni.org/000000012146438X (alternative)
viaf.org URLs: https://viaf.org/viaf/sourceID/ISNI%7C000000012146438X and https://viaf.org/processed/ISNI%7C000000012146438X; the data dumps contain it in the form ISNI|000000012146438X
Format with space: in display, an ISNI is frequently shown with spaces, for example on isni.org and viaf.org.

Uses of an ISNI: The ISNI allows a single identity (such as an author's pseudonym or the imprint used by a publisher) to be identified using a unique number. This unique number can then be linked to any of the numerous other identifiers that are used across the media industries to identify names and other forms of identity.

Uses of an ISNI: An example of the use of such a number is the identification of a musical performer who is also a writer both of music and of poems. While they might be identified in various databases using numerous private and public identification systems, under the ISNI system they would have a single linking ISNI record. The many different databases could then exchange data about that particular identity without resorting to messy methods such as comparing text strings. An often-quoted example in the English-language world is the difficulty faced when identifying 'John Smith' in a database. While there may be many records for 'John Smith', it is not always clear which record refers to the specific 'John Smith' that is required.

Uses of an ISNI: If an author has published under several different names or pseudonyms, each such name will receive its own ISNI. ISNI can be used by libraries and archives when sharing catalogue information, for more precise searching for information online and in databases, and it can aid the management of rights across national borders and in the digital environment.

ORCID: ORCID (Open Researcher and Contributor ID) identifiers consist of a reserved block of ISNI identifiers for scholarly researchers, administered by a separate organisation. Individual researchers can create and claim their own ORCID identifier. The two organisations coordinate their efforts.
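A minimal sketch of the ISO/IEC 7064 MOD 11-2 check-character computation described under "ISNI format" above; the function name is illustrative. Since ORCID identifiers occupy a reserved ISNI block, the same scheme applies to them:

```python
# ISO/IEC 7064 MOD 11-2 check character over the first 15 digits of an ISNI.
# The check character is '0'-'9' or 'X' (X standing for the value 10).
def isni_check_character(first15: str) -> str:
    total = 0
    for ch in first15:
        total = (total + int(ch)) * 2
    remainder = total % 11
    result = (12 - remainder) % 11
    return "X" if result == 10 else str(result)

# Example using the identifier shown above, 0000 0001 2146 438X:
print(isni_check_character("000000012146438"))  # -> 'X', matching the ISNI
```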
Organisations involved in the management: ISNI Registration Authority. According to ISO, the Registration Authority for ISO 27729:2012 is the "ISNI International Agency". It is located in London (c/o EDItEUR) and is incorporated under the Companies Act 2006 as a private company limited by guarantee. The 'International Agency' is commonly known as the ISNI-IA. This UK-registered, not-for-profit company was founded by a consortium of organisations consisting of the Confédération Internationale des Sociétés d'Auteurs et Compositeurs (CISAC), the Conference of European National Librarians (CENL), the International Federation of Reproduction Rights Organisations (IFRRO), the International Performers Database Association (IPDA), the Online Computer Library Center (OCLC) and ProQuest. It is managed by directors nominated from these organisations and, in the case of CENL, by representatives of the Bibliothèque nationale de France and the British Library.

Organisations involved in the management: ISNI Registration Agencies. A registration agency provides the interface between ISNI applicants and the ISNI Assignment Agency.

Organisations involved in the management: In 2018, YouTube became an ISNI registry, and announced its intention to begin creating ISNI IDs for the musicians whose videos it features. ISNI anticipates the number of ISNI IDs "going up by perhaps 3-5 million over the next couple of years" as a result. In 2020, Sound Credit, together with ISNI, announced that music industry ISNI registrations were free and automated. The free registration system is part of Sound Credit user profile creation, used by its larger system for music crediting. It includes an automated search to avoid duplicate ISNIs and a certificate generated by the Sound Credit registration system to officiate newly registered ISNIs.

Organisations involved in the management: ISNI members (ISNI-IA members) as of 11 July 2018:
ABES (French Bibliographic Agency for Higher Education)
Brill Publishers
CEDRO (Centro Español de Derechos Reprográficos)
CDR (Centrale Discotheek Rotterdam)
Copyrus
FCCN
French National Archives (Archives nationales de France)
Harvard University
Iconoclaste
Irish Copyright Licensing Agency (ICLA)
ISSN International Centre
La Trobe University
Library of Congress
MacOdrum Library, Carleton University
National Library of Finland
National Library of New Zealand
National Library of Norway
National Library of Sweden (Kungliga Biblioteket)
Publishers' Licensing Services
UNSW Library

Copyright: A subset of the data is available under CC0.

ISNI assignment: ISNI-IA uses an assignment system comprising a user interface, data schema, disambiguation algorithms, and a database that meets the requirements of the ISO standard, while also using existing technology where possible. The system is based primarily on the Virtual International Authority File (VIAF) service, which has been developed by OCLC for use in the aggregation of library catalogues.

ISNI assignment: Access to the assignment system and database, and to the numbers that are generated as the output of the process, is controlled by independent bodies known as 'registration agencies'. These registration agencies deal directly with customers, ensuring that data is provided in appropriate formats and recompensing the ISNI-IA for the cost of maintaining the assignment system. Registration agencies are appointed by ISNI-IA but are managed and funded independently.
ISNI coverage: ISNI coverage counts are reported in millions of identities of all types, millions of people, millions of researchers (also included in the people count), and organisations.
**Motorized bicycle** Motorized bicycle: A motorized bicycle is a bicycle with an attached motor or engine and transmission, used either to power the vehicle unassisted or to assist with pedalling. Since it sometimes retains both pedals and a discrete connected drive for rider-powered propulsion, the motorized bicycle is in technical terms a true bicycle, albeit a power-assisted one. Typically such machines are incapable of speeds above 52 km/h (32 mph); however, in recent years larger motors have been built, allowing bikes to reach speeds upwards of 72 km/h (45 mph).

Motorized bicycle: Powered by a variety of engine types and designs, the motorized bicycle formed the prototype for what would later become the motor-driven cycle.

Terminology: Strictly, the term motorized bicycle refers to a bicycle combining pedal power with internal combustion engine power. However, the term can also serve as an umbrella category for bicycles using power sources besides pedalling. Electric bicycles technically fall into the category of motorized bicycles, but instead of combining pedalling with an internal combustion engine, they are driven by electric motors running from batteries in combination with the pedals. Mopeds are also close to motorized bicycles, since they function in the same way but have engines of less than 50 cc (3.1 cu in).

Terminology: The term motorized bicycle should not be confused with motorcycle, since this type of vehicle uses a combination of pedal power and engine power, whereas motorcycles are powered purely by either an internal combustion engine or an electric motor.

Design and usage: Motorized bicycles have utilized all varieties of engines, from internal-combustion (IC) two-stroke and four-stroke gasoline engines to electric, diesel, or even steam propulsion. Most motorized bicycles are based on or derived from standard general-purpose bicycle frame designs and technologies, although exceptions abound. In addition, modifications to a standard bicycle frame to support motorization may be extensive.

Design and usage: The earliest motorized bicycles were ordinary utility bicycles fitted with an add-on motor and transmission to assist normal pedal propulsion, and it is this form that principally distinguishes the motorized bicycle from a moped or motorcycle. In a day when gasoline engine and transmission designs were in their infancy and power-to-weight ratios were low, a dual-purpose propulsion system seemed particularly advantageous. As time went on, pedal propulsion was increasingly replaced by constant use of a two- or four-stroke gasoline engine. Nevertheless, the concept of using motor assist for the ordinary bicycle has persisted, and it has periodically resurfaced over the years, particularly in times of austerity or fuel shortages. In countries where automobiles and/or fuels are prohibitively expensive, the motorized bicycle has enjoyed continued popularity as a primary mode of transportation. The design of the motorized bicycle or motorbike varies widely according to intended use. Some motorized bicycles are powerful enough to be self-propelled, without use of the pedals. A development of the motorized bicycle is the moped, which commonly has only a vestigial pedal drive fitted primarily to satisfy legal requirements, suitable only for starting the engine or for emergency use. The alternative design philosophy to the moped is the so-called motor-assist or pedal-assist bicycle.
These machines utilize the pedals as the dominant form of propulsion, with the motor used only to give extra assistance when needed for hills or long journeys. History: The two-wheeled pedal-powered bicycle was first conceived in Paris in the 1860s. By 1888, John Dunlop's pneumatic tire and the chain drive made possible the safety bicycle, giving the bicycle its modern form. History: The origins of the motorized bicycle or motorbike can be traced back to the latter part of the 19th century, when experimenters began attaching steam engines to bicycles, tricycles, and quadracycles. The first true motorized bicycle is generally considered to be the French Michaux-Perreaux steam velocipede of 1868. The Michaux-Perreaux was followed by the American Roper steam velocipede of 1869, built by Sylvester H. Roper of Roxbury, Massachusetts. Roper demonstrated his machine at fairs and circuses in the eastern United States in 1867, and built a total of 10 examples. These early attempts at propelling a bicycle by means other than the human body were not successful, either practically or commercially. It was not until the 1890s, with the advent of the gasoline-powered internal combustion engine (ICE), that the motorized bicycle could be considered a practical machine. One of the first gas motor-assisted bicycle designs was the Millet motorcycle developed by Félix Millet in France in 1892. Millet's designs had both pedals and a fixed-crankshaft radial engine built into the back wheel. History: In 1896, E. R. Thomas of Buffalo, New York, began selling gasoline engine kits for propelling ordinary bicycles. After forming the Thomas Motor Company, he began selling complete motor-assisted bicycles under the name Auto-Bi. The Auto-Bi is generally considered to be the first production motorized bicycle made in the United States. The 1900 Singer Motor Wheel was a wheel incorporating a small ICE powerplant that could be substituted for the front wheel of a bicycle. A later design, the 1914 Smith Motor Wheel, was attached to the rear of a bicycle by means of an outrigger arm, a design later taken up by Briggs & Stratton. History: In Belgium, the Minerva company, later known for luxury cars, started out manufacturing standard safety bicycles in 1897, before expanding into light cars and "motocyclettes" from 1900. They produced lightweight clip-on engines that mounted below the front down tube, specifically for Minerva bicycles, but also available in kit form suitable for almost any bicycle. The engine drove a belt turning a large gear wheel attached to the rear wheel on the opposite side from the chain. By 1901 the kit engine was a 211 cc (12.9 cu in) unit developing 1.5 hp (1.1 kW; 1.5 PS), comfortably cruising at 30 km/h (19 mph) at 1,500 rpm, capable of a top speed of 50 km/h (31 mph), and returning fuel consumption in the range of 3 L/100 km (94 mpg-imp; 78 mpg-US). These kits were exported around the world to countries including the United Kingdom, France, Germany, the Netherlands, Australia, and other British territories of the time. As engine power increased, frame ruptures became increasingly common, and by 1903 Minerva had developed an in-frame design with the engine mounted above the bottom bracket, while still also offering the clip-on kit.
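The mpg equivalents quoted for the Minerva kit follow from straightforward unit conversion. As a quick illustrative cross-check (the conversion constants are standard; nothing here is specific to Minerva):

```python
# Cross-check the quoted Minerva consumption of 3 L/100 km against
# the figures above (94 mpg-imp; 78 mpg-US).
L_PER_100KM = 3.0
KM_PER_MILE = 1.609344            # kilometres per statute mile
LITRES_PER_US_GALLON = 3.785411784
LITRES_PER_IMP_GALLON = 4.54609

miles_per_litre = (100.0 / L_PER_100KM) / KM_PER_MILE
print(round(miles_per_litre * LITRES_PER_US_GALLON))   # 78 (mpg-US)
print(round(miles_per_litre * LITRES_PER_IMP_GALLON))  # 94 (mpg-imp)
```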
From 1904 Minerva began focusing more on car production, and while development and production of the Minerva motorized bicycles continued through to about 1909, they became an increasingly less significant part of the company. In England, the Phelon & Rayner motorized bicycle was introduced in 1901, and was sold through 1903. The original Phelon & Rayner machine used a 260 cc (16 cu in), 1.75 horsepower (1 kW; 2 PS) gasoline engine mounted to a standard 28-inch bicycle frame. History: In the United States, the California Motor Company was formed in 1901 to sell complete gasoline-engine motorbikes in San Francisco, Oakland, and San Jose. The company began with a 200 cc single-cylinder, 1.5-horsepower, four-stroke engine designed by R.C. Marks. Mounted to a standard bicycle frame, the California could reach speeds of approximately 25 mph (40 km/h). The California weighed around 75-80 pounds, and featured a leading-link front fork, a leather spring saddle, a front Duck roller brake, and an Atherton rear coaster brake. A leather belt drive directly connected the engine output shaft to the rear wheel. During the summer of 1903, George A. Wyman rode a 200 cc (12 cu in), 1.5 hp (1 kW; 2 PS) California from San Francisco to New York City, becoming the first person to cross the North American continent aboard a motor vehicle. As early as 1903, motorized bicycles were being fitted with larger and heavier loop frames designed specifically to accommodate larger displacement engines, which produced higher speeds. These new motorbike frame designs soon incorporated a new riding position that no longer centered the rider over the pedals, but instead moved the rider's feet forward, where they rested on pegs or platforms. The new riding position was designed to increase rider comfort and control when using the motor for propulsion, and soon owners began relying on the gasoline motor for all but emergency use. Front suspension and (on some machines) rear suspension increased control at high speeds. By 1915, some manufacturers were omitting pedal propulsion entirely, resulting in the introduction of the first true modern motorcycle. History: At the same time, purpose-built motorbikes like the Derny and VéloSoleX, with stronger frames and sometimes with only token ability to be wholly human-powered, were introduced in France. Many years later, manufacturers would re-introduce this concept as the moped, a small motorcycle fitted with pedals that can be used as a starting aid but which cannot, practically, be ridden under pedal power alone. History: In France, the gasoline-powered motorized bicycle, known popularly as the vélomoteur or vélomoto, was popular during the 1930s, and continued to be widely sold in the early postwar years as a means of transportation during a period of gasoline shortages and limited automobile production. History: In the 1930s, the United Kingdom and its former colonies also developed "clip-on" motors for bicycles (35 to 49 cc), followed by the "Autocycle" with a purpose-built frame incorporating pedals and a two-stroke engine (often a 98 cc Villiers engine), but without a gearbox (e.g. the Malvern Star). Autocycle manufacturers were well established in countries such as Britain and Australia before World War II. History: In 1939, the American bolt-on Whizzer gas-engined bicycle kit was introduced, utilizing a 138 cc side-valve four-stroke engine. The first Whizzer used a friction drive, but this was soon replaced with a belt drive.
Despite some initial engine reliability issues, the Whizzer enjoyed modest popularity during World War II due to fuel and automobile shortages, and was used by war plant workers and others with priority for transportation. After the war, the Whizzer became popular with youth who desired faster speeds from their heavy cruiser-framed Schwinn bicycles. In 1949, the company introduced a complete production bike, the Pacemaker. Sales of the Whizzer conversion kits continued until 1962. In the United Kingdom, the motorized bicycle saw a resurgence of popularity, and such bolt-on motors as the Cyclaid and the Cyclemaster motor wheel saw brief periods of immense popularity. The Cyclemaster, which was a hub motor that could be fitted to an ordinary bike, started at 25 cc (painted black), but later the size went up to 32 cc (painted grey). History: Elsewhere in Europe the motorized bicycle continued to be popular, particularly in France and Italy. An Italian manufacturer, Vincenti Piatti, designed a 50 cc engine for driving portable lathes, and this was also used to power a bicycle frame in the form of the Mini Motore. Piatti later licensed the design to Trojan for production in Britain as the Trojan Minimotor. In West Germany, a compression-ignition (diesel) engine kit using an 18 cc variable-head engine made by Lohmann was produced during the 1950s. In France, where postwar reconstruction, taxes, and fuel shortages limited automobile access, motorized bicycle kits and complete models were produced by a variety of smaller manufacturers, often using a two-stroke gasoline engine mounted above the front wheel. In 1946, production of the very successful French VELOSOLEX commenced, continuing until 1988. The VéloSoleX was a mass-produced motorized bicycle that used a tire roller (friction drive) on the front wheel. After French production ceased, the VELOSOLEX continued to be produced in China and Hungary. An in-wheel gasoline engine was used on the Honda P50 moped, which ceased production in 1968. The velomoteur and motor scooter enjoyed a second renaissance in the 1960s and 1970s as a new generation of youth discovered they could ride a motorized vehicle without need of a driver's license. Other countries had relaxed licensing requirements, e.g. lower age limits for motorized bicycles, which increased their popularity. History: In the USSR, a shortage of vehicles on the market led to a great increase in the production of cheap 50 cc mopeds and engine kits, which reached approximately half a million units per year by the late 1970s. In technical terms these were analogous to pre-World War II German models, with only minimal changes made into the late 1980s. The only clip-on kit engine was the "D" series ("Д-4" through "Д-8Э", designed by Soviet engineer Filip Priboloi), a single-speed, chain-driven, 45 cc two-stroke motor with a manual clutch and a rotary slide valve in the crankshaft, intended for mounting in a classic twin-diamond bicycle frame. To the present day it is still widely produced by some Chinese factories such as Jiangdu in a piston-distribution version, and it retains some popularity even in the United States. During the 1960s, the moped craze arrived in the United States, the United Kingdom, and other countries. Mopeds had been produced for years in France and Italy, but were largely unknown in other countries.
The moped's surge in popularity was motivated by the arrival of new machines produced in Japan by Honda, Yamaha, and other manufacturers, which could be operated without a driver's license and met the authorities' existing regulations with a minimum of effort. The new moped designs were in reality low-powered motorcycles, equipped with pedals largely to meet legal requirements. Most could be pedaled only with difficulty over short distances on level ground. Current trends: Today, motorized bicycles are still being developed, both as complete designs and as add-on motor kits for use on standard bicycles, either by part-time hobbyists or by commercial manufacturers. With the development of new, lighter, and more powerful batteries, electric motors for power assist are increasingly popular, often using hub motors to facilitate after-market conversions. Converting bicycles or tricycles has proven useful for some people with physical disabilities such as knee injury or arthritis. Current trends: In 2003, production of the French gasoline-powered VELOSOLEX ceased in Hungary. However, production continues in China and has restarted in France. In the United States, Velosolex America markets the VELOSOLEX worldwide. Current trends: Currently there are several companies manufacturing aftermarket internal combustion engine (ICE) motor conversion kits for conventional bicycles. These include both four-stroke and two-stroke gasoline engine designs. Among these, Golden Eagle Bike Engines currently produces a rear-engine (rack-mounted) kit using a belt to drive the rear wheel. Staton-Inc., a motorized bicycle manufacturer of long standing, also uses a rack mount with either a tire roller mount (friction drive) or a chain-driven, geared transmission. Other manufacturers produce kits using small two- or four-stroke gas engines mounted in the central portion of the bicycle frame, and incorporating various types of belt- or chain-driven transmissions and final drives. Some of these brands include Jiangdu Flying Horse Gasoline Engine Factory Ltd., EZ Motorbike Company, Inc., Mega Motors Inc., and Grubee Inc. Current trends: Motorized bicycles using electric motors have also re-entered the market. Electrically powered bicycles use batteries, which have a limited capacity and thus a limited range, particularly when large amounts of power are utilized. This design limitation means that the use of the electric motor as an assist to pedal propulsion is more emphasized than is the case with an internal combustion engine. While costly, new types of lithium batteries along with electronic controls now offer users increased power and range while reducing overall weight. Newer electric motor bicycle designs are gaining increasing acceptance, particularly in countries where increasing traffic congestion, aging populations, and concern for the environment have stimulated development and usage. Legal status of internal combustion engine (ICE) powered bicycles: The legal definition and status of motorized bicycles using internal combustion engines varies from nation to nation and, in some cases, depends on local rules and regulations. Australia Current laws in some states/territories allow the use of ICE motorized bicycles, provided they do not produce more than a certain number of watts of power. In Tasmania and the Northern Territory this limit is 200 watts. Legal status of internal combustion engine (ICE) powered bicycles: Queensland, New South Wales and South Australia have banned the use of ICE motorized bicycles.
For example, "In NSW, all petrol-powered bicycles are banned on New South Wales roads and road-related areas such as footpaths, shared paths, cycle ways and cycle paths. The ban, introduced on 1 October 2014, includes bicycles that: have had a petrol-powered engine attached after purchase; were bought with an attached petrol-powered engine; or are powered by any type of internal combustion engine." And "As from the 15 December 2016 internal combustion engines that are fitted to bicycles are not permitted to be used on South Australian roads or road-related areas". In Victoria, ICE bicycles over 200 watts are classed as motorcycles and must be registered as such. Legal status of internal combustion engine (ICE) powered bicycles: The legislation in all states favours electric power-assisted bicycles over ICE-powered bicycles. Canada In Canada, each province has its own authority over motor vehicle and transportation laws, including classification of vehicles used on public roads. Motorized bicycles using an internal combustion engine under 100 cc are generally legally indistinguishable from a bicycle on public roads. Legal status of internal combustion engine (ICE) powered bicycles: An example is Nova Scotia's Motor Vehicle Act, as it applies to motorized bicycles: 2 In this Act, (c) "bicycle" means (i) a vehicle propelled by human power upon which or in which a person may ride and that has two tandem wheels either of which is 350 millimetres or more in diameter or that has four wheels any two of which are 350 millimetres or more in diameter but does not include a wheelchair, or (ii) a vehicle propelled by human and mechanical power that is fitted with pedals that are operable at all times to propel the bicycle, that has the same wheel requirements as set out in subclause (i) and that has an attached motor driven by electricity not producing more than 500 watts or with a piston displacement of not more than 50 cubic centimetres and is incapable of providing further assistance when the vehicle attains a speed of thirty kilometres per hour on level ground; France In France, the laws regulating moped and scooter (cyclomoteur) operation apply as well to the gas-powered motorized bicycle, variously known as a bicyclette motorisée, vélo motorisé, vélomoteur, or vélomoto. Under French law, no person under 14 years of age may operate a gas-powered motorized bicycle (defined as a bicycle with a gasoline motor under 50 cc displacement and capable of a maximum speed of 45 kilometres per hour (28 mph)). All vélomotos or motorbikes must be registered, and all riders without a full driving license must pass a test and receive a certificate (Brevet de Sécurité Routière, or BSR) consisting of a written exam and five hours of practical training, four and a half of which must be on public roads, with a driving school. All operators must carry third-party insurance and wear helmets, and a metal license tab with the owner's name (Plaque de nom) must be attached to the handlebars. Motorized bicycles are not permitted on French motorways, and riders must use cycle paths where provided. Legal status of internal combustion engine (ICE) powered bicycles: Japan In Japan, any vehicle with an internal combustion engine is regarded as a car, motorcycle or motorized bicycle. A motorized bicycle with a gasoline motor under 50 cc has a regulated maximum speed of 30 km/h (18.6 mph). It must be registered, and the rider must have insurance. Riders must be at least 16 years old, hold a license and wear a helmet.
All riders without a full driving license for cars or motorcycles must pass a written test and go through one-day practical training at a driving school. Any two-wheel cycle with a gasoline motor over 50 cc is regarded in law as a motorcycle. It can be ridden by persons aged 18 or over who hold a full driving license for motorcycles, which requires long-term training and a driving exam at a driving school. A two-wheel cycle with a gasoline motor of under 125 cc is not permitted on Japan's controlled-access highways. Legal status of internal combustion engine (ICE) powered bicycles: Motorized bicycles in Japanese law are treated as a 'miniature version of a motorcycle' in many cases, but a motorized bicycle must make a hook turn to the right in some cases at a signalized intersection with more than three lanes (including a left/right-turn lane) in the same direction. A vehicle with a maximum speed of over 20 km/h (12.4 mph) is also required, as a miniature version of a motorcycle, to have similar performance in terms of its brakes, tires, silencer, headlight, license plate lamp, rear reflector, side mirror, horn, tail lamp, brake lamp, direction indicators and speedometer. Hence, a regular bicycle to which an engine or motor has simply been added is illegal. Legal status of internal combustion engine (ICE) powered bicycles: Driving without a suitable license or without insurance is strictly punished by law. Other violations, including parking violations in urban areas, are also severely enforced. Legal status of internal combustion engine (ICE) powered bicycles: Russia Under Russian law, no person under 16 years of age may operate a gas-powered motorized bicycle (defined as a moped with a gasoline motor of under 50 cc displacement and capable of a maximum speed of 50 kilometres per hour (31 mph)). Moped riders must wear a helmet and must keep to the right-hand edge of the road. Velomotors are not subject to registration. Since 2014, moped riders must hold an "M category" license or a license of any higher grade. However, as of 2015 there was no official procedure for obtaining an "M" license, so in practice moped riders remain effectively unregulated. Legal status of internal combustion engine (ICE) powered bicycles: United Kingdom In the United Kingdom, riding a motorised bicycle with a gasoline motor under 50 cc and a maximum design speed of 30 mph is permitted for persons over 16 years old; the rider needs a driving licence, and the bike must be taxed, tested ("MOTed"), and insured. The rider also has to pass Compulsory Basic Training (CBT) before going on the road. All bigger bikes have a minimum age of 17. Legal status of internal combustion engine (ICE) powered bicycles: Purchasers of new-construction ICE motorized bicycles must meet a host of regulatory requirements and pass inspections from the DVLA and MSVA in order to register such machines for operation on public roadways. Owners must obtain an EC-type Motorcycle Single Vehicle Type Approval certification (MSVA), and in order to obtain a DVLA-required insurance certificate, must provide proof from the original seller of the bicycle and engine that both the bicycle and the engine are new and unused. ICE motorized bicycles that are amateur-built and meet MSVA safety and regulatory standards, and which meet the definition of low-powered mopeds (engines of proven output of less than 1 kW (1.34 hp) and capable of 25 km/h (16 mph) or less), face considerably fewer regulatory requirements.
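The UK "low powered moped" carve-out above reduces to two numeric thresholds, which can be expressed directly. A minimal sketch; the function and parameter names are illustrative, not taken from any statute or official guidance:

```python
# Illustrative encoding of the UK low-powered moped thresholds quoted
# above: proven engine output under 1 kW and a top speed of 25 km/h
# or less. Names are hypothetical, not statutory.
def is_low_powered_moped(output_kw: float, top_speed_kmh: float) -> bool:
    return output_kw < 1.0 and top_speed_kmh <= 25.0

print(is_low_powered_moped(0.8, 24.0))  # True: reduced requirements apply
print(is_low_powered_moped(1.2, 24.0))  # False: full regulatory regime
```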
Legal status of internal combustion engine (ICE) powered bicycles: United States Federal law In the United States, federal law governing ICE motorized bicycles is subject to interpretative rulings by the National Highway Traffic Safety Administration (NHTSA) at the United States Department of Transportation. Under current NHTSA rules, a "motor-driven cycle" (a definition that includes a two-wheel vehicle such as a bicycle with an add-on ICE engine of five brake horsepower or less) that has a speed capability of more than 20 miles per hour and/or lacks both a Vehicle Identification Number (VIN) and standard on-road safety equipment such as mirrors, turn signal lamps, side marker lamps, and stop lamps is not considered a "motor vehicle" as defined by DOT/NHTSA regulations, but is instead defined as an off-road vehicle (since the lack of a VIN and on-road equipment indicates that a vehicle was not manufactured primarily for use on public roads). Such off-road vehicles are considered to be neither motor vehicles nor motorcycles, as those terms are defined under federal law. Under present-day NHTSA rules, the final decision as to whether such federally defined off-road vehicles may be legally operated on public roads is determined by the laws of the state in which the vehicle is being operated. Legal status of internal combustion engine (ICE) powered bicycles: State and local laws The legal status of an ICE motorized bicycle in the United States is presently determined by the laws of each state and/or local jurisdiction. Several states allow ICE motorbikes to be operated on roadways without registration, tax, or licensing in the same manner as bicycles, provided certain restrictions are observed. Many state jurisdictions use limits on top speed and/or engine displacement to determine whether ICE motorized bicycles require registration and licensing—sometimes as mopeds, sometimes as motorcycles. Some states prohibit the use of motorbikes on multi-use recreational paths or high-speed, limited-access roadways, while others require additional safety equipment for operation on public roads, such as wearing a helmet. Many United States cities and other local jurisdictions may impose additional restrictions upon ICE motorized bicycles when operated on public streets and roadways. Legal status of electrically powered bicycles: The laws on electric motor-powered bicycles or e-bikes vary considerably according to country. In many nations, a top limit on the power of the electric motor is imposed if the vehicle is to be legally classified and/or taxed as a motorized bicycle. Legal status of electrically powered bicycles: Australia In Australia, an electric power-assisted bicycle does not require registration, provided that it is designed to be propelled primarily by human power (in most but not all states) and has one or more auxiliary propulsion motors attached with a combined output of no more than 200 watts, or 250 watts if it complies with the European Standard for Power Assisted Pedal Cycles (EN 15194). Legal status of electrically powered bicycles: Canada In Canada, eight provinces currently allow the operation of motorized bicycles using low-powered electric motors capable of a maximum speed of 32 km/h under the definition of power-assisted bicycles.
The Province of Ontario introduced a three-year trial ending October 2009 for these bicycles, which are now officially defined as Power-Assisted Bicycles by the Ontario Ministry of Transportation with specific parameters. In accordance with federal law, all power-assisted bicycles, regardless of province, must 1) have a maximum of three wheels; 2) have steering handlebars and pedals; 3) use an electric motor of 500 W output or less for propulsion; 4) not be capable of speeds faster than 32 km/h (20 mph) on level ground using motor power alone; and 5) bear a permanently affixed label from the manufacturer stating in both official languages that the vehicle conforms to the federal definition of a power-assisted bicycle. Power-assisted bicycles are required to use electric motors only, and may not be operated on certain provincial controlled-access highways or where prohibited by municipal law. Age restrictions vary from province to province, but all require an approved helmet. Some versions of e-bikes (e.g., those capable of operating without pedaling) require a driver's license in some provinces and have age restrictions. Vehicle licenses and liability insurance are not required. E-bikes are required to follow the same traffic regulations as regular bicycles. Legal status of electrically powered bicycles: Greece In Greece, bicycles assisted by an electric motor of up to 0.25 kW are explicitly allowed by the traffic regulations and are still considered regular bicycles. Therefore, all laws for bicycles apply to e-bikes too. No license is required to drive them. Japan In Japan, only the following exceptional electric-motored vehicles are regarded in law as regular bicycles or as pedestrians. Pedelecs (electric-motor-assisted bicycles) are regarded as regular bicycles if the maximum power-assist ratio is 2 below 10 km/h (6.2 mph), 2 − (speed in km/h − 10)/7 between 10 and 24 km/h (6.2–14.9 mph), and zero above 24 km/h (14.9 mph). Electric-motored shopping carts, trolley bags or rollators within the size limits of length 120 cm, width 70 cm and height 109 cm, with a maximum speed limit of 6 km/h (3.7 mph), without any sharp protrusions, and with an automatic stop function, are regarded as pedestrians. Legal status of electrically powered bicycles: Electric-motored mobility scooters or wheelchairs within the size limits of length 120 cm, width 70 cm and height 120 cm, with a maximum speed limit of 6 km/h (3.7 mph), without any sharp protrusions, and with an automatic stop function, are likewise regarded as pedestrians. Save for the exceptions above, any vehicle with an electric motor is regarded as a car, motorcycle or motorized bicycle. The other restrictions are the same as those for internal combustion engines, with an electric motor of 600 W corresponding to a 50 cc ICE and a motor of 1 kW to a 125 cc ICE. Notably, Segways are regarded as motorcycles, and ordinary electric bicycles with regular-output motors are regarded as motorized bicycles, so they are illegal to use as-is in Japan. Legal status of electrically powered bicycles: United States In the United States, federal law exempts low-speed electric bicycles from Department of Transportation and NHTSA motor vehicle regulations, and they are regulated under federal law in the same manner as ordinary bicycles.
The Consumer Product Safety Act defines the term low-speed electric bicycle as a two- or three-wheeled vehicle with fully operable pedals and an electric motor of less than 750 watts (1 horsepower), whose maximum speed on a paved level surface, when powered solely by such a motor while ridden by an operator who weighs 170 pounds, is less than 20 mph (15 U.S.C. 2085(b)). At the present time, neither the DOT nor the NHTSA restricts the assembly of e-bikes for use on public roads, although commercially manufactured e-bikes capable of speeds greater than 20 mph are considered motor vehicles and thus subject to DOT and NHTSA safety requirements. Consequently, the laws of the individual state and/or local jurisdiction govern the type, motor wattage, and speed capability of e-bikes used on public roadways (see Electric bicycle laws). As long as the bicycle is capable of pedal propulsion, most U.S. states currently do not distinguish between designs that may be self-propelled by the electric motor and pedal-assist designs in which the electric motor assists pedal propulsion by the rider. Power sources: Historically, internal combustion engine (ICE) designs dominated the motorized bicycle market, and they still do today. Most still use small two-stroke or four-stroke IC engines. Power sources: Power can be applied to the drive wheel in a number of ways: the front or rear wheel may be powered directly by a motor built into the hub (e.g. Singer Motor Wheel, Copenhagen Wheel). This avoids the need to transfer power to the wheel by some other means. The downside of this system is that the original wheel must often be replaced with the wheel containing the hub motor. Power sources: An engine or motor mounted in the frame (called a frame mount), under the frame, or behind the rider (called a rack mount) may drive the rear wheel via a sprocket with a chain or a rubber belt. These arrangements are called "chain drive" and "belt drive" respectively. Besides connecting the engine to a sprocket, the engine can also be connected directly to the crank. This is called "crank drive" or "mid-drive" and also allows incorporating the gears of the bicycle in the motorized system. Most of these bikes and kits are under 50 cc and do not have to be registered in most states (the exception being the Whizzer NE5). Power sources: Power may be transferred to a wheel from a motor mounted directly above it, by bringing a powered roller or rubber belt into contact with the tire. These are called "friction drives". Power sources: Internal combustion Small two-stroke, bolt-on gasoline bike motors and adapter kits for bicycles had a brief surge in popularity in the late 1960s and early 1970s in the United States and Canada. These engine kits were designed or marketed by both small and large companies, including Bike Bug, Tas Spitz, and even Sears, which sold the Free Spirit and Little Devil engine kits. Most of these kits were designed to use lightweight, low-cost two-stroke engines from Japanese manufacturers such as Tanaka. During the late 1990s, the arrival of inexpensive two-stroke engines and chain-drive transmissions from mainland China, designed to mount to bicycle frames, helped spark a new wave of United States consumer interest in motorized bicycles. Increasingly tight United States emissions laws have made it more difficult for traditional two-stroke engines to pass emissions requirements, though Tanaka Inc. has since introduced a clean-burning "Pure Fire" line of two-stroke motors.
Power sources: Increasingly, designers of ICE motorized bicycles and other small-engined off-road vehicles are turning to four-stroke gasoline engines, which consume less fuel and oil, and tend to be quieter while producing fewer emissions. The Honda 49 cc GXH-50, Shandong HuaSheng 49 cc 142F, Robin/Subaru 25 cc, and other small four-stroke engines are commonly used as part of an engine/transmission combination for adaptation to a variety of bicycle frames. Power sources: Electric Electric bicycles have become one of the most popular vehicles used for transportation across the world. Millions have been sold in Asia and Europe. Sales in the United States and Australia have increased sharply since the late 1990s. Hybrid electric/petroleum Bicycles can also use hybrid systems including both internal combustion and electric power sources, either in parallel or in series. The first practical series-hybrid electric/petroleum system used the engine to turn a generator that charged the batteries directly, maintaining or increasing battery charge levels while riding the bicycle. Since these vehicles combine three forms of energy (electrical, petrol as liquid fuel, and human power), they are also known as tribrid electric bicycles. Other power sources Individuals have built bicycles powered by steam and air engines, and there are many known jet-propelled bicycles. No large-scale manufacture of any of these is known (though jet-powered bicycles have been created by hobbyists, as seen in some homemade videos on websites such as Google Video and YouTube). Environmental effects: The environmental effects of motorized bicycles vary according to the power source. Most electric bicycles are considered by some to be zero-emissions vehicles, as they emit no combustion byproducts. However, the environmental effects of electricity generation and power distribution from plants generating power from fossil fuels, as well as of manufacturing and disposing of (limited-life) high-storage-density batteries containing toxic materials, must also be taken into account. Older two-stroke engines, commonly used in motorized bicycles powered by internal combustion engines, often emitted more pollution than automobiles due to partial combustion of the oil included in the fuel, but this is not the case with four-stroke or newer two-stroke motor designs.
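The United States federal definition quoted earlier (15 U.S.C. 2085(b)) is fully quantitative, so its criteria can be stated as a simple predicate. A minimal sketch, with illustrative (non-statutory) names:

```python
# Sketch of the CPSA "low speed electric bicycle" criteria quoted above:
# two or three wheels, fully operable pedals, a motor under 750 W, and
# a top motor-only speed under 20 mph with a 170 lb rider.
def is_low_speed_electric_bicycle(wheels: int,
                                  operable_pedals: bool,
                                  motor_watts: float,
                                  top_speed_mph: float) -> bool:
    return (wheels in (2, 3)
            and operable_pedals
            and motor_watts < 750.0
            and top_speed_mph < 20.0)

print(is_low_speed_electric_bicycle(2, True, 500.0, 19.5))  # True
print(is_low_speed_electric_bicycle(2, True, 750.0, 19.5))  # False
```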
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Molecular symmetry** Molecular symmetry: In chemistry, molecular symmetry describes the symmetry present in molecules and the classification of these molecules according to their symmetry. Molecular symmetry is a fundamental concept in chemistry, as it can be used to predict or explain many of a molecule's chemical properties, such as whether or not it has a dipole moment, as well as its allowed spectroscopic transitions. To do this it is necessary to use group theory. This involves classifying the states of the molecule using the irreducible representations from the character table of the symmetry group of the molecule. Symmetry is useful in the study of molecular orbitals, with applications to the Hückel method, to ligand field theory, and to the Woodward-Hoffmann rules. Many university level textbooks on physical chemistry, quantum chemistry, spectroscopy and inorganic chemistry discuss symmetry. Another framework on a larger scale is the use of crystal systems to describe crystallographic symmetry in bulk materials. There are many techniques for determining the symmetry of a given molecule, including X-ray crystallography and various forms of spectroscopy. Spectroscopic notation is based on symmetry considerations. Point group symmetry concepts: Elements The point group symmetry of a molecule is defined by the presence or absence of 5 types of symmetry element. Symmetry axis: an axis around which a rotation by 360°/n results in a molecule indistinguishable from the original. This is also called an n-fold rotational axis and abbreviated Cn. Examples are the C2 axis in water and the C3 axis in ammonia. A molecule can have more than one symmetry axis; the one with the highest n is called the principal axis, and by convention is aligned with the z-axis in a Cartesian coordinate system. Point group symmetry concepts: Plane of symmetry: a plane of reflection through which an identical copy of the original molecule is generated. This is also called a mirror plane and abbreviated σ (sigma = Greek "s", from the German 'Spiegel' meaning mirror). Water has two of them: one in the plane of the molecule itself and one perpendicular to it. A symmetry plane parallel with the principal axis is dubbed vertical (σv) and one perpendicular to it horizontal (σh). A third type of symmetry plane exists: If a vertical symmetry plane additionally bisects the angle between two 2-fold rotation axes perpendicular to the principal axis, the plane is dubbed dihedral (σd). A symmetry plane can also be identified by its Cartesian orientation, e.g., (xz) or (yz). Point group symmetry concepts: Center of symmetry or inversion center, abbreviated i. A molecule has a center of symmetry when, for any atom in the molecule, an identical atom exists diametrically opposite this center an equal distance from it. In other words, a molecule has a center of symmetry when the points (x,y,z) and (−x,−y,−z) are identical. For example, if there is an oxygen atom in some point (x,y,z), then there is an oxygen atom in the point (−x,−y,−z). There may or may not be an atom at the inversion center itself. Examples are xenon tetrafluoride where the inversion center is at the Xe atom, and benzene (C6H6) where the inversion center is at the center of the ring. Point group symmetry concepts: Rotation-reflection axis: an axis around which a rotation by 360°/n, followed by a reflection in a plane perpendicular to it, leaves the molecule unchanged. Also called an n-fold improper rotation axis, it is abbreviated Sn.
Examples are present in tetrahedral silicon tetrafluoride, with three S4 axes, and the staggered conformation of ethane with one S6 axis. An S1 axis corresponds to a mirror plane σ and an S2 axis is an inversion center i. A molecule which has no Sn axis for any value of n is a chiral molecule. Point group symmetry concepts: Identity, abbreviated to E, from the German 'Einheit' meaning unity. This symmetry element simply consists of no change: every molecule has this symmetry element, which is equivalent to a C1 proper rotation. It must be included in the list of symmetry elements so that they form a mathematical group, whose definition requires inclusion of the identity element. It is so called because it is analogous to multiplying by one (unity). Operations: The five symmetry elements have associated with them five types of symmetry operation, which leave the geometry of the molecule indistinguishable from the starting geometry. They are sometimes distinguished from symmetry elements by a caret or circumflex. Thus, Ĉn is the rotation of a molecule around an axis and Ê is the identity operation. A symmetry element can have more than one symmetry operation associated with it. For example, the C4 axis of the square xenon tetrafluoride (XeF4) molecule is associated with two Ĉ4 rotations in opposite directions (90° and 270°), a Ĉ2 rotation (180°) and Ĉ1 (0° or 360°). Because Ĉ1 is equivalent to Ê, Ŝ1 to σ and Ŝ2 to î, all symmetry operations can be classified as either proper or improper rotations. Operations: For linear molecules, either clockwise or counterclockwise rotation about the molecular axis by any angle Φ is a symmetry operation. Symmetry groups: Groups The symmetry operations of a molecule (or other object) form a group. In mathematics, a group is a set with a binary operation that satisfies the four properties listed below. In a symmetry group, the group elements are the symmetry operations (not the symmetry elements), and the binary combination consists of applying first one symmetry operation and then the other. An example is the sequence of a C4 rotation about the z-axis and a reflection in the xy-plane, denoted σ(xy)C4. By convention the order of operations is from right to left. Symmetry groups: A symmetry group obeys the defining properties of any group. Symmetry groups: Closure property: This means that the group is closed, so that combining two elements produces no new elements. Symmetry operations have this property because a sequence of two operations will produce a third state indistinguishable from the second and therefore from the first, so that the net effect on the molecule is still a symmetry operation. This may be illustrated by means of a composition table. For example, with the point group C3, there are three symmetry operations: rotation by 120°, C3; rotation by 240°, C3²; and rotation by 360°, which is equivalent to identity, E. Symmetry groups: Such a table also illustrates the remaining defining properties: the associative property, the existence of an identity element, and the existence of an inverse for each element. The order of a group is the number of elements in the group. For groups of small order, the group properties can be easily verified by considering its composition table, a table whose rows and columns correspond to elements of the group and whose entries correspond to their products.
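The C3 example can be made concrete by representing the three operations as 2-D rotation matrices and composing them numerically; closure shows up as every product landing back in the set. A minimal sketch (NumPy is used purely for the matrix arithmetic):

```python
import numpy as np

def rot(deg):
    # 2-D matrix for a counterclockwise rotation by deg degrees.
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# The three operations of the point group C3.
ops = {"E": rot(0), "C3": rot(120), "C3^2": rot(240)}

def identify(m):
    # Match a composed matrix back to a group element; if every
    # product is found here, the set is closed under composition.
    for name, op in ops.items():
        if np.allclose(m, op):
            return name
    raise ValueError("set is not closed")

for a in ops:  # print the composition table row by row
    print(a, [identify(ops[a] @ ops[b]) for b in ops])
# E    ['E', 'C3', 'C3^2']
# C3   ['C3', 'C3^2', 'E']
# C3^2 ['C3^2', 'E', 'C3']
```

Each row and column of the printed table contains every element exactly once, which is exactly the composition-table structure described above.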
Symmetry groups: Point groups and permutation-inversion groups The successive application (or composition) of one or more symmetry operations of a molecule has an effect equivalent to that of some single symmetry operation of the molecule. For example, a C2 rotation followed by a σv reflection is seen to be a σv' symmetry operation: σv*C2 = σv'. ("Operation A followed by B to form C" is written BA = C). Moreover, the set of all symmetry operations (including this composition operation) obeys all the properties of a group, given above. So (S,*) is a group, where S is the set of all symmetry operations of some molecule, and * denotes the composition (repeated application) of symmetry operations. Symmetry groups: This group is called the point group of that molecule, because the set of symmetry operations leave at least one point fixed (though for some symmetries an entire axis or an entire plane remains fixed). In other words, a point group is a group that summarises all symmetry operations that all molecules in that category have. The symmetry of a crystal, by contrast, is described by a space group of symmetry operations, which includes translations in space. Symmetry groups: One can determine the symmetry operations of the point group for a particular molecule by considering the geometrical symmetry of its molecular model. However, when one uses a point group to classify molecular states, the operations in it are not to be interpreted in the same way. Instead the operations are interpreted as rotating and/or reflecting the vibronic (vibration-electronic) coordinates and these operations commute with the vibronic Hamiltonian. They are "symmetry operations" for that vibronic Hamiltonian. The point group is used to classify by symmetry the vibronic eigenstates of a rigid molecule. The symmetry classification of the rotational levels, the eigenstates of the full (rotation-vibration-electronic) Hamiltonian, requires the use of the appropriate permutation-inversion group as introduced by Longuet-Higgins. Point groups describe the geometrical symmetry of a molecule whereas permutation-inversion groups describe the energy-invariant symmetry. Symmetry groups: Examples of point groups Assigning each molecule a point group classifies molecules into categories with similar symmetry properties. For example, PCl3, POF3, XeO3, and NH3 all share identical symmetry operations. They all can undergo the identity operation E, two different C3 rotation operations, and three different σv plane reflections without altering their identities, so they are placed in one point group, C3v, with order 6. Similarly, water (H2O) and hydrogen sulfide (H2S) also share identical symmetry operations. They both undergo the identity operation E, one C2 rotation, and two σv reflections without altering their identities, so they are both placed in one point group, C2v, with order 4. This classification system helps scientists to study molecules more efficiently, since chemically related molecules in the same point group tend to exhibit similar bonding schemes, molecular bonding diagrams, and spectroscopic properties. Symmetry groups: Point group symmetry describes the symmetry of a molecule when fixed at its equilibrium configuration in a particular electronic state. It does not allow for tunneling between minima nor for the change in shape that can come about from the centrifugal distortion effects of molecular rotation. 
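The statement that water has exactly the operations E, C2 and two σv can be verified numerically from a molecular model. A minimal sketch, using an idealized geometry (the coordinates below are illustrative) with the C2 axis along z and the molecule lying in the xz-plane:

```python
import numpy as np

# Idealized water geometry: O at the origin, both H atoms in the
# xz-plane (illustrative coordinates, roughly in angstroms).
atoms = ["O", "H", "H"]
coords = np.array([[ 0.00, 0.0,  0.00],
                   [ 0.76, 0.0, -0.59],
                   [-0.76, 0.0, -0.59]])

C2z      = np.diag([-1.0, -1.0, 1.0])  # 180-degree rotation about z
sigma_xz = np.diag([ 1.0, -1.0, 1.0])  # reflection in the xz-plane
sigma_yz = np.diag([-1.0,  1.0, 1.0])  # reflection in the yz-plane

def is_symmetry_operation(R):
    # R is a symmetry operation if it maps every atom onto an atom of
    # the same element (the two hydrogens are allowed to swap).
    transformed = coords @ R.T
    return all(
        any(atoms[i] == atoms[j] and np.allclose(p, coords[j])
            for j in range(len(atoms)))
        for i, p in enumerate(transformed))

for name, R in [("C2", C2z), ("sigma_v(xz)", sigma_xz),
                ("sigma_v'(yz)", sigma_yz)]:
    print(name, is_symmetry_operation(R))  # True for all three
```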
Symmetry groups: Common point groups The following table lists many of the point groups applicable to molecules, labelled using the Schoenflies notation, which is common in chemistry and molecular spectroscopy. The descriptions include common shapes of molecules, which can be explained by the VSEPR model. In each row, the descriptions and examples have no higher symmetries, meaning that the named point group captures all of the point symmetries. Symmetry groups: Representations A set of matrices that multiply together in a way that mimics the multiplication table of the elements of a group is called a representation of the group. For example, for the C2v point group, the following three matrices are part of a representation of the group:

\[
\underbrace{\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{C_2}
\times
\underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\sigma_v}
=
\underbrace{\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\sigma_v'}
\]

Although an infinite number of such representations exist, the irreducible representations (or "irreps") of the group are all that are needed, as all other representations of the group can be described as a linear combination of the irreducible representations. Symmetry groups: Also, the irreducible representations are those matrix representations in which the matrices are in their most diagonal form possible. Character tables: For any group, its character table gives a tabulation (for the classes of the group) of the characters (the sum of the diagonal elements) of the matrices of all the irreducible representations of the group. As the number of irreducible representations equals the number of classes, the character table is square. Character tables: The representations are labeled according to a set of conventions: A, when rotation around the principal axis is symmetrical; B, when rotation around the principal axis is asymmetrical; E and T are doubly and triply degenerate representations, respectively; when the point group has an inversion center, the subscript g (German: gerade or even) signals no change in sign, and the subscript u (ungerade or uneven) a change in sign, with respect to inversion. Character tables: With the point groups C∞v and D∞h the symbols are borrowed from the angular momentum description: Σ, Π, Δ. The tables also capture information about how the Cartesian basis vectors, rotations about them, and quadratic functions of them transform under the symmetry operations of the group, by noting which irreducible representation transforms in the same way. These indications are conventionally on the right-hand side of the tables. This information is useful because chemically important orbitals (in particular p and d orbitals) have the same symmetries as these entities. Character tables: The character table for the C2v symmetry point group is given below:

C2v | E | C2 | σv(xz) | σv'(yz) | linear, rotations | quadratic
A1 | 1 | 1 | 1 | 1 | z | x², y², z²
A2 | 1 | 1 | −1 | −1 | Rz | xy
B1 | 1 | −1 | 1 | −1 | x, Ry | xz
B2 | 1 | −1 | −1 | 1 | y, Rx | yz

Consider the example of water (H2O), which has the C2v symmetry described above. The 2px orbital of oxygen has B1 symmetry (as in the fourth row of the character table above, with x in the sixth column). It is oriented perpendicular to the plane of the molecule and switches sign with a C2 and a σv'(yz) operation, but remains unchanged with the other two operations (obviously, the character for the identity operation is always +1). This orbital's character set is thus {1, −1, 1, −1}, corresponding to the B1 irreducible representation. Likewise, the 2pz orbital is seen to have the symmetry of the A1 irreducible representation (i.e., none of the symmetry operations change it), the 2py orbital B2, and the 3dxy orbital A2. These assignments and others are noted in the rightmost two columns of the table.
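The orbital assignments just made can be read off mechanically: an orbital's behaviour under the four operations gives a character set, which is matched against the rows of the table. A minimal sketch (the character rows follow the C2v table above):

```python
# Rows of the C2v character table, in the order
# E, C2, sigma_v(xz), sigma_v'(yz).
C2V_TABLE = {
    "A1": (1,  1,  1,  1),
    "A2": (1,  1, -1, -1),
    "B1": (1, -1,  1, -1),
    "B2": (1, -1, -1,  1),
}

def assign(characters):
    # Return the irreducible representation whose row matches.
    return next(name for name, row in C2V_TABLE.items()
                if row == characters)

# Oxygen 2px: unchanged by E and sigma_v(xz), sign-flipped by C2 and
# sigma_v'(yz), giving the character set {1, -1, 1, -1}.
print(assign((1, -1, 1, -1)))  # B1
print(assign((1, 1, 1, 1)))    # A1 (e.g. the 2pz orbital)
```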
Historical background: Hans Bethe used characters of point group operations in his study of ligand field theory in 1929, and Eugene Wigner used group theory to explain the selection rules of atomic spectroscopy. The first character tables were compiled by László Tisza (1933), in connection with vibrational spectra. Robert Mulliken was the first to publish character tables in English (1933), and E. Bright Wilson used them in 1934 to predict the symmetry of vibrational normal modes. The complete set of 32 crystallographic point groups was published in 1936 by Rosenthal and Murphy. Molecular rotation and molecular nonrigidity: As discussed above in the section Point groups and permutation-inversion groups, point groups are useful for classifying the vibrational and electronic states of rigid molecules (sometimes called semi-rigid molecules) which undergo only small oscillations about a single equilibrium geometry. Longuet-Higgins introduced a more general type of symmetry group suitable not only for classifying the vibrational and electronic states of rigid molecules but also for classifying their rotational and nuclear spin states. Further, such groups can be used to classify the states of non-rigid (or fluxional) molecules that tunnel between equivalent geometries (called versions) and to allow for the distorting effects of molecular rotation. These groups are known as permutation-inversion groups, because the symmetry operations in them are energetically feasible permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the two. Molecular rotation and molecular nonrigidity: For example, ethane (C2H6) has three equivalent staggered conformations. Tunneling between the conformations occurs at ordinary temperatures by internal rotation of one methyl group relative to the other. This is not a rotation of the entire molecule about the C3 axis. Although each conformation has D3d symmetry, as in the table above, description of the internal rotation and associated quantum states and energy levels requires the more complete permutation-inversion group G36. Similarly, ammonia (NH3) has two equivalent pyramidal (C3v) conformations which are interconverted by the process known as nitrogen inversion. This is not the point group inversion operation i used for centrosymmetric rigid molecules (i.e., the inversion of vibrational displacements and electronic coordinates in the nuclear center of mass) since NH3 has no inversion center and is not centrosymmetric. Rather, it is the inversion of the nuclear and electronic coordinates in the molecular center of mass (sometimes called the parity operation), which happens to be energetically feasible for this molecule. The appropriate permutation-inversion group to be used in this situation is D3h(M), which is isomorphic with the point group D3h. Molecular rotation and molecular nonrigidity: Additionally, as examples, the methane (CH4) and H3+ molecules have highly symmetric equilibrium structures with Td and D3h point group symmetries respectively; they lack permanent electric dipole moments but they do have very weak pure rotation spectra because of rotational centrifugal distortion. The permutation-inversion groups required for the complete study of CH4 and H3+ are Td(M) and D3h(M), respectively. In its ground (N) electronic state the ethylene molecule C2H4 has D2h point group symmetry, whereas in the excited (V) state it has D2d symmetry.
To treat these two states together it is necessary to allow torsion and to use the double group of the permutation-inversion group G16. A second and less general approach to the symmetry of nonrigid molecules is due to Altmann. In this approach the symmetry groups are known as Schrödinger supergroups and consist of two types of operations (and their combinations): (1) the geometric symmetry operations (rotations, reflections, inversions) of rigid molecules, and (2) isodynamic operations, which take a nonrigid molecule into an energetically equivalent form by a physically reasonable process such as rotation about a single bond (as in ethane) or a molecular inversion (as in ammonia).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MindTrap** MindTrap: MindTrap is a series of lateral thinking puzzle games played by two individuals or teams. Invented in Canada, it is the main product of MindTrap Games, Inc., which licenses the game for manufacture by various companies including Outset Media, Blue Opal, the Great American Puzzle Factory, Pressman Toy Corporation, Spears Games and Winning Moves. Players are given a puzzle from a card and a limited amount of time to solve it. Each correct answer advances the player or team along a track printed on the scorecard; they win by being the first to reach the end. MindTrap: The original game contained only logic and lateral thinking puzzles, while later editions added other types of brain teasers including tangrams and stick puzzles. Lateral thinking problems are identified by a diamond on the question side of the card, indicating that the answering team is allowed to ask "yes/no" questions about the puzzle scenario. These puzzles often give unnecessary information in order to distract the answerer from a simple, common-sense solution, and play on common assumptions. Some questions play on words or pictures and some on everyday trivia. MindTrap: Many scenarios and characters recur throughout the puzzles, including murders and other crimes investigated by "Detective Shadow" (and perpetrated by villains including "Sid Shady" and "Sam Sham"), and tricks performed by magician "Dee Sceptor". The questions are worded in Canadian English, with Canadian terminology and spelling, and are not localized for the American, UK or Australian markets. Releases and Distributors: MindTrap was originally released as a board game in carton packaging with over 500 puzzles printed on cards and a playing board (printed on paper) by Pressman Toy Corporation in Canada and the US in 1991, by Blue Opal in Australia and by Spears Games in the UK. Translated versions of the original game have been released in French, German and Italian. Tin-packaged versions of the game were released by Paul Lamond in the UK, and later by Pressman Toy Corporation and the Great American Puzzle Factory as tenth anniversary editions. Releases and Distributors: In 1994, MindTrap Games, Inc. and Pirate Radio released MindTrap -- New Audio Mystery Edition on cassette tapes, featuring over 2 hours of mysteries along with an answer book. This double cassette edition lacks the playing board and cards. A sequel to MindTrap, originally titled MindTrap - The Challenge Continues, was released in 1997 by MindTrap Games, Inc. and Pressman Toy Corporation. The Roman numeral "II" was later added to this title. Translations of the sequel released in Europe include Greek, Portuguese and Spanish. In 2001 Ultimate MindTrap, the official sequel to MindTrap II, was released in the UK. Releases and Distributors: Outset Media, a Canadian-based manufacturer and distributor of games and puzzles, licensed MindTrap in 2007 and has since released MindTrap games in different forms. Outset Media releases of the board game include a tin-packaged MindTrap: Classic Edition, a 20th Anniversary Edition, and a French-translated version of the game, which, like the English versions of the game, uses Canadian-French terminology and wording rather than localizing for other markets. In 2007, a travel pack of games called Geometrical Riddles was created by Outset Media. The game includes three levels of difficulty: Novice, Master, and Genius. Each assortment contains 54 cards with recognition problems and geometrical and mathematical puzzles.
Newer MindTrap card games, called Left Brain Right Brain, Brain Cramp, and Shadow Mysteries, were released in 2011. Releases and Distributors: Pressman Toy Corporation released a series of 500-piece, 24" x 18" "mystery" jigsaw puzzles, each provided with a booklet containing an original short mystery story. Readers assemble the puzzle to discover clues and match wits against Detective Shadow in a race to solve the crime. List of MindTrap Games: Board games: MindTrap (1991) - the original game, consisting only of logic and lateral thinking puzzles. MindTrap II - The Challenge Continues (1997) - a sequel introducing additional puzzle types (picture, stick and shape puzzles). Ultimate MindTrap (2001) - another sequel, with new puzzles in the various types introduced in MindTrap II. MindTrap - The Revised Edition (2007) - a new edition comprising puzzles from MindTrap and MindTrap II. MindTrap - Classic Edition (2007) - 486 of the best puzzles, mysteries, conundrums and trick questions from MindTrap. Anniversary editions: MindTrap 10th Anniversary Edition (2001) - same as the original game but comes in a 10th Anniversary Edition tin. MindTrap 20th Anniversary Edition (2011) - does away with the playing board but adds two new categories to those available in MindTrap II. Translated releases: MindTrap French (2006) - the original game, translated into French and distributed by Outset Media. MindTrap (1991) - German (Schmidt Spiele), Italian (Giochi Spear). MindTrap II - Greek (Spear's Games), Portuguese (Mattel Games), Spanish (Juegos Spear). Audio editions: MindTrap - All New Audio Mystery Edition (1994) - double cassette of over 2 hours, including mysteries new relative to the original. Card games: MindTrap Geometrical Riddles: Novice Level (2007) - for ages 10 and up. MindTrap Geometrical Riddles: Master Level (2007) - for ages 12 and up. MindTrap Geometrical Riddles: Genius Level (2007) - for ages 14 and up. MindTrap: Brain Cramp (2011) - for ages 10 and up. MindTrap: Shadow Mysteries (2011) - for ages 12 and up; includes detective scenarios featuring MindTrap's infamous characters "Detective Shadow", "Sid Shady" and "Sam Sham". MindTrap: Left Brain Right Brain (2011) - for ages 14 and up. Jigsaw puzzles: MindTrap - Murder By Will; MindTrap - Murder's in Fashion; MindTrap - An Unsavoury Demise; MindTrap - Revenge in Paradise. Books: Tricky MindTrap Puzzles: Challenge the Way You Think & See (2000); Lateral MindTrap Puzzles: Challenge the Way You Think & See (2000).
**Transition nuclear protein** Transition nuclear protein: Transition nuclear proteins (TNPs) are proteins that are involved in the packaging of sperm nuclear DNA during spermiogenesis. They take the place of histones associated with the sperm DNA, and are subsequently themselves replaced by protamines. TNPs in humans include TNP1 and TNP2.
**Mimicry** Mimicry: In evolutionary biology, mimicry is an evolved resemblance between an organism and another object, often an organism of another species. Mimicry may evolve between different species, or between individuals of the same species. Often, mimicry functions to protect a species from predators, making it an anti-predator adaptation. Mimicry evolves if a receiver (such as a predator) perceives the similarity between a mimic (the organism that has a resemblance) and a model (the organism it resembles) and as a result changes its behaviour in a way that provides a selective advantage to the mimic. The resemblances that evolve in mimicry can be visual, acoustic, chemical, tactile, or electric, or combinations of these sensory modalities. Mimicry may be to the advantage of both organisms that share a resemblance, in which case it is a form of mutualism; or mimicry can be to the detriment of one, making it parasitic or competitive. The evolutionary convergence between groups is driven by the selective action of a signal-receiver or dupe. Birds, for example, use sight to identify palatable insects and butterflies, whilst avoiding the noxious ones. Over time, palatable insects may evolve to resemble noxious ones, making them mimics and the noxious ones models. In the case of mutualism, sometimes both groups are referred to as "co-mimics". It is often thought that models must be more abundant than mimics, but this is not always so. Mimicry may involve numerous species; many harmless species such as hoverflies are Batesian mimics of strongly defended species such as wasps, while many such well-defended species form Müllerian mimicry rings, all resembling each other. Mimicry between prey species and their predators often involves three or more species. In its broadest definition, mimicry can include non-living models. The specific terms masquerade and mimesis are sometimes used when the models are inanimate. For example, animals such as flower mantises, planthoppers, and comma and geometer moth caterpillars resemble twigs, bark, leaves, bird droppings or flowers. Many animals bear eyespots, which are hypothesized to resemble the eyes of larger animals. They may not resemble any specific organism's eyes, and whether or not animals respond to them as eyes is also unclear. Nonetheless, eyespots are the subject of a rich contemporary literature. The model is usually another species, except in automimicry, where members of the species mimic other members, or other parts of their own bodies, and in inter-sexual mimicry, where members of one sex mimic members of the other. Mimicry: Mimicry can result in an evolutionary arms race if mimicry negatively affects the model, in which case the model may evolve an appearance distinct from the mimic's. Mimicry should not be confused with other forms of convergent evolution, which occur when species come to resemble each other by adapting to similar lifestyles that have nothing to do with a common signal receiver. Mimics may have different models for different life cycle stages, or they may be polymorphic, with different individuals imitating different models, such as in Heliconius butterflies. Models themselves may have more than one mimic, though frequency-dependent selection favours mimicry where models outnumber mimics. Models tend to be relatively closely related organisms, but mimicry of vastly different species is also known. Most known mimics are insects, though many other examples including vertebrates are also known.
Plants and fungi may also be mimics, though less research has been carried out in this area. Etymology: Use of the word mimicry dates to 1637. It derives from the Greek term mimetikos, "imitative", in turn from mimetos, the verbal adjective of mimeisthai, "to imitate". Originally used to describe people, "mimetic" was used in zoology from 1851, "mimicry" from 1861. Classification: Many types of mimicry have been described. An overview of each follows, highlighting the similarities and differences between the various forms. Classification is often based on function with respect to the mimic (e.g., avoiding harm). Some cases may belong to more than one class, e.g., automimicry and aggressive mimicry are not mutually exclusive, as one describes the species relationship between model and mimic, while the other describes the function for the mimic (obtaining food). The terminology used is not without debate, and attempts to clarify it have led to new terms being included. The term "masquerade" is sometimes used when the model is inanimate, but it is differentiated from "crypsis" in its strict sense by the potential response of the signal receiver. In crypsis the receiver is assumed not to respond, while a masquerader confuses the recognition system of a receiver that would otherwise seek the signaller. In the other forms of mimicry, the signal is not filtered out by the sensory system of the receiver. These are not mutually exclusive: in the evolution of wasp-like appearance, it has been argued that insects evolve to masquerade as wasps, since predatory wasps do not attack each other, but this mimetic resemblance also deters vertebrate predators. Classification: Defensive Defensive or protective mimicry takes place when organisms are able to avoid harmful encounters by deceiving enemies into treating them as something else. The first three such cases discussed here entail mimicry of animals protected by warning coloration: Batesian mimicry, where a harmless mimic poses as harmful. Müllerian mimicry, where two or more harmful species mutually advertise themselves as harmful. Mertensian mimicry, where a deadly mimic resembles a less harmful but lesson-teaching model. The fourth case, Vavilovian mimicry, where weeds resemble crops, involves humans as the agent of selection. Classification: Batesian In Batesian mimicry the mimic shares signals similar to the model, but does not have the attribute that makes it unprofitable to predators (e.g., unpalatability). In other words, a Batesian mimic is a sheep in wolf's clothing. It is named after Henry Walter Bates, an English naturalist whose work on butterflies in the Amazon rainforest (described in The Naturalist on the River Amazons) was pioneering in this field of study. Mimics are less likely to be found out (for example by predators) when in low proportion to their model. This phenomenon is called negative frequency-dependent selection, and it applies in most forms of mimicry. Batesian mimicry can only be maintained if the harm caused to the predator by eating a model outweighs the benefit of eating a mimic. The nature of learning is weighted in favor of the mimics, for a predator that has a bad first experience with a model tends to avoid anything that looks like it for a long time, and does not re-sample soon to see whether the initial experience was a false negative. However, if mimics become more abundant than models, then the probability of a young predator having a first experience with a mimic increases.
Such systems are therefore most likely to be stable where both the model and the mimic occur, and where the model is more abundant than the mimic. This is not the case in Müllerian mimicry, which is described next. Classification: There are many Batesian mimics in the order Lepidoptera. Consul fabius and Eresia eunice imitate unpalatable Heliconius butterflies such as H. ismenius. Limenitis arthemis imitates the poisonous pipevine swallowtail (Battus philenor). Several palatable moths produce ultrasonic click calls to mimic unpalatable tiger moths. Octopuses of the genus Thaumoctopus (the mimic octopus) are able to intentionally alter their body shape and coloration to resemble dangerous sea snakes or lionfish. The helmeted woodpecker (Dryocopus galeatus), a rare species which lives in the Atlantic Forest of Brazil, Paraguay, and Argentina, has a similar red crest, black back, and barred underside to two larger woodpeckers: Dryocopus lineatus and Campephilus robustus. This mimicry reduces attacks on Dryocopus galeatus from other animals. Scientists had mistakenly believed that D. galeatus was a close cousin of the other two species, because of the visual similarity, and because the three species live in the same habitat and eat similar food. Batesian mimicry also occurs in the plant kingdom, such as in the chameleon vine, which adapts its leaf shape and colour to match that of the plant it is climbing, so that its edible leaves appear to be the less desirable leaves of its host. Classification: Müllerian Müllerian mimicry, named for the German naturalist Fritz Müller, describes a situation where two or more species share similar warning or aposematic signals and all possess genuine anti-predation attributes (e.g. being unpalatable). At first, Bates could not explain why this should be so: if both were harmful, why did one need to mimic another? Müller put forward the first explanation and mathematical model for this phenomenon: if a common predator confuses two species, individuals in both those species are more likely to survive (a numerical sketch of this model appears at the end of this article). This type of mimicry is unique in several respects. Firstly, both the mimic and the model benefit from the interaction, which could thus be classified as mutualism. The signal receiver also benefits from this system, despite being deceived about species identity, as it is able to generalize the pattern to potentially harmful encounters. The distinction between mimic and model that is clear in Batesian mimicry is also blurred. Where one species is scarce and another abundant, the rare species can be said to be the mimic. When both are present in similar numbers, however, it makes more sense to speak of each as a co-mimic than of distinct 'mimic' and 'model' species, as their warning signals tend to converge. Also, the mimetic species may exist on a continuum from harmless to highly noxious, so Batesian mimicry grades smoothly into Müllerian convergence. Classification: The monarch butterfly (Danaus plexippus) is a member of a Müllerian complex with the viceroy butterfly (Limenitis archippus), sharing coloration patterns and display behaviour. The viceroy has subspecies with somewhat different coloration, each closely matching the local Danaus species. For example, in Florida, the pairing is of the viceroy and the queen butterfly, whereas in Mexico the viceroy resembles the soldier butterfly. The viceroy is thus involved in three different Müllerian pairs.
This example was long believed to be Batesian, with the viceroy mimicking the monarch, but the viceroy is actually more unpalatable than the queen. The genus Morpho is palatable, but some species (such as M. amathonte) are strong fliers; birds – even species that specialize in catching butterflies on the wing – find it hard to catch them. The conspicuous blue coloration shared by most Morpho species may be Müllerian, or may be "pursuit aposematism". Since Morpho butterflies are sexually dimorphic, the males' iridescent coloration may also relate to sexual selection. The "orange complex" of distasteful butterfly species includes the heliconiines Agraulis vanillae, Dryadula phaetusa, and Dryas iulia. At least seven species of millipedes in the genera Apheloria and Brachoria (Xystodesmidae) form a Müllerian mimicry ring in the eastern United States, in which unrelated polymorphic species converge on similar colour patterns where their ranges overlap. Classification: Emsleyan/Mertensian Emsleyan or Mertensian mimicry describes the unusual case where a deadly prey mimics a less dangerous species. It was first proposed by M. G. Emsley as a possible explanation for how a predator can learn to avoid a very dangerous aposematic animal, such as a coral snake, when the predator is very likely to die, making learning unlikely. The theory was developed by the German biologist Wolfgang Wickler, who named it after the German herpetologist Robert Mertens. The scenario is unusual, as it is usually the most harmful species that is the model. But if a predator dies on its first encounter with a deadly snake, it has no occasion to learn to recognize the snake's warning signals. There would then be no advantage for an extremely deadly snake in being aposematic: any predator that attacked it would be killed before it could learn to avoid the deadly prey, so the snake would be better off being camouflaged, to avoid attacks altogether. But if the predator first learnt to avoid a less deadly snake that had warning colours, the deadly species could then profit (be attacked less often) by mimicking the less dangerous snake. Some harmless milk snake (Lampropeltis triangulum) subspecies, the moderately toxic false coral snakes (genus Erythrolamprus), and the deadly coral snakes (genus Micrurus) all have a red background color with black and white/yellow rings. In this system, both the milk snakes and the deadly coral snakes are mimics, whereas the false coral snakes are the model. Classification: Wasmannian In Wasmannian mimicry, the mimic resembles a model that it lives alongside in a nest or colony. Most of the models here are social insects such as ants, termites, bees and wasps. Vavilovian Vavilovian mimicry is found in weeds that come to share characteristics with a domesticated plant through artificial selection. It is named after the Russian botanist and geneticist Nikolai Vavilov. Selection against the weed may occur either by manually killing the weed, or by separating its seeds from those of the crop by winnowing. Classification: Vavilovian mimicry presents an illustration of unintentional (or rather 'anti-intentional') selection by man. Weeders do not want to select weeds and their seeds that look increasingly like cultivated plants, yet there is no other option. For example, early barnyard grass, Echinochloa oryzoides, is a weed in rice fields and looks similar to rice; its seeds are often mixed in rice and have become difficult to separate through Vavilovian mimicry.
Vavilovian mimics may eventually be domesticated themselves, as in the case of rye in wheat; Vavilov called these weed-crops secondary crops. Vavilovian mimicry can be classified as defensive mimicry, in that the weed mimics a protected species. This bears strong similarity to Batesian mimicry in that the weed does not share the properties that give the model its protection, and both the model and the dupe (in this case people) are harmed by its presence. There are some key differences, though; in Batesian mimicry, the model and signal receiver are enemies (the predator would eat the protected species if it could), whereas here the crop and its human growers are in a mutualistic relationship: the crop benefits from being dispersed and protected by people, despite being eaten by them. In fact, the crop's only "protection" relevant here is its usefulness to humans. Secondly, the weed is not eaten, but simply destroyed. The only motivation for killing the weed is its effect on crop yields. Finally, this type of mimicry does not occur in ecosystems unaltered by humans. Classification: Gilbertian Gilbertian mimicry involves only two species. The potential host (or prey) drives away its parasite (or predator) by mimicking it, the reverse of host-parasite aggressive mimicry. The term was coined by Pasteur for such rare mimicry systems, and is named after the American ecologist Lawrence E. Gilbert. Gilbertian mimicry occurs in the genus Passiflora. The leaves of this plant contain toxins that deter herbivorous animals. However, some Heliconius butterfly larvae have evolved enzymes that break down these toxins, allowing them to specialize on this genus. This has created further selection pressure on the host plants, which have evolved stipules that mimic mature Heliconius eggs near the point of hatching. These butterflies tend to avoid laying eggs near existing ones, which helps avoid exploitative intraspecific competition between caterpillars: those that lay on vacant leaves provide their offspring with a greater chance of survival. Most Heliconius larvae are cannibalistic, meaning that on a given leaf the older eggs hatch first and the larvae eat the new arrivals. Thus, it seems that such plants have evolved egg dummies under selection pressure from these grazing herbivore enemies. In addition, the decoy eggs are also nectaries, attracting predators of the caterpillars such as ants and wasps as a further defence. Classification: Browerian Browerian mimicry, named after Lincoln P. Brower and Jane Van Zandt Brower, is a postulated form of automimicry, where the model belongs to the same species as the mimic. It is the analogue of Batesian mimicry within a single species, and occurs when there is a palatability spectrum within a population. Examples include the monarch and the queen from the subfamily Danainae, which feed on milkweed species of varying toxicity. These species store toxins from their host plants, which are maintained even in the adult (imago) form. As levels of toxin vary depending on diet during the larval stage, some individuals are more toxic than others. Less palatable organisms, therefore, mimic more dangerous individuals, with their likeness already perfected. Classification: This is not always the case, however. In sexually dimorphic species, one sex may be more of a threat to predators than the other, and the less protected sex could mimic the protected one.
Evidence for this possibility is provided by the behaviour of a monkey from Gabon, which regularly ate male moths of the genus Anaphe, but promptly stopped after it tasted a noxious female. Classification: Aggressive Predators Aggressive mimicry is found in predators or parasites that share some of the characteristics of a harmless species, allowing them to avoid detection by their prey or host; this can be compared with the story of the wolf in sheep's clothing, as long as it is understood that no conscious deceptive intent is involved. The mimic may resemble the prey or host itself, or another organism that is either neutral or beneficial to the signal receiver. In this class of mimicry, the model may be affected negatively, positively or not at all. Just as parasites can be treated as a form of predator, host-parasite mimicry is treated here as a subclass of aggressive mimicry. Classification: The mimic may have a particular significance for duped prey. One such case is found among spiders, in which aggressive mimicry is quite common, both in luring prey and in disguising stealthily approaching predators. One case is the golden orb weaver (Nephila clavipes), which spins a conspicuous golden-colored web in well-lit areas. Experiments show that bees are able to associate the webs with danger when the yellow pigment is not present, as occurs in less well-lit areas where the web is much harder to see. Other colours were also learned and avoided, but bees seemed least able to effectively associate yellow-pigmented webs with danger. Yellow is the colour of many nectar-bearing flowers, however, so perhaps avoiding yellow is not worthwhile. Another form of mimicry is based not on colour but on pattern. Species such as the silver argiope (Argiope argentata) employ prominent patterns in the middle of their webs, such as zigzags. These may reflect ultraviolet light, and mimic the patterns known as nectar guides seen in many flowers. Spiders change their webs from day to day, which can be explained by the ability of bees to remember web patterns. Bees are able to associate a certain pattern with a spatial location, meaning the spider must spin a new pattern regularly or suffer diminishing prey capture. Classification: Another case is where males are lured towards what seems to be a sexually receptive female. The model in this situation is the same species as the dupe. Beginning in the 1960s, James E. Lloyd's investigation of female fireflies of the genus Photuris revealed that they emit the same light signals that females of the genus Photinus use as a mating signal. Further research showed that male fireflies from several different genera are attracted to these "femmes fatales", and are subsequently captured and eaten. Female signals are based on those received from the males, each female having a repertoire of signals matching the delay and duration of the female of the corresponding species. This mimicry may have evolved from non-mating signals that have become modified for predation. Classification: The listrosceline katydid Chlorobalius leucoviridis of inland Australia is capable of attracting male cicadas of the tribe Cicadettini by imitating the species-specific reply clicks of sexually receptive female cicadas. This example of acoustic aggressive mimicry is similar to the Photuris firefly case in that the predator's mimicry is remarkably versatile – playback experiments show that C.
leucoviridis is able to attract males of many cicada species, including cicadettine cicadas from other continents, even though cicada mating signals are species-specific. Some carnivorous plants may also be able to increase their rate of capture through mimicry. Luring is not a necessary condition, however, as the predator still has a significant advantage simply by not being identified as such. They may resemble a mutualistic symbiont or a species of little relevance to the prey. Classification: A case of the latter situation is a species of cleaner fish and its mimic, though in this example the model is greatly disadvantaged by the presence of the mimic. Cleaner fish are the allies of many other species, which allow them to eat their parasites and dead skin. Some allow the cleaner to venture inside their body to hunt these parasites. However, one species of cleaner, the bluestreak cleaner wrasse (Labroides dimidiatus), is the unknowing model of a mimetic species, the sabre-toothed blenny (Aspidontus taeniatus). This wrasse resides in coral reefs in the Indian and Pacific Oceans, and is recognized by other fishes, which then let it clean them. Its impostor, a species of blenny, lives in the Indian Ocean and not only looks like the wrasse in terms of size and coloration, but even mimics the cleaner's "dance". Having fooled its prey into letting its guard down, it then bites it, tearing off a piece of its fin before fleeing. Fish grazed on in this fashion soon learn to distinguish mimic from model, but because the similarity between the two is so close, they become much more cautious of the model as well, so both are affected. Due to the victims' ability to discriminate between foe and helper, the blennies have evolved close similarity, right down to the regional level. Another example that does not involve any luring is the zone-tailed hawk, which resembles the turkey vulture. It flies amongst the vultures, suddenly breaking from the formation and ambushing its prey. Here the hawk's presence is of no evident significance to the vultures, affecting them neither negatively nor positively. Classification: Parasites Parasites can also be aggressive mimics, though the situation is somewhat different from those outlined previously. Some predators have a feature that draws prey; parasites can also mimic their host's natural prey, but are eaten themselves, providing a pathway into their host. Leucochloridium, a genus of flatworm, matures in the digestive system of songbirds, their eggs then passing out of the bird in the faeces. They are then taken up by Succinea, a terrestrial snail. The eggs develop in this intermediate host, and must then find a suitable bird to mature in. Since the host birds do not eat snails, the sporocyst has another strategy to reach its host's intestine. The sporocysts are brightly coloured and move in a pulsating fashion. A sporocyst-sac pulsates in the snail's eye stalks, coming to resemble an irresistible meal for a songbird. In this way, it can bridge the gap between hosts, allowing it to complete its life cycle. A nematode (Myrmeconema neotropicum) changes the colour of the abdomen of workers of the canopy ant Cephalotes atratus to make it appear like the ripe fruit of Hyeronima alchorneoides. It also changes the behaviour of the ant so that the gaster (rear part) is held raised. This presumably increases the chances of the ant being eaten by birds.
The droppings of birds are collected by other ants and fed to their brood, thereby helping to spread the nematode. In an unusual case, planidium larvae of some beetles of the genus Meloe form a group and produce a pheromone that mimics the sex attractant of their host bee species. When a male bee arrives and attempts to mate with the mass of larvae, they climb onto his abdomen. From there, they transfer to a female bee, and from there to the bee nest to parasitize the bee larvae. Classification: Host-parasite mimicry is a two-species system where a parasite mimics its own host. Cuckoos are a canonical example of brood parasitism, a form of parasitism where the mother has its offspring raised by another unwitting individual, often of a different species, cutting down the biological mother's parental investment in the process. The ability to lay eggs that mimic the host's eggs is the key adaptation. The adaptation to different hosts is inherited through the female line in so-called gentes (singular: gens). Cases of intraspecific brood parasitism, where a female lays in a conspecific's nest, as illustrated by the goldeneye duck (Bucephala clangula), do not represent a case of mimicry. A different mechanism is chemical mimicry, as seen in the parasitic butterfly Phengaris rebeli, which parasitizes the ant species Myrmica schencki by releasing chemicals that fool the worker ants into believing that the caterpillar larvae are ant larvae, enabling the P. rebeli larvae to be brought directly into the M. schencki nest. Parasitic (cuckoo) bumblebees (formerly Psithyrus, now included in Bombus) resemble their hosts more closely than would be expected by chance, at least in areas like Europe where parasite-host co-speciation is common. However, this is explainable as Müllerian mimicry, rather than requiring the parasite's coloration to deceive the host and thus constitute aggressive mimicry. Classification: Reproductive Reproductive mimicry occurs when the actions of the dupe directly aid the mimic's reproduction. This is common in plants with deceptive flowers that do not provide the reward they seem to offer, and it may occur in Papua New Guinea fireflies, in which the signal of Pteroptyx effulgens is used by P. tarsalis to form aggregations to attract females. Other forms of mimicry have a reproductive component, such as Vavilovian mimicry involving seeds, vocal mimicry in birds, and aggressive and Batesian mimicry in brood parasite-host systems. Classification: Bakerian and Dodsonian Bakerian mimicry, named after Herbert G. Baker, is a form of automimicry where female flowers mimic male flowers of their own species, cheating pollinators out of a reward. This reproductive mimicry may not be readily apparent, as members of the same species may still exhibit some degree of sexual dimorphism. It is common in many species of Caricaceae. Dodsonian mimicry, named after Calaway H. Dodson, is a form of reproductive floral mimicry where the model belongs to a different species than the mimic. By providing sensory signals similar to the model flower, it can lure the model's pollinators. Like Bakerian mimics, no nectar is provided. Epidendrum ibaguense (Orchidaceae) resembles flowers of Lantana camara and Asclepias curassavica, and is pollinated by monarch butterflies and perhaps hummingbirds. Similar cases are seen in some other species of the same family. The mimetic species may still have pollinators of its own, though.
For example, a lamellicorn beetle, which usually pollinates correspondingly coloured Cistus flowers, is also known to aid in the pollination of Ophrys species that are normally pollinated by bees. Classification: Pseudocopulation Pseudocopulation occurs when a flower mimics a female of a certain insect species, inducing the males to try to copulate with the flower. This is much like the aggressive mimicry in fireflies described previously, but with a more benign outcome for the pollinator. This form of mimicry has been called Pouyannian mimicry, after Maurice-Alexandre Pouyanne, who first described the phenomenon. It is most common in orchids, which mimic females of the order Hymenoptera (generally bees and wasps), and may account for around 60% of pollinations. Depending on the morphology of the flower, a pollen mass called a pollinium is attached to the head or abdomen of the male. This is then transferred to the stigma of the next flower the male tries to inseminate, resulting in pollination. Visual mimicry is the most obvious sign of this deception for humans, but the visual aspect may be minor or non-existent. It is the senses of touch and olfaction that are most important. Classification: Inter-sexual mimicry Inter-sexual mimicry occurs when individuals of one sex in a species mimic members of the opposite sex to facilitate sneak mating. An example is the three male forms of the marine isopod Paracerceis sculpta. Alpha males are the largest and guard a harem of females. Beta males mimic females and manage to enter the harem of females without being detected by the alpha males, allowing them to mate. Gamma males are the smallest males and mimic juveniles. This also allows them to mate with the females without the alpha males detecting them. Similarly, among common side-blotched lizards, some males mimic the yellow throat coloration and even the mating-rejection behaviour of the other sex to sneak matings with guarded females. These males look and behave like unreceptive females. This strategy is effective against "usurper" males with orange throats, but ineffective against blue-throated "guarder" males, which chase them away. Female spotted hyenas have pseudo-penises that make them look like males. Classification: Automimicry Automimicry or intraspecific mimicry occurs within a single species. One form of such mimicry is where one part of an organism's body resembles another part. For example, the tails of some snakes resemble their heads; they move backwards when threatened and present the predator with the tail, improving their chances of escape without fatal harm. Some fishes have eyespots near their tails, and when mildly alarmed swim slowly backwards, presenting the tail as a head. Some insects, such as some lycaenid butterflies, have tail patterns and appendages of various degrees of sophistication that promote attacks at the rear rather than at the head. Several species of pygmy owl bear "false eyes" on the back of the head, misleading predators into reacting as though they were the subject of an aggressive stare. Classification: Some writers use the term "automimicry" when the mimic imitates other morphs within the same species. For example, in a species where males mimic females or vice versa, this may be an instance of sexual mimicry in evolutionary game theory. Examples are found in some species of birds, fishes, and lizards.
Quite elaborate strategies along these lines are known, such as the well-known "scissors, paper, rock" mimicry in Uta stansburiana, but there are qualitatively different examples in many other species, such as some Platysaurus. Many species of insects are toxic or distasteful when they have fed on certain plants that contain chemicals of particular classes, but not when they have fed on plants that lack those chemicals. For instance, some species of the subfamily Danainae feed on various species of the Asclepiadoideae in the family Apocynaceae, which render them poisonous and emetic to most predators. Such insects frequently are aposematically coloured and patterned. When feeding on innocuous plants, however, they are harmless and nutritious, but a bird that has once sampled a toxic specimen is unlikely to eat harmless specimens that have the same aposematic coloration. When regarded as mimicry of toxic members of the same species, this too may be seen as automimicry. Classification: Some species of caterpillar, such as many hawkmoths (Sphingidae), have eyespots on their anterior abdominal segments. When alarmed, they retract the head and the thoracic segments into the body, leaving the apparently threatening large eyes at the front of the visible part of the body. Classification: Many insects have filamentous "tails" at the ends of their wings and patterns of markings on the wings themselves. These combine to create a "false head". This misdirects predators such as birds and jumping spiders (Salticidae). Spectacular examples occur in the hairstreak butterflies; when perching on a twig or flower, they commonly do so upside down and shift their rear wings repeatedly, causing antenna-like movements of the "tails" on their wings. Studies of rear-wing damage support the hypothesis that this strategy is effective in deflecting attacks from the insect's head. Classification: Other forms Some forms of mimicry do not fit easily within the classification given above. Floral mimicry is induced by the discomycete fungus Monilinia vaccinii-corymbosi. In this case, a fungal plant pathogen infects the leaves of blueberries, causing them to secrete sugars, in effect mimicking the nectar of flowers. To the naked eye the leaves do not look like flowers, yet they still attract pollinating insects like bees using an ultraviolet signal. This case is unusual, in that the fungus benefits from the deception but it is the leaves that act as mimics, being harmed in the process. It is similar to host-parasite mimicry, but the host does not receive the signal. It has something in common with automimicry, but the plant does not benefit from the mimicry, and the action of the pathogen is required to produce it. Evolution: It is widely accepted that mimicry evolves as a positive adaptation. The lepidopterist and novelist Vladimir Nabokov, however, argued that although natural selection might stabilize a "mimic" form, it would not be necessary to create it. The most widely accepted model used to explain the evolution of mimicry in butterflies is the two-step hypothesis. The first step involves mutation in modifier genes that regulate a complex cluster of linked genes that cause large changes in morphology. The second step consists of selection on genes with smaller phenotypic effects, creating an increasingly close resemblance. This model is supported by empirical evidence suggesting that a few single point mutations cause large phenotypic effects, while numerous others produce smaller effects.
Some regulatory elements collaborate to form a supergene for the development of butterfly colour patterns. The model is supported by computational simulations of population genetics. The Batesian mimicry in Papilio polytes is controlled by the doublesex gene. Some mimicry is imperfect. Natural selection drives mimicry only far enough to deceive predators; for example, when predators avoid a mimic that imperfectly resembles a coral snake, the mimic is sufficiently protected. Convergent evolution is an alternative explanation for why organisms such as coral reef fish and benthic marine invertebrates such as sponges and nudibranchs have come to resemble each other.
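The mathematical core of Müller's 1879 model, referenced in the Müllerian mimicry section above, lends itself to a short numerical sketch. The code below is illustrative only; the function name and the numbers are assumptions, not from any cited source. It implements Müller's assumption that a naive predator kills a fixed number of aposematic prey while learning the shared warning pattern, with those deaths divided between two co-mimic species in proportion to their abundances.

```python
# Müller's (1879) shared-education model, a minimal sketch.
# Assumption: a naive predator kills a fixed number n_kills of prey while
# learning to avoid their shared warning pattern; the deaths are divided
# between two co-mimic species in proportion to abundance.

def per_capita_gain(n_kills: float, a1: float, a2: float) -> tuple[float, float]:
    """Per-capita saving each species enjoys by sharing a warning pattern.

    Alone, species i loses n_kills individuals; together, species i loses
    n_kills * a_i / (a1 + a2). The absolute saving for species 1 is thus
    n_kills * a2 / (a1 + a2), and per capita n_kills * a2 / (a1 * (a1 + a2)).
    """
    total = a1 + a2
    g1 = n_kills * a2 / (a1 * total)
    g2 = n_kills * a1 / (a2 * total)
    return g1, g2

# With one species ten times rarer than the other, the rare species gains
# about 100x more per capita -- Müller's result that the benefits scale as
# the squares of the inverse abundances, which is why rarer defended
# species tend to converge on the pattern of commoner ones.
g_rare, g_common = per_capita_gain(n_kills=1200, a1=100, a2=1000)
print(g_rare / g_common)  # -> 100.0
```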
**Gould Belt Survey** Gould Belt Survey: The Gould Belt Survey is an astronomical research project led by the Center for Astrophysics | Harvard & Smithsonian, with the participation of several other institutions. The astronomers use observations and data captured by the Spitzer Space Telescope and other telescopes to create a complete picture of the star-forming regions within an approximately 1600-light-year radius centered on the Solar System. These regions are partly or completely obscured by interstellar dust and therefore cannot be observed by telescopes using visible light, like the Hubble Space Telescope. Gould Belt Survey: The Gould Belt Survey team uses a variety of telescopes and observatories to study multiple aspects of star formation. The Spitzer Space Telescope provides imagery and observations made in the infrared spectrum, while, for example, the James Clerk Maxwell Telescope provides images from the submillimeter wavelength region of the spectrum. The Herschel Space Observatory provided observational data between the far-infrared and submillimeter wavelengths, effectively covering the wavelengths between the observational capabilities of the Spitzer and Maxwell telescopes. Observation results: Regions surveyed include clouds in Scorpius, Lupus, Musca, Chamaeleon, the Serpens-Aquila Rift and W40, Cepheus, and IC 5146. The first observations by the Spitzer Space Telescope were completed between September 21-27, 2006. The first region surveyed was IC 5146 (the Cocoon Nebula in Cygnus). The first details of the research were presented at a meeting of the American Astronomical Society in Seattle. Based on these observations, the team of astronomers led by Robert Gutermuth, of the Center for Astrophysics | Harvard & Smithsonian, reported the discovery of Serpens South, a cluster of 50 young stars in the Serpens constellation. The research team also released a poster covering the details of the research and the results presented at the AAS meeting. Related projects: The data from this project will be used in combination with observations from the James Clerk Maxwell Telescope, Herschel Space Observatory, and previous Spitzer observations from the "Cores to Disks" Legacy program and from various Guaranteed Time and General Observer programs that studied star-forming regions within Gould's Belt.
**Crystallography Reviews** Crystallography Reviews: Crystallography Reviews is a quarterly peer-reviewed scientific journal publishing review articles on all aspects of crystallography. It is published by Taylor & Francis. The editor-in-chief is Petra Bombicz (Research Centre for Natural Sciences, Budapest), the book review editor is Alice Brink (University of the Free State, Bloemfontein), the advising editor is John R. Helliwell (University of Manchester), and the founding editor is Moreton Moore (Royal Holloway, University of London). Abstracting and indexing: The journal is abstracted and indexed in Chemical Abstracts Service/CASSI, Science Citation Index Expanded, and Scopus. According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.467.
**Esterom** Esterom: Esterom is an investigational drug studied as a topical analgesic. Chemically, it is a mixture of compounds derived from the esterification of cocaine in propylene glycol. While the major component is benzoylecgonine, the analgesic activity is likely due to hydroxypropyl benzoylecgonine, the only component that penetrates the skin.
**Volkswagen Group A platform** Volkswagen Group A platform: The Volkswagen Group A platform is an automobile platform shared among compact and mid-size cars of the Volkswagen Group. The first version debuted in 1974 and was originally based on the engineering concept of the Volkswagen Golf Mk1; it is applicable to either front- or four-wheel-drive vehicles, using only front-mounted transverse engines. Volkswagen Group A platform: Volkswagens based on this platform have been colloquially referred to by generation number, e.g. the first Golf version (A1) is referred to as a "Mark 1" Golf. Often each generation is designated by substituting "Mark" for "A", but this can be misleading. For example, the Mk1 and Mk2 Scirocco are both based on the A1 platform. Furthermore, confusion was possible with the Volkswagen Passat, which, depending on the generation, has been produced both on the B platform (alongside the Audi A4) and on the A platform. Volkswagen has never used the Mark or Mk designations. Volkswagen Group A platform: Volkswagen Group introduced a new alphanumeric nomenclature for vehicle platforms with the fourth generation. Under Volkswagen's revised platform naming system, the "A4" platform became the PQ34 platform, and what would have been called the A5 platform was called the PQ35 platform. The platform code is composed as follows (a minimal decoding sketch appears at the end of this article):
A letter, P, indicating a passenger car platform
A letter indicating the configuration of the engine: Q indicates a transverse engine (Quer in German)
A digit indicating the platform size or class: 3 corresponds to compact cars, 4 corresponds to mid-size cars
A digit indicating the generation or evolution
The A platform has been superseded by the MQB platform for new models, with the exception of a few models only sold in certain markets. A1: The A1 platform debuted on the Mk1 Golf at its launch in 1974, and continued into the early 1990s, when the last remaining models using the platform - the Scirocco, Cabriolet, and Caddy - were discontinued. A1 platform cars (Typ numbers in brackets):
Volkswagen Golf Mk1 (17)
Volkswagen Golf Cabriolet (155)
Volkswagen Jetta Mk1 (16)
Volkswagen Caddy Mk1 (14)
Volkswagen Scirocco Mk1 & Mk2 (53/53B)
Volkswagen Citi Golf
A2: The A2 platform debuted in 1983 on the Mk2 Golf, and lasted until 1998, when the original SEAT Toledo (the first Volkswagen-developed SEAT following the Spanish company's takeover by Volkswagen) was replaced. The Volkswagen Passat B3 was based on a stretched A2 platform. The Volkswagen Corrado, while being an A2 platform car, uses some components from the A3 platform, notably the rear suspension assembly and some front suspension parts. A2 platform cars (Typ numbers in brackets):
Volkswagen Corrado (53I)
Volkswagen Golf Mk2 (19E)
Volkswagen Jetta II (1G)
SEAT Toledo Mk1 (1L)
Chery A11 and Chery A15
Vortex Corda
Volkswagen Jetta King
Volkswagen Jetta Pioneer
A3: The A3 platform was only used for two models - the Mk3 Golf, launched in 1991, and its saloon equivalent, the Vento, launched in early 1992. A3 platform cars (Typ numbers in brackets):
Volkswagen Golf Mk3 (1E)
Volkswagen Vento/Jetta III (1H)
The smaller A03 platform, used in the Polo (6N), is based on the A3 platform as well, and shares many components. The SEAT Ibiza (6K) and derived models use components of both the A3 and A03 platforms. PQ34 (A4): The A4 platform (PQ34 under the revised scheme) debuted on the Audi A3 in 1996 and went on to be used for many different models over the next two decades.
PQ34 platform cars (Typ numbers in brackets):
Audi A3 Mk1 (8L)
Audi TT Mk1 (8N)
Volkswagen Golf Mk4 (1J)
Volkswagen Bora/Jetta (1J/9M)
Volkswagen Lavida (18)
Volkswagen New Beetle (1C/1Y/9C)
SEAT León Mk1 (1M)
SEAT Toledo Mk2 (1M)
Škoda Octavia Mk1 (1U)
Škoda Kamiq (Chinese version)
PQ35 (A5)/PQ46 (A6): The PQ35 platform was designed to be more modular than previous A platforms. For the first time, a fully independent suspension was used at the rear of all A platform vehicles. The PQ46 platform is a variant derived from this platform, primarily intended for larger vehicles such as mid-size cars and crossovers. A common misconception is that the PQ46-based sixth and seventh generations of the Passat are based on the PL46 (B6) and B7 platforms. However, this transverse-engine Passat has little in common with the longitudinal-engine "B6" and "B7" Audi A4. PQ35 (A5)/PQ46 (A6): PQ35 platform cars (Typ numbers in brackets):
Audi A3 Mk2 (8P)
Audi TT Mk2 (8J)
Audi Q3 Mk1 (8U)
SEAT León Mk2 (1P)
SEAT Toledo Mk3 (5P)
SEAT Altea (5P)
Škoda Octavia Mk2 (1Z)
Škoda Yeti (5L)
Volkswagen Touran (1T)
Volkswagen Caddy Mk3 (2K)
Volkswagen Golf Mk5 (1K)
Volkswagen Golf Mk6 (5K)
Volkswagen Jetta Mk5 (1K)
Volkswagen Scirocco Mk3 (13)
Volkswagen Jetta Mk6 (1K)
Volkswagen Beetle (A5) (16)
Volkswagen Eos (1F)
PQ46 platform cars (Typ numbers in brackets):
Škoda Superb (3T)
Volkswagen CC (3C/35)
Volkswagen Passat B6 & B7 (3C)
Volkswagen Passat NMS (A32/A33)
Volkswagen Sharan Mk2 (7N)
SEAT Alhambra Mk2 (7N)
Volkswagen Tiguan Mk1 (5N)
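Because the revised platform naming described earlier in this article is strictly positional, a code such as PQ35 can be decoded mechanically. The sketch below is illustrative only; the function name and lookup tables are assumptions covering just the letters and digits this article documents, not an official Volkswagen tool.

```python
# Decoder for Volkswagen Group's revised platform codes (e.g. "PQ35"),
# a minimal sketch covering only the fields described in this article.

KIND = {"P": "passenger car"}
ENGINE = {"Q": "transverse engine (German: Quer)"}
SIZE = {"3": "compact car", "4": "mid-size car"}

def decode_platform(code: str) -> dict:
    """Split a four-character code like 'PQ35' into its documented fields."""
    kind, engine, size, generation = code[0], code[1], code[2], code[3]
    return {
        "vehicle type": KIND.get(kind, f"unknown ({kind})"),
        "engine layout": ENGINE.get(engine, f"unknown ({engine})"),
        "size class": SIZE.get(size, f"unknown ({size})"),
        "generation": int(generation),
    }

print(decode_platform("PQ35"))  # the compact platform that would have been "A5"
print(decode_platform("PQ46"))  # the mid-size variant used for the Passat B6/B7
```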
**Shelf-break front** Shelf-break front: Shelf-break fronts are a process by which stratification of the water column occurs. This stratification normally results in thermoclines, since the fronts occur where a sudden change in water depth causes a constriction of the current flow. They can be characterized by the ratio of the potential energy required to maintain mixed (non-stratified) conditions to the energy dissipated by the current being forced across the sudden change in depth: $\text{Ratio} = \dfrac{\text{Potential Energy}}{\text{Dissipated Energy}}$. The energy terms can be expressed in very detailed equations, but with constant terms factored out, the important quantities are the depth-averaged water speed $|\bar{U}|$ and the water depth $h$. Shelf-break front: The stratification index can then be expressed as $\log_{10} \dfrac{h}{C_D\,|\bar{U}|^{3}}$, where $C_D$ is a friction coefficient, approximated as 0.003 for a sandy bottom. This index can be calculated for any coastal region, usually falling in the range of +3 (highly stratified) to -2 (highly turbulent). Reason to calculate: The stratification index for a shelf-break front is an indication of how productive phytoplankton will be. When the stratification index is approximately 1.5, the front produces a nutrient-rich environment for the growth of phytoplankton. Much higher, and the stratification of the water column will not cause the upwelling of nutrients needed for the phytoplankton to prosper; much lower, and the water will be too turbulent for the phytoplankton to use the nutrients available. Stability of the front, in addition to nutrients, is a key to phytoplankton production. An illustration of the stratification index for Narragansett Bay can be produced from estimated average speeds and the actual bathymetry of the bay, together with an estimated $C_D$ for silt, which composes much of the bay's bottom; a Stokes settling calculation, customized for the size of silt particles, gives $C_D = 0.0011$. More accurate speed measurements and detailed $C_D$ values for the bay's bottom could yield a higher-fidelity image. Reason to calculate: In such a map, the areas with a stratification index of approximately 1.5 lie along the edges of the northern bay and near some of the islands. These areas are favorable to the formation of algal blooms in the Narragansett Bay habitat. Algae have been observed in high concentration in some of these areas, but not all of them. Reason to calculate: Flow cytometry results have shown that the relative abundances of picophytoplankton (< 2 μm), small nanophytoplankton (2-10 μm) and large nanophytoplankton (10-20 μm) are greatly affected by the stratification index of the water column. Cell diversity was greatest in the presence of moderate levels of stratification. If the turbulence is too high, their numbers remain stable or fall; if there is no turbulence, their numbers also fall. It is postulated that the nutrient-rich boundary layer around each phytoplankton cell is not exhausted, but renewed, by this moderate level of turbulence.
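The index is simple enough to compute directly from the formula above; the snippet below is a minimal sketch using the friction coefficients quoted in this article, with illustrative (not measured) values of $h$ and $|\bar{U}|$.

```python
import math

def stratification_index(depth_m: float, mean_speed_ms: float,
                         drag_coeff: float = 0.003) -> float:
    """Stratification index from the article: log10(h / (C_D * |U|^3)).

    depth_m       -- water depth h in metres
    mean_speed_ms -- depth-averaged current speed |U| in m/s
    drag_coeff    -- bottom friction coefficient C_D (0.003 for sand;
                     the article uses 0.0011 for Narragansett Bay silt)
    """
    return math.log10(depth_m / (drag_coeff * mean_speed_ms ** 3))

# Illustrative values only: a slow 0.5 m/s current over 30 m of sandy bottom
# gives an index near +4.9 (strongly stratified), while a fast 2.0 m/s
# current over 5 m gives about +2.3 (more mixed). Values near +1.5 mark the
# nutrient-rich frontal zones described above.
print(stratification_index(30.0, 0.5))  # ~4.90
print(stratification_index(5.0, 2.0))   # ~2.32
```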
**Linear complex structure** Linear complex structure: In mathematics, a complex structure on a real vector space V is an automorphism of V that squares to the minus identity, $-I$. Such a structure on V allows one to define multiplication by complex scalars in a canonical fashion so as to regard V as a complex vector space. Linear complex structure: Every complex vector space can be equipped with a compatible complex structure; however, there is in general no canonical such structure. Complex structures have applications in representation theory as well as in complex geometry, where they play an essential role in the definition of almost complex manifolds, by contrast to complex manifolds. The term "complex structure" often refers to this structure on manifolds; when it refers instead to a structure on vector spaces, it may be called a linear complex structure. Definition and properties: A complex structure on a real vector space V is a real linear transformation $J : V \to V$ such that $J^2 = -\mathrm{Id}_V$. Here $J^2$ means J composed with itself and $\mathrm{Id}_V$ is the identity map on V. That is, the effect of applying J twice is the same as multiplication by $-1$. This is reminiscent of multiplication by the imaginary unit, i. A complex structure allows one to endow V with the structure of a complex vector space. Complex scalar multiplication can be defined by $(x + iy)v = xv + yJ(v)$ for all real numbers x, y and all vectors v in V. One can check that this does, in fact, give V the structure of a complex vector space, which we denote $V_J$. Definition and properties: Going in the other direction, if one starts with a complex vector space W, then one can define a complex structure on the underlying real space by defining $Jw = iw$ for all $w \in W$. Definition and properties: More formally, a linear complex structure on a real vector space is an algebra representation of the complex numbers $\mathbb{C}$, thought of as an associative algebra over the real numbers. This algebra is realized concretely as $\mathbb{C} = \mathbb{R}[x]/(x^2 + 1)$, which corresponds to $i^2 = -1$. Then a representation of $\mathbb{C}$ is a real vector space V, together with an action of $\mathbb{C}$ on V (a map $\mathbb{C} \to \mathrm{End}(V)$). Concretely, this is just an action of i, as this generates the algebra, and the operator representing i (the image of i in $\mathrm{End}(V)$) is exactly J. Definition and properties: If $V_J$ has complex dimension n, then V must have real dimension 2n. That is, a finite-dimensional space V admits a complex structure only if it is even-dimensional. It is not hard to see that every even-dimensional vector space admits a complex structure. One can define J on pairs e, f of basis vectors by $Je = f$ and $Jf = -e$, and then extend by linearity to all of V. If $(v_1, \ldots, v_n)$ is a basis for the complex vector space $V_J$, then $(v_1, Jv_1, \ldots, v_n, Jv_n)$ is a basis for the underlying real space V. Definition and properties: A real linear transformation $A : V \to V$ is a complex linear transformation of the corresponding complex space $V_J$ if and only if A commutes with J, i.e. if and only if $AJ = JA$. Likewise, a real subspace U of V is a complex subspace of $V_J$ if and only if J preserves U, i.e. if and only if $JU = U$. Examples: Elementary example The collection of 2×2 real matrices $M(2,\mathbb{R})$ over the real field is 4-dimensional. Any matrix $J = \begin{pmatrix} a & b \\ c & -a \end{pmatrix}$ with $a^2 + bc = -1$ has square equal to the negative of the identity matrix. A complex structure may be formed in $M(2,\mathbb{R})$: with identity matrix I, the elements $xI + yJ$, under matrix multiplication, form a copy of the complex numbers. Examples: $\mathbb{C}^n$ The fundamental example of a linear complex structure is the structure on $\mathbb{R}^{2n}$ coming from the complex structure on $\mathbb{C}^n$.
That is, the complex n-dimensional space $\mathbb{C}^n$ is also a real 2n-dimensional space – using the same vector addition and real scalar multiplication – while multiplication by the complex number i is not only a complex linear transform of the space, thought of as a complex vector space, but also a real linear transform of the space, thought of as a real vector space. Concretely, this is because scalar multiplication by i commutes with scalar multiplication by real numbers, $i(\lambda v) = (i\lambda)v = (\lambda i)v = \lambda(iv)$, and distributes across vector addition. As a complex n×n matrix, this is simply the scalar matrix with i on the diagonal. The corresponding real 2n×2n matrix is denoted J. Examples: Given a basis $\{e_1, e_2, \ldots, e_n\}$ for the complex space, this set, together with these vectors multiplied by i, namely $\{ie_1, ie_2, \ldots, ie_n\}$, forms a basis for the real space. There are two natural ways to order this basis, corresponding abstractly to whether one writes the tensor product as $\mathbb{C}^n = \mathbb{R}^n \otimes_{\mathbb{R}} \mathbb{C}$ or instead as $\mathbb{C}^n = \mathbb{C} \otimes_{\mathbb{R}} \mathbb{R}^n$. If one orders the basis as $\{e_1, ie_1, e_2, ie_2, \ldots, e_n, ie_n\}$, then the matrix for J takes the block diagonal form (subscripts added to indicate dimension): $J_{2n} = \begin{pmatrix} J_2 & & \\ & \ddots & \\ & & J_2 \end{pmatrix}$ with $J_2 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. This ordering has the advantage that it respects direct sums of complex vector spaces, meaning here that the basis for $\mathbb{C}^m \oplus \mathbb{C}^n$ is the same as that for $\mathbb{C}^{m+n}$. On the other hand, if one orders the basis as $\{e_1, e_2, \ldots, e_n, ie_1, ie_2, \ldots, ie_n\}$, then the matrix for J is block-antidiagonal: $J_{2n} = \begin{pmatrix} 0 & -I_n \\ I_n & 0 \end{pmatrix}$. This ordering is more natural if one thinks of the complex space as a direct sum of real spaces, as discussed below. Examples: The data of the real vector space and the J matrix is exactly the same as the data of the complex vector space, as the J matrix allows one to define complex multiplication. At the level of Lie algebras and Lie groups, this corresponds to the inclusion of $\mathfrak{gl}(n,\mathbb{C})$ in $\mathfrak{gl}(2n,\mathbb{R})$ (Lie algebras – matrices, not necessarily invertible) and of $\mathrm{GL}(n,\mathbb{C})$ in $\mathrm{GL}(2n,\mathbb{R})$. The inclusion corresponds to forgetting the complex structure (and keeping only the real one), while the subgroup $\mathrm{GL}(n,\mathbb{C})$ can be characterized (given in equations) as the matrices that commute with J: $\mathrm{GL}(n,\mathbb{C}) = \{ A \in \mathrm{GL}(2n,\mathbb{R}) : AJ = JA \}$. The corresponding statement about Lie algebras is that the subalgebra $\mathfrak{gl}(n,\mathbb{C})$ of complex matrices consists of those whose Lie bracket with J vanishes, meaning $[J, A] = 0$; in other words, it is the kernel of the map of bracketing with J, $[J, -]$. Examples: Note that the defining equations for these statements are the same, as $AJ = JA$ is the same as $AJ - JA = 0$, which is the same as $[A, J] = 0$, though the meaning of the Lie bracket vanishing is less immediate geometrically than the meaning of commuting. Direct sum If V is any real vector space, there is a canonical complex structure on the direct sum $V \oplus V$ given by $J(v, w) = (-w, v)$. The block matrix form of J is $J = \begin{pmatrix} 0 & -I_V \\ I_V & 0 \end{pmatrix}$, where $I_V$ is the identity map on V. This corresponds to the complex structure on the tensor product $\mathbb{C} \otimes_{\mathbb{R}} V$. Compatibility with other structures: If B is a bilinear form on V, then we say that J preserves B if $B(Ju, Jv) = B(u, v)$ for all $u, v \in V$. An equivalent characterization is that J is skew-adjoint with respect to B: $B(Ju, v) = -B(u, Jv)$. If g is an inner product on V, then J preserves g if and only if J is an orthogonal transformation. Likewise, J preserves a nondegenerate, skew-symmetric form ω if and only if J is a symplectic transformation (that is, if $\omega(Ju, Jv) = \omega(u, v)$). For symplectic forms ω, an interesting compatibility condition between J and ω is that $\omega(u, Ju) > 0$ holds for all non-zero u in V.
If this condition is satisfied, then we say that J tames ω (synonymously: that ω is tame with respect to J; that J is tame with respect to ω; or that the pair $(\omega, J)$ is tame). Compatibility with other structures: Given a symplectic form ω and a linear complex structure J on V, one may define an associated bilinear form $g_J$ on V by $g_J(u, v) = \omega(u, Jv)$. Because a symplectic form is nondegenerate, so is the associated bilinear form. The associated form is preserved by J if and only if the symplectic form is. Moreover, if the symplectic form is preserved by J, then the associated form is symmetric. If in addition ω is tamed by J, then the associated form is positive definite. Thus in this case V is an inner product space with respect to $g_J$. Compatibility with other structures: If the symplectic form ω is preserved (but not necessarily tamed) by J, then $g_J$ is the real part of the Hermitian form (by convention antilinear in the first argument) $h_J : V_J \times V_J \to \mathbb{C}$ defined by $h_J(u, v) = g_J(u, v) + i\,\omega(u, v) = \omega(u, Jv) + i\,\omega(u, v)$. Relation to complexifications: Given any real vector space V, we may define its complexification by extension of scalars: $V_{\mathbb{C}} = V \otimes_{\mathbb{R}} \mathbb{C}$. This is a complex vector space whose complex dimension is equal to the real dimension of V. It has a canonical complex conjugation, defined by $\overline{v \otimes z} = v \otimes \bar{z}$. If J is a complex structure on V, we may extend J by linearity to $V_{\mathbb{C}}$: $J(v \otimes z) = J(v) \otimes z$. Since $\mathbb{C}$ is algebraically closed, J is guaranteed to have eigenvalues satisfying $\lambda^2 = -1$, namely $\lambda = \pm i$. Thus we may write $V_{\mathbb{C}} = V^{+} \oplus V^{-}$, where $V^{+}$ and $V^{-}$ are the eigenspaces of +i and −i, respectively. Complex conjugation interchanges $V^{+}$ and $V^{-}$. The projection maps onto the $V^{\pm}$ eigenspaces are given by $P^{\pm} = \tfrac{1}{2}(1 \mp iJ)$, so that $V^{\pm} = \{ v \otimes 1 \mp Jv \otimes i : v \in V \}$. There is a natural complex linear isomorphism between $V_J$ and $V^{+}$, so these vector spaces can be considered the same, while $V^{-}$ may be regarded as the complex conjugate of $V_J$. Note that if $V_J$ has complex dimension n, then both $V^{+}$ and $V^{-}$ have complex dimension n, while $V_{\mathbb{C}}$ has complex dimension 2n. Abstractly, if one starts with a complex vector space W and takes the complexification of the underlying real space, one obtains a space isomorphic to the direct sum of W and its conjugate: $W_{\mathbb{C}} \cong W \oplus \overline{W}$. Extension to related vector spaces: Let V be a real vector space with a complex structure J. The dual space V* has a natural complex structure J* given by the dual (or transpose) of J. The complexification of the dual space $(V^*)_{\mathbb{C}}$ therefore has a natural decomposition $(V^*)_{\mathbb{C}} = (V^*)^{+} \oplus (V^*)^{-}$ into the ±i eigenspaces of J*. Under the natural identification of $(V^*)_{\mathbb{C}}$ with $(V_{\mathbb{C}})^*$, one can characterize $(V^*)^{+}$ as those complex linear functionals which vanish on $V^{-}$. Likewise $(V^*)^{-}$ consists of those complex linear functionals which vanish on $V^{+}$. Extension to related vector spaces: The (complex) tensor, symmetric, and exterior algebras over $V_{\mathbb{C}}$ also admit decompositions. The exterior algebra is perhaps the most important application of this decomposition. In general, if a vector space U admits a decomposition $U = S \oplus T$, then the exterior powers of U can be decomposed as follows: $\Lambda^r U = \bigoplus_{p+q=r} (\Lambda^p S) \otimes (\Lambda^q T)$. A complex structure J on V therefore induces a decomposition $\Lambda^r V_{\mathbb{C}} = \bigoplus_{p+q=r} \Lambda^{p,q} V_J$, where $\Lambda^{p,q} V_J := (\Lambda^p V^{+}) \otimes (\Lambda^q V^{-})$. All exterior powers are taken over the complex numbers. So if $V_J$ has complex dimension n (real dimension 2n), then $\dim_{\mathbb{C}} \Lambda^{p,q} V_J = \binom{n}{p}\binom{n}{q}$. The dimensions add up correctly as a consequence of Vandermonde's identity.
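As a concrete check of this dimension count (a worked example, not from the source), take n = 2, so that $V_{\mathbb{C}}$ has complex dimension 4 and $\dim_{\mathbb{C}} \Lambda^2 V_{\mathbb{C}} = \binom{4}{2} = 6$. The decomposition $\Lambda^2 V_{\mathbb{C}} = \Lambda^{2,0} V_J \oplus \Lambda^{1,1} V_J \oplus \Lambda^{0,2} V_J$ has dimensions $\binom{2}{2}\binom{2}{0} + \binom{2}{1}\binom{2}{1} + \binom{2}{0}\binom{2}{2} = 1 + 4 + 1 = 6$, an instance of Vandermonde's identity $\sum_{p+q=r} \binom{n}{p}\binom{n}{q} = \binom{2n}{r}$.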
Extension to related vector spaces: The space of (p,q)-forms Λp,q VJ* is the space of (complex) multilinear forms on VC which vanish on homogeneous elements unless p are from V+ and q are from V−. It is also possible to regard Λp,q VJ* as the space of real multilinear maps from VJ to C which are complex linear in p terms and conjugate-linear in q terms. Extension to related vector spaces: See complex differential form and almost complex manifold for applications of these ideas.
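As a concrete illustration (an added example, not part of the original text): take V = R² with the standard complex structure J(x, y) = (−y, x), so that VJ ≅ C. Writing dz = dx + i dy and dz̄ = dx − i dy, one checks that dz spans (V*)+ and dz̄ spans (V*)−, and

\[
dz \wedge d\bar{z} = (dx + i\,dy) \wedge (dx - i\,dy) = -2i\, dx \wedge dy,
\]

so Λ¹,¹VJ* is spanned by dz ∧ dz̄; the rescaling (i/2) dz ∧ dz̄ = dx ∧ dy recovers the standard area form, the simplest example of a (1,1)-form.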
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reflexive verb** Reflexive verb: In grammar, a reflexive verb is, loosely, a verb whose direct object is the same as its subject, for example, "I wash myself". More generally, a reflexive verb has the same semantic agent and patient (typically represented syntactically by the subject and the direct object). For example, the English verb to perjure is reflexive, since one can only perjure oneself. In a wider sense, the term refers to any verb form whose grammatical object is a reflexive pronoun, regardless of semantics; such verbs are also more broadly referred to as pronominal verbs, especially in the grammar of the Romance languages. Other kinds of pronominal verbs are reciprocal (they killed each other), passive (it is told), subjective, and idiomatic. The presence of the reflexive pronoun changes the meaning of a verb, e.g., Spanish abonar to pay, abonarse to subscribe. There are languages that have explicit morphology or syntax to transform a verb into a reflexive form. In many languages, reflexive constructions are rendered by transitive verbs followed by a reflexive pronoun, as in English -self (e.g., "She threw herself to the floor."). English employs reflexive derivation idiosyncratically, as in "self-destruct". Indo-European languages: Romance and Slavic languages make extensive use of reflexive verbs and reflexive forms. Indo-European languages: In the Romance languages, there are nonemphatic clitic reflexive pronouns and emphatic ones. In Spanish, for example, the particle se encliticizes to the verb's infinitive, gerund, and imperative (lavarse "to wash oneself"), while in Romanian, the particle procliticizes to the verb (a se spăla "to wash oneself"). Full reflexive pronouns or pronominal phrases are added for emphasis or disambiguation: Me cuido a mí mismo "I take care of myself" (mismo combines with the prepositional form of the pronoun mí to form an intensive reflexive pronoun). Indo-European languages: The enclitic reflexive pronoun sa/se/si/się is used in Western and South Slavic languages, while Eastern Slavic languages use the suffix -sja (-ся). There is also the non-clitic emphatic pronoun sebja/себя, used to emphasize the reflexive nature of the act; it is applicable only to "true" reflexive verbs, where the agent performs a (transitive) action on itself. Indo-European languages: The Slavic languages use the same reflexive pronoun for all persons and numbers, while the Romance and North Germanic languages have a special third-person pronoun that cliticizes, and the other Germanic languages do as well, without cliticizing. This is illustrated in the following table for the word "to recall" (e.g., Je me souviens means "I recall", Tu te souviens means "You recall", and so on). Indo-European languages: In all of these language groups, reflexive forms often present an obstacle for foreign learners (notably native speakers of English, where the feature is practically absent) due to the variety of uses. Even in languages which contain the feature, it is not always applicable to the same verbs and uses (although a common subset can be generally extracted, as outlined below). For example, the Spanish reflexive construct "se hundió el barco" ("the boat sank") has no reflexive equivalent in some Slavic languages (which use an intransitive equivalent of sink), though for example Czech and Slovak do use a reflexive verb: "loď se potopila"/"loď sa potopila". Reflexive verbs can have a variety of uses and meanings, which often escape consistent classification.
Some commonly identified uses across languages are outlined below. For example, Davies et al. identify 12 uses for Spanish reflexive constructions, while Vinogradov divides Russian reflexive verbs into as many as 16 groups. Indo-European languages: Martin Haspelmath also draws a useful distinction between the reflexive types mentioned below, which he calls introverted reflexives, and so-called extroverted reflexives, which are used for verbs that are usually not reflexive, like hate oneself, love oneself, hear oneself, and kill oneself. Some Indo-European languages have a different reflexive morpheme for extroverted reflexives. For example: See how the Russian ненавидеть себя (nenavidet' sebja) "to hate oneself", which uses a reflexive pronoun, compares to мыться (myt'-sja) "to wash (oneself)", which uses a reflexive suffix (Russian can also say мыть себя (myt' sebja), with a reflexive pronoun, but only when the pronoun needs to be stressed for emphasis or contrast). Indo-European languages: Or Dutch "zij haat zichzelf" "she hates herself", versus "zij wast zich" "she washes (herself)". The distinction exists similarly in English, where introverted reflexive verbs usually have no reflexive pronoun, unlike extroverted ones. In ancient Greek, the introverted reflexive was expressed using the middle voice rather than a pronoun; similarly, in modern Greek, it is expressed using the middle usage of the mediopassive voice. On the other hand, the extroverted reflexive was a true reflexive in ancient Greek and modern Greek. Similarly, Claire Moyse-Faurie distinguishes between middle and reflexive in Oceanic languages in her online articles on the subject. Properly reflexive The "true" (literal) reflexive denotes that the agent is simultaneously the patient. The verb is typically transitive and can be used in a non-reflexive meaning as well. Reciprocal "Reciprocal" reflexive denotes that the agents perform mutual actions among themselves, as in English constructions using "each other". In most cases, transitive verbs are used here as well. Indo-European languages: In modern Scandinavian languages, the passive (or more properly mediopassive) voice is used for medial, especially reciprocal, constructions. Some examples from Danish are: Maria og Peter skændes; "Mary and Peter are bickering", lit. "Mary and Peter are scolded by each other." Maria og Peter blev forlovet; "Mary and Peter got engaged [to each other]." (The hypothetical form **kysses (kiss each other) is not often, if ever, seen in Danish; however, it will likely be understood by most native speakers, indicating that the mediopassive voice is still at the very least potentially productive in Danish. An expression like "de kysses uafladeligt" (they kiss each other all the time) could very well be used for humorous purposes.) Autocausative "Autocausative" reflexive denotes that the (usually animate) "referent represented by the subject combines the activity of actor and undergoes a change of state as a patient": Anticausative "Anticausative" reflexive denotes that the (usually inanimate) subject of the verb undergoes an action or change of state whose agent is unclear or nonexistent. Indo-European languages: Intransitive or impersonal "Intransitive" forms (also known as "impersonal reflexive" or "mediopassive") are obtained by attaching the reflexive pronoun to intransitive verbs. The grammatical subject is either omitted (in pro-drop languages) or is a dummy pronoun (otherwise).
Thus, those verbs are defective, as they have only the 3rd person singular (masculine or neuter, depending on language) form. Indo-European languages: In Slavic languages, practically "the only condition is that they can be construed as having a human agent. The applied human agent can be generic, or loosely specified collective or individual." In many cases, there is a semantic overlap between impersonal/anticausative/autocausative constructs and the passive voice (also present in all Romance and Slavic languages). On one hand, impersonal reflexive constructs have a wider scope of application, as they are not limited to transitive verbs like the canonical passive voice. On the other hand, those constructs can have a slight semantic difference or markedness. Indo-European languages: Inherent "Inherent" or "pronominal" (inherently or essentially) reflexive verbs lack the corresponding non-reflexive verb from which they could be synchronically derived. In other words, "se is an inherent part of an unergative reflexive or reciprocal verb with no meaning of its own, and an obligatory part of the verb's lexical entry": Hebrew: In Hebrew, reflexive verbs are in binyan הִתְפַּעֵל. A clause whose predicate is a reflexive verb may never have an object, but may have other modifiers. e.g. האיש התפטר מעבודתו - the man resigned from his job. האיש התמכר לסמים - the man got addicted to drugs. האיש התקלח בבוקר - the man 'showered himself', i.e., took/had a shower in the morning. האישה הסתפרה אצל אבי - the woman had a haircut/had her hair done at Avi's. Inuktitut: A reflexive verb is a verb which must have both an object and a subject, but where, in some contexts, the object and the subject are identical. In Inuktitut, this situation is expressed by using a specific verb with a non-specific ending affixed to it. Australian languages: Guugu Yimithirr In Guugu Yimithirr (a member of the Pama-Nyungan language family) reflexivity can combine with past (PST), nonpast (NPST), and imperative (IMP) tense marking to form the verbal suffixes /-dhi/ (REFL+PST), /-yi/ (REFL+NPST) and /-ya/ (REFL+IMP) respectively. See the following example, where the verb waarmbal, a transitive verb meaning 'send back', is detransitivized to mean 'return', taking only one nominal argument with an agentive role: The same valence-reduction process occurs for the transitive wagil 'cut'. In each of these cases, the reflexively inflected verb now forms a new stem to which additional morphology may be affixed; for example, waarmba-adhi 'returned' may become waarmba-adhi-lmugu (return-REFL+PST-NEG) 'didn't return.' As with many Pama–Nyungan languages, however, verbs in the lexicon belong to conjugation classes, and a verb's class may restrict the ease with which it can be made reflexive. These reflexive morphemes are largely employed for expressing reciprocity as well; however, in cases where there is potential ambiguity between a reflexive and a reciprocal interpretation, Guugu Yimithirr has an additional means of emphasizing the reflexive (i.e., by the agent upon the agent) interpretation: namely, the /-gu/ suffix on the grammatical subject. See for example the following contrast between the reciprocal and reflexive: Reciprocal Reflexive Gumbaynggir Another Pama–Nyungan language, Gumbaynggir, has a verbal suffix /-iri/ to mark reciprocity and to detransitivize transitive verbs, e.g. Australian languages: Kuuk Thaayorre As with Guugu Yimithirr, Kuuk Thaayorre, a Paman language, has some ambiguity between reflexive and reciprocal morphemes and constructions.
Ostensibly, there are two suffixes, /-e/ and /-rr/, for reflexivity and reciprocity respectively; however, in practice it is less clear-cut. Take for example the presence of the reciprocal suffix in what should seem like a simple reflexive example. Australian languages: Or the reverse, wherein an apparent reciprocal assertion has reflexive morphology: In actuality, the broader function of the reciprocal verb is to emphasize the agentivity of the grammatical subject(s), sometimes to directly counteract expectations of an external agent, as in the first example above. The combination of the reciprocal verb with the reflexive pronoun highlights the notion that the subject acted highly agentively (as in a mutual/symmetric reciprocal event) but was also the undergoer of their own action (as in a reflexive event where agentivity is backgrounded, e.g. "I soiled myself"). Conversely, the reflexive verb can have precisely this function of backgrounding the agentivity of the subject and bringing the focus to the effect that was wrought upon the undergoer(s), as in the second example above. Uralic languages: Hungarian language "The door opened" is expressed in Hungarian as "Az ajtó kinyílt", from the verb kinyílik, while the passive voice is rare and archaic. There are numerous verb pairs where one element is active and the other expresses the middle voice, something happening apparently on its own, rendered in English as "to become, get, grow, turn" (something). See also the grammatical voice of Hungarian verbs and the Wiktionary entries of -ul/-ül, -ódik/-ődik and -odik/-edik/-ödik, three suffix groups that form such verbs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Boron triiodide** Boron triiodide: Boron triiodide is a chemical compound of boron and iodine with chemical formula BI3. It has a trigonal planar molecular geometry. It is a crystalline solid, which reacts vigorously with water to form hydroiodic acid and boric acid. Its dielectric constant is 5.38 and its heat of vaporization is 40.5 kJ/mol. At extremely high pressures, BI3 becomes metallic at ~23 GPa and is a superconductor above ~27 GPa. Preparation: Boron triiodide can be prepared by the direct reaction of boron with iodine at 209.5 °C (409.1 °F). It can also be prepared by a halide exchange between boron trichloride and hydrogen iodide: BCl3 + 3 HI → BI3 + 3 HCl (this reaction requires high temperature).
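The direct-combination route mentioned above corresponds to the balanced equation below (written out here for clarity, assuming the standard stoichiometry of boron and iodine):

\[
2\,\mathrm{B} + 3\,\mathrm{I_2} \rightarrow 2\,\mathrm{BI_3}
\]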
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cat train** Cat train: A Cat train is a train of one or more supply sleds/sleighs hauled by a continuous track vehicle, and is typically used in roadless areas. They are so named for the caterpillar tracks of the hauling vehicle. In northern climates, they were used to haul supplies to isolated communities in winter before engineers such as John Denison created modern winter roads, which enabled standard winterized semi-trucks and trailers to haul these loads and heavier freight. Cat trains are still used in areas where winter roads cannot be built, such as along Hudson Bay, as seen in Season 9 of Ice Road Truckers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Economy car** Economy car: Economy car is a term mostly used in the United States for cars designed for low-cost purchase and operation. Typical economy cars are small (compact or subcompact), lightweight, and inexpensive to both produce and purchase. Stringent design constraints generally force economy car manufacturers to be inventive. Many innovations in automobile design were originally developed for economy cars, such as the Ford Model T and the Austin Mini. Definition: The precise definition of what constitutes an economy car has varied with time and place, based on the conditions prevailing at the time, such as fuel prices, disposable income of buyers, and cultural mores. It typically refers to a car that is designed to be small and lightweight to offer low-cost operation. In any given decade globally, there has generally been some rough consensus on what constituted the minimum necessary requirements for a highway-worthy car, constituting the most economical car possible. However, whether that consensus could be a commercial success in any given country depended on local culture. Thus in any given decade, every country has had a rough national consensus on what constituted the minimum necessary requirements for the least expensive car that wasn't undesirable – that is, one that had some commercially attractive amount of market demand – making it a mainstream economy car. In many countries at various times, mainstream economy and maximum economy have been one and the same. Background: From its inception into the 1920s, the Ford Model T fulfilled both of these roles simultaneously in the U.S. and in many markets around the world. In Europe and Japan in the 1920s and 1930s, this was achieved by the much smaller Austin 7 and its competitors and derivatives. From the 1940s and into the 1960s, the Volkswagen Beetle played both roles throughout much of the world, particularly in Germany and Latin America – though, due to its relatively high fuel consumption, British, French, Italian, and Japanese models with better fuel economy were able to capture the maximum-economy position in their home countries. By the 1970s the hatchback had become the standard body type for new economy car models. From 1960 to 1994 the Soviet Union sold the economy car Zaporozhets (and, in the 1980s, its successors, the ZAZ-1102 and VAZ-1111) on the world market. In the mid-1980s, the Yugoslavian Zastava Koral (Yugo) was being sold as the cheapest new car on the U.S. market. South Korea's Hyundai models also sold well in the U.S., and have gone on to be successful around the world. Since the 1990s, the automotive industry has become extensively globalized: all major manufacturers are multinational corporations using globally sourced raw materials and components, and most have moved assembly to the lowest-labour-cost countries. Today, a majority of major manufacturers offer economy cars, including at least one truly small car that may fall into subclassifications such as subcompact car, supermini, B-segment, city car, microcar, and others. Gordon Murray, the Formula 1 and McLaren F1 designer, said when designing his new Murray T.25 city car: "I would say that building a car to sell for six thousand pounds and designing that for a high-volume production, where you have all the quality issues under control, is a hundred times more difficult than designing a McLaren F1, or even a racing car. It is certainly the biggest challenge I've ever had from a design point of view."
History: 1886-1920 The history of the automobile, after many experimental models dating back at least a hundred years, started with the first production car – the 1886 Benz Tricycle. This began a period later known as the Brass era, which is considered to run from 1890 to 1918 in the U.S. In the UK this is split into the pre-1905 Veteran era and the Edwardian era to 1918. The U.S. Veteran era is usually dated pre-1890. History: In the 1890s and into the first decade of the twentieth century, the motorized vehicle was considered a replacement for the carriages of the rich, or simply a dangerous toy that annoyed and inconvenienced the general public. For example, the 1908 children's book The Wind in the Willows pokes fun at early privileged motorists. The automotive industry in France was the world leader during this period – the Locomotives Act 1865 (the 'Red Flag Act') had obstructed automotive development in the UK until it was mostly repealed by the Locomotives on Highways Act 1896. The high wheeler was an early car body style virtually unique to the United States. It was typified by large-diameter slender wheels, frequently with solid tires, to provide ample ground clearance on the primitive roads in much of the country at the turn of the 20th century. For the same reason, it usually had a wider track than normal automobiles. History: The first car to be marketed as an 'economy car' was the 1901–1907 Oldsmobile Curved Dash – it was produced by the thousands, with over 19,000 built in all. It was inspired by the buckboard type of horse and buggy (used like a small two-seat pickup truck) popular in rural areas of the U.S. It had two seats, but was less versatile than the vehicle that inspired it. It went into production after a fire at the Oldsmobile plant, when the prototype was saved by a nightwatchman named Stebbins (who later became the Mayor of Detroit); it was the only product left available to the company to produce. Although cars were becoming more affordable before it was launched, the 1908–1927 Ford Model T is considered to be the first true economy car, because the very few previous vehicles at the bottom of the market were 'horseless carriages' rather than practical cars. The major manufacturers at the time had little interest in low-priced models. The first 'real' cars had featured the FR layout first used by the French car maker Panhard, and so did the Model T. History: Henry Ford declared at the launch of the vehicle: I will build a car for the great multitude. It will be large enough for the family, but small enough for the individual to run and care for. It will be constructed of the best materials, by the best men to be hired, after the simplest designs that modern engineering can devise. But it will be so low in price that no man making a good salary will be unable to own one – and enjoy with his family the blessing of hours of pleasure in God's great open spaces. History: The Ford Model T was a large-scale mass-produced car; that very innovation, along with the simple, inexpensive design it required, allowed it to be the first car to exemplify the ideals of the economy car. Although it followed the Panhard mechanical layout, it used an epicyclic gearbox, more like later automatic gearboxes than the Panhard type of manual gearbox, which in a developed form is still in common use today. The innovations involved in making it a successful design were in its production and materials technology, particularly the use of new vanadium steel alloys.
Model T production was a leading example of the Taylorism school of scientific management (also known as Fordism), and its production techniques evolved at the Highland Park Ford Plant, which opened in 1910 after the company outgrew the Piquette Avenue Plant. The River Rouge Plant, which opened in 1919, was the most technologically advanced in the world: raw materials entered at one end and finished cars emerged from the other. The innovation of the moving assembly line, inspired by the 'dis-assembly' lines of the Chicago meat packing industry, reduced production time from twelve and a half hours to just an hour and thirty-three minutes per car. Black was the only colour available because it was the only paint that would dry in the required production time. The continuous improvement of production methods, and economies of scale from larger and larger scale production, allowed Henry Ford to progressively lower the price of the Model T throughout its production run. It was far less expensive, smaller, and more austere than its hand-built pre-First World War contemporaries. The size of the Model T was arrived at by matching its track to the width of the ruts in the unsurfaced rural American roads of the time, ruts made by horse-drawn vehicles. It was specifically designed with a large degree of axle articulation and a high ground clearance to deal with these conditions effectively. It had an understressed 177 cu in (2.9 L) engine. It set the template for American vehicles being larger than comparable vehicles in other countries, which would later on have economy cars scaled to their narrower roads with smaller engines. History: In 1912 Edward G. Budd founded the Budd Company, which initially specialized in the manufacture of pressed-steel frames for automobiles. This built on his railroad experience: in 1899 he had taken his knowledge of pressed steel to the railroad industry, working with the Pullman Company on a contract for the Pennsylvania Railroad to build the first all-steel railcar. History: In 1913 in the UK, the 1018 cc "Bullnose" Morris Oxford was the first model launched by Morris Motors. Only 1302 were made. The Oxford was available as a two-seater or van, but the chassis was too short to allow four-seat bodies to be fitted. It made extensive use of bought-in components, including many from the U.S., to reduce costs. The 1915-1919 Morris Cowley (about 1400 produced), powered by a new US Continental engine, was a bigger, stronger, better-finished version of the first Oxford. The post–First World War Oxford was a deluxe version of that 1915-1919 Cowley, itself now made plainer. They were larger cars with 50% bigger engines than the 1913 Oxford. By 1925 Cowleys and Oxfords were 41 per cent of British private car production, and limousine and landaulet bodies for 14/28 Oxfords were supplied ex-factory. Morris then added a commercial vehicle operation and bought Wolseley Motors the following year. Cecil Kimber, a Morris employee, founded MG (Morris Garages), aiming to sell more Morrises. After the Second World War Morris Motors, having swept in Riley, merged with the Austin Motor Company, together forming the British Motor Corporation. History: In 1913 the British Trojan company had its prototype ready for production. It had a two-stroke engine with four cylinders arranged in pairs, and each pair shared a common combustion chamber - a doubled-up version of what would later be called the "split-single" engine.
The pistons in each pair drove the crankshaft together, as they were coupled to it by a V-shaped connecting rod. For this arrangement to work, it is necessary for the connecting rod to flex slightly, which goes completely against normal practice. The claim was that each engine had only seven moving parts: four pistons, two connecting rods and a crankshaft. This was connected to a two-speed epicyclic gearbox, to simplify gear changing, and a chain to the rear wheels. Solid tyres were used, even though these were antiquated for car use, to prevent punctures, and very long springs were used to give some comfort. Before production could start, war broke out, and from 1914 to 1918 the company made tools and gauges for the war effort. History: In 1914 Ford was producing half a million Model Ts a year, with a sale price of less than US$500. This was more than the rest of the U.S. auto industry combined and ten times the total national car production of 1908, the year of the car's launch. Also in that year Ford made headlines by increasing the minimum wage of his workers from $2.83 for a nine-hour day to $5.00 for an eight-hour day: to combat low workforce morale and employee turnover problems caused by the repetitive and stressful nature of working on the production line, and, more radically, to turn his semi-skilled workers into potential customers. The Ford Model T was the first automobile produced in many countries at the same time. It was the first 'World Car': cars were being produced in Canada and in Manchester, England starting in 1911, and were later assembled in Germany, Argentina, France, Spain, Denmark, Norway, Belgium, Brazil, Mexico, and Japan. At the New York Motor Show in January 1915, William C. Durant, the head of Chevrolet (and founder of GM), launched the Chevrolet Four-Ninety, a stripped-down version of the Series H, to compete with Henry Ford's Model T; it went into production in June. To aim directly at Ford, Durant said the new car would be priced at US$490 (the source of its name), the same as the Model T touring. Its introductory price was US$550, however, and it was reduced to US$490 only later, when the electric starter and lights were made a US$60 option. Henry Ford responded by reducing the Model T to US$440. In 1916 Edward G. Budd's first big order for the Budd Company came from the Dodge brothers, who purchased 70,000 bodies, mounting the steel bodies onto conventional chassis frames. History: 1920s During the 1920s Edward G. Budd's pressed-steel bodies were fast replacing traditional coachbuilt bodies all around the world. These were fully closed, roofed bodies; until this time open tourer bodies had been the standard on the market. Budd envisioned pushing his technology even further, and in 1924 he found another visionary in André Citroën. By 1934, they had developed the Citroën Traction Avant, the first unibody, pressed-steel automobile. Budd also pioneered the use of electric arc welding in automobile manufacturing. It would be the 1930s before this technology was generally applied to economy cars. History: The cyclecar was an attempt, in the post-First World War austerity period before 1922, at a form of "four-wheeled motorcycle" with all the benefits of a motorcycle and sidecar in a more stable package. History: In 1920 Trojan in Britain made its first series of six cars at a works in Croydon, and the final revised production version was shown at the 1922 London Motor Show.
An agreement was reached with Leyland Motors to produce the cars at their Kingston upon Thames factory, where work on reconditioning ex-RAF wartime trucks was running down. This arrangement would continue until 1928, when Leyland wanted the factory space for truck production. During the nearly seven years of the agreement 11,000 cars and 6700 vans were made. The car, known as the Trojan Utility Car, went onto the market at £230, reducing to £125 in 1925, the same as a Model T Ford. Nothing was conventional. Rather than a chassis, the car had a punt-shaped tray which housed the engine and transmission below the seats. This is a similar idea to the chassis-less design of the contemporary 1922 Italian Lancia Lambda luxury car. The transmission used a chain to drive the solid-tyre-shod wheels. The 1527 cc engine, to the ingenious Hounsfield design, was started by pulling a lever on the right of the driver. To prove how economical the car was to run, the company ran the slogan "Can you afford to walk?" and calculated that over 200 miles (320 km) it would cost more in shoes and socks than to cover the distance by Trojan car. History: The astronomical success of the Model T accelerated after the First World War, and by the time Ford made his 10 millionth car, half of all cars in the world were Fords. It was so successful that Ford did not purchase any advertising between 1917 and 1923; more than 15 million Model Ts were manufactured, reaching a rate of 9,000 to 10,000 cars a day in 1925, or 2 million annually, more than any other model of its day, at a price of just $240. The need for constant reductions in price through the 1920s reflected increasing competition from newer designs for the relatively unchanged and increasingly obsolescent Model T. History: In 1923 Chevrolet developed a new car to compete with the Model T: the Chevrolet Series M 'Copper-Cooled', an air-cooled car designed by Charles Kettering, the General Motors engineer at AC Delco (who invented the points/condenser ignition system that was in use until the 1980s). It was a rare failure for him, due to uneven cooling of the inline four-cylinder engine. History: Most development of small economy cars occurred in Europe. There was less emphasis on long-distance automobile travel, and a need for vehicles that could navigate narrow streets and alleys in towns and cities (many unchanged since medieval times) and the narrow and winding roads commonly found in the European countryside. Ettore Bugatti designed a small car for Peugeot, the 1911 Peugeot Bébé Type 19. It had an 850 cc 4-cylinder engine. The Citroën Type A was the first car produced by Citroën, from June 1919 to December 1921 in Paris. Citroën had been established to produce the double bevel gears that its logo resembles, but had ended the First World War with large production facilities from the manufacture of much-needed artillery shells for the French army. André Citroën was a keen adopter of U.S. car manufacturing ideas and technology in the 1920s and 1930s, and re-equipped his factory as a scaled-down version of the Ford River Rouge Plant, which he had visited in Detroit, Michigan. The Type A was advertised as "Europe's first mass production car" and reached a production total of 24,093 vehicles. The Opel 4 PS, Germany's first 'people's car', popularly known as the Opel Laubfrosch (Opel Treefrog), was a small two-seater car introduced by the then family-owned auto maker Opel early in 1924, which bore an uncanny resemblance to the little Torpedo Citroën 5 CV of 1922.
History: On an even smaller scale were European cars such as the 747 cc Austin Seven (which made cyclecars obsolete overnight). The Austin 7 was considerably smaller than the Ford Model T. The wheelbase was only 1,905 millimetres (6 ft 3 in), and the track only 1,016 millimetres (40 in). Equally, it was lighter – less than half the Ford's weight at 360 kilograms (794 lb). The engine required for adequate performance was therefore equally reduced, and the 747 cubic centimetres (45.6 cu in) sidevalve was quite capable with a modest 7 kilowatts (10 bhp) output. The design also started to catch on in Japan during the same period, as the Datsun Type 11 (which may have been a pirated copy), at the start of Japan's own automobile industry. It was also produced as the BMW Dixi and BMW 3/15 in Germany, by Rosengart in France with French-styled bodywork, and by the American Austin Car Company (later American Bantam) in the U.S. with American styling. It displaced the motorcycle and sidecar combination that was popular in Europe in the 1920s, and it spawned a whole industry of 'specials' builders. Swallow Sidecars switched to making cars based on Austin Seven chassis during the 1920s, then made their own complete cars in the 1930s as SS; because of the connotations the initials acquired in Nazi Germany, the company later changed its name to Jaguar. The Seven continued to be produced until the late 1930s, along with an updated and restyled closed body known as the "Big Seven" until World War II – still on the early-1920s running gear, but with a slightly enlarged chassis and widened track. History: In the late 1920s, General Motors finally overtook Ford as the U.S. new car market doubled in size and fragmented into niches on a wave of prosperity, with GM producing a range of cars to match. This included a Chevrolet economy car that was just an entry-level model for the range – only a small part of GM head Alfred P. Sloan's marketing strategy of "a car for every purse and purpose". Harley Earl was appointed as head of the newly formed GM "Art and Color Section" in 1927. Harley Earl and Alfred P. Sloan implemented planned obsolescence and the annual model change to emphasise design as an engine for the success of the company's products. This moved cars from being utilitarian items to fashionable status symbols that needed regular replacement "to keep up with the Joneses." Later, in 1937, the Art and Color Section was renamed the Styling Section, and a few years afterward became one of the GM technical staff operations as the Styling Staff. It was funded by high-interest consumer credit with low regular payments, as was the 1920s boom in other consumer durable products. It marked the beginning of mass-market consumerism, which had been enabled by the efficiency of mass production and the moving production line. Until this time, manufacturers of consumer goods had been concerned that the market would become saturated and demand would dry up. The seller's market in new cars in the U.S. was over, as customers wanted choice. The 'one model' policy had nearly bankrupted the Ford Motor Company. By the end of production in 1927 the Model T seemed outdated, and it was replaced by the Model A. The Ford Model T was voted Car of the Century on December 18, 1999 in Las Vegas, Nevada. In 1929 Chevrolet replaced the 171 cu in (2.8 L) straight-4 engine that dated from 1913 with the 194 cu in (3.2 L) Stovebolt Six engine that was to last until the 1960s as Chevrolet's base engine. A few years later Ford developed the Model 18 with the 221 cu in (3.6 L) flathead V8.
The same car was available with a slightly reworked Model A engine, marketed until 1933 (in the U.S.) as the Model B. In Europe, it remained in the Ford lineup as the Ford V8 in Britain in the 1930s, which was re-styled and relaunched as the post-war Ford Pilot. They were viewed as large cars in Europe. The 1932 Ford V8 (Model 18) coupe became the car of choice for post-war hot rodders. It was the first V8 engine in a low-priced car, and along with the Chevrolet 6, showed how the U.S. was diverging from the rest of the world in its ideas about what constituted a basic economy car. History: In 1928 Morris launched the first Morris Minor in Britain to compete with the Austin Seven. Also that year German motorcycle manufacturer DKW launched their first car, the P15, a rear-wheel-drive, wood-and-fabric-bodied monocoque car powered by a 600 cc inline two-cylinder two-stroke engine. History: Also in the 1920s, Ford (with the Model T in Manchester, England) and General Motors (who took over Opel in Germany and Vauxhall in Britain) expanded into Europe. Most Ford and GM European cars, especially economy cars, were technologically conservative, all rear-wheel-drive and scaled down to a smaller European size, with improvements focused mainly on styling (apart from the introduction of the 1935 monocoque Opel Olympia, and of the MacPherson strut by Ford in the 1950s) until the late 1970s and early 1980s. History: 1930s-1945 In 1931 the DKW F1 was launched. This was the first successful mass-produced front-wheel-drive car in the world. It was priced at 1,700 ℛℳ. (The British 1928-30 Alvis 'FWD' models had handling problems and only 150 were made. The British 1929 BSA was a three-wheeled competitor to Morgan and the motorcycle-combination market; its 1931 four-wheeler was very short-lived. The 1929 U.S. Cord L-29 was seriously flawed, and production ended at 4,429. The 1930 U.S. Ruxton made about 500; production lasted for only four months.) The F1 featured a front-engine, front-wheel-drive layout using a water-cooled 494 cc or 584 cc transverse two-cylinder two-stroke engine with chain drive. This was developed through the 1930s into the 1938 F8 model and the F9, which was not put into production because World War II started; about 250,000 of these front-wheel-drive DKWs had been made. By this time DKW had become the largest manufacturer of motorcycles in the world. Their two-stroke engine technology was to appear in the postwar products of Harley-Davidson, BSA, Trabant, Wartburg, Saab, Subaru, Piaggio, Puch, Kawasaki, Mitsubishi, Mazda, Daihatsu, Honda, and Suzuki. The DKW type of two-stroke engine was replaced by four-strokes in western economy cars by the 1960s, but lived on in stagnating and cash-strapped Communist East Germany's Trabant and Wartburg, and in Communist Poland's FSO Syrena, until the 1980s. History: In the late 1920s in Germany, Josef Ganz, an independent car engineer/inventor and editor of Motor-Kritik magazine, had been a fierce opponent of the status quo of car design. He became a consultant engineer to Adler in December 1930. In the first months of 1931, Ganz constructed a lightweight economy car, or people's car, prototype at Adler with a tubular chassis, a mid-mounted engine, and swing-axle independent rear suspension. After completion in May 1931, Ganz nicknamed his new prototype Maikäfer (German for cockchafer, a species of beetle). In July 1931 he also became a consultant engineer to BMW on the 1932-34 BMW 3/20, successor to the BMW 3/15 model.
It featured transverse-leaf independent front and rear suspension and an updated overhead-valve cylinder-head version of the Austin 7-based engine. After a demonstration of the Adler Maikäfer by Ganz, the German Standard Fahrzeugfabrik company (unrelated to the British 'Standard' company) purchased a license from Ganz to develop and build a small car according to his design. The prototype of this new model, which was to be called Standard Superior, was finished in 1932. It featured a tubular chassis, a mid-mounted engine, and independent wheel suspension with swing-axles at the rear. At about the same time, from 1931 – two years prior to Hitler's accession to power – Ferdinand Porsche founded Dr. Ing. h. c. F. Porsche GmbH, the Porsche company, to offer motor vehicle development work and consulting. Together with Zündapp they developed the prototype Porsche Type 12 "Auto für Jedermann" ("car for everyone"), which was the first time the name "Volkswagen" was used. Porsche preferred a 4-cylinder flat engine, but Zündapp used a water-cooled 5-cylinder radial engine. In 1932 three prototypes were running, but they were not put into production. All three cars were lost during the war, the last in 1945 in Stuttgart during a bombing raid. History: In 1932 ASAP-Škoda in Mladá Boleslav, Bohemia, produced the Škoda 932, a prototype of a streamlined 4-seater two-door car with a rear air-cooled flat-four engine, designed by Karel Hrdlička and Vsevold Korolkov. This car's bodywork closely resembled the small car designs yet to come. In Berlin in February 1933, the first production model of the Standard Superior was introduced at the IAMA (Internationale Automobil- und Motorradausstellung). It had a 396 cc 2-cylinder 2-stroke engine. Because of some criticism of the body design, not least by Josef Ganz in Motor-Kritik, it was followed in April 1933 by a slightly altered model. In November 1933, the Standard Fahrzeugfabrik introduced yet another new and improved model for 1934, which was slightly longer, with one additional window on each side, and had a small seat for children, or luggage space, in the back. This car was advertised as the German Volkswagen. During the early 1930s German car manufacturers one by one adopted the progressive ideas published in Motor-Kritik since the 1920s. In the meantime, in May 1933, the Jewish Josef Ganz was arrested by the Gestapo on trumped-up charges of blackmail of the automotive industry, at the instigation of those he had ferociously criticized. He was eventually released, but his career was systematically destroyed and his life endangered. He fled Germany in June 1934 – the same month Adolf Hitler gave Ferdinand Porsche the brief for designing a mass-producible car for a consumer price of 1,000 ℛℳ. Production of the Standard Superior ended in 1935. The Standard company was forbidden by the Nazis from using the term 'Volkswagen'. History: The Volkswagen Beetle would be the longest-lasting icon of this 1930s era. Adolf Hitler admired the ideals exemplified by the Ford Model T (even though he did not drive himself), and sought the help of Ferdinand Porsche to create a 'people's car' ("Volkswagen") with the same ideals for the people of Germany. This car was to complement the new Autobahns that were to be built; they had been planned under the Weimar Republic, but Hitler took the credit for them. Many of the design ideas were plagiarised from the work of Hans Ledwinka at the Czechoslovakian Tatra company, notably the Tatra T97 and Tatra V570.
It was also suspiciously similar in many ways to the Josef Ganz–designed cars, and it even looked very similar to the Mercedes-Benz 120H prototype of 1931. The Nazi "KdF-Wagen" or "Strength through Joy car" project – named for the Kraft durch Freude (German for Strength through Joy, abbreviated KdF) organisation – ground to a halt before serious production had started because of World War II. The KdF was the Nazi state propaganda organisation set up to promote holiday and leisure activities of the population, approved of, and monitored by, the state. The 'price' of the free or subsidised activities was extensive Nazi ideological indoctrination. When the German car industry was unable to meet Hitler's demand that the Volkswagen be sold at 1,000 ℛℳ or less, the project was taken over by the German Labour Front (Deutsche Arbeitsfront, DAF). Now working for the DAF, Porsche built a new Volkswagen factory at Fallersleben, called "Stadt des KdF-Wagens bei Fallersleben" (Wolfsburg after 1945), at a huge cost which was partly met by raiding the DAF's accumulated assets and misappropriating the dues paid by DAF members. The Volkswagen was sold to German workers on an installment plan whereby buyers made payments and posted stamps in a stamp-savings book which, when full, would be redeemed for the car. Due to the shift to wartime production, no consumer ever received a "KdF-Wagen" (although after the war, Volkswagen did give some customers a DM 200 discount for their stamp-books). The project was not commercially viable, and only government support was able to keep it afloat. The Beetle factory was primarily converted to produce the Kübelwagen (the German equivalent of the jeep). The few Beetles that were produced went to the diplomatic corps and military officials. History: After the war, the Volkswagen company would be founded to produce the car in the new democratic West Germany, where it would be a success. History: From 1936 to 1955, Fiat in Italy produced the advanced and very compact FR-layout Fiat 500 "Topolino", or "little mouse", the precursor of the 1950s Fiat 500; it was designed by Dante Giacosa. The inline four-cylinder 569 cc 13½ hp engine was placed right at the front of the chassis, with the radiator behind it. This allowed for a sloping front and good legroom when combined with lowered seating, and also allowed Fiat to lower the roofline. Although nominally a two-seater, more passengers were often squeezed in behind the seats. Initially it had quarter-elliptic leaf-spring rear suspension with an axle-locating trailing arm; this was upgraded to stronger semi-elliptic springs to cope with overloading by customers. The front suspension was independent, and was used as the basis of the suspension of the first English Cooper racing cars in the 1940s, which became successful in the 1950s. It had a four-speed gearbox (when three speeds were common) and all-hydraulic brakes. It was a similar size to the Austin Seven but much more advanced. It was exported all over the world and produced at the NSU factory in Germany, Steyr-Puch in Austria and Simca in France. It was facelifted after the war with American-influenced 'full-width styling' of the frontal panels, with headlights integrated into the wings/fenders. History: The Fiat 1100 was first introduced in 1937 as an updated version of the 508 "Balilla" (its real name was the 508C), with a look similar to the 1936 Fiat 500 "Topolino" and the larger 1500, with the typical late-thirties heart-shaped front grille, and styling by the emerging designer Dante Giacosa.
It was powered by a 1,089 cc four-cylinder overhead-valve engine. Drive was to the rear wheels through a four-speed gearbox, and for the period its comfort, handling, and performance were prodigious, making it "the only people's car that was also a driver's car". The Steyr 50, a streamlined small car, was introduced in 1936 by the Austrian manufacturer Steyr-Puch. The car had a water-cooled four-cylinder boxer engine driving the rear wheels through a four-speed transmission. It had a similar engine and radiator layout to the Fiat Topolino, which was launched at about the same time. To save room and weight a dynastarter was used, which doubled as the axle of the radiator fan. It was regarded as the "Austrian People's Car" and was affectionately referred to as the Steyr "Baby". Professor Porsche had, despite rumors, not been involved in the design or production of the 50. Moreover, the little Steyr offered better seating and luggage space than Porsche's Volkswagen within a shorter overall length, had a large sheet-metal sliding roof, and was available with hydraulic brakes (instead of the early Volkswagens' cable-operated ones). In early 1938 the car was revised, receiving a more powerful engine and a longer wheelbase. The new model was called the Steyr 55 and went on sale in 1940. A total of 13,000 Steyr "Babies" were sold. The production of Steyr cars was discontinued during World War II, after bombing of the factory. After the war, the factory was rebuilt and specialized in Austrian versions of the Fiat 500 and Fiat 1500. Today the Steyr factory produces the BMW X models for Europe. History: The pre-war European car market was not one market. Trade barriers fragmented it into national markets, apart from luxury cars, where the extra cost of tariffs could actually make cars more exclusive and desirable. The only way for a car maker to enter another national market of a major European car-making country (and its colonial markets of the time) was to open factories there. For example, Citroën and Renault opened factories in England in this period. This situation only changed with the post-war growth of the EEC (European Economic Community) and EFTA. The British RAC (Royal Automobile Club) horsepower taxation system had the secondary function of excluding foreign vehicles. It was specifically targeted at the Ford Model T, which the then government feared would wipe out the fledgling indigenous motor industry. It crippled car engine design in Britain in the inter-war period, causing British car makers to produce under-square, low-revving, long-stroke engines (see the rating formula below). It was abolished after World War II as part of the British export drive for desperately needed hard foreign currency, because it made British cars uncompetitive internationally. The technologically conservative 1930s Morris Eight, Ford Eight (the Ford Model Y, which was related to the German Ford Köln), and Standard Eight (from Standard, which later became Triumph) were named after their RAC horsepower car tax rating. The Ford Model Y had replaced the Model A in Europe in 1932 and ran until 1937. It was a much smaller and lighter car, weighing one third less, with an engine two thirds smaller in capacity than the Model A. The basic Model Y 'Popular' was an important milestone in British economy cars, being the first steel-bodied four-seater saloon to sell for £100; previously the only four-wheeled car to sell for that price had been the two-seater tourer model of the Morris Minor. The Model Y was reworked into the more streamlined Ford 7Y for 1938-1939.
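The distorting effect of the RAC system follows from its rating formula, which depended on the cylinder bore but not the stroke (quoted here for context; D is the bore in inches and n the number of cylinders):

\[
\text{RAC h.p.} = \frac{D^2 n}{2.5}
\]

Since taxable horsepower grew with the square of the bore while the stroke was ignored, designers minimised bore and lengthened stroke, giving the under-square, low-revving engines described above.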
This was restyled again into the Ford Anglia, launched in 1939; initial sales in Britain actually began in early 1940. Production was suspended in early 1942, resumed in mid-1945, and ceased in 1948 after a total of 55,807 had been built. The Anglia was restyled again in 1948; including all production, 108,878 were built. When production as an Anglia ceased in October 1953, it continued as the extremely basic Ford Popular entry-level car until 1959. The Ford Prefect was a differently styled and slightly more upmarket version of the Anglia, launched in October 1938; it remained in production until 1941, returning to the market in 1945. The car was facelifted in 1953 from its original perpendicular or sit-up-and-beg style to a more modern three-box structure, and was sold until 1961. The Anglia, Popular and Prefect sold well for a long time despite their old-fashioned technology: transverse leaf springs and beam axles for front and rear suspension, side-valve engines, and only partly synchromeshed three-speed gearboxes. They sold on price: the supply of cars was limited on the used-car market because of the Second World War, and on the new-car market because of the British government's post-war policy of exporting cars. The Morris Eight, introduced in 1935, was a deliberate close copy of the Ford and served as a lower-cost, more profitable replacement for the 1928-vintage Morris Minor, which had not achieved the hoped-for success. Prices for the Morris started at £112, and it offered more equipment than the Ford and a much more modern design than the ageing Austin Seven, which meant it became the UK's best-selling car by 1939. History: Crosley, a U.S. appliance manufacturer, produced small economy cars of a European rather than American scale from 1939 to 1952 (switching to war production in 1942-45). These featured a variety of innovative in-house designed engines of less than one litre capacity. They were popular in the 1940s due to their high fuel economy during wartime fuel rationing. There was a wide variety of two-door body styles: Sedan, Convertible Sedan, Coupe, Convertible Coupe, Covered Wagon, and Station Wagon. There was also a successful sports car, the Crosley Hotshot. The styling of the 1951 Crosley Super Sport is very similar to that of the 1958 Frogeye/Bugeye Austin-Healey Sprite. Production peaked at 24,871 cars in 1948. Sales began to slip in 1949 as the post-war American economy took off, and even adding the Crosley Hotshot and a combination farm tractor/Jeep-like vehicle called the Farm-O-Road in 1950 could not stop the decline. In 1952, only 1,522 Crosley vehicles were sold. Production was shut down and the plant was sold to the General Tire and Rubber Company. History: 1945–1960 In anticipation of a repeat of the post-First World War economic recession, GM started the "Chevrolet Cadet" project (a compact car intended to sell for less than US$1,000), which ran from 1945 to 1947, to extend the Chevrolet range downwards in the U.S. new car market. Chevrolet head of engineering Earle S. MacPherson was in charge of development. It had a unibody structure, an over-square overhead-valve engine, strut-type front suspension, small-diameter road wheels, a three-speed gearbox, brake and clutch pedals suspended from the bulkhead rather than floor-mounted, and integrated fender/body styling. It was light and technically advanced, but GM's management cancelled it, stating that it was not economically viable – the anticipated post-Second World War U.S.
car market recession hadn't materialised. The MacPherson strut, probably the world's most common form of independent suspension, evolved in the GM Cadet project by combining long tubular shock absorbers with external coil springs and locating them in tall towers that directed the vertical travel of the wheels and also formed the "king pin" or "swivel pin axis" around which the front wheels could turn. It was elegantly simple, with just three links holding the wheel in place: the strut itself, the single-piece transverse lower arm, and the anti-roll bar that doubled as a drag link for the wheel hub. MacPherson took his ideas to Ford instead. They were first used in the French 1948 Ford Vedette, and next in the 1950 British Ford Consul and Zephyr (British mid-size cars, the same size as the Cadet), which owed more to the Cadet than just the MacPherson strut suspension and caused a sensation when they were launched. In 1953 a miniaturised economy-car version, the Anglia 100E, was launched in Britain. These early-1950s British Fords used styling elements from the U.S. 1949 Ford 'Shoebox'. History: As Europe and Japan rebuilt from the war, their growing economies led to a steady increase in demand for cheap cars to 'motorise the masses'. Emerging technology allowed economy cars to become more sophisticated. Early post-war economy cars like the VW Beetle, Citroën 2CV, Renault 4CV, and Saab 92 looked extremely minimal; however, they were technologically more advanced than most conventional cars of the time. History: The 4CV was designed covertly by Renault engineers during the World War II German occupation of France, when the company was under strict orders to design and produce only commercial and military vehicles. Between 1941 and 1944, Renault was under the technical directorship of a francophile, German-installed former Daimler-Benz engineer called Wilhelm von Urach, who turned a blind eye to the small economy-car project suited to the period of post-war austerity. The design team went against the wishes of Louis Renault, who in 1940 believed that post-war Renault should concentrate on mid-range cars; only after a row in May 1941 did he approve the project. In October 1944, after the liberation, Louis Renault, who had been imprisoned on charges of collaboration, died in suspicious circumstances. In January 1945 the newly nationalised Renault officially acquired a new boss, the former resistance hero Pierre Lefaucheux (he had been acting administrator since September 1944). Lefaucheux had been arrested by the Gestapo in June 1944 and deported to Buchenwald concentration camp. The Gestapo transferred him to Metz for interrogation, but the city was deserted because of the advancing allied front, and the Germans abandoned their prisoner. In November 1945 the French government invited Ferdinand Porsche to France, looking to relocate the Volkswagen project as part of war reparations. On 15 December 1945, Porsche was invited to consult with Renault about the Renault 4CV. Lefaucheux was enraged that anyone should think the almost production-ready Renault 4CV was in any way inspired by the German Volkswagen, and that the politicians should presume to send Porsche to advise on it. The government insisted on nine meetings with Porsche, which took place in rapid succession. Lefaucheux insisted that the meetings would have absolutely no influence on the design of the Renault 4CV, and Porsche cautiously went on record saying that the car would be ready for large-scale production in a year.
Lefaucheux was a man with contacts: as soon as the 4CV project meetings had taken place, Porsche was arrested in connection with war crimes allegations involving the use of forced labour, including French labour, in the Volkswagen plant in Germany. Ferdinand Porsche, despite never facing any sort of trial, spent the next twenty months in a Dijon jail. The 4CV, with its 760 cc rear-mounted four-cylinder engine and three-speed manual transmission, was launched at the 1946 Paris Motor Show and went on sale a year later. Volume production, with the help of Marshall Plan aid money, was said to have commenced at the company's Parisian Boulogne-Billancourt plant a few weeks before the Paris Motor Show of October 1947, although the cars were in very short supply for the next year or so. On the 4CV's launch it was nicknamed "La motte de beurre" (the lump of butter); this was due to the combination of its shape and the use of surplus paint from the German Army vehicles of Rommel's Afrika Korps, which were a sand-yellow color. The VW featured a 1.1-litre, rear-engined, air-cooled 'boxer' flat four with rear-wheel drive, all-round fully independent suspension, semi-monocoque construction and the ability to cruise on the autobahn for long periods reliably. This cruising ability and engine durability were gained by high top gearing, and by restricting the engine breathing and performance to well below its maximum capability. Production was restarted after the war by the British Army Royal Electrical and Mechanical Engineers, under Major Ivan Hirst, after the factory was dismissed as valueless for war reparations by the Western Allies. In 1948 Hirst recruited Heinrich Nordhoff to run Volkswagen GmbH, and in 1949 ownership was handed over to the West German government. The Volkswagen Type 1 'Beetle' was to become the most popular single design in history. Its production surpassed that of the Ford Model T on February 17, 1972. It was withdrawn from the European market in 1978. The Volkswagen Beetle was an icon of the post-war West German reconstruction known as the Wirtschaftswunder. History: The 375 cc Citroën 2CV had interconnected, all-round fully independent suspension, rack-and-pinion steering, radial tyres and front-wheel drive, with an air-cooled flat-twin engine and four-speed gearbox. It was some 10 to 15 mpg (Imperial) more fuel-efficient than any other economy car of its time – but with restricted performance to match. It was designed to motorise rural communities where speed was not a requirement. The original design brief had been issued before the Second World War, in the mid-1930s, and the car had been completely redesigned three times, as its market and materials costs changed drastically during its development period. Engine size increased over time; from 1970 it was a still-tiny 602 cc. It was in production until 1990. History: The Saab 92 had a transversely mounted, water-cooled two-cylinder two-stroke engine based on a DKW design, driving the front wheels. It had aircraft-derived monocoque construction, with a drag coefficient of 0.30 – not bettered until the 1980s. It was later developed into the Saab 93, Saab 95, and Saab 96, and was produced until 1980, switching to a V4 four-stroke engine in the 1960s. The mechanicals were used in the Saab Sonett sports cars. History: Also in the immediate postwar period, the monocoque FR-layout Morris Minor was launched in 1948. To reduce costs it initially reused the pre-war side-valve 918 cubic centimetres (56.0 cu in) Morris 8 engine instead of an intended flat-four.
Later, after the 1952 formation of the British Motor Corporation, it received the Austin-designed 948 cc and later 1098 cc OHV BMC A-Series engine. It had a strong emphasis on good packaging and roadholding, with independent front suspension, rack-and-pinion steering, and American-influenced styling. It was produced in different body styles, including two-door and four-door saloons, a two-door convertible, a 'woody' estate car / station wagon, a van with a rear box, and a pick-up truck. 1.3 million had been built by the end of production in 1971. It was designed by Alec Issigonis.
History: The 1947 FR-layout Toyota SA was Toyota's first true post-war design. Although permission to begin full production of passenger cars in Japan was not granted until 1949, limited numbers of cars were permitted to be built from 1947, and the Toyota SA was one such car. It had a four-cylinder engine, four-wheel independent suspension and a small, aerodynamic body. The project was driven by Kiichiro Toyoda, but most of the design work was done by Kazuo Kumabe. The two-door body was aerodynamic in a style similar to the Volkswagen Beetle. The doors were hinged at the rear (often called suicide doors). The front window was a single pane of flat glass with a single wiper mounted above the driver. Only right-hand drive was offered. Toyota engineers (including Dr Kumabe) had visited Germany before World War II and had studied Porsche and Volkswagen designs (independent suspension, aerodynamic bodies, backbone chassis, rear-mounted air-cooled engines, economical production cost). Many Japanese companies had ties with Germany during the war years, but unlike other Japanese car firms Toyota did not partner with Western companies after the war, so it was free to use German designs. Many features of the prototype Beetle were subsequently put into the SA, although the Beetle's rear-mounted air-cooled engine was not used. Later on, Toyota revisited the economic principles exemplified by the Beetle when designing the 1950s successors to the SA and the later Publica and Corolla.
History: In the post-war austerity of the late 1940s, when most of the Japanese population could not afford a car but could afford a motorcycle, the Japanese codified a legal standard for extremely economical small cars, known as the kei car. The aim was to promote the growth of the car industry, as well as to offer an alternative delivery method to small business and shop owners. Originally limited to a mere 150 cc (100 cc for two-strokes) in 1949, dimension and engine-size limits were gradually increased (in 1950, 1951, and 1955) to tempt more manufacturers to produce kei cars (the quoted displacement steps are summarised in the sketch below). It wasn't until the 1955 change to 360 cc as the upper limit for two-strokes as well as four-strokes that the class really began taking off, with cars such as the Suzuki Suzulight (front-wheel drive, based on the German Lloyd, with a DKW-type engine) and then the Subaru 360 finally able to fill people's need for basic transportation without being too severely compromised. Early-generation kei cars on the market were mostly rear-engined, rear-drive RR-layout cars; from the end of the 1960s kei cars switched to the front-engined, front-wheel-drive FF layout. This market has thrived ever since, with the cars increasing in size and engine capacity, and it includes sports cars such as the Honda Beat and Suzuki Cappuccino, and even miniaturised MPVs.
History: In 1953, Hino in Japan entered the private car market by manufacturing the Renault 4CV under licence.
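The kei-class displacement limits quoted above lend themselves to a simple lookup table. A minimal sketch in Python, using only the figures given in this article — the 1950 and 1951 interim revisions are mentioned in the text but not quantified here, so they are deliberately omitted:

```python
# Kei-car displacement limits (cc), using only the figures quoted above;
# the 1950 and 1951 interim revisions are noted in the text but their
# values are not given here, so they are left out of the table.
KEI_LIMITS_CC = {
    1949: {"four_stroke": 150, "two_stroke": 100},
    1955: {"four_stroke": 360, "two_stroke": 360},
}

def kei_limit(year):
    """Return the most recent displacement limit in effect for `year`."""
    applicable = [y for y in KEI_LIMITS_CC if y <= year]
    if not applicable:
        raise ValueError("the kei class was only codified in 1949")
    return KEI_LIMITS_CC[max(applicable)]

# The 1958 Subaru 360 fits the post-1955 two-stroke limit exactly:
assert kei_limit(1958)["two_stroke"] == 360
```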
Also in 1953, the Fiat 1100 in Italy was completely redesigned as a compact four-door sedan, with modern monocoque bodywork and integrated front lights.
History: While economy cars flourished in Europe and later Japan, the booming postwar American economy, combined with the emergence of suburbs and interstate highways in that country, led to slow acceptance of small cars. Brief economic recessions saw interest in economical cars wax and wane. During the early 1950s, the independent, lower-volume American auto manufacturers launched new 'small' cars. They were designed to be smaller than contemporary American cars but still have mainstream appeal, because they could still accommodate five passengers comfortably with conventional engineering and FR layout, establishing the American-sized 'compact car'. Nash Motors launched the Nash Rambler and Kaiser-Frazer introduced the Henry J in 1950; Willys-Overland (the designers of the Second World War Jeep MB) launched the Willys Aero in 1952, and Hudson the Hudson Jet in 1953. This niche market, which avoided competition with the 'Big Three' of GM, Ford and Chrysler, was not large enough to sustain all these competing models. The cars were priced too near the cost of a base-model full-sized car from the Big Three, which fought a price war in the 1950s. U.S. fuel prices during this period were very low, and the maintenance costs of the compacts were almost as high as those of full-sized cars, so total running costs were not that much cheaper. Only the Nash Rambler made it to a second generation. The losses caused to the other car makers initiated the succession of mergers that eventually resulted in the American Motors Corporation (AMC). Nash also contracted with the British Motor Corporation to build the American-designed Metropolitan using existing BMC mechanical components (the 1.5-litre engine is a BMC B-Series engine, also used in larger capacities in the MG MGA and MG MGB). Imported cars began to appear on the U.S. market during this time to satisfy the demand for true economy cars. An initial, modest success of the late 1940s and early 1950s was the monocoque Morris Minor, launched in 1948, with its miniaturised Chevrolet styling. It was, however, underpowered for the long-distance roads of the U.S., and especially for the freeways that were starting to spread across the country in the 1950s (the first British motorway did not open until 1959). BMC preferred to develop the higher-profit-margin MGs for the American market, and also worked with Nash, and so passed up the opportunity. From the mid-1950s the Volkswagen Beetle, using clever and innovative advertising and capitalising on its very high build quality, durability and reliability, was a spectacular success. Having been designed for cruising the autobahns, freeways were no problem for it, and it disproved the scepticism of American buyers as to the usefulness of, by their standards, such small cars. In 1955 VW launched the Beetle-based, Italian-styled Karmann Ghia coupé, developed from a 1953 concept car. Production doubled soon after its introduction, and it became the car most imported into the U.S., with more than 445,000 imported. Initially the stylish Renault Dauphine, derived from the Renault 4CV, looked like it would follow in the VW's footsteps, but it then failed owing to mechanical breakdowns and body corrosion. This failure on the U.S. market in the late 1950s may have harmed the acceptance of small cars generally in America.
In 1955, the Japanese Ministry of International Trade and Industry set a goal for all Japanese makers to create what was called a "national car", larger than the kei car. This influenced Japanese automobile manufacturers to focus their product-development efforts on either the smaller kei cars or the larger "national cars". The concept stipulated that the vehicle be able to maintain a maximum speed of over 100 km/h (62 mph), weigh below 400 kg (882 lb), return fuel consumption of 30 km/L (85 mpg-imp; 71 mpg-US) or better at an average speed of 60 km/h (37 mph) on a level road, and not require maintenance or significant service for at least 100,000 km (62,000 mi). This established a "compact car" class that is by far the most popular in Japan, due to tax benefits stipulated by Japanese government regulations. Among the first compact cars to meet those requirements were the FR-layout Toyota Publica, with a flat-twin engine, and the RR-layout Mitsubishi 500. The Publica and the Mitsubishi 500 were essentially "kei cars" with engines larger than regulations permitted at the time.
History: In the late 1950s the German Democratic Republic (DDR) produced its 'people's car', the Trabant, which sold 3 million vehicles in thirty years thanks to its captive communist market. It had a transverse two-cylinder air-cooled two-stroke engine and front-wheel drive, using DKW technology.
History: In 1957, Fiat in Italy launched the 479 cc 'Nuovo' Fiat 500, designed by Dante Giacosa with frontal styling by Claus Luthe. It was the first real city car. It had a rear-mounted air-cooled vertical-twin engine and all-round independent suspension. Its target market was Italian scooter riders who had settled down, had a young family, and needed their first car. It was also produced in Austria as the Puch 500. Fiat had also launched the larger 1955 Fiat 600 with a similar layout but a water-cooled in-line four-cylinder engine; it even had a six-seater people carrier / MPV / minivan version called the 'Multipla', even though it was about the same size as a modern supermini.
History: Car body corrosion was a particular problem from the 1950s to the 1980s, when cars moved to monocoque or unibody construction (starting from the 1930s) from a separate body-on-frame chassis made from thick steel. Monocoque construction relied on the shaped body panels and box sections, such as the sills/rockers, providing the integrity of the bodyshell, rather than a separate frame for strength. A lighter car was a faster and/or more economical car, and with the introduction of newly available computers for structural analysis from the 1960s, such as the IBM 360, the thickness of sheet metal in bodyshells was reduced to the minimum needed for structural integrity. However, corrosion prevention / rustproofing, which had not previously been significant because of the thickness of the metal and the separate chassis, had not kept pace with this new construction technology. The lightest monocoque economy cars were the most affected by structural corrosion.
History: 1960s. The world's first hatchback was the 1958 FR-layout Austin A40 Farina Countryman, a co-development of BMC and the Italian design house Pininfarina at a time when this was unusual. It had a lift-up rear window and a drop-down boot lid. It was also sold as a two-door saloon, and was built in Italy by Innocenti as well as in the UK.
For 1965 Innocenti designed a new single-piece rear door for their Combinata version of the Countryman. This top-hinged door used struts to hold it up over a wide cargo opening and was a true hatchback – a model never developed in the home (United Kingdom) market. The Countryman name has 'estate'-type associations, and BMC successor company Rover used the name on estate cars / station wagons, so the A40's pioneering role is largely forgotten.
History: The next major advance in small car design was the 1959 848 cc FF-layout Austin Mini from the British Motor Corporation, designed by Alec Issigonis as a response to the first oil crisis, the 1956 Suez Crisis, and the boom in bubble cars and microcars that followed. It was the first front-wheel-drive car with a water-cooled in-line four-cylinder engine mounted transversely – the BMC A-Series engine. This freed eighty percent of the floor plan for the use of passengers and luggage, and the majority of modern cars use this configuration. Its progressive-rate rubber-sprung independent suspension (Hydrolastic from 1964 to 1971), low centre of gravity, and a wheel at each corner with radial tyres gave the car grip and handling superior to all but the most expensive automobiles on the market. The Mini was voted the second most important car of the 20th century, after the Ford Model T. Badge-engineered luxury versions with modified bodywork and wood-and-leather interiors were made under the names Riley Elf and Wolseley Hornet. Customised versions were made by coachbuilders like Harold Radford and were very popular with the rich and famous of 1960s London. From 1964 a Jeep-like version of the Mini, the Mini Moke (which had been rejected by the British Armed Forces), was popular with the King's Road / Carnaby Street, Swinging London set. Estate car / station wagon (with a non-structural wood-framed 'Countryman' version), van and pick-up versions were also successful. In 1962, a slightly larger 1098 cc (and later 1275 cc) version of the Mini engineering design was launched as the Austin/Morris 1100. It had front disc brakes as standard, Italian styling by Pininfarina, and front-to-rear interconnected independent Hydrolastic suspension. It was available in sporting MG versions, and in luxury wood-and-leather-interior Riley, Wolseley and Vanden Plas versions. It was, for most of the 1960s, the top-selling car in Britain, and was sold until the mid-1970s. It was sold as the Austin America in the U.S., Canada, and Switzerland between 1968 and 1972, and was also sold in South Africa (as the Austin Apache) and in Spain (as the Austin Victoria), with Triumph-style Michelotti restyling, built in local factories.
History: BMC had hedged their bets after the launch of the front-wheel-drive Mini and 1100, and met the demands of more conservative customers, by keeping their rear-wheel-drive Austin A40 Farina in production until 1967 and the Morris Minor until 1971, while front-wheel drive was being accepted by the UK and European markets. Demand from older customers in particular kept the Minor in production, despite it being very outdated.
History: Ford in the UK launched the Anglia 105E in 1959. It had a new overhead-valve engine and a four-speed gearbox, a great improvement over the previous-model Anglia 100E, which had a side-valve engine and only three speeds. It was rear-wheel drive, using a rear beam axle and leaf springs, with front MacPherson struts like its predecessor.
It used much-miniaturised, late-1950s American-influenced styling that was very fashionable, including a sweeping nose line and, on deluxe versions, a full-width slanted chrome grille between prominent "eye" headlamps. The car also sported a backward-slanted rear window and tailfins. This styling dated the car over its nine-year production life. The Anglia's Ford Kent engine remained in production, in developed form, into the 21st century.
History: The launch in the 1960s of the Mini Cooper, to exploit the exceptional grip and handling of the Austin Mini, along with its success in rallying (the Monte Carlo Rally in particular) and circuit racing, first showed that economy cars could be effective sports cars. It made traditional sports cars like the MG Midget look very old-fashioned. The rear-wheel-drive Ford Lotus Cortina and Ford Escort 1300GT and RS1600, along with the Vauxhall Viva GT and Brabham SL/90 HB in the late 1960s, opened up this market still further in Britain. Meanwhile, from the 1950s, Abarth-tuned Fiats and Gordini-tuned Renaults did the same in Italy and France.
History: Also in 1959, the FR-layout DAF 600, with a rear-mounted automatic gearbox, was launched in the Netherlands. The 600 was the first car to have a continuously variable transmission (CVT) system – the innovative DAF Variomatic – and the first European economy car with an automatic gearbox. The CVT was continued through the 1960s and 1970s by DAF with the DAF Daffodil, DAF 33, DAF 44, DAF 46 and DAF 66, and later by Volvo, after the two companies merged, with the Volvo 340. The compact and innovative automatic gearbox of the 1960s Austin Mini/Austin 1100, developed by Automotive Products (with a conventional epicyclic gearset and torque-converter coupling), was much less efficient at transmitting drive.
History: In the 1960s the semi-monocoque/platform-chassis 750 cc Renault 4 (arguably the first small five-door hatchback, but viewed as a small estate car or station wagon at the time) was launched in France. It had a very soft but well-controlled ride, like the Citroën 2CV. In layout it was essentially an economy-car version of the 1930s-designed Citroën Traction Avant, in particular the 'Commerciale' derivative, but with fully independent rear suspension (the Commerciale used a flexible beam axle, similar to 1970s VW twist-beam rear suspension). The Commerciale had been smaller than an estate car, with a horizontally split two-piece rear door, before the Second World War; when it was relaunched in 1954 it featured a one-piece top-hinged tailgate. Citroën responded with the 2CV-based 1960 602 cc Citroën Ami and the hatchback 1967 Citroën Dyane. Also in France, in 1966 Renault launched the midrange Renault 16 – although not an economy car, it is widely recognised as the first non-commercial mass-market hatchback. The hatchback was a leap forward in practicality. It was adopted as a standard feature on most European cars, with saloons declining in popularity, apart from at the top of the market, over the next twenty years. Small economy cars, which were more limited in load-carrying ability than larger cars, benefited most.
History: In the 1960s the Japanese MITI "national car" class of vehicles saw the launch of the Isuzu Bellett, Daihatsu Compagno and Mazda Familia in 1963, the Mitsubishi Colt in 1965, and the Nissan Sunny, Subaru 1000, and Toyota Corolla in 1966. Honda introduced their first four-door sedan, the Honda 1300, in 1969. In North America, these cars were classified as subcompact cars.
The 1960s Toyota Corolla and Datsun Sunny refined the conventional small rear-wheel-drive economy car, and such cars were widely exported from Japan as postwar international competition and trade increased. Japan also instituted the "shaken" roadworthiness testing regime, which required progressively more expensive maintenance year on year – involving the replacement of entire vehicle systems, unnecessary for safety – to devalue older cars and promote the new cars available at low prices on the home market. Very few cars in Japan are more than five years old.
History: In 1962 Fiat introduced the third-generation FR-layout Fiat 1100, called the 1100D; it was restyled into the 1100R from 1966. The Fiat 1100D was made in India from 1964 onwards. In 1973 (for that model year alone) it was named the Premier President; from 1974 onwards, until it was finally discontinued in 2000, it was known as the Premier Padmini.
History: In 1964 Fiat, under the engineering leadership of Dante Giacosa, designed the first car with a transverse engine, an end-on gearbox (using different-length driveshafts) and a hatchback: the Autobianchi Primula. It had Pininfarina styling that bore a resemblance to the Austin 1100. It was put into limited production by Fiat under their Autobianchi brand; Fiat still produced the FR-layout 1100 of about the same size, so that any potential technical teething problems would not damage their main brand. Primula production ceased in 1970, by which time 74,858 had been built. It was replaced by the Autobianchi A112 and Autobianchi A111 with the same mechanical layout. They were only sold in mainland Europe, where they were popular into the 1980s (eventually replaced by the Lancia Y10), but unknown in the UK. The French 1967 Simca 1100 (Simca had previously used Fiat technology under licence), the 1969 Fiat 128, and the 1971 Fiat 127 – regarded as the first 'supermini' – brought this development to a wider audience. The 128 was Dante Giacosa's final project. This layout gradually superseded the gearbox-in-the-sump arrangement of the BMC Austin/Morris designs and the later Peugeot PSA X engine, until by the 1990s the only car in production with that transmission layout was the then long-obsolescent Austin (Rover) Mini.
History: The Simca 1100 was also the first small car designed from the outset with an angled single-piece hatchback tailgate to enter large-scale production. The earlier Renault 4's tailgate was near-vertical, like an estate car's, and the Austin A40 originally had a split tailgate. The Simca was successful in France, but less so elsewhere due to 'tappety' engines and body corrosion. A total of 2.2 million cars were produced by 1985. In 1972 Renault introduced the monocoque Renault 5 supermini hatchback, which used the proven and successful Renault 4 mechanicals and suspension. It was made until 1985, when it was replaced by the 'Super Cinq'. American Motors (AMC) marketed a version with sealed-beam headlamps and reinforced bumpers as the 'Le Car' in the U.S. from 1976 to 1983.
GM's British and German subsidiaries re-entered the small car market, with then-conventional FR-layout cars, for the first time since the Second World War in the early 1960s. The Vauxhall Viva was launched in September 1963 and replaced in September 1966; it was the first new small car produced by Vauxhall since 1936. The HA Viva was powered by a 1,057 cc (64.5 cu in) overhead-valve four-cylinder front-mounted engine driving the rear wheels.
It was comparable in size and mechanical specification with the new GM Germany Opel Kadett, released in 1962 in continental Europe. The Opel featured a brand-new 993 cc OHV engine, which in developed form lasted until the early 1990s. The Viva and Kadett were sold alongside each other in many markets. The HA Viva was just an inch longer than the Ford Anglia, which dated back to 1959, and was offered only as a two-door saloon. The HB Viva, announced in September 1966 and sold by Vauxhall until 1970, was a larger car than the HA, featuring 'coke bottle' styling modelled after American General Motors (GM) designs such as the Chevrolet Impala/Caprice of the period. It used the same basic engine as the HA, enlarged to 1,159 cc, but with the added weight of the larger body the final-drive gearing was lowered to maintain performance. The Opel Kadett B was launched in 1965.
History: In 1968 Ford replaced the outmoded Anglia with the Ford Escort, which was sold across Western Europe. It was longer and wider than its predecessor, to fill the gap left by the increasing size of Ford's next model up in the range, the Ford Cortina. It had the same mechanical layout and suspension as the Cortina and Anglia, but with contemporary 'coke bottle' styling. It struggled to compete with the larger and more comfortable second-generation 1965 Opel Kadett in West Germany.
History: In the U.S. market, Studebaker launched the Studebaker Lark in 1959, and 1960 brought the Chevrolet Corvair, Ford Falcon, and Plymouth Valiant into the market segment dominated by Rambler. These vehicles were lower-priced and offered better fuel economy than the standard domestic full-size models, which had grown in size and price through the 1950s. The Corvair, Chevrolet's rear-engined compact car, was originally brought to market to compete directly with the VW Beetle; the Ford Falcon and Plymouth Valiant were conventional compact six-cylinder sedans that competed directly with the Rambler American. In 1962 Chevrolet introduced the Chevy II / Nova line of conventional compacts, first offered with four- and six-cylinder engines. These American vehicles were still much larger than the fuel-efficient economy cars popular in Europe and Japan: the Corvair was twenty inches longer, seven inches wider and eight hundred pounds heavier than the Beetle that inspired it, with an engine almost twice the size. The Corvair offered the VW's rear-engine advantages of traction, light steering and a flat floor, with the six-passenger room and six-cylinder power American buyers were accustomed to. Later versions of the Corvair were considered sports cars rather than 'economy' cars, including the Monza Spyder models, which featured one of the first turbocharged engines in a production car. The Corvair Monza was followed by the Falcon-based Ford Mustang, introduced in 1964, establishing the "pony car" class, which grew to include the Corvair's replacement, the Chevrolet Camaro, in 1967, expanding the domestic pony-car market segment started in the mid-1960s.
History: The 1960s also saw the swan song of the rear-engined, rear-wheel-drive (RR) layout. The first models designed with this layout, in Central Europe before the Second World War, had better traction than any other two-wheel-drive layout. They were very capable in the mountainous country there, with its many unsurfaced roads; just how capable was shown by the performance of the two-wheel-drive military Kübelwagen version of the VW Beetle.
This layout also had better interior space utilisation than front-engine, rear-wheel-drive cars, and a better ride than those with a live rear beam axle. It was an affordable way to produce a car with all-round independent suspension, without the expensive constant-velocity joints needed by front-wheel-drive cars or the axle arrangements of FR-layout cars. Such cars could have roadholding issues due to the unfavourable weight distribution and the wheel-camber changes (rear-wheel tuck-under) of the lower-cost swing-axle rear suspension design – issues highlighted, and a little exaggerated, by Ralph Nader. These problems were ameliorated on later Beetles, and were eliminated on the second-generation Chevrolet Corvair with the switch to a four-link, fully independent rear suspension. The Hillman Imp, NSU Prinz 4 (styled by Claus Luthe) and Soviet Zaporozhets all had styling cues derived from the original Corvair. The only economy cars with this layout launched since the 1960s have been the turn-of-the-millennium ultra-compact two-seater city car Smart Fortwo and the Indian-market Tata Nano.
History: RR-layout cars launched in the 1960s include: the 874 cc Hillman Imp – UK. The 583 cc NSU Prinz, and the relatively unsuccessful attempts at diversification, the Volkswagen Type 3 and Volkswagen Type 4 – West Germany. The 956–1289 cc Renault 8/10 and the 777–1294 cc Simca 1000 – France. The 2296 cc Chevrolet Corvair – US. The first air-cooled, two-stroke, in-line-twin-engined generation of the 360 cc kei-car class – the 1958 Subaru 360, the 1961 Mitsubishi 360, the 1962 Mazda Carol, the 1966 Daihatsu Fellow, and the 1967 Honda N360 and Suzuki Fronte – along with the non-kei, Renault-based Hino 4CV, which was replaced by the 1961 Contessa – Japan. In Communist Eastern Europe there was the Škoda 1000MB/1100MB, developed into the 1970s Škoda S100/110 and then the 1970s–1980s Škoda 105/120/125 Estelle – Czechoslovakia.
History: Centrally planned as the small Soviet car, the Zaporozhets from the Ukrainian ZAZ factory (styled like a Fiat 600 from 1960 to 1969, and like the NSU Prinz 4 from 1966 to 1994) was sold in the USSR and the Warsaw Pact/COMECON countries of Eastern and Central Europe. It had a low price and good fuel economy, and was designed to be DIY-maintained by the customer, because of the lack of maintenance facilities in communist countries. It was also issued to Soviet war amputees by the state through the social welfare system. In total, 3,422,444 air-cooled Zaporozhets were made. – Soviet Union.
1970s: The 1973 oil crisis (and again that of 1979) emphasised the importance of fuel economy worldwide, as an increasing proportion of the cost of vehicle operation. This had particular impact in the United States, with its greater distances – arguably the nation hardest hit, because of the prevalence of large, fuel-thirsty cars. At the same time, new emissions and safety regulations were being implemented, requiring major and costly changes to vehicle design and construction for the U.S. market. Sales of imported economy cars had continued to rise from the 1960s. The first response by domestic American car makers included the U.S.-produced FR-layout cars the AMC Gremlin, Chevrolet Vega, and Ford Pinto, along with captive imports.
History: AMC was determined to have the first subcompact offering, and 1970 AMC Gremlin sales began six months ahead of the all-new 1971 models from GM and Ford.
The Gremlin used the AMC Hornet's existing design with a shortened wheelbase and "chopped" tail, and had an important low-price advantage. The Chevrolet Vega, featuring attractive miniaturised Camaro styling, was introduced in September 1970 as GM's first subcompact economy car. Nearly two million were sold over its seven-year production run, due in part to its low price and fuel economy. By 1974 the Vega was among the top ten best-selling American-made cars, but the aluminium-block engine developed a questionable reputation. Chevrolet extended the engine warranty to 50,000 miles (80,000 km) for all Vega owners, which proved costly for Chevrolet. The 1976 Vega had extensive engine and body durability improvements and a five-year/60,000 mi (97,000 km) engine warranty. After a three-year sales decline, the Vega and its aluminium engine were discontinued at the end of the 1977 model year.
History: Pontiac's cheapest car was a rebadged Vega variant exclusively available in Canada for the 1973–74 model years, and introduced in the U.S. the following year. The final 1977 models featured the first use of Pontiac's Iron Duke inline-four engine. Lower-priced versions of the Chevrolet Monza were introduced for 1978, and rebadged variants of the discontinued Vega were also added to the Monza line: the Monza wagon, using the Vega Kammback body, was sold for the 1978–79 model years, and the Monza S hatchback, a price-leader model using the Vega hatchback body, was also offered for the 1978 model year.
The Ford Pinto was introduced one day after the Vega. It was small, economical, and a top seller, but controversy over the design of the fuel system, and a Ford cost-benefit analysis presented to the public as specific to the Pinto, destroyed the reputation of the car. The Ford Pinto engine, though, was successful in European Fords for twenty years, in successive mid- and large-sized European mainstay models: the UK Ford Cortina, the German Ford Taunus, the Ford Sierra, and the Ford Granada, amongst others.
History: By 1970, Nissan had released its first front-wheel-drive car, originally developed by the Prince Motor Company, which had merged with Nissan in 1966; it was introduced in 1970 as the Datsun/Nissan Cherry. In 1973 the energy crisis started, which made small fuel-efficient cars more desirable, and North American drivers began exchanging their large cars for the smaller imported compacts that cost less to fill up and were inexpensive to maintain. The Toyota Corona, the Datsun 510, the Mitsubishi Galant (a captive import from Chrysler sold as the Dodge Colt), the Subaru DL, and later the Honda Accord gave buyers increased passenger space and some luxury amenities, such as air conditioning, power steering, AM-FM radios, and even power windows and central locking, without increasing the price of the vehicle. In 1972 the Honda Civic was launched; the CVCC (Compound Vortex Controlled Combustion) stratified-charge engine debuted in 1975 and was offered alongside the standard Civic engine. The CVCC engine had a head design that promoted cleaner, more efficient combustion, eliminating the need for a catalytic converter to meet the new California emission standards – nearly every other U.S.-market car for that year needed exhausts with catalytic converters.
The Japanese, who had previously competed on price, equipment and reliability with conservative designs, were starting to make advanced, globally competitive cars. The Audi 50, designed by Claus Luthe, was a front-wheel-drive three-door supermini hatchback launched in 1974, produced until 1978 and sold only in Europe; 180,812 were built. A rebadged, cheaper, lower-equipment version marketed as the Volkswagen Polo Mk1 (1975–1981) sold more than 500,000; it impacted sales of the Audi, whose production came to an end early compared with the Polo's. The Volkswagen Derby was the booted two-door saloon (three-box) version of the Volkswagen Polo Mk1 supermini, sold between 1977 and 1981 in Europe.
History: Ford had sold 2 million Escorts across Europe by 1974, the year the Mk2 was launched with a squarer, flatter-panelled restyle. The most successful market was the UK. The Escort's success was greatly helped by its numerous rallying successes in the 1970s, and by the performance versions, like the Escort Mexico and RS2000, that traded on that success and provided a halo effect for the lesser models.
History: The Chevrolet Chevette was introduced in September 1975 and produced through to 1987. It was a successful and 'Americanized' version of the GM T platform (1973) 'world car' that was developed with Opel, GM's German subsidiary, and Isuzu Motors of Japan.
Chrysler, having taken control of Simca (and Hillman in the UK) in the 1960s as part of expansion plans to match GM and Ford, turned to its French subsidiary when it needed to launch an American-made subcompact to comply with the federal Corporate Average Fuel Economy (CAFE) regulations being introduced starting with the 1978-model-year cars. The replacement for the Simca 1100, the C2 project, became the (Simca) Talbot Horizon, which won European Car of the Year 1978, and more than 3 million were sold in the United States as the Dodge Omni and Plymouth Horizon from 1978 to 1990. It had been re-engineered with a federal-emissions VW Golf 1.7 L engine and MacPherson strut suspension for the U.S. market. Chrysler Europe was sold to Peugeot in 1978, due to mounting operating losses in Europe and the U.S. that required a U.S. government bailout.
Captive imports were the other response by U.S. car makers to the increase in popularity of imported economy cars in the 1970s and 1980s. These were cars bought from overseas subsidiaries, or from companies in which they held a significant shareholding. GM, Ford, and Chrysler all sold such imports for the U.S. market; the Buick Opel, Ford Cortina, Mercury Capri, Ford Festiva, and Dodge Colt are examples.
History: Technologies that developed during the post-war era, such as disc brakes, overhead-cam engines and radial tyres, had become cheap enough to be used in economy cars at this time (radials began to be adopted in the 1950s and 1960s, and front disc brakes in the 1960s, towards the bottom of the market in Europe). This led to cars such as the 1974 Mk1 Volkswagen Golf, designed by Giorgetto Giugiaro, the Fiat 128 and the 1972 Honda Civic. Previously exotic technology, such as electronic fuel injection, became affordable, which allowed the production of high-performance hot-hatch sport compacts like the 1976 Volkswagen Golf GTI. This car combined economy of use and a practical hatchback body with performance and driving fun, and kicked off the hot-hatchback boom. Also introduced in 1976 was the 1.5 L VW Golf diesel – the first small diesel hatchback.
It used new Bosch rotary mechanical diesel-injection-pump technology. Also in that year, Ford of Europe (formed by the merging of Ford's national operations in Europe) launched their first front-wheel-drive small car, the Ford Fiesta, having gained experience from Ford of Germany's 1960s European mid-sized Taunus P4 and Ford of Brazil's Corcel.
History: 1980s. GM Europe (Vauxhall/Opel) introduced their first European-market front-wheel-drive car, the Golf-sized Opel Kadett D/Vauxhall Astra, in 1979 by Opel in Germany and in 1980 by Vauxhall in the UK. In 1980, Fiat introduced the Giugiaro-designed Mk1 Fiat Panda, originally designed to be capable of production in China at its 1970s level of industrialisation. It was a utilitarian front-wheel-drive supermini with Fiat's standard transverse engine and end-on gearbox, and featured mostly flat body panels and flat glass. Also in 1980, the all-new hatchback, front-wheel-drive third-generation Ford Escort Mark III (Europe) was launched; previous Escorts had been saloons/sedans with front engine and rear-wheel drive. In 1981 Ford launched the American version of the Ford Escort. In 1982 GM launched their first front-wheel-drive supermini, the Opel Corsa/Vauxhall Nova, in Europe.
History: In 1983 Fiat launched the next step forward in small-car design, the Fiat Uno, designed by Giorgetto Giugiaro's ItalDesign. The tall, square body utilising a Kamm tail achieved a drag coefficient of 0.34, and it won much praise for its airy interior space and fuel economy. It incorporated many packaging lessons learnt from Giugiaro's 1978 Lancia Megagamma concept car (the first modern people carrier/MPV/minivan) – but miniaturised. Its tall-car, high-seating packaging is imitated by every small car today. It showed that not only low, sleek cars could be aerodynamic: small, boxy, well-packaged cars could be too. It was voted Car of the Year in 1984.
Also in 1983, Peugeot launched the Pininfarina-styled Peugeot 205. While not as radical as the Uno in body design, it was also very aerodynamic. It was the first European supermini with a diesel engine, the XUD, which provided the performance of a 1.4 L petrol engine with economy – 55 miles per imperial gallon (5.1 L/100 km; 46 mpg-US) – better than that of the base 1 L petrol version. It could, like most diesel engines, last for several hundred thousand miles with regular servicing. It was, along with the larger (also XUD-powered) Citroën BX, the beginning of the boom in diesel sales in Europe. The 205 GTI was as popular as the Golf GTI in Europe, and the 205 was named "Car of the Decade" in the UK by CAR magazine in 1990.
History: In the US, Chevrolet offered three new small economy cars in the 1980s to replace the Chevette: the Chevrolet Sprint, a three-cylinder Suzuki-built hatchback; the Chevrolet Spectrum, built by Isuzu; and the Chevrolet Nova, built by NUMMI in California, a GM–Toyota joint venture.
1990s: Chevrolet offered the Geo brand in the US in the 1990s, featuring the Suzuki-built Geo Metro (marketed as the Suzuki Swift in Europe, the Suzuki Cultus in Japan, and the Holden Barina in Australia), the Isuzu-built Storm, and the NUMMI-built Prizm.
History: In Europe in 1993, Fiat launched the boxy and conservatively styled but very well-packaged front-wheel-drive Fiat Cinquecento, which could accommodate four adults despite being only slightly larger than an Austin Mini. It eventually replaced the first Fiat Panda, and also the aged rear-engined, rear-wheel-drive 1970s Fiat 126, as the smallest car in the Fiat range.
But the real breakthrough in small-car design was the 1993 Renault Twingo, a revolution in styling as the first 'one box' small car to reach production. The early pre-production Citroën AX supermini, launched in 1987, was a 'monobox' design, but the production version was much more conservative after negative reactions in focus groups. Both had the interior space of a much larger car. The Twingo and Cinquecento relaunched the city-car market in Europe – for decades the only competitors in this market had been the Austin Mini and the Polish-built Fiat 126 (developed from the 1950s Fiat 500).
History: Economy cars since 2000. Today economy cars have specialised into market niches: the small city car; the inexpensive-to-run, but not necessarily very small, general economy car; and the performance derivatives that capitalise on the light weight of the cars on which they are based. Some models that started as economy cars, such as the Volkswagen Golf and Toyota Corolla, have increased in size and moved upmarket over several generations, and their makers have added smaller new models in their original market niches. The 2003 Volkswagen Golf Mk5 had put on so much weight that some models weigh the same as a Volvo 200 Series, and the supermini 2002 Volkswagen Polo Mk4 was longer and wider than the 1970s Volkswagen Golf Mk1. Gordon Murray, the Formula 1 and McLaren F1 designer, said when designing his new Murray T.25 city car: "Today with all the promises of hydrogen and hybrids and electric cars, if you could take ten percent out of the weight of every car, the effect in the next ten years would be more than that of all the hybrids and electric cars on the planet."
The city-car market in Europe from the 1990s has seen increased competition, with the market split between standard city cars and 'designer' city cars that are sold at a premium. These cars are at the lower end of supermini size or smaller. Standard city cars include the Toyota Yaris, the Citroën C1/Peugeot 107/Toyota Aygo (built in the same factory), the Fiat Panda, Kia Picanto, Chevrolet Matiz, Volkswagen Fox, Mitsubishi Colt, Volkswagen Lupo, and the 2011 Volkswagen Up. The 'designer' city car became increasingly popular in Europe in the 1990s. The first car of this kind, the 1985 Lancia Y10, was a limited success, hampered by the poor ride that came from being based on the original utilitarian Fiat Panda; Lancia was also a dying brand in the UK at the time. The 1993 Renault Twingo and the 1996 Ford Ka marked an upsurge in sales for this niche. The Ka was to have been launched, along with the mid-1990s Fiesta, with the innovative Australian two-stroke Orbital engine, but tightening emissions laws meant that it was launched with an updated Ford Kent engine instead. This was followed by the innovative engineering designs of the Mercedes-Benz A-Class, with an underfloor engine; the two-seater, rear-engined Smart ForTwo; and the aluminium-bodied Audi A2, which in its most aerodynamic form achieved a drag coefficient of Cd = 0.25. Sales really took off with the 2001 BMW Mini, with its modern but conventional front-wheel-drive engineering and its reinterpretation of the classic Austin Mini styling; it has sold well in other markets, including North America. Other cars of this type include the Mitsubishi Colt-based Smart Forfour, the VW Polo-based Audi A1, the Fiat Panda-based Fiat Nuova 500, the Citroën C3-based Citroën DS3, and the Fiat Grande Punto-based Alfa Romeo MiTo. The Toyota iQ, designed in France, went on sale in January 2009 in the UK.
It is a competitor for the Smart ForTwo, but with occasional rear seats. It follows the Issigonis philosophy of packaging, with innovations including a flat underfloor fuel tank and a specially located steering rack and final-drive unit to maximise floor space for passengers. It seats four adults in a car 2.985 m (117.5 in) long, 1.680 m (66.1 in) wide, and 1.5 m (59.1 in) tall, and achieves 65.69 mpg-imp (4.300 L/100 km; 54.70 mpg-US) with a 99 g/km CO2 rating. It also achieved the top Euro NCAP 5/5-star safety rating.
History: A development in recent years in Europe has been the launch of small MPVs/people carriers. This is a development of the Giorgetto Giugiaro tall-car, high-seating packaging innovation first used in the Fiat Uno. The niche first emerged in Japan with the 1993 Suzuki Wagon R kei-class car, which was sold by GM in Europe from 2000 as the Vauxhall/Opel Agila. This was followed by slightly larger, supermini-based cars like the Renault Modus, Citroën C3 Picasso, Fiat Idea, Nissan Note, and the Vauxhall/Opel Meriva, which is also produced in Brazil. Their tall packaging designs offer the interior space of a larger car; the higher seating increases visibility for the driver, which is useful in urban driving, and also makes it easier to get in and out, which helps those with personal mobility problems, such as the elderly and the disabled.
History: The conflicting design goals for economy cars – small size with maximum usable interior space, low cost and light weight with acceptable safety performance (light cars have a higher ratio of unsprung suspension mass to sprung mass, which affects ride quality), and the need for light materials with acceptable durability – continue to stimulate innovative development. Technology improvements such as electronic engine management, the adoption of four valves per cylinder, variable valve timing, direct injection of petrol/gasoline and diesel, hybrid power, and smoother, more powerful diesel engines with very-high-pressure electronic injection have dramatically improved fuel economy and performance. The latest technologies to improve efficiency are engine downsizing and automatic engine stop-start. Automatic engine stop-start systems, like VW's BlueMotion, shut the engine down when the car is stopped, to reduce idling emissions and boost economy; it is now mandatory not to idle unnecessarily in cities in Germany. It is an updated version of the 1980s VW 'Formel E' system, which was developed into the 1990s VW 'Ecomatic' system. The application of turbocharging to downsized engines is one way to channel efficiency gains into fuel economy and emissions benefits instead of performance. The recent Fiat 'Multiair' system is an electro-hydraulic development of variable valve timing that allows the engine-management computer to control valve timing, improving engine efficiency and giving better torque, power, economy, and emissions. Safety design is a particular challenge in a small, lightweight car; this is an area where Renault has been particularly successful. Sport compacts and hot hatches have developed into their own competitive genre: although their economy has been compromised, these models offer higher performance because of the lightness of the platforms on which they are based.
History: As an alternative to manual synchromesh gearboxes, automatic continuously variable transmission (CVT) gearboxes are optional on some economy cars, such as Audis, Hondas, and the MINI One and MINI Cooper.
Tata Motors of India recently announced that it too would use a Variomatic-type transmission in its US$2,500 Nano. CVT application to economy cars was pioneered by Fiat, Ford, and Van Doorne in the 1980s. Rather than the pulled rubber drive belts used in the past by DAF, the modern transmission is made much more durable by the use of electronic control and steel link belts pushed by their pulleys.
History: A difference between the North American car market and the markets of Europe and Japan is the price of fuel. Fuel is heavily taxed, and therefore relatively costly, in most first-world markets outside North America; fuel costs about two and a half times as much in the UK as in the U.S. Outside North America, fuel costs are also a much higher proportion of income, given the generally higher wages and lower living costs in the U.S. Only during occasional fuel-price spikes, such as those of 1973, 1979–81, and 2008–09, have North American drivers been motivated to seek levels of fuel economy considered ordinary outside North America.
History: The growth of developing countries has also created a new market for inexpensive new cars. Unlike in the postwar period, this demand has not been met by utilitarian models but by advanced 'people's cars', and adaptation of standard or obsolete models from the first world has been the norm. Production of car models superseded in first-world markets is often moved to cost-sensitive markets like South Africa and Brazil; the Citi Golf is an example.
History: Some mainstream European auto makers have developed models specifically for developing countries, such as the Fiat Palio, Volkswagen Gol and Dacia Logan. Renault has teamed up with India's Mahindra and Mahindra to produce a low-cost car in the range of US$2,500 to US$3,000. The Tata Nano, launched in India in January 2008 by Tata Motors, was claimed by Tata to be the world's cheapest car at US$2,500. The Nano, like the 1950s Fiat 500, has a rear engine and was styled by Italians; it is designed to get whole families off scooters and onto four wheels. The Tata Indica, formerly sold in Europe as the CityRover until the 2005 collapse of Rover, sold poorly, as it was overpriced for a basic car.
History: The narrow profit margins of economy cars can cause financial instability for their manufacturers. Historically, Volkswagen in the 1970s and Ford in the 1920s almost collapsed because of their one-model economy-car business strategies: Ford was saved by the Model A, and Volkswagen by the Golf. Ford started the Mercury and Lincoln brands to diversify its product range. VW moved away from the narrow profit margins of economy cars by expanding its range, so that it now spans from very small city cars like the Volkswagen Up to Audis and Bentleys; it also owns SEAT and Skoda.
History: China has become one of the fastest-growing car markets, recently overtaking the U.S. as the largest producer of cars in the world. It is followed by India, with a preference for inexpensive, basic cars, but both are moving upmarket in their tastes as their economic rise continues.
History: India is becoming a global outsourced production centre for small cars. The Suzuki Alto and Hyundai i10 are already exported to Europe from India. In March 2010, at Chennai (formerly Madras), the Renault-Nissan Alliance opened a US$990 million plant intended to produce 400,000 units per year at full production.
The first vehicle to be produced at the plant will be the new Nissan Micra, for the Indian market as well as for export to over 100 countries in Europe, the Middle East and Africa. Production of the Micra has been relocated from the UK and other developed countries. In 2011 the plant will start production of the Renault Koleos and Fluence for the Indian market, based on the same platform. The Maruti version of the Suzuki Swift is also produced in India.
History: Gordon Murray, the Formula 1 and McLaren F1 designer, has pointed out the difficulty of making economy cars cost-efficient. His solution is a laser-cut tubular-steel spaceframe chassis built with an automated tube mill and braced with bonded low-cost composite sheets, which he argues would be a cheaper and greener means of production. Murray's 'iStream' simplifies each process, with an eighty-percent-smaller factory and lower-cost production making lightweight, efficient cars; there are no sheet-metal presses, spot welders or paint plants, and such a factory could be built local to its market. Murray was reported to be negotiating production licences. The T25 and T27 were projected to be available in 2016.
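The fuel-economy figures quoted throughout this article mix several unit systems (km/L, L/100 km, imperial mpg, US mpg). A minimal Python sketch of the conversions, which reproduces the article's own figures for the 1955 Japanese "national car" target and the Toyota iQ:

```python
# Fuel-economy unit conversions between the systems used in this article.
LITRES_PER_IMP_GALLON = 4.54609
LITRES_PER_US_GALLON = 3.785411784
KM_PER_MILE = 1.609344

def km_per_litre_to_mpg(km_per_l, litres_per_gallon):
    """Convert km/L to miles per (imperial or US) gallon."""
    return km_per_l * litres_per_gallon / KM_PER_MILE

def mpg_imp_to_l_per_100km(mpg_imp):
    """Convert imperial mpg to litres per 100 km."""
    return 100.0 * LITRES_PER_IMP_GALLON / (mpg_imp * KM_PER_MILE)

# 1955 Japanese "national car" target of 30 km/L:
print(round(km_per_litre_to_mpg(30, LITRES_PER_IMP_GALLON)))  # 85 (mpg-imp)
print(round(km_per_litre_to_mpg(30, LITRES_PER_US_GALLON)))   # 71 (mpg-US)

# Toyota iQ at 65.69 mpg-imp:
print(round(mpg_imp_to_l_per_100km(65.69), 1))  # 4.3 (L/100 km)
```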
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UDP-glucose 6-dehydrogenase** UDP-glucose 6-dehydrogenase: UDP-glucose 6-dehydrogenase is a cytosolic enzyme that in humans is encoded by the UGDH gene. The protein encoded by this gene converts UDP-glucose to UDP-glucuronate and thereby participates in the biosynthesis of glycosaminoglycans such as hyaluronan, chondroitin sulfate, and heparan sulfate. These glycosylated compounds are common components of the extracellular matrix and likely play roles in signal transduction, cell migration, and cancer growth and metastasis. The expression of this gene is up-regulated by transforming growth factor beta and down-regulated by hypoxia. This enzyme participates in four metabolic pathways: pentose and glucuronate interconversions, ascorbate and aldarate metabolism, starch and sucrose metabolism, and nucleotide sugars metabolism.
UDP-glucose 6-dehydrogenase: Loss of UGDH has recently been implicated in epileptic encephalopathy in humans.
Nomenclature: This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is UDP-glucose:NAD+ 6-oxidoreductase. Other names in common use include: UDP-glucose dehydrogenase, uridine diphosphoglucose dehydrogenase, UDPG dehydrogenase, UDPG:NAD oxidoreductase, UDP-alpha-D-glucose:NAD oxidoreductase, UDP-glucose:NAD+ oxidoreductase, uridine diphosphate glucose dehydrogenase, UDP-D-glucose dehydrogenase, and uridine diphosphate D-glucose dehydrogenase.
Biochemistry: In enzymology, UDP-glucose 6-dehydrogenase (EC 1.1.1.22) is an enzyme that catalyzes the chemical reaction UDP-glucose + 2 NAD+ + H2O ⇌ UDP-glucuronate + 2 NADH + 2 H+. The three substrates of this enzyme are UDP-glucose, NAD+, and H2O, whereas its three products are UDP-glucuronate, NADH, and H+.
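Restating the reaction above as a balanced equation in LaTeX (a formatting aid only; the stoichiometry is exactly as given in the text, with the two-fold oxidation consuming two equivalents of NAD+):

```latex
\text{UDP-glucose} + 2\,\text{NAD}^{+} + \text{H}_2\text{O}
  \;\rightleftharpoons\;
\text{UDP-glucuronate} + 2\,\text{NADH} + 2\,\text{H}^{+}
```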
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DOME MicroDataCenter** DOME MicroDataCenter: A MicroDataCenter contains compute, storage, power, cooling and networking in a very small volume, and is sometimes also called a "datacenter-in-a-box". The term has been used to describe various incarnations of this idea over the past 20 years. In late 2017 a very tightly integrated version was shown at the Supercomputing 2017 conference: the DOME MicroDataCenter. Its key features are hot-water cooling, a fully solid-state design, and construction from commodity components and standards only.
DOME project: DOME is a Dutch government-funded project between IBM and ASTRON, in the form of a public–private partnership, to develop technology roadmaps targeting the Square Kilometre Array (SKA), the world's largest planned radio telescope. It will be built in Australia and South Africa during the late 2010s and early 2020s. One of the seven DOME projects is the MicroDataCenter (previously called Microservers): servers that are small, inexpensive and computationally efficient. The goal for the MicroDataCenter is the capability to be used both near the SKA antennas, to do early processing of the data, and inside the much larger supercomputers that will do the big-data analysis. These servers can be deployed in very large numbers and in environmentally extreme locations, such as the deserts where the antennas will be located, and not only in cooled datacenters.
DOME project: A common misconception is that microservers offer only low performance; this stems from the first microservers being based on Atom processors or early 32-bit ARM cores. The aim of the DOME MicroDataCenter project is to deliver high performance at low cost and low power. A key characteristic of a MicroDataCenter is its packaging: a very small form factor that allows short communication distances. This is based on using microservers, eliminating all unnecessary components by integrating as much as possible of the traditional compute server into a single SoC (server on a chip). A microserver will not deliver the highest possible single-thread performance; instead, it offers an energy-optimised design point at medium-high delivered performance. In 2015 several high-performance SoCs started appearing on the market, and by late 2016 a wider choice was available, such as Qualcomm's Hydra. At server level, the 28 nm T4240-based microserver card offers twice the operations per joule of an energy-optimised 22 nm FinFET Xeon E3-1230L v3-based server, while delivering 40% more aggregate performance. The comparison is at server-board level, not at chip level.
Design: In 2012 a team at IBM Research Zürich led by Ronald P. Luijten started pursuing a very computationally dense and energy-efficient 64-bit computer design based on commodity components, running Linux. A system-on-chip (SoC) design, where most necessary components fit on a single chip, suited these goals best, and a definition of "microserver" emerged in which essentially a complete motherboard (except RAM, boot flash and power-conversion circuits) would fit on a chip. ARM, x86 and Power ISA-based solutions were investigated, and a solution based on Freescale's Power ISA-based dual-core P5020 / quad-core P5040 processor came out on top at the time of decision in 2012.
Design: The concept is similar to IBM's Blue Gene supercomputers, but the DOME microserver is designed around off-the-shelf components and runs standard operating systems and protocols, to keep development and component costs down. The complete microserver is based on the same form factor as a standard FB-DIMM socket.
The idea is to fit 128 of these compute cards within a 19" rack 2U drawer, together with network switchboards for external storage and communication. Cooling will be provided via the Aquasar hot-water cooling solution pioneered by the SuperMUC supercomputer in Germany. The design of the first prototype was released to the DOME user community on July 3, 2014. The P5040 SoC, 16 GB of DRAM and a few control chips (such as the PSoC 3 from Cypress, used for monitoring, debugging and booting) comprise a complete compute node with physical dimensions of 133×55 mm. The card's pins are used for a SATA port, five 1 Gbit and two 10 Gbit Ethernet ports, one SD card interface, one USB 2 interface, and power. The compute card operates within a 35 W power envelope, with headroom up to 70 W. The bill of materials is around $500 for the single prototype.
In late 2013 a new SoC was chosen for the second prototype: Freescale's newer 12-core, 24-thread T4240 is significantly more powerful and, at a 43 W TDP, operates within a power envelope comparable to the P5040's. This new microserver card offers 24 GB of DRAM and is both powered and cooled through the copper heat spreader. It is being built and validated for the larger-scale deployment in the full 2U drawer in early 2017. To support native 10 GbE signalling, the DIMM connector was replaced with the SPD08 connector.
Design: In late 2016 the production version of the T4240-based microserver card was completed. Using the same form factor and the same connector (and thus plug-compatible), a second server prototype board, based on the NXP (formerly Freescale) LS2088A SoC (with eight ARMv8 A72 cores), was completed around the same time.
Design: History. The smallest-form-factor micro-datacenter technology was pioneered by the DOME microserver team in Zurich. The computing consists of multiple microservers, and the networking consists of at least one micro switch module. The first live demo of an 8-way prototype system based on the P5040ZMS was performed at Supercomputing 2015, as part of the emerging-technologies display, followed by a live demo at CeBIT in March 2016. An 8-way HPL (LinPack) run was demonstrated at CeBIT, hence the name 'LinPack-in-a-shoebox'.
Design: In 2017 the team finished the production-ready version, which contains 64 T4240ZMS servers, two 10/40 GbE switches, storage, power and cooling in a 2U rack unit. A 32-way carrier (half of the 2U rack unit) can be populated with 24 T4240ZMS servers, 8 FPGA boards, a switch, storage, power and cooling. This technology increases density 20-fold compared with traditionally packaged datacenter technology while delivering the same aggregate performance. This is achieved by a new top-down design: minimising component count, using an SoC instead of a traditional CPU, and dense packaging enabled by the use of hot-water cooling.
Future: A startup company – still in stealth mode – was in the process of obtaining a technology licence from IBM to bring the technology to market in the first half of 2018. Unfortunately, the startup was unable to obtain seed funding to start productization. The project, including all resources, has been mothballed, and Luijten has retired from the Zurich research lab. (August 2020)
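As a back-of-envelope check on the density claim, a small Python sketch aggregating the per-card figures quoted above (64 cards per 2U drawer, each a 12-core/24-thread T4240 with 24 GB of DRAM and a 43 W TDP). Switch, storage and cooling overheads are ignored, so these are indicative totals rather than measured values:

```python
# Aggregate figures for the 2U DOME MicroDataCenter drawer, using only
# the per-card numbers quoted in the text above.
CARDS_PER_2U = 64
THREADS_PER_CARD = 24        # T4240: 12 cores, 24 hardware threads
DRAM_GB_PER_CARD = 24
TDP_W_PER_CARD = 43

threads = CARDS_PER_2U * THREADS_PER_CARD              # 1536 threads
dram_gb = CARDS_PER_2U * DRAM_GB_PER_CARD              # 1536 GB of DRAM
soc_power_kw = CARDS_PER_2U * TDP_W_PER_CARD / 1000.0  # ~2.75 kW SoC TDP

print(f"{threads} threads, {dram_gb} GB DRAM, {soc_power_kw:.2f} kW per 2U")
```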
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mixed martial arts in Australia** Mixed martial arts in Australia: Mixed martial arts (MMA) has developed in Australia from a wide cross-section of sporting and martial arts disciplines to become the most popular combat sport in Australia. History: The influence of traditional martial arts, Olympic wrestling and Brazilian jiu-jitsu has shaped MMA in Australia, along with the combat sports of boxing and kickboxing/Muay Thai. History: Between 1905 and 1914, Australians witnessed a prizefighting novelty called "all-in", which started with "jiu jitsu" demonstrations and developed into a no-holds-barred fighting phenomenon. One of the most notable participants was Sam McVea, an African-American heavyweight boxing champion who participated in a highly publicised "all-in" fight in Lismore, Australia, against 'Prof.' Stevenson in 1913. However, the early hybrid did not last, and during most of the 20th century traditional martial arts schools and striking-based gyms existed apart, as did amateur wrestling in Australia. Traditional martial arts in general are well attended and feature in the top ten organised sports for children, for both males and females, in Australia. In the 1990s the three grappling disciplines of BJJ, amateur wrestling and catch wrestling provided the base for the modern sport. Mixed martial arts, in its recognized and regulated form, came to Australia via the Ultimate Fighting Championship's emergence in 1993, but was predated by vale tudo in Brazil and shoot wrestling in Japan. MMA gained an underground following through video and bootleg copies of UFC events in the mid-1990s. History: The global explosion of BJJ through Gracie BJJ schools was assisted by the success of Royce Gracie at UFC 1–4, but BJJ was first introduced into Australia by John Will in 1989. Initially, dedicated Australian practitioners travelled overseas to gain their belts and returned to start schools. Mixed martial arts training and gyms began to evolve. The long history of boxing and the more recent variant of kickboxing/Muay Thai in Australia provided a large injection of fighters with a striking base. The sport of MMA has been described as the fastest growing sport of the twenty-first century. Sanctioning: Across the states and territories of Australia there are different sanctioning bodies and rules. Sanctioning bodies include: Combat Sports Authority (NSW), Professional Boxing and Combat Sports Board (VIC) and Combat Sports Commission of Western Australia (WA). Promotions: Ultimate Fighting Championship UFC 110: Nogueira vs. Velasquez (2010) - Sydney UFC 127: Penn vs. Fitch (2011) - Sydney UFC on FX: Alves vs. Kampmann (2012) - Sydney UFC on FX: Sotiropoulos vs. Pearson (2012) - Gold Coast UFC Fight Night: Hunt vs. Bigfoot (2013) - Brisbane UFC Fight Night: Bisping vs. Rockhold (2014) - Sydney UFC Fight Night: Miocic vs. Hunt (2015) - Adelaide UFC 193: Rousey vs. Holm (2015) - Melbourne UFC Fight Night: Hunt vs. Mir (2016) - Brisbane UFC Fight Night: Whittaker vs. Brunson (2016) - Melbourne UFC Fight Night: Werdum vs. Tybura (2017) - Sydney UFC 221: Romero vs. Rockhold (2018) - Perth UFC Fight Night: dos Santos vs. Tuivasa (2018) - Adelaide UFC 234: Adesanya vs. Silva (2019) - Melbourne UFC 243: Whittaker vs. Adesanya (2019) - Melbourne UFC 284: Makhachev vs. Volkanovski (2023) - Perth UFC 293: Adesanya vs.
Strickland (2023) - Sydney Local MMA Promotions: Aftershock MMA - Brisbane, Queensland - Professional and Amateur Fight Promotion Australian Fighting Championship - Melbourne, Victoria BRACE Carnage in the Cage (CITC) - Mackay, Queensland, Australia Coastal Combat - Sunshine Coast, Queensland Eternal MMA - Gold Coast, Queensland Fightworld Cup - Gold Coast, Queensland HAMMA Fight Night - Brisbane, Queensland Hex Fight Series - Melbourne, VIC Minotaur Mixed Martial Arts (Melbourne Fight Club) - Melbourne Nitro MMA Storm Damage - Australian Capital Territory Superfight MMA (Superfight Australia) aka (Mach1FightClub) - New South Wales Unarmed Combat Unleashed (UCU) - Emerald, Queensland Urban Fight Night - Liverpool, New South Wales Wollongong Wars (WW) - Wollongong XFC - Australian and New Zealand professional and amateur Xtreme Impact Fighting Championships (XIFC) - Toowoomba, Queensland Demolition Fight Series - Sunshine, Victoria Past: Cage Fighting Championship (CFC) 2007-2012 TUFFA MMA 2009-2011 Impact Fighting Championships (2010) Xtreme MMA (XMMA) 2009-2010 Amateur MMA Organizations: Some may have one or two pro fights, but their focus is on amateur MMA. Promotions: Amateur Cage Fighting Australia (ACFA) - Gold Coast, Queensland, Australia International Mixed Martial Arts Federation of Australia Reality television: The Ultimate Fighter: The Smashes Wimp 2 Warrior The Ultimate Fighter Nations: Canada vs. Australia Gyms: Some notable mixed martial arts training camps and gyms: Gamebred Academy - Brisbane, Queensland Fightcross MMA - Brisbane, Queensland Absolute MMA - Melbourne, Victoria Australian Elite Team Australian Top Team - Sydney, New South Wales Bulldog MMA Parramatta Igor MMA Sydney Dominance MMA - Melbourne, Victoria ELEV8 MMA - Melbourne, Victoria Extreme MMA - Melbourne, Victoria Hammers Gym KMA Top Team - Sydney, New South Wales Integrated MMA - Brisbane, Queensland Kings Academy of Martial Arts Riddlers Gym - Perth, Western Australia Sinosic Perosh Martial Arts VT1 Mixed Martial Arts Academy - Sydney, New South Wales 99ers MMA Gym Kumiai Ryu Martial Arts Langes Mixed Martial Arts - Sydney, New South Wales Perth Kickboxing Academy Blue Mountains Martial Arts Centre Westside MMA - Caroline Springs, Victoria The MMA Clinic - Perth, Western Australia Scrappy MMA - Perth, Western Australia Media outlets: Fight News Australia Submission Radio From The Stands
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nightmare flip** Nightmare flip: The nightmare flip (also known as a nightmare kickflip, hyperflip or nightmare varial flip) is an aerial skateboarding trick. Description: To perform the nightmare flip, the skateboarder kicks their board in order to make it flip 720 degrees along the board's long axis while the board also turns in a 180-degree motion toward its toe edge, essentially combining a double kickflip and a pop shove-it. This trick is also called a varial double flip. Inventor: It was most likely invented by Rodney Mullen, a pioneering early skateboarder. However, it is also possible that it is simply a variation on one of his tricks, created by someone else.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Macintosh Common Lisp** Macintosh Common Lisp: Macintosh Common Lisp (MCL) is an implementation of, and IDE for, the Common Lisp programming language. Various versions of MCL run under the classic Mac OS (on m68k and PPC) and Mac OS X. Versions of MCL up to and including 5.1 are proprietary. Version 5.2 has been open-sourced. In 2009 a different version of MCL was open-sourced: RMCL. RMCL is based on MCL 5.1 and runs under Rosetta on Intel-based Macs. Features of MCL: MCL was famous for its integration with the Macintosh Toolbox (later: Apple Carbon), which allowed direct access to most of the Mac OS functionality from Lisp. This was achieved with a low-level interface that allowed direct manipulation of native Mac OS data structures from Lisp, together with a high-level interface that was more convenient to use. In a 2001 article in Dr. Dobb's Journal, Peter Norvig wrote that "MCL is my favorite IDE on the Macintosh platform for any language and is a serious rival to those on other platforms". History of MCL: Development on MCL began in 1984. History of MCL: Over its history, MCL has been known under different names. Running on 68k-based Apple Macintosh computers: 1987, Coral Common Lisp; 1987, Macintosh Allegro Common Lisp; 1988, Apple Macintosh Common Lisp. Running on PowerPC-based Apple Macintosh computers: 1994, Digitool Macintosh Common Lisp. It has also spawned at least one separately maintained fork: 1998, Clozure CL (CCL), known previously as OpenMCL. In 2007 MCL 5.2 was open-sourced. History of MCL: In 2009 RMCL (MCL running under Rosetta) was published as open source. Since 2009 an open-source version of RMCL (based on MCL 5.2) has been hosted at Google Code. This version runs under Rosetta (Apple's PPC-to-Intel code translator, an optional install under Mac OS X 10.6).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Binomial proportion confidence interval** Binomial proportion confidence interval: In statistics, a binomial proportion confidence interval is a confidence interval for the probability of success calculated from the outcome of a series of success–failure experiments (Bernoulli trials). In other words, a binomial proportion confidence interval is an interval estimate of a success probability $p$ when only the number of experiments $n$ and the number of successes $n_S$ are known. There are several formulas for a binomial confidence interval, but all of them rely on the assumption of a binomial distribution. In general, a binomial distribution applies when an experiment is repeated a fixed number of times, each trial of the experiment has two possible outcomes (success and failure), the probability of success is the same for each trial, and the trials are statistically independent. Because the binomial distribution is a discrete probability distribution (i.e., not continuous) and difficult to calculate for large numbers of trials, a variety of approximations are used to calculate this confidence interval, all with their own tradeoffs in accuracy and computational intensity. Binomial proportion confidence interval: A simple example of a binomial distribution is the set of various possible outcomes, and their probabilities, for the number of heads observed when a coin is flipped ten times. The observed binomial proportion is the fraction of the flips that turn out to be heads. Given this observed proportion, the confidence interval for the true probability of the coin landing on heads is a range of possible proportions, which may or may not contain the true proportion. A 95% confidence interval for the proportion, for instance, will contain the true proportion 95% of the times that the procedure for constructing the confidence interval is employed. Normal approximation interval or Wald interval: A commonly used formula for a binomial confidence interval relies on approximating the distribution of error about a binomially-distributed observation, $\hat{p}$, with a normal distribution. This approximation is based on the central limit theorem and is unreliable when the sample size is small or the success probability is close to 0 or 1. Using the normal approximation, the success probability $p$ is estimated as
$$\hat{p} \pm z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}},$$
or the equivalent
$$\frac{n_S}{n} \pm \frac{z}{n} \sqrt{\frac{n_S n_F}{n}},$$
where $\hat{p} = n_S/n$ is the proportion of successes in a Bernoulli trial process, measured with $n$ trials yielding $n_S$ successes and $n_F = n - n_S$ failures, and $z$ is the $1 - \frac{\alpha}{2}$ quantile of a standard normal distribution (i.e., the probit) corresponding to the target error rate $\alpha$. For a 95% confidence level, $\alpha = 0.05$, so $1 - \frac{\alpha}{2} = 0.975$ and $z = 1.96$. From this one finds two problems. First, for $\hat{p}$ approaching unity (or zero), the interval narrows to zero width (implying certainty). Second, for values of $\hat{p} < \frac{z^2}{z^2+n}$ (or equivalently for $1-\hat{p} < \frac{z^2}{z^2+n}$), the interval boundaries exceed $[0,1]$ (overshoot). Normal approximation interval or Wald interval: An important theoretical derivation of this confidence interval involves the inversion of a hypothesis test. Under this formulation, the confidence interval represents those values of the population parameter that would have large p-values if they were tested as a hypothesized population proportion. The collection of values, $\theta$, for which the normal approximation is valid can be represented as
$$\left\{ \theta \,\middle|\, y \le \frac{\hat{p} - \theta}{\sqrt{\frac{1}{n}\hat{p}(1-\hat{p})}} \le z_{\alpha/2} \right\},$$
where $y$ is the $\frac{\alpha}{2}$ quantile of a standard normal distribution.
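A short Python sketch may make the Wald formula and its two failure modes concrete. This is an illustrative implementation, not library code; the function name is invented here, and only the standard library's NormalDist is used for the probit.

```python
import math
from statistics import NormalDist

def wald_interval(n_s: int, n: int, alpha: float = 0.05):
    """Normal-approximation (Wald) interval: p_hat +/- z*sqrt(p_hat*(1-p_hat)/n)."""
    p_hat = n_s / n
    z = NormalDist().inv_cdf(1 - alpha / 2)  # 1 - alpha/2 quantile, e.g. 1.96 for 95%
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    # Bounds are returned unclipped so the two failure modes stay visible.
    return p_hat - half_width, p_hat + half_width

print(wald_interval(1, 10))   # lower bound is negative: overshoot below 0
print(wald_interval(0, 10))   # (0.0, 0.0): zero-width interval at p_hat = 0
```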
Normal approximation interval or Wald interval: Since the test in the middle of the inequality is a Wald test, the normal approximation interval is sometimes called the Wald interval or Wald method, after Abraham Wald, but it was first described by Pierre-Simon Laplace in 1812. Normal approximation interval or Wald interval: Bracketing the confidence interval. Extending the normal approximation and Wald–Laplace interval concepts, Michael Short has shown that inequalities on the approximation error between the binomial distribution and the normal distribution can be used to accurately bracket the estimate of the confidence interval around $\hat{p}$:
$$\frac{k + C_{L1}}{n + z^2} - z\sqrt{\frac{nk - k^2 + C_{L2}n - C_{L3}k + C_{L4}}{n(n+z^2)^2}} \;\le\; \hat{p} \;\le\; \frac{k + C_{U1}}{n + z^2} + z\sqrt{\frac{nk - k^2 + C_{U2}n - C_{U3}k + C_{U4}}{n(n+z^2)^2}}$$
where $\hat{p}$ is again the (unknown) proportion of successes in a Bernoulli trial process, measured with $n$ trials yielding $k$ successes, $z$ is the $1 - \frac{\alpha}{2}$ quantile of a standard normal distribution (i.e., the probit) corresponding to the target error rate $\alpha$, and the constants $C_{L1}, C_{L2}, C_{L3}, C_{L4}, C_{U1}, C_{U2}, C_{U3}$ and $C_{U4}$ are simple algebraic functions of $z$. For a fixed $\alpha$ (and hence $z$), the above inequalities give easily computed one- or two-sided intervals which bracket the exact binomial upper and lower confidence limits corresponding to the error rate $\alpha$. Standard error of a proportion estimation when using weighted data: Let there be a simple random sample $X_1, \ldots, X_n$ where each $X_i$ is i.i.d. from a Bernoulli($p$) distribution and weight $w_i$ is the weight for each observation. Standardize the (positive) weights $w_i$ so they sum to 1. The weighted sample proportion is $\hat{p} = \sum_{i=1}^{n} w_i X_i$. Since the $X_i$ are independent and each one has variance $\operatorname{Var}(X_i) = p(1-p)$, the sampling variance of the proportion is
$$\operatorname{Var}(\hat{p}) = \sum_{i=1}^{n} \operatorname{Var}(w_i X_i) = p(1-p) \sum_{i=1}^{n} w_i^2.$$
The standard error of $\hat{p}$ is the square root of this quantity. Because we do not know $p(1-p)$, we have to estimate it. Although there are many possible estimators, a conventional one is to use $\hat{p}$, the sample mean, and plug this into the formula. That gives
$$\operatorname{SE}(\hat{p}) = \sqrt{\hat{p}(1-\hat{p}) \sum_{i=1}^{n} w_i^2}.$$
For unweighted data, $w_i = 1/n$, giving $\sum_{i=1}^{n} w_i^2 = 1/n$. The SE becomes $\sqrt{\hat{p}(1-\hat{p})/n}$, leading to the familiar formulas and showing that the calculation for weighted data is a direct generalization of them. Wilson score interval: The Wilson score interval is an improvement over the normal approximation interval in multiple respects. It was developed by Edwin Bidwell Wilson (1927). Unlike the symmetric normal approximation interval (above), the Wilson score interval is asymmetric. It does not suffer from the problems of overshoot and zero-width intervals that afflict the normal interval, and it may be safely employed with small samples and skewed observations. The observed coverage probability is consistently closer to the nominal value, $1 - \alpha$. Like the normal interval, the interval can be computed directly from a formula.
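The weighted standard error above translates directly into code. A minimal sketch (function name invented for illustration), which reduces to the familiar sqrt(p_hat*(1-p_hat)/n) when all weights are equal:

```python
import math

def weighted_proportion_se(x, w):
    """Estimated standard error of a weighted sample proportion.

    x: sequence of 0/1 Bernoulli observations; w: positive weights.
    """
    total = sum(w)
    w = [wi / total for wi in w]                  # standardize weights to sum to 1
    p_hat = sum(wi * xi for wi, xi in zip(w, x))  # weighted sample proportion
    return math.sqrt(p_hat * (1 - p_hat) * sum(wi * wi for wi in w))

# With equal weights this matches sqrt(p_hat*(1-p_hat)/n):
x = [1, 0, 1, 1, 0]
print(weighted_proportion_se(x, [1] * 5))   # ~0.2191
print(math.sqrt(0.6 * 0.4 / 5))             # same value
```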
Wilson score interval: Wilson started with the normal approximation to the binomial,
$$z \approx \frac{p - \hat{p}}{\sigma_n},$$
with the analytic formula for the sample standard deviation given by
$$\sigma_n = \sqrt{\frac{p(1-p)}{n}}.$$
Combining the two, and squaring out the radical, gives an equation that is quadratic in $p$:
$$(\hat{p} - p)^2 = z^2 \cdot \frac{p(1-p)}{n}.$$
Transforming the relation into a standard-form quadratic equation for $p$, treating $\hat{p}$ and $n$ as known values from the sample (see prior section), and using the value of $z$ that corresponds to the desired confidence for the estimate of $p$ gives this:
$$\left(1 + \frac{z^2}{n}\right) p^2 - \left(2\hat{p} + \frac{z^2}{n}\right) p + \hat{p}^2 = 0,$$
where all of the values in parentheses are known quantities. Wilson score interval: The solution for $p$ estimates the upper and lower limits of the confidence interval for $p$. Hence the probability of success $p$ is estimated by
$$p \approx (w^-, w^+) = \frac{1}{1 + \frac{z^2}{n}} \left( \hat{p} + \frac{z^2}{2n} \right) \pm \frac{z}{1 + \frac{z^2}{n}} \sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z^2}{4n^2}}$$
or the equivalent
$$p \approx \frac{n_S + \frac{1}{2}z^2}{n + z^2} \pm \frac{z}{n + z^2} \sqrt{\frac{n_S n_F}{n} + \frac{z^2}{4}}.$$
The practical observation from using this interval is that it has good properties even for a small number of trials and/or an extreme probability. Wilson score interval: Intuitively, the center value of this interval is the weighted average of $\hat{p}$ and $\frac{1}{2}$, with $\hat{p}$ receiving greater weight as the sample size increases. Formally, the center value corresponds to using a pseudocount of $\frac{1}{2}z^2$, half the square of the number of standard deviations of the confidence interval: add this number to both the count of successes and of failures to yield the estimate of the ratio. For the common two standard deviations in each direction interval (approximately 95% coverage, which itself is approximately 1.96 standard deviations), this yields the estimate $(n_S + 2)/(n + 4)$, which is known as the "plus four rule". Wilson score interval: Although the quadratic can be solved explicitly, in most cases Wilson's equations can also be solved numerically using the fixed-point iteration
$$p_{k+1} = \hat{p} \pm z \sqrt{\frac{p_k(1-p_k)}{n}}$$
with $p_0 = \hat{p}$. The Wilson interval can also be derived from the single sample z-test or Pearson's chi-squared test with two categories. The resulting interval,
$$\left\{ \theta \,\middle|\, y \le \frac{\hat{p} - \theta}{\sqrt{\frac{1}{n}\theta(1-\theta)}} \le z \right\},$$
can then be solved for $\theta$ to produce the Wilson score interval. The test in the middle of the inequality is a score test. Wilson score interval: The interval equality principle. Since the interval is derived by solving from the normal approximation to the binomial, the Wilson score interval $(w^-, w^+)$ has the property of being guaranteed to obtain the same result as the equivalent z-test or chi-squared test. This property can be visualised by plotting the probability density function for the Wilson score interval (see Wallis 2021: 297–313) and then plotting a normal pdf at each bound. The tail areas of the resulting Wilson and normal distributions, representing the chance of a significant result in that direction, must be equal. The continuity-corrected Wilson score interval and the Clopper–Pearson interval are also compliant with this property. The practical import is that these intervals may be employed as significance tests, with identical results to the source test, and new tests may be derived by geometry. Wilson score interval with continuity correction: The Wilson interval may be modified by employing a continuity correction, in order to align the minimum coverage probability, rather than the average coverage probability, with the nominal value, $1 - \alpha$. Just as the Wilson interval mirrors Pearson's chi-squared test, the Wilson interval with continuity correction mirrors the equivalent Yates' chi-squared test.
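A sketch of the closed-form Wilson bounds just derived (again an illustrative implementation, not a library routine); note that, unlike the Wald sketch earlier, the bounds stay inside [0, 1] even at p_hat = 0:

```python
import math
from statistics import NormalDist

def wilson_interval(n_s: int, n: int, alpha: float = 0.05):
    """Wilson score interval via the closed-form solution of the quadratic above."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p_hat = n_s / n
    denom = 1 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return centre - half_width, centre + half_width

print(wilson_interval(0, 10))   # roughly (0.0, 0.28): no zero-width collapse
print(wilson_interval(1, 10))   # lower bound stays positive: no overshoot
```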
The following formulae for the lower and upper bounds of the Wilson score interval with continuity correction $(w_{cc}^-, w_{cc}^+)$ are derived from Newcombe (1998):
$$w_{cc}^- = \max\left\{0,\; \frac{2n\hat{p} + z^2 - \left[z\sqrt{z^2 - \frac{1}{n} + 4n\hat{p}(1-\hat{p}) + (4\hat{p} - 2)} + 1\right]}{2(n + z^2)}\right\}$$
$$w_{cc}^+ = \min\left\{1,\; \frac{2n\hat{p} + z^2 + \left[z\sqrt{z^2 - \frac{1}{n} + 4n\hat{p}(1-\hat{p}) - (4\hat{p} - 2)} + 1\right]}{2(n + z^2)}\right\}$$
However, if $\hat{p} = 0$, $w_{cc}^-$ must be taken as 0; if $\hat{p} = 1$, $w_{cc}^+$ is then 1. Wallis (2021) identifies a simpler method for computing continuity-corrected Wilson intervals that employs functions. For the lower bound, let $\mathrm{WilsonLower}(\hat{p}, n, \alpha/2) = w^-$, where $\alpha$ is the selected error level for $z$. Then $w_{cc}^- = \mathrm{WilsonLower}\!\left(\max\!\left(\hat{p} - \frac{1}{2n},\, 0\right), n, \alpha/2\right)$. This method has the advantage of being further decomposable. Jeffreys interval: The Jeffreys interval has a Bayesian derivation, but it has good frequentist properties. In particular, it has coverage properties that are similar to those of the Wilson interval, but it is one of the few intervals with the advantage of being equal-tailed (e.g., for a 95% confidence interval, the probabilities of the interval lying above or below the true value are both close to 2.5%). In contrast, the Wilson interval has a systematic bias such that it is centred too close to $p = 0.5$. The Jeffreys interval is the Bayesian credible interval obtained when using the non-informative Jeffreys prior for the binomial proportion $p$. The Jeffreys prior for this problem is a Beta distribution with parameters (1/2, 1/2), which is a conjugate prior. After observing $x$ successes in $n$ trials, the posterior distribution for $p$ is a Beta distribution with parameters ($x$ + 1/2, $n$ − $x$ + 1/2). Jeffreys interval: When $x \neq 0$ and $x \neq n$, the Jeffreys interval is taken to be the 100(1 − α)% equal-tailed posterior probability interval, i.e., the α/2 and 1 − α/2 quantiles of a Beta distribution with parameters ($x$ + 1/2, $n$ − $x$ + 1/2). These quantiles need to be computed numerically, although this is reasonably simple with modern statistical software. Jeffreys interval: In order to avoid the coverage probability tending to zero when $p \to 0$ or 1, when $x = 0$ the upper limit is calculated as before but the lower limit is set to 0, and when $x = n$ the lower limit is calculated as before but the upper limit is set to 1. Clopper–Pearson interval: The Clopper–Pearson interval is an early and very common method for calculating binomial confidence intervals. This is often called an 'exact' method, as it attains the nominal coverage level in an exact sense, meaning that the coverage level is never less than the nominal $1 - \alpha$. The Clopper–Pearson interval can be written as
$$S_{\le} \cap S_{\ge} \quad \text{or equivalently} \quad \left(\inf S_{\ge},\; \sup S_{\le}\right)$$
with
$$S_{\le} := \left\{ p \,\middle|\, P\!\left[\operatorname{Bin}(n;p) \le x\right] > \frac{\alpha}{2} \right\} \quad \text{and} \quad S_{\ge} := \left\{ p \,\middle|\, P\!\left[\operatorname{Bin}(n;p) \ge x\right] > \frac{\alpha}{2} \right\},$$
where $0 \le x \le n$ is the number of successes observed in the sample and $\operatorname{Bin}(n;p)$ is a binomial random variable with $n$ trials and probability of success $p$. Clopper–Pearson interval: Equivalently we can say that the Clopper–Pearson interval is $\left(\frac{x}{n} - \varepsilon_1,\ \frac{x}{n} + \varepsilon_2\right)$ with confidence level $1 - \alpha$ if $\varepsilon_i$ is the infimum of those values such that the following tests of hypothesis succeed with significance $\frac{\alpha}{2}$: $H_0\colon p = \frac{x}{n} - \varepsilon_1$ with $H_A\colon p > \frac{x}{n} - \varepsilon_1$; and $H_0\colon p = \frac{x}{n} + \varepsilon_2$ with $H_A\colon p < \frac{x}{n} + \varepsilon_2$. Because of a relationship between the binomial distribution and the beta distribution, the Clopper–Pearson interval is sometimes presented in an alternate format that uses quantiles from the beta distribution. Clopper–Pearson interval:
$$B\!\left(\frac{\alpha}{2};\, x,\, n - x + 1\right) < p < B\!\left(1 - \frac{\alpha}{2};\, x + 1,\, n - x\right)$$
where $x$ is the number of successes, $n$ is the number of trials, and $B(p; v, w)$ is the $p$th quantile from a beta distribution with shape parameters $v$ and $w$.
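Since the Jeffreys interval above is just a pair of Beta quantiles, it is a few lines in Python. This sketch assumes SciPy, whose beta.ppf quantile function the article mentions below, and applies the stated boundary adjustments at x = 0 and x = n:

```python
from scipy.stats import beta  # beta.ppf(q, a, b): quantiles of Beta(a, b)

def jeffreys_interval(x: int, n: int, alpha: float = 0.05):
    """Equal-tailed posterior interval of Beta(x + 1/2, n - x + 1/2)."""
    lower = 0.0 if x == 0 else beta.ppf(alpha / 2, x + 0.5, n - x + 0.5)
    upper = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 0.5, n - x + 0.5)
    return lower, upper

print(jeffreys_interval(5, 20))  # 95% interval for 5 successes in 20 trials
```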
Thus the interval is $(p_{\min}, p_{\max})$, where:
$$\int_0^{p_{\min}} \frac{t^{x-1}(1-t)^{n-x}}{B(x,\, n-x+1)}\, dt = \frac{\alpha}{2} \qquad \text{and} \qquad \int_0^{p_{\max}} \frac{t^{x}(1-t)^{n-x-1}}{B(x+1,\, n-x)}\, dt = 1 - \frac{\alpha}{2}.$$
The binomial proportion confidence interval is then $(p_{\min}, p_{\max})$, as follows from the relation between the binomial distribution cumulative distribution function and the regularized incomplete beta function. Clopper–Pearson interval: When $x$ is either 0 or $n$, closed-form expressions for the interval bounds are available: when $x = 0$ the interval is $\left(0,\, 1 - \left(\frac{\alpha}{2}\right)^{\frac{1}{n}}\right)$ and when $x = n$ it is $\left(\left(\frac{\alpha}{2}\right)^{\frac{1}{n}},\, 1\right)$. The beta distribution is, in turn, related to the F-distribution, so a third formulation of the Clopper–Pearson interval can be written using F quantiles:
$$\left(1 + \frac{n - x + 1}{x\, F\!\left[\frac{\alpha}{2};\, 2x,\, 2(n - x + 1)\right]}\right)^{-1} < p < \left(1 + \frac{n - x}{(x + 1)\, F\!\left[1 - \frac{\alpha}{2};\, 2(x + 1),\, 2(n - x)\right]}\right)^{-1}$$
where $x$ is the number of successes, $n$ is the number of trials, and $F(c; d_1, d_2)$ is the $c$ quantile from an F-distribution with $d_1$ and $d_2$ degrees of freedom. The Clopper–Pearson interval is an exact interval since it is based directly on the binomial distribution rather than any approximation to the binomial distribution. This interval never has less than the nominal coverage for any population proportion, but that means that it is usually conservative. For example, the true coverage rate of a 95% Clopper–Pearson interval may be well above 95%, depending on $n$ and $p$. Thus the interval may be wider than it needs to be to achieve 95% confidence, and wider than other intervals. In contrast, it is worth noting that other confidence intervals may have coverage levels that are lower than the nominal $1 - \alpha$; i.e., the normal approximation (or "standard") interval, Wilson interval, Agresti–Coull interval, etc., with a nominal coverage of 95% may in fact cover less than 95%, even for large sample sizes. The definition of the Clopper–Pearson interval can also be modified to obtain exact confidence intervals for different distributions. For instance, it can also be applied to the case where the samples are drawn without replacement from a population of a known size, instead of repeated draws of a binomial distribution. In this case, the underlying distribution would be the hypergeometric distribution. Clopper–Pearson interval: The interval boundaries are easily computed with numerical-methods functions like qbeta in R and scipy.stats.beta.ppf in Python. Agresti–Coull interval: The Agresti–Coull interval is another approximate binomial confidence interval. Given $n_S$ successes in $n$ trials, define
$$\tilde{n} = n + z^2 \quad \text{and} \quad \tilde{p} = \frac{1}{\tilde{n}}\left(n_S + \frac{z^2}{2}\right).$$
Then, a confidence interval for $p$ is given by
$$\tilde{p} \pm z \sqrt{\frac{\tilde{p}(1 - \tilde{p})}{\tilde{n}}}$$
where $z = \Phi^{-1}\!\left(1 - \frac{\alpha}{2}\right)$ is the quantile of a standard normal distribution, as before (for example, a 95% confidence interval requires $\alpha = 0.05$, thereby producing $z = 1.96$). According to Brown, Cai, and DasGupta, taking $z = 2$ instead of 1.96 produces the "add 2 successes and 2 failures" interval previously described by Agresti and Coull. This interval can be summarised as employing the centre-point adjustment, $\tilde{p}$, of the Wilson score interval, and then applying the normal approximation to this point. Arcsine transformation: The arcsine transformation has the effect of pulling out the ends of the distribution. While it can stabilize the variance (and thus confidence intervals) of proportion data, its use has been criticized in several contexts. Let $X$ be the number of successes in $n$ trials and let $p = X/n$. The variance of $p$ is
$$\operatorname{var}(p) = \frac{p(1-p)}{n}.$$
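Both the beta-quantile form of the Clopper–Pearson interval and the Agresti–Coull adjustment are short to code. A hedged sketch (function names invented for illustration; the scipy.stats.beta.ppf call is the quantile function the article itself points to):

```python
import math
from statistics import NormalDist
from scipy.stats import beta

def clopper_pearson_interval(x: int, n: int, alpha: float = 0.05):
    """'Exact' interval from the beta-quantile formulation above."""
    lower = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    upper = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lower, upper

def agresti_coull_interval(n_s: int, n: int, alpha: float = 0.05):
    """Wald-style interval applied at the adjusted centre point (n_tilde, p_tilde)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_t = n + z * z                      # n_tilde
    p_t = (n_s + z * z / 2) / n_t        # p_tilde
    half_width = z * math.sqrt(p_t * (1 - p_t) / n_t)
    return p_t - half_width, p_t + half_width

print(clopper_pearson_interval(5, 20))
print(agresti_coull_interval(5, 20))
```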
Using the arcsine transform, the variance of the arcsine of $p^{1/2}$ is
$$\operatorname{var}\!\left(\arcsin\sqrt{p}\right) \approx \frac{\operatorname{var}(p)}{4p(1-p)} = \frac{p(1-p)}{4np(1-p)} = \frac{1}{4n}.$$
So, the confidence interval itself has the following form:
$$\sin^2\!\left(\arcsin\sqrt{p} - \frac{z}{2\sqrt{n}}\right) < \theta < \sin^2\!\left(\arcsin\sqrt{p} + \frac{z}{2\sqrt{n}}\right)$$
where $z$ is the $1 - \frac{\alpha}{2}$ quantile of a standard normal distribution. This method may be used to estimate the variance of $p$, but its use is problematic when $p$ is close to 0 or 1. $t_a$ transform: Let $p$ be the proportion of successes. For $0 \le a \le 2$,
$$t_a = \log\!\left(\frac{p^a}{(1-p)^{2-a}}\right) = a \log p - (2 - a) \log(1 - p).$$
This family is a generalisation of the logit transform, which is a special case with $a = 1$, and can be used to transform a proportional data distribution to an approximately normal distribution. The parameter $a$ has to be estimated for the data set. Rule of three — for when no successes are observed: The rule of three is used to provide a simple way of stating an approximate 95% confidence interval for $p$, in the special case that no successes ($\hat{p} = 0$) have been observed. The interval is $(0, 3/n)$. For example, with $n = 20$ trials and no successes, the approximate 95% interval is $(0, 0.15)$. By symmetry, one could expect for only successes ($\hat{p} = 1$) the interval $(1 - 3/n, 1)$. Comparison and discussion: There are several research papers that compare these and other confidence intervals for the binomial proportion. Both Agresti and Coull (1998) and Ross (2003) point out that exact methods such as the Clopper–Pearson interval may not work as well as certain approximations. The normal approximation interval and its presentation in textbooks has been heavily criticised, with many statisticians advocating that it not be used. The principal problems are overshoot (bounds exceed [0, 1]), zero-width intervals at $\hat{p}$ = 0 and 1 (falsely implying certainty), and overall inconsistency with significance testing. Of the approximations listed above, Wilson score interval methods (with or without continuity correction) have been shown to be the most accurate and the most robust, though some prefer the Agresti–Coull approach for larger sample sizes. Wilson and Clopper–Pearson methods obtain consistent results with source significance tests, and this property is decisive for many researchers. Comparison and discussion: Many of these intervals can be calculated in R using packages like "binom".
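Finally, a sketch of the arcsine interval and the rule of three as given above. The clamping of the transformed bounds to [0, π/2] is a guard added here, since sin² would otherwise reflect an overshooting bound back into (0, 1):

```python
import math
from statistics import NormalDist

def arcsine_interval(x: int, n: int, alpha: float = 0.05):
    """Variance-stabilized interval: sin^2(arcsin(sqrt(p)) +/- z/(2*sqrt(n)))."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    theta = math.asin(math.sqrt(x / n))
    lo = max(theta - z / (2 * math.sqrt(n)), 0.0)        # guard against overshoot
    hi = min(theta + z / (2 * math.sqrt(n)), math.pi / 2)
    return math.sin(lo) ** 2, math.sin(hi) ** 2

def rule_of_three(n: int):
    """Approximate 95% interval (0, 3/n) when zero successes are observed."""
    return 0.0, 3.0 / n

print(arcsine_interval(5, 20))
print(rule_of_three(20))        # (0.0, 0.15)
```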
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Switch hit** Switch hit: A switch hit is a modern cricket shot. A switch hit involves the batter effectively changing from a right-hander to a left-hander (or vice versa) just before the ball is delivered by the bowler for the purpose of executing the shot. It is a variation of the reverse sweep, in which the hands on the bat handle are switched and the stance is changed during the bowler's delivery action, and has been compared to switch-hitting in baseball. Early history: An early instance of a switch hit in Test matches happened in the fourth Test between Australia and England at Manchester in 1921. Australian captain Warwick Armstrong was bowling wide outside the leg stump to slow the scoring. To take advantage of the absence of fielders on the off side, Percy Fender switched his hands on the bat handle and played the ball towards cover point. The Times reported the shot thus: "in dealing with Mr. Armstrong, he [Fender] contrived at times to get away and place the ball on the deserted off side. He once shifted hands on the handle of the bat and pulled him back-handed across the wicket to the place where cover-point generally stands." The Herald reported that the crowd laughed uproariously when Fender hit a "wide-pitched ball from Armstrong left-handed for two", converting a square cut into a square-leg hit. In 1924 the Marylebone Cricket Club (MCC) ruled that the shot was illegal and that a batsman using it should be given out "obstructing the field". New Zealand authorities, on the other hand, saw nothing wrong with it, and noted that the rare instances when it had happened had amused the players and the spectators. Like Percy Fender, the New Zealand cricketer and clergyman Ernest Blamires had also used it to counter the leg theory bowling of Warwick Armstrong. In Australia, The Australasian described the MCC ruling as "so ridiculous that it leaves one in wonderment", and noted that the Victorian batsman of the 1890s Dick Houston had employed the shot frequently. The modern shot: In modern times, the shot is usually attributed to Kevin Pietersen. Pietersen played the shot in a Test match for the first time off Muttiah Muralitharan against Sri Lanka in May 2006, and used it again on 15 June 2008 in a one-day international against New Zealand. Although the shot became well known through Pietersen's successful execution of it, it is believed that Jonty Rhodes actually executed this shot before Pietersen: he hit a switch-hit six off Darren Lehmann in a one-day international between Australia and South Africa on March 27, 2002. Earlier still, Krishnamachari Srikkanth played a switch hit for four off Dipak Patel in India's last league match of the 1987 World Cup against New Zealand on 31 October 1987. Australia's Glenn Maxwell is a notable user of this shot and was endorsed to use a double-faced bat in Twenty20 cricket. The modern shot: The shot has generated debate in the cricket world, some heralding it as an outstanding display of skill and others arguing that if the batsman changes stance he gains an unfair advantage over the bowler, because the field is set based on the batsman's initial stance at the crease. The Marylebone Cricket Club (MCC), guardians of the laws of cricket, has confirmed it will not legislate against the switch shot, stating that the shot is perfectly legal in accordance with cricketing laws.
The MCC believes that the stroke is exciting for the game of cricket, and has highlighted Law 36.3, which defines the off side of the striker's wicket as being determined by his stance at the moment the bowler starts his run-up. The MCC has also acknowledged that the switch hit has implications for the interpretation of the 'on side' and 'off side' for the purposes of adjudicating on wides or leg-before-wicket decisions. In June 2012, the International Cricket Council (ICC) committee declared it to be a legitimate shot, issuing a statement saying they had decided to make no change to the current regulations. By contrast, under the baseball definition of switch-hitting, in which a player changes stance before even facing the ball, players who are able to bat both left-handed and right-handed in a conventional stance (notably David Warner) do not actually change handedness in the course of a match.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The Who's musical equipment** The Who's musical equipment: This is a history of the equipment that the English rock band The Who used. It also notes their influence on the instruments of the time period. The Who's musical equipment: As their sound developed with each album, and their audience expanded with each tour, John Entwistle and Pete Townshend, supported by sound engineer Bob Pridden, became known for constantly changing their stage equipment. Townshend altered his setup for nearly every tour, and Entwistle's equipment changed even more than that. Keith Moon played various drum kits, most recognizably the 'Pictures of Lily' kit manufactured by Premier Percussion, which consisted of one and a half kits' worth of equipment as a precaution against his tendency to destroy parts onstage. Early rigs and Marshall Stacks: In 1965, Pete Townshend and John Entwistle were directly responsible for the creation and widespread use of Marshall amplifiers powering stacked speaker cabinets. In fact, the first 100-watt Marshall amps (called "Superleads") were created specifically for Entwistle and Townshend when they wanted an amplifier that sounded like a Fender head but with much more power. At this time, The Who were using their own precursors to the Marshall Stack with 50-watt amps; John Entwistle used a Marshall JTM45 head feeding two 4x12" cabinets (set up side by side), and Townshend had a 1964 Fender Bassman powering a single 4x12" Marshall cabinet set up on top of a second cab. Around this time, Eric Clapton was using a JTM45 in its combo form, the model 1962 "Bluesbreaker". These rigs proved not to be loud enough for The Who as they moved into bigger and bigger venues, and in the summer of 1965 they switched to Vox AC100s, the very first (and at the time, only) 100-watt amps on the market, which had been designed for use by The Beatles. However, in September that year, The Who's van was stolen, including all of their equipment. Following the theft, unhappy with the sound and reliability of the Vox amps, Entwistle and Townshend approached Jim Marshall asking if it would be possible for him to make new Marshall amps for them that were more powerful than the JTM45, and were told that the cabinets would have to double in size. They agreed, and six rigs of this 8x12 prototype were manufactured, of which two each were sold to Townshend and Entwistle and one each to Ronnie Lane and Steve Marriott of the Small Faces. These new "double" cabinets proved too heavy and awkward to be transported practically, so Townshend returned to Marshall asking if they could be cut in half and stacked like his old Bassman rig; although the double cabinets were left intact, the existing single-cabinet models were modified to make them more suitable for stacking, which has become the standard over the years. Entwistle and Townshend continued expanding and experimenting with their rigs until (at a time when most bands still used 50–100-watt amplifiers with single cabinets) they were both using twin Stacks, with each Stack driven by the then-new and experimental 200-watt prototype Marshall Majors. This, in turn, also had a strong influence on the band's contemporaries, with Cream and The Jimi Hendrix Experience both following suit.
However, due to the cost of transport, The Who could not afford to take their full rigs with them on their earliest overseas tours; thus, Cream and Hendrix were the first to be seen using this setup on a wide scale, particularly in America. Ironically, although The Who pioneered and directly contributed to the development of the "classic" Marshall sound and setup, with their equipment built and tweaked to their personal specifications, by the time they toured America as headliners in 1968 they had stopped using Marshalls and moved on to Sound City equipment, which was as powerful as Marshall's but had a cleaner sound, which both Townshend and Entwistle preferred. Cream and particularly Hendrix would be associated with the adoption of Marshall stacks. Sound City and the invention of Hiwatt amplifiers: John Entwistle traded in his Marshall Stacks in favour of Sound City at the beginning of 1967, and Townshend followed later that year. Around this time, Jimi Hendrix and his manager Chas Chandler approached Townshend asking for his opinion on amplification. He told them that he had stopped using Marshall as he thought Sound City were better. The Jimi Hendrix Experience subsequently started using Sound City rigs, but set them up together with their Marshall Stacks instead of replacing them. Sound City and the invention of Hiwatt amplifiers: In late 1968 The Who approached Dallas Arbiter, the makers of Sound City, asking if their equipment could be modified slightly. This request was denied, but independent amp designer and manufacturer Dave Reeves, a former employee of Sound City, agreed and created customised Sound City L100 amplifiers under the name Hylight Electronics. This model was named the Hiwatt DR103, which was modified in 1970 into the CP103 "Super Who 100" model that Townshend used almost exclusively for over a decade. In 1973 the updated DR103W model was created, which became the central piece of equipment around which Townshend's various rigs were built for the next thirty years. Since the early 1990s, Pete Townshend has relied on Fender Vibro-King amps with Fender 2x12 extension cabinets, at times adding a Fender Custom Vibrolux Reverb or a Lazy J, or, for a brief period in 2006, a Hiwatt Custom 50. The Who mention Hiwatt amplifiers in the song "Long Live Rock". Rotosound strings: In 1966, bassist John Entwistle was looking for a set of roundwound strings "which vibrated properly". He contacted James How of Rotosound and set up a meeting to visit their factory in London. Entwistle spent the afternoon there, trying various strings made for him by the on-site technicians in different gauges with different cores and types of wire until they found a set that he was happy with. Rotosound strings: In return for a free lifetime's supply, Entwistle agreed to allow Rotosound to market the strings they had co-developed as their flagship "Swing Bass 66" range, with a black-and-white photograph of John and James How on that day in the Rotosound factory gracing the reverse of every packet. In honour of the partnership, The Who wrote and recorded a jingle for Rotosound which appears on their 1967 album The Who Sell Out. Entwistle used these strings exclusively for the next twenty-three years until switching to gold-plated Maxima strings in 1989. In 2001 he switched back to using Rotosound until his death in June 2002.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jargon** Jargon: Jargon is the specialized terminology associated with a particular field or area of activity. Jargon is normally employed in a particular communicative context and may not be well understood outside that context. The context is usually a particular occupation (that is, a certain trade, profession, vernacular or academic field), but any ingroup can have jargon. The main trait that distinguishes jargon from the rest of a language is its special vocabulary, which includes some words specific to it and, often, familiar words used in different senses that outgroups would tend to take in another sense, thereby misunderstanding the communication attempt. Jargon is sometimes understood as a form of technical slang and is then distinguished from the official terminology used in a particular field of activity. The terms jargon, slang, and argot are not consistently differentiated in the literature; different authors interpret these concepts in varying ways. According to one definition, jargon differs from slang in being secretive in nature; according to another understanding, it is specifically associated with professional and technical circles. Some sources, however, treat these terms as synonymous. In Russian linguistics, jargon is classified as an expressive form of language, while secret languages are referred to as argots. The use of jargon became more widespread around the sixteenth century, attracting persons from different career paths; this led to printed compilations of the various forms of jargon becoming available. Specifics: Jargon is "the technical terminology or characteristic idiom of a special activity or group". Most jargon is technical terminology (technical terms), involving terms of art or industry terms, with particular meaning within a specific industry. A main driving force in the creation of technical jargon is precision and efficiency of communication, when a discussion must easily range from general themes to specific, finely differentiated details without circumlocution. Jargon enriches everyday vocabulary with meaningful content and can potentially become a catchword. While jargon allows greater efficiency in communication among those familiar with it, a side effect is that it raises the threshold of comprehensibility for outsiders. This is usually accepted as an unavoidable trade-off, but it may also be used as a means of social exclusion (reinforcing ingroup–outgroup barriers) or social aspiration (when introduced as a way of showing off). Some academics promote the use of jargon-free language, as an audience may be alienated or confused by the technical terminology, and thus lose track of a speaker or writer's broader and more important arguments. Etymology: The French word is believed to have been derived from the Latin word gaggire, meaning "to chatter", which was used to describe speech that the listener did not understand. The word may also come from Old French jargon, meaning "chatter of birds". Middle English also has the verb jargounen, meaning "to chatter" or "to twitter", deriving from Old French. The first use of the word dates back to The Canterbury Tales, written by Geoffrey Chaucer between 1387 and 1400. Chaucer referred to jargon as the utterance of birds or sounds resembling birds. Etymology: In colonial history, jargon was seen as a device of communication to bridge the gap between two speakers who did not speak the same tongue. Jargon was synonymous with pidgin in naming specific language usages.
Jargon then began to have a negative connotation, implying a lack of coherent grammar or outright gibberish, as it was seen as a "broken" mixture of many different languages with no full community to call its own. In the 1980s, linguists began restricting this usage of jargon, reserving the word for its more common sense of a technical or specialized language use. Fields using the term: The term is used, often interchangeably with the term buzzword, when examining organizational culture. In linguistics, it is used to mean "specialist language", with the term also seen as closely related to slang, argot and cant. Various kinds of language peculiar to ingroups can be named across a semantic field. Slang can be either culture-wide or known only within a certain group or subculture. Argot is slang or jargon purposely used to obscure meaning to outsiders. Conversely, a lingua franca is used for the opposite effect, helping communicators to overcome unintelligibility, as are pidgins and creole languages. For example, the Chinook Jargon was a pidgin. Although technical jargon's primary purpose is to aid technical communication, not to exclude outsiders by serving as an argot, it can have both effects at once and can provide a technical ingroup with shibboleths. For example, medieval guilds could use this as one means of informal protectionism. On the other hand, jargon that once was obscure outside a small ingroup can become generally known over time. For example, the terms bit, byte, and hexadecimal (which are terms from computing jargon) are now recognized by many people outside computer science. Referenced: The philosopher Étienne Bonnot de Condillac observed in 1782 that "every science requires a special language because every science has its own ideas". As a rationalist member of the Enlightenment, he continued: "It seems that one ought to begin by composing this language, but people begin by speaking and writing, and the language remains to be composed." Industry term: "An industry term... is a type of technical terminology that has a particular meaning in a specific industry. It implies that a word or phrase is a typical one in a particular industry and people working in the respective industry or business will be familiar with and use the term." Precise technical terms and their definitions are formally recognized, documented, and taught by educators in the field. Other terms are more colloquial, coined and used by practitioners in the field, and are similar to slang. The boundaries between formal and slang jargon, as in general English, are quite fluid. This is especially true in the rapidly developing world of computers and networking. For instance, the term firewall (in the sense of a device used to filter network traffic) was at first technical slang. As these devices became more widespread and the term became widely understood, the word was adopted as formal terminology. Technical terminology evolves due to the need for experts in a field to communicate with precision and brevity, but often has the effect of excluding those who are unfamiliar with the particular specialized language of the group. This can cause difficulties as, for example, when a patient is unable to follow the discussions of medical practitioners, and thus cannot understand his own condition and treatment. Differences in jargon also cause difficulties where professionals in related fields use different terms for the same phenomena.
Business jargon: The use of jargon in the business world is a common occurrence. The use of jargon in business correspondence reached a peak of popularity from the late 1800s into the 1950s. Jargon in business is most frequently used in modes of communication, especially business letters, and changes as language evolves. Common phrases used in business jargon include: As per; Ditto; Hereby; Meet with your approval; Oblige; Please be advised; Pursuant; Undersigned. Medical jargon: This is another common area where jargon is found. Medicine is rich in scientific terminology that is used amongst medical professionals. However, these terms, when used with patients or non-medical professionals, have caused issues. Most patients encounter medical jargon in reference to their diagnosis or when receiving or reading their medication. Some of the most commonly used terms in medical jargon are: Ablation; Biopsied; Hematoma; Infarct; Ketosis; Papillary carcinoma; Plantar fasciitis; Sciatica; Vertebrae. At first glance many people do not understand what these terms mean and may panic when they see these scientific names being used in reference to their health. The argument as to whether medical jargon is a positive or negative attribute of a patient's experience has evidence to support both sides. On one hand, as mentioned before, these phrases can be overwhelming for some patients who may not understand the terminology. On the other hand, with the accessibility of the internet, it has been suggested that these terms can be easily researched for clarity. In practice: Jargon may serve the purpose of a "gatekeeper" in conversation, signaling who is allowed into certain forms of conversation. Jargon may serve this function by dictating the direction or depth a conversation about, or within the context of, a certain field or profession will take. For example, a conversation between two professionals in which one person has little previous interaction with or knowledge of the other person could go one of at least two possible ways. One of the professionals (whom the other professional does not know) does not use, or does not correctly use, the jargon of their respective field, and is little regarded or remembered beyond small talk, remaining fairly insignificant in this conversation. Or, if the person does use particular jargon (showing their knowledge in the field to be legitimate, educated, or of particular significance), the other professional then opens the conversation up in an in-depth or professional manner. Outside of conversation, jargon can become confusing in writing. When used in text, readers can become confused if terms are used that require outside knowledge of the subject. Positivity: Ethos is used to create an appeal to authority. It is one of the three pillars of persuasion identified by Aristotle for creating a logical argument. Ethos uses credibility to back up arguments. Using the specialized terms of a field can indicate to the audience that a speaker is an insider, allowing an argument to rest on authority and credibility. Jargon can be used to convey meaningful information and discourse in a convenient way within communities. A subject expert may wish to avoid jargon when explaining something to a layperson. Jargon may help communicate contextual information optimally; for example, a football coach talking to their team or a doctor working with nurses.
Accessibility and criticism: With the rise of the self-advocacy movement within the disability movement, "jargonized" language has been much objected to by advocates and self-advocates. Jargon is largely present in everyday language, in newspapers, government documents, and official forms. Several advocacy organizations work on influencing public agents to offer accessible information in different formats. One accessible format that offers an alternative to jargonized language is "easy read", which consists of a combination of plain English and images. Accessibility and criticism: Criticism of jargon can be found in certain fields when responding to specific information. In a study of 58 patients and 10 radiation therapists, the diagnosis and treatment of a disease were explained to patients with the use of jargon. It was found that using jargon in the medical field is not the best way of communicating terminology and concepts: patients tended to be confused about what the treatments and risks were. There are resources that include online glossaries of technical jargon, also known as "jargon busters". Examples: Many examples of jargon exist because of its use among specialists and subcultures alike. In the professional world, those who are in the business of filmmaking may use words like "vorkapich" to refer to a montage when talking to colleagues. In rhetoric, rhetoricians use words like "arete" to refer to a powerful person's character when speaking with one another.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cold-shock domain** Cold-shock domain: In molecular biology, the cold-shock domain (CSD) is a protein domain of about 70 amino acids which has been found in prokaryotic and eukaryotic DNA-binding proteins. Part of this domain is highly similar to the RNP-1 RNA-binding motif. When Escherichia coli is exposed to a temperature drop from 37 to 10 degrees Celsius, a 4–5 hour lag phase occurs, after which growth resumes at a reduced rate. During the lag phase, the expression of around 13 proteins containing cold-shock domains is increased 2- to 10-fold. These so-called cold shock proteins, induced in the cold shock response, are thought to help the cell survive at temperatures below the optimum growth temperature, possibly by condensation of the chromosome and organisation of the prokaryotic nucleoid; this is by contrast with the heat shock proteins induced in the heat shock response, which help the cell survive at temperatures above the optimum.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Relationship obsessive–compulsive disorder** Relationship obsessive–compulsive disorder: In psychology, relationship obsessive–compulsive disorder (ROCD) is a form of obsessive–compulsive disorder focusing on close or intimate relationships. Such obsessions can become extremely distressing and debilitating, having negative impacts on relationship functioning. Obsessive–compulsive disorder comprises thoughts, images or urges that are unwanted, distressing, interfere with a person's life and are commonly experienced as contradicting a person's beliefs and values. Such intrusive thoughts are frequently followed by compulsive behaviors aimed at "neutralizing" the feared consequence of the intrusions and temporarily relieving the anxiety caused by the obsessions. Attempts to suppress or "neutralize" obsessions increase rather than decrease the frequency of, and distress caused by, the obsessions. Relationship obsessive–compulsive disorder: Common obsessive themes include fear of contamination, fears about being responsible for harming the self or others, doubts, and orderliness. However, people with OCD can also have religious and sexual obsessions. Some people with OCD may experience obsessions relating to the way they feel in an ongoing relationship or the way they felt in past relationships (ROCD). Repetitive thought about a person's feelings towards a relationship partner may occur in different relational contexts, such as intimate or parent-child relationships; however, in ROCD such preoccupations are unwanted, intrusive, chronic and disabling. Signs and symptoms: Relationship-centered symptoms. People may continuously doubt whether they love their partner, whether their relationship is the right relationship or whether their partner really loves them. When they know they love someone or that someone loves them, they constantly check and reassure themselves that it is the right feeling. When they attempt to end the relationship, they are overwhelmed with anxiety. By staying in the relationship, however, they are haunted by continuous doubts regarding the relationship. Signs and symptoms: Partner-focused symptoms. Another form of ROCD includes preoccupation, checking, and reassurance-seeking behaviors relating to the partner's perceived flaws. Instead of finding good in their partner, affected individuals are constantly focused on the partner's shortcomings. They often exaggerate these flaws and use them to prove the relationship is fundamentally bad. The fact that they are unable to concentrate on anything but their partner's flaws causes the affected individual great anxiety, and often leads to a strained relationship. Recent investigations suggest partner-focused ROCD symptoms may also occur in the parent-child context. In such cases, parents may be overwhelmed by preoccupations that their child is not socially competent, good-looking, moral or emotionally balanced enough. Such obsessions are associated with increased parental stress and low mood. Causes: Like other forms of OCD, psychological and biological factors are believed to play a role in the development and maintenance of ROCD. In addition to the maladaptive ways of thinking and behaving identified as important in OCD, models of ROCD suggest that over-reliance on intimate relationships or on the perceived value of the partner for a person's feelings of self-worth, together with fear of abandonment (also see attachment theory), may increase vulnerability to, and maintain, ROCD symptoms. CBT Models of ROCD: ROCD is a form of OCD.
Cognitive behavioral therapies (CBT) are considered the gold-standard psychological treatments for OCD. According to CBT models, we all have unwanted, intrusive thoughts, images and urges. Individuals with OCD interpret these intrusive experiences as meaning something bad about their character (crazy or bad) or about the future (a catastrophe is going to occur). For instance, they may take the mere occurrence of an unwanted thought of a loved one having an accident to mean that they wanted something bad to happen to them. Such interpretations increase attention to unwanted intrusive experiences, making them more distressing and increasing their frequency. Individuals with OCD try to control, neutralize or prevent intrusive experiences (or their content) from occurring using washing, checking, avoidance, suppression of thoughts or other mental and behavioral rituals (compulsions). These control attempts, however, paradoxically increase (rather than decrease) the occurrence of these unwanted intrusions and the distress associated with them. According to CBT models, individuals with OCD give such extremely negative interpretations to intrusive experiences because they hold maladaptive beliefs. For instance, the belief that if anything bad happens it is their own responsibility (inflated responsibility) can lead individuals with OCD to wash their hands repeatedly after having the thought "this may be contaminated". They will do this in order to avoid feeling responsible for hurting someone else or themselves. CBT Models of ROCD: In ROCD, intrusions relating to the "rightness" of the relationship or the suitability of the relationship partner (e.g., not smart, moral or good-looking enough) are often the most distressing. In order to reduce the distress associated with such intrusions, individuals with ROCD often use various mental or behavioral strategies. For instance, they often try to get reassurance from others that the partner or the relationship is good enough, they may test the partner or check (from up close) their perceived flaw, they may look for information on the internet on "how do I know I'm in the right relationship", or they may assess their physical reaction and feelings towards their partner. These and similar behaviors increase the attention given to the intrusion, give it more importance and make it more frequent. Individuals with ROCD also give catastrophic meaning to intrusions, based on extreme maladaptive beliefs such as the belief that being in a relationship one is not absolutely sure about always leads to extreme disaster. Such beliefs lead individuals with ROCD to interpret common relationship doubts in a catastrophic way, provoking compulsive mental acts and behaviors such as repeated checking of perceived flaws or repeated assessment of the strength and quality of one's feelings towards the partner. CBT Models of ROCD: Treatment of ROCD symptoms often involves psycho-education about the disorder and the CBT model, exposure and response prevention to feared thoughts or images, and challenging of maladaptive relationship beliefs (e.g., believing that being in love means being happy all the time) and more common OCD beliefs such as perfectionism and intolerance of uncertainty. Recently, mobile applications have been developed to assist therapists in challenging maladaptive beliefs associated with OCD and ROCD symptoms.
**Hydrogen damage** Hydrogen damage: Hydrogen damage is the generic name given to a large number of metal degradation processes due to interaction with hydrogen atoms. Note that molecular gaseous hydrogen does not have the same effect as atoms or ions released into solid solution in the metal. Creation of internal defects: Carbon steels exposed to hydrogen at high temperatures experience high temperature hydrogen attack, which leads to internal decarburization and weakening. Blistering: Atomic hydrogen diffusing through metals may collect at internal defects such as inclusions and laminations and form molecular hydrogen. High pressures may build up at such locations due to continued absorption of hydrogen, leading to blister formation, growth and eventual bursting of the blister. Such hydrogen-induced blister cracking has been observed in steels, aluminium alloys, titanium alloys and nuclear structural materials. Metals with low hydrogen solubility (such as tungsten) are more susceptible to blister formation, while in metals with high hydrogen solubility, such as vanadium, hydrogen instead tends to form stable metal hydrides rather than bubbles or blisters. Shatter cracks, flakes, fish-eyes and micro perforations: Flakes and shatter cracks are internal fissures seen in large forgings. Hydrogen picked up during melting and casting segregates at internal voids and discontinuities and produces these defects during forging. Fish-eyes, named for their appearance, are bright patches seen on fracture surfaces, generally of weldments. Hydrogen enters the metal during fusion welding and produces this defect during subsequent stressing. Steel containment vessels exposed to extremely high hydrogen pressures develop small fissures, or micro perforations, through which fluids may leak. Loss in tensile ductility: Hydrogen lowers tensile ductility in many materials. Ductile materials, such as austenitic stainless steels and aluminium alloys, may show no marked embrittlement but can exhibit a significant loss of tensile ductility (% elongation or % reduction in area) in tensile tests. Control of hydrogen damage: The best method of controlling hydrogen damage is to control contact between the metal and hydrogen. Many steps can be taken to reduce the entry of hydrogen into metals during critical operations like melting; casting; working (rolling, forging, etc.); welding; and surface preparation, like chemical cleaning, electroplating, and corrosion during their service life. Control of the environment and metallurgical control of the material to decrease its susceptibility to hydrogen are the two major approaches to reducing hydrogen damage. Detection of hydrogen damage: There are various methods of adequately identifying and monitoring hydrogen damage, including the ultrasonic echo attenuation method, amplitude-based backscatter, velocity ratio, creeping waves/time-of-flight measurement, pitch-catch mode shear wave velocity, advanced ultrasonic backscatter techniques (AUBT), time of flight diffraction (TOFD), thickness mapping and in-situ metallography (replicas). For hydrogen damage, the backscatter technique is used to detect affected areas in the material. To cross-check and confirm the findings of the backscatter measurement, the velocity ratio measurement technique is used. For the detection of micro and macro cracks, time of flight diffraction is a suitable method to use.
**-yne** -yne: In chemistry, the suffix -yne is used to denote the presence of a triple bond. The suffix follows IUPAC nomenclature, and is mainly used in organic chemistry. However, inorganic compounds featuring unsaturation in the form of triple bonds may be denoted by substitutive nomenclature with the same methods used with alkynes, i.e., the name of the corresponding saturated hydride is modified by replacing the "-ane" ending with "-yne". The suffix "-diyne" is used when there are two triple bonds, and so on. The position of unsaturation is indicated by a numerical locant immediately preceding the "-yne" suffix, or locants in the case of multiple triple bonds. Locants are chosen to be as low as possible. While generally used as a suffix, "-yne" is also used as an infix to name substituent groups that are triply bound to the parent compound. -yne: This suffix arose as a collapsed form of the end of the word acetylene. The final "-e" disappears if it is followed by another suffix that starts with a vowel.
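The naming rules above (replace "-ane" with "-yne", put the lowest possible locants immediately before the suffix, and switch to "-diyne", "-triyne", etc. for multiple triple bonds) are mechanical enough to script. The following Python sketch is illustrative only and not from the source; it handles unbranched alkynes, ignores substituents, and assumes the stem and multiplier tables given in the code:

```python
# A minimal, illustrative implementation of the "-yne" suffix rules:
# replace the "-ane" ending with "-yne", prefix the suffix with locants,
# and use "di-"/"tri-" multipliers when there are several triple bonds.

STEMS = {2: "eth", 3: "prop", 4: "but", 5: "pent", 6: "hex", 7: "hept", 8: "oct"}
MULTIPLIERS = {1: "", 2: "di", 3: "tri", 4: "tetra"}

def alkyne_name(n_carbons, locants):
    """Name an unbranched alkyne; locant l marks a bond between C(l) and C(l+1)."""
    k = len(locants)
    # Locants are chosen to be as low as possible: compare the numbering
    # from either end of the chain and keep the lexicographically lower set.
    forward = sorted(locants)
    reverse = sorted(n_carbons - l for l in locants)
    chosen = min(forward, reverse)
    # With a multiplying prefix (di-, tri-, ...) the stem keeps its final "a".
    parent = STEMS[n_carbons] + ("a" if k > 1 else "")
    return "{}-{}-{}yne".format(parent, ",".join(map(str, chosen)), MULTIPLIERS[k])

print(alkyne_name(4, [1]))     # but-1-yne
print(alkyne_name(4, [1, 3]))  # buta-1,3-diyne
```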
**Wet meadow** Wet meadow: A wet meadow is a type of wetland with soils that are saturated for part or all of the growing season, which prevents the growth of trees and brush. Debate exists whether a wet meadow is a type of marsh or a completely separate type of wetland. Wet prairies and wet savannas are hydrologically similar. Hydrology and ecology: Wet meadows may occur because of restricted drainage or the receipt of large amounts of water from rain or melted snow. They may also occur in riparian zones and around the shores of large lakes. Hydrology and ecology: Unlike a marsh or swamp, a wet meadow does not have standing water present except for brief to moderate periods during the growing season. Instead, the ground in a wet meadow fluctuates between brief periods of inundation and longer periods of saturation. Wet meadows often have large numbers of wetland plant species, which frequently survive as buried seeds during dry periods, and then regenerate after flooding. Wet meadows therefore do not usually support aquatic life such as fish. They typically have a high diversity of plant species, and may attract large numbers of birds, small mammals and insects including butterflies. Hydrology and ecology: Vegetation in a wet meadow usually includes a wide variety of herbaceous species including sedges, rushes, grasses and a wide diversity of other plant species. A few of many possible examples include species of Rhexia, Parnassia, Lobelia, many species of wild orchids (e.g. Calopogon and Spiranthes), and carnivorous plants such as Sarracenia and Drosera. Woody plants, if present, account for a minority of the total area cover. High water levels are one of the important factors that prevent invasion by woody plants; in other cases, fire is important. In areas with low frequencies of fire, or reduced water level fluctuations, or higher fertility, plant diversity will decline. Conservation: Wet meadows were once a common wetland type around the world. They remain an important community type in wet savannas and flatwoods. They also survive along rivers and lakeshores where water levels are allowed to change within and among years. But their area has been dramatically reduced. In some areas, wet meadows are partially drained and farmed and therefore lack the biodiversity described here. In other cases, the construction of dams has interfered with the natural fluctuation of water levels that generates wet meadows. The most important factors in creating and maintaining wet meadows are therefore natural water level fluctuations and recurring fire. In some cases, small areas of wet meadow are artificially created. Due to the concern with damage that excessive stormwater runoff can cause to nearby lakes and streams, artificial wetlands can be created to capture stormwater. Often this produces marshes, but in some cases wet meadows may be produced. The idea is to capture and store rainwater onsite and use it as a resource to grow attractive native plants that thrive in such conditions. The Buhr Park Children's Wet Meadow is one such project. It is a group of wet meadow ecosystems in Ann Arbor, Michigan designed as an educational opportunity for school-age children. In Europe, wet meadows are sometimes managed by hay-cutting and grazing. Intensified agricultural practices (too frequent mowing, use of mineral fertilizers, manure and insecticides) may lead to declines in the abundance of organisms and species diversity.
**David Rousseau** David Rousseau: David Rousseau (born 1960) is a British systems philosopher, Director of the Centre for Systems Philosophy, chair of the Board of Trustees of the International Society for the Systems Sciences (ISSS), a Past President of the ISSS (2017-2018), and the Company Secretary of the British Association for the Study of Spirituality. He is known for having revived interest in establishing a scientific general systems theory (GST), for promoting systems philosophy as a route to advances in GST, for contributions on scientific general systems principles and for advocating systems research as a route to a scientific understanding of spiritual and other exceptional human experiences. His research interests include systems philosophy, systems science, systems engineering, systems methods for exploratory research, the mind-body problem, and the ontological foundations of moral intuitions.
**Cognitive response model** Cognitive response model: The cognitive response model of persuasion locates the most direct cause of persuasion in the self-talk of the persuasion target, rather than the content of the message. Cognitive response model: Anthony Greenwald first proposed the theory in 1968. The cognitive response model shows that learning our cognitive responses to persuasion provides a basis for understanding the persisting effects of communication. Greenwald's theory states that we remember our cognitive responses better than the actual information presented to us. Simply put, we are better at remembering our thoughts about an argument during the argument, rather than the actual argument itself. Responses: Two types of cognitive responses exist: direct and indirect. Direct responses are relevant to the material being presented and can increase persuasion. For example, when presented with the fact, "9 out of 10 college students drink alcohol", and your cognitive response is, "Yeah, I would say most of the people at my school are drinkers", you would be having a direct response. Indirect responses have nothing to do with the material at hand and do not increase persuasive effects. If presented with the same fact, "9 out of 10 college students drink alcohol", and your cognitive response is, "I wonder what I am doing this weekend", you would have an indirect response. Research: Research supporting the model shows that persuasion is powerfully affected by the amount of self-talk that occurs in response to a message. The degree to which the self-talk supports the message and the confidence that recipients express in the validity of that self-talk further support the cognitive response model. Implications for persuasion: The cognitive response model suggests that effective messages should take into account factors that are likely to enhance positive cognitive responses in receivers. Counterarguments, in contrast, are negative cognitive responses that prohibit persuasion. Factors that reduce counterarguments include communicator expertise and insufficient time and ability to formulate counterarguments. Such tactics are often used in interrogations.
**Tiling puzzle** Tiling puzzle: Tiling puzzles are puzzles involving two-dimensional packing problems in which a number of flat shapes have to be assembled into a larger given shape without overlaps (and often without gaps). Some tiling puzzles ask the solver to dissect a given shape first and then rearrange the pieces into another shape. Other tiling puzzles ask the solver to dissect a given shape while fulfilling certain conditions. The latter two types of tiling puzzles are also called dissection puzzles. Tiling puzzles may be made from wood, metal, cardboard, plastic or any other sheet material. Many tiling puzzles are now available as computer games. Tiling puzzle: Tiling puzzles have a long history. Some of the oldest and most famous are jigsaw puzzles and the tangram puzzle. Other examples of tiling puzzles include:
- Conway puzzle
- Domino tiling, of which the mutilated chessboard problem is one example
- Eternity puzzle
- Geometric magic square
- Puzz-3D
- Squaring the square
- Tantrix
- T puzzle
Many three-dimensional mechanical puzzles can be regarded as three-dimensional tiling puzzles.
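At their core, these puzzles pose a combinatorial search problem: can the given pieces cover the target region exactly? The following Python sketch is illustrative and not from the source; it uses simple backtracking to test whether 1×2 dominoes can tile a small grid, with blocked cells (such as the removed corners in the mutilated chessboard problem) marked as already covered:

```python
# Backtracking domino tiler: can 1x2 dominoes exactly cover the open cells?

def tile_with_dominoes(blocked):
    """blocked[r][c] is True for cells that must not be covered."""
    rows, cols = len(blocked), len(blocked[0])
    grid = [row[:] for row in blocked]  # True = covered or blocked

    def solve():
        for r in range(rows):
            for c in range(cols):
                if not grid[r][c]:
                    # Cover the first open cell horizontally, then vertically.
                    for dr, dc in ((0, 1), (1, 0)):
                        r2, c2 = r + dr, c + dc
                        if r2 < rows and c2 < cols and not grid[r2][c2]:
                            grid[r][c] = grid[r2][c2] = True
                            if solve():
                                return True
                            grid[r][c] = grid[r2][c2] = False  # backtrack
                    return False  # the first open cell cannot be covered
        return True  # no open cells remain

    return solve()

board = [[False] * 4 for _ in range(2)]
print(tile_with_dominoes(board))  # True: a 2x4 rectangle tiles trivially

board[0][0] = board[1][3] = True  # remove two same-coloured "corner" cells
print(tile_with_dominoes(board))  # False: the mutilated-board parity argument
```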
**Rainbow Islands Evolution** Rainbow Islands Evolution: Rainbow Islands Evolution is a game in the Bubble Bobble series for the PSP system. It is also known as New Rainbow Island: Hurdy Gurdy Daibōken!! (NEW(ニュー)レインボーアイランド ハーディガーディ大冒険!!, Nyū Reinbō Airando Hādi Gādi Daibōken!!) in Japan. It is an enhanced remake of the arcade game Rainbow Islands. Bub and Bob, the two main characters in the series, are pitted against an evil recording company that seeks to pollute the Rainbow Islands' atmosphere by creating constant musical noise, thereby wilting the flora and mutating the fauna. Bub and Bob use a hurdy-gurdy as a weapon to create the rainbows. The game follows the same vertical-scrolling system as the original, but it expands into a third dimension, as there are platforms in the background which become accessible through the course of the game. Reception: The game received "generally unfavorable reviews" according to the review aggregation website Metacritic. In Japan, Famitsu gave it a score of 16 out of 40.
**GDP-fucose protein O-fucosyltransferase 1** GDP-fucose protein O-fucosyltransferase 1: GDP-fucose protein O-fucosyltransferase 1, also known as peptide-O-fucosyltransferase 1 (O-FucT-1), is an enzyme that in humans is encoded by the POFUT1 gene. GDP-fucose protein O-fucosyltransferase 1: POFUT-1 belongs to the O-Fuc family of proteins, all of which are involved in transferring O-fucose from GDP-β-L-fucose to substrates. POFUT-1 is responsible for adding fucose sugars in O linkage to serine or threonine residues between the second and third conserved cysteines in EGF-like repeats on the Notch protein. The protein is an inverting glycosyltransferase, which means that the enzyme uses GDP-β-L-fucose as a donor substrate and transfers the fucose in O linkage to the protein, producing fucose-α-O-serine/threonine. GDP-fucose protein O-fucosyltransferase 1: When the gene for POFUT1 is knocked out, or its expression is decreased to very low levels, all Notch signaling is abolished, which means that fucose on Notch is essential for Notch function. Why this is the case is not yet well understood. Almost all glycosyltransferases reside in the Golgi apparatus. However, POFUT1 as well as the related enzyme POFUT2 have recently been shown to reside in the endoplasmic reticulum. Nomenclature: GDP-fucose protein O-fucosyltransferase 1 is also known as protein O-fucosyltransferase, O-FucT-1, FUT12, OFUCT1 and O-FUT. Post-translational modification: POFUT-1 is involved in the attachment of fucose sugars to proteins; a key pathway is the post-translational modification of Notch signaling proteins. Post-translational modification: Pre-Notch proteins are translated and deposited into the endoplasmic reticulum, where they are modified first by POFUT-1 and then by POGLUT-1 before export to the Golgi apparatus. In the endoplasmic reticulum, POFUT-1 utilizes its substrate GDP-β-L-fucose as the donor of the sugar fucose, which is then attached to a serine or threonine residue. Once pre-Notch has been modified, it is exported to the Golgi apparatus, where it is further modified, exported, and incorporated into the cell membrane. Species distribution: As Notch signaling is conserved in most multicellular life, so too are the processes involved in the pathway. Homologs of POFUT-1 are present in many kingdoms of life and are not limited to the kingdom Animalia; they are also found in the kingdoms Plantae and Fungi. As a drug target: Because POFUT-1 is a key protein in the production of Notch signaling proteins, it has been the target of much research aimed at disrupting it for the purpose of cancer treatment and prevention.
**Whittaker–Shannon interpolation formula** Whittaker–Shannon interpolation formula: The Whittaker–Shannon interpolation formula or sinc interpolation is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898, and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called Shannon's interpolation formula and Whittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it the Cardinal series. Definition: Given a sequence of real numbers, x[n], the continuous function

$$x(t) = \sum_{n=-\infty}^{\infty} x[n]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right)$$

(where "sinc" denotes the normalized sinc function) has a Fourier transform, X(f), whose non-zero values are confined to the region |f| ≤ 1/(2T). When the parameter T has units of seconds, the bandlimit, 1/(2T), has units of cycles/sec (hertz). When the x[n] sequence represents time samples, at interval T, of a continuous function, the quantity fs = 1/T is known as the sample rate, and fs/2 is the corresponding Nyquist frequency. When the sampled function has a bandlimit, B, less than the Nyquist frequency, x(t) is a perfect reconstruction of the original function. (See Sampling theorem.) Otherwise, the frequency components above the Nyquist frequency "fold" into the sub-Nyquist region of X(f), resulting in distortion. (See Aliasing.) Equivalent formulation: convolution/lowpass filter: The interpolation formula is derived in the Nyquist–Shannon sampling theorem article, which points out that it can also be expressed as the convolution of an infinite impulse train with a sinc function:

$$x(t) = \left(\sum_{n=-\infty}^{\infty} \underbrace{T\, x(nT)}_{x[n]}\,\delta(t - nT)\right) \ast \left(\frac{1}{T}\operatorname{sinc}\!\left(\frac{t}{T}\right)\right).$$

This is equivalent to filtering the impulse train with an ideal (brick-wall) low-pass filter with gain of 1 (or 0 dB) in the passband. If the sample rate is sufficiently high, this means that the baseband image (the original signal before sampling) is passed unchanged and the other images are removed by the brick-wall filter. Convergence: The interpolation formula always converges absolutely and locally uniformly as long as

$$\sum_{n\in\mathbb{Z},\, n\neq 0} \left|\frac{x[n]}{n}\right| < \infty.$$

By the Hölder inequality this is satisfied if the sequence $(x[n])_{n\in\mathbb{Z}}$ belongs to any of the $\ell^p(\mathbb{Z},\mathbb{C})$ spaces with 1 ≤ p < ∞, that is

$$\sum_{n\in\mathbb{Z}} |x[n]|^p < \infty.$$

This condition is sufficient, but not necessary. For example, the sum will generally converge if the sample sequence comes from sampling almost any stationary process, in which case the sample sequence is not square summable, and is not in any $\ell^p(\mathbb{Z},\mathbb{C})$ space. Stationary random processes: If x[n] is an infinite sequence of samples of a sample function of a wide-sense stationary process, then it is not a member of any $\ell^p$ or $L^p$ space, with probability 1; that is, the infinite sum of samples raised to a power p does not have a finite expected value. Nevertheless, the interpolation formula converges with probability 1. Convergence can readily be shown by computing the variances of truncated terms of the summation, and showing that the variance can be made arbitrarily small by choosing a sufficient number of terms. If the process mean is nonzero, then pairs of terms need to be considered to also show that the expected value of the truncated terms converges to zero. Stationary random processes: Since a random process does not have a Fourier transform, the condition under which the sum converges to the original function must also be different.
A stationary random process does have an autocorrelation function and hence a spectral density according to the Wiener–Khinchin theorem. A suitable condition for convergence to a sample function from the process is that the spectral density of the process be zero at all frequencies equal to and above half the sample rate.
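As a numerical illustration of the definition above, the sketch below (Python/NumPy; not from the source, with an assumed test signal and sample window) reconstructs a bandlimited cosine from its samples via the truncated sum $x(t) \approx \sum_{n} x[n]\,\operatorname{sinc}((t-nT)/T)$. numpy.sinc computes exactly the normalized sinc used in the formula; because only finitely many samples can be used, the error is small near the centre of the window and grows toward its edges:

```python
import numpy as np

T = 0.1                         # sample interval: fs = 10 Hz, Nyquist 5 Hz
n = np.arange(-50, 51)          # finite sample window (truncates the sum)
f0 = 2.0                        # test-signal frequency, below the Nyquist limit
x_n = np.cos(2 * np.pi * f0 * n * T)   # samples x[n] = x(nT)

def reconstruct(t):
    """Evaluate the truncated Whittaker-Shannon sum at the times in t."""
    # kernel[i, j] = sinc((t_i - n_j * T) / T); np.sinc is the normalized sinc.
    kernel = np.sinc((t[:, None] - n[None, :] * T) / T)
    return kernel @ x_n

t = np.linspace(-1.0, 1.0, 201)           # evaluate near the window centre
err = np.max(np.abs(reconstruct(t) - np.cos(2 * np.pi * f0 * t)))
print("max reconstruction error on [-1, 1]:", err)
```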
**Tacpac** Tacpac: TACPAC (derived from "tactile approach to communication package") is a sensory communication resource using touch and music to develop communication skills. It helps those who have sensory impairment or communication difficulties. It can also help those who have tactile defensiveness, learning difficulties, autism, Down syndrome, and dementia. Design: TACPAC uses music that matches the texture of the object for each activity so that those undergoing TACPAC have an aligned experience. Users can employ this to help them communicate. TACPAC comes in the form of a subscription that is accessed on a website and also via an app for phones and tablets. The resources include music tracks, instruction videos, and downloadable/printable record and instruction sheets. Users can play the music tracks without an internet connection. Design: Through repetition, a receiver learns to express responses that can be understood: e.g., those manifesting like/dislike, desire/rejection, and knowledge/ignorance. Users can begin to respond to stimuli, anticipate activities and relate to the helper. These primal responses, which comprise pre-intentional and affective communication, can be crucial steps toward more clearly defined intentional communication and even language acquisition. Special-needs educator Laura Pease writes: "One of the most effective ways of establishing contact with deafblind children and so encouraging a communicative response is to share activities with high levels of physical contact and pleasant sensations. These include [...] Tacpac, a package where taped music is linked to a range of tactile sensations." Reception: The number of research projects around TACPAC is growing, and it has increasing support amongst multi-sensory impairment networks and the UK's Royal National Institute of Blind People.
**Anaerobic membrane bioreactor** Anaerobic membrane bioreactor: Anaerobic membrane bioreactor, or AnMBR, is the name of a technology utilized in wastewater treatment. It is a relatively new application of membrane filtration for biomass retention: an AnMBR combines a membrane bioreactor (MBR) with anaerobic digestion. The sewage is filtered, separating the effluent from the sludge. This sludge is treated anaerobically by mesophilic microorganisms, which release methane as a byproduct. The biogas can later be combusted to generate heat or electricity. AnMBR is considered a sustainable alternative for sewage treatment because the energy that can be generated by methane combustion can exceed the energy required to maintain the process.
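The energy claim can be made concrete with a back-of-the-envelope balance. Everything in the sketch below is an assumed illustration rather than data from the source: the COD load, engine efficiency and electricity demand are placeholder values, while the methane yield (about 0.35 m³ CH₄ per kg of COD removed, the theoretical maximum) and methane's lower heating value (about 35.8 MJ/m³) are standard textbook figures:

```python
# Rough AnMBR energy balance per cubic metre of sewage treated (illustrative).

cod_removed = 0.5      # kg COD removed per m^3 sewage (assumed placeholder)
ch4_yield = 0.35       # m^3 CH4 per kg COD removed (theoretical maximum)
ch4_lhv = 35.8         # MJ per m^3 CH4 (lower heating value, approximate)
engine_eff = 0.35      # electrical efficiency of a biogas engine (assumed)
demand_kwh = 0.5       # electricity needed per m^3 treated (assumed)

methane = cod_removed * ch4_yield                      # m^3 CH4 per m^3 sewage
recovered_kwh = methane * ch4_lhv * engine_eff / 3.6   # 3.6 MJ per kWh
print("recovered %.2f kWh/m^3 vs demand %.2f kWh/m^3" % (recovered_kwh, demand_kwh))
# With these assumptions, recovery (~0.61 kWh/m^3) exceeds the assumed demand.
```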
**Genome-wide association study** Genome-wide association study: In genomics, a genome-wide association study (GWA study, or GWAS) is an observational study of a genome-wide set of genetic variants in different individuals to see if any variant is associated with a trait. GWA studies typically focus on associations between single-nucleotide polymorphisms (SNPs) and traits like major human diseases, but can equally be applied to any other genetic variants and any other organisms. Genome-wide association study: When applied to human data, GWA studies compare the DNA of participants having varying phenotypes for a particular trait or disease. These participants may be people with a disease (cases) and similar people without the disease (controls), or they may be people with different phenotypes for a particular trait, for example blood pressure. This approach is known as phenotype-first, in which the participants are classified first by their clinical manifestation(s), as opposed to genotype-first. Each person gives a sample of DNA, from which millions of genetic variants are read using SNP arrays. If there is significant statistical evidence that one type of the variant (one allele) is more frequent in people with the disease, the variant is said to be associated with the disease. The associated SNPs are then considered to mark a region of the human genome that may influence the risk of disease. Genome-wide association study: GWA studies investigate the entire genome, in contrast to methods that specifically test a small number of pre-specified genetic regions. Hence, GWAS is a non-candidate-driven approach, in contrast to gene-specific candidate-driven studies. GWA studies identify SNPs and other variants in DNA associated with a disease, but they cannot on their own specify which genes are causal. The first successful GWAS, published in 2002, studied myocardial infarction. This study design was then implemented in the landmark 2005 GWA study investigating patients with age-related macular degeneration, which found two SNPs with significantly altered allele frequency compared to healthy controls. As of 2017, over 3,000 human GWA studies have examined over 1,800 diseases and traits, and thousands of SNP associations have been found. Except in the case of rare genetic diseases, these associations are very weak, but while each individual association may not explain much of the risk, they provide insight into critical genes and pathways and can be important when considered in aggregate. Background: Any two human genomes differ in millions of different ways. There are small variations in the individual nucleotides of the genomes (SNPs) as well as many larger variations, such as deletions, insertions and copy number variations. Any of these may cause alterations in an individual's traits, or phenotype, which can be anything from disease risk to physical properties such as height. Around the year 2000, prior to the introduction of GWA studies, the primary method of investigation was through inheritance studies of genetic linkage in families. This approach had proven highly useful towards single gene disorders. However, for common and complex diseases the results of genetic linkage studies proved hard to reproduce. A suggested alternative to linkage studies was the genetic association study. This study type asks if the allele of a genetic variant is found more often than expected in individuals with the phenotype of interest (e.g. with the disease being studied).
Early calculations on statistical power indicated that this approach could be better than linkage studies at detecting weak genetic effects. In addition to the conceptual framework, several additional factors enabled the GWA studies. One was the advent of biobanks, which are repositories of human genetic material that greatly reduced the cost and difficulty of collecting sufficient numbers of biological specimens for study. Another was the International HapMap Project, which, from 2003, identified a majority of the common SNPs interrogated in a GWA study. The haploblock structure identified by the HapMap project also allowed the focus on the subset of SNPs that would describe most of the variation. The development of methods to genotype all these SNPs using genotyping arrays was also an important prerequisite. Methods: The most common approach of GWA studies is the case-control setup, which compares two large groups of individuals, one healthy control group and one case group affected by a disease. All individuals in each group are typically genotyped at common known SNPs. The exact number of SNPs depends on the genotyping technology, but is typically one million or more. For each of these SNPs it is then investigated whether the allele frequency is significantly altered between the case and the control group. In such setups, the fundamental unit for reporting effect sizes is the odds ratio. The odds ratio is the ratio of two odds, which in the context of GWA studies are the odds of being a case for individuals having a specific allele and the odds of being a case for individuals who do not have that same allele. Methods: Example: suppose that there are two alleles, T and C. The number of individuals in the case group having allele T is represented by 'A' and the number of individuals in the control group having allele T is represented by 'B'. Similarly, the number of individuals in the case group having allele C is represented by 'X' and the number of individuals in the control group having allele C is represented by 'Y'. In this case the odds ratio for allele T is A:B (meaning 'A to B', in standard odds terminology) divided by X:Y, which in mathematical notation is simply (A/B)/(X/Y). Methods: When the allele frequency in the case group is much higher than in the control group, the odds ratio is higher than 1, and vice versa for lower allele frequency. Additionally, a P-value for the significance of the odds ratio is typically calculated using a simple chi-squared test. Finding odds ratios that are significantly different from 1 is the objective of the GWA study because this shows that a SNP is associated with disease. Because so many variants are tested, it is standard practice to require the p-value to be lower than 5×10⁻⁸ to consider a variant significant.
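A worked version of the odds-ratio example makes the A/B/X/Y notation concrete. The Python sketch below is illustrative and not from the source; the allele counts are invented, and scipy's chi2_contingency supplies the chi-squared P-value mentioned above:

```python
from scipy.stats import chi2_contingency

A, B = 1200, 900   # allele T counts: cases (A) and controls (B) -- assumed
X, Y = 800, 1100   # allele C counts: cases (X) and controls (Y) -- assumed

odds_ratio = (A / B) / (X / Y)   # (A:B) divided by (X:Y)
chi2, p, dof, expected = chi2_contingency([[A, B], [X, Y]])
print("OR = %.2f, chi2 = %.1f, p = %.2e" % (odds_ratio, chi2, p))
# A GWAS would additionally require p < 5e-8 before calling the SNP significant.
```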
Methods: Variations on the case-control approach. A common alternative to case-control GWA studies is the analysis of quantitative phenotypic data, e.g. height or biomarker concentrations or even gene expression. Likewise, alternative statistics designed for dominance or recessive penetrance patterns can be used. Calculations are typically done using bioinformatics software such as SNPTEST and PLINK, which also include support for many of these alternative statistics. GWAS focuses on the effect of individual SNPs. However, it is also possible that complex interactions among two or more SNPs (epistasis) might contribute to complex diseases. Due to the potentially exponential number of interactions, detecting statistically significant interactions in GWAS data is both computationally and statistically challenging. This task has been tackled in existing publications that use algorithms inspired by data mining. Moreover, researchers have tried to integrate GWA data with other biological data, such as protein-protein interaction networks, to extract more informative results. A key step in the majority of GWA studies is the imputation of genotypes at SNPs not on the genotype chip used in the study. This process greatly increases the number of SNPs that can be tested for association, increases the power of the study, and facilitates meta-analysis of GWAS across distinct cohorts. Genotype imputation is carried out by statistical methods that impute genotypic data using a reference panel of haplotypes, which typically have been densely genotyped using whole-genome sequencing. These methods take advantage of the sharing of haplotypes between individuals over short stretches of sequence to impute alleles. Existing software packages for genotype imputation include IMPUTE2, Minimac, Beagle and MaCH. In addition to the calculation of association, it is common to take into account any variables that could potentially confound the results. Sex, age, and ancestry are common examples of confounding variables. Moreover, it is also known that many genetic variations are associated with the geographical and historical populations in which the mutations first arose. Because of this association, studies must take account of the geographic and ethnic background of participants by controlling for what is called population stratification. If they did not do so, the studies could produce false positive results. After odds ratios and P-values have been calculated for all SNPs, a common approach is to create a Manhattan plot. In the context of GWA studies, this plot shows the negative logarithm of the P-value as a function of genomic location. Thus the SNPs with the most significant association stand out on the plot, usually as stacks of points because of haploblock structure. Importantly, the P-value threshold for significance is corrected for multiple testing issues. The exact threshold varies by study, but the conventional genome-wide significance threshold is 5×10⁻⁸, given the hundreds of thousands to millions of tested SNPs. GWA studies typically perform the first analysis in a discovery cohort, followed by validation of the most significant SNPs in an independent validation cohort.
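The thresholding step that feeds a Manhattan plot is simple to sketch. The Python snippet below is illustrative and not from the source (the P-values are simulated); it computes the −log10(P) values that form the plot's y-axis and flags variants passing the conventional 5×10⁻⁸ genome-wide threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
n_snps = 100_000
p_values = rng.uniform(size=n_snps)   # null SNPs: P-values are uniform
p_values[42_000] = 1e-12              # one simulated true association

neg_log_p = -np.log10(p_values)       # the y-axis of a Manhattan plot
hits = np.flatnonzero(p_values < 5e-8)
print("genome-wide significant SNPs:", hits)   # almost surely just index 42000
print("their -log10(p):", neg_log_p[hits])
```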
Results: Attempts have been made at creating comprehensive catalogues of SNPs that have been identified from GWA studies. As of 2009, SNPs associated with diseases are numbered in the thousands. The first GWA study, conducted in 2005, compared 96 patients with age-related macular degeneration (ARMD) with 50 healthy controls. It identified two SNPs with significantly altered allele frequency between the two groups. These SNPs were located in the gene encoding complement factor H, which was an unexpected finding in the research of ARMD. The findings from these first GWA studies have subsequently prompted further functional research towards therapeutic manipulation of the complement system in ARMD. Another landmark publication in the history of GWA studies was the Wellcome Trust Case Control Consortium (WTCCC) study, the largest GWA study ever conducted at the time of its publication in 2007. The WTCCC included 14,000 cases of seven common diseases (~2,000 individuals for each of coronary heart disease, type 1 diabetes, type 2 diabetes, rheumatoid arthritis, Crohn's disease, bipolar disorder, and hypertension) and 3,000 shared controls. This study was successful in uncovering many new disease genes underlying these diseases. Since these first landmark GWA studies, there have been two general trends. One has been towards larger and larger sample sizes. By 2018, several genome-wide association studies had reached a total sample size of over 1 million participants, including 1.1 million in a genome-wide study of educational attainment, followed by another in 2022 with 3 million individuals, and a study of insomnia containing 1.3 million individuals. The reason is the drive towards reliably detecting risk-SNPs that have smaller effect sizes and lower allele frequency. Another trend has been towards the use of more narrowly defined phenotypes, such as blood lipids, proinsulin or similar biomarkers. These are called intermediate phenotypes, and their analyses may be of value to functional research into biomarkers. A variation of GWAS uses participants that are first-degree relatives of people with a disease. This type of study has been named genome-wide association study by proxy (GWAX). A central point of debate on GWA studies has been that most of the SNP variations found by GWA studies are associated with only a small increased risk of the disease, and have only a small predictive value. The median odds ratio is 1.33 per risk-SNP, with only a few showing odds ratios above 3.0. These magnitudes are considered small because they do not explain much of the heritable variation. This heritable variation is estimated from heritability studies based on monozygotic twins. For example, it is known that 40% of variance in depression can be explained by hereditary differences, but GWA studies only account for a minority of this variance. Clinical applications and examples: A challenge for future successful GWA studies is to apply the findings in a way that accelerates drug and diagnostics development, including better integration of genetic studies into the drug-development process and a focus on the role of genetic variation in maintaining health as a blueprint for designing new drugs and diagnostics. Several studies have looked into the use of risk-SNP markers as a means of directly improving the accuracy of prognosis. Some have found that the accuracy of prognosis improves, while others report only minor benefits from this use. Generally, a problem with this direct approach is the small magnitudes of the effects observed. A small effect ultimately translates into a poor separation of cases and controls and thus only a small improvement of prognosis accuracy. An alternative application is therefore the potential for GWA studies to elucidate pathophysiology. Clinical applications and examples: Hepatitis C treatment One such success is related to identifying the genetic variant associated with response to anti-hepatitis C virus treatment. For genotype 1 hepatitis C treated with pegylated interferon-alpha-2a or pegylated interferon-alpha-2b combined with ribavirin, a GWA study has shown that SNPs near the human IL28B gene, encoding interferon lambda 3, are associated with significant differences in response to the treatment. A later report demonstrated that the same genetic variants are also associated with the natural clearance of the genotype 1 hepatitis C virus.
These major findings facilitated the development of personalized medicine, allowing physicians to customize medical decisions based on the patient's genotype. Clinical applications and examples: eQTL, LDL and cardiovascular disease The goal of elucidating pathophysiology has also led to increased interest in the association between risk-SNPs and the gene expression of nearby genes, the so-called expression quantitative trait loci (eQTL) studies. The reason is that GWAS studies identify risk-SNPs, but not risk-genes, and specification of genes is one step closer towards actionable drug targets. As a result, major GWA studies by 2011 typically included extensive eQTL analysis. One of the strongest eQTL effects observed for a GWA-identified risk SNP is the SORT1 locus. Functional follow-up studies of this locus using small interfering RNA and gene knock-out mice have shed light on the metabolism of low-density lipoproteins, which has important clinical implications for cardiovascular disease. Clinical applications and examples: Atrial fibrillation For example, a meta-analysis conducted in 2018 revealed the discovery of 70 new loci associated with atrial fibrillation. Different variants associated with transcription-factor-coding genes, such as TBX3, TBX5, NKX2-5 and PITX2, have been identified; these genes are involved in cardiac conduction regulation, ionic channel modulation and cardiac development. New genes involved in tachycardia (CASQ2) or associated with alteration of cardiac muscle cell communication (PKP2) were also identified. Clinical applications and examples: Schizophrenia Research using a High-Precision Protein Interaction Prediction (HiPPIP) computational model discovered 504 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia. While the evidence supporting the genetic basis of schizophrenia is not controversial, one study found that 25 candidate schizophrenia genes discovered from GWAS had little association with schizophrenia, demonstrating that GWAS alone may be insufficient to identify candidate genes. Conservation applications: Population-level GWA studies may be used to identify adaptive genes to help evaluate the ability of species to adapt to changing environmental conditions as the global climate becomes warmer. This could help determine extirpation risk for species and could therefore be an important tool for conservation planning. Utilizing GWA studies to determine adaptive genes could help elucidate the relationship between neutral and adaptive genetic diversity. Agricultural applications: Plant growth stages and yield components GWA studies act as an important tool in plant breeding. With large genotyping and phenotyping data, GWAS are powerful in analyzing complex inheritance modes of traits that are important yield components, such as number of grains per spike, weight of each grain and plant structure. In a study on GWAS in spring wheat, GWAS revealed a strong correlation of grain production with booting date, biomass and number of grains per spike. GWA studies have also been successful in studying the genetic architecture of complex traits in rice. Agricultural applications: Plant pathogens The emergence of plant pathogens has posed serious threats to plant health and biodiversity. Under this consideration, identification of wild types that have natural resistance to certain pathogens could be of vital importance. Furthermore, we need to predict which alleles are associated with the resistance.
GWA studies are a powerful tool for detecting the relationships between certain variants and resistance to a plant pathogen, which is beneficial for developing new pathogen-resistant cultivars. Agricultural applications: Chicken The first GWA study in chickens was done by Abasht and Lamont in 2007. This GWA study was used to study the fatness trait in an F2 population studied previously. Significantly associated SNPs were found on 10 chromosomes (1, 2, 3, 4, 7, 8, 10, 12, 15 and 27). Limitations: GWA studies have several issues and limitations that can be taken care of through proper quality control and study setup. Lack of well-defined case and control groups, insufficient sample size, and inadequate control for population stratification are common problems. On the statistical issue of multiple testing, it has been noted that "the GWA approach can be problematic because the massive number of statistical tests performed presents an unprecedented potential for false-positive results". This is why all modern GWAS use a very low p-value threshold. In addition to easily correctible problems such as these, some more subtle but important issues have surfaced. A high-profile GWA study that investigated individuals with very long life spans to identify SNPs associated with longevity is an example of this. The publication came under scrutiny because of a discrepancy between the type of genotyping array in the case and control group, which caused several SNPs to be falsely highlighted as associated with longevity. The study was subsequently retracted, but a modified manuscript was later published. Now, many GWAS control for genotyping array. If there are substantial differences between groups in the type of genotyping array, as with any confounder, GWA studies could result in false positives. Another consequence is that such studies are unable to detect the contribution of very rare mutations not included in the array or able to be imputed. Additionally, GWA studies identify candidate risk variants for the population from which their analysis is performed, and with most GWA studies historically stemming from European databases, there is a lack of translation of the identified risk variants to other, non-European populations. Alternative strategies suggested involve linkage analysis. More recently, the rapidly decreasing price of complete genome sequencing has also provided a realistic alternative to genotyping-array-based GWA studies. High-throughput sequencing has the potential to side-step some of the shortcomings of non-sequencing GWA. Fine-mapping: Genotyping arrays designed for GWAS rely on linkage disequilibrium to provide coverage of the entire genome by genotyping a subset of variants. Because of this, the reported associated variants are unlikely to be the actual causal variants. Associated regions can contain hundreds of variants spanning large regions and encompassing many different genes, making the biological interpretation of GWAS loci more difficult. Fine-mapping is a process to refine these lists of associated variants to a credible set most likely to include the causal variant. Fine-mapping: Fine-mapping requires all variants in the associated region to have been genotyped or imputed (dense coverage), very stringent quality control resulting in high-quality genotypes, and large sample sizes sufficient for separating out highly correlated signals. There are several different methods to perform fine-mapping, and all methods produce a posterior probability that a variant in that locus is causal.
Because the requirements are often difficult to satisfy, there are still limited examples of these methods being more generally applied.
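One widely used way of producing such posterior probabilities from summary statistics is Wakefield's approximate Bayes factor; under the simplifying assumption of exactly one causal variant in the locus and equal priors, normalizing the per-variant Bayes factors yields posterior inclusion probabilities (PIPs). The sketch below is an illustrative approximation and not the method of any specific fine-mapping tool; the effect sizes, standard errors and prior variance W are assumptions:

```python
import numpy as np

def approx_bayes_factor(beta, se, W=0.04):
    """Wakefield's ABF in favour of association; W is the prior effect variance."""
    V = se ** 2
    z2 = (beta / se) ** 2
    return np.sqrt(V / (V + W)) * np.exp(z2 * W / (2 * (V + W)))

beta = np.array([0.02, 0.15, 0.14, 0.03, 0.01])  # assumed effect estimates
se = np.full(5, 0.02)                            # assumed standard errors

bf = approx_bayes_factor(beta, se)
pip = bf / bf.sum()   # equal prior on each SNP being the single causal variant
for i, p in enumerate(pip):
    print("SNP %d: PIP = %.3f" % (i, p))
```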
**Hybrid zone** Hybrid zone: A hybrid zone exists where the ranges of two interbreeding species or diverged intraspecific lineages meet and cross-fertilize. Hybrid zones can form in situ due to the evolution of a new lineage, but generally they result from secondary contact of the parental forms after a period of geographic isolation which allowed their differentiation (or speciation). Hybrid zones are useful in studying the genetics of speciation, as they can provide natural examples of differentiation and (sometimes) gene flow between populations that are at some point between representing a single species and representing multiple species in reproductive isolation. Definition: Hybrid zones are areas where the hybrid offspring of two divergent taxa (species, subspecies or genetic "forms") are prevalent and there is a cline in the genetic composition of populations from one taxon to the other. The two (or more) genetically differentiated species or lineages contributing to the formation of a hybrid zone are regarded as parental forms. Precise definitions of hybrid zones vary; some insist on increased variability of fitness within the zone, others that hybrids be identifiably different from parental forms, and others that they represent secondary contact alone. The widths of such zones can vary from tens of metres to hundreds of kilometres. The shape of the zones (clines) can be gradual or stepped. Additionally, hybrid zones may be ephemeral or long-lasting. Definition: Some hybrid zones can be seen as presenting a paradox for the biological definition of a species, usually given as "a population of actually or potentially interbreeding individuals that produce fertile offspring" under what has become known as the Biological Species Concept. Under this definition, both parental forms could be argued to be the same species if they produce fertile offspring at least some of the time. However, the two parental populations or species often remain identifiably distinct, conforming to an alternative, and presently preferred, concept of species as "taxa that retain their identity despite gene flow". The clines of hybrid zones can be observed by recording the frequency of certain diagnostic alleles or phenotypic characteristics for either population along a transect between the two parental populations or species. Often the clines take the form of a sigmoidal curve. They can be wide (gradual) or narrow (steep) depending on the ratio of hybrid survival to recombination of genes. Hybrid zones which show no regular transition from one taxon to the other, but rather a patchy distribution of parental forms and subpopulations with hybrid background, are termed mosaic hybrid zones.
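The sigmoidal cline mentioned above is commonly modelled as a tanh curve characterized by a centre and a width, with the width conventionally defined as the inverse of the maximum slope. The Python sketch below is illustrative and not from the source; the centre, width and sampling positions are assumptions:

```python
import numpy as np

def cline(x, centre=0.0, width=20.0):
    """Expected frequency of one parental allele at transect position x (km)."""
    # Standard tanh cline: the maximum slope, at the centre, equals 1/width.
    return 0.5 * (1.0 + np.tanh(2.0 * (x - centre) / width))

transect = np.linspace(-50, 50, 11)   # sampling sites along the transect (km)
for x, p in zip(transect, cline(transect)):
    print("%6.1f km: allele frequency %.2f" % (x, p))
```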
Definition: Forms Hybrid zones can be either primary or secondary. Primary hybrid zones occur where divergence is taking place between adjacent populations of a previously homogeneous species, possibly leading to parapatric speciation. As a population spreads across a contiguous area it may spread into an abruptly different environment. Through adaptation to the new environment, the adjacent populations begin parapatric divergence. The point of contact between the older population and the newer population is ideally a stepped cline, but due to dispersal across the line, hybridization takes place and a hybrid zone arises. Secondary hybrid zones in turn arise from secondary contact between two populations that were previously allopatric. In practice it can be quite difficult to distinguish between primary and secondary contact by observing an existing hybrid zone. Most of the prominent, recognized hybrid zones are thought to be secondary. Definition: One form of hybrid zone results where one species has undergone allopatric speciation and the two new populations regain contact after a period of geographic isolation. The two populations then mate within an area of contact, producing 'hybrids' which contain a mixture of the alleles distinctive for each population. Thus novel genes flow from either side into the hybrid zone. Genes can also flow back into the distinct populations through interbreeding between hybrids and parental (non-hybrid) individuals (introgression). These processes lead to the formation of a cline between the two pure forms within the hybrid zone. In the centre of such a cline, hybrizymes are commonly found. These are alleles that are normally rare in both species but, probably due to genetic hitchhiking on genes for hybrid fitness, reach high frequencies in the areas where most hybrids are formed. Whereas some hybrid zones may break down due to selection against hybrid individuals (e.g. driving the evolution of reproductive character displacement) or merging of the parental forms, hybrid zones and gene flow do not inevitably lead to merging of the two populations involved, and some hybrid zones may be retained for thousands of years. Some persistent hybrid zones are 'tension zones', where the conflicting effects of dispersal of parental forms and selection against hybrids balance each other. Dispersal of individual parents leads to the creation of more hybrids within the hybrid zone. This may result in introgression between the two parental populations because of backcrossing. However, in the tension zone model, hybrids are less fit than parental forms (perhaps because they lack the coadapted gene complexes of the parentals that make them well adapted to the environments on either side of the hybrid zone), or even inviable or sterile. Inviability or sterility of hybrids forms a barrier to gene flow by making a 'hybrid sink' into which genes from parentals flow but rarely continue into the other parental population. Statistical models suggest that neutral alleles flow across this barrier very slowly, while positively selected alleles move across quite rapidly. An interesting outcome of this model is that tension zones are almost environment-independent and can therefore move; empirical cases of this have been found. In contrast to the tension zone model, the bounded hybrid superiority hypothesis predicts that hybrid fitness is enhanced in environments that are intermediate between those of the parental populations or lineages, yielding 'hybrid superiority'. Another model for a persistent hybrid zone is the ecotonal model, in which a hybrid zone occurs over an environmental gradient with each parental lineage being adapted to one part of that gradient. The equilibrium frequency of each allele therefore depends on the precise environmental conditions in a particular area. In each location, selection maintains a stable equilibrium for each allele, resulting in a smooth cline. The hybrids must therefore be fitter at some point along the cline. Another model is the wave of advance model, which sees multiple clines for individual alleles forming due to the progression of advantageous alleles from one population to the other.
Under the mosaic model, the hybrid zone is maintained by parentals distributed across the landscape among a mosaic of recurring hybrids which are selected against. Certain factors contribute to the stability and steepness of hybrid zones within these models by reducing the frequency of inter-population mating and introgression. These include positive assortative mating within populations, habitat selection by different populations and hybrid unfitness. Additionally, it is suggested that individuals in populations near a tension zone (in which hybrids are less fit) evolve methods of only mating with their own population to reduce the prevalence of unfit hybrids. This is dubbed reinforcement, and its importance remains controversial. Marine hybrid zone case study: Hybrid zones are thought to be less common in marine than terrestrial environments. However, blue mussel populations show extensive hybridisation worldwide and are a well-studied example of a marine hybrid zone. There are multiple sites of hybridisation between the closely related species Mytilus edulis, Mytilus trossulus and Mytilus galloprovincialis across the North Atlantic and Pacific coasts. These hybrid zones vary considerably. Some hybrid zones, such as the one in Newfoundland in Canada, show remarkably few hybrids, while in the Baltic Sea most individuals are hybrids. Marine hybrid zone case study: Based on the fossil record and genetic marker studies, the following chronology is used to explain the Canadian mussel hybrid zone: The genus Mytilus is at one point restricted to the North Pacific but spreads to the Atlantic through the Bering Strait around 3.5 million years ago. M. trossulus evolves in the North Pacific and M. edulis in the Atlantic in near allopatry, as migration across the Bering Strait is very low. Marine hybrid zone case study: Recently, in post-glacial times, M. trossulus from the Pacific enters the Atlantic and colonises shores on both sides, and meets with the local M. edulis. The Canadian mussel hybrid zone is unusual because both species are found along the entire shore (a mosaic pattern) instead of the typical cline found in most hybrid zones. Studies of mtDNA and allozymes in adult populations show that the distribution of genotypes between the two species is bimodal; pure parental types are most common (representing above 75% of individuals) while backcrosses close to parental forms are the next most prevalent. F1 hybrid crosses represent less than 2.5% of individuals. The low frequency of F1 hybrids coupled with some introgression allows us to infer that although fertile hybrids can be produced, significant reproductive barriers exist and the two species have diverged sufficiently that they are now able to avoid recombinational collapse despite habitat sharing. One reason that could account for keeping the taxa separate through prezygotic isolation is that in this region M. edulis spawns over a narrow 2–3 week period in July, while M. trossulus spawns over a more extensive period from late spring to early autumn. No infertility or developmental retardation was found in the hybrid individuals, allowing them to introgress with the pure species.
**Barnstorming (sports)** Barnstorming (sports): In athletics terminology, barnstorming refers to sports teams or individual athletes that travel to various locations, usually small towns, to stage exhibition matches. Barnstorming teams differ from traveling teams in that they operate outside the framework of an established athletic league, while traveling teams are designated by a league, formally or informally, to be a designated visiting team. Barnstorming allowed athletes to compete in two sports; for example, Reece "Goose" Tatum played basketball for the Harlem Globetrotters and baseball for a Negro leagues barnstorming team. Some barnstorming teams lack home arenas, while others go on "barnstorming tours" in the off-season. History: Teams in baseball's Negro leagues often barnstormed before, during, and after their league's regular season. Hall of Fame baseball pitcher Satchel Paige barnstormed with Dempsey Hovland's Caribbean Kings. Hovland founded (and owned) several barnstorming teams, including the Texas Cowgirls (1949–1977), the first integrated professional women's basketball team to tour worldwide, and the New York Harlem Queens. The Harlem Globetrotters and Texas Cowgirls shared training camps, seasons, and circuits. History: While barnstorming is no longer as popular as it was in the 20th century, some teams such as basketball's Harlem Globetrotters, softball's King and His Court founded by Eddie Feigner, and ice hockey's Buffalo Sabres Alumni Hockey Team carry on the tradition. In the 1990s the Colorado Silver Bullets women's baseball team resurrected barnstorming because there was no women's league. History: It was very common in the early days of professional American football; for instance, the Los Angeles Wildcats of the first American Football League (AFL) of 1926 played the regular season as a traveling team, then went on a post-season barnstorming tour of Texas and California, with Red Grange and the New York Yankees as the designated opponent for most of these games. NFL teams were also known to barnstorm in small towns against local teams all the way up through World War II. History: Several auto racers, most notably Barney Oldfield, staged exhibitions around the United States in the early twentieth century. In 1914 he barnstormed against the aviator Lincoln Beachey at least 35 times. In rugby union, the Barbarians, an invitation-only team, are famous for having no ground or clubhouse. Teams:
- American football: Los Angeles Wildcats; Tampa Cardinals
- Baseball: Caribbean Kings; House of David, a baseball team that toured the rural United States from the 1920s until the 1950s; Indianapolis Clowns; Iowa Colored Cowboys; Savannah Bananas
- Basketball: Harlem Globetrotters; Texas Cowgirls; Washington Generals
- Cricket: Bunbury Cricket Club; Kaipaki Nation Cricket Club
- Ice hockey: Buffalo Sabres Alumni Hockey Team; Flying Fathers; Montreal Canadiens Legends
- Rugby union: Barbarian F.C.
- Softball: King and His Court
- Other: The Lancaster Barnstormers are a professional baseball team based in Lancaster, Pennsylvania. They are a member of the Freedom Division of the Atlantic League of Professional Baseball, and do not engage in actual barnstorming.
In popular culture: In the 2001 book Danger Boy by Mark London Williams, Barnstormers is a video game the main character plays, in which players choose humans, vampires, zombies, and other creatures to assemble a barnstorming baseball team that travels the country competing against other teams.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dripstick** Dripstick: A dripstick is a thin hollow tube installed vertically in the bottoms of fuel tanks of many large aircraft, used to check fuel levels. To read a dripstick, it is withdrawn from the lower surface of the wing. When the top of the dripstick is withdrawn below the level of the fuel, fuel enters it and drips through a hole in the cap. Graduations on it indicate the level of fuel in the tank.
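Converting a graduation reading into a fuel quantity is tank-specific; crews use calibration tables for the aircraft type. The sketch below is a toy illustration of that lookup only; the calibration values and function name are invented for this example, and real tables also correct for aircraft attitude and fuel density.

```python
# Toy dripstick-to-fuel lookup: linear interpolation over an invented
# calibration table. Real aircraft tables are type-specific and also
# correct for attitude and fuel density.
import bisect

CALIBRATION = [(0, 0), (5, 120), (10, 310), (15, 560), (20, 870)]  # (cm, litres)

def fuel_litres(reading_cm: float) -> float:
    xs = [g for g, _ in CALIBRATION]
    ys = [v for _, v in CALIBRATION]
    i = bisect.bisect_left(xs, reading_cm)
    if i == 0:
        return float(ys[0])
    if i == len(xs):
        return float(ys[-1])
    # Interpolate between the two bracketing calibration points.
    frac = (reading_cm - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

print(fuel_litres(12.5))  # 435.0 litres under this toy table
```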
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Exploration of the Valley of the Amazon** Exploration of the Valley of the Amazon: Exploration of the Valley of the Amazon is a two-volume publication by two young USN lieutenants, William Lewis Herndon (vol. 1) and Lardner A. Gibbon (vol. 2). Herndon split the main party in two so that he and Gibbon could explore two different areas of the Valley of the Amazon. The Expedition: In 1851 William Lewis Herndon was ordered to head an expedition exploring the Valley of the Amazon – a vast uncharted area. Departing Lima, Peru, on 21 May 1851, Lieut. Herndon, Lieut. Lardner Gibbon, and a small party of six men pressed into the wild and treacherously beautiful jungles. They split up and took different routes to gather even more information on this vast area. After a journey of 4,366 miles (7,026 km), which took Herndon through the wilderness from sea level to heights of 16,199 feet (4,937 m), Herndon reached the city of Pará, Brazil on 11 April 1852. On 26 January 1853, Herndon submitted an encyclopedic and profusely illustrated 414-page report to the Secretary of the Navy John P. Kennedy. The report was later published as Exploration of the Valley of the Amazon. The Expedition: The two volumes, one written by Lieutenant Herndon and the other by Lieutenant Gibbon, were so unusual for their time and of such importance that it was immediately ordered that "10,000 additional copies be printed for the use of the Senate." Three months later another 20,000 copies were ordered; the book became an international best-seller. Their orders were to report on all possible conditions in the Amazon region, which they would each have to traverse alone from Lima, Peru, on the Pacific coast to Pará, Brazil, at the mouth of the Amazon. The two volumes were published by presidential order. Background documents: LETTER OF THE SECRETARY OF THE NAVY, COMMUNICATING A Report of an Exploration of the Valley of the Amazon and its tributaries, made by Lieut. Herndon, in connection with Lieut. Gibbon. FEBRUARY 10, 1853. — Referred to the Committee on Naval Affairs and ordered to be printed. MARCH 3, 1853. — ordered that 10,000 additional copies be printed for the use of the Senate. To the Senate and House of Representatives. I herewith transmit a communication from the Secretary of the Navy, accompanied by the first part of Lieut. Herndon's Report of the Exploration of the Valley of the Amazon and its tributaries, made by him, in connection with Lieut. Lardner Gibbon, under instructions from the Navy Department. MILLARD FILLMORE. WASHINGTON, February 9, 1853. NAVY DEPARTMENT, February, 1853. Background documents: To the President: SIR, In compliance with the notice given in the annual report of this department to the President, and communicated to Congress at the opening of its present session, I have the honor herewith to submit the first part of the Report of Lieut. Herndon, of the Exploration of the Valley of the Amazon and its tributaries, made by him, in connection with Lieut. Lardner Gibbon, under instructions from this department, dated the 15th of February, 1851. Background documents: I am happy to be able to inform you that Lieut. Gibbon reached Pará on his homeward journey some weeks ago, and may very soon be expected to arrive in the United States. When he returns, Lieut. Herndon will have all the materials necessary to complete his report, and will devote himself to that labor with the same assiduity which has characterized his present work.
Background documents: I would respectfully beg leave to suggest that, in submitting this report to the House of Representatives, it be accompanied with a request to that body, if it should think proper to direct the printing of this valuable document, that the order for that purpose may include all the remaining portions of the report which may hereafter be furnished; and that the order for printing shall include a suitable direction for the engraving and publication of the maps, charts, and sketches, which will be furnished as necessary illustrations of the subjects treated of in the report. I have the honor to be, with the highest consideration, your obedient servant, JOHN P. KENNEDY. Background documents: WASHINGTON CITY, January 26, 1853. To the Hon. JOHN P. KENNEDY, Secretary of the Navy. SIR: I have the honor to submit part first of the Report of an Exploration of the Valley of the Amazon, made by me, with the assistance of Lieut. Lardner Gibbon, under instructions of the Navy Department, bearing date February 15, 1851. Background documents: The desire expressed by the department for an early report of my exploration of the Amazon, and the general interest manifested in the public mind with regard to the same, have induced me to lay before you at once as full an account of our proceedings as can be made before the return of my companion. The general map which accompanies the report is based upon maps published by the Society for the Diffusion of Useful Knowledge, but corrected and improved according to my own personal observations, and information obtained by me whilst in that country. Background documents: The final report of the expedition will be submitted as soon after Lieut. Gibbon's return as practicable. I am in daily expectation of intelligence from him. At the latest accounts (26 July 1852) he was at Trinidad de Moxos, on the Mamoré, in the Republic of Bolivia, making his preparations for the descent of the Madeira. I have the honor to be, very respectfully, your obedient servant, WM. LEWIS HERNDON, LIEUT. U. S. NAVY.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sugars in wine** Sugars in wine: Sugars in wine are at the heart of what makes winemaking possible. During the process of fermentation, sugars from wine grapes are broken down and converted by yeast into alcohol (ethanol) and carbon dioxide. Grapes accumulate sugars as they grow on the grapevine through the translocation of sucrose molecules that are produced by photosynthesis in the leaves. During ripening the sucrose molecules are hydrolyzed (separated) by the enzyme invertase into glucose and fructose. By the time of harvest, between 15 and 25% of the grape will be composed of simple sugars. Both glucose and fructose are six-carbon sugars, but three-, four-, five- and seven-carbon sugars are also present in the grape. Not all sugars are fermentable: sugars like the five-carbon arabinose, rhamnose and xylose are still present in the wine after fermentation. Very high sugar content will effectively kill the yeast once a certain (high) alcohol content is reached. For these reasons, no wine is ever fermented completely "dry" (meaning without any residual sugar). Sugar's role in dictating the final alcohol content of the wine (and thus its resulting body and "mouth-feel") sometimes encourages winemakers to add sugar (usually sucrose) during winemaking in a process known as chaptalization, solely in order to boost the alcohol content – chaptalization does not increase the sweetness of a wine. Sucrose: Sucrose is a disaccharide, a molecule composed of the two monosaccharides glucose and fructose. Invertase is the enzyme that cleaves the glycosidic linkage between the glucose and fructose molecules. Sucrose: In most wines, there will be very little sucrose, since it is not a natural constituent of grapes and sucrose added for the purpose of chaptalisation will be consumed in the fermentation. The exception to this rule is Champagne and other sparkling wines, to which an amount of liqueur d'expédition (typically sucrose dissolved in a still wine) is added after the second fermentation in bottle, a practice known as dosage. Sucrose: Glucose Glucose, along with fructose, is one of the primary sugars found in wine grapes. In wine, glucose tastes less sweet than fructose. It is a six-carbon sugar molecule derived from the breakdown of sucrose. At the beginning of the ripening stage there is usually more glucose than fructose present in the grape (as much as five times more), but the rapid development of fructose shifts the ratio so that at harvest there are generally equal amounts. Grapes that are overripe, such as those used for some late harvest wines, may have more fructose than glucose. During fermentation, yeast cells break down and convert glucose first. The linking of glucose molecules with aglycones, in a process that creates glycosides, also plays a role in the resulting flavor of the wine due to their relation and interactions with phenolic compounds like anthocyanins and terpenoids. Sucrose: Fructose Fructose, along with glucose, is one of the principal sugars involved in the creation of wine. At the time of harvest, there is usually an equal amount of glucose and fructose molecules in the grape; however, as the grape overripens the level of fructose will become higher. In wine, fructose can taste nearly twice as sweet as glucose and is a key component in the creation of sweet dessert wines. During fermentation, glucose is consumed first by the yeast and converted into alcohol.
A winemaker who chooses to halt fermentation (either by temperature control or by the addition of brandy spirits in the process of fortification) will be left with a wine that is high in fructose, with notable residual sugar. The technique of süssreserve, where unfermented grape must is added after the wine's fermentation is complete, will result in a wine that tastes less sweet than a wine whose fermentation was halted. This is because the unfermented grape must will still have roughly equal parts of fructose and the less sweet-tasting glucose. Similarly, the process of chaptalization, where sucrose (which is one part glucose and one part fructose) is added, will usually not increase the sweetness level of the wine. In wine tasting: In wine tasting, humans are least sensitive to the taste of sweetness (in contrast to sensitivity to bitterness or sourness), with the majority of the population being able to detect sugar or "sweetness" in wines at between 1% and 2.5% residual sugar. Additionally, other components of wine such as acidity and tannins can mask the perception of sugar in the wine. Flash release: Flash release is a technique used in wine pressing. The technique allows for a better extraction of wine polysaccharides.
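As a rough numerical illustration of sugar's role in dictating final alcohol content, the sketch below uses the common rule of thumb that roughly 16.8 g/L of fermentable sugar yields about 1% alcohol by volume. The constant and function are assumptions made for this illustration; the true yield varies with yeast strain and fermentation conditions.

```python
# Rule-of-thumb potential-alcohol estimate: ~16.8 g/L of fermentable
# sugar per 1% ABV (an approximation; actual yield varies).
SUGAR_PER_ABV = 16.8  # g/L of sugar per 1% alcohol by volume

def potential_abv(sugar_g_per_l: float, residual_g_per_l: float = 0.0) -> float:
    """ABV if fermentation consumes all sugar above the residual level."""
    fermentable = max(sugar_g_per_l - residual_g_per_l, 0.0)
    return fermentable / SUGAR_PER_ABV

# A must at 220 g/L sugar fermented down to 2 g/L residual sugar:
print(f"{potential_abv(220, 2):.1f}% ABV")  # ~13.0% ABV
```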
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Canon EF 500mm lens** Canon EF 500mm lens: The EF 500mm lenses are a group of super-telephoto prime lenses made by Canon that share the same focal length. These lenses have an EF type mount, and fit the Canon EOS line of digital single lens reflex cameras. Canon EF 500mm lens: When used on a camera with a field of view compensation factor of 1.6x, such as the Canon EOS 400D, they provide a narrower field of view, equivalent to an 800 mm lens mounted on a 35mm frame body. With a 1.3x body such as the Canon EOS-1D Mark III, they provide a less narrow field of view, equivalent to a 650mm lens mounted on a 35mm frame body. Canon EF 500mm lens: These lenses are most commonly used by sports and wildlife photographers. Three EF 500mm lenses have been available, and all are L series lenses. The most recent, the f/4L IS II, was released in 2012 and is the only model still in production. f/4.5L USM f/4L IS USM f/4L IS II USM EF 500mm f/4.5L USM: The EF 500mm f/4.5L USM is a professional L series lens that is now discontinued. This lens is constructed with a metal body and mount, and with plastic extremities and switches. Features of this lens include a wide, damped rubber focus ring; a distance window with infrared index; the ability to limit the focus range; a focus-preset mechanism; and the ability to set the AF speed. A 9-blade diaphragm and maximum aperture of f/4.5 give this lens the ability to create shallow depth-of-field effects. The optical construction of this lens contains 8 lens elements, including one fluorite lens element and one UD (Ultra-low Dispersion) lens element. This lens uses an inner focusing system, powered by a ring-type USM motor. The front of the lens does not rotate or extend when focusing. This lens is compatible with the Canon Extender EF teleconverters. EF 500mm f/4L IS USM: The EF 500mm f/4L IS USM is a professional L series lens that was designed to replace the EF 500mm f/4.5L USM. This lens is constructed with a metal body and mount, and with plastic extremities and switches. Features of this lens include a wide, damped rubber focus ring; a distance window with infrared index; the ability to limit the focus range to 4.5 m to infinity, 4.5 m to 10 m, or 10 m to infinity; a focus-preset mechanism; an image stabilizer that is effective up to two stops and is tripod-sensing; an AF stop switch; and weather sealing. An 8-blade diaphragm and maximum aperture of f/4 give this lens the ability to create shallow depth-of-field effects. The optical construction of this lens contains 17 lens elements, including one fluorite lens element and two UD lens elements. This lens uses an inner focusing system, powered by a ring-type USM motor. The front of the lens does not rotate or extend when focusing. This lens is compatible with the Canon Extender EF teleconverters. EF 500mm f/4L IS II USM: The EF 500mm f/4L IS II USM is the replacement for the EF 500mm f/4L IS USM. Its specifications are roughly similar to those of the f/4 Mark I, but it has a number of significant differences. The Mark II version contains one fewer lens element than its predecessor (16 as opposed to 17), and is 680 grams (1.50 lb) lighter. It also has a 9-blade aperture instead of the 8 blades of the Mark I.
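The field-of-view equivalence described above is simple arithmetic: the focal length times the body's crop factor. A minimal sketch (the function name is ours, for illustration):

```python
# Equivalent focal length on a cropped sensor = focal length x crop factor.
def equivalent_focal_length(focal_mm: float, crop_factor: float) -> float:
    return focal_mm * crop_factor

print(equivalent_focal_length(500, 1.6))  # 800.0 mm, e.g. EOS 400D (1.6x)
print(equivalent_focal_length(500, 1.3))  # 650.0 mm, e.g. EOS-1D Mark III (1.3x)
```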
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Supersonic airfoils** Supersonic airfoils: A supersonic airfoil is a cross-section geometry designed to generate lift efficiently at supersonic speeds. The need for such a design arises when an aircraft is required to operate consistently in the supersonic flight regime. Supersonic airfoils: Supersonic airfoils generally have a thin section formed of either angled planes or opposed arcs (called "double wedge airfoils" and "biconvex airfoils" respectively), with very sharp leading and trailing edges. The sharp edges prevent the formation of a detached bow shock in front of the airfoil as it moves through the air. This shape is in contrast to subsonic airfoils, which often have rounded leading edges to reduce flow separation over a wide range of angle of attack. A rounded edge would behave as a blunt body in supersonic flight and thus would form a bow shock, which greatly increases wave drag. The airfoil's thickness, camber, and angle of attack are varied to achieve a design that will cause only a slight deviation in the direction of the surrounding airflow. However, since a round leading edge decreases an airfoil's susceptibility to flow separation, a sharp leading edge implies that the airfoil will be more sensitive to changes in angle of attack. Therefore, to increase lift at lower speeds, aircraft that employ supersonic airfoils also use high-lift devices such as leading edge and trailing edge flaps. Lift and drag: At supersonic conditions, aircraft drag originates from: skin-friction drag due to shearing; wave drag due to thickness (or volume), also called zero-lift wave drag; and drag due to lift. Therefore, the drag coefficient of a supersonic airfoil is described by the expression $C_D = C_{D,\text{friction}} + C_{D,\text{thickness}} + C_{D,\text{lift}}$. Experimental data allow us to reduce this expression to $C_D = C_{D,0} + K C_L^2$, where $C_{D,0}$ is the sum of $C_{D,\text{friction}}$ and $C_{D,\text{thickness}}$, and $K$ for supersonic flow is a function of the Mach number. The skin-friction component is derived from the presence of a viscous boundary layer which is infinitely close to the surface of the aircraft body. At the boundary wall, the normal component of velocity is zero; therefore an infinitesimal area exists where there is no slip. The zero-lift wave drag component can be obtained from the supersonic area rule, which tells us that the wave drag of an aircraft in a steady supersonic flow is identical to the average of a series of equivalent bodies of revolution. The bodies of revolution are defined by the cuts through the aircraft made by the tangent to the fore Mach cone from a distant point of the aircraft at an azimuthal angle; the average is taken over all azimuthal angles. The drag-due-to-lift component is calculated using lift-analysis programs. The wing design and the lift-analysis programs are separate lifting-surface methods that solve the direct or inverse problem of design and lift analysis. Supersonic wing design: Years of research and experience with the unusual conditions of supersonic flow have led to some interesting conclusions about airfoil design. Considering a rectangular wing, the pressure at a point P with coordinates (x,y) on the wing is defined only by the pressure disturbances originating at points within the upstream Mach cone emanating from point P. As a result, the wing tips modify the flow within their own rearward Mach cones. The remaining area of the wing does not suffer any modification by the tips and can be analyzed with two-dimensional theory.
For an arbitrary planform, the supersonic leading and trailing edges are those portions of the wing edge where the component of the freestream velocity normal to the edge is supersonic. Similarly, the subsonic leading and trailing edges are those portions of the wing edge where the component of the freestream velocity normal to the edge is subsonic. Supersonic wing design: Delta wings have supersonic leading and trailing edges; in contrast, arrow wings have a subsonic leading edge and a supersonic trailing edge. When designing a supersonic airfoil, two factors that must be considered are shock and expansion waves. Whether a shock or expansion wave is generated at different locations along an airfoil depends on the local flow speed and direction, along with the geometry of the airfoil. Summary: Aerodynamic efficiency for supersonic aircraft increases with thin-section airfoils with sharp leading and trailing edges. Swept wings whose leading edge is subsonic have the advantage of reducing the wave drag component at supersonic flight speeds; however, experiments show that the theoretical benefits are not always attained due to separation of the flow over the surface of the wing, though this can be corrected with design factors. Double-wedge and biconvex airfoils are the most common designs used in supersonic flight. Wave drag is the simplest and most important component of the drag in supersonic flight regimes. For an optimized aircraft, nearly 60% of its drag is skin-friction drag, a little over 20% is induced drag, and slightly under 20% is wave drag; hence less than 30% of the drag is due to lift.
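A minimal numerical sketch of the drag build-up $C_D = C_{D,0} + K C_L^2$ discussed above; the coefficient values are assumptions chosen purely for illustration, since both $C_{D,0}$ and $K$ depend on the configuration and the Mach number:

```python
# Supersonic drag build-up: C_D = C_D0 + K * C_L^2, where C_D0 combines
# skin-friction and zero-lift wave drag and K depends on Mach number.
def drag_coefficient(cd0: float, k: float, cl: float) -> float:
    return cd0 + k * cl ** 2

cd0 = 0.02  # assumed skin-friction + thickness (zero-lift wave) drag
k = 0.35    # assumed lift-dependent factor at a given supersonic Mach number
for cl in (0.0, 0.1, 0.2, 0.3):
    print(f"C_L = {cl:.1f} -> C_D = {drag_coefficient(cd0, k, cl):.4f}")
```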
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reading stone** Reading stone: A reading stone is an approximately hemispherical lens that can be placed on top of text to magnify the letters so that people with presbyopia can read it more easily. Reading stones were among the earliest common uses of lenses. Reading stone: The invention of reading stones is often credited to Abbas ibn Firnas in the 9th century, although the regular use of reading stones only began around 1000 AD. Early reading stones were manufactured from rock crystal (quartz) or beryl as well as glass, which could be shaped and polished into stones used for viewing. The Swedish Visby lenses, dating from the 11th or 12th century, may have been reading stones. Reading stone: The function of reading stones was replaced by the use of spectacles from the late 13th century onwards, but modern implementations are still used. In their modern form, they can be found as rod-shaped magnifiers, flat on one side, that magnify a line of text at a time, or as large dome magnifiers which magnify a circular area of a page. Larger Fresnel lenses can be placed over an entire page. The modern forms are usually made of plastic.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DNA polymerase alpha** DNA polymerase alpha: DNA polymerase alpha, also known as Pol α, is an enzyme complex found in eukaryotes that is involved in the initiation of DNA replication. The DNA polymerase alpha complex consists of 4 subunits: POLA1, POLA2, PRIM1, and PRIM2. Pol α has limited processivity and lacks 3′ exonuclease activity for proofreading errors. Thus it is not well suited to efficiently and accurately copy long templates (unlike Pol δ and Pol ε). Instead it plays a more limited role in replication. Pol α is responsible for the initiation of DNA replication at origins of replication (on both the leading and lagging strands) and during synthesis of Okazaki fragments on the lagging strand. The Pol α complex (pol α-DNA primase complex) consists of four subunits: the catalytic subunit POLA1, the regulatory subunit POLA2, and the small and large primase subunits PRIM1 and PRIM2 respectively. Once primase has created the RNA primer, Pol α starts replication, elongating the primer with ~20 nucleotides. Structure: DNA polymerase alpha, like DNA primase, contains iron-sulfur clusters that are critical in electron transport, which uses DNA itself to transfer electrons at very high speed; this process is involved in detecting DNA damage and may also be involved in feedback between the primase complex and DNA polymerase alpha.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phencyclidine** Phencyclidine: Phencyclidine or phenylcyclohexyl piperidine (PCP), also known as angel dust among other names, is a dissociative anesthetic used mainly recreationally for its significant mind-altering effects. PCP may cause hallucinations, distorted perceptions of sounds, and violent behavior. As a recreational drug, it is typically smoked, but may be taken by mouth, snorted, or injected. It may also be mixed with cannabis or tobacco. Adverse effects may include seizures, coma, addiction, and an increased risk of suicide. Flashbacks may occur despite stopping usage. Chemically, PCP is a member of the arylcyclohexylamine class, and pharmacologically, it is a dissociative anesthetic. PCP works primarily as an NMDA receptor antagonist. PCP is most commonly used in the United States. While usage peaked there in the 1970s, emergency department visits attributable to the drug increased between 2005 and 2011. As of 2017 in the United States, about 1% of people in 12th grade reported using PCP in the prior year, while 2.9% of those over the age of 25 reported using it at some point in their lives. Recreational uses: Phencyclidine is used for its ability to induce a dissociative state. Recreational uses: Effects Behavioral effects can vary by dosage. Low doses produce numbness in the extremities and intoxication, characterized by staggering, unsteady gait, slurred speech, bloodshot eyes, and loss of balance. Moderate doses (5–10 mg intranasal, or 0.01–0.02 mg/kg intramuscular or intravenous) will produce analgesia and anesthesia. High doses may lead to convulsions. The drug is often illegally produced under poorly controlled conditions; this means that users may be unaware of the actual dose they are taking. Psychological effects include severe changes in body image, loss of ego boundaries, paranoia, and depersonalization. Psychosis, agitation and dysphoria, hallucinations, blurred vision, euphoria, and suicidal impulses are also reported, as well as occasional aggressive behavior.: 48–49  Like many other drugs, PCP has been known to alter mood states in an unpredictable fashion, causing some individuals to become detached, and others to become animated. PCP may induce feelings of strength, power, and invulnerability as well as a numbing effect on the mind. Studies by the Drug Abuse Warning Network in the 1970s show that media reports of PCP-induced violence are greatly exaggerated and that incidents of violence are unusual and often limited to individuals with reputations for aggression regardless of drug use.: 48  Although uncommon, events of PCP-intoxicated individuals acting in an unpredictable fashion, possibly driven by their delusions or hallucinations, have been publicized. Other commonly cited types of incidents include inflicting property damage and self-mutilation of various types, such as pulling one's own teeth.: 48  These effects were not noted in its medicinal use in the 1950s and 1960s, however, and reports of physical violence on PCP have often been shown to be unfounded. Recreational doses of the drug also occasionally appear to induce a psychotic state, with emotional and cognitive impairment that resembles a schizophrenic episode. Users generally report feeling detached from reality. Symptoms are summarized by the mnemonic device RED DANES: rage, erythema (redness of skin), dilated pupils, delusions, amnesia, nystagmus (oscillation of the eyeball when moving laterally), excitation, and skin dryness.
Recreational uses: Addiction PCP is self-administered and induces ΔFosB expression in the D1-type medium spiny neurons of the nucleus accumbens, and accordingly, excessive PCP use is known to cause addiction. PCP's rewarding and reinforcing effects are at least partly mediated by blocking the NMDA receptors in the glutamatergic inputs to D1-type medium spiny neurons in the nucleus accumbens. PCP has been shown to produce conditioned place aversion and conditioned place preference in animal studies. Recreational uses: Schizophrenia A 2019 review found that the transition rate from a diagnosis of hallucinogen-induced psychosis (which included PCP) to that of schizophrenia was 26%. This was lower than cannabis-induced psychosis (34%) but higher than amphetamine (22%), opioid (12%), alcohol (10%), and sedative (9%) induced psychoses. In comparison, the transition rate to schizophrenia for "brief, atypical and not otherwise specified" psychosis was found to be 36%. Recreational uses: Methods of administration PCP is easily accessible because of the various routes of administration available. Most commonly, the powder form of the drug is snorted. PCP can also be orally ingested, injected subcutaneously or intravenously, or smoked when laced onto marijuana or tobacco cigarettes. PCP can be ingested through smoking: "Fry" or "sherm" are street terms for marijuana or tobacco cigarettes that are dipped in PCP and then dried. PCP hydrochloride can be insufflated (snorted), depending upon the purity. This is most often referred to as "angel dust". An oral pill can also be compressed from the co-compounded powder form of the drug. This is usually referred to as "peace pill". The free base is quite hydrophobic and may be absorbed through skin and mucus membranes (often inadvertently). This form of the drug is commonly called "wack". Management of intoxication: Management of PCP intoxication mostly consists of supportive care – controlling breathing, circulation, and body temperature – and, in the early stages, treating psychiatric symptoms. Benzodiazepines, such as lorazepam, are the drugs of choice to control agitation and seizures (when present). Typical antipsychotics such as phenothiazines and haloperidol have been used to control psychotic symptoms, but may produce many undesirable side effects – such as dystonia – and their use is therefore no longer preferred; phenothiazines are particularly risky, as they may lower the seizure threshold, worsen hyperthermia, and boost the anticholinergic effects of PCP. If an antipsychotic is given, intramuscular haloperidol has been recommended. Forced acid diuresis (with ammonium chloride or, more safely, ascorbic acid) may increase clearance of PCP from the body, and was somewhat controversially recommended in the past as a decontamination measure. However, it is now known that only around 10% of a dose of PCP is removed by the kidneys, which would make increased urinary clearance of little consequence; furthermore, urinary acidification is dangerous, as it may induce acidosis and worsen rhabdomyolysis (muscle breakdown), a not-unusual manifestation of PCP toxicity. Pharmacology: Pharmacodynamics PCP is well known for its primary action on the NMDA receptor, an ionotropic glutamate receptor. As such, PCP is a noncompetitive NMDA receptor antagonist. The role of NMDAR antagonism in the effect of PCP, ketamine, and related dissociative agents was first published in the early 1980s by David Lodge and colleagues.
Other NMDA receptor antagonists include ketamine, tiletamine, dextromethorphan, nitrous oxide, and dizocilpine (MK-801). Pharmacology: Research also indicates that PCP inhibits nicotinic acetylcholine receptors (nAChRs) among other mechanisms. Analogues of PCP exhibit varying potency at nACh receptors and NMDA receptors. Findings demonstrate that presynaptic nAChRs and NMDA receptor interactions influence postsynaptic maturation of glutamatergic synapses and consequently impact synaptic development and plasticity in the brain. These effects can lead to inhibition of excitatory glutamate activity in certain brain regions such as the hippocampus and cerebellum, thus potentially leading to memory loss as one of the effects of prolonged use. Acute effects on the cerebellum manifest as changes in blood pressure, breathing rate, pulse rate, and loss of muscular coordination during intoxication. PCP, like ketamine, also acts as a potent dopamine D2High receptor partial agonist in rat brain homogenate and has affinity for the human cloned D2High receptor. This activity may be associated with some of the other more psychotic features of PCP intoxication, which is evidenced by the successful use of D2 receptor antagonists (such as haloperidol) in the treatment of PCP psychosis. In addition to its well explored interactions with NMDA receptors, PCP has also been shown to inhibit dopamine reuptake, and thereby leads to increased extracellular levels of dopamine and hence increased dopaminergic neurotransmission. However, PCP has little affinity for the human monoamine transporters, including the dopamine transporter (DAT). Instead, its inhibition of monoamine reuptake may be mediated by interactions with allosteric sites on the monoamine transporters. PCP is notably a high-affinity ligand of the PCP site 2 (Ki = 154 nM), a not-well-characterized site associated with monoamine reuptake inhibition. Studies on rats indicate that PCP interacts indirectly with opioid receptors (endorphin and enkephalin) to produce analgesia. A binding study assessed PCP at 56 sites including neurotransmitter receptors and transporters and found that PCP had Ki values of >10,000 nM at all sites except the dizocilpine (MK-801) site of the NMDA receptor (Ki = 59 nM), the σ2 receptor (PC12) (Ki = 136 nM), and the serotonin transporter (Ki = 2,234 nM). The study notably found Ki values of >10,000 nM for the D2 receptor, the opioid receptors, the σ1 receptor, and the dopamine and norepinephrine transporters. These results suggest that PCP is a highly selective ligand of the NMDAR and σ2 receptor. However, PCP may also interact with allosteric sites on the monoamine transporters to produce inhibition of monoamine reuptake. Pharmacology: Mechanism of action Phencyclidine is a noncompetitive NMDA receptor antagonist that blocks the activity of the NMDA receptor to cause anaesthesia and analgesia without causing cardiorespiratory depression. The NMDA receptor is an excitatory receptor in the brain; when activated normally, it acts as an ion channel, and an influx of positive ions through the channel causes nerve cell depolarisation. Phencyclidine inhibits the NMDA receptor by binding to the specific PCP binding site located within the ion channel. The PCP binding site is in close proximity to the magnesium blocking site, which may explain the similar inhibitory effects. Binding at the PCP site is mediated by two non-covalent interactions within the receptor: hydrogen bonding and hydrophobic interaction.
Binding is also controlled by the gating mechanism of the ion channel. Because the PCP site is located within the ion channel, a coagonist such as glycine must bind and open the channel in order for PCP to enter, bind to the PCP site, and block the channel. Pharmacology: Neurotoxicity Some studies found that, like other NMDA receptor antagonists, PCP can cause a kind of brain damage called Olney's lesions in rats. Studies conducted on rats showed that high doses of the NMDA receptor antagonist dizocilpine caused reversible vacuoles to form in certain regions of the rats' brains. All studies of Olney's lesions have only been performed on non-human animals and may not apply to humans. One unpublished study by Frank Sharp reportedly showed no damage from the NMDA antagonist ketamine, a structurally similar drug, at doses far beyond recreational ones, but because the study was never published, its validity is controversial. Pharmacology: PCP has also been shown to cause schizophrenia-like changes in N-acetylaspartate and N-acetylaspartylglutamate levels in the rat brain, which are detectable both in living rats and upon necropsy examination of brain tissue. It also induces symptoms in humans that mimic schizophrenia. PCP not only produced symptoms similar to schizophrenia, it also yielded electroencephalogram changes in the thalamocortical pathway (increased delta, decreased alpha) and in the hippocampus (increased theta bursts) that were similar to those in schizophrenia. PCP-induced augmentation of dopamine release may link the NMDA and dopamine hypotheses of schizophrenia. Pharmacology: Pharmacokinetics PCP is both water- and lipid-soluble and is therefore distributed throughout the body quickly. PCP is metabolized into PCHP, PPC and PCAA. The drug is metabolized 90% by oxidative hydroxylation in the liver during the first pass. Metabolites are glucuronidated and excreted in the urine. Nine percent of ingested PCP is excreted in its unchanged form. When smoked, some of the compound is broken down by heat into 1-phenylcyclohexene (PC) and piperidine. Pharmacology: The time taken before the effects of PCP manifest depends on the route of administration. The onset of action for inhalation occurs in 2–5 minutes, whereas the effects may take 15 to 60 minutes when ingested orally. Chemistry: PCP is an arylcyclohexylamine. Analogues Fewer than 30 different analogs of PCP were reported as being used on the street during the 1970s and 1980s, mainly in the United States. Only a few of these compounds were widely used, including rolicyclidine (PCPy), eticyclidine (PCE), and tenocyclidine (TCP). Less common analogs include 3-HO-PCP, 3-MeO-PCMo, and 3-MeO-PCP. Chemistry: The generalized structural motif required for PCP-like activity is derived from structure-activity relationship studies of PCP derivatives. All of these derivatives are likely to share some of their psychoactive effects with PCP itself, although a range of potencies and varying mixtures of anesthetic, dissociative, and stimulant effects are known, depending on the particular drug and its substituents. In some countries such as the United States, Australia, and New Zealand, all of these compounds would be considered controlled substance analogs of PCP under the Federal Analog Act and are hence illegal drugs if sold for human consumption. History: PCP was initially made in 1956 and brought to market as an anesthetic medication.
Its use in humans was disallowed in the United States in 1965 due to the high rates of side effects, while its use in animals was disallowed in 1978. Moreover, ketamine had been discovered and was better tolerated as an anesthetic. PCP is classified as a Schedule II drug in the United States. A number of derivatives of PCP have been sold for recreational and non-medical use. Society and culture: Regulation PCP is a Schedule II substance in the United States and its ACSCN is 7471. Its manufacturing quota for 2014 was 19 grams. It is a Schedule I drug under the Controlled Drugs and Substances Act in Canada, a List I drug of the Opium Law in the Netherlands, and a Class A substance in the United Kingdom. Society and culture: Frequency of use PCP began to emerge as a recreational drug in major cities in the United States in the 1960s. In 1978, People magazine and Mike Wallace of 60 Minutes called PCP the country's "number one" drug problem. Although recreational use of the drug had always been relatively low, it began declining significantly in the 1980s. In surveys, the number of high school students admitting to trying PCP at least once fell from 13% in 1979 to less than 3% in 1990.: 46–49 Cultural depictions Jean-Michel Basquiat depicted two angel dust users in his 1982 painting Dustheads.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nonconcatenative morphology** Nonconcatenative morphology: Nonconcatenative morphology, also called discontinuous morphology and introflection, is a form of word formation and inflection in which the root is modified and which does not involve stringing morphemes together sequentially. Types: Apophony (including Ablaut and Umlaut) In English, for example, while plurals are usually formed by adding the suffix -s, certain words use nonconcatenative processes for their plural forms: foot /fʊt/ → feet /fiːt/. Many irregular verbs form their past tenses, past participles, or both in this manner: freeze /ˈfriːz/ → froze /ˈfroʊz/, frozen /ˈfroʊzən/. This specific form of nonconcatenative morphology is known as base modification or ablaut, a form in which part of the root undergoes a phonological change without necessarily adding new phonological material. In traditional Indo-Europeanist usage, these changes are termed ablaut only when they result from vowel gradations in Proto-Indo-European. An example is the English stem s⌂ng, resulting in the four distinct words: sing-sang-song-sung.: 72  An example from German is the stem spr⌂ch "speak", which results in various distinct forms such as spricht-sprechen-sprach-gesprochen-Spruch.: 72  Changes such as foot/feet, on the other hand, which are due to the influence of a since-lost front vowel, are called umlaut or more specifically I-mutation. Types: Other forms of base modification include lengthening of a vowel, as in Hindi: /mər-/ "die" ↔ /maːr-/ "kill", or change in tone or stress: Chalcatongo Mixtec /káʔba/ "filth" ↔ /káʔbá/ "dirty"; English record /ˈrɛkərd/ (noun) ↔ /rɨˈkɔrd/ "to make a record". Consonantal apophony, such as the initial-consonant mutations in Celtic languages, also exists. Types: Transfixation Another form of nonconcatenative morphology is known as transfixation, in which vowel and consonant morphemes are interdigitated. For example, depending on the vowels, the Arabic consonantal root k-t-b can have different but semantically related meanings. Thus, [kataba] 'he wrote' and [kitaːb] 'book' both come from the root k-t-b. Words from k-t-b are formed by filling in the vowels, e.g. kitāb "book", kutub "books", kātib "writer", kuttāb "writers", kataba "he wrote", yaktubu "he writes", etc. In the analysis provided by McCarthy's account of nonconcatenative morphology, the consonantal root is assigned to one tier, and the vowel pattern to another. Extensive use of transfixation only occurs in Afro-Asiatic and some Nilo-Saharan languages (such as Lugbara) and is rare or unknown elsewhere. Types: Reduplication Yet another common type of nonconcatenative morphology is reduplication, a process in which all or part of the root is reduplicated. In Sakha, this process is used to form intensified adjectives: /k̠ɨhɨl/ "red" ↔ /k̠ɨp-k̠ɨhɨl/ "flaming red". Types: Truncation A final type of nonconcatenative morphology is variously referred to as truncation, deletion, or subtraction; the morpheme is sometimes called a disfix. This process removes phonological material from the root. In French, this process can be found in a small subset of plurals (although their spellings follow regular plural-marking rules): /ɔs/ "bone" ↔ /o/ "bones"; /œf/ "egg" ↔ /ø/ "eggs". Semitic languages: Nonconcatenative morphology is extremely well developed in the Semitic languages, in which it forms the basis of virtually all higher-level word formation.
That is especially pronounced in Arabic, which also uses it to form approximately 41% of plurals in what is often called the broken plural.
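A toy sketch of the root-and-pattern interdigitation described above, slotting the consonants of k-t-b into vowel templates. The "C"-slot template notation is an assumption made for this illustration, not a standard linguistic formalism:

```python
# Fill each 'C' slot in a vowel template with the next root consonant.
def interdigitate(root: str, template: str) -> str:
    consonants = iter(root.split("-"))
    return "".join(next(consonants) if ch == "C" else ch for ch in template)

root = "k-t-b"
for template, gloss in [("CaCaCa", "he wrote"),   # kataba
                        ("CiCaaC", "book"),       # kitaab
                        ("CuCuC", "books"),       # kutub
                        ("CaaCiC", "writer")]:    # kaatib
    print(f"{interdigitate(root, template):8} '{gloss}'")
```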
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Journal of Micromechanics and Microengineering** Journal of Micromechanics and Microengineering: The Journal of Micromechanics and Microengineering is a peer-reviewed scientific journal that covers all aspects of microelectromechanical systems, devices and structures, as well as micromechanics, microengineering, and microfabrication. The editor-in-chief is Weileun Fang (National Tsing Hua University). Abstracting and indexing: The journal had a 2021 impact factor of 2.282 according to the Journal Citation Reports. It is indexed in Inspec, PASCAL, Current Contents/Engineering Computing and Technology, Science Citation Index, Chemical Abstracts, Mass Spectrometry Bulletin, Engineering Index/Compendex, Applied Mechanics Reviews, and VINITI Database RAS.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hippocampal prosthesis** Hippocampal prosthesis: A hippocampus prosthesis is a type of cognitive prosthesis (a prosthesis implanted into the nervous system in order to improve or replace the function of damaged brain tissue). Prosthetic devices replace normal function of a damaged body part; this can be simply a structural replacement (e.g. reconstructive surgery or glass eye) or a rudimentary, functional replacement (e.g. a pegleg or hook). Hippocampal prosthesis: However, prosthetics involving the brain have some special categories and requirements. "Input" prosthetics, such as retinal or cochlear implant, supply signals to the brain that the patient eventually learns to interpret as sight or sound. "Output" prosthetics use brain signals to drive a bionic arm, hand or computer device, and require considerable training during which the patient learns to generate the desired action via their thoughts. Both of these types of prosthetics rely on the plasticity of the brain to adapt to the requirement of the prosthesis, thus allowing the user to "learn" the use of his new body part. A cognitive or "brain-to-brain" prosthesis involves neither learned input nor output signals, but the native signals used normally by the area of the brain to be replaced (or supported). Thus, such a device must be able to fully replace the function of a small section of the nervous system—using that section's normal mode of operation. In order to achieve this, developers require a deep understanding of the functioning of the nervous system. The scope of design must include a reliable mathematical model as well as the technology in order to properly manufacture and install a cognitive prosthesis. The primary goal of an artificial hippocampus is to provide a cure for Alzheimer's disease and other hippocampus—related problems. To do so, the prosthesis has to be able to receive information directly from the brain, analyze the information and give an appropriate output to the cerebral cortex; in other words, it must behave just like a natural hippocampus. At the same time, the artificial organ must be completely autonomous, since any exterior power source will greatly increase the risk of infection. Hippocampus: Role The hippocampus is part of the human limbic system, which interacts with the neocortex and other parts of the brain to produce emotions. As a part of the limbic system, the hippocampus plays its part in the formation of emotion in addition to its other roles, such as consolidation of new memories, navigation, and spatial orientation. The hippocampus is responsible for the formation of long term recognition memories. In other words, this is the part of the brain that allows us to associate a face with a name. Because of its close relationship with memory formation, damage to the hippocampus is closely related to Alzheimer's disease. Hippocampus: Anatomy The hippocampus is a bilateral structure, situated under the neocortex. Each hippocampus is "composed of several different subsystem[s] that form a closed feedback loop, with input from the neocortex entering via the entorhinal cortex, propagating through the intrinsic subregions of the hippocampus and returning to the neocortex." In an electronic sense, the hippocampus is composed of a slice of parallel circuits. Essential requirements: Biocompatibility Since the prosthesis will be permanently implanted inside the brain, long term biocompatibility is required. 
We must also take into account the tendency of supporting brain cells like astrocytes to encapsulate the implant (a natural response that protects neurons), thus impairing its function. Bio-mimetic Being biomimetic means that the implant must be able to fulfill the properties of a real biological neuron. To do so, we must have an in-depth understanding of brain behavior on which to build a solid mathematical model. The field of computational neuroscience has made headway in this endeavor. Essential requirements: First, we must take into account that, like most biological processes, the behavior of neurons is highly nonlinear and depends on many factors: input frequency patterns, etc. Also, a good model must take into account the fact that the expression of a single nerve cell is negligible, since the processes are carried out by groups of neurons interacting in networks. Once installed, the device must assume all (or at least most) of the function of the damaged hippocampus for a prolonged period of time. First, the artificial neurons must be able to work together in networks just like real neurons. Then, they must be able to form working and effective synaptic connections with the existing neurons of the brain; therefore a model for the silicon/neuron interface will be required. Essential requirements: Size The implant must be small enough to be implantable while minimizing collateral damage during and after the implantation. Bidirectional communication In order to fully assume the function of the damaged hippocampus, the prosthesis must be able to communicate with the existing tissue in a bidirectional manner. In other words, the implant must be able to receive information from the brain and give appropriate and comprehensible feedback to the surrounding nerve cells. Personalized The structural and functional characteristics of the brain vary greatly between individuals; therefore any neural implant has to be specific to each individual, which requires a precise model of the hippocampus and the use of advanced brain imagery to determine individual variance. Surgical requirement Since the prosthesis will be installed inside the brain, the operation itself will be much like a tumor removal operation. Although collateral damage will be inevitable, the effect on the patient will be minimal. Model: "In order to incorporate the nonlinear dynamics of biological neurons into neuron models to develop a prosthesis, it is first necessary to measure them accurately. We have developed and applied methods for quantifying the nonlinear dynamics of hippocampal neurons (Berger et al., 1988a,b, 1991, 1992, 1994; Dalal et al., 1997) using principles of nonlinear systems theory (Lee and Schetzen, 1965; Krausz, 1975; P. Z. Marmarelis and Marmarelis, 1978; Rugh, 1981; Sclabassi et al., 1988). In this approach, properties of neurons are assessed experimentally by applying a random interval train of electrical impulses as an input and electrophysiologically recording the evoked output of the target neuron during stimulation (figure 12.2A). The input train consists of a series of impulses (as many as 4064), with interimpulse intervals varying according to a Poisson process having a mean of 500 ms and a range of 0.2–5000 ms. Thus, the input is "broadband" and stimulates the neuron over most of its operating range; that is, the statistical properties of the random train are highly consistent with the known physiological properties of hippocampal neurons.
Nonlinear response properties are expressed in terms of the relation between progressively higher-order temporal properties of a sequence of input events and the probability of neuronal output, and are modeled as the kernels of a functional power series." Technology involved: Imaging Imaging technologies such as EEG, MEG, fMRI and other types of imaging are essential to the installation of the implant, which requires high precision in order to minimize collateral damage (since the hippocampus is situated inside the cortex), as well as to the proper function of the device. Silicon/neuron interface A silicon/neuron interface will be needed for the proper interaction of the silicon neurons of the prosthesis and the biological neurons of the brain. Technology involved: Neuron network processor In the brain, tasks are carried out by groups of interconnected neuronal networks rather than single cells, which means that any prosthesis must be able to simulate this network behavior. To do so, we will need a high number and density of silicon neurons to produce an effective prosthesis; therefore, a High-density Hippocampal Neuron Network Processor will be required in order for the prosthesis to carry out the task of a biological hippocampus. In addition, a neuron/silicon interface will be essential to the bidirectional communication of the implanted prosthesis. The choice of material and the design must ensure long-term viability and biocompatibility while ensuring the density and the specificity of the interconnections. Technology involved: Power supply Appropriate power supply is still a major issue for any neural implant. Because the prostheses are implanted inside the brain, long-term biocompatibility aside, the power supply will have several requirements. First, the power supply must be self-recharging. Unlike other prostheses, infection is a much greater issue for a neural implant, due to the sensitivity of the brain; therefore an external power source is not viable. Because the brain is also highly heat-sensitive, the power supply and the device itself must not generate too much heat, to avoid disrupting brain function. Prosthetic neuronal memory silicon chips: A prosthetic neuronal memory silicon chip is a device that imitates the brain's process of creating long-term memories. A prototype for this device was designed by Theodore Berger, a biomedical engineer and neurologist at the University of Southern California. Berger started to work on the design in the early 1990s. He partnered with research colleagues who have been able to implant electrodes into rats and monkeys to test restoration of memory function. Recent work shows that the system can form long-term memories in many different behavioral situations. Berger and colleagues hope to eventually use these chips as electronic implants for humans whose brains suffer from diseases, such as Alzheimer's, that disrupt neuronal networks. Prosthetic neuronal memory silicon chips: Technology and medical application To begin making a brain prosthesis, Berger and his collaborator Vasilis Marmarelis, a biomedical engineer at USC, worked with hippocampal slices from rats. Since they knew that neuronal signals travel from one side of the hippocampus to the other, the researchers sent random pulses into the hippocampus, recorded the signals at specific locales to see how they were changed, and then derived equations representing the changes. They then programmed those equations into the computer chips.
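A minimal sketch of the kind of functional power series (Volterra-type) model quoted above: the response to a random impulse train is expressed through first- and second-order temporal kernels. The kernels, the train statistics, and the discrete-time setting here are toy assumptions for illustration, not Berger's fitted model:

```python
# Toy discrete-time Volterra-style model: y[n] depends on past impulses
# through a first-order kernel h1 and a second-order kernel h2.
import numpy as np

rng = np.random.default_rng(0)
n, lags = 500, 50
x = (rng.random(n) < 0.02).astype(float)  # sparse random impulse train

# Illustrative kernels (not fitted to data): decaying impulse response h1,
# pairwise interaction kernel h2 for nonlinear effects between past impulses.
h1 = 0.8 * np.exp(-np.arange(lags) / 10.0)
h2 = 0.1 * np.outer(np.exp(-np.arange(lags) / 5.0),
                    np.exp(-np.arange(lags) / 5.0))

y = np.zeros(n)
for t in range(n):
    past = x[max(0, t - lags + 1): t + 1][::-1]      # x[t], x[t-1], ...
    k = len(past)
    y[t] = h1[:k] @ past + past @ h2[:k, :k] @ past  # 1st- + 2nd-order terms

print("peak response:", round(y.max(), 3))
```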
Prosthetic neuronal memory silicon chips: Next, they had to determine whether a chip could be used as a prosthesis, or implant, for a damaged region in the hippocampus. To do this, they had to figure out whether they could bypass a central component of the pathway in the brain slices. They put electrodes in the region, which carried electrical pulses to an external chip. The chip then executed the transformations that are normally carried out in the hippocampus, and other electrodes sent the signals back to the slice of brain. Memory codes: In 1996, Dr. Sam A. Deadwyler of Wake Forest Baptist Medical Center in Winston-Salem, NC, studied the activity patterns of collections of hippocampal neurons while rats performed a task requiring short-term memory. These 'ensembles' or collections of neurons fired in different patterns in both time and 'space' (in this case, space referred to different neurons distributed throughout the hippocampus) depending on the type of behavior required in the task. More importantly, Deadwyler and his colleagues could identify patterns that clearly distinguished between the various stimuli in the task, including position (similar to place cells), behavioral responses, and what part of the task was occurring. Analyses based on the neural ensemble activity alone, without looking at those variables, could identify and even 'predict' some of those variables before they occurred. In fact, the patterns would even identify when the rat was about to make an error in the task. Over the following ten years, Deadwyler's laboratory refined the analysis to identify the 'codes' and improved the ability to predict correct and error responses, even to the point of being able to have untrained rats perform the memory task using hippocampal stimulation with codes obtained from fully trained rats. The discovery of the memory codes in the hippocampus led Deadwyler to join efforts with Berger for future studies in which Berger's team would develop models of memory function in the hippocampus, and Deadwyler's team would test the models in rats and monkeys, and eventually move into human studies. Memory codes: Trials on rats and monkeys To transition to awake, behaving animals, Berger partnered with Deadwyler and Dr. Robert E. Hampson of Wake Forest to test a prototype of the memory prosthetic, connected to rat and monkey brains via electrodes, that analyzes information just like the actual hippocampus. The prosthetic model allowed even a damaged hippocampus to generate new memories. In one demonstration, Deadwyler and Hampson impaired the rats' ability to form long-term memories by using pharmacological agents. These disrupted the neural circuitry that transfers messages between two subregions of the hippocampus. These subregions, CA1 and CA3, interact to create long-term memories. The rats were unable to remember which lever they needed to pull to obtain the reward. The researchers then developed an artificial hippocampus that could duplicate the pattern of CA3–CA1 interactions by analyzing the neural spikes in the cells with an electrode array, and then playing back the same pattern on the same array. After stimulating the rat hippocampi using the mathematical model of the prosthesis, their ability to identify the correct lever to pull improved dramatically.
This artificial hippocampus played a significant role in the developmental stage of a memory prosthetic, as it went on to show that if a prosthetic device and its associated electrodes were implanted in animals with a malfunctioning hippocampus, the device could potentially restore memory capability to that of normal rats. Memory codes: Goals for the future The research teams at USC and Wake Forest are working to make this system applicable to humans whose brains suffer damage from Alzheimer's, stroke, or injury, in which the disruption of neural networks often stops long-term memories from forming. The system designed by Berger and implemented by Deadwyler and Hampson allows the signal processing to take place that would occur naturally in undamaged neurons. Ultimately, they hope to restore the ability to create long-term memories by implanting chips such as these into the brain. Recent development: Theodore Berger and his colleagues at the University of Southern California in Los Angeles developed a working hippocampal prosthesis that passed the live-tissue test in slices of brain tissue in 2004. In 2011, in collaboration with Drs. Sam A. Deadwyler and Robert E. Hampson at Wake Forest Baptist Medical Center, the team successfully tested a proof-of-concept hippocampal prosthesis in awake, behaving rats. The prosthesis took the form of multisite electrodes positioned to record from both the input and output "sides" of the damaged hippocampus; the input was gathered and analyzed by external computation chips, appropriate feedback was computed and then used to stimulate the appropriate output pattern in the brain, so that the prosthesis functioned like a real hippocampus. In 2012, the team tested a further implementation in the macaque prefrontal cortex, further developing the neural prosthesis technology. In 2013, Hampson et al. successfully tested a hippocampal prosthesis on non-human primates. While the device does not yet consist of a fully implantable "chip," these tests, from rat to monkey, demonstrate the effectiveness of the device as a neural prosthetic and support application to human trials. Recent development: Proof of concept for a human hippocampal prosthetic In 2018, a team led by Robert E. Hampson at Wake Forest Baptist Medical Center, and including Berger and Deadwyler, became the first to demonstrate the effectiveness of the prosthetic model in human patients. The subjects underwent implantation of electrodes in the brain at Wake Forest as part of a medical diagnostic procedure for epilepsy. While in the hospital, patients with electrodes in the hippocampus volunteered to perform a memory task on a computer while hippocampal neural activity was recorded, in order for Berger and his team at USC to customize the hippocampal prosthetic model for each patient. With the model in hand, the Wake Forest team was able to demonstrate up to 37% improvement in memory function in patients with memory impaired by disease. The improvement was demonstrated for memories up to 75 minutes after stimulation by the hippocampal prosthetic model. As of 2018, studies are planned to test memory codes for additional attributes and features of items to be remembered, as well as duration of memory facilitation in excess of 24 hours.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Minkowski's first inequality for convex bodies** Minkowski's first inequality for convex bodies: In mathematics, Minkowski's first inequality for convex bodies is a geometrical result due to the German mathematician Hermann Minkowski. The inequality is closely related to the Brunn–Minkowski inequality and the isoperimetric inequality. Statement of the inequality: Let K and L be two n-dimensional convex bodies in n-dimensional Euclidean space R^n. Define a quantity $V_1(K, L)$ by $V_1(K,L) = \frac{1}{n} \lim_{\varepsilon \downarrow 0} \frac{V(K + \varepsilon L) - V(K)}{\varepsilon}$, where V denotes the n-dimensional Lebesgue measure and + denotes the Minkowski sum. Then $V_1(K,L) \geq V(K)^{(n-1)/n} \, V(L)^{1/n}$, with equality if and only if K and L are homothetic, i.e. are equal up to translation and dilation. Remarks: $V_1$ is just one example of a class of quantities known as mixed volumes. If L is the n-dimensional unit ball B, then $n V_1(K, B)$ is the (n − 1)-dimensional surface measure of K, denoted S(K). Connection to other inequalities: The Brunn–Minkowski inequality One can show that the Brunn–Minkowski inequality for convex bodies in R^n implies Minkowski's first inequality for convex bodies in R^n, and that equality in the Brunn–Minkowski inequality implies equality in Minkowski's first inequality. The isoperimetric inequality By taking L = B, the n-dimensional unit ball, in Minkowski's first inequality for convex bodies, one obtains the isoperimetric inequality for convex bodies in R^n: if K is a convex body in R^n, then $\left(\frac{V(K)}{V(B)}\right)^{1/n} \leq \left(\frac{S(K)}{S(B)}\right)^{1/(n-1)}$, with equality if and only if K is a ball of some radius.
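The step from Minkowski's first inequality to the isoperimetric inequality is short, and the standard computation is worth spelling out. The sketch below uses the two facts quoted above: $n V_1(K,B) = S(K)$, and $S(B) = n V(B)$ for the unit ball (the latter follows by applying the first fact with K = B, since $V_1(B,B) = V(B)$).

```latex
% Sketch: isoperimetric inequality from Minkowski's first inequality.
% Ingredients: n V_1(K,B) = S(K), and S(B) = n V(B) for the unit ball.
\begin{align*}
  \frac{S(K)}{n} = V_1(K,B)
      &\ge V(K)^{(n-1)/n}\, V(B)^{1/n}
      && \text{(Minkowski with } L = B\text{)} \\
  \frac{S(K)}{S(B)} = \frac{S(K)}{n\,V(B)}
      &\ge \frac{V(K)^{(n-1)/n}\, V(B)^{1/n}}{V(B)}
       = \left(\frac{V(K)}{V(B)}\right)^{(n-1)/n}.
\end{align*}
% Raising both sides to the power 1/(n-1) gives
% (V(K)/V(B))^{1/n} <= (S(K)/S(B))^{1/(n-1)}.
```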
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Indian Journal of Dermatology** Indian Journal of Dermatology: The Indian Journal of Dermatology is a bimonthly peer-reviewed open-access medical journal published on behalf of the Indian Association of Dermatologists, Venereologists and Leprologists, West Bengal Branch. The journal covers clinical and experimental dermatology, cutaneous biology, dermatological therapeutics, cosmetic dermatology, dermatopathology, and dermatosurgery. It was established in 1955. Abstracting and indexing: The journal is abstracted and indexed in:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spermidine dehydrogenase** Spermidine dehydrogenase: In enzymology, a spermidine dehydrogenase (EC 1.5.99.6) is an enzyme that catalyzes the chemical reaction spermidine + acceptor + H2O ⇌ propane-1,3-diamine + 4-aminobutanal + reduced acceptor. The three substrates of this enzyme are spermidine, an acceptor, and H2O, whereas its three products are propane-1,3-diamine, 4-aminobutanal, and the reduced acceptor. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-NH group of donors with other acceptors. The systematic name of this enzyme class is spermidine:acceptor oxidoreductase. This enzyme is also called spermidine:(acceptor) oxidoreductase. It participates in the urea cycle, the metabolism of amino groups, and beta-alanine metabolism. It has two cofactors: FAD and heme.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Structural acoustics** Structural acoustics: Structural acoustics is the study of the mechanical waves in structures and how they interact with and radiate into adjacent media. The field of structural acoustics is often referred to as vibroacoustics in Europe and Asia. People who work in the field of structural acoustics are known as structural acousticians. The field of structural acoustics is closely related to a number of other fields of acoustics, including noise, transduction, underwater acoustics, and physical acoustics. Vibrations in structures: Compressional and shear waves (isotropic, homogeneous material) Compressional waves (often referred to as longitudinal waves) expand and contract in the same (or opposite) direction as the wave motion. The wave equation dictates the motion of the wave in the x direction: $\frac{\partial^2 u}{\partial x^2} = \frac{1}{c_L^2} \frac{\partial^2 u}{\partial t^2}$, where $u$ is the displacement and $c_L$ is the longitudinal wave speed. This has the same form as the acoustic wave equation in one dimension. $c_L$ is determined by properties (bulk modulus $B$ and density $\rho$) of the structure according to $c_L = \sqrt{B/\rho}$. When two dimensions of the structure are small with respect to the wavelength (commonly called a beam), the wave speed is dictated by Young's modulus $E$ instead of $B$, and the waves are consequently slower than in infinite media. Shear waves occur due to the shear stiffness and follow a similar equation, but with the displacement occurring in the transverse direction, perpendicular to the wave motion: $\frac{\partial^2 w}{\partial x^2} = \frac{1}{c_s^2} \frac{\partial^2 w}{\partial t^2}$. The shear wave speed is governed by the shear modulus $G$, which is less than $E$ and $B$, making shear waves slower than longitudinal waves. Bending waves in beams and plates Most sound radiation is caused by bending (or flexural) waves, which deform the structure transversely as they propagate. Bending waves are more complicated than compressional or shear waves and depend on material properties as well as geometric properties. They are also dispersive, since different frequencies travel at different speeds. Modeling vibrations Finite element analysis can be used to predict the vibration of complex structures. A finite element computer program will assemble the mass, stiffness, and damping matrices based on the element geometries and material properties, and solve for the vibration response based on the loads applied: $[-\omega^2 M + j\omega B + (1 + j\eta)K]\,d = F$. Sound-structure interaction: Fluid-structure interaction When a vibrating structure is in contact with a fluid, the normal particle velocities at the interface must be conserved (i.e. be equivalent). This causes some of the energy from the structure to escape into the fluid, some of which radiates away as sound and some of which stays near the structure and does not radiate away. For most engineering applications, the numerical simulation of fluid-structure interactions involved in vibro-acoustics may be achieved by coupling the finite element method and the boundary element method.
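As a concrete illustration of the assembled system above, the following sketch solves $[-\omega^2 M + j\omega B + (1+j\eta)K]\,d = F$ over a frequency sweep for a two-degree-of-freedom system. The matrices, the proportional damping model, and the loss factor are made-up illustration values under stated assumptions, not the output of any particular finite element package.

```python
# Minimal sketch of a harmonic forced-response solve for the assembled
# finite element system [-w^2 M + j w B + (1 + j eta) K] d = F.
# All numbers below are invented for illustration.
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])          # mass matrix (kg)
K = np.array([[ 4e4, -2e4],
              [-2e4,  2e4]])        # stiffness matrix (N/m)
B = 0.002 * K                       # viscous damping matrix (assumed proportional)
eta = 0.01                          # structural loss factor (assumed)
F = np.array([1.0, 0.0])            # unit harmonic force on DOF 1 (N)

freqs = np.linspace(1.0, 60.0, 600)  # frequency sweep in Hz
resp = []
for f in freqs:
    w = 2 * np.pi * f
    A = -w**2 * M + 1j * w * B + (1 + 1j * eta) * K   # dynamic stiffness
    d = np.linalg.solve(A, F)        # complex displacement amplitudes
    resp.append(np.abs(d[0]))

peak = freqs[int(np.argmax(resp))]
print(f"largest response of DOF 1 near {peak:.1f} Hz")
```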
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PDF Expert** PDF Expert: PDF Expert is a PDF editing app for iPhone, iPad and Mac. Overview: The app allows a user to read, annotate and edit PDFs, change text and images, fill in forms and sign contracts. PDF Expert was initially launched for iOS in 2010 for the first iPad and is now supported on the iPad, iPhone, and Mac.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trip valve gear** Trip valve gear: Trip valve mechanisms are a class of steam engine valve gear developed to improve efficiency. The trip mechanism allows the inlet valve to be closed rapidly, giving a short, sharp cut-off. The valve itself can be a drop valve or a Corliss valve. Trip valve gear: Trip valve gear was applied to larger stationary engines. It was not used in transport applications, as it was not suitable for high speed. The trip point of the valve mechanism, and therefore the cut-off, would be adjusted either manually or automatically by the governor. The valve is opened by the mechanical valve gear mechanism, and when the trip gear trigger releases the mechanism the valve is snapped closed, usually by a spring acting against a dashpot. Advantages of a Trip Valve Gear: When using the Corliss and other trip valve gears, the possible advantages are: By allowing the inlet valves to open quickly, full boiler pressure can be established in the cylinder early in the stroke. Because the inlet valve closes quickly, the area of the work diagram is increased: the diagram has a sharp corner at the cut-off point rather than a rounded curve. A rounded curve at this point indicates wire-drawing, in which the pressure falls off gradually because of the gradual closure of the valve. Because this gear was created for engines designed to be regulated by varying admission, it has the advantage of keeping terminal pressure low. Such engines secure this regulation without creating irregularities in the exhaust, because they are always multiple-valve engines. Adjusting these independent valves is straightforward if different cut-offs at the two ends of the cylinder are wanted. The small amount of motion of the steam valves, and their period of rest after they have closed, helps to reduce the friction in the valve gear, which in turn decreases the loss of power. Disadvantages of a Trip Valve Gear: The trip valve gear, which is almost always a multiple valve gear, has the following potential disadvantages: These gears often have many complicated parts. Engines built with such complicated gears can be expensive. The limitation on rotative speed, i.e. on the number of revolutions, imposed by the need for engaging catches and valves can be a drawback of these valve gears. The valve must be released (tripped) before it reaches its greatest point of opening, which limits the available range of cut-off.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Piste (fencing)** Piste (fencing): In modern fencing, the piste or strip is the playing area. Regulations require the piste to be 14 metres long and 1.5 metres wide. The last two metres on each end are hash-marked to warn a fencer before they back off the end of the strip, after which is a 1.5 to 2 metre runoff. The piste is also marked at the centre and at the "en garde" lines, located two metres either side of the centre line. Piste (fencing): Retreating off the end of the strip with both feet results in a touch awarded to the opponent. Going off the side of the strip with one or both feet halts the fencing action, and is penalized by allowing the opponent to advance one metre before the fencers are replaced on guard. If the offending fencer would then be replaced behind the rear limit of the strip because of this, a touch is awarded to the opponent. If play is halted for any reason other than stepping off the side of the piste, a fencer may never be replaced on guard behind the rear line. Piste (fencing): After each touch, fencers begin again at the en garde lines, 4 metres apart, or if these lines are not available, roughly at a position where their blades can nearly touch when fully extended. If no touch is scored but play was halted, the fencers come en garde at the position where they were stopped. Most pistes at fencing tournaments are "grounded" to the scoring box, so that any hits a fencer makes against the piste are not registered as touches. This is to prevent accidental touches to the piste from registering as off target and resulting in a halt. Types of piste: There are three different types of piste: Rubber conductive piste: made from conductive material with a rubber back; lightweight, approximately 25 kg. Aluminium section piste: made from sections of rolled aluminium which are bolted together; weighs approximately 300 kg. Metallic piste: made from woven metal with no backing; weighs approximately 70 kg.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BeOS R5.1d0** BeOS R5.1d0: BeOS R5.1d0 or Dano/EXP (also known as EXP, Dano, and incorrectly as Dan0/EXP or Dan0) is the build codename and most commonly used name to refer to a leaked R5.1 prerelease of the Be Operating System. Dano's build date is 15 November 2001, the day of Be Inc.'s closure. Dano features an improved network stack called BONE, initial support for 802.11b wireless networking, some 3D acceleration-capable graphics drivers, a redesigned graphical user interface, a replacement USB subsystem with USB mass storage support, and other improvements. Many of these features had been promised for BeOS R5 a year earlier and not delivered. BeOS R5.1d0: In a potential move towards releasing the system as open source software, many proprietary items had been removed: the MP3 encoder was replaced with LAME, and OpenSSL replaced the RSA Encryption Engine in the NetPositive web browser. The ZETA operating system was initially based on the Dano codebase, though it has since evolved.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Antisense RNA** Antisense RNA: Antisense RNA (asRNA), also referred to as antisense transcript, natural antisense transcript (NAT) or antisense oligonucleotide, is a single-stranded RNA that is complementary to a protein-coding messenger RNA (mRNA) with which it hybridizes, thereby blocking its translation into protein. The asRNAs (which occur naturally) have been found in both prokaryotes and eukaryotes, and can be classified into short (<200 nucleotides) and long (>200 nucleotides) non-coding RNAs (ncRNAs). The primary function of asRNA is regulating gene expression. asRNAs may also be produced synthetically and have found widespread use as research tools for gene knockdown. They may also have therapeutic applications. Discovery and history in drug development: Some of the earliest asRNAs were discovered while investigating functional proteins. An example was the micF asRNA. During the characterization of the outer membrane porin ompC in E. coli, some of the observed ompC promoter clones were found to be capable of repressing the expression of other membrane porins such as ompF. The region responsible for this repression function was found to be a 300 base-pair locus upstream of the ompC promoter. This 300 base-pair region is 70% homologous in sequence to the 5' end of the ompF mRNA, and thus the transcript of this locus is complementary to the ompF mRNA. This transcript, denoted micF, was later found to be an asRNA of ompF, capable of downregulating the expression of ompF under stress by forming a duplex with the ompF mRNA, which induces the degradation of the ompF mRNA. Unlike micF RNA, which was discovered by accident, the majority of asRNAs were discovered by genome-wide searches for small regulatory RNAs and by transcriptome analysis. Conventionally, the first step involves computational predictions based on known characteristics of asRNAs. During computational searches, the encoding regions are excluded. Regions that are predicted to have conserved RNA structures and to act as orphan promoters and Rho-independent terminators are given preference during analysis. Because computational searches focus on intergenic regions, asRNAs that are transcribed from the opposite strand of an encoding gene are likely to be missed by this method. To detect asRNAs transcribed from encoding regions, oligonucleotide microarrays can be used. In this method, one or both strands of encoding genes can be used as probes. In addition to computational searches and microarrays, some asRNAs were discovered by sequencing cDNA clones as well as by mapping promoter elements. Although the approaches mentioned above suggested many possible asRNAs, only a few were proven to be actual asRNAs by further functional tests. To minimize the number of false positives, newer approaches have focused on strand-specific transcription, chromatin-binding noncoding RNAs, and single-cell studies. The idea of asRNAs as drugs started in 1978, when Zamecnik and Stephenson found an antisense oligonucleotide to the viral RNA of Rous sarcoma virus that was capable of inhibiting viral replication and protein synthesis. Since then, much effort has been devoted to developing asRNAs as drug candidates. In 1998, the first asRNA drug, fomivirsen, was approved by the FDA. Fomivirsen, a 21-nucleotide oligonucleotide, was developed to treat cytomegalovirus retinitis in patients with AIDS.
It works by targeting the transcribed mRNA of the virus, thereby inhibiting replication of cytomegalovirus. Although fomivirsen was discontinued in 2004 because of the loss of its market, it served as a successful and inspiring example of using asRNAs as drug candidates. Another example of using an asRNA as a therapeutic agent is mipomersen, which was approved by the FDA in 2013. Mipomersen was developed to manage the level of low-density lipoprotein (LDL) cholesterol in patients with homozygous familial hypercholesterolemia (HoFH), a rare autosomal dominant genetic condition. Because of the high levels of total cholesterol (650–1000 mg/dL) and LDL cholesterol (above 600 mg/dL) in HoFH, patients with HoFH have a high risk of coronary heart disease. Because the protein apo-B-100 is required to produce very low-density lipoprotein (VLDL) and LDL, mipomersen is complementary to the mRNA of apo-B-100 and targets it for RNase H-dependent degradation. Ultimately, mipomersen is able to reduce the level of LDL. Examples across species: The initial asRNAs discovered were in prokaryotes, including plasmids, bacteriophages and bacteria. For example, in plasmid ColE1, the asRNA termed RNA I plays an important role in determining the plasmid copy number by controlling replication. Replication of ColE1 relies on the transcription of a primer RNA named RNA II. Once RNA II is transcribed, it hybridizes to its DNA template and is later cleaved by RNase H. In the presence of the asRNA RNA I, RNA I and RNA II form a duplex, which introduces a conformational change in RNA II. Consequently, RNA II cannot hybridize with its DNA template, which results in a low copy number of ColE1. In bacteriophage P22, the asRNA sar helps regulate the choice between the lytic and lysogenic cycles by controlling the expression of Ant. Besides being expressed in prokaryotes, asRNAs have also been discovered in plants. The best-described example of asRNA regulation in plants concerns the Flowering Locus C (FLC) gene. The FLC gene in Arabidopsis thaliana encodes a transcription factor that prevents the expression of a range of genes that induce floral transition. In cold environments, the asRNA of the FLC gene, denoted COOLAIR, is expressed and inhibits the expression of FLC via chromatin modification, which consequently allows flowering. Another well-studied example is the DOG1 (Delay of Germination 1) gene. Its expression level is negatively regulated by its antisense transcript (asDOG1 or 1GOD) acting in cis. In mammalian cells, a typical example of asRNA regulation is X chromosome inactivation. Xist, an asRNA, can recruit polycomb repressive complex 2 (PRC2), which results in heterochromatinization of the X chromosome. Classification: Antisense RNAs can be classified in different ways. In terms of regulatory mechanisms, some authors group asRNAs by RNA-DNA interactions, RNA-RNA interactions (in either the nucleus or the cytoplasm), and RNA-protein interactions (epigenetic). Antisense RNAs can also be categorized by the type of promoter that initiates their expression: independent promoters, shared bidirectional promoters, or cryptic promoters. In terms of length, although asRNAs are in general classified as lncRNAs, there are short asRNAs of less than 200 nucleotides. Because the regulatory mechanisms of asRNAs have been found to be species-specific, asRNAs can also be classified by species. One of the most common ways of classifying asRNAs is by where they are transcribed relative to their target genes: cis-acting and trans-acting.
Classification: Cis-acting Cis-acting asRNAs are transcribed from the opposite strand of the target gene at the target gene's locus. They often show a high degree of, or complete, complementarity with the target gene. If a cis-acting asRNA regulates gene expression by targeting mRNA, it can only target that individual mRNA. Upon interaction with the target mRNA, a cis-acting asRNA can either block ribosome binding or recruit RNase to degrade the target mRNA. Consequently, the function of these cis-acting asRNAs is to repress translation of their target mRNAs. Besides cis-acting asRNAs that target mRNAs, there are cis-acting epigenetic silencers and activators. Antisense RNA has been shown to repress the translation of the LINE1-ORF2 domain of Entamoeba histolytica, although it has not yet been confirmed whether this action is cis- or trans-acting. In terms of epigenetic modification, cis-acting refers to asRNAs that regulate epigenetic changes around the loci where they are transcribed. Instead of targeting individual mRNAs, these cis-acting epigenetic regulators can recruit chromatin-modifying enzymes, which can exert effects on both the transcription locus and neighboring genes. Classification: Trans-acting Trans-acting asRNAs are transcribed from loci that are distal to their target genes. In contrast to cis-acting asRNAs, they display a low degree of complementarity with the target gene but can be longer than cis-acting asRNAs. They can also target multiple loci. Because of these properties, trans-acting asRNAs form less stable complexes with their target transcripts and sometimes require the aid of an RNA chaperone protein such as Hfq to exert their functions. Due to the complexity of trans-acting asRNAs, they are currently considered less druggable targets. Classification: Function Epigenetic regulation Many asRNAs exert an inhibitory effect on transcription initiation via epigenetic modifications. Classification: DNA methylation DNA methylation can result in long-term downregulation of specific genes. Repression of functional proteins via asRNA-induced DNA methylation has been found in several human diseases. In a class of alpha-thalassemia, a type of blood disorder with reduced levels of hemoglobin leading to insufficient oxygen in the tissues, the hemoglobin alpha-1 gene (HBA1) is downregulated by an abnormal transcript of the putative RNA-binding protein Luc7-like gene (LUC7L), which serves as an asRNA to HBA1 and induces methylation of the HBA1 promoter. Another example is the silencing of the tumor suppressor gene p15INK4b, also called CDKN2B, in acute lymphoblastic leukemia and acute myeloid leukemia. The asRNA responsible for this silencing effect is the antisense non-coding RNA in the INK4 locus (ANRIL), which is expressed from the same locus that encodes p15INK4b. Classification: Histone modification In eukaryotic cells, DNA is tightly packed by histones. Modifications of histones can change their interactions with DNA, which can further induce changes in gene expression. The biological consequences of histone methylation are context-dependent. In general, histone methylation leads to gene repression, but gene activation can also be achieved. Evidence has shown that histone methylation can be induced by asRNAs. For instance, ANRIL, in addition to its ability to induce DNA methylation, can also repress CDKN2A, the neighboring gene of CDKN2B, by recruiting polycomb repressive complex 2 (PRC2), which leads to histone methylation (H3K27me).
Another classic example is X chromosome inactivation by Xist. ANRIL-induced epigenetic modification is an example of cis-acting epigenetic regulation. In addition, antisense RNA-induced chromatin modification can also be trans-acting. For example, in mammals, the asRNA HOTAIR is transcribed from the homeobox C (HOXC) locus but recruits PRC2 to HOXD, where it deposits H3K27 methylation and silences HOXD. HOTAIR is highly expressed in primary breast tumors. Classification: Co-transcriptional regulation Epigenetic regulation such as DNA methylation and histone methylation can repress gene expression by inhibiting the initiation of transcription. Sometimes, however, gene repression can be achieved by prematurely terminating or slowing down the transcription process, and asRNAs can be involved at this level of gene regulation. For example, in bacterial or eukaryotic cells where complex RNA polymerases are present, bidirectional transcription at the same locus can lead to polymerase collision and result in the termination of transcription. Even when polymerase collision is unlikely because transcription is weak, polymerase pausing can still occur, blocking elongation and leading to gene repression. One example is the repression of the IME4 gene by its asRNA RME2. Another way of affecting transcription co-transcriptionally is by blocking splicing. One classic example in humans is the zinc-finger E-box binding homeobox 2 gene (ZEB2), which encodes a transcriptional repressor of E-cadherin. Efficient translation of ZEB2 mRNA requires the presence of an internal ribosome entry site (IRES) in an intron of the mRNA at the 5' end. When the asRNA of ZEB2 is expressed, it can mask the splice site and maintain the IRES in the mRNA, which results in efficient synthesis of ZEB2. Lastly, depending on the level of asRNA expression, different isoforms of the sense transcript can be produced. Therefore, asRNA-dependent regulation is not limited to an on/off mechanism; rather, it provides a fine-tuned control system. Classification: Post-transcriptional regulation Direct post-transcriptional modulation by asRNAs refers to mRNAs being targeted directly by asRNAs, so that translation is affected. Some characteristics of this type of asRNA are described under the cis- and trans-acting asRNAs above. This mechanism is relatively fast because both the target mRNA and its asRNA need to be present simultaneously in the same cell. As described for cis-acting asRNAs, mRNA-asRNA pairing can result in blockage of ribosome entry and RNase H-dependent degradation. Overall, mRNA-targeting asRNAs can either activate or inhibit translation of the sense mRNAs, with the inhibitory effect being the most abundant. Classification: Therapeutic potential As regulatory elements, asRNAs bear many advantages as drug targets. First of all, asRNAs regulate gene expression at multiple levels, including transcription, post-transcription and epigenetic modification. Secondly, cis-acting asRNAs are sequence-specific and exhibit a high degree of complementarity with their target genes. Thirdly, the expression level of asRNAs is very low compared to that of their target mRNAs; therefore, only a small amount of asRNA is required to produce an effect. In terms of drug targets, this represents a huge advantage because only a low dosage is required for effectiveness. In recent years, the idea of targeting asRNAs to increase gene expression in a locus-specific manner has drawn much attention.
Due to the nature of drug development, it is always easier to have drugs that function as downregulators or inhibitors. However, there is a need for drugs that can activate or upregulate gene expression, for example of tumor suppressor genes, neuroprotective growth factors, and genes found silenced in certain Mendelian disorders. Current approaches to restoring deficient gene expression or protein function include enzyme replacement therapies, microRNA therapies and delivery of functional cDNA. However, each has drawbacks. For example, the synthesized protein used in enzyme replacement therapies often cannot mimic the full function of the endogenous protein. In addition, enzyme replacement therapies are a life-long commitment and carry a large financial burden for the patient. Because of the locus-specific nature of asRNAs and the evidence of changes in asRNA expression in many diseases, there have been attempts to design single-stranded oligonucleotides, referred to as antagoNATs, to inhibit asRNAs and ultimately increase the expression of specific genes. Despite the promise of asRNAs as drug targets or drug candidates, some challenges remain to be addressed. First of all, asRNAs and antagoNATs can be easily degraded by RNases or other degrading enzymes. To prevent degradation of the therapeutic oligonucleotides, chemical modification is usually required. The most common chemical modification of oligonucleotides is the addition of phosphorothioate linkages to the backbone. However, the phosphorothioate modification can be proinflammatory, and adverse effects including fever, chills or nausea have been observed after local injection of phosphorothioate-modified oligonucleotides. Secondly, off-target toxicity also represents a big problem. Despite the locus-specific nature of endogenous asRNAs, only 10–50% of synthesized oligonucleotides show the expected targeting effect. One possible reason for this problem is the strict requirements on the structure of the asRNA for recognition by the target sequence and RNase H; a single mismatch can distort the secondary structure and lead to off-target effects. Lastly, artificial asRNAs have been shown to have limited intracellular uptake. Although neurons and glia have been shown to freely take up naked antisense oligonucleotides, traceable carriers such as viruses and lipid vesicles would still be ideal for controlling and monitoring their intracellular concentration and metabolism.
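Underlying all of the mechanisms above is Watson-Crick complementarity between the antisense strand and its target. As a toy illustration (the sequences are invented and are not the actual fomivirsen or mipomersen sequences), the sketch below derives an antisense strand as the reverse complement of an mRNA segment and verifies that the strands pair perfectly when annealed antiparallel.

```python
# Toy illustration of the hybridization principle behind asRNA regulation:
# an antisense strand is the reverse complement of its target mRNA segment.
# Sequences are invented examples.
RNA_PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the antisense (reverse-complement) strand of an RNA sequence."""
    return "".join(RNA_PAIRS[base] for base in reversed(rna))

def fraction_complementary(sense: str, antisense: str) -> float:
    """Fraction of positions that pair when the strands anneal antiparallel."""
    paired = sum(RNA_PAIRS[s] == a for s, a in zip(sense, reversed(antisense)))
    return paired / len(sense)

mrna_segment = "AUGGCUUACGGAUCCUAGG"        # invented target segment
asrna = reverse_complement(mrna_segment)    # perfect complement, cis-like

print(asrna)
print(fraction_complementary(mrna_segment, asrna))  # 1.0 for a perfect duplex
```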
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paul K. Chu** Paul K. Chu: Paul K. Chu (朱劍豪) is a specialist in plasma surface modification and materials science. He is Chair Professor of Materials Engineering in the Department of Physics, Department of Materials Science & Engineering, and Department of Biomedical Engineering at City University of Hong Kong. Biography: He received his BS in mathematics (cum laude) from the Ohio State University in 1977 and his MS and PhD in chemistry from Cornell University in 1979 and 1982, respectively. Biography: He is a Fellow of the American Physical Society (APS), American Vacuum Society (AVS), Institute of Electrical and Electronics Engineers (IEEE), and Materials Research Society (MRS). He has received more than 30 research/technical awards, including the IEEE Nuclear and Plasma Sciences (NPSS) Merit Award in 2007, the MRS (Taiwan) JW Mayer Lectureship in 2008, the Shanghai (China) Natural Science First Class Award 中國上海自然科學一等獎 in 2011, the Chinese Ministry of Education Natural Science First Class Award 中國教育部自然科學一等獎 in 2017, the Hubei Province (China) Natural Science Second Class Award 中國湖北省自然科學二等獎 in 2018, and the Anhui Province (China) Science and Technology Third Class Award 中國安徽省科學技術三等獎 in 2020. He was named among the Leading Talents of Guangdong Province of China (中國廣東省領軍人才) and the Thousand Talents of China (中國國家千人). He is a highly cited researcher in materials science/cross-field according to Web of Science/Clarivate Analytics. Biography: He is a Fellow of the Hong Kong Academy of Engineering Sciences (HKAES) (香港工程科學院院士) and the Hong Kong Institution of Engineers (HKIE). He is Chairman of the International Plasma-Based Ion Implantation Executive Committee, which organizes the biennial International Workshop on Plasma-Based Ion Implantation and Deposition (PBII&D), and a member of the Ion Implantation Technology (IIT) International Committee, which organizes the biennial International Conference on Ion Implantation Technology. He holds or has held honorary/visiting professorships in 17 universities and research institutes in China, including Peking University, Fudan University, Nanjing University, Shanghai Jiaotong University, Xi'an Jiaotong University, Harbin Institute of Technology, and the Chinese Academy of Sciences. He is supervising senior editor of IEEE Transactions on Plasma Science, associate editor of Materials Science and Engineering: R: Reports, and an editorial board member of Biomaterials, Advanced Materials Interfaces, Surface & Coatings Technology, Journal of Vacuum Science and Technology A & B, Surface and Interface Analysis, International Journal of Molecular Engineering, and Cancer Nanotechnology. Publications: He has authored or co-authored more than 2,000 journal papers and given more than 150 plenary, keynote, and invited talks at international scientific conferences. He is co-author of one book, co-editor of eight books, and co-author of more than 40 book chapters on plasma science and engineering, advanced materials, and nanotechnology. He has received 15 best paper awards. Patents: He has been granted more than 30 patents.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stained Glass (puzzle)** Stained Glass (puzzle): Stained Glass is a binary determination logic puzzle published by Nikoli. Rules: Stained Glass is played on a field of intersecting vertical, horizontal, and diagonal lines. At the intersections of some of these lines are small circles, which are unshaded (white), grey, or completely shaded (black). A white circle denotes that more unshaded shapes than shaded shapes touch that circle. A grey circle denotes that an equal number of shaded and unshaded shapes touch that circle. A black circle denotes that more shaded shapes than unshaded shapes touch that circle. The aim is to shade the correct shapes within the field such that all the circles correctly describe their neighboring areas. Solution methods: A white or black circle bordered by only one or two shapes forces all of its bordering shapes to be of the type specified by the circle. Once half of the shapes bordering a grey circle are known to be of one type, the remaining shapes must all be of the other type.
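Each circle rule is a simple counting constraint, which a solver can check shape by shape. The sketch below encodes just that counting rule; the puzzle representation is hypothetical, since a real Nikoli grid also carries the geometry of the intersecting lines.

```python
# Sketch of the Stained Glass circle rule as a counting constraint check.
# A circle is satisfied when the shaded/unshaded counts of the shapes
# touching it match its type. Representation invented for illustration.
from typing import Iterable

def circle_ok(circle: str, neighbor_shaded: Iterable[bool]) -> bool:
    """circle: 'white' | 'grey' | 'black'; neighbor_shaded gives the
    shading of every shape touching the circle (True = shaded)."""
    flags = list(neighbor_shaded)
    shaded = sum(flags)
    unshaded = len(flags) - shaded
    if circle == "white":
        return unshaded > shaded
    if circle == "grey":
        return unshaded == shaded
    return shaded > unshaded          # black circle

# A grey circle touching four shapes needs exactly two of them shaded:
print(circle_ok("grey", [True, False, True, False]))   # True
print(circle_ok("white", [True, False, False]))        # True: 2 unshaded > 1
```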
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sturtian glaciation** Sturtian glaciation: The Sturtian glaciation was a worldwide glaciation during the Cryogenian Period, when the Earth experienced repeated large-scale glaciations. As of January 2023, the Sturtian glaciation is thought to have lasted from c. 717 Ma to c. 660 Ma, a time span of approximately 57 million years. It is hypothesised to have been a Snowball Earth event or, alternatively, a series of regional glaciations, and is the longest and most severe known glacial event preserved in the geologic record. Etymology of name: Ultimately, current usage of the term is in reference to the globally significant Sturt Formation (originally Sturtian Tillite) within the Adelaide Superbasin of Australia. The Sturt Formation is named after Sturt Gorge, South Australia, itself named after the Sturt River, which was given its name in April 1831 by British military officer Captain Collet Barker after fellow officer and explorer Charles Sturt. The Sturtian glaciation is an informal but commonly used name for the older of two worldwide glacial events (the other is known as the Marinoan/Elatina glaciation) preserved in Cryogenian rocks. The term Sturtian was originally defined as a chronostratigraphic unit (Series) and later proposed as an international chronostratigraphic division; however, this has been superseded by international nomenclature. The suggestion of the glacial nature of the Sturt Formation during the early 20th century resulted in discussion about Neoproterozoic glaciations (thought to be Cambrian at the time) and encouraged the research that eventually resulted in the Snowball Earth hypothesis. Geology: Rocks preserving evidence for the Sturtian glaciation are found on every continent. Notable sections are found in Australia, Canada, China, Ethiopia, Namibia, Siberia, and Svalbard. Geology: According to Eyles and Young, "Glaciogenic rocks figure prominently in the Neoproterozoic stratigraphy of southeastern Australia and the northern Canadian Cordillera. The Sturtian glaciogenic succession (c. 740 Ma) unconformably overlies rocks of the Burra Group." The Sturtian succession includes two major diamictite-mudstone sequences which represent glacial advance and retreat cycles. It is stratigraphically correlated with the Rapitan Group of North America. Reusch's Moraine in northern Norway may have been deposited during this period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dislocation avalanches** Dislocation avalanches: Dislocation avalanches are rapid discrete events during plastic deformation, in which defects are reorganized collectively. This intermittent flow behavior has been observed in microcrystals, whereas macroscopic plasticity appears as a smooth process. Intermittent plastic flow has been observed in several different systems. In Al-Mg alloys, interactions between solutes and dislocations can cause sudden strain jumps during dynamic strain aging. In metallic glasses, it can be observed via shear banding with stress localization; and in single-crystal plasticity, it shows up as slip bursts. However, analysis of events differing in size by orders of magnitude, in materials with different crystallographic structures, reveals power-law scaling between the number of events and their magnitude, i.e. scale-free flow. This microscopic instability of plasticity can have profound consequences for the mechanical behavior of microcrystals. The increased relative size of the fluctuations makes it difficult to control the plastic forming process. Moreover, at small specimen sizes the yield stress is no longer well defined by the 0.2% plastic strain criterion, since this value varies from specimen to specimen. Similar intermittent effects have been studied in many completely different systems, including the intermittency of energy dissipation in magnetism (the Barkhausen effect), superconductivity, earthquakes, and friction. Background: Macroscopic plasticity is well described by continuum models, in which dislocation motion is characterized by an average velocity through Orowan's equation, $\dot{\gamma} = \rho b \bar{v}$, where $\dot{\gamma}$ is the plastic shear strain rate, $\rho$ the mobile dislocation density, $b$ the magnitude of the Burgers vector, and $\bar{v}$ the average dislocation velocity. However, this approach completely fails to account for well-known intermittent deformation phenomena such as the spatial localization of dislocation flow into "slip bands" (also known as Lüders bands) and the temporal fluctuations in stress-strain curves (the Portevin–Le Chatelier effect, first reported in the 1920s). Experimental Approach: Although evidence of intermittent flow behavior has long been known and studied, it was not until the past two decades that a quantitative understanding of the phenomenon was developed, with the help of novel experimental techniques. Experimental Approach: Acoustic emission Acoustic emission (AE) is used to record the crackling noise from deforming crystals. The amplitudes of the acoustic signals can be related to the area swept by the fast-moving dislocations and hence to the energy dissipated during deformation events. The results show that the crackling noise is not smooth and has no specific energy scale. The effect of grain structure on "supercritical" flow has been studied in polycrystalline ice. Experimental Approach: Direct mechanical measurement Recent developments in small-scale mechanical testing, with sub-nm resolution in displacement and sub-μN resolution in force, now allow discrete events in stress and strain to be studied directly. Currently, the most prominent method is the miniaturized compression experiment, in which a nanoindenter equipped with a flat indentation tip is used. Combined with in-situ transmission electron microscopy, scanning electron microscopy, and micro-diffraction methods, this nanomechanical testing approach can capture nanoscale plasticity instabilities in rich detail in real time. One potential concern in nanomechanical measurement is how fast the system can respond: can the indentation tip remain in contact with the sample and track the deformation?
Since dislocation velocity is strongly affected by stress, velocities can differ by many orders of magnitude between systems. Moreover, the multiscale nature of dislocation avalanche events gives the dislocation velocity a large range. For example, single dislocations have been shown to move at speeds of ~10 m/s in pure Cu, but dislocation groups move at ~10^−6 m/s in Cu-0.5%Al. The opposite is found for iron, where dislocation groups move six orders of magnitude faster in an FeSi alloy than individual dislocations in pure iron. Experimental Approach: To resolve this issue, Sparks et al. designed an experiment measuring the first fracture of a Si beam and comparing it with theoretical predictions to determine the response speed of the system. In addition to regular compression experiments, in-situ electrical contact resistance (ECR) measurements were performed. During these in-situ tests, a constant voltage was applied during the deformation experiment to record the current evolution during intermittent plastic flow. The results show that the indentation tip remains in contact with the sample throughout the experiments, demonstrating that the response speed is fast enough. Theoretical analysis and simulations: Avalanche strain distributions have the general form $P(s) = C s^{-\tau} \exp\left[-(s/s_0)^2\right]$, where $C$ is a normalization constant, $\tau$ is a scaling exponent, and $s_0$ is the characteristic strain of the largest avalanches. Theoretical analysis and simulations: Dislocation dynamics simulations have shown that $\tau$ can sometimes be close to 1.5, but many times higher exponents are observed, with values that may even approach 3 in special circumstances. While traditional mean-field theory predicts the value 1.5, more advanced mean-field investigations have demonstrated that larger exponents can be caused by non-trivial but prevalent mechanisms in crystal plasticity that suppress mobile dislocation populations. Effect of crystal structure on dislocation avalanches: In FCC crystals, the scaled velocity distribution shows a main peak with a relatively smooth curve, as expected from theory except for some disagreement at low velocity. In BCC crystals, however, the distribution of scaled velocity is broader and much more dispersed. The results also show that the scaled velocity in BCC crystals is much lower than in FCC crystals, which is not predicted by mean-field theory. A possible explanation for this discrepancy is based on the different speeds of the edge and screw components of dislocations in the two types of crystal. In FCC crystals, both components of a dislocation move at the same velocity, resulting in a smooth averaged avalanche profile, whereas in BCC crystals the edge components move fast and escape rapidly, while the screw components propagate slowly and drag down the overall velocity. Based on this explanation, one would also expect a direction dependence of avalanche events in HCP crystals, for which experimental data are currently lacking.
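To make the quoted distribution concrete, the following sketch draws synthetic avalanche sizes from P(s) ∝ s^(−τ) exp[−(s/s₀)²] and recovers the exponent from the small-s slope of a log-log histogram. The parameter values (τ = 1.5, s₀ = 100) and the size range are made-up illustration choices, not data from the studies discussed above.

```python
# Sketch: sample avalanche sizes from P(s) ~ s^(-tau) * exp(-(s/s0)^2)
# (the form quoted above) and read off the scaling exponent from the
# small-s part of a log-log histogram. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
tau, s0, s_min, s_max = 1.5, 100.0, 1.0, 1000.0

def sample(n: int) -> np.ndarray:
    out = []
    while len(out) < n:
        u = rng.random(n)
        # inverse-CDF draw from a pure power law on [s_min, s_max] ...
        s = (s_min**(1 - tau) + u * (s_max**(1 - tau) - s_min**(1 - tau)))**(1 / (1 - tau))
        # ... then a rejection step supplies the exp[-(s/s0)^2] cutoff
        keep = rng.random(n) < np.exp(-(s / s0) ** 2)
        out.extend(s[keep])
    return np.array(out[:n])

s = sample(50_000)
bins = np.logspace(0, 1.5, 20)               # fit well below the cutoff s0
hist, edges = np.histogram(s, bins=bins, density=True)
mid = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centers
slope, _ = np.polyfit(np.log(mid[hist > 0]), np.log(hist[hist > 0]), 1)
print(f"estimated tau ~ {-slope:.2f}")       # should come out near 1.5
```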
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Multimodal therapy** Multimodal therapy: Multimodal therapy (MMT) is an approach to psychotherapy devised by psychologist Arnold Lazarus, who originated the term behavior therapy in psychotherapy. It is based on the idea that humans are biological beings that think, feel, act, sense, imagine, and interact, and that psychological treatment should address each of these modalities. Multimodal assessment and treatment follows seven reciprocally influential dimensions of personality (or modalities) known by their acronym BASIC I.D.: behavior, affect, sensation, imagery, cognition, interpersonal relationships, and drugs/biology. Multimodal therapy: Multimodal therapy is based on the idea that the therapist must address these multiple modalities of an individual to identify and treat a mental disorder. According to MMT, each individual is affected in different ways and in different amounts by each dimension of personality, and should be treated accordingly for treatment to be successful. It sees individuals as products of the interplay among genetic endowment, physical environment, and social learning history. To state that learning plays a central role in the development and resolution of our emotional problems is to communicate little. For events to connect, they must occur simultaneously or in close succession. An association may exist when the responses one stimulus provokes are predictably and reliably similar to those another provokes. In this regard, classical conditioning and operant conditioning are two central concepts in MMT. BASIC I.D.: BASIC I.D. refers to the seven dimensions of personality according to Lazarus. Creating a successful treatment for a specific individual requires that the therapist consider each dimension and the individual's deficits in each. B represents behavior, which can be manifested through inappropriate acts, habits, or gestures, or the lack of appropriate behaviors. A stands for affect, which can be seen as the level of negative feelings or emotions one experiences. S is sensation: negative bodily sensations or physiological symptoms such as pain, tension, sweating, nausea, a quick heartbeat, etc. I stands for imagery, which is the existence of negative cognitive images or mental pictures. C represents cognition, or the degree of negative thoughts, attitudes, or beliefs. The second I stands for interpersonal relationships and refers to one's ability to form successful relationships with others; it is based on social skills and support systems. BASIC I.D.: D is for drugs and biological functions, and examines the individual's physical health, drug use, and other lifestyle choices. Multimodal therapy addresses the fact that different people depend on, or are more influenced by, some personality dimensions than others. Some people are prone to deal with their problems on their own, cognitively, while others are more likely to draw support from others, and others yet are likely to use physical activities to deal with problems, such as exercise or drugs. All reactions are a combination of how the seven dimensions work together in an individual. Once the source of the problem is found, treatment can focus on that specific dimension more than the others. Function: MMT starts after the patient has been assessed based on his/her emotional responses, sensory displays and the manner in which he/she interacts with people around via behavior, affect, sensations, images, cognition, drugs and interpersonal activities.
Based on this assessment, the therapist introduces the patient to the first session. During this time, the therapist and the patient create a list of problems and the treatments that may suit the patient best. Since the treatment is based on the individual case, each remedial strategy is chosen as an effective method for that patient. After the initial assessment is complete, a more detailed diagnosis is made using questionnaires. The therapist assesses both the actual profile and the structural profile of the patient. This diagnosis defines the target that both the therapist and the patient want to achieve once the treatment is complete. At this stage, the therapist evaluates various other ways to treat the patient. Often, relaxation tapes are used to calm the patient. Besides psychotherapy, the therapist may include dietary measures and stress management programs to treat the patient's associated psychiatric symptoms. The prime focus of the therapist is to ease the patient's distress and fulfill his/her needs by studying his/her behavior and mannerisms. With the patient's prior consent, the therapist tapes all the sessions and furnishes a copy of those tapes to the patient. These tapes act as a supporting resource when the therapist is evaluating the patient's behavior. MMT is a flexible mode of psychotherapy because each treatment plan is devised with all the possibilities in mind. In the case of a single patient, a session may last no more than a few hours, depending upon the therapist's analysis of the patient's behavior. However, if the patient shows a condition that needs multiple treatments, the sessions can stretch further to enable the therapist to analyse the patient more fully. CBT: Multimodal therapy originated with cognitive behavioral therapy (CBT), which is a fusion of cognitive therapy and behavior therapy. Behavior therapy focused on the consideration of external behaviors, while cognitive therapy focused on mental aspects and internal processes; combining the two made it possible to utilize both internal and external factors of treatment simultaneously. Arnold Lazarus added the idea that, since personality is multi-dimensional, treatment must also consider multiple dimensions of personality to be effective. His idea of MMT involves examining symptoms on each dimension of personality in order to find the right combination of therapeutic techniques to address them all. Lazarus retained the basic premises of CBT, but believed that more of the individual's specific needs and personality dimensions must be considered.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Recurrent point** Recurrent point: In mathematics, a recurrent point for a function f is a point that is in its own limit set under f. Any neighborhood of a recurrent point contains infinitely many iterates of the point. Definition: Let $X$ be a Hausdorff space and $f \colon X \to X$ a function. A point $x \in X$ is said to be recurrent (for $f$) if $x \in \omega(x)$, i.e. if $x$ belongs to its $\omega$-limit set. This means that for each neighborhood $U$ of $x$ there exists $n > 0$ such that $f^n(x) \in U$. The set of recurrent points of $f$ is often denoted $R(f)$ and is called the recurrent set of $f$. Its closure is called the Birkhoff center of $f$, and appears in the work of George David Birkhoff on dynamical systems. Every recurrent point is a nonwandering point; hence if $f$ is a homeomorphism and $X$ is compact, then $R(f)$ is an invariant subset of the non-wandering set of $f$ (and may be a proper subset).
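A classical example, standard in the literature rather than specific to this article: for an irrational rotation of the circle, every point is recurrent, so the recurrent set is the whole space.

```latex
% Classical example: irrational rotation of the circle.
% X = R/Z with the quotient metric d, f(x) = x + alpha (mod 1), alpha irrational.
% Claim: every x in X is recurrent, so R(f) = X.
%
% Proof sketch. The iterates x, f(x), f^2(x), ... are pairwise distinct
% (alpha irrational), so by compactness of X two of them come arbitrarily
% close: for any epsilon > 0 there are m > k >= 0 with
%   d\bigl(f^{m}(x), f^{k}(x)\bigr) < \varepsilon .
% Since f is an isometry, composing with f^{-k} preserves distances, so
% with n = m - k > 0,
%   d\bigl(f^{n}(x), x\bigr) < \varepsilon .
% As epsilon was arbitrary, every neighborhood of x contains some f^n(x)
% with n > 0; hence x \in \omega(x), and the Birkhoff center is all of X.
```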
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Now You See It (American game show)** Now You See It (American game show): Now You See It is an American television game show created by Frank Wayne for Mark Goodson-Bill Todman Productions. The object of Now You See It is to answer general knowledge trivia questions by finding the answers hidden in a grid, similar to a word search puzzle. Now You See It (American game show): Two seasons were produced, both of which aired on CBS. The pilot was taped in October 1973 and featured six players instead of five, as well as lacking the neon lights on the front of the desks. The first series ran from April 1, 1974, until June 13, 1975, and was hosted by Jack Narz, who was also hosting the syndicated revival of Concentration. Johnny Olson was the original announcer, with Gene Wood substituting on occasion. The second series ran from April 3 until July 14, 1989, and was hosted by Los Angeles news anchor Chuck Henry. Los Angeles disc jockey Mark Driscoll announced for the first month of the 1989 season, with Don Morrow replacing him for the remainder of the run. Gameplay: 1974-75 version Format #1 The first round of Now You See It under its original format began with four new contestants split into two teams, each with one "outside" and one "inside" contestant. This round, called the Elimination Game, was played using a board of 56 letters arranged in four rows of 14; each row contained a series of overlapping words, read left to right. Rows and columns were numbered for scoring purposes, and were referred to respectively as "lines" and "positions." The board was shown briefly to the four contestants, then turned off before any of them could fully memorize it. Gameplay: The outside contestants turned their backs to the board as it was turned back on, and Narz asked a toss-up question for the inside contestants. The first one to buzz in had to give the number of the line containing the correct answer. If he/she was correct, the other three lines were turned off, and the outside teammate turned to face the board and had to give both the answer and the position number of its starting letter. If the wrong line was given, the correct one was turned on and the outside contestant of the opposing team received a free guess at the answer. If an incorrect word was given, Narz gave the correct one and neither team scored. The point value for each correct answer was determined by adding its line and position numbers; for example, a word that started at position 9 in line 2 would be worth 11 points. Inside contestants could buzz in before the host had finished a question, but in this case the outside contestant had to respond based only on the portion read to that point. Gameplay: After each question, the entire board was turned on and the outside contestants turned their backs to it again. A total of 12 questions were played, with the members of each team trading places after the first six. The team in the lead at the end of the round advanced to the Semi-Finals, while their opponents were eliminated. Gameplay: The members of the winning team competed against each other in the Semi-Finals, facing a board with four rows of 16 blanks each. Narz read a crossword-style clue, after which the letters of the answer were filled in one at a time as he said "letter," starting at the far left end of the top row. Either contestant could buzz in at any time; a correct answer scored one point and completed the word, but a miss gave the opponent a free guess before Narz resumed filling in the letters.
If the word was not solved before its last letter was revealed, neither contestant scored. Each word after the first overlapped its predecessor by at least one letter; when the row became too full to accommodate any more words, play continued with a new word on the next row down. Gameplay: The first player to guess four words won the round and a prize package, in addition to moving on to face the champion in the Finals. During the first two weeks, no prize package was given to the winner. The goal was increased from four words to five during the third week; this became permanent when the second format was introduced. Gameplay: The Finals followed the same rules as the Elimination Game, using a new board of 56 letters. A contestant who buzzed in had to give first the line of the correct answer, then the answer and its position. After 12 questions, the higher-scoring contestant took or retained the championship and advanced to the Solo Game for a chance to win a cash jackpot. During the first episode only, a correct toss-up answer gave that contestant sole control of the round until he/she missed a question. The opponent could then take control by giving the correct answer; if he/she also missed, the next question was played as a toss-up. After the first episode, this format was abandoned and all words were played as toss-ups. Gameplay: Beginning with the 101st episode and continuing until the adoption of the second main game format, each contestant or team was asked to scan the board and write down one word from it at the beginning of each half of the Elimination Game and the Finals. A contestant or team earned a 10-point bonus for correctly answering a question with one of their chosen words, but had to reveal the word immediately upon using it in order to score. Gameplay: Format #2 On December 16, 1974, Now You See It underwent a significant change in format. The game began with two new players in the Qualifying Round, which was played in exactly the same manner as the Semi-Finals prior to the change. The first player to guess five words won the round and played against the returning champion in the Championship Round. Gameplay: The Championship Round was played in the same manner as the Finals in the prior format, and the scoring format was still the same. The difference was that the goal was to reach 100 points first, as opposed to being ahead when the final bell rang. In addition, the values of all words were doubled once either contestant reached 50 points. Whoever reached 100 points first became champion and played the Solo Game for the jackpot. If the champion won the Solo Game, the contestant he/she defeated returned for the next game as champion-designate. Gameplay: Under this format, a new game began as soon as the previous Solo Game ended. Play continued until time ran out, then resumed on the next episode. Gameplay: 1989 version On October 20, 1988, a pilot was filmed. When Now You See It was picked up in 1989, it kept the basic game format introduced late in the first series' run, with a qualifying round and a championship round. As it had done with its recent revivals of Blockbusters and Concentration two years earlier, Mark Goodson Productions gave the show a technological upgrade with a computer-generated board. The game was conducted on a three-tiered stage, with each round of play conducted on one of the three levels and covered by a large ring covered in chase lighting that was raised when play moved to that level and lowered when the round was over.
Gameplay: Qualifying Round The qualifying round was played on the first level and, as in the previous qualifying round, two contestants faced off to try to advance to face the day's champion (or champion-designate). The object, as before, was to find the words that fit the clues given by the host. As before, the board consisted of four lines of fourteen letters. Gameplay: The major difference came in scoring. Neither the lines nor the positions of the letters were numbered. Instead, the value of a correct answer started at 100 points and decreased by five points for every third of a second that elapsed before someone buzzed in. The first contestant to buzz in had to give the line containing the answer, and then the word itself, to score. If the counter reached 25 points with no guesses, the contestants were told what line the word was on to assist them. Gameplay: The qualifying round consisted of two boards, with the second played for double points. Originally, the second board was also played for the same number of points, with points doubling if time was running short. The first person to reach 1,000 points moved on to face the champion in the next round of play. Championship Round In the championship round, the winner of the qualifying round moved up to the second level and faced off against the returning champion, whose nametag had a crown atop it so the audience knew who the champion was. The two competed for cash and tried to be the first to reach or surpass $1,000. The first board was worth $200, increasing by $100 for each one that followed. Every board had six hidden words that fit a specific category, and the board was revealed to the studio audience and home viewers first. After Henry revealed the category for the board in play, a window popped up in front of each contestant so they could see the board. Gameplay: The first player to find a word fitting the category was given the chance to earn the monetary value of the board. To do so, he/she was shown the board again and given twenty seconds to locate the other five words in the category. Doing so won the board, and the attached monetary value was added to his/her score. If either player failed to come up with the necessary five answers on their turn, the opposing player was given a chance to claim the money. Henry would remind him/her of what answers had already been given, and he/she was shown the board for five seconds to come up with one of the remaining answers; successfully doing so won the value of the board, while failure meant the first contestant claimed the board's value. Gameplay: The first contestant to reach or pass the $1,000 goal became champion, kept their bank, and advanced to the Solo Game. Gameplay: Solo Game (All versions) In the Solo Game, the champion was given one minute to find ten words on a brand-new board. The champion viewed the board on a telestrator screen. On the original Now You See It, the Solo Game board had four rows of 16 letters each and the screen was embedded in Jack Narz's desk. The 1989 series positioned its bonus area at center stage, with the podium on the third tier of the set. Inside the podium was the screen displaying the computer-generated board, which, like all of the boards to that point, had four rows of fourteen letters. Gameplay: After a clue was given, the champion had to find the relevant word, say it, and circle it with an electronic pencil. He/she could pass as often as desired, then return to those words after all ten clues had been given if time permitted.
The champion could offer multiple guesses for the same clue, but could move on to the next one only by passing or finding the correct word. Gameplay: The champion won a cash jackpot for finding all words before time expired, or $100 per found word otherwise. On both series, the Solo Game's jackpot started at $5,000. For each unsuccessful playing on the original Now You See It, the jackpot increased by $1,000 to a maximum of $25,000. Any champion who won the jackpot immediately retired from the show, and the contestant he/she had defeated in the Finals became the champion-designate for the next show or match (depending on the format; as noted above, the change resulted in games that straddled episodes). A contestant's reign as champion was not otherwise limited; champions played until they were defeated or won the Solo Game. The highest jackpot won in the Narz version was $21,000. Gameplay: On the 1989 revival, each loss in the Solo Game added $5,000 to the jackpot. Champions retired after playing the Solo Game five times or accumulating a total of $75,000 in winnings, whichever came first. The highest jackpot won in the Henry version was $50,000. Production information: Theme Both versions used the instrumental theme "Chump Change," composed by Quincy Jones and Bill Cosby. For a brief period, the 1970s version used an alternate theme written by Edd Kalehoff, but returned to "Chump Change" shortly thereafter. Production information: Broadcast history 1974–1975 The first version ran from April 1, 1974 to June 13, 1975, at 11:00 a.m. (10:00 Central) with Jack Narz hosting, replacing The $10,000 Pyramid, which moved to ABC one month after its CBS cancellation. Initially, it did well against Alex Trebek's American debut on NBC (The Wizard of Odds) but, three months later, NBC gave Trebek a new show called High Rollers in that slot and Now You See It began to struggle while the producers altered the format several times. The show was taped at Television City in Studio 33, the current home of The Price Is Right. Some episodes used Studio 41, which at the time was the stage of CBS's Tattletales, another Goodson-Todman show. NBC's resurgence in its morning lineup in early 1975 with the likes of Wheel of Fortune prompted CBS to clean house, canceling The Joker's Wild along with Now You See It. Gambit (the show actually facing Wheel), which had begun in 1972 at 11/10, returned to that slot after Now You See It's departure from the schedule. Production information: This version aired occasionally on Game Show Network during the 1990s and 2000s until the network chose not to renew its contract with FremantleMedia (which now owns the Goodson-Todman library). The show currently airs as part of Buzzr's weekday morning lineup (9:30 a.m. EST). 1989 Years after the original Now You See It came to an end, Mark Goodson Productions decided to try the show again, taping a pilot on October 20, 1988. In 1989, CBS again aired the show in daytime, and the new Now You See It debuted on April 3 at 10:30 a.m. (9:30 Central) in place of Card Sharks, another Mark Goodson show, which had been airing in that timeslot since January 1986. Los Angeles news anchor Chuck Henry was host, and the show was again taped at Studio 33 inside Television City in Hollywood. Production information: Not only did it face its sister Mark Goodson-packaged game Classic Concentration on NBC, but the new Now You See It faced a vastly changed television market.
Syndicated talk shows such as Donahue and Sally Jessy Raphael had become popular and made games like Now You See It seem tame and quaint. Daytime viewership had declined since 1975, as a surge in cable and pay channels gave viewers more choices than ever. With greater possibility for local advertising revenue from the talk shows, numerous stations passed on the show despite the solid performance of its lead-in, Family Feud. Production information: Production of the syndicated game show Wheel of Fortune, at that point the biggest ratings winner in syndication, had moved to Studio 33 at Television City for the upcoming seventh season after spending the previous six at NBC Studios in Burbank, California. Meanwhile, NBC had cancelled the daytime edition of Wheel that had aired since 1975 due to a decline in ratings. The final episode aired on June 30, 1989. CBS took advantage of having Wheel in its studio space and relaunched the daytime series shortly after the show left Burbank. Now You See It was the only CBS morning show that was not performing well in the ratings, and the network declined to extend its commitment past the original fifteen weeks. Production information: Now You See It came to an end on July 11, 1989 after seventy-five episodes. The show closed with the entire stage crew joining Chuck Henry and the day's champion to bid farewell. Wheel launched the following Monday and stayed in the time slot until January 11, 1991, before finishing its run on NBC later that year. Merchandise: Home game A board game based on the 1974–1975 version was made by Milton Bradley in 1974. Computer game A computer game based on the 1989 version was made by Gametek in 1990.
**Comma splice** Comma splice: In written English usage, a comma splice or comma fault is the use of a comma to join two independent clauses. For example: It is nearly half past five, we cannot reach town before dark. The comma splice is sometimes used in literary writing to convey a particular mood of informality. In the United States it is usually considered an error in English writing style. Some authorities on English usage consider comma splices appropriate in limited situations, such as informal writing or with short similar phrases. Overview: Comma splices are rare in most published writing but are common among inexperienced writers of English. The original 1918 edition of The Elements of Style by William Strunk Jr. advises using a semicolon, not a comma, to join two grammatically complete clauses, except when the clauses are "very short" and "similar in form", for example: The gate swung apart, the bridge fell, the portcullis was drawn up. Overview: Comma splices are similar to run-on sentences, which join two independent clauses without any punctuation and without a coordinating conjunction such as and, but, for, etc. Sometimes the two types of sentences are treated differently based on the presence or absence of a comma, but most writers consider the comma splice a special type of run-on sentence. According to Garner's Modern English Usage: [M]ost usage authorities accept comma splices when (1) the clauses are short and closely related, (2) there is no danger of a miscue, and (3) the context is informal ... But even when all three criteria are met, some readers are likely to object. Overview: Comma splices often arise when writers use conjunctive adverbs (such as furthermore, however, or moreover) to separate two independent clauses instead of using a coordinating conjunction. In literature: Comma splices are also occasionally used in fiction, poetry, and other forms of literature to convey a particular mood or informal style. Some authors use commas to separate short clauses only. The comma splice is more commonly found in works from the 18th and 19th centuries, when written prose mimicked speech more closely. Fowler's Modern English Usage describes the use of the comma splice by the authors Elizabeth Jolley and Iris Murdoch: We are all accustomed to the ... conjoined sentences that turn up from children or from our less literate friends... Curiously, this habit of writing comma-joined sentences is not uncommon in both older and present-day fiction. Modern examples: I have the bed still, it is in every way suitable for the old house where I live now (E. Jolley); Marcus ... was of course already quite a famous man, Ludens had even heard of him from friends at Cambridge (I. Murdoch). In literature: Journalist Oliver Kamm writes of novelist Jane Austen's use of the comma splice, "Tastes in punctuation are not constant. It makes no sense to accuse Jane Austen of incorrect use of the comma, as no one would have levelled this charge against her at the time. Her conventions of usage were not ours." The author and journalist Lynne Truss writes in Eats, Shoots & Leaves that "so many highly respected writers observe the splice comma that a rather unfair rule emerges on this one: only do it if you're famous." Citing Samuel Beckett, E. M. Forster, and Somerset Maugham, she says: "Done knowingly by an established writer, the comma splice is effective, poetic, dashing. Done equally knowingly by people who are not published writers, it can look weak or presumptuous.
Done ignorantly by ignorant people, it is awful."
**L-methionine (S)-S-oxide reductase** L-methionine (S)-S-oxide reductase: In enzymology, an L-methionine (S)-S-oxide reductase (EC 1.8.4.13) is an enzyme that catalyzes the chemical reaction L-methionine + thioredoxin disulfide + H2O ⇌ L-methionine (S)-S-oxide + thioredoxin. The three substrates of this enzyme are L-methionine, thioredoxin disulfide, and H2O, whereas its two products are L-methionine (S)-S-oxide and thioredoxin. L-methionine (S)-S-oxide reductase: This enzyme belongs to the family of oxidoreductases, specifically those acting on a sulfur group of donors with a disulfide as acceptor. The systematic name of this enzyme class is L-methionine:thioredoxin-disulfide S-oxidoreductase. Other names in common use include fSMsr, methyl sulfoxide reductase I and II, acetylmethionine sulfoxide reductase, methionine sulfoxide reductase, L-methionine:oxidized-thioredoxin S-oxidoreductase, methionine-S-oxide reductase, and free-methionine (S)-S-oxide reductase. This enzyme participates in methionine metabolism.
**Gene redundancy** Gene redundancy: Gene redundancy is the existence of multiple genes in the genome of an organism that perform the same function. Gene redundancy can result from gene duplication. Such duplication events are responsible for many sets of paralogous genes. When an individual gene in such a set is disrupted by mutation or targeted knockout, there can be little effect on phenotype as a result of gene redundancy, whereas the effect is large for the knockout of a gene with only one copy. Gene knockout is a method utilized in some studies aiming to characterize the maintenance and fitness effects of functional overlap. Gene redundancy: Classical models of maintenance propose that duplicated genes may be conserved to various extents in genomes due to their ability to compensate for deleterious loss-of-function mutations. These classical models do not take into account the potential impact of positive selection. Beyond these classical models, researchers continue to explore the mechanisms by which redundant genes are maintained and evolve. Gene redundancy has long been appreciated as a source of novel gene origination; that is, new genes may arise when selective pressure exists on the duplicate, while the original gene is maintained to perform the original function, as proposed by newer models. Origin and Evolution of Redundant Genes: Gene redundancy most often results from gene duplication. Three of the more common mechanisms of gene duplication are retroposition, unequal crossing over, and non-homologous segmental duplication. Retroposition is when the mRNA transcript of a gene is reverse transcribed back into DNA and inserted into the genome at a different location. During unequal crossing over, homologous chromosomes exchange uneven portions of their DNA. This can lead to the transfer of one chromosome's gene to the other chromosome, leaving two of the same gene on one chromosome, and no copies of the gene on the other chromosome. Non-homologous duplications result from replication errors that shift the gene of interest into a new position. A tandem duplication then occurs, creating a chromosome with two copies of the same gene. When a gene is duplicated within a genome, the two copies are initially functionally redundant. These redundant genes are considered paralogs; they accumulate changes over time until they functionally diverge. Much research is centered around the question of how redundant genes persist. Three models have arisen to attempt to explain the preservation of redundant genes: adaptive radiation, divergence, and escape from adaptive conflict. Notably, retention following a duplication event is influenced by the type of duplication event and the gene class. That is, some gene classes are better suited for redundancy following a small-scale duplication or whole-genome duplication event. Redundant genes are more likely to survive when they are involved in complex pathways and are the product of whole-genome duplication or multifamily duplication. The currently accepted outcomes for single-gene duplicates include gene loss (non-functionalization), functional divergence, and conservation for increased genetic robustness. Otherwise, multigene families may undergo concerted evolution, or birth-and-death evolution. Concerted evolution is the idea that genes in a group, such as a gene family, evolve in parallel. The birth-and-death evolution concept is that the gene family undergoes strong purifying selection.
Origin and Evolution of Redundant Genes: Functional Divergence As the genome replicates over many generations, the redundant gene's function will most likely evolve due to genetic drift. Genetic drift influences genetic redundancy by either eliminating variants or fixing variants in the population. In the event that genetic drift maintains the variants, the gene may accumulate mutations that change its overall function. However, many redundant genes may diverge but retain the original function by mechanisms such as subfunctionalization, which preserves original gene function albeit by complementary action of the duplicates. The three mechanisms of functional divergence in genes are nonfunctionalization (or gene loss), neofunctionalization and subfunctionalization. During nonfunctionalization, or degeneration/gene loss, one copy of the duplicated gene acquires mutations that render it inactive or silent. Non-functionalization is often the result of single-gene duplications. At this time, the gene has no function and is called a pseudogene. Pseudogenes can be lost over time due to genetic mutations. Neofunctionalization occurs when one copy of the gene accumulates mutations that give the gene a new, beneficial function that is different from the original function. Subfunctionalization occurs when both copies of the redundant gene acquire mutations. Each copy becomes only partially active; two of these partial copies then act as one normal copy of the original gene. Origin and Evolution of Redundant Genes: Transposable Elements Transposable elements play various roles in functional differentiation. By mediating recombination, transposable elements can move redundant sequences in the genome. This change in sequence structure and location is a source of functional divergence. Transposable elements can also impact gene expression, given that they contain a sizeable number of micro-RNAs. Gene Maintenance Hypotheses: The evolution and origin of redundant genes remain unknown, largely because evolution happens over such a long period of time. Theoretically, a gene cannot be maintained without mutation unless it has a selective pressure acting on it. Gene redundancy, therefore, would allow both copies of the gene to accumulate mutations as long as the other was still able to perform its function. This means that all redundant genes should theoretically become pseudogenes and eventually be lost. Scientists have devised two hypotheses as to why redundant genes can remain in the genome: the backup hypothesis and the piggyback hypothesis. The backup hypothesis proposes that redundant genes remain in the genome as a sort of "back-up plan". If the original gene loses its function, the redundant gene is there to take over and keep the cell alive. The piggyback hypothesis states that two paralogs in the genome have some kind of non-overlapping function as well as the redundant function. In this case, the redundant part of the gene remains in the genome due to its proximity to the area that codes for the unique function. Why redundant genes remain in the genome is an ongoing question, and gene redundancy continues to be actively studied. There are many hypotheses in addition to the backup and piggyback models. For example, a study at the University of Michigan provides the theory that redundant genes are maintained in the genome by reduced expression.
Research: Gene Families and Phylogeny Researchers often use the history of redundant genes in the form of gene families to learn about the phylogeny of a species. It takes time for redundant genes to undergo functional diversification; the degree of diversification between orthologs tells us how closely related the two genomes are. Gene duplication events can also be detected by looking for increases in the number of gene duplicates. Research: A good example of using gene redundancy in evolutionary studies is the evolution of the KCS gene family in plants. This work studies how one KCS gene evolved into an entire gene family via duplication events. The number of redundant genes in a species allows researchers to determine when duplication events took place and how closely related species are. Research: Locating and Characterizing Redundant Genes Currently, there are three ways to detect paralogs in a known genomic sequence: simple homology (FASTA), gene family evolution (TreeFam) and orthology (eggNOG v3). Researchers often construct phylogenies and utilize microarrays to compare the structures of genomes to identify redundancy. Methods like creating syntenic alignments and analysis of orthologous regions are used to compare multiple genomes. Single genomes can be scanned for redundant genes using exhaustive pairwise comparisons. Before performing more laborious analyses of redundant genes, researchers typically test for functionality by comparing open reading frame lengths and the relative rates of silent and non-silent mutations (an illustration of this idea appears in the code sketch at the end of this article). Since the Human Genome Project's completion, researchers have been able to annotate the human genome much more easily. Using online databases like the Genome Browser at UCSC, researchers can look for homology in the sequence of their gene of interest. Research: Breast Cancer Disposition Genes The mode of duplication by which redundancy occurs has been found to impact the classifications in breast cancer disposition genes. Gross duplications complicate clinical interpretation because it is difficult to discern whether they occur in tandem. Recent methods, like DNA breakpoint assay, have been used to determine tandem status. In turn, these tandem gross duplications can be more accurately screened for pathogenic status. This research has important implications for evaluating risk of breast cancer. Research: Pathogen Resistance in Triticeae grasses Researchers have also identified redundant genes that confer selective advantage on the organismal level. The partial ARM1 gene, a redundant gene resulting from a partial duplication, has been found to confer resistance to Blumeria graminis, a mildew fungus. This gene exists in members of the Triticeae tribe, including wheat, rye, and barley. Human Redundant Genes: Olfactory Receptors The Human Olfactory Receptor (OR) gene family contains 339 intact genes and 297 pseudogenes. These genes are found in different locations throughout the genome, but only about 13% are on different chromosomes or on distantly spaced loci. 172 subfamilies of OR genes have been found in humans, each at its own locus. Because the genes in each of these subfamilies are structurally and functionally similar, and in close proximity to each other, it is hypothesized that each evolved from single genes undergoing duplication events. The high number of subfamilies in humans explains why we are able to recognize so many odors. Human Redundant Genes: Human OR genes have homologues in other mammals, such as mice, that demonstrate the evolution of Olfactory Receptor genes.
One particular family that is involved in the initial event of odor perception has been found to be highly conserved throughout all of vertebrate evolution. Human Redundant Genes: Disease Duplication events and redundant genes have often been thought to have a role in some human diseases. Large-scale whole-genome duplication events that occurred early in vertebrate evolution may be the reason that human monogenic disease genes often contain a high number of redundant genes. Chen et al. hypothesize that the functionally redundant paralogs in human monogenic disease genes mask the effects of dominant deleterious mutations, thereby maintaining the disease gene in the human genome. Whole-genome duplications may be a leading cause of retention of some tumor-causing genes in the human genome. For example, Strout et al. have shown that tandem duplication events, likely via homologous recombination, are linked to acute myeloid leukemia. The partial duplication of the ALL1 (MLL) gene is a genetic defect that has been found in patients with acute myeloid leukemia.
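As a concrete illustration of the functionality screen mentioned under "Locating and Characterizing Redundant Genes" above (comparing open reading frames and the balance of silent versus non-silent changes), here is a minimal sketch in Python. It is not from any published pipeline: the function name and the toy sequences are hypothetical, and real analyses use model-based dN/dS estimators (e.g. PAML) rather than raw codon counts.

```python
# Tally synonymous (silent) and non-synonymous codon differences between
# two aligned, gap-free coding sequences, using the standard genetic code.
BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    a + b + c: AMINO_ACIDS[16 * i + 4 * j + k]
    for i, a in enumerate(BASES)
    for j, b in enumerate(BASES)
    for k, c in enumerate(BASES)
}

def silent_vs_nonsilent(seq_a: str, seq_b: str) -> tuple[int, int]:
    """Return (silent, non-silent) codon-change counts for two aligned ORFs."""
    silent = nonsilent = 0
    for pos in range(0, min(len(seq_a), len(seq_b)) - 2, 3):
        codon_a, codon_b = seq_a[pos:pos + 3], seq_b[pos:pos + 3]
        if codon_a == codon_b:
            continue  # identical codon, nothing to classify
        if CODON_TABLE[codon_a] == CODON_TABLE[codon_b]:
            silent += 1      # same amino acid: a silent (synonymous) change
        else:
            nonsilent += 1   # amino acid changed: a non-silent change
    return silent, nonsilent

# Toy example: TTT->TTC is silent (both Phe), AAT->GAT is not (Asn->Asp).
print(silent_vs_nonsilent("TTTGCTAAT", "TTCGCTGAT"))  # -> (1, 1)
```

Roughly speaking, an excess of non-silent changes between paralogs hints at functional divergence or relaxed constraint, while a predominance of silent changes is consistent with both copies remaining under purifying selection.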
**Gasoline gallon equivalent** Gasoline gallon equivalent: Gasoline gallon equivalent (GGE) or gasoline-equivalent gallon (GEG) is the amount of an alternative fuel it takes to equal the energy content of one liquid gallon of gasoline. GGE allows consumers to compare the energy content of competing fuels against a commonly known fuel, namely gasoline. Gasoline gallon equivalent: It is difficult to compare the cost of gasoline with other fuels if they are sold in different units and physical forms. GGE attempts to solve this. One GGE of CNG and one GGE of electricity have exactly the same energy content as one gallon of gasoline. In this way, GGE provides a direct comparison of gasoline with alternative fuels, including those sold as a gas (natural gas, propane, hydrogen) and as metered electricity. Definition: In 1994, the US National Institute of Standards and Technology (NIST) defined "gasoline gallon equivalent (GGE) [as] 5.660 pounds of natural gas." Compressed natural gas (CNG), for example, is a gas rather than a liquid. It can be measured by its volume in standard cubic feet (ft3) at atmospheric conditions, by its weight in pounds (lb), or by its energy content in joules (J), British thermal units (BTU), or kilowatt-hours (kW·h). CNG sold at filling stations in the US is priced in dollars per GGE. Definition: Using GGE as a measure to compare the stored energy of various fuels for use in an internal combustion engine is only one input for consumers, who typically are interested in the annual cost of driving a vehicle, which requires considering the amount of useful work that can be extracted from a given fuel. This is measured by the car's overall efficiency. In the context of GGE, a real-world measure of overall efficiency is the fuel economy or fuel consumption advertised by motor vehicle manufacturers. Definition: Efficiency and consumption To start, only a fraction of the stored energy of a given fuel (measured in BTU or kW·h) can be converted to useful work by the vehicle's engine. The measure of this is engine efficiency, often called thermal efficiency in the case of internal combustion engines. A diesel-cycle engine can be as much as 40% to 50% efficient at converting fuel into work, whereas a typical automotive gasoline engine's efficiency is about 25% to 30%. In general, an engine is designed to run on a single fuel source, and substituting one fuel for another may affect the thermal efficiency. Each fuel–engine combination requires adjusting the mix of air and fuel. This can be a manual adjustment using tools and test instruments, or done automatically in computer-controlled fuel-injected and multi-fuel vehicles. Forced induction for an internal combustion engine using a supercharger or turbocharger may also affect the optimum fuel–air mix and thermal efficiency. Definition: The overall efficiency of converting a unit of fuel to useful work (rotation of the driving wheels) includes consideration of thermal efficiency along with dynamic losses that are inherent and specific to the design of a given vehicle. Thermal efficiency is affected by both friction and heat losses; for internal combustion engines, some of the stored energy is lost as heat through the exhaust or cooling system. In addition, friction inside the engine occurs along the cylinder walls, crankshaft rod bearings and main bearings, camshaft bearings, drive chains or gears, plus other miscellaneous and minor bearing surfaces.
Other dynamic losses can be caused by friction outside the motor/engine, including loads from the generator/alternator, power steering pump, A/C compressor, transmission, transfer case (if four-wheel drive), differential(s) and universal joints, plus rolling resistance of the pneumatic tires. The vehicle's external styling affects its aerodynamic drag, which is another dynamic loss that must be considered for overall efficiency. Definition: In battery or electric vehicles, calculating the vehicle's overall efficiency of useful work begins with the charge–discharge efficiency of the battery pack, generally 80% to 90%. Next is the conversion of stored energy to distance traveled under power. Generally speaking, an electrical motor is far more efficient than an internal combustion engine at converting the stored potential energy into useful work; in an electric vehicle, traction motor efficiency can approach 90%, as there is minimal waste heat coming off the motor parts, and zero heat cast off by a coolant radiator or out of an exhaust. An electric motor typically has internal friction only at the main axle bearings. Additional losses will affect the overall efficiency, similar to a conventional internal combustion car, including rolling resistance, aerodynamic drag, accessory power, climate control, and drivetrain losses. See the retail electricity rates under "Gasoline gallon equivalent tables" below for translating electricity costs into the cost of a GGE in BTU terms. Definition: Overall efficiency is measured and reported, typically by government testing, through operating the vehicle in a standardized driving cycle designed to replicate typical use, while providing a consistent basis for comparison between vehicles. Cars sold in the United States are advertised by their measured overall efficiency (fuel economy) in miles per gallon (mpg). The MPG of a given vehicle starts with the thermal efficiency of the fuel and engine, less all of the above elements of friction. Fuel consumption is an equivalent measure for cars sold outside the United States, typically measured in litres per 100 km traveled; in general, fuel consumption and miles per gallon would be reciprocals with appropriate conversion factors, but because different countries use different driving cycles to measure fuel consumption, fuel economy and fuel consumption are not always directly comparable. Definition: Miles per gallon of gasoline equivalent (MPGe) The MPGe metric was introduced in November 2010 by the EPA on the Monroney label of the Nissan Leaf electric car and the Chevrolet Volt plug-in hybrid. The ratings are based on the EPA's formula, in which 33.7 kilowatt-hours of electricity is equivalent to one gallon of gasoline (giving a heating value of 115,010 BTU/US gal), and the energy consumption of each vehicle during the EPA's five standard drive-cycle tests simulating varying driving conditions. All new cars and light-duty trucks sold in the U.S. are required to have this label showing the EPA's estimate of the fuel economy of the vehicle. Gasoline gallon equivalent tables: Rates per kWh for residential electricity in the USA range from $0.0728 (Idaho) to $0.166 (Alaska), $0.22 (San Diego Tier 1; Tier 2 is $0.40) and $0.2783 (Hawaii). Specific fuels: Compressed natural gas One GGE of natural gas is 126.67 cubic feet (3.587 m3) at standard conditions.
This volume of natural gas has the same energy content as one US gallon of gasoline (based on lower heating values: 900 BTU/cu ft (9.3 kWh/m3) of natural gas and 114,000 BTU/US gal (8.8 kWh/L) for gasoline). One GGE of CNG pressurized at 2,400 psi (17 MPa) is 0.77 cubic feet (22 litres; 5.8 US gallons). This volume of CNG at 2,400 psi has the same energy content as one US gallon of gasoline (based on lower heating values: 148,144 BTU/cu ft (1,533.25 kWh/m3) of CNG and 114,000 BTU/US gal (8.8 kWh/L) of gasoline). Using Boyle's law, the equivalent GGE at 3,600 psi (25 MPa) is 0.51 cubic feet (14 litres; 3.8 US gallons). Specific fuels: The National Conference on Weights and Measures (NCWM) has developed a standard unit of measurement for compressed natural gas, defined in the NIST Handbook 44 Appendix D as follows: "1 Gasoline [US] gallon equivalent (GGE) means 2.567 kg (5.660 lb) of natural gas." When consumers refuel their CNG vehicles in the US, the CNG is usually measured and sold in GGE units. This is fairly helpful as a comparison to gallons of gasoline. Specific fuels: Ethanol and blended fuels (E85) 1.5 US gallons (5.7 litres) of ethanol has the same energy content as 1.0 US gal (3.8 L) of gasoline. Specific fuels: The energy content of ethanol is 76,100 BTU/US gal (5.89 kilowatt-hours per litre), compared to 114,100 BTU/US gal (8.83 kWh/L) for gasoline. A flex-fuel vehicle will experience about 76% of the fuel mileage (MPG) when using E85 (85% ethanol) products as compared to 100% gasoline. Simple calculations of the BTU values of the ethanol and the gasoline indicate the reduced heat values available to the internal combustion engine; pure ethanol provides about 2/3 of the heat value available in pure gasoline. Specific fuels: In the most common calculation, that is, the BTU value of pure gasoline versus gasoline with 10% ethanol, the latter has just over 96% of the BTU value of pure gasoline. Gasoline BTU content also varies with Reid vapor pressure (winter blends containing ethanol vaporize more easily, since ethanol alone makes a vehicle difficult to start when it is cold out) and with anti-knock additives; such additives reduce the BTU value. A worked example of these conversions follows.
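The arithmetic above is easy to mechanize. The following is a minimal sketch in Python using only figures quoted in this article (5.660 lb of natural gas per GGE, the EPA's 33.7 kWh per gallon, and lower heating values of 114,000 BTU/US gal for gasoline and 76,100 BTU/US gal for ethanol); the function names are illustrative, not from any standard or library.

```python
# Constants quoted in this article.
LB_NATURAL_GAS_PER_GGE = 5.660    # NIST (1994): one GGE of natural gas
KWH_PER_GAL_GASOLINE = 33.7       # EPA equivalence used for MPGe
BTU_PER_GAL_GASOLINE = 114_000    # lower heating value of gasoline
BTU_PER_GAL_ETHANOL = 76_100      # lower heating value of ethanol

def cng_pounds_to_gge(pounds: float) -> float:
    """Mass of CNG expressed in gasoline gallon equivalents."""
    return pounds / LB_NATURAL_GAS_PER_GGE

def mpge(miles: float, kwh_used: float) -> float:
    """EPA-style miles per gallon of gasoline equivalent for an EV."""
    return miles / (kwh_used / KWH_PER_GAL_GASOLINE)

def blend_energy_fraction(ethanol_share: float) -> float:
    """Energy of an ethanol-gasoline blend relative to pure gasoline."""
    blend_btu = (ethanol_share * BTU_PER_GAL_ETHANOL
                 + (1 - ethanol_share) * BTU_PER_GAL_GASOLINE)
    return blend_btu / BTU_PER_GAL_GASOLINE

print(cng_pounds_to_gge(11.32))               # 2.0 GGE of CNG
print(mpge(100, 33.7))                        # 100.0 MPGe: 100 miles on one GGE
print(round(blend_energy_fraction(0.10), 3))  # 0.967 -> E10, "just over 96%"
print(round(blend_energy_fraction(0.85), 3))  # 0.717 by raw BTU for E85
```

Note that the raw BTU ratio for E85 (about 72%) sits a little below the roughly 76% real-world mileage fraction quoted above; observed mileage depends on the engine and driving cycle, not on heating values alone.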
**GB-PVR** GB-PVR: GB-PVR was a PVR (personal video recorder, aka digital video recorder) application, running on Microsoft Windows, whose main function was scheduling TV recordings and playing back live TV. GB-PVR is no longer under active development and has been superseded by NextPVR, also known as nPVR. GB-PVR also acts as home media center software with a digital video recorder, an online radio station tuner, a music and movie player, a library of images and other features. GB-PVR: Although GB-PVR supports open interfaces, the core engine code is closed. However, developing personal plug-ins is an option to extend the application, and these can be closed or open source, depending on the developer's interests. These plug-ins can be developed in C#, VB.NET or C++, and some examples are available in the GB-PVR official Forums and the GB-PVR Documentation wiki websites. The software was developed with an interface which allows the user to change the skin view or other graphic elements such as the wallpaper. GB-PVR: GB-PVR is mostly an MPEG recording and playback system, but may also play other non-MPG content such as AVI (DivX/Xvid), WMV, and other formats that are supported by the codecs installed on the computer. It requires a supported TV tuner card, a VMR9-capable display adapter (video card), and a supported MPEG2 decoder. Other requirements are listed on the GB-PVR web site. Features: Integrated graphical user interface to manage all functionality 10-foot user interface for large screen displays TV Guide for scheduling of recordings Support for season recordings Support for automatically converting recordings to DivX/Xvid/WMV/iPod etc. Support for manual recordings on a specified channel at a specified time Timeshift television allowing for pausing live TV etc. Multidec support enabling the use of a wide range of softcams and other DVB plugins. Teletext DVB subtitles Support for recording multiple digital channels at the same time with 1 tuner card when channels are on the same frequency SRT subtitles Access to music, videos and photos inside the computer. Net radio FM radio Support for HDTV Multi-lingual support, with language packs available for many languages. NextPVR: NextPVR is the successor of GB-PVR 1.4.7 (August 29, 2009), and includes most features of GB-PVR, plus others. As of December 2020 it was at revision 5.1.0. Supported capture cards: Capture or tuner cards are devices that allow a computer to record video signals, receive television signals and play back video. Some examples of capture cards are: Analog TV cards These capture cards are the most popular, as they allow a computer to receive television signals; moreover, some of them also act as video capture devices, recording television programs on the computer. Some examples are: Hauppauge Adaptec ATI AVerMedia Conexant and more... Supported capture cards: Digital TV cards Depending on the type of device, these cards can tune the reception of digital signals such as DVB-T and DVB-S, ATSC HDTV or QAM HDTV signals. These devices also can include BDA drivers. Some examples are listed below: DVB-T AND DVB-S DEVICES Hauppauge Fusion HDTV ATI HDTV Kworld and more... ATSC HDTV DEVICES Fusion HDTV Hauppauge AverMedia SiliconDust QAM HDTV DEVICES OnAir GT SiliconDust HDHomeRun Hauppauge HVR-1600 Hauppauge HVR-1800 GO7007SB BASED DEVICES Generic Conexant "Blackbird" based card Plextor PX-M402U, PX-TV402U Lifeview TV Walker Remote control: IR or RF signal transmitters and receivers are used for GB-PVR remote control.
The software interprets these signals. With a few buttons, the user can interact with GB-PVR. Some manufacturers have developed remotes for remote PC wake-up. Playback: Playback of many video formats, MPEG, AVI, DivX, Xvid, TS, etc. Extensible playback mechanism allowing additional file types to be added with the correct codecs installed Automatic aspect ratio control DVD playback from either a DVD drive or a DVD image on hard disk. Supports VMR9/VMR7/Overlay video renderers VMR9 full-screen exclusive mode Music visualizations. Plug-ins: Plug-in DLLs go in the gbpvr/plugins directory, and plug-in skins in the skin directory. Usually plug-ins are distributed as zip files, and can be extracted to the gbpvr root directory. When a skin is not included in the plug-in's zip file, it has to be copied into the current skin directory. Plug-ins: Some plug-ins may be configured in the GB-PVR configuration tool, but most of them are configured by editing the plug-in's own files manually or via its skin file (skin.xml). Plug-ins available in version 1.3.7 (current version 1.3.11 to be confirmed) include: AnimeLibrary (collection of Anime episodes, images...) BurnDVDX2 (DVD creator from MPGs) Cinema (shows information on local cinemas, times, films...) DVB-TRadio (plug-in for playing DVB-T radio channels) GameZone (front-end for multiple emulators) GraphRecorder (allows an external application to record programs for GB-PVR) MLPanel (design screen-savers, picture slides...) MovieWiz (movie manager) Music (fast database-driven music player application with free text search and tag edit) SkinPlugin (allows users to browse themes and change the look of GB-PVR on the fly) SS2Recorder (record and watch live TV by TechniSat) Torrents (torrent files manager using 'uTorrent') TVListings (TV guide) Weather (on-demand weather channel) WebCams (allows users to view webcams all around the world) Client/server support: When configuring the machine which is running GB-PVR, the possibilities are client and server mode. The server is responsible for recordings, which the clients can schedule. Therefore, clients do not need a recording service. A client PC can simply watch TV and recordings from the server, which has to share them. Clients can also use the EPG which runs on the server. There are two different sharing modes: Streaming (only supports MPEG2) File sharing. Other clients supported: Hauppauge MediaMVP NMT (Network Media Tank)
**Clean and press** Clean and press: The clean and press is a two-part weight training exercise whereby a loaded barbell is lifted from the floor to the shoulders (the clean) and pushed overhead (the press). The lift was a component of the sport of Olympic weightlifting from 1928 to 1972, but was removed due to difficulties in judging proper technique. Movement: Clean phase In the clean movement, after taking a big breath and setting the back, the lifter jumps the bar up through triple extension (in very quick succession) of the hips, knees and then ankles. When the legs have driven the bar as high as possible, the lifter pulls under the bar by violently shrugging (contracting) the trapezius muscles of the upper back ("traps"), dropping into a deep squat position and spinning the hands around the bar so the elbows are extended in front. At the same time, the arms are brought up with the elbows extended in front of the chest so the bar may now lie across or "rest" across the palms, the front of the shoulder or deltoid muscles, and the clavicles. At this point the lifter should be in a full squat position, with the buttocks on or very close to the heels, sitting erect with the bar resting comfortably across the deltoids and fingers. By keeping a rigid torso and maintaining a deep breath hold, the lifter supports the bar as it flexes over the clavicles. Movement: Press phase Once the bar is on the anterior deltoids, the lifter proceeds to the press, pushing the bar overhead and locking it out with completely extended arms. Jerking movements, bending of the legs, excessive backward leaning, or displacement of the feet are prohibited. Removal from the Olympics: By the 1950s, lax enforcement of the rules in international competition had allowed the press phase of the lift, by rule an upright, rigid-body movement performed by the shoulders and arms, to evolve into a "layback" movement that utilized the larger muscles of the legs, hips, and torso, enabling the lifter to "cheat" to lift more weight. Historian John D. Fair wrote: "The rules had been clear about maintaining a vertical position and disallowing bending of the legs since the 1930s, but much depended on how these movements were interpreted and the political dispositions of officials and juries." In 1964, Olympic weightlifting referee George W. Kirkley wrote that the "clause of the rule which defines the permitted lean-back as 'not exaggerated' is in my view a weak spot, because it is virtually impossible to get any universal agreement of interpretation as to what constitutes 'exaggerated.'" After World War II, the situation was compounded by Cold War tensions: in 1956, Bob Hoffman, coach of the U.S. Olympic weightlifting team, accused international judges of pro-Soviet, anti-American bias, disqualifying legal American presses and allowing rule-breaking Soviet ones. Fair, however, while acknowledging the Soviet role in the erosion of press form, wrote that "the twin trends of loose pressing and lax officiating were well in place" before the Soviets entered international competition. The International Weightlifting Federation resolved the situation by removing the clean and press from the Olympic weightlifting program after the 1972 games in Munich.
**Toy train** Toy train: A toy train is a toy that represents a train. It is distinguished from a model train by an emphasis on low cost and durability, rather than scale modeling. A toy train can be as simple as a toy that can run on a track, or it might be operated by electricity, clockwork or live steam. It is typically constructed from wood, plastic or metal. Many of today's steam trains might be considered toys as well, provided they are not strictly to scale or finely detailed, favoring instead a robustness appropriate for children or inexpensive production. Definitions: "Toy train" usually refers to a reduced-scale model of a train for children to play with. Some similar but larger vehicles are made for children to ride in, typically in parks and playgrounds; often they run on tires and not tracks. If they are meant to resemble trains, then these too are called toy trains. Small trains are sometimes also called toy trains. In India, many trains that run on narrow-gauge tracks and that are meant for adults are called toy trains. The trains that run on the Darjeeling Himalayan Railway are an example. Definitions: For highly accurate train models made for serious hobbyists and collectors, the term "model train" is preferred. Standards: The first widely adopted standards for toy trains running on track were introduced in Leipzig, Germany in 1891 by Märklin. See also List of rail transport modelling scales. USA classification: Z Gauge [1:220], N Gauge [1:160], HO Gauge [1:87], S Gauge [1:64], O Gauge [1:43 to 1:48, varies], G Gauge [1:20]. Märklin measured the gauge as the distance between the centers of the two outer rails, rather than the distance between the outer rails themselves. Lionel's standard gauge is allegedly the result of Lionel misreading these standards, as are the variances in O gauge between the United States and Europe. USA classification: Most of these standards never really caught on, due to their large size, which made them impractical to use indoors, as well as the high price of manufacturing. Wide gauge trains, which are close in size to 2 gauge, are produced in limited quantities today, as are 1 gauge and O gauge trains. Of these, O gauge is the most popular. USA classification: The modern standards for toy trains also include S gauge, HO scale, N scale, and Z scale, in descending order of size. HO and N scale are the most popular model railway standards of today; inexpensive sets sold in toy stores and catalogs are less realistic than those sold to hobbyists. O gauge arguably remains the most popular toy train standard. Another size that is attracting interest among hobbyists is building and operating trains from Lego, or L gauge, which is roughly 1/38 scale. USA classification: A "de facto" standard is used by some companies making wooden toy trains that run on wooden tracks. This is usually referred to as "Brio" or "Thomas" compatible, in reference to two major companies. The term "Vario System", introduced by the company Eichhorn, refers to a variant of the connecting system used by some modern wooden track producers. The tracks don't use rails as such but rather grooves set apart a certain distance. The same "gauge" is used by the "Lionel Great American Adventure series" produced by Learning Curve, the Plarail system from Tomy, and Trackmaster. Although the rolling stock of each system may be used to some extent on the tracks of other systems, the compatibility beyond simple straight track and large-radius curves may be rather limited.
USA classification: Playmobil is an example of a company that offers a complete play world system based on its small plastic dolls and later extended its play world to railways. It has developed two train systems to date. One is aimed at larger children, using electric trains and remote control. This track system is designed so that it can also be used outside, much like a garden train. The other system is designed for preschool children or even toddlers. An example of a system aimed at the very young is offered, among others, by the company "Wader Toys". This includes tracks for road and rail as well as waterways. The elements are very simple in design, sturdy and washable, as they are intended for play in such environments as sandboxes, mud and water. True-to-scale detail is a very minor issue with such systems, which focus instead on sturdiness, avoiding sharp edges and avoiding parts that could be a choking hazard. USA classification: Although the words "scale" and "gauge" are often used interchangeably, many toy train manufacturers historically had little concern with depicting accurate scale. American Flyer tended to boast of its closer accuracy compared to other manufacturers. The terms "O scale" and "S scale" tend to imply serious scale modeling, while the terms "O gauge" and "S gauge" tend to imply toy trains manufactured by Lionel and American Flyer, respectively. While S gauge is fairly consistent at 1:64 scale, O gauge trains represent a variety of sizes. O gauge track happens to be 1/45 the size of real-world standard gauge track, so manufacturers in Continental Europe have traditionally used 1:45 for O gauge trains. British manufacturers rounded this up to 1:43, which is seven millimeters to the foot. U.S. manufacturers rounded it down to 1:48, which is a quarter-inch to the foot; a short worked example of this arithmetic appears at the end of this article. However, most engaged in a practice of selective compression in order to make the trains fit in a smaller space, causing the actual scale to vary, and numerous manufacturers produced 1:64 scale trains—the proper size for S gauge—in O gauge, especially for cost-conscious lines. USA classification: Some of the earliest O gauge trains made of tinplate weren't scale at all, made to unrealistic, whimsical proportions similar in length to modern HO scale, but anywhere from one and a half to two times as wide and tall. Some adult fans of toy trains operate their trains, while others only collect. Some toy train layouts are accessorized with scale models in an attempt to be as realistic as possible, while others are accessorized with toy buildings, cars, and figures. Some hobbyists will only buy accessories that were manufactured by the same company who made their trains. This practice is most common among fans of Marx and Lionel. History: The earliest toy trains were made of lead and had no moving parts. Some had wheels that turned, but these had to be pushed or pulled. A few of the early 19th-century push toy trains were made of tinplate, like the large, durable, stylized locomotive toys in the U.S., which were painted red and gold and decorated with hearts and flowers. History: Around 1875, technological advancements in materials and manufacturing allowed tin to be stamped, cut, rolled, and lithographed faster than ever before. Toy trains were revolutionized when Märklin, a German firm that specialized in doll house accessories, sought to create an equivalent toy for boys where a constant revenue stream could be ensured by selling add-on accessories for years after the initial purchase.
In addition to boxed sets containing a train and track, Märklin offered extra track, rolling stock, and buildings sold separately, creating the predecessor to the modern model train layout featuring buildings and scenery in addition to an operating train. Electric trains followed, with the first appearing in 1897, produced by the U.S. firm Carlisle & Finch. As residential use of electricity became more common in the early 20th century, electric trains gained popularity, and as time went on, these electric trains grew in sophistication, gaining lighting, the ability to change direction, to emit a whistling sound, to smoke, to remotely couple and uncouple cars and even to load and unload cargo. Toy trains from the first half of the 20th century were often made of lithographed tin; later trains were often made mostly of plastic. Prior to the 1950s, there was little distinction between toy trains and model railroads—model railroads were toys by definition. Pull toys and wind-up trains were marketed towards children, while electric trains were marketed towards teenagers, particularly teenaged boys. It was during the 1950s that the modern emphasis on realism in model railroading started to catch on. History: Consumer interest in trains as toys waned in the late 1950s, but has experienced a resurgence since the late 1990s due in large part to the popularity of Thomas the Tank Engine. Today, S gauge and O gauge railroads are still considered toy trains even by their adherents and are often accessorized with semi-scale model buildings by Plasticville or K-Line (who owns the rights to the Plasticville-like buildings produced by Marx from the 1950s to the 1970s). However, due to their high cost, one is more likely to find an HO scale or N scale train set in a toy store than an O scale set. Many modern electric toy trains contain sophisticated electronics that emit digitized sound effects and allow the operator to safely and easily run multiple remote control trains on one loop of track. In recent years, many toy train operators will run a train with a TV camera in the front of the engine, hooked up to a screen such as a computer monitor. This shows an image similar to the view from a real (smaller-size) railroad.
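As a quick check of the scale arithmetic quoted under "USA classification" above, the sketch below computes the ratio implied by a millimetres-to-the-foot convention; it is a minimal illustration in Python, with a hypothetical helper name.

```python
# Scale ratio implied by a "millimetres to the (real) foot" convention.
MM_PER_FOOT = 304.8  # one foot expressed in millimetres

def ratio_from_mm_per_foot(mm_per_scale_foot: float) -> float:
    """Return the denominator N of the 1:N scale for the given convention."""
    return MM_PER_FOOT / mm_per_scale_foot

print(ratio_from_mm_per_foot(7.0))   # ~43.5: 7 mm/ft, rounded to 1:43 (British O)
print(ratio_from_mm_per_foot(6.35))  # 48.0: quarter inch (6.35 mm)/ft -> 1:48 (US O)
```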
**Bateman transform** Bateman transform: In the mathematical study of partial differential equations, the Bateman transform is a method for solving the Laplace equation in four dimensions and the wave equation in three by using a line integral of a holomorphic function in three complex variables. It is named after the English mathematician Harry Bateman, who first published the result in (Bateman 1904). The formula asserts that if $f$ is a holomorphic function of three complex variables, then

$$\phi(w,x,y,z) = \oint_\gamma f\big((w+ix)+(iy+z)\zeta,\ (iy-z)+(w-ix)\zeta,\ \zeta\big)\, d\zeta$$

is a solution of the Laplace equation, which follows by differentiation under the integral. Furthermore, Bateman asserted that the most general solution of the Laplace equation arises in this way.
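The "differentiation under the integral" step can be made explicit. The short derivation below (a verification sketch, not taken from Bateman's paper) writes $u_1$ and $u_2$ for the two $\zeta$-dependent arguments of $f$ and checks that the integrand is harmonic in $(w,x,y,z)$ for each fixed $\zeta$:

```latex
% Write u_1 = (w+ix) + (iy+z)\zeta and u_2 = (iy-z) + (w-ix)\zeta, and set
% F(w,x,y,z) = f(u_1, u_2, \zeta) for fixed \zeta. The first derivatives are
%   \partial_w u_1 = 1,      \partial_x u_1 = i,        \partial_y u_1 = i\zeta,  \partial_z u_1 = \zeta,
%   \partial_w u_2 = \zeta,  \partial_x u_2 = -i\zeta,  \partial_y u_2 = i,       \partial_z u_2 = -1.
% By the chain rule, with f_{jk} denoting the second partials of f:
\begin{aligned}
F_{ww} + F_{xx} &= (1 + i^2)\, f_{11} + 2(\zeta + \zeta)\, f_{12} + (\zeta^2 - \zeta^2)\, f_{22} = 4\zeta f_{12},\\
F_{yy} + F_{zz} &= (-\zeta^2 + \zeta^2)\, f_{11} - 2(\zeta + \zeta)\, f_{12} + (-1 + 1)\, f_{22} = -4\zeta f_{12},\\
\Delta F &= F_{ww} + F_{xx} + F_{yy} + F_{zz} = 0.
\end{aligned}
```

Since the integrand is harmonic for every $\zeta$ on the contour, differentiating under the integral sign gives $\Delta\phi = 0$.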
**Solar eclipse of August 1, 1943** Solar eclipse of August 1, 1943: An annular solar eclipse occurred on Sunday, August 1, 1943. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. Solar eclipse of August 1, 1943: Annularity was visible in the southern Indian Ocean, with the only land in the path being Île Amsterdam in French Madagascar (now belonging to the French Southern and Antarctic Lands). A partial solar eclipse was visible from Australia, Indonesia, Malaysia, eastern Madagascar, and Antarctica's Wilkes Land. Related eclipses: Solar eclipses 1942–1946 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. Note: The partial solar eclipse on September 10, 1942 occurs in the previous lunar year eclipse set. Related eclipses: Saros 125 Solar saros 125, repeating about every 18 years and 11 days, contains 73 events. The series started with a partial solar eclipse on February 4, 1060. It has total eclipses from June 13, 1276, to July 16, 1330. It has hybrid eclipses on July 26, 1348, and August 7, 1366, and annular eclipses from August 17, 1384, to August 22, 1979. The series ends at member 73 as a partial eclipse on April 9, 2358. The longest total eclipse occurred on June 25, 1294, lasting 1 minute and 11 seconds; the longest annular eclipse occurred on July 10, 1907, lasting 7 minutes and 23 seconds. Related eclipses: Metonic series The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur on nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that, or every 3.8 years (1387.94 days). All eclipses in this series occur at the Moon's ascending node.
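The repeat periods quoted above are easy to check by date arithmetic. Here is a minimal Python sketch; the period lengths are the usual approximations (about 6585.32 days for one saros, and the 6939.69-day metonic cycle given in the text):

```python
from datetime import datetime, timedelta

eclipse = datetime(1943, 8, 1)
saros = timedelta(days=6585.32)    # one saros: about 18 years and 11 days
metonic = timedelta(days=6939.69)  # one metonic cycle: 19 years

# One saros later lands in August 1961, the next member of saros 125.
print((eclipse + saros).date())    # 1961-08-11
# One metonic cycle later falls on nearly the same calendar date.
print((eclipse + metonic).date())  # 1962-07-31
```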
**Gallbladder** Gallbladder: In vertebrates, the gallbladder, also known as the cholecyst, is a small hollow organ where bile is stored and concentrated before it is released into the small intestine. In humans, the pear-shaped gallbladder lies beneath the liver, although the structure and position of the gallbladder can vary significantly among animal species. It receives and stores bile, produced by the liver, via the common hepatic duct, and releases it via the common bile duct into the duodenum, where the bile helps in the digestion of fats. Gallbladder: The gallbladder can be affected by gallstones, formed by material that cannot be dissolved – usually cholesterol or bilirubin, a product of hemoglobin breakdown. These may cause significant pain, particularly in the upper-right corner of the abdomen, and are often treated with removal of the gallbladder (called a cholecystectomy). Cholecystitis, inflammation of the gallbladder, has a wide range of causes, including impaction of gallstones, infection, and autoimmune disease. Structure: The gallbladder is a hollow grey-blue organ that sits in a shallow depression below the right lobe of the liver. In adults, the gallbladder measures approximately 7 to 10 centimetres (2.8 to 3.9 inches) in length and 4 centimetres (1.6 in) in diameter when fully distended. The gallbladder has a capacity of about 50 millilitres (1.8 imperial fluid ounces). The gallbladder is shaped like a pear, with its tip opening into the cystic duct. The gallbladder is divided into three sections: the fundus, body, and neck. The fundus is the rounded base, angled so that it faces the abdominal wall. The body lies in a depression in the surface of the lower liver. The neck tapers and is continuous with the cystic duct, part of the biliary tree. The gallbladder fossa, against which the fundus and body of the gallbladder lie, is found beneath the junction of hepatic segments IVB and V. The cystic duct unites with the common hepatic duct to become the common bile duct. At the junction of the neck of the gallbladder and the cystic duct, there is an out-pouching of the gallbladder wall forming a mucosal fold known as "Hartmann's pouch". Lymphatic drainage of the gallbladder follows the cystic node, which is located between the cystic duct and the common hepatic duct. Lymphatics from the lower part of the organ drain into lower hepatic lymph nodes. All the lymph finally drains into celiac lymph nodes. Structure: Microanatomy The gallbladder wall is composed of a number of layers. The innermost surface of the gallbladder wall is lined by a single layer of columnar cells with a brush border of microvilli, very similar to intestinal absorptive cells. Underneath the epithelium is a lamina propria, a muscular layer, an outer perimuscular layer and serosa. Unlike elsewhere in the intestinal tract, the gallbladder does not have a muscularis mucosae, and the muscular fibres are not arranged in distinct layers. The mucosa, the inner portion of the gallbladder wall, consists of a lining of a single layer of columnar cells, with cells possessing small hair-like attachments called microvilli. This sits on a thin layer of connective tissue, the lamina propria. The mucosa is curved and collected into tiny outpouchings called rugae. A muscular layer sits beneath the mucosa. This is formed by smooth muscle, with fibres that lie in longitudinal, oblique and transverse directions, and are not arranged in separate layers.
The muscle fibres here contract to expel bile from the gallbladder. A distinctive feature of the gallbladder is the presence of Rokitansky–Aschoff sinuses, deep outpouchings of the mucosa that can extend through the muscular layer, and which indicate adenomyomatosis. The muscular layer is surrounded by a layer of connective and fat tissue. The outer layer of the fundus of the gallbladder, and the surfaces not in contact with the liver, are covered by a thick serosa, which is exposed to the peritoneum. The serosa contains blood vessels and lymphatics. The surfaces in contact with the liver are covered in connective tissue. Structure: Variation The gallbladder varies in size, shape, and position among different people. Rarely, two or even three gallbladders may coexist, either as separate bladders draining into the cystic duct, or sharing a common branch that drains into the cystic duct. Additionally, the gallbladder may fail to form at all. Gallbladders with two lobes separated by a septum may also exist. These abnormalities are not likely to affect function and are generally asymptomatic. The location of the gallbladder in relation to the liver may also vary, with documented variants including gallbladders found within, above, on the left side of, behind, and detached or suspended from the liver. Such variants are very rare: from 1886 to 1998, only 110 cases of left-lying gallbladder, or less than one per year, were reported in the scientific literature. An anatomical variation can occur, known as a Phrygian cap, which is an innocuous fold in the fundus, named after its resemblance to the Phrygian cap. Structure: Development The gallbladder develops from an endodermal outpouching of the embryonic gut tube. Early in development, the human embryo has three germ layers and abuts an embryonic yolk sac. During the second week of embryogenesis, as the embryo grows, it begins to surround and envelop portions of this sac. The enveloped portions form the basis for the adult gastrointestinal tract. Sections of this foregut begin to differentiate into the organs of the gastrointestinal tract, such as the esophagus, stomach, and intestines. During the fourth week of embryological development, the stomach rotates. The stomach, originally lying in the midline of the embryo, rotates so that its body is on the left. This rotation also affects the part of the gastrointestinal tube immediately below the stomach, which will go on to become the duodenum. By the end of the fourth week, the developing duodenum begins to sprout a small outpouching on its right side, the hepatic diverticulum, which will go on to become the biliary tree. Just below this is a second outpouching, known as the cystic diverticulum, that will eventually develop into the gallbladder. Function: The main functions of the gallbladder are to store and concentrate bile, also called gall, needed for the digestion of fats in food. Produced by the liver, bile flows through small vessels into the larger hepatic ducts and ultimately through the cystic duct (parts of the biliary tree) into the gallbladder, where it is stored. At any one time, 30 to 60 millilitres (1.0 to 2.0 US fl oz) of bile is stored within the gallbladder. When food containing fat enters the digestive tract, it stimulates the secretion of cholecystokinin (CCK) from I cells of the duodenum and jejunum. In response to cholecystokinin, the gallbladder rhythmically contracts and releases its contents into the common bile duct, eventually draining into the duodenum.
The bile emulsifies fats in partly digested food, thereby assisting their absorption. Bile consists primarily of water and bile salts, and also acts as a means of eliminating bilirubin, a product of hemoglobin metabolism, from the body. The bile secreted by the liver and stored in the gallbladder is not the same as the bile that the gallbladder eventually releases. During storage in the gallbladder, bile is concentrated 3–10 fold by the removal of some water and electrolytes. This occurs through the active transport of sodium and chloride ions across the epithelium of the gallbladder, which creates an osmotic gradient that also causes water and other electrolytes to be reabsorbed. Clinical significance: Gallstones Gallstones form when the bile is saturated, usually with either cholesterol or bilirubin. Most gallstones do not cause symptoms, with stones either remaining in the gallbladder or passed along the biliary system. When symptoms occur, severe "colicky" pain in the upper right part of the abdomen is often felt. If the stone blocks the gallbladder, inflammation known as cholecystitis may result. If the stone lodges in the biliary system, jaundice may occur; if the stone blocks the pancreatic duct, pancreatitis may occur. Gallstones are diagnosed using ultrasound. When a symptomatic gallstone occurs, it is often managed by waiting for it to be passed naturally. Given the likelihood of recurrent gallstones, surgery to remove the gallbladder is often considered. Some medications, such as ursodeoxycholic acid, may be used; lithotripsy, a non-invasive mechanical procedure used to break down the stones, may also be used. Clinical significance: Inflammation Known as cholecystitis, inflammation of the gallbladder is commonly caused by obstruction of the duct with gallstones, a condition known as cholelithiasis. Blocked bile accumulates, and pressure on the gallbladder wall may lead to the release of substances that cause inflammation, such as phospholipase. There is also a risk of bacterial infection. An inflamed gallbladder is likely to cause sharp, localised pain, fever, and tenderness in the upper right corner of the abdomen, and may produce a positive Murphy's sign. Cholecystitis is often managed with rest and antibiotics, particularly cephalosporins and, in severe cases, metronidazole. Additionally, the gallbladder may need to be removed surgically if inflammation has progressed far enough. Clinical significance: Gallbladder removal A cholecystectomy is a procedure in which the gallbladder is removed. It may be removed because of recurrent gallstones, and is considered an elective procedure. A cholecystectomy may be an open procedure or a laparoscopic one. In the surgery, the gallbladder is removed from the neck to the fundus, so that bile drains directly from the liver into the biliary tree. About 30 percent of patients may experience some degree of indigestion following the procedure, although severe complications are much rarer. About 10 percent of surgeries lead to the chronic condition of postcholecystectomy syndrome. Clinical significance: Complication Biliary injury (bile duct injury) is traumatic damage to the bile ducts. It is most commonly an iatrogenic complication of cholecystectomy (surgical removal of the gallbladder), but can also be caused by other operations or by major trauma. The risk of biliary injury is higher during laparoscopic cholecystectomy than during open cholecystectomy.
Biliary injury may lead to several complications and may even cause death if not diagnosed in time and managed properly. Ideally, biliary injury should be managed at a center with facilities and expertise in endoscopy, radiology and surgery. Biloma is a collection of bile within the abdominal cavity. It happens when there is a bile leak, for example after surgery to remove the gallbladder (laparoscopic cholecystectomy), with an incidence of 0.3–2%. Other causes are biliary surgery, liver biopsy, abdominal trauma, and, rarely, spontaneous perforation. Clinical significance: Cancer Cancer of the gallbladder is uncommon and mostly occurs in later life. When cancer occurs, it is mostly of the glands lining the surface of the gallbladder (adenocarcinoma). Gallstones are thought to be linked to the formation of cancer. Other risk factors include large (>1 cm) gallbladder polyps and a highly calcified "porcelain" gallbladder. Cancer of the gallbladder can cause attacks of biliary pain, yellowing of the skin (jaundice), and weight loss. A large gallbladder may be palpable in the abdomen. Liver function tests may be elevated, particularly GGT and ALP, with ultrasound and CT scans considered the medical imaging investigations of choice. Cancer of the gallbladder is managed by removing the gallbladder; however, as of 2010, the prognosis remains poor. Cancer of the gallbladder may also be found incidentally after surgical removal of the gallbladder, with 1–3% of cancers identified in this way. Gallbladder polyps are mostly benign growths or lesions resembling growths that form in the gallbladder wall, and are only associated with cancer when they are larger in size (>1 cm). Cholesterol polyps, often associated with cholesterolosis ("strawberry gallbladder", a change in the gallbladder wall due to excess cholesterol), often cause no symptoms and are thus often detected in this way. Clinical significance: Tests Tests used to investigate gallbladder disease include blood tests and medical imaging. A full blood count may reveal an increased white cell count suggestive of inflammation or infection. Tests such as bilirubin and liver function tests may reveal whether there is inflammation linked to the biliary tree or gallbladder, and whether this is associated with inflammation of the liver; a lipase or amylase may be elevated if there is pancreatitis. Bilirubin may rise when there is obstruction of the flow of bile. A CA 19-9 level may be taken to investigate for cholangiocarcinoma. An ultrasound is often the first medical imaging test performed when gallbladder disease, such as gallstones, is suspected. An abdominal X-ray or CT scan is another form of imaging that may be used to examine the gallbladder and surrounding organs. Other imaging options include MRCP (magnetic resonance cholangiopancreatography), ERCP and percutaneous or intraoperative cholangiography. A cholescintigraphy scan is a nuclear imaging procedure used to assess the condition of the gallbladder. Other animals: Most vertebrates have gallbladders, but the form and arrangement of the bile ducts may vary considerably. In many species, for example, there are several separate ducts running to the intestine, rather than the single common bile duct found in humans.
Several species of mammals (including horses, deer, rats, and laminoids), several species of birds (such as pigeons and some psittacine species), lampreys and all invertebrates do not have a gallbladder. The bile from several species of bears is used in traditional Chinese medicine; bile bears are kept alive in captivity while their bile is extracted, in an industry characterized by animal cruelty. History: Depictions of the gallbladder and biliary tree are found in Babylonian models dating from 2000 BCE, and in an ancient Etruscan model from 200 BCE, with models associated with divine worship. Diseases of the gallbladder are known to have existed in humans since antiquity, with gallstones found in the mummy of Princess Amenen of Thebes dating to 1500 BCE. Some historians believe the death of Alexander the Great may have been associated with an acute episode of cholecystitis. The existence of the gallbladder has been noted since the 5th century, but it is only relatively recently that the function and the diseases of the gallbladder have been documented, particularly in the last two centuries. The first descriptions of gallstones appear to have been in the Renaissance, perhaps because of the low incidence of gallstones in earlier times owing to a diet with more cereals and vegetables and less meat. Anthonius Benevinius in 1506 was the first to draw a connection between symptoms and the presence of gallstones. Ludwig Georg Courvoisier, after examining a number of cases in 1890 that gave rise to the eponymous Courvoisier's law, stated that in an enlarged, nontender gallbladder, the cause of jaundice is unlikely to be gallstones. The first surgical removal of a gallstone (cholecystolithotomy) was in 1676 by the physician Joenisius, who removed the stones from a spontaneously occurring biliary fistula. Stough Hobbs in 1867 performed the first recorded cholecystotomy, although such an operation had in fact been described earlier by the French surgeon Jean Louis Petit in the mid-eighteenth century. German surgeon Carl Langenbuch performed the first cholecystectomy in 1882 for a sufferer of cholelithiasis. Before this, surgery had focused on creating a fistula for drainage of gallstones. Langenbuch reasoned that, given that several other species of mammal have no gallbladder, humans could survive without one. The debate over whether surgical removal of the gallbladder or simply of the gallstones was preferable was settled in the 1920s, with the consensus that removal of the gallbladder was preferred. It was only in the mid and late parts of the twentieth century that medical imaging techniques, such as the use of contrast medium and CT scans, were used to view the gallbladder. The first laparoscopic cholecystectomy was performed by Erich Mühe of Germany in 1985, although French surgeons Phillipe Mouret and Francois Dubois are often credited for their operations in 1987 and 1988 respectively.
Society and culture: To have "gall" is associated with bold, belligerent behaviour, whereas to have "bile" is associated with sourness. In Chinese medicine, the gallbladder (膽) is associated with the Wuxing element of wood; in excess its emotion is belligerence, and in deficiency, cowardice. In the Chinese language it is related to a myriad of idioms, including terms such as "a body completely [of] gall" (渾身是膽) to describe a forward person, and "single, alone gallbladder hero" (孤膽英雄) to describe a lone hero; in English, similarly, someone may be said to "have a lot of gall to talk like that". In the Zangfu theory of Chinese medicine, the gallbladder is an extraordinary fu (yang) organ, as it holds bile. The gallbladder not only has a digestive role, but is seen as the seat of decision-making and judgement.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Escutcheon (furniture)** Escutcheon (furniture): An escutcheon (ih-SKUTCH-ən) is a general term for a decorative plate used to conceal a functioning, non-architectural item. Escutcheon is an Old Norman word derived from the Latin word scutum, meaning a shield. Escutcheons are most often used in conjunction with mechanical, electrical, and plumbing components and fixtures where a pipe, tube, or conduit passes through a wall [or other material] surface. The escutcheon is used to bridge the gap between the outside diameter of the pipe and the inside diameter of the opening in said surface. Escutcheon (furniture): An escutcheon can also refer to an item of door furniture. In this case, it is an architectural item that surrounds a keyhole or lock cylinder, and is often part of a lockset. Escutcheons help to protect a lock cylinder from being drilled out or snapped, and to protect the surrounding area from damage and wear from the end of the key when it misses the keyhole. Some escutcheons come in pairs, with a plain one to go on the outside of the door while the matching escutcheon inside has a rotating cover to prevent prying eyes. The cover also prevents insects and dust from getting into the house/room.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mathematics (journal)** Mathematics (journal): Mathematics is a semi-monthly peer-reviewed open-access scientific journal that covers all aspects of mathematics. It publishes theoretical and experimental research articles, short communications, and reviews. It was established in 2013 and is published by MDPI. The editor-in-chief is Francisco Chiclana (De Montfort University). Abstracting and indexing: The journal is abstracted and indexed in Current Contents/Physical, Chemical & Earth Sciences; EBSCO databases; Metadex; ProQuest databases; Science Citation Index Expanded; Scopus; and zbMATH Open (from 2013 to 2018). According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.592.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Well-known text representation of geometry** Well-known text representation of geometry: Well-known text (WKT) is a text markup language for representing vector geometry objects. A binary equivalent, known as well-known binary (WKB), is used to transfer and store the same information in a more compact form convenient for computer processing, but one that is not human-readable. The formats were originally defined by the Open Geospatial Consortium (OGC) and described in their Simple Feature Access. The current standard definition is in the ISO/IEC 13249-3:2016 standard. Geometric objects: WKT can represent the following distinct geometric objects: Point, MultiPoint, LineString, MultiLineString, Polygon, MultiPolygon, Triangle, PolyhedralSurface, TIN (triangulated irregular network), and GeometryCollection. Coordinates for geometries may be 2D (x, y), 3D (x, y, z), 4D (x, y, z, m) with an m value that is part of a linear referencing system, or 2D with an m value (x, y, m). Three-dimensional geometries are designated by a "Z" after the geometry type, and geometries with a linear referencing system have an "M" after the geometry type. Empty geometries that contain no coordinates can be specified by using the symbol EMPTY after the type name. Geometric objects: WKT geometries are used throughout OGC specifications and are present in applications that implement these specifications. For example, PostGIS contains functions that can convert geometries to and from a WKT representation, making them human readable. The OGC standard definition requires a polygon to be topologically closed. It also states that if the exterior linear ring of a polygon is defined in a counterclockwise direction, then it will be seen from the "top". Any interior linear rings should be defined in the opposite fashion compared to the exterior ring, in this case clockwise. Other examples of geometric WKT strings, each an individual geometry, include POINT (30 10), LINESTRING (30 10, 10 30, 40 40), and POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10)). Well-known binary: Well-known binary (WKB) representations are typically shown in hexadecimal strings. The first byte indicates the byte order for the data: 00 for big endian, 01 for little endian. The next 4 bytes are a 32-bit unsigned integer for the geometry type (1 = Point, 2 = LineString, 3 = Polygon, 4 = MultiPoint, 5 = MultiLineString, 6 = MultiPolygon, 7 = GeometryCollection). Each data type has a unique data structure, such as the number of points or linear rings, followed by coordinates as 64-bit double numbers. For example, the geometry POINT(2.0 4.0) is represented as 000000000140000000000000004010000000000000, where: the 1-byte integer 00 (0) indicates big endian; the 4-byte integer 00000001 (1) indicates POINT (2D); the 8-byte float 4000000000000000 (2.0) is the x-coordinate; and the 8-byte float 4010000000000000 (4.0) is the y-coordinate. Format variations: EWKT and EWKB – Extended Well-Known Text/Binary: a PostGIS-specific format that includes the spatial reference system identifier (SRID) and up to 4 ordinate values (XYZM). For example, SRID=4326;POINT(-44.3 60.1) locates a longitude/latitude coordinate using the WGS 84 reference coordinate system. It also supports circular curves, following elements named (but not fully defined) within the original WKT: CircularString, CompoundCurve, CurvePolygon and CompoundSurface. Format variations: AGF Text – Autodesk Geometry Format: an extension to OGC's standard (at the time) to include curved elements, most notably used in MapGuide.
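To make the byte layout just described concrete, here is a minimal Python sketch (not part of the original article; names are illustrative) that decodes the POINT(2.0 4.0) WKB hex string above using only the standard library:

```python
import struct

# The big-endian WKB hex string for POINT(2.0 4.0) from the text above.
wkb = bytes.fromhex("000000000140000000000000004010000000000000")

# First byte: byte order (0 = big endian, 1 = little endian).
prefix = ">" if wkb[0] == 0 else "<"

# Next 4 bytes: 32-bit unsigned geometry type (1 = Point in 2D).
(geom_type,) = struct.unpack(prefix + "I", wkb[1:5])
assert geom_type == 1, "expected a 2D Point"

# Remaining 16 bytes: two 64-bit IEEE 754 doubles, x then y.
x, y = struct.unpack(prefix + "2d", wkb[5:21])

print(f"POINT({x} {y})")  # -> POINT(2.0 4.0)
```

In practice one would rely on an existing library (for example, PostGIS functions or a WKT/WKB parsing package) rather than hand-decoding, but the sketch shows how little structure the format actually has.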
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Muddler** Muddler: A muddler is a bartender's tool, used like a pestle to mash—or muddle—fruits, herbs and spices in the bottom of a glass to release their flavor. Description: The tool is shaped like a small baseball bat and must be long enough to touch the bottom of the glass being used. The bottom of a muddler may be textured, toothed, or smooth. Muddlers can be made from plastic, stainless steel, or wood. Use: Ingredients are muddled in the bottom of a glass before any liquids are added. Cocktails that require the use of a muddler include the mojito (made with light rum), the caipirinha (made with cachaça), the caipiroska (made with vodka), the mint julep (made with Bourbon whiskey), and the old fashioned (made with whiskey or brandy).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bitruncation** Bitruncation: In geometry, a bitruncation is an operation on regular polytopes. It represents a truncation beyond rectification. The original edges are lost completely and the original faces remain as smaller copies of themselves. Bitruncated regular polytopes can be represented by an extended Schläfli symbol notation t1,2{p,q,...} or 2t{p,q,...}. In regular polyhedra and tilings: For regular polyhedra (i.e. regular 3-polytopes), a bitruncated form is the truncated dual. For example, a bitruncated cube is a truncated octahedron. In regular 4-polytopes and honeycombs: For a regular 4-polytope, a bitruncated form is a dual-symmetric operator. A bitruncated 4-polytope is the same as the bitruncated dual, and will have double the symmetry if the original 4-polytope is self-dual. A regular polytope (or honeycomb) {p, q, r} will have its {p, q} cells bitruncated into truncated {q, p} cells, and the vertices are replaced by truncated {q, r} cells. Self-dual {p,q,p} 4-polytopes/honeycombs: An interesting result of this operation is that self-dual 4-polytopes {p,q,p} (and honeycombs) remain cell-transitive after bitruncation. There are five such forms, corresponding to the five truncated regular polyhedra t{q,p}. Two are honeycombs on the 3-sphere, one is a honeycomb in Euclidean 3-space, and two are honeycombs in hyperbolic 3-space.
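As a small worked instance of the notation above, using only the relation already stated in the text (a bitruncated cube is a truncated octahedron), the extended Schläfli symbols can be written in LaTeX as:

```latex
% Bitruncating the cube {4,3} yields the truncation of its dual,
% the octahedron {3,4}:
\[
  2\mathrm{t}\{4,3\} \;=\; \mathrm{t}_{1,2}\{4,3\} \;=\; \mathrm{t}\{3,4\}
\]
```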
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**78K** 78K: 78K is the trademark name of a 16- and 8-bit microcontroller family manufactured by Renesas Electronics and originally developed by NEC, starting in 1986. The basis of the 78K Family is an accumulator-based, register-bank CISC architecture. 78K: 78K is a single-chip microcontroller, which usually integrates program ROM, data RAM, serial interfaces, timers, I/O ports, an A/D converter, an interrupt controller, and a CPU core on one die. Its application area is mainly simple mechanical-system controls and man-machine interfaces. Regarding software development tools, C compilers and macro-assemblers are available. As for development-tool hardware, full probing-pod-type and debug-port-type in-circuit emulators and flash ROM programmers are available. 78K: Historically, the family has 11 series with 9 instruction set architectures. As of 2018, three instruction set architectures (the 8-bit 78K0, the 8-bit 78K0S, and the 16-/8-bit 78K0R) are still promoted for customers' new designs. In most cases, however, migration to the RL78 Family, which is the successor of the 78K0R and almost binary-level compatible with it, is recommended. Variants: 78K0 Series 78K0 Series (also known as 78K/0) is a long-running 8-bit single-chip microcontroller, which is the basis of the 78K0S and 78K0R Series. It contains 8× 8-bit registers ×4 banks. For 16-bit calculating instructions, it performs the ALU operation twice. Instructions are executed serially, without instruction pipelining. It has a 16-bit, 64K-byte address space. Some variants of the 78K0 have an affordable, compact 8-bit R-2R D/A converter, which is not guaranteed to be monotonic because it is neither trimmed for adjustment nor followed by an operational amplifier. Variants: In its earlier stages, the program memory was one-time PROM (OTP), UV-EPROM, or mask ROM; over time this became flash memory. 78K0S Series 78K0S Series (also known as 78K/0S) is a low-end version of the 78K0. It has 8× 8-bit registers, but without any banks. In addition, some instructions, such as multiplication and division, are removed from the 78K0 instruction set architecture. 78K0R Series 78K0R Series is a 16-bit single-chip microcontroller with 3-stage instruction pipelining. Variants: Its instruction set is similar to the 78K0's and covers 16- and 8-bit operations. It has a 20-bit, 1M-byte address space. 75 of its 80 instructions are identical to those of its successor, the RL78 Family. 178K0 Series 178K0 Series (also known as 178K/0) is a successor of NEC's 17K Family of 4-bit microcontrollers for DTS (Digital Tuning Systems) and remote controls. Variants: It integrates the 17K Family's peripheral functions with the 78K0 8-bit CPU core on a chip. 178K0S Series 178K0S Series (also known as 178K/0S) is also a successor of the 17K Family, with the 78K0S CPU core. 78K4 Series 78K4 Series (also known as 78K/4) is a 16-bit single-chip microcontroller with 16- and 8-bit operations. Variants: It has 16× 8-bit registers ×4 banks, which can also be used as 8× 16-bit registers ×4 banks. Some of these registers can also be used as 24-bit extensions for addressing modes. It has a 24-bit, 16M-byte address space. It has microcode-based operations, named Macro Service, with interrupt functions. 78K7 Series 78K7 Series (also known as 78K/7) is a 32-bit single-chip microcontroller with 32-, 16- and 8-bit operations. It has 8× 32-bit registers ×16 banks, which can also be used as 16× 16-bit registers ×16 banks or 16× 8-bit registers ×16 banks.
It has microcode-based operations, named Macro Service, with interrupt functions. It has a 24-bit, 16M-byte linear address space. It was used in some Quantum Fireball products, but was soon replaced by the V850 Family of 32-bit RISC microcontrollers. Variants: 78K6 Series 78K6 Series (also known as 78K/6) is a 16-bit single-chip microcontroller. Its lifetime was short, with few variants. 78K1 Series 78K1 Series (also known as 78K/1) is an 8-bit single-chip microcontroller. It has 8× 8-bit registers ×4 banks. The 78K1 Series is targeted at servo controls for videocassette recorders. The μPD78148 sub-series integrates two operational amplifiers. Variants: 78K3 Series 78K3 Series (also known as 78K/3) is a 16-bit single-chip microcontroller with 16- and 8-bit operations. It has 16× 8-bit registers ×8 banks, which can also be used as 8× 16-bit registers ×8 banks. Its address space is 16-bit, 64K bytes. It was developed as the high-end series of the 78K Family. It has microcode-based operations, named Macro Service, with interrupt functions. This series is used in hard disk drives, especially the Quantum Fireball series. Variants: The μPD78364 sub-series is used for inverter compressor controls. It is also used in the traction control systems of some cars. 78K2 Series 78K2 Series (also known as 78K/2) is an 8-bit single-chip microcontroller. It has 8× 8-bit registers ×4 banks. It was developed as the general-purpose series of the 78K Family. Predecessors: 87AD Family 87AD Family is an 8-bit single-chip microcontroller. It has 8× 8-bit registers ×4 banks. Its instruction set architecture became the basis of the 78K. 17K Family 17K Family is a 4-bit single-chip microcontroller, dedicated especially to DTS (Digital Tuning Systems) and remote controls. It has two planes of 128× 4-bit register files and a sophisticated, fully orthogonal instruction set. This instruction set is completely different from that of the 78K Family.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Picardy third** Picardy third: A Picardy third (French: tierce picarde), also known as a Picardy cadence or tierce de Picardie, is a major chord of the tonic at the end of a musical section that is either modal or in a minor key. This is achieved by raising the third of the expected minor triad by a semitone to create a major triad, as a form of resolution. For example, instead of a cadence ending on an A minor chord containing the notes A, C, and E, a Picardy third ending would consist of an A major chord containing the notes A, C♯, and E. The minor third between the A and C of the A minor chord has become a major third in the Picardy third chord. Picardy third: Philosopher Peter Kivy writes: Even in instrumental music, the picardy third retains its expressive quality: it is the "happy third". ... Since at least the beginning of the seventeenth century, it is no longer enough to describe it as a resolution to the more consonant triad; it is a resolution to the happier triad as well. ... The picardy third is absolute music's happy ending. Furthermore, I hypothesize that in gaining this expressive property of happiness or contentment, the picardy third augmented its power as the perfect, most stable cadential chord, being both the most emotionally consonant chord, so to speak, as well as the most musically consonant. Picardy third: According to Deryck Cooke, "Western composers, expressing the 'rightness' of happiness by means of a major third, expressed the 'wrongness' of grief by means of the minor third, and for centuries, pieces in a minor key had to have a 'happy ending' – a final major chord (the 'tierce de Picardie') or a bare fifth." As a harmonic device, the Picardy third originated in Western music in the Renaissance era. Illustration: In the final four measures of "I Heard the Voice of Jesus Say" (harmony by R. Vaughan Williams), what makes the cadence a Picardy cadence is shown by the red natural sign: instead of the expected B-flat (which would make the chord minor), the accidental gives a B natural, making the chord major. History: Name The term was first used in 1768 by Jean-Jacques Rousseau, although the practice was in use centuries earlier. Rousseau argues that "the term is used jokingly by musicians", suggesting it might never have had an academic basis or a tangible origin, and might have sprung out of idiomatic jokes in France in the first half of the 18th century. But his attempt at explaining why this term was used remains unconvincing: "the [practice] remained longer in Church Music, and, consequently, in Picardy, where there is music in a lot of cathedrals and churches". History: Robert Hall hypothesizes that, instead of deriving from the Picardy region of France, it comes from the Old French word "picart", meaning "pointed" or "sharp" in northern dialects, and thus refers to the musical sharp that transforms the minor third of the chord into a major third. The few Old French dictionaries in which the word picart (fem. picarde) appears give "aigu, piquant" as a definition. While piquant is quite straightforward (spiky, pointy, sharp), aigu is much more ambiguous, because it has the inconvenience of having at least three meanings: "high-pitched/treble", "sharp" as in a sharp blade, and "acute". Considering the definitions also state that the term can refer to a nail ("clou", i.e. a masonry nail), a pike, or a spit, it seems aigu might there be used to mean "pointy"/"sharp".
However, this would be "sharp" not in the desired sense, the one relating to a raised pitch, but in the sense of a sharp blade, which would discredit the word picart as the origin of the Picardy third. Ruling it out entirely also seems unwise, though, given the possibility that aigu was also used to refer to a high(er)-pitched note or a treble sound, which would neatly explain the use of the word picarde to designate a chord whose third is higher than it should be. Not to be ignored is the existence of the proverb "ressembler le Picard" ("to resemble a Picard", an inhabitant of Picardy), which meant "éviter le danger" (to avoid danger). This would link back to the humorous character of the term, which would thus have been used to mock supposedly cowardly composers who used the Picardy third as a way to avoid the gravity of the minor third, and perhaps the backlash they would have faced from the academic elite and the Church by going against the time's scholasticism. Ultimately, the origin of the name tierce picarde will likely never be known for sure, but what evidence there is seems to point towards these idiomatic jokes and proverbs, as well as the literal meaning of picarde as high-pitched and treble. History: Use In medieval music, such as that of Machaut, neither major nor minor thirds were considered stable intervals, and so cadences were typically on open fifths. As a harmonic device, the Picardy third originated in Western music in the Renaissance era. By the early seventeenth century, its use had become established in practice in music that was both sacred and secular. Examples of the Picardy third can be found throughout the works of J. S. Bach and his contemporaries, as well as earlier composers such as Thoinot Arbeau and John Blow. Many of Bach's minor-key chorales end with a cadence featuring a final chord in the major. In his book Music and Sentiment, Charles Rosen shows how Bach makes use of the fluctuations between minor and major to convey feeling in his music. Rosen singles out the Allemande from the keyboard Partita No. 1 in B-flat, BWV 825, to exemplify "the range of expression then possible, the subtle variety of inflections of sentiment contained with a well-defined framework". A passage from the first half of the piece starts in F major, but then, in bar 15, "Turning to the minor mode with a chromatic bass and then back to the major for the cadence adds still new intensity." Many passages in Bach's religious works follow a similar expressive trajectory involving major and minor keys that may sometimes take on a symbolic significance. For example, David Humphreys (1983, p. 23) sees the "languishing chromatic inflections, syncopations and appoggiaturas" of one episode from the St Anne Prelude for organ, BWV 552, from Clavier-Übung III as "showing Christ in his human aspect. Moreover the poignant angularity of the melody, and in particular the sudden turn to the minor, are obvious expressions of pathos, introduced as a portrayal of his Passion and crucifixion". Notably, Bach's two books of The Well-Tempered Clavier, composed in 1722 and 1744 respectively, differ considerably in their application of Picardy thirds, which occur unambiguously at the end of all of the minor-mode preludes and all but one of the minor-mode fugues in the first book. In the second book, however, fourteen of the minor-mode movements end on a minor chord or, occasionally, on a unison. Manuscripts vary in many of these cases.
History: While the device was used less frequently during the Classical era, examples can be found in works by Haydn and Mozart, such as the slow movement of Mozart's Piano Concerto No. 21, K. 467. Philip Radcliffe says that the dissonant harmonies here "have a vivid foretaste of Schumann and the way they gently melt into the major key is equally prophetic of Schubert". At the end of his opera Don Giovanni, Mozart uses the switch from minor to major to considerable dramatic effect: "As the Don disappears, screaming in agony, the orchestra settles in on a chord of D major. The change of mode offers no consolation, though: it is more like the tierce de Picardie, the 'Picardy third' (a famous misnomer derived from tierce picarte, 'sharp third'), the major chord that was used to end solemn organ preludes and toccatas in the minor keys in days of old." The fierce C minor drama that pervades the Allegro con brio ed appassionato movement from Beethoven's last Piano Sonata, Op. 111, dissipates as the prevailing tonality turns to the major in its closing bars "in conjunction with a concluding diminuendo to end the movement, somewhat unexpectedly, on a note of alleviation or relief". History: The switch from minor to major was a device used frequently and to great expressive effect by Schubert in both his songs and instrumental works. In his book on the song cycle Winterreise, singer Ian Bostridge speaks of the "quintessentially Schubertian effect in the final verse" of the opening song "Gute Nacht", "as the key shifts magically from minor to major". History: Susan Wollenberg describes how the first movement of Schubert's Fantasia in F minor for piano four hands, D 940, "ends in an extended Tierce de Picardie"; the subtle change from minor to major occurs in the bass at the beginning of bar 103. In the Romantic era, those of Chopin's nocturnes that are in a minor key almost always end with a Picardy third. A notable structural employment of this device occurs in the finale of Tchaikovsky's Fifth Symphony, where the motto theme makes its first appearance in the major mode. Interpretation: According to James Bennighof: "Replacing an expected final minor chord with a major chord in this way is a centuries-old technique—the raised third of the chord, in this case G♯ rather than G natural, was first dubbed a 'Picardy third' (tierce de Picarde) in print by Jean-Jacques Rousseau in 1797 ... to express [the idea that] hopefulness might seem unremarkable, or even clichéd." Notable examples: The Christian hymn tune "Picardy", often sung with the text "Let All Mortal Flesh Keep Silence", is based on a French carol from the 17th century or earlier. It is in a minor key, but the final chord is changed to major on the final verse. (Unknown) – "Coventry Carol" (written not later than 1591): modern harmonisations of this carol include the famously distinctive finishing major Picardy third in the melody, but the original 1591 harmonisation went much further with this device, including Picardy thirds at seven of the twelve tonic cadences notated, including all three such cadences in its chorus. The Beatles – "I'll Be Back", from the soundtrack album of the film A Hard Day's Night: Ian MacDonald speaks of the way "Lennon is harmonised by McCartney in shifting major and minor thirds, resolving on a Picardy third at the end of the first and second verses". Notable examples: Beethoven – Hammerklavier, slow movement. Brahms – Piano Trio No. 1, scherzo. Sarah Connor – "From Sarah with Love", final cadence. Coots and Gillespie – "You Go to My Head": Ted Gioia describes the song as starting "in the major key, but from the second bar onward, Mr. Coots seems intent on creating a feverish dream quality tending more to the minor mode" before finally reaching a cadence in the major. Notable examples: Dvořák – New World Symphony, finale. Bob Dylan – "Ain't Talkin'", the final song on Modern Times (2006), is played in E minor but ends (and ends the album) with a ringing E major chord. Notable examples: Roberta Flack – "Killing Me Softly with His Song", ending and resolution. According to Flack: "My classical background made it possible for me to try a number of things with [the song's arrangement]. I changed parts of the chord structure and chose to end on a major chord. [The song] wasn't written that way." Oliver Nelson – "Stolen Moments", from the 1961 album The Blues and the Abstract Truth: Ted Gioia sees "the brief resolve into the tonic major in bar four of the melody" as "a clever hook... one of the many interesting twists" in this jazz composition. Notable examples: Joni Mitchell – "Tin Angel", from Clouds (1969); the Picardy third lands on the lyric "I found someone to love today". According to Katherine Monk, the Picardy third in this song "suggests Mitchell is internally aware of romantic love's inability to provide true happiness but, gosh darn it, it's a nice illusion all the same." Donna Summer – "I Feel Love" (1977) alternates throughout with an accompaniment of "synth swirls: major and minor; it's basically a version of what Franz Schubert did for his whole career." The Fireballs – "Vaquero": this 1961 Tex-Mex instrumental composed by George Tomsco and Norman Petty is clearly in the key of E minor, and yet ends with a ringing E major chord. Hall & Oates – "Maneater": each verse has a Picardy third in the middle, moving from a major seventh in the second measure to a flat second in the third measure, and finally ending on a major first in the fourth measure. In the song's original key of B minor, this is an A major chord to a C major chord, ending on a B major chord. Notable examples: The Turtles – "Happy Together" (1967) alternates between major and minor keys, with the last chord of the outro featuring a Picardy third.
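As a toy illustration of the semitone-raising described at the start of this article, the following minimal Python sketch (not from any cited source; the pitch-class numbering and note names are illustrative assumptions) converts the A minor triad into its Picardy-third A major triad:

```python
# Pitch classes 0-11, with 0 = A; names chosen for this example only.
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def picardy(minor_triad):
    """Raise the middle note (the minor third) by one semitone."""
    root, third, fifth = minor_triad
    return (root, (third + 1) % 12, fifth)

a_minor = (0, 3, 7)         # A, C, E
a_major = picardy(a_minor)  # A, C#, E

print([NOTE_NAMES[n] for n in a_major])  # ['A', 'C#', 'E']
```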
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Non-ferrous extractive metallurgy** Non-ferrous extractive metallurgy: Non-ferrous extractive metallurgy is one of the two branches of extractive metallurgy; it pertains to the processes of reducing valuable, non-iron metals from ores or raw material. Metals like zinc, copper, lead, and aluminium, as well as rare and noble metals, are of particular interest in this field, while the more common metal, iron, is considered a major impurity. Like ferrous extraction, non-ferrous extraction primarily focuses on the economic optimization of extraction processes in separating qualitatively and quantitatively marketable metals from their impurities (gangue). Any extraction process will include a sequence of steps or unit processes for separating highly pure metals from undesirables in an economically efficient system. Unit processes are usually broken down into three categories: pyrometallurgy, hydrometallurgy, and electrometallurgy. In pyrometallurgy, the metal ore is first oxidized through roasting or smelting. The target metal is further refined at high temperatures and reduced to its pure form. In hydrometallurgy, the object metal is first dissociated from other materials using a chemical reaction, and is then extracted in pure form using electrolysis or precipitation. Finally, electrometallurgy generally involves electrolytic or electrothermal processing. The metal ore is either dissolved in an electrolyte or acid solution and then electrolytically deposited onto a cathode plate (electrowinning), or smelted and then melted using an electric arc or plasma arc furnace (electrothermic reactor). Another major difference in non-ferrous extraction is the greater emphasis on minimizing metal losses in slag. This is largely due to the exceptional scarcity and economic value of certain non-ferrous metals, which are, inevitably, discarded during the extraction process to some extent. Thus, material resource scarcity and shortages are of great concern to the non-ferrous industry. Recent developments in non-ferrous extractive metallurgy emphasize the reprocessing and recycling of rare and non-ferrous metals from secondary raw materials (scrap) found in landfills. History: Prehistory of non-ferrous extractive metallurgy In general, prehistoric extraction of metals, particularly copper, involved two fundamental stages: first, smelting the copper ore at temperatures exceeding 700 °C, needed to separate the gangue from the copper; second, melting the copper, which requires temperatures exceeding its melting point of 1080 °C. Given the technology available at the time, achieving these extreme temperatures posed a significant challenge. Early smelters developed ways to increase smelting temperatures effectively by feeding the fire with forced flows of oxygen. Copper extraction in particular is of great interest in archeometallurgical studies, since it dominated other metals in Mesopotamia from the early Chalcolithic until the mid-to-late sixth century BC. There is a lack of consensus among archaeometallurgists on the origin of non-ferrous extractive metallurgy. Some scholars believe that extractive metallurgy may have been simultaneously or independently discovered in several parts of the world. The earliest known use of pyrometallurgical extraction of copper occurred in Belovode, eastern Serbia, from the late sixth to early fifth millennium BC. However, there is also evidence of copper smelting in Tal-i-Iblis, southeastern Iran, which dates back to around the same period.
During this period, copper smelters used large in-ground pits filled with coal, or crucibles, to extract copper, but by the fourth millennium BC this practice had begun to phase out in favor of the smelting furnace, which had a larger production capacity. From the third millennium onward, the invention of the reusable smelting furnace was crucial to the success of large-scale copper production and the robust expansion of the copper trade through the Bronze Age. The earliest silver objects began appearing in the late fourth millennium BC in Anatolia, Turkey. Prehistoric silver extraction is strongly associated with the extraction of the less valuable metal, lead, although evidence of lead extraction technology predates silver by at least three millennia. Silver and lead extraction are also associated because the argentiferous (silver-bearing) ores used in the process often contain both elements. History: In general, prehistoric silver recovery was broken down into three phases: First, the silver-lead ore is roasted to separate the silver and lead from the gangue. The metals are then melted at high temperature (greater than 1100 °C) in the crucible while air is blown over the molten metal (cupellation). Finally, the lead is oxidized to form lead monoxide (PbO) or is absorbed into the walls of the crucible, leaving the refined silver behind. History: The silver-lead cupellation method was first used in Mesopotamia between 4000 and 3500 BC. Silver artifacts, dating to around 3600 BC, were discovered in Naqada, Egypt. Some of these cast silver artifacts contained less than 0.5% lead, which strongly indicates cupellation. History: Early to late Anglo-Saxon cupellation Cupellation was also being used in parts of Europe to extract gold, silver, zinc, and tin by the late ninth to tenth century AD. Here, one of the earliest examples of an integrated unit process for extracting more than one precious metal was introduced by Theophilus around the twelfth century. First, the gold-silver ore is melted down in the crucible, but with an excess amount of lead. The intense heat then oxidizes the lead, which reacts quickly and binds with the impurities in the gold-silver ore. Since both gold and silver have low reactivity with the impurities, they remain behind once the slag is removed. The last stage involves parting, in which the silver is separated from the gold. First, the gold-silver alloy is hammered into thin sheets and placed into a vessel. The sheets are then covered in urine, which contains sodium chloride (NaCl). The vessel is then capped and heated for several hours until the chlorides bind with the silver, creating silver chloride (AgCl). Finally, the silver chloride powder is removed and smelted to recover the silver, while the pure gold remains intact. History: Hydrometallurgy in Chinese antiquity During the Song Dynasty, Chinese copper output from domestic mining was in decline, and the resulting shortages caused miners to seek alternative methods for extracting copper. A new "wet process" for extracting copper from mine water was introduced between the eleventh and twelfth centuries, which helped to mitigate the loss of supply. History: Similar to the Anglo-Saxon method of cupellation, the Chinese employed a base metal to extract the target metal from its impurities. First, the base metal, iron, is hammered into thin sheets. The sheets are then placed into a trough filled with "vitriol water", i.e. copper-mine water, and left to steep for several days.
The mining water contains copper salts in the form of copper sulfate (CuSO4). The iron then reacts with the dissolved copper, displacing it from the sulfate ions and causing the copper to precipitate onto the iron sheets, forming a "wet" powder. Finally, the precipitated copper is collected and refined further through the traditional smelting process. This is the first large-scale use of a hydrometallurgical process.
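The displacement chemistry behind this "wet process" can be summarized in modern notation (a standard single-replacement equation, not taken from the historical sources):

```latex
% Iron displaces copper from copper(II) sulfate in the mine water,
% precipitating metallic copper onto the iron sheets:
\[
  \mathrm{Fe} + \mathrm{CuSO_4} \longrightarrow \mathrm{FeSO_4} + \mathrm{Cu}
\]
```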
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Poly(methyl acrylate)** Poly(methyl acrylate): Poly(methyl acrylate) (PMA) is a family of organic polymers with the formula (CH2CHCO2CH3)n. It is a synthetic acrylate polymer derived from the methyl acrylate monomer. The polymers are colorless. The homopolymer is far less important than copolymers derived from methyl acrylate and other monomers. PMA is softer than poly(methyl methacrylate) (PMMA); it is tough, leathery, and flexible. Copolymers: Far more important than PMA are copolymers produced from methyl acrylate and one or more of the following comonomers: methyl methacrylate, styrene, acrylonitrile, vinyl acetate, vinyl chloride, vinylidene chloride, and butadiene. Properties: It has a low glass-transition temperature of about 10 °C (12.5 °C in the case of PMA38). It is soluble in dimethyl sulfoxide (DMSO). PMA is water-sensitive and, unlike PMMA, is not stable against alkalis. High-energy radiation leads to cross-linking in PMA; in the structurally similar PMMA, however, degradation occurs instead. Uses: PMA is also used in leather finishing and textiles. Derivatives of this polymer are commonly used in orally administered pharmaceutical formulations to target specific regions of the gastrointestinal tract.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kodaira surface** Kodaira surface: In mathematics, a Kodaira surface is a compact complex surface of Kodaira dimension 0 and odd first Betti number. The concept is named after Kunihiko Kodaira. Kodaira surface: These are never algebraic, though they have non-constant meromorphic functions. They are usually divided into two subtypes: primary Kodaira surfaces with trivial canonical bundle, and secondary Kodaira surfaces which are quotients of these by finite groups of orders 2, 3, 4, or 6, and which have non-trivial canonical bundles. The secondary Kodaira surfaces have the same relation to primary ones that Enriques surfaces have to K3 surfaces, or bielliptic surfaces have to abelian surfaces. Invariants: If the surface is the quotient of a primary Kodaira surface by a group of order k = 1,2,3,4,6, then the plurigenera Pn are 1 if n is divisible by k and 0 otherwise. Kodaira surface: Hodge diamond: Examples: Take a non-trivial line bundle over an elliptic curve, remove the zero section, then quotient out the fibers by Z acting as multiplication by powers of some complex number z. This gives a primary Kodaira surface.
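The plurigenera rule just stated can be written compactly in LaTeX (a direct transcription of the sentence above, nothing added):

```latex
% Plurigenera of the quotient of a primary Kodaira surface by a
% group of order k (k = 1, 2, 3, 4, or 6):
\[
  P_n =
  \begin{cases}
    1 & \text{if } k \mid n,\\
    0 & \text{otherwise.}
  \end{cases}
\]
```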
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Palinopsia** Palinopsia: Palinopsia (Greek: palin for "again" and opsia for "seeing") is the persistent recurrence of a visual image after the stimulus has been removed. Palinopsia is not a diagnosis; it is a diverse group of pathological visual symptoms with a wide variety of causes. Visual perseveration is synonymous with palinopsia. In 2014, Gersztenkorn and Lee comprehensively reviewed all cases of palinopsia in the literature and subdivided it into two clinically relevant groups: illusory palinopsia and hallucinatory palinopsia. Hallucinatory palinopsia, usually due to seizures or posterior cortical lesions, describes afterimages that are formed, long-lasting, and high resolution. Illusory palinopsia, usually due to migraines, head trauma, prescription drugs, visual snow or hallucinogen persisting perception disorder (HPPD), describes afterimages that are affected by ambient light and motion and are unformed, indistinct, or low resolution. Presentation: People with palinopsia frequently report other visual illusions and hallucinations such as photopsias, dysmetropsia (i.e. Alice in Wonderland syndrome: micropsia, macropsia, teleopsia, and pelopsia), visual snow, oscillopsia, entoptic phenomena, and cerebral polyopia. Cause: Posterior visual pathway cortical lesions (tumor, abscess, hemorrhage, infarction, arteriovenous malformation, cortical dysplasia, aneurysm) and various seizure causes (hyperglycemia, ion channel mutations, Creutzfeldt–Jakob disease, idiopathic seizures, etc.) cause focal cortical hyperactivity or hyperexcitability, resulting in inappropriate, persistent activation of a visual memory circuit. Pathophysiology: Illusory palinopsia is a dysfunction of visual perception, resulting from diffuse, persistent alterations in neuronal excitability that affect physiological mechanisms of light or motion perception. Illusory palinopsia is caused by migraines, visual snow, HPPD, prescription drugs, or head trauma, or may be idiopathic. Trazodone, nefazodone, mirtazapine, topiramate, clomiphene, oral contraceptives, and risperidone have been reported to cause illusory palinopsia. A patient frequently has multiple types of illusory palinopsia, representing dysfunctions in both light and motion perception. Light and motion are processed via different pathways, suggesting diffuse or global alterations in excitability. Diagnosis: Differentiation from physiological afterimages Palinopsia is a pathological symptom and should be distinguished from physiological afterimages, a common and benign phenomenon. Physiological afterimages appear when viewing a bright stimulus and shifting visual focus. For example, after staring at a computer screen and looking away, a vague afterimage of the screen remains in the visual field. A given stimulus consistently produces the same afterimage, which depends on the stimulus intensity and contrast, the time of fixation, and the retinal adaptation state. Physiological afterimages are usually the complementary color of the original stimulus (negative afterimage), while palinoptic afterimages are usually the same color as the original stimulus (positive afterimage). There is some ambiguity between illusory palinopsia and physiological afterimages, since there are no concrete symptomatic criteria that determine whether an afterimage is pathological. Diagnosis: Illusory versus hallucinatory Illusory palinopsia is due to an abnormality in the original perception of a stimulus and is similar to a visual illusion: the distorted perception of a real external stimulus.
Hallucinatory palinopsia is due to an abnormality arising after a stimulus has been encoded in visual memory and is similar to a complex visual hallucination: the creation of a formed visual image where none exists. External conditions such as stimulus intensity, background contrast, fixation, and movement typically affect the generation and severity of illusory palinopsia but not hallucinatory palinopsia. Illusory palinopsia consists of afterimages that are short-lived or unformed, occur in the same location in the visual field as the original stimulus, and are continuous or predictable. Hallucinatory palinopsia describes formed afterimages and scenes that are lifelike, high-resolution, long-lasting, occur anywhere in the visual field, and are unpredictable. Illusory palinopsia is caused by diffuse neuronal pathology, such as global alterations in neurotransmitter receptors, while hallucinatory palinopsia is typically caused by focal cortical pathology. The clinical characteristics that separate illusory from hallucinatory palinopsia also help differentiate and assess risk in visual illusions and hallucinations. Complex (formed) visual hallucinations are more worrisome than simple visual hallucinations or visual illusions. Research: Research needs to be performed on the efficacy of the various pharmaceuticals for treating illusory palinopsia. It is unclear whether the symptoms' natural history and treatment are influenced by the cause. It is also not clear whether there is overlap in treatment efficacy for illusory palinopsia and the other co-existing, diffuse, persistent illusory phenomena such as visual snow, oscillopsia, dysmetropsia, and halos. Future advancements in fMRI could potentially further our understanding of hallucinatory palinopsia and visual memory. Increased accuracy in fMRI might also allow for the observation of subtle metabolic or perfusional changes in illusory palinopsia, without the ionizing radiation present in CT scans and radioactive isotopes. Studying the psychophysics of light and motion perception could advance our understanding of illusory palinopsia, and vice versa. For example, incorporating patients with visual trailing into motion perception studies could advance our understanding of the mechanisms of visual stability and motion suppression during eye movements (e.g. saccadic suppression).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded