**Komi (Go)** Komi (Go): Komi (込み, コミ) in the game of Go are points added to the score of the player with the white stones as compensation for playing second. The value of Black's first-move advantage is generally considered to be between 5 and 7 points by the end of the game. Komi (Go): Standard komi is 6.5 points under the Japanese and Korean rules; under Chinese, Ing and AGA rules standard komi is 7.5 points; under New Zealand rules standard komi is 7 points. Komi typically applies only to games where both players are evenly ranked. In the case of a one-rank difference, the stronger player will typically play with the white stones and players often agree on a simple 0.5-point komi to break a tie (jigo) in favour of white, or no komi at all. Komidashi (コミ出し) is the more complete Japanese language term. The Chinese term is tiē mù (simplified Chinese: 贴目; traditional Chinese: 貼目) and the Korean term is deom (덤). Komi (Go): Efforts have been made to determine the value of komi for boards much smaller than the standard 19x19 grid for go, such as 7x7. When introducing Environmental Go, Elwyn Berlekamp made a broad generalisation of komi to illustrate the practical value of the temperature concept from combinatorial game theory. Whole number and halves: Conventional komi in most competitions is a half-integer such as 6.5 points. This is convenient and the prevailing usage for knock-out tournaments, since it makes a tied game (jigo in Japanese) and rematches less likely (a drawn game is still possible under Japanese rules since the Japanese rule prohibiting repeated positions applies only to the simplest possibility, called 'ko'). In a club or friendly game this is not a problem, so a value such as 6 points is just as practical. Within a Swiss system draw, tied games are not convenient and tiebreakers are used. Whole number and halves: Some argue there is nothing wrong in having a tie. Forbidding a draw may misrepresent one player as superior when there is no difference in skill. History: White is at a disadvantage because Black gets to move first, giving that player sente ("initiative"). Records show that the winning percentage of Black is higher. The importance of playing first was, however, not dealt with by the rules until the 1920s, and then only tentatively. History: The compensation (komi) system was introduced into professional Go in Japan as a gradual process of innovation, beginning in the 1920s. The Hisekai, a Go organization established in 1922 and dissolved with the formation of the Nihon Ki-in in 1924, used a 4.5 point komi among its many rule innovations. The correct value of komi has been re-evaluated over the years, as professional opening strategy has evolved. History: At first, komi could be as low as 2.5 points or 3 points. It was later increased to 4.5, and then 5.5 points. A komi of 5.5 points was used for a long time, but research found that 5.5 points was insufficient to compensate for White's disadvantage. Statistical analyses of the year's games would sometimes appear in the Igo Nenkan (Kido Yearbook), backing up the intuition of many top players. The use of databases confirmed figures such as 53% victories for Black, not just at the highest level. History: Komi was then raised to 6.5. Some events use as high as 7.5 points. Under the Chinese method of counting, the difference between 5.5 and 6.5 points is of minimal effect. 
Chinese sources usually in fact quote figures that are halved, such as 2.75 for 5.5, at least for Chinese domestic competitions, as one stone (the scoring increment typically used in China) is equivalent to two points. History: Handicap games are almost universally played with a komi of 0.5 points. The advantage of playing one or more black stones (the number usually calculated as the difference in the players' ranks) before the white player's first move constitutes the remainder of the handicap, with the 0.5 komi determining White as the winner in games that would otherwise be a draw. John Fairbairn, a Go historian, has written on the history of komi. In his 1977 Introduction to Go he stated that the value was about 5. Effects on strategy: Since very minor mistakes can cost one point, discussion of the 'true' value for komi makes little sense, except at the level of the top-ranked players in the world. These are (in most cases) also the opening-theory experts, and evaluate opening strategies in practical play against their peers. Effects on strategy: The introduction and then increase of komi has led to ever more ambitious or aggressive strategies for Black, the first player. In the days before komi, White as second player had to disrupt the smooth working of Black's classical strategies, described sometimes as aiming for a sure win by 3 points. From the introduction of komi in most pro events, around 1950, Black's older methods had to be reconsidered, since White suddenly needed appreciably less (in pro terms) in secure area. The 3-3 point became an interesting play for White, where previously it appeared experimental, and was developed in particular by Go Seigen and Sakata Eio. Effects on strategy: In the following decades a mixture of classical and shinfuseki techniques became normal. The most obvious effect was the replacement of the 4-3 point by the 4-4 point as the most common way to first occupy a corner. Perfect Komi: In theory a perfect value of komi would make each game result in jigo (a draw) given perfect play by both sides. Since in practice no human or computer can play perfect Go, this value is not known with certainty. However, under area scoring rules and in the absence of seki, the perfect komi can be shown to be an odd integer: the 361 points of a 19x19 board are then divided entirely between the two players, so the difference between their scores is always odd. Statistics from professional and computer play suggest that 7 is the correct value. Local variations: Although 6.5 points is a common komi as of 2007, each country, association, and tournament may set its own specific komi: In Japan, the usual komi was once about 2.5 points. Some time later, it was raised to 4.5 points. In 1955 the Oza became the first tournament to adopt 5.5. The value of 5.5 became standard over some decades. The Nihon Ki-in increased the komi to 6.5 in 2002, citing Black's 51.855% win rate under the old rule. Local variations: In Korea, it used to be 5.5, but is now 6.5. In China, 5.5 points was common, but 7.5 is now standard. A value of 6.5 would seldom give a different result from 5.5 due to Chinese scoring rules. Local variations: In America, American Go Association (AGA) official rules used to specify 5.5 points; however, the AGA later suggested also experimenting with values up to 8.5 points in both informal games and tournaments in order to gather data to determine the effects of increasing U.S. komi officially. The American Go Association changed komi from 5.5 to 7.5 in August 2004, effective 2005. Local variations: The New Zealand rules specify a komi of 7. For the Ing Foundation (Ing rules) komi is specified as 8 points.
Due to the different counting method used by the Ing system, this komi is equivalent to 7.5 points under the Japanese rules. Types: Fixed compensation point system By far the most common type of komi is a fixed compensation point system. A fixed number of points, determined by the Go organization or the tournament director, is given to the second player (White) in an even game (without handicaps) to make up for the first-player (Black) advantage. Auction komi As no one can be absolutely sure of the ideal value for komi, systems without fixed komi are used in some amateur matches and tournaments. This is called auction komi. Examples of auction komi systems: the players hold an "auction" in which each says "I am willing to play black against XXX komi", and the player who wins the auction (offers the highest komi) plays black; or one player chooses the size of the komi, and the other player then chooses to play black or white. This second version of auction komi becomes equivalent to the pie rule applied to Go, if choosing the size of the komi is considered to be a move that the white player makes before the game would normally start. Pie rule One player chooses the komi, and the other player chooses whether to play black or white; or Black places the first stone, after which White decides whether to play black or white.
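The two non-fixed schemes above are simple protocols, and a small sketch makes them concrete. The following Python code is a hypothetical illustration only; the function names, bid format, and return values are assumptions, not part of any Go ruleset or software.

```python
# Hypothetical sketch of the auction komi and pie rule schemes described above.

def auction_komi(bid_a: float, bid_b: float) -> tuple[str, float]:
    """Each player states the komi they are willing to give as Black.
    The higher bidder plays Black and concedes that komi to White
    (ties arbitrarily go to player A here)."""
    if bid_a >= bid_b:
        return ("A plays Black", bid_a)
    return ("B plays Black", bid_b)

def pie_rule(chosen_komi: float, chooser_takes_black: bool) -> dict:
    """One player sets the komi; the other then picks a colour.
    A setter who names a komi they would accept on either side has
    no incentive to skew the value, which is the point of the rule."""
    return {
        "komi": chosen_komi,
        "black": "chooser" if chooser_takes_black else "setter",
    }

if __name__ == "__main__":
    print(auction_komi(6.5, 7.0))                      # ('B plays Black', 7.0)
    print(pie_rule(7.0, chooser_takes_black=True))     # {'komi': 7.0, 'black': 'chooser'}
```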
**Vertigo** Vertigo: Vertigo is a condition in which a person has the sensation of movement or of surrounding objects moving when they are not. Often it feels like a spinning or swaying movement. This may be associated with nausea, vomiting, sweating, or difficulties walking. It is typically worse when the head is moved. Vertigo is the most common type of dizziness.The most common disorders that result in vertigo are benign paroxysmal positional vertigo (BPPV), Ménière's disease, and vestibular neuritis. Less common causes include stroke, brain tumors, brain injury, multiple sclerosis, migraines, trauma, and uneven pressures between the middle ears. Physiologic vertigo may occur following being exposed to motion for a prolonged period such as when on a ship or simply following spinning with the eyes closed. Other causes may include toxin exposures such as to carbon monoxide, alcohol, or aspirin. Vertigo typically indicates a problem in a part of the vestibular system. Other causes of dizziness include presyncope, disequilibrium, and non-specific dizziness.Benign paroxysmal positional vertigo is more likely in someone who gets repeated episodes of vertigo with movement and is otherwise normal between these episodes. Benign vertigo episodes generally last less than one minute. The Dix-Hallpike test typically produces a period of rapid eye movements known as nystagmus in this condition. In Ménière's disease there is often ringing in the ears, hearing loss, and the attacks of vertigo last more than twenty minutes. In vestibular neuritis the onset of vertigo is sudden, and the nystagmus occurs even when the person has not been moving. In this condition vertigo can last for days. More severe causes should also be considered, especially if other problems such as weakness, headache, double vision, or numbness occur.Dizziness affects approximately 20–40% of people at some point in time, while about 7.5–10% have vertigo. About 5% have vertigo in a given year. It becomes more common with age and affects women two to three times more often than men. Vertigo accounts for about 2–3% of emergency department visits in the developed world. Classification: Vertigo is classified into either peripheral or central, depending on the location of the dysfunction of the vestibular pathway, although it can also be caused by psychological factors.Vertigo can also be classified into objective, subjective, and pseudovertigo. Objective vertigo describes when the person has the sensation that stationary objects in the environment are moving. Subjective vertigo refers to when the person feels as if they are moving. The third type is known as pseudovertigo, an intensive sensation of rotation inside the person's head. While this classification appears in textbooks, it is unclear what relation it has to the pathophysiology or treatment of vertigo. Classification: Peripheral Vertigo that is caused by problems with the inner ear or vestibular system, which is composed of the semicircular canals, the vestibule (utricle and saccule), and the vestibular nerve is called "peripheral", "otologic", or "vestibular" vertigo. The most common cause is benign paroxysmal positional vertigo (BPPV), which accounts for 32% of all peripheral vertigo. Other causes include Ménière's disease (12%), superior canal dehiscence syndrome, vestibular neuritis, and visual vertigo. 
Any cause of inflammation such as common cold, influenza, and bacterial infections may cause transient vertigo if it involves the inner ear, as may chemical insults (e.g., aminoglycosides) or physical trauma (e.g., skull fractures). Motion sickness is sometimes classified as a cause of peripheral vertigo.People with peripheral vertigo typically present with mild to moderate imbalance, nausea, vomiting, hearing loss, tinnitus, fullness, and pain in the ear. In addition, lesions of the internal auditory canal may be associated with facial weakness on the same side. Due to a rapid compensation process, acute vertigo as a result of a peripheral lesion tends to improve in a short period of time (days to weeks). Classification: Central Vertigo that arises from injury to the balance centers of the central nervous system (CNS), often from a lesion in the brainstem or cerebellum, is called "central" vertigo and is generally associated with less prominent movement illusion and nausea than vertigo of peripheral origin. Central vertigo may have accompanying neurologic deficits (such as slurred speech and double vision), and pathologic nystagmus (which is pure vertical/torsional). Central pathology can cause disequilibrium, which is the sensation of being off balance. The balance disorder associated with central lesions causing vertigo is often so severe that many people are unable to stand or walk.A number of conditions that involve the central nervous system may lead to vertigo including: lesions caused by infarctions or hemorrhage, tumors present in the cerebellopontine angle such as a vestibular schwannoma or cerebellar tumors, epilepsy, cervical spine disorders such as cervical spondylosis, degenerative ataxia disorders, migraine headaches, lateral medullary syndrome, Chiari malformation, multiple sclerosis, parkinsonism, as well as cerebral dysfunction. Central vertigo may not improve or may do so more slowly than vertigo caused by disturbance to peripheral structures. Alcohol can result in positional alcohol nystagmus (PAN). Signs and symptoms: Vertigo is a sensation of spinning while stationary. It is commonly associated with nausea or vomiting, unsteadiness (postural instability), falls, changes to a person's thoughts, and difficulties in walking. Recurrent episodes in those with vertigo are common and frequently impair the quality of life. Blurred vision, difficulty in speaking, a lowered level of consciousness, and hearing loss may also occur. The signs and symptoms of vertigo can present as a persistent (insidious) onset or an episodic (sudden) onset.Persistent onset vertigo is characterized by symptoms lasting for longer than one day and is caused by degenerative changes that affect balance as people age. Naturally, the nerve conduction slows with aging and a decreased vibratory sensation is common. Additionally, there is a degeneration of the ampulla and otolith organs with an increase in age. Persistent onset is commonly paired with central vertigo signs and symptoms.The characteristics of an episodic onset vertigo are indicated by symptoms lasting for a smaller, more memorable amount of time, typically lasting for only seconds to minutes. Pathophysiology: The neurochemistry of vertigo includes six primary neurotransmitters that have been identified between the three-neuron arc that drives the vestibulo-ocular reflex (VOR). Glutamate maintains the resting discharge of the central vestibular neurons and may modulate synaptic transmission in all three neurons of the VOR arc. 
Acetylcholine appears to function as an excitatory neurotransmitter in both the peripheral and central synapses. Gamma-Aminobutyric acid (GABA) is thought to be inhibitory for the commissures of the medial vestibular nucleus, the connections among the cerebellar Purkinje cells, the lateral vestibular nucleus, and the vertical VOR. Pathophysiology: Three other neurotransmitters work centrally. Dopamine may accelerate vestibular compensation. Norepinephrine modulates the intensity of central reactions to vestibular stimulation and facilitates compensation. Histamine is present only centrally, but its role is unclear. Dopamine, histamine, serotonin, and acetylcholine are neurotransmitters thought to produce vomiting. It is known that centrally acting antihistamines modulate the symptoms of acute symptomatic vertigo. Diagnosis: Tests for vertigo often attempt to elicit nystagmus and to differentiate vertigo from other causes of dizziness such as presyncope, hyperventilation syndrome, disequilibrium, or psychiatric causes of lightheadedness. Tests of vestibular system (balance) function include electronystagmography (ENG), Dix-Hallpike maneuver, rotation tests, head-thrust test, caloric reflex test, and computerized dynamic posturography (CDP).The HINTS test, which is a combination of three physical examination tests that may be performed by physicians at the bedside, has been deemed helpful in differentiating between central and peripheral causes of vertigo. The HINTS test involves the horizontal head impulse test, observation of nystagmus on primary gaze, and the test of skew. CT scans or MRIs are sometimes used by physicians when diagnosing vertigo.Tests of auditory system (hearing) function include pure tone audiometry, speech audiometry, acoustic reflex, electrocochleography (ECoG), otoacoustic emissions (OAE), and the auditory brainstem response test.A number of specific conditions can cause vertigo. In the elderly, however, the condition is often multifactorial.A recent history of underwater diving can indicate a possibility of barotrauma or decompression sickness involvement, but does not exclude all other possibilities. The dive profile (which is frequently recorded by dive computer) can be useful to assess a probability for decompression sickness, which can be confirmed by therapeutic recompression. Diagnosis: Benign paroxysmal positional vertigo Benign paroxysmal positional vertigo (BPPV) is the most common vestibular disorder and occurs when loose calcium carbonate debris has broken off of the otoconial membrane and enters a semicircular canal thereby creating the sensation of motion. People with BPPV may experience brief periods of vertigo, usually under a minute, which occur with change in the position.This is the most common cause of vertigo. It occurs in 0.6% of the population yearly with 10% having an attack during their lifetime. It is believed to be due to a mechanical malfunction of the inner ear. BPPV may be diagnosed with the Dix-Hallpike test and can be effectively treated with repositioning movements such as the Epley maneuver. Diagnosis: Ménière's disease Ménière's disease is an inner ear disorder of unknown origin, but is thought to be caused by an increase in the amount of endolymphatic fluid present in the inner ear (endolymphatic hydrops). However, this idea has not been directly confirmed with histopathologic studies, but electrophysiologic studies have been suggestive of this mechanism. 
Ménière's disease frequently presents with recurrent, spontaneous attacks of severe vertigo in combination with ringing in the ears (tinnitus), a feeling of pressure or fullness in the ear (aural fullness), severe nausea or vomiting, imbalance, and hearing loss. As the disease worsens, hearing loss will progress. Diagnosis: Vestibular neuritis Vestibular neuritis presents with severe vertigo with associated nausea, vomiting, and generalized imbalance and is believed to be caused by a viral infection of the inner ear, although several theories have been put forward and the cause remains uncertain. Individuals with vestibular neuritis do not typically have auditory symptoms, but may experience a sensation of aural fullness or tinnitus. Persisting balance problems may remain in 30% of people affected. Diagnosis: Vestibular migraine Vestibular migraine is the association of vertigo and migraines and is one of the most common causes of recurrent, spontaneous episodes of vertigo. The cause of vestibular migraines is currently unclear; however, one hypothesized cause is that the stimulation of the trigeminal nerve leads to nystagmus in individuals with migraines. Approximately 40% of all migraine patients will have an accompanying vestibular syndrome, such as vertigo, dizziness, or disruption of the balance system.Other suggested causes of vestibular migraines include the following: unilateral neuronal instability of the vestibular nerve, idiopathic asymmetric activation of the vestibular nuclei in the brainstem, and vasospasm of the blood vessels supplying the labyrinth or central vestibular pathways resulting in ischemia to these structures. Vestibular migraines are estimated to affect 1–3% of the general population and may affect 10% of people with migraine . Additionally, vestibular migraines tend to occur more often in women and rarely affect individuals after the sixth decade of life. Diagnosis: Motion sickness Motion sickness is common and is related to vestibular migraine. It is nausea and vomiting in response to motion and is typically worse if the journey is on a winding road or involves many stops and starts, or if the person is reading in a moving car. It is caused by a mismatch between visual input and vestibular sensation. For example, the person is reading a book that is stationary in relation to the body, but the vestibular system senses that the car, and thus the body, is moving. Diagnosis: Alternobaric vertigo Alternobaric vertigo is caused by a pressure difference between the middle ear cavities, usually due to blockage or partial blockage of one eustachian tube, usually when flying or diving underwater. It is most pronounced when the diver is in the vertical position; the spinning is toward the ear with the higher pressure and tends to develop when the pressures differ by 60 cm of water or more. Diagnosis: Decompression sickness Vertigo is recorded as a symptom of decompression sickness in 5.3% of cases by the U.S. Navy as reported by Powell, 2008 including isobaric decompression sickness. Diagnosis: Decompression sickness can also be caused at a constant ambient pressure when switching between gas mixtures containing different proportions of different inert gases. This is known as isobaric counterdiffusion, and presents a problem for very deep dives. For example, after using a very helium-rich trimix at the deepest part of the dive, a diver will switch to mixtures containing progressively less helium and more oxygen and nitrogen during the ascent. 
Nitrogen diffuses into tissues 2.65 times slower than helium, but is about 4.5 times more soluble. Switching between gas mixtures that have very different fractions of nitrogen and helium can result in "fast" tissues (those tissues that have a good blood supply) increasing their total inert gas loading. This is often found to provoke inner ear decompression sickness, as the ear seems particularly sensitive to this effect. Diagnosis: Stroke A stroke (either ischemic or hemorrhagic) involving the posterior fossa is a cause of central vertigo. Risk factors for a stroke as a cause of vertigo include increasing age and known vascular risk factors. Presentation may more often involve headache or neck pain, additionally, those who have had multiple episodes of dizziness in the months leading up to presentation are suggestive of stroke with prodromal TIAs. The HINTS exam as well as imaging studies of the brain (CT, CT angiogram, MRI) are helpful in diagnosis of posterior fossa stroke. Diagnosis: Vertebrobasilar insufficiency Vertebrobasilar insufficiency, notably Bow Hunter's syndrome, is a rare cause of positional vertigo, especially when vertigo is triggered by rotation of the head. Management: Definitive treatment depends on the underlying cause of vertigo. People with Ménière's disease have a variety of treatment options to consider when receiving treatment for vertigo and tinnitus including: a low-salt diet and intratympanic injections of the antibiotic gentamicin or surgical measures such as a shunt or ablation of the labyrinth in refractory cases. Management: Common drug treatment options for vertigo may include the following: Anticholinergics such as hyoscine hydrobromide (scopolamine) Anticonvulsants such as topiramate or valproic acid for vestibular migraines Antihistamines such as betahistine, dimenhydrinate, or meclizine, which may have antiemetic properties Beta blockers such as metoprolol for vestibular migraine Corticosteroids such as methylprednisolone for inflammatory conditions such as vestibular neuritis or dexamethasone as a second-line agent for Ménière's diseaseAll cases of decompression sickness should be treated initially with 100% oxygen until hyperbaric oxygen therapy (100% oxygen delivered in a high-pressure chamber) can be provided. Several treatments may be necessary, and treatment will generally be repeated until either all symptoms resolve, or no further improvement is apparent. Etymology: Vertigo is from the Latin word, vertō, which means "a whirling or spinning movement".
**Phenprobamate** Phenprobamate: Phenprobamate (Gamaquil, Isotonil, Actozine) is a centrally acting skeletal muscle relaxant with additional sedative and anticonvulsant effects. The effects of overdose are similar to those of barbiturates. Its mechanism of action is probably similar to that of meprobamate. Phenprobamate has been used in humans as an anxiolytic, and is still sometimes used in general anesthesia and for treating muscle cramps and spasticity. Phenprobamate is still used in some European countries, but it has generally been replaced by newer drugs. Phenprobamate is metabolized by oxidative degradation of the carbamate group and ortho-hydroxylation of the benzene ring, and is eliminated in urine by the kidneys. Phenprobamate: Doses range from 400 to 800 mg, up to 3 times a day.
**TBX5 (gene)** TBX5 (gene): T-box transcription factor TBX5, (T-box protein 5) is a protein that in humans is encoded by the TBX5 gene. Abnormalities in the TBX5 gene can result in altered limb development, Holt-Oram syndrome, Tetra-amelia syndrome, and cardiac and skeletal problems. This gene is a member of a phylogenetically conserved family of genes that share a common DNA-binding domain, the T-box. T-box genes encode transcription factors involved in the regulation of developmental processes. This gene is closely linked to related family member T-box 3 (ulnar mammary syndrome) on human chromosome 12. TBX5 (gene): TBX5 is located on the long arm of chromosome 12. TBX5 produces a protein called T-box protein 5 that acts as a transcription factor. TBX5 is involved with forelimb and heart development. This gene impacts the early development of the forelimb by triggering fibroblast growth factor, FGF10. Function: TBX5 is a transcription factor that codes for the protein called T-box 5. The transcription factors it encodes are necessary for development, especially in the pattern formation of upper limbs and cardiac growth. TBX5 is involved with the development of the four heart chambers, the electrical conducting system, and the septum separating the right and left sides of the heart. Along with playing roles in the development of the heart, septum, and electrical system of the heart, it also activates genes that are involved in the development of the upper limbs, the arms and hands. This gene is also involved in the muscle connective tissue for muscle and tendon patterning. A study showed that deletion of TBX5 in forelimbs causes disruption in the muscle and tendon patterning without affecting the skeleton's development. T-box protein 5 expression is in the cells of the lateral plate mesoderm which form the forelimb bud and the cascade of limb initiation. In its absence, no forelimb bud forms. The encoded protein plays a major role in limb development, specifically during limb bud initiation. For instance, in chickens Tbx5 specifies forelimb status. The activation of Tbx5 and other T-box proteins by Hox genes activates signaling cascades that involve the Wnt signaling pathway and FGF signals in limb buds. Ultimately, Tbx5 leads to the development of apical ectodermal ridge (AER) and zone of polarizing activity (ZPA) signaling centers in the developing limb bud, which specify the orientation growth of the developing limb. Together with Tbx4, Tbx5 plays a role in patterning the soft tissues (muscles and tendons) of the musculoskeletal system.As a protein-coding gene, TBX5 encodes for the protein T-box Transcription Factor 5, which is a part of the T-box family of transcription factors. It also interacts with other genes, such as GATA4 and NKX2-5, and the BAF chromatin-remodeling complex to drive and repress gene expression during development. Function: Role in non-human animals Mice that were genetically modified to not have the TBX5 gene did not survive gestation, due to the heart not developing past embryonic day E10.5. Mice that only had one working copy of TBX5 were born with morphological problems such as enlarged hearts, atrial and ventral septum defects, and limb malformations similar to those found in the Holt-Oram Syndrome.Pigeons with feathered feet have Tbx5 active in the hind feet, which cause them to develop feathered hindlimbs with thicker bones, more similar to their frontlimb wings. 
Function: Role in human embryonic development A gene "knockout" model for TBX5 has been created by CRISPR/Cas9 genome editing. This homozygous TBX5 knockout human embryonic stem cell line, called TBX5-KO, maintained stem cell-like morphology, pluripotency markers, and a normal karyotype, and could differentiate into all three germ layers in vivo. This cell line can provide an in vitro platform for studying the pathogenic mechanisms and biological function of TBX5 in heart development. By understanding what happens in development without this gene, further treatment options for fetuses with a TBX5 mutation might become possible to prevent the severe cardiac defects associated with Holt-Oram syndrome. Clinical significance: Mutations in this gene can result in Holt–Oram syndrome, a developmental disorder affecting the heart and upper limbs. Holt-Oram syndrome can cause a hole in the septum, bone abnormalities in the fingers, wrists, or arms, and a conduction disease leading to abnormal heart rates and arrhythmias. The most common cardiac issue associated with this condition is malformation of the septum, which separates the left and right sides of the heart. Tetra-amelia syndrome is a condition in which forelimb malformation occurs because FGF-10 is not triggered due to Tbx5 mutations. This condition can lead to the absence of one or both forelimbs. Clinical significance: Skeletally, there may be abnormally bent fingers, sloping shoulders, and phocomelia. Cardiac defects include defects of ventricular and atrial septation and problems with the conduction system. Several transcript variants encoding different isoforms have been described for this gene. Interactions: TBX5 (gene) has been shown to interact with GATA4 and NKX2-5.
**Bidirectional texture function** Bidirectional texture function: Bidirectional texture function (BTF) is a 6-dimensional function depending on planar texture coordinates (x,y) as well as on view and illumination spherical angles. In practice this function is obtained as a set of several thousand color images of a material sample taken at different camera and light positions. Bidirectional texture function: The BTF is a representation of the appearance of texture as a function of viewing and illumination direction. It is an image-based representation, since the geometry of the surface is unknown and not measured. BTF is typically captured by imaging the surface at a sampling of the hemisphere of possible viewing and illumination directions. BTF measurements are collections of images. The term BTF was introduced first, and similar terms have since been introduced, including BSSRDF and SBRDF (spatial BRDF). SBRDF has a very similar definition to BTF, i.e. BTF is also a spatially varying BRDF. Bidirectional texture function: To cope with the massive, highly redundant BTF data, many compression methods have been proposed. Application of the BTF is in photorealistic material rendering of objects in virtual reality systems and for visual scene analysis, e.g., recognition of complex real-world materials using bidirectional feature histograms or 3D textons. Biomedical and biometric applications of the BTF include recognition of skin texture.
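A compact way to write the parameterization described above (the symbol names here are assumed for illustration, not taken from a specific source): the BTF assigns an appearance value to each combination of surface position, illumination direction, and viewing direction,

```latex
\mathrm{BTF}(x,\; y,\; \theta_i,\; \varphi_i,\; \theta_v,\; \varphi_v)
% (x, y): planar texture coordinates on the sample
% (theta_i, phi_i): illumination direction (spherical angles)
% (theta_v, phi_v): viewing direction (spherical angles)
```

which makes the six dimensions explicit and shows why a dense measurement requires thousands of images: each image fixes one (illumination, view) pair and samples all (x, y).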
**Bioartificial heart** Bioartificial heart: A bioartificial heart is an engineered heart that contains the extracellular structure of a decellularized heart and cellular components from a different source. Such hearts are of particular interest for therapy as well as research into heart disease. The first bioartificial hearts were created in 2008 using cadaveric rat hearts. In 2014, human-sized bioartificial pig hearts were constructed. Bioartificial hearts have not yet been developed for clinical use, although the recellularization of porcine hearts with human cells opens the door to xenotransplantation. Background: Heart failure is one of the leading causes of death. In 2013, an estimated 17.3 million of the 54 million total deaths per year were caused by cardiovascular diseases, meaning that about 31.5% of all deaths worldwide were due to them. Often, the only viable treatment for end-stage heart failure is organ transplantation. Currently organ supply is insufficient to meet the demand, which presents a large limitation in an end-stage treatment plan. A theoretical alternative to traditional transplantation processes is the engineering of personalized bioartificial hearts. Researchers have made many successful advances in the engineering of cardiovascular tissue and have looked towards using decellularized and recellularized cadaveric hearts in order to create a functional organ. Decellularization-recellularization involves using a cadaveric heart, removing the cellular contents while maintaining the protein matrix (decellularization), and subsequently facilitating growth of appropriate cardiovascular tissue inside the remaining matrix (recellularization). Over the past years, researchers have identified populations of cardiac stem cells that reside in the adult human heart. This discovery sparked the idea of regenerating the heart cells by taking the stem cells inside the heart and reprogramming them into cardiac tissues. The important properties of these stem cells are self-renewal, the ability to differentiate into cardiomyocytes, endothelial cells and vascular smooth muscle cells, and clonogenicity. These stem cells are capable of becoming myocytes, which stabilize the topography of the intercellular components and help control the size and shape of the heart, as well as vascular cells, which serve as a cell reservoir for the turnover and maintenance of the mesenchymal tissues. However, in vivo studies have demonstrated that the regenerative ability of implanted cardiac stem cells lies in the associated macrophage-mediated immune response and concomitant fibroblast-mediated wound healing and not in their functionality, since these effects were observed for both live and dead stem cells. Methodology: The preferred method to remove all cellular components from a heart is perfusion decellularization. This technique involves perfusing the heart with detergents such as SDS and Triton X-100 dissolved in distilled water. The remaining ECM is composed of structural elements such as collagen, laminin, elastin and fibronectin. The ECM scaffold promotes proper cellular proliferation and differentiation and vascular development, as well as providing mechanical support for cellular growth. Because minimal DNA material remains after the decellularization process, the engineered organ is biocompatible with the transplant recipient, regardless of species.
Unlike traditional transplant options, recellularized hearts are less immunogenic and have a decreased risk of rejection.Once the decellularized heart has been sterilized to remove any pathogens, the recellularization process can occur. Multipotent cardiovascular progenitors are then added to the decellularized heart and with additional exogenous growth factors, are stimulated to differentiate into cardiomyocytes, smooth muscle cells and endothelial cells. Recellularized heart functionality: The most promising results come from recellularized rat hearts. After only 8 days of maturation, the heart models were stimulated with an electrical signal to provide pacing. The heart models showed a unified contraction with a force equivalent to ~2% of a normal rat heart or ~25% of that of a 16-week-old human heart.Although far from use in a clinical setting, there have been great advances in the field of bioartificial heart generation. The use of decellularization and recellularization processes, has led to the production of a three dimensional matrix that promotes cellular growth; the repopulation of the matrix containing appropriate cell composition; and the bioengineering of organs demonstrating functionality (limited) and responsiveness to stimuli. This area shows immense promise and with future research may redefine treatment of end stage heart failure.
**Triangular fibrocartilage** Triangular fibrocartilage: The triangular fibrocartilage complex (TFCC) is formed by the triangular fibrocartilage discus (TFC), the radioulnar ligaments (RULs) and the ulnocarpal ligaments (UCLs). Structure: Triangular fibrocartilage disc The triangular fibrocartilage disc (TFC) is an articular discus that lies on the pole of the distal ulna. It has a triangular shape and a biconcave body; the periphery is thicker than its center. The central portion of the TFC is thin and consists of chondroid fibrocartilage; this type of tissue is often seen in structures that can bear compressive loads. This central area is often so thin that it is translucent and in some cases it is even absent. The peripheral portion of the TFC is well vascularized, while the central portion has no blood supply. Structure: This discus is attached by thick tissue to the base of the ulnar styloid and by thinner tissue to the edge of the radius just proximal to the radiocarpal articular surface. Structure: Radioulnar ligaments The radioulnar ligaments (RULs) are the principal stabilizers of the distal radioulnar joint (DRUJ). There are two RULs: the palmar and dorsal radioulnar ligaments.These ligaments arise from the distal radius medial border and insert on the ulna at two separate and distinct sites: the ulna styloid and the fovea (a groove that separates the ulnar styloid from the ulnar head). Each ligament consists of a superficial component and a deep component. The superficial components insert directly onto the ulna styloid. The deep components insert more anterior, into the fovea adjacent to the articular surface of the dome of the distal ulna.The ligaments are composed of longitudinally oriented lamellar collagen to resist tensile loads and have a rich vascular supply to allow healing. Structure: Ulnocarpal ligaments The ulnocarpal ligaments (UCLs) consist of the ulnolunate and the ulnotriquetral ligaments. They originate from the ulnar styloid and insert into the carpal bones of the wrist: the ulnolunate ligament inserts into the lunate bone and the ulnotriquetral ligament into the triquetrum bone. These ligaments prevent dorsal migration of the distal ulna. They are more taut during supination, because in supination ulnar styloid moves away from the carpal bones volar side. Function: The primary functions of the TFCC: To cover the ulna head by extending the articular surface of the distal radius. Load transmission across the ulnocarpal joint and partially load absorbing Allows forearm rotation by giving a strong but flexible connection between the distal radius and ulna. It also supports the ulnar portion of the carpus. Load transmission The TFCC is important in load transmission across the ulnar aspect of the wrist. The TFC transmits and absorbs compressive forces. Function: The ulnar variance influences the amount of load that is transmitted through the distal ulna. The load transmission is directly proportional to this ulnar variance. In neutral ulnar variance, approximately 20 percent of the load is transmitted. With negative ulnar variance, the load across the TFC is decreased. This occurs during supination, because the radius moves distally on the ulna and creates a negative ulnar variance. With positive ulnar variance it is reversed. The load that is transmitted across the TFC is then increased. This positive ulnar variance occurs during pronation. Function: Rotation The TFCC is a major stabilizer of the DRUJ. 
To control forearm rotation, the DRUJ acts in concert with the proximal radioulnar joint. The connection between the distal radius and the distal ulna maintains the congruency of the DRUJ. This attachment is mainly created by the RULs of the TFCC. These ligaments support the joint through its arc of rotation. The role of the TFCC in supination and in pronation is a matter of dispute. Some authors (Schuind et al.) concluded that the dorsal fibers of the TFCC tighten in pronation, and the palmar fibers in supination. These conclusions are opposite of those published by Af Ekenstam and Hagert. Both parties are in fact right, as the RULs consist of two ligaments, each made of two components: the superficial and the deep ligaments. During supination, the superficial palmar and the deep dorsal ligaments are tightened, preventing palmar translation of the ulna. In pronation, this is reversed: the superficial dorsal and the deep palmar ligaments are tightened and prevent dorsal translation of the ulna. Clinical significance: The TFCC has a substantial risk for injury and degeneration because of its anatomic complexity and multiple functions. Application of an extension-pronation force to an axially loaded wrist, such as in a fall on an outstretched hand, causes most of the traumatic injuries of the TFCC. Dorsal rotation injury, such as when a drill binds and rotates the wrist instead of the bit, can also cause traumatic injuries. Clinical significance: Injury may also occur from a distraction force applied to the volar forearm or wrist. Finally, tears of the TFCC are frequently found in patients with distal radius fractures. Perforations and defects in the TFCC are not all traumatic. There is an age-related correlation with lesions in the TFCC, but many of these defects are asymptomatic. These lesions commonly occur in patients with positive ulnar variance. Chronic and excessive loading through the ulnocarpal joint causes degenerative TFCC tears. These tears are a component of ulnar impaction syndrome. Clinical significance: Even though natural degeneration of the ulnocarpal joint is very common, it is important to recognize. In cadaveric examinations, 30% to 70% of the cases had TFCC perforations and chondromalacia of the ulnar head, lunate, and triquetrum. Cases with ulnar-negative variance had fewer degenerative changes. Palmer classification of TFCC lesions The Palmer classification is the most recognized scheme; it divides TFCC lesions into two categories: traumatic and degenerative. This classification provides an anatomic description of tears; it does not guide treatment or indicate prognosis. Clinical significance: Class 1 – Traumatic Class 1A. Central perforation Class 1B. Ulnar avulsion (with or without styloid fracture) Class 1C. Distal avulsion (from carpus) Class 1D. Radial avulsion (with or without sigmoid notch fracture) Class 2 – Degenerative (ulnar impaction syndrome) Class 2A. TFCC wear Class 2B. TFCC wear with lunate and/or ulnar head chondromalacia Class 2C. TFCC perforation with lunate and/or ulnar head chondromalacia Class 2D. TFCC perforation with lunate and/or ulnar head chondromalacia, and with lunotriquetral ligament perforation Class 2E. TFCC perforation with lunate and/or ulnar head chondromalacia, with lunotriquetral ligament perforation, and with ulnocarpal arthritis Symptoms Patients with a TFCC injury usually experience pain or discomfort located at the ulnar side of the wrist, often just above the ulnar styloid.
However, there are also some patients who report diffuse pain throughout the entire wrist. Clinical significance: Rest can reduce pain and activity can make it worse, especially with rotating movements (supination and pronation) of the wrist or movements of the hand sideways in ulnar direction. Other symptoms patients with a TFCC injury frequently mention are: swelling, loss of grip strength, instability, and grinding or clicking sounds (crepitus) that can occur during activity of the wrist. Diagnosis AnamnesisInjuries to the TFCC may be preceded by a fall on a pronated outstretched arm; a rotational injury to the forearm; an axial load trauma to the wrist; or a distraction injury of the wrist in ulnar direction. However, not all patients can recall that a preceding trauma occurred. Physical examinationPalpation: The best place to palpate the TFCC is between the extensor carpi ulnaris (ECU) and the flexor carpi ulnaris (FCU), distal to the ulnar styloid and proximal to the pisiform bone. Tenderness in this area may be consistent with a TFCC lesion. Piano key sign: Dorsal DRUJ instability can cause a protruding ulna head, which can be pressed down. When you release the pressure, it will spring back in position again, just like a piano key. DRUJ stress test: With this provocation maneuver, the wrist is held in pronated or supinated position, while the physician attempts to manipulate the distal ulna in dorsal and volar direction. Painful laxity indicates DRUJ instability and suggests RUL pathology. Ulnar grind test: The forearm is fixated and the wrist is held in dorsiflexion. The physician then applies axial load, while he rotates and deviates the wrist in ulnar direction. Pain and crepitations during this provocation maneuver suggest DRUJ instability or arthritis. Imaging X-ray: X-rays of the wrist are made in two directions: posterior-anterior (PA) and lateral. Radiographs are useful to diagnose or rule out possible bone fractures, a positive ulnar variance or osteoarthritis. The TFCC is not visible on an X-ray, regardless of its condition. MRI: is, together with the findings of a careful physical examination, a helpful diagnostic tool to assess the condition of the TFCC. Nevertheless, the incidence of false-positive and false-negative MRI results is high. Arthrography: a dye is injected into the wrist joint. If there is a TFCC lesion the dye will leak from one joint compartment to another. Wrist arthroscopy: is an invasive diagnostic tool, but it remains to this day the most accurate way to identify TFCC lesions.Note: Imaging techniques can only be relevant together with the clinical findings of a carefully performed physical examination. Other than a TFCC injury, there are many possible causes for ulnar-sided wrist pain. Clinical significance: Differential diagnosis of TFCC injuries Tendinopathy of the ECU Ulnar styloid fracture Distal radius fracture DRUJ arthritis Pisiform bone fractures Hamate bone fractures Carpal instability Midcarpal instability Hypothenar hammer syndrome (ulnar artery thrombosis) Treatment The initial treatment for both traumatic and degenerative TFCC lesions, with a stable DRUJ, is conservative (nonsurgical) therapy. Patients may be advised to wear a temporary splint or cast to immobilize the wrist and forearm for four to six weeks. The immobilization allows scar tissue to develop which can help heal the TFCC. In addition, oral NSAIDs and corticosteroid joint injections can be prescribed for pain relief. 
Physiotherapy and occupational therapy can help patients recover after immobilization or surgery. Wrist support straps used in sports can also be used in mild cases to compress and minimize movement of the area. Indications for acute TFCC surgery are a clearly unstable DRUJ or the existence of additional unstable or displaced fractures. TFCC surgery is also indicated when conservative treatment proves insufficient after about 8–12 weeks. Clinical significance: Fractures of the radius bone are often associated with TFCC damage. If the fracture is treated surgically, it is recommended to evaluate and, if necessary, repair the TFCC as well. Closed fractures (where the skin is still intact) of the radius bone are treated non-surgically with a cast; the immobilization can also help heal the TFCC. Clinical significance: Surgical Arthroscopic debridement of TFC discus tissue: The central part of the TFC has no blood supply and therefore has no healing capacity. When a tear occurs in this area of the TFC, it typically creates an unstable flap of tissue that is likely to catch on other joint surfaces. Removing the damaged tissue (debridement) is then indicated. Arthroscopic debridement as a treatment for degenerative TFC tears associated with positive ulnar variance, unfortunately, shows poor results. Clinical significance: Arthroscopic repair of TFCC ligaments: Suturing of TFCC ligaments can sometimes be performed arthroscopically, but only if there is no serious damage to the ligaments or other surrounding structures. Even after a short period of time, torn ligaments tend to retract and therefore lose length. Retracted ligament ends are impossible to suture together again, and a reconstruction may be necessary. Open surgical repair of the TFCC: Open surgery is usually required for degenerative or more complex TFCC injuries, or if additional damage to the wrist or forearm has caused instability or displacement. It is a more invasive surgical technique compared to arthroscopic treatment, but the surgeon has better visibility and access to the TFCC. Options for open surgery Suturing of the RULs. This is, just like arthroscopic suturing of these ligaments, only possible when the damage is not too serious and if both ends of the ruptured ligament are not yet retracted. Anatomic reconstruction of the RULs using a tendon graft (e.g., the palmaris longus). The tendon graft is tunneled through drilled holes in the ulna and radius bones. This procedure is indicated for DRUJ instability caused by an irreparable TFCC. Capsular or extensor retinaculum plication. This surgical technique aims to improve DRUJ stability by shortening the joint capsule or the extensor retinaculum. It is mostly used for minor DRUJ instability and is less invasive compared to a complete RUL reconstruction. Shortening of the ulnar bone. Patients with a positive ulnar variance are more susceptible to TFCC damage. Shortening the ulnar bone may help relieve the excess pressure on the TFCC and prevent further degeneration.
**Aluminium arsenate** Aluminium arsenate: Aluminium arsenate is an inorganic compound with the formula AlAsO4. It is most commonly found as an octahydrate. It is a colourless solid that is produced by the reaction between sodium arsenate and a soluble aluminium salt. Aluminium arsenate occurs naturally as the mineral mansfieldite. The anhydrous form is known as the extremely rare fumarolic mineral alarsite. A synthetic hydrate of aluminium arsenate, with the formulation Al2O3·3As2O5·10H2O, is produced by a hydrothermal method. Modification of aluminium orthoarsenate was carried out by heating different samples to different temperatures; both amorphous and crystalline forms were obtained. The solubility product was determined to be 10^−18.06 for the aluminium arsenate hydrate of formula AlAsO4·3.5H2O. Aluminium arsenate: Like gallium arsenate and boron arsenate, it adopts the α-quartz-type structure. The high-pressure form has a rutile-type structure in which aluminium and arsenic are six-coordinate.
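For context, the quoted solubility product corresponds to the usual dissolution equilibrium for a sparingly soluble arsenate; the equation below is a standard textbook relation added here for illustration, using only the figure the passage already gives.

```latex
\mathrm{AlAsO_4\cdot 3.5\,H_2O \;\rightleftharpoons\; Al^{3+}(aq) + AsO_4^{3-}(aq) + 3.5\,H_2O},
\qquad
K_{sp} = [\mathrm{Al^{3+}}]\,[\mathrm{AsO_4^{3-}}] \approx 10^{-18.06}
```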
**Gyárfás–Sumner conjecture** Gyárfás–Sumner conjecture: In graph theory, the Gyárfás–Sumner conjecture asks whether, for every tree T and complete graph K, the graphs with neither T nor K as induced subgraphs can be properly colored using only a constant number of colors. Equivalently, it asks whether the T-free graphs are χ-bounded. Gyárfás–Sumner conjecture: It is named after András Gyárfás and David Sumner, who formulated it independently in 1975 and 1981 respectively. It remains unproven. In this conjecture, it is not possible to replace T by a graph with cycles. As Paul Erdős and András Hajnal have shown, there exist graphs with arbitrarily large chromatic number and, at the same time, arbitrarily large girth. Using these graphs, one can obtain graphs that avoid any fixed choice of a cyclic graph and clique (of more than two vertices) as induced subgraphs, and exceed any fixed bound on the chromatic number. The conjecture is known to be true for certain special choices of T, including paths, stars, and trees of radius two. Gyárfás–Sumner conjecture: It is also known that, for any tree T, the graphs that do not contain a subdivision of T are χ-bounded.
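In symbols, the χ-boundedness asked for above is the following standard condition (χ denotes the chromatic number and ω the clique number): the conjecture asserts that for every tree T there is a function f_T such that

```latex
\chi(G) \;\le\; f_T\!\bigl(\omega(G)\bigr)
\quad \text{for every graph } G \text{ with no induced copy of } T .
```

Restricting attention to graphs that also exclude a fixed complete graph K bounds ω(G), which recovers the "constant number of colors" phrasing used above.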
**Indirect approach** Indirect approach: The indirect approach is a military strategy described and chronicled by B. H. Liddell Hart after World War I. It was an attempt to find a solution to the problem of high casualty rates in conflict zones with high force-to-space ratios, such as the Western Front on which he served. The strategy calls for armies to advance along the line of least resistance. Quotations: From Liddell Hart: "Throughout the ages, effective results in war have rarely been attained unless the approach has had such indirectness as to ensure the opponent's unreadiness to meet it… In strategy, the longest way round is often the shortest way home." "A direct approach to the object exhausts the attacker and hardens the resistance by compression, whereas an indirect approach loosens the defender's hold by upsetting his balance." Quotations: From Sun Tzu: "In all fighting, the direct method may be used for joining battle, but indirect methods will be needed in order to secure victory. In battle, there are not more than two methods of attack – the direct and the indirect; yet these two in combination give rise to an endless series of maneuvers. The direct and the indirect lead on to each other in turn. It is like moving in a circle – you never come to an end. Who can exhaust the possibilities of their combination?" Principles: There were two fundamental principles which governed the indirect approach. Direct attacks on firm defensive positions almost never work and should never be attempted. Principles: To defeat the enemy, one must first disrupt his equilibrium. This cannot be an effect of the main attack; it must take place before the main attack is commenced. While Liddell Hart originally developed the theory for infantry, contact with J. F. C. Fuller helped shift his theory more towards tanks. The indirect approach would become a major factor in the development of blitzkrieg. Often misunderstood, the indirect approach is not a treatise against fighting direct battles; it was still based on the Clausewitzian ideal of direct combat and the destruction of an enemy force by arms. It was in reality an attempt to create a doctrine for the remobilization of warfare after the costly attrition of the strategic stalemate of the First World War.
**Retroactive data structure** Retroactive data structure: In computer science, a retroactive data structure is a data structure which supports efficient modifications to a sequence of operations that have been performed on the structure. These modifications can take the form of retroactive insertion, deletion, or updating of an operation that was performed at some time in the past. Some applications of retroactive data structures: In the real world there are many cases where one would like to modify a past operation from a sequence of operations. Listed below are some of the possible applications: Error correction: Incorrect input of data. The data should be corrected and all the secondary effects of the incorrect data removed. Some applications of retroactive data structures: Bad data: When dealing with large systems, particularly those involving a large amount of automated data transfer, it is not uncommon for bad data to enter the system. For example, suppose one of the sensors for a weather network malfunctions and starts to report garbage or incorrect data. The ideal solution would be to remove all the data that the sensor produced since it malfunctioned, along with all the effects the bad data had on the overall system. Some applications of retroactive data structures: Recovery: Suppose that a hardware sensor was damaged but is now repaired and data can again be read from the sensor. We would like to be able to insert the data back into the system as if the sensor had never been damaged in the first place. Manipulation of the past: Changing the past can be helpful in cases of damage control, and retroactive data structures are designed for intentional manipulation of the past. Time as a spatial dimension: Time cannot simply be treated as an additional spatial dimension. To illustrate this, suppose we map the dimension of time onto an axis of space. The data structure we will use to add the spatial time dimension is a min-heap. Let the y axis represent the key values of the items within the heap and the x axis be the spatial time dimension. After several insertions and delete-min operations (all done non-retroactively) our min-heap would appear as in figure 1. Now suppose we retroactively insert zero at the beginning of the operation list. Our min-heap would appear as in figure 2. Notice how the single operation produces a cascading effect which affects the entire data structure. Thus we can see that while time can be drawn as a spatial dimension, operations involving time produce dependencies which ripple forward when modifications are made with respect to time. Comparison to persistence: At first glance the notion of a retroactive data structure seems very similar to persistent data structures, since they both take into account the dimension of time. The key difference between persistent data structures and retroactive data structures is how they handle the element of time. A persistent data structure maintains several versions of a data structure, and operations can be performed on one version to produce another version of the data structure. Since each operation produces a new version, each version thus becomes an archive that cannot be changed (only new versions can be spawned from it). Since each version does not change, the dependence between versions also does not change. In retroactive data structures we allow changes to be made directly to previous versions. Since the versions are now interdependent, a single change can cause a ripple of changes across all later versions.
Figures 1 and 2 show an example of this rippling effect. Definition: Any data structure can be reformulated in a retroactive setting. In general the data structure involves a series of updates and queries made over some period of time. Let U = [u_{t1}, u_{t2}, u_{t3}, ..., u_{tm}] be the sequence of update operations performed at times t1 < t2 < ... < tm. The assumption here is that at most one operation can be performed for a given time t. Definition: Partially retroactive We define the data structure to be partially retroactive if it can perform update and query operations at the current time and support insertion and deletion of operations in the past. Thus for a partially retroactive data structure we are interested in the following operations: Insert(t, u): Insert a new operation u into the list U at time t. Definition: Delete(t): Delete the operation at time t from the list U. Given the above retroactive operations, a standard insertion operation would now take the form Insert(t, "insert(x)"). Any retroactive change to the operational history of the data structure can potentially affect every operation from the time of the change up to the present. For example, if we have t_{i-1} < t < t_{i+1}, then Insert(t, "insert(x)") would place a new operation, op, between the operations op_{i-1} and op_{i+1}. The current state of the data structure (i.e. the data structure at the present time) would then be in a state such that the operations op_{i-1}, op and op_{i+1} all happened in sequence, as if the operation op had always been there. See figures 1 and 2 for a visual example. Definition: Fully retroactive We define the data structure to be fully retroactive if, in addition to the partially retroactive operations, we also allow queries about the past. Similar to how the standard operation insert(x) becomes Insert(t, "insert(x)") in the partially retroactive model, the operation query(x) in the fully retroactive model now has the form Query(t, "query(x)"). Retroactive running times The running time of a retroactive data structure is based on the number of operations, m, performed on the structure, the number of operations, r, performed after the time at which the retroactive operation takes effect, and the maximum number of elements, n, in the structure at any single time. Definition: Automatic retro-activity The main question regarding automatic retro-activity is whether or not there is a general technique which can convert any data structure into an efficient retroactive counterpart. A simple approach is to perform a roll-back of all the changes made to the structure after the time of the retroactive operation that is to be applied. Once we have rolled back the data structure to the appropriate state, we can apply the retroactive operation to make the change we want. Once the change is made, we must then reapply all the changes we rolled back to put the data structure into its new state. While this works for any data structure, it is often inefficient and wasteful, especially once the number of changes that need to be rolled back is large. To create an efficient retroactive data structure we must instead examine the properties of the structure itself to determine where speed-ups can be realized. Thus there is no general way to convert any data structure into an efficient retroactive counterpart; Erik D. Demaine, John Iacono and Stefan Langerman proved this.
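As a rough illustration of the roll-back idea described above, the following Python sketch (the class and helper names are invented for this example, not taken from the literature) keeps the whole update history and simply replays it from the beginning for every query. Retroactive Insert(t, u) and Delete(t) just edit the history, so the cost is on the order of m re-applied updates per operation, which is exactly the inefficiency that motivates structure-specific retroactive designs.

```python
# Naive history-replay ("roll back and redo") retroactivity: correct for any
# underlying structure, but O(m) work per retroactive operation or query.
class RollbackRetroactive:
    def __init__(self, make_empty):
        self.make_empty = make_empty   # factory producing a fresh, empty structure
        self.history = []              # list of (time, update_function), kept sorted by time

    def insert(self, t, update):
        """Insert(t, u): add update u (a function structure -> None) at time t."""
        self.history.append((t, update))
        self.history.sort(key=lambda pair: pair[0])

    def delete(self, t):
        """Delete(t): remove the operation that was performed at time t."""
        self.history = [(ti, u) for ti, u in self.history if ti != t]

    def query(self, t, q):
        """Query(t, q): rebuild from scratch, applying every update with time <= t,
        then evaluate q on the result (the roll-back/replay step)."""
        s = self.make_empty()
        for ti, u in self.history:
            if ti > t:
                break
            u(s)
        return q(s)


# Usage: a retroactive set of integers (illustrative only).
rs = RollbackRetroactive(set)
rs.insert(10, lambda s: s.add(5))
rs.insert(30, lambda s: s.discard(5))
print(rs.query(20, lambda s: 5 in s))   # True  -- 5 is present between t=10 and t=30
rs.insert(15, lambda s: s.discard(5))   # retroactively remove 5 at an earlier time
print(rs.query(20, lambda s: 5 in s))   # False -- the change to the past ripples forward
```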
**Macmillan English Dictionary for Advanced Learners** Macmillan English Dictionary for Advanced Learners: Macmillan English Dictionary for Advanced Learners, also known as MEDAL, is an advanced learner's dictionary first published in 2002 by Macmillan Education. It shares most of the features of this type of dictionary: it provides definitions in simple language, using a controlled defining vocabulary; most words have example sentences to illustrate how they are typically used; and information is given about how words combine grammatically or in collocations. MEDAL also introduced a number of innovations. These include:
- "collocation boxes" giving lists of high-frequency collocates, identified using Sketch Engine software
- word frequency information, with the most frequent 7500 English words shown in red and categorised in three frequency bands, based on the idea, derived from Zipf's law, that a relatively small number of high-frequency words account for a high percentage of most texts
- "metaphor boxes", showing how the vocabulary used for expressing common concepts (such as "anger") tends to reflect a common metaphorical framework, based on George Lakoff's ideas of conceptual metaphor
- a 50-page section providing guidance on writing academic English, based on a collaboration with the Centre for English Corpus Linguistics in Louvain, Belgium and using the Centre's learner corpus data
The Macmillan English Dictionary also existed as an electronic dictionary, available free on the Web. Like most online dictionaries, it benefited from being able to update content regularly with new words and meanings. In addition to the dictionary, the online version had a thesaurus function enabling users to find synonyms for any word, phrase or meaning. There was also a blog (the Macmillan Dictionary Blog) with daily postings on language issues, especially on global English and language change. An "Open Dictionary" allowed users to provide their own dictionary entries for new words they had come across. The online edition was recognised as a good example of this emerging genre of reference publishing. The website of the electronic dictionary and the blog were closed on 30 June 2023. Related publications: Macmillan Essential Dictionary, a shorter version that contains the most basic vocabulary (over 45,000 headwords)
**Snowboarding** Snowboarding: Snowboarding is a recreational and competitive activity that involves descending a snow-covered surface while standing on a snowboard that is almost always attached to a rider's feet. It features in the Winter Olympic Games and Winter Paralympic Games. Snowboarding: Snowboarding was developed in the United States, inspired by skateboarding, sledding, surfing, and skiing. It became popular around the world, and was introduced as a Winter Olympic sport at Nagano in 1998 and featured in the Winter Paralympics at Sochi in 2014. As of 2015, its popularity (as measured by equipment sales) in the United States peaked in 2007 and has been in decline since. History: The first snowboards were developed in 1965 when Sherman Poppen, an engineer in Muskegon, Michigan, invented a toy for his daughters by fastening two skis together and attaching a rope to one end so he would have some control as they stood on the board and glided downhill. Dubbed the "snurfer" (combining snow and surfer) by his wife Nancy, the toy proved so popular among his daughters' friends that Poppen licensed the idea to a manufacturer, Brunswick Corporation, which sold about a million snurfers over the next decade; in 1966 alone, over half a million snurfers were sold. Modern snowboarding was pioneered by Tom Sims and Jake Burton Carpenter, who both contributed significant innovations and started influential companies. In February 1968, Poppen organized the first snurfing competition at a Michigan ski resort that attracted enthusiasts from all over the country. One of those early pioneers was Tom Sims, a devotee of skateboarding (a sport born in the 1950s when kids attached roller skate wheels to small boards that they steered by shifting their weight). In the 1960s, as an eighth grader in Haddonfield, New Jersey, Sims crafted a snowboard in his school shop class by gluing carpet to the top of a piece of wood and attaching aluminum sheeting to the bottom. He produced commercial snowboards in the mid-70s, including the Skiboard (also known as the Lonnie Toft flying banana), a molded polyethylene bottom with a Lonnie Toft signature skateboard deck attached to the top. Others experimented with board-on-snow configurations at this time, including Welsh skateboard enthusiasts Jon Roberts and Pete Matthews, who developed their own snowboards to use at their local dry ski slope. Also during this same period, in 1977, Jake Burton Carpenter, a Vermont native who had enjoyed snurfing since the age of 14, impressed the crowd at a Michigan snurfing competition with bindings he had designed to secure his feet to the board. That same year, he founded Burton Snowboards in Londonderry, Vermont. The "snowboards" were made of wooden planks that were flexible and had water ski foot traps. Very few people took up snowboarding at first, because the boards, at $38, were considered too expensive and were not allowed on many ski hills, but eventually Burton would become the biggest snowboarding company in the business. Burton's early designs for boards with bindings became the dominant features in snowboarding. History: The first competition to offer prize money was the National Snurfing Championship, held at Muskegon State Park in Muskegon, Michigan. In 1979, Jake Burton Carpenter came from Vermont to compete with a snowboard of his own design. There were protests about Jake entering with a non-snurfer board. Paul Graves, and others, advocated that Jake be allowed to race.
A "modified" "Open" division was created and won by Jake as the sole entrant. That race was considered the first competition for snowboards and is the start of what became competitive snowboarding. Ken Kampenga, John Asmussen and Jim Trim placed first, second and third respectively in the Standard competition with best two combined times of 24.71, 25.02 and 25.41; and Jake Carpenter won prize money as the sole entrant in the "open" division with a time of 26.35. In 1980 the event moved to Pando Winter Sports Park near Grand Rapids, Michigan because of a lack of snow that year at the original venue.In the early 1980s, Aleksey Ostatnigrosh and Alexei Melnikov, two Snurfers from the Soviet Union, patented design changes to the Snurfer to allow jumping by attaching a bungee cord, a single footed binding to the Snurfer tail, and a two-foot binding design for improved control.As snowboarding became more popular in the 1970s and 1980s, pioneers such as Dimitrije Milovich (founder of Winterstick out of Salt Lake City, UT), Jake Burton Carpenter (founder of Burton Snowboards from Londonderry, Vermont), Tom Sims (founder of Sims Snowboards), David Kemper (founder of Kemper Snowboards) and Mike Olson (founder of Gnu Snowboards) came up with new designs for boards and mechanisms that slowly developed into the snowboards and other related equipment. From these developments, modern snowboarding equipment usually consists of a snowboard with specialized bindings and boots.In April 1981, the "King of the Mountain" Snowboard competition was held at Ski Cooper in Colorado. Tom Sims along with an assortment of other snowboarders of the time were present. One entrant showed up on a homemade snowboard with a formica bottom that turned out to not slide so well on the snow. History: In 1982, the first USA National Snowboard race was held near Woodstock, Vermont, at Suicide Six. The race, organized by Graves, was won by Burton's first team rider Doug Bouton.In 1983, the first World Championship halfpipe competition was held at Soda Springs, California. Tom Sims, founder of Sims Snowboards, organized the event with the help of Mike Chantry, a snowboard instructor at Soda Springs.In 1985, the first World Cup was held in Zürs, Austria, further cementing snowboarding's recognition as an official international competitive sport. History: In 1990, the International Snowboard Federation (ISF) was founded to provide universal contest regulations. In addition, the United States of America Snowboard Association (USASA) provides instructing guidelines and runs snowboard competitions in the U.S. today, high-profile snowboarding events like the Winter X Games, Air & Style, US Open, Olympic Games and other events are broadcast worldwide. Many alpine resorts have terrain parks. History: At the 1998 Winter Olympic Games in Nagano, Japan, Snowboarding became an official Olympic event. France's Karine Ruby was the first ever to win an Olympic gold medal for Woman's Snowboarding at the 1998 Olympics, while Canadian Ross Rebagliati was the first ever to win an Olympic gold medal for Men's Snowboarding. History: Initially, ski areas adopted the sport at a much slower pace than the winter sports public. Indeed, for many years, there was animosity between skiers and snowboarders, which led to an ongoing skier vs snowboarder feud. Early snowboards were banned from the slopes by park officials. For several years snowboarders would have to take a small skills assessment prior to being allowed to ride the chairlifts. 
It was thought that an unskilled snowboarder would wipe the snow off the mountain. In 1985, only seven percent of U.S. ski areas allowed snowboarding, with a similar proportion in Europe. As equipment and skills improved, snowboarding gradually became more accepted. In 1990, most major ski areas had separate slopes for snowboarders. Now, approximately 97% of all ski areas in North America and Europe allow snowboarding, and more than half have jumps, rails and half pipes. History: In 2004, snowboarding had 6.6 million active participants. An industry spokesman said that "twelve year-olds are out-riding adults." The same article said that most snowboarders are 18–24 years old and that women constitute 25% of participants. History: There were 8.2 million snowboarders in the US and Canada for the 2009–2010 season. This was a 10% increase over the previous season, and snowboarders accounted for more than 30% of all snow sports participants. On 2 May 2012, the International Paralympic Committee announced that adaptive snowboarding (dubbed "para-snowboarding") would debut as a men's and women's medal event in the 2014 Paralympic Winter Games taking place in Sochi, Russia. Styles: Since snowboarding's inception as an established winter sport, it has developed various styles, each with its own specialized equipment and technique. The most common styles today are freeride, freestyle, and freecarve/race. These styles are used for both recreational and professional snowboarding. While each style is unique, there is overlap between them. Styles: Jibbing "Jibbing" is the term for technical riding on non-standard surfaces. The word "jib" is both a noun and a verb, depending on usage. As a noun, a jib includes metal rails, boxes, benches, concrete ledges, walls, vehicles, rocks and logs. As a verb, to jib refers to the action of jumping, sliding, or riding on top of objects other than snow. It is directly influenced by grinding a skateboard. Jibbing is a freestyle snowboarding technique of riding. Typically jibbing occurs in a snowboard resort park but can also be done in urban environments. Styles: Freeriding Freeriding is a style without a set of governing rules or set course, typically on natural, un-groomed terrain. The style allows for various snowboarding techniques in a fluid motion and spontaneity through naturally rugged terrain. It can be similar to freestyle with the exception that no man-made features are utilized. See also Backcountry snowboarding. Freestyle Freestyle snowboarding is any riding that includes performing tricks. In freestyle, the rider utilizes natural and man-made features such as rails, jumps, boxes, and innumerable others to perform tricks. It is a popular all-inclusive concept that distinguishes the creative aspects of snowboarding, in contrast to a style like alpine snowboarding. Alpine snowboarding Alpine snowboarding is a discipline within the sport of snowboarding. It is practiced on groomed pistes. It has been an Olympic event since 1998. Styles: Sometimes called freecarving or hardbooting (due to the equipment used), this discipline usually takes place on hard-packed snow or groomed runs (although it can be practiced in any and all conditions) and focuses on carving linked turns, much like surfing or longboarding, and is seen as superior to other disciplines in many European countries. Little or no jumping takes place in this discipline.
Alpine snowboarding consists of a small portion of the general snowboard population that has a well-connected social community and its own specific board manufacturers, most situated in Europe. Alpine snowboard equipment includes a ski-like hardshell boot and plate binding system with a true directional snowboard that is stiffer and narrower to manage linking turns with greater forces and speed. Shaped skis can thank these "freecarve" snowboards for the cutting-edge technology leading to their creation. A skilled alpine snowboarder can link numerous turns into a run, placing their body very close to the ground in each turn, similar to a motocross turn or waterski carve. Depending on factors including stiffness, turning radius and personality, this can be done slowly or fast. Styles: Carvers make perfect half-circles out of each turn, changing edges when the snowboard is perpendicular to the fall line and starting every turn on the downhill edge. Carving on a snowboard is like riding a roller coaster, because the board will lock into a turn radius and provide what feels like multiple Gs of acceleration. Alpine snowboarding shares more visual similarities with skiing equipment than it does with snowboarding equipment. Compared to freestyle snowboarding gear: boards are narrower, longer, and stiffer to improve carving performance; boots are made from a hard plastic shell, making them flex differently from regular snowboard boots, and are designed differently from ski boots although they look similar. Styles: bindings have a bail or step-in design and are sometimes placed on suspension plates to provide a layer of isolation between an alpine snowboarder and the board, to decrease the level of vibrations felt by the rider, creating a better overall experience when carving, and to give extra weight to the board among other uses. Styles: Slopestyle Competitors perform tricks while descending a course, moving around, over, across, up, or down terrain features. The course is full of obstacles including boxes, rails, jumps, jibs, or anything else the board or rider can slide across. Slopestyle is a judged event, and winning a slopestyle contest usually comes from successfully executing the most difficult line in the terrain park while having a smooth, flowing line of difficult, mistake-free tricks performed on the obstacles. However, overall impression and style can play a factor in winning a slopestyle contest, and the rider who lands the hardest tricks will not always win over the rider who lands easier tricks on more difficult paths. Styles: Big air Big air competitions are contests where riders perform tricks after launching off a man-made jump built specifically for the event. Competitors perform tricks in the air, aiming to attain sizable height and distance, all while securing a clean landing. Many competitions also require the rider to do a complex trick. Not all competitions call for a trick to win the gold; some intermittent competitions are based solely on height and distance of the launch of the snowboarder. Some competitions also require the rider to do a specific trick to win the major prize. One of the first snowboard competitions where Travis Rice attempted and landed a "double back flip backside 180" took place at the 2006 Red Bull Gap Session. Styles: Half-pipe The half-pipe is a semi-circular ditch dug into the mountain or a purpose-built ramp made up of snow, with walls between 8 and 23 feet (2.4 and 7.0 m).
Competitors perform tricks while going from one side to the other and while in the air above the sides of the pipe. Styles: Snowboard Cross Snowboard Cross, also known as "Boardercross", "Boarder X", or "Snowboard X", and commonly abbreviated as "SBX", or just "BX", is a snowboarding discipline consisting of several (typically 4 to 6) riders racing head-to-head down a course with jumps, berms and other obstacles constructed out of snow. Snowboard cross began in the 1980s, earning its place as an official Winter Olympic event in the 2006 Turin games. Unlike other snowboard racing disciplines such as parallel giant slalom, competitors race on a single course together. Styles: Snowboard racing In snowboard racing, riders must complete a downhill course constructed of a series of turning color indicators (gates) placed in the snow at prescribed distances apart. A gate consists of a tall pole and a short pole, connected by a triangular panel. The racer must pass around the short side of the gate; passing on the long side of the gate does not count. There are three main formats used in snowboard racing: single rider, parallel courses, or multiple riders on the course at the same time (SBX). Competitions: Snowboarding contests are held throughout the world and range from grassroots competitions to professional events contested worldwide. Some of the larger snowboarding contests include: the European Air & Style, the Japanese X-Trail Jam, Burton Global Open Series, Shakedown, FIS World Championships, the annual FIS World Cup, the Winter X Games, Freeride World Tour and the Winter Dew Tour. Snowboarding has been a Winter Olympic sport since the 1998 Winter Olympics. Since its inauguration, Olympic snowboarding has seen many additions and removals of events. During the 2018 Winter Olympics, snowboarding events contested included big air, halfpipe, parallel giant slalom, slopestyle and snowboard cross. Competitions: Snowboarder Magazine's Superpark event was created in 1996. Over 150 of the world's top pros are invited to advance freestyle snowboarding on the most progressive terrain parks. Part of the snowboarding approach is to ensure maximum fun, friendship and event quality. Reflecting this perspective, "anti-contests" are an important part of snowboarding's identity, including The Holy Oly Revival at The Summit at Snoqualmie, The Nate Chute Hawaiian Classic at Whitefish (the original anti-contest), the World Quarterpipe Championships and the Grenade Games. Competitions: The United States of America Snowboarding and Freeski Association (USASA) features grassroots-level competitions designed to be a stepping stone for aspiring athletes looking to progress up the competition pipeline. The USASA consists of 36 regional series in which anyone can compete against athletes in a multitude of classes. For snowboarding, USASA contests regional events in six primary disciplines (Slalom, Giant Slalom, Slopestyle, Halfpipe, Boardercross, and Rail Jam), where competitors earn points towards a national ranking and qualify to compete at the USASA National Championships. Subculture: The snowboarding way of life came about as a natural response to the culture from which it emerged. Early on, there was a rebellion against skiing culture and the view that snowboarders were inferior. Skiers did not easily accept this new culture on their slopes. The two cultures contrasted each other in several ways including how they spoke, acted, and their entire style of clothing.
Snowboarders first embraced the punk look and later the hip-hop look as part of their style. Words such as "dude", "gnarly", and "Shred the Gnar" are some examples of words used in the snowboarding culture. Snowboarding subculture became a crossover between the urban and suburban styles on snow, which made for an easy transition from surfing and skateboarding culture over to snowboarding culture. In fact, many skateboarders and surfers snowboarded in the winter months and were among the early snowboarders. The early stereotypes of snowboarding included "lazy", "grungy", "punk", "stoners", "troublemakers", and numerous others, many of which are associated with skateboarding and surfing as well. However, these stereotypes may be considered out of date. Snowboarding has become a sport that encompasses a very diverse, international crowd and a fanbase of many millions, so much so that it is no longer possible to stereotype such a large community. Reasons for these dying stereotypes include how mainstream and popular the sport has become, with the shock factor of snowboarding's quick take-off on the slopes wearing off. Skiers and snowboarders are becoming used to each other, showing more respect to each other on the mountain. "The typical stereotype of the sport is changing as the demographics change". While these two subcultures are now becoming accustomed to each other, there are still three resorts in the United States which do not allow snowboarding. Alta, Deer Valley, and Mad River Glen are the last skiing-only resorts in North America and have become a focal point over time for the remaining animosity between snowboarding and skiing. Common Injuries: Common injuries in snowboarding differ between professional and recreational groups. The most common type of injury for snowboarders is injury to the upper body. In recreational snowboarding, wrist injuries are more likely to occur. Among professional snowboarders, injuries to the lower half, specifically the knee joint, are more likely to occur. When injured, snowboarders are twice as likely to get a fracture as skiers. Other common minor injuries include wrist injuries, shoulder soft-tissue injuries, ankle injuries, concussions, and clavicle fractures. Among recreational and inexperienced riders, most injuries occur while traveling at reckless speed on moderate slopes. Injuries also happen when riders try to keep up with someone of a higher skill level, attempting terrain or speeds they are not yet capable of handling. Major injuries that occur during snowboarding include head and spinal injuries. The main cause of spinal fractures in snowboarders is failed jump landings; compression-type fractures occur in about 80% of snowboarders with vertebral fractures because they frequently fall backwards, which can cause axial loading and anterior compression fractures. Injuries to the upper body are much less common among professional snowboarders; professionals and elite snowboarders more frequently sustain injuries when trying to execute challenging tricks at high speeds and with increased levels of force to the lower limbs. Safety and precautions: Like some other winter sports, snowboarding comes with a certain level of risk. The average snowboarder is male and in their early twenties, and there are three times as many men as there are women in the sport.
Snowboarders have a 2.4 times greater risk of fractures than skiers, particularly in the upper extremities. Conversely, snowboarders have a lower risk of knee injuries than skiers. The injury rate for snowboarding is about four to six per thousand persons per day, which is around double the injury rate for alpine skiing. Injuries are more likely amongst beginners, especially those who do not take lessons with professional instructors. A quarter of all injuries occur to first-time riders and half of all injuries occur to those with less than a year of experience. Experienced riders are less likely to suffer injury, but the injuries that do occur tend to be more severe. Two thirds of injuries occur to the upper body and one third to the lower body. This contrasts with alpine skiing, where two thirds of injuries are to the lower body. The most common types of injuries are sprains, which account for around 40% of injuries. The most common point of injury is the wrists – 40% of all snowboard injuries are to the wrists and 24% of all snowboard injuries are wrist fractures. There are around 100,000 wrist fractures worldwide among snowboarders each year. For this reason the use of wrist guards, either separate or built into gloves, is very strongly recommended. They are often compulsory in beginners' classes and their use reduces the likelihood of wrist injury by half. In addition, it is important for snowboarders to learn how to fall without stopping the fall with their hands by trying to "push" the slope away, as landing on a wrist bent at a 90-degree angle increases the chance of it breaking. Rather, landing with the arms stretched out (like a wing) and slapping the slope with the entire arm is an effective way to break a fall. This is the method used by practitioners of judo and other martial arts to break a fall when they are thrown against the floor by a training partner. Safety and precautions: The risk of head injury is two to six times greater for snowboarders than for skiers, and injuries follow the pattern of being rarer, but more severe, with experienced riders. Head injuries can occur both as a consequence of a collision and when failing to carry out a heel-side turn. The latter can result in the rider landing on his or her back and slamming the back of his or her head onto the ground, resulting in an occipital head injury. For this reason, helmets are widely recommended. Protective eyewear is also recommended as eye injury can be caused by impact, and snow blindness can be a result of exposure to strong ultra-violet light in snow-covered areas. The wearing of ultra-violet-absorbing goggles is recommended even on hazy or cloudy days as ultra-violet light can penetrate clouds. Unlike ski bindings, snowboard bindings are not designed to release automatically in a fall. The mechanical support provided by the feet being locked to the board has the effect of reducing the likelihood of knee injury – 15% of snowboard injuries are to the knee, compared with 45% of all skiing injuries. Such injuries are typically to the knee ligaments; bone fractures are rare. Fractures to the lower leg are also rare, but 20% of injuries are to the foot and ankle. Fractures of the talus bone are rare in other sports but account for 2% of snowboard injuries – a lateral process talus fracture is sometimes called "snowboarder's ankle" by medical staff. This particular injury results in persistent lateral pain in the affected ankle yet is difficult to spot in a plain X-ray image.
It may be misdiagnosed as just a sprain, with possibly serious consequences, as not treating the fracture can result in serious long-term damage to the ankle. The use of portable ultrasound for mountainside diagnostics has been reviewed and appears to be a plausible tool for diagnosing some of the common injuries associated with the sport. Four to eight percent of snowboarding injuries take place while the person is waiting in ski-lift lines or entering and exiting ski lifts. Snowboarders push themselves forward with a free foot while in the ski-lift line, leaving the other foot (usually that of the lead leg) locked on the board at a 9–27 degree angle, placing a large torque force on this leg and predisposing the person to knee injury if a fall occurs. Snowboard binding rotating devices are designed to minimize this torque force, Quick Stance being the first, developed in 1995. They allow snowboarders to turn the locked foot straight into the direction of the tip of the snowboard without removing the boot from the binding. Safety and precautions: Avalanches are a clear danger when on snowy mountain slopes. Safety and precautions: It is best to learn the different kinds of avalanches, how to avoid causing one, and how to react when one is about to happen. Also, when going out onto the snow, all who practice an activity with increased chances of injury should have basic first aid knowledge and know how to deal with injuries that may occur. Snowboarding boots should be well-fitted, with toes snug in the end of the boot when standing upright and slightly away from the end when in the snowboarding position. Padding or "armor" is recommended on other body parts such as hips, knees, spine, and shoulders. To further help avoid injury to body parts, especially knees, it is recommended to use the right technique. To acquire the right technique, one should be taught by a qualified instructor. Also, when snowboarding alone, precaution should be taken to avoid tree wells, a particularly dangerous area of loose snow that may form at the base of trees. Safety and precautions: Some care is also required when waxing a board, as fluorocarbon waxes emit toxic fumes when overheated. Waxing is best performed in a ventilated area with care being taken to use the wax at the correct temperature – the wax should be melted but not smoking or smoldering. In a study conducted to examine the types of snowboarding injuries and changes in injury patterns over time, data was collected on injured snowboarders and skiers in a base-lodge clinic of a ski resort in Vermont over 18 seasons (1988–2006) and included extensive information about injury patterns, demographics, and experience. In conclusion of the study, the highest rate of injury was among young, inexperienced, female snowboarders. Injury rates in snowboarders have fluctuated over time but still remain higher than in skiers. No evidence was found that those who spend more time in terrain parks are over-represented in the injury population. Media: Films Snowboarding films have become a main part of progression in the sport. Each season, many films are released, usually in autumn. These are made by many snowboard-specific video production companies as well as manufacturing companies that use these films as a form of advertisement. Snowboarding videos usually contain video footage of professional riders sponsored by companies.
An example of commercial use of snowboarding films would be The White Album, a film by snowboarding legend and filmmaker Dave Seoane about Shaun White, which includes cameos by Tony Hawk and was sponsored by PlayStation, Mountain Dew and Burton Snowboards. Snowboarding films are also used as documentation of snowboarding and showcasing of current trends and styles of the sport. In addition, the 2011 movie The Art of Flight showcased snowboarders such as Travis Rice attempting to attain greater feats in the sport of snowboarding. Media: However, sometimes the snowboarding industry is not supportive of all snowboarding-themed films. In 2013, The Crash Reel, a feature-length documentary by filmmaker Lucy Walker about former Shaun White rival Kevin Pearce, premiered on the film festival circuit to critical acclaim and was subsequently broadcast on HBO. Using Pearce's career-ending traumatic brain injury and subsequent recovery as a backdrop, the film examines the physical dangers inherent to pro snowboarders and other extreme sports professional athletes under pressure by sponsors and the media to perform increasingly spectacular feats. Although there are significant references to various brands in the film, Walker is "adamant" that the snowboarding industry did not sponsor the film in any way and in fact has been unsupportive, despite the film's mainstream media success. Media: Magazines Snowboard magazines are integral in promoting the sport, although less so with the advent of the internet age. Photo incentives are written into many professional riders' sponsorship contracts, giving professionals not only a publicity incentive but also a financial incentive to have a photo published in a magazine. Snowboard magazine staff travel with professional riders throughout the winter season and cover travel, contests, lifestyle, rider and company profiles, and product reviews. Snowboard magazines have recently made a push to expand their brands to the online market, and there has also been a growth in online-only publications. Popular magazines include Transworld Snowboarding (USA), Snowboarder Magazine (USA), Snowboard Magazine (USA), and Whitelines (UK). Media: Video games Snowboarding video games provide interactive entertainment on and off season. Most games in this genre have been made for consoles, such as the Xbox and PlayStation. A plethora of online casual snowboarding games also exist, along with games for mobile phones. Notable persons:
- Callan Chythlook-Sifsof (born 1989), American snowboarder
- Rosey Fletcher (born 1975), American snowboarder
- Peter Foley (born 1965 or 1966), American former snowboarding coach; suspended for 10 years for sexual misconduct
- Ayumu Hirano (born 1998), Japanese snowboarder
- Chloe Kim (born 2000), American snowboarder
- Max Parrot (born 1994), Canadian snowboarder
- Zoi Sadowski-Synnott (born 2001), New Zealand snowboarder
- Shaun White (born 1986), American snowboarder and skateboarder
- Su Yiming (born 2004), Chinese snowboarder
**High Definition (radio program)** High Definition (radio program): High Definition was a Canadian radio program, which debuted on February 4, 2006, on the CBC Radio One network. The series, an eight-episode short run series hosted by Don McKellar, examined and analyzed television's role in modern popular culture. It aired in a time slot previously occupied by O'Reilly on Advertising, a program which offered a similar perspective on advertising.
**Pinch-induced behavioral inhibition** Pinch-induced behavioral inhibition: Pinch-induced behavioural inhibition (PIBI), also called dorsal immobility, transport immobility or clipnosis, is a partially inert state which results from a gentle squeeze of the nape, the skin at the back of the neck. It is mostly observed among cats and allows a mother cat to carry her kitten easily with her jaws. It can be used to restrain most cats effectively in a domestic or veterinary context. The phenomenon also occurs in other animals, such as squirrels and mice.
**Land mine** Land mine: A land mine, or landmine, is an explosive weapon concealed under or camouflaged on the ground, and designed to destroy or disable enemy targets, ranging from combatants to vehicles and tanks, as they pass over or near it. Such a device is typically detonated automatically by way of pressure when a target steps on it or drives over it, although other detonation mechanisms are also sometimes used. A land mine may cause damage by direct blast effect, by fragments that are thrown by the blast, or by both. Landmines are typically laid throughout an area, creating a minefield which is dangerous to cross. Land mine: The use of land mines is controversial because of their potential as indiscriminate weapons. They can remain dangerous many years after a conflict has ended, harming civilians and the economy. Seventy-eight countries are contaminated with land mines and 15,000–20,000 people are killed every year while many more are injured. Approximately 80% of land mine casualties are civilians, with children as the most affected age group. Most killings occur in times of peace. With pressure from a number of campaign groups organised through the International Campaign to Ban Landmines, a global movement to prohibit their use led to the 1997 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, also known as the Ottawa Treaty. To date, 164 nations have signed the treaty, but these do not include China, the Russian Federation, or the United States. Definition: In the Anti-Personnel Mine Ban Convention (also known as the "Ottawa Treaty") and the "Protocol on Mines, Booby-Traps and Other Devices", a mine is defined as a "munition designed to be placed under, on or near the ground or other surface area and to be exploded by the presence, proximity or contact of a person or vehicle". Similar in function is the booby-trap, which the protocol defines as "any device or material which is designed, constructed or adapted to kill or injure and which functions unexpectedly when a person disturbs or approaches an apparently harmless object or performs an apparently safe act". Such actions might include opening a door or picking up an object. Normally, mines are mass-produced and placed in groups, while booby traps are improvised and deployed one at a time. Also, booby traps can be non-explosive devices such as punji sticks. Overlapping both categories is the improvised explosive device (IED), which is "a device placed or fabricated in an improvised manner incorporating explosive material, destructive, lethal, noxious, incendiary, pyrotechnic materials or chemicals designed to destroy, disfigure, distract or harass. They may incorporate military stores, but are normally devised from non-military components." Some meet the definition of mines or booby traps and are also referred to as "improvised", "artisanal" or "locally manufactured" mines. Other types of IED are remotely activated, so are not considered mines. Remotely delivered mines are dropped from aircraft or carried by devices such as artillery shells or rockets. Another type of remotely delivered explosive is the cluster munition, a device that releases several submunitions ("bomblets") over a large area. The use, transfer, production, and stockpiling of cluster munitions are prohibited by the international CCM treaty.
If bomblets do not explode, they are referred to as unexploded ordnance (UXO), along with unexploded artillery shells and other explosive devices that were not manually placed (that is, mines and booby traps are not UXOs). Explosive remnants of war (ERW) include UXOs and abandoned explosive ordnance (AXO), devices that were never used and were left behind after a conflict. Land mines are divided into two types: anti-tank mines, which are designed to disable tanks or other vehicles; and anti-personnel mines, which are designed to injure or kill people. History: The history of land mines can be divided into three main phases: In the ancient world, buried spikes provided many of the same functions as modern mines. Mines using gunpowder as the explosive were used from the Ming dynasty to the American Civil War. Subsequently, high explosives were developed and used in land mines. History: Before explosives Some fortifications in the Roman Empire were surrounded by a series of hazards buried in the ground. These included goads, one-foot-long (30 cm) pieces of wood with iron hooks on their ends; lilia (lilies, so named after their appearance), which were pits in which sharpened logs were arranged in a five-point pattern; and abatis, fallen trees with sharpened branches facing outwards. As with modern land mines, they were "victim-operated", often concealed, and formed zones that were wide enough so that the enemy could not do much harm from outside, but were under fire (from spear throws, in this case) if they attempted to remove the obstacles. A notable use of these defenses was by Julius Caesar in the Battle of Alesia. His forces were besieging Vercingetorix, the leader of the Gauls, but Vercingetorix managed to send for reinforcements. To maintain the siege and defend against the reinforcements, Caesar formed a line of fortifications on both sides, and they played an important role in his victory. Lilies were also used by Scots against the English at the Battle of Bannockburn in 1314, and by Germans at the Battle of Passchendaele in the First World War. A more easily deployed defense used by the Romans was the caltrop, a weapon 12–15 cm across with four sharp spikes that are oriented so that when it is thrown on the ground, one spike always points up. As with modern antipersonnel mines, caltrops are designed to disable soldiers rather than kill them; they are also more effective in stopping mounted forces, who lack the advantage of being able to carefully scrutinize each step they take (though forcing foot-mounted forces to take the time to do so has benefits in and of itself). They were used by the Jin dynasty in China at the Battle of Zhongdu to slow down the advance of Genghis Khan's army; Joan of Arc was wounded by one in the Siege of Orléans; in Japan they are known as tetsu-bishi and were used by ninjas from the fourteenth century onward. Caltrops are still strung together and used as roadblocks in some modern conflicts. History: Gunpowder East Asia Gunpowder, an explosive mixture of sulfur, charcoal and potassium nitrate, was invented in China by the 10th century and was used in warfare soon after. An "enormous bomb", credited to Lou Qianxia, was used in 1277 by the Chinese at the Battle of Zhongdu. A 14th-century military treatise, the Huolongjing (fire dragon manual), describes hollow cast iron cannonball shells filled with gunpowder. The wad of the mine was made of hard wood, carrying three different fuses in case of defective connection to the touch hole.
These fuses were long and lit by hand, so they required carefully timed calculations of enemy movements. The Huolongjing also describes land mines that were set off by enemy movement. A 9-foot (3 m) length of bamboo was waterproofed by wrapping it in cowhide and covering it with oil. It was filled with compressed gunpowder and lead or iron pellets, sealed with wax and concealed in a trench. The triggering mechanism was not fully described until the early 17th century. When the enemy stepped onto hidden boards, they dislodged a pin, causing a weight to fall. A cord attached to the weight was wrapped around a drum attached to two steel wheels; when the weight fell, the wheels struck sparks against flint, igniting a set of fuses leading to multiple mines. A similar mechanism was used in the first wheellock musket in Europe as sketched by Leonardo da Vinci around 1500 AD. Another victim-operated device was the "underground sky-soaring thunder", which lured bounty hunters with halberds, pikes, and lances planted in the ground. If they pulled on one of these weapons, the butt end disturbed a bowl underneath and a slow-burning incandescent material in the bowl ignited the fuses. History: Europe and the United States At Augsburg in 1573, three centuries after the Chinese invented the first pressure-operated mine, a German military engineer by the name of Samuel Zimmermann invented the Fladdermine (flying mine). It consisted of a few pounds of black powder buried near the surface and was activated by stepping on it or tripping a wire that made a flintlock fire. Such mines were deployed on the slope in front of a fort. They were used during the Franco-Prussian War, but were probably not very effective because a flintlock does not work for long when left untended. Another device, the fougasse, was not victim-operated or mass-produced, but it was a precursor of modern fragmentation mines and the claymore mine. It consisted of a cone-shaped hole with gunpowder at the bottom, covered either by rocks and scrap iron (stone fougasse) or mortar shells, similar to large black powder hand grenades (shell fougasse). It was triggered by a flintlock connected to a tripwire on the surface. It could sometimes cause heavy casualties but required high maintenance due to the susceptibility of black powder to dampness. Consequently, it was mainly employed in the defenses of major fortifications, in which role it was used in several European wars of the eighteenth century and the American Revolution. One of the greatest limitations of early land mines was the unreliable fuses and their susceptibility to dampness. This changed with the invention of the safety fuse. Later, command initiation, the ability to detonate a charge immediately instead of waiting several minutes for a fuse to burn, became possible after electricity was developed. An electric current sent down a wire could ignite the charge with a spark. The Russians claim first use of this technology in the Russo-Turkish War of 1828–1829, and with it the fougasse remained useful until it was superseded by the claymore in the 1960s. Victim-activated mines were also unreliable because they relied on a flintlock to ignite the explosive. The percussion cap, developed in the early 19th century, made them much more reliable, and pressure-operated mines were deployed on land and sea in the Crimean War (1853–1856). During the American Civil War, the Confederate brigadier general Gabriel J.
Rains deployed thousands of "torpedoes" consisting of artillery shells with pressure caps, beginning with the Battle of Yorktown in 1862. As a captain, Rains had earlier employed explosive booby traps during the Seminole Wars in Florida in 1840. Over the course of the war, mines only caused a few hundred casualties, but they had a large effect on morale and slowed down the advance of Union troops. Many on both sides considered the use of mines barbaric, and in response, generals in the Union Army forced Confederate prisoners to remove the mines. History: High explosives Starting in the 19th century, more powerful explosives than gunpowder were developed, often for non-military reasons such as blasting train tunnels in the Alps and Rockies. Guncotton, up to four times more powerful than gunpowder, was invented by Christian Schönbein in 1846. It was dangerous to make until Frederick Augustus Abel developed a safe method in 1865. From the 1870s to the First World War, it was the standard explosive used by the British military. In 1847, Ascanio Sobrero invented nitroglycerine, which turned out to be a much more powerful explosive than guncotton (and later also found medical use in treating angina pectoris). It was very dangerous to use until Alfred Nobel found a way to incorporate it in a solid mixture called dynamite and developed a safe detonator. Even then, dynamite needed to be stored carefully or it could form crystals that detonated easily. Thus, the military still preferred guncotton. In 1863, the German chemical industry developed trinitrotoluene (TNT). This had the advantage that it was difficult to detonate, so it could withstand the shock of firing by artillery pieces. It was also advantageous for land mines for several reasons: it was not detonated by the shock of shells landing nearby; it was lightweight, unaffected by damp, and stable under a wide range of conditions; it could be melted to fill a container of any shape, and it was cheap to make. Thus, it became the standard explosive in mines after the First World War. History: Between the American Civil War and the First World War The British used mines in the Siege of Khartoum. A Sudanese Mahdist force much larger than the British strength was held off for ten months, but the town was ultimately taken and the garrison massacred. In the Boer War (1899–1902), the British succeeded in holding Mafeking against Boer forces with the help of a mixture of real and fake minefields, and they laid mines alongside railroad tracks to discourage sabotage. In the Russo-Japanese War of 1904–1905, both sides used land and sea mines, although on land the mines mainly affected morale. The naval mines were far more effective, destroying several battleships. History: First World War One sign of the increasing power of explosives used in land mines was that, by the First World War, they burst into about 1,000 high-velocity fragments; in the Franco-Prussian War (1870), it had only been 20 to 30 fragments. Nevertheless, antipersonnel mines were not a big factor in the war because machine guns, barbed wire and rapid-fire artillery were far more effective defenses. An exception was in Africa (now Tanzania and Namibia) where the warfare was much more mobile. Towards the end of the war, the British started to use tanks to break through trench defenses. The Germans responded with anti-tank guns and mines.
Improvised mines gave way to mass-produced mines consisting of wooden boxes filled with guncotton, and minefields were standardized to stop masses of tanks from advancing. Between the world wars, the future Allies did little work on land mines, but the Germans developed a series of anti-tank mines, the Tellermines (plate mines). They also developed the Schrapnell mine (also known as the S-mine), the first bounding mine. When triggered, this jumped up to about waist height and exploded, sending thousands of steel balls in all directions. Triggered by pressure, trip wires or electronics, it could harm soldiers within an area of about 2,800 square feet. History: Second World War Tens of millions of mines were laid in the Second World War, particularly in the deserts of North Africa and the steppes of Eastern Europe, where the open ground favored tanks. However, the first country to use them was Finland. They were defending against a much larger Soviet force with over 6,000 tanks, twenty times the number the Finns had; but they had terrain that was broken up by lakes and forests, so tank movement was restricted to roads and tracks. Their defensive line, the Mannerheim Line, integrated these natural defenses with mines, including simple fragmentation mines mounted on stakes. While the Germans were advancing rapidly using blitzkrieg tactics, they did not make much use of mines. After 1942, however, they were on the defensive and became the most inventive and systematic users of mines. Their production shot up and they began inventing new types of mines as the Allies found ways to counter the existing ones. To make it more difficult to remove antitank mines, they surrounded them with S-mines and added anti-handling devices that would explode when soldiers tried to lift them. They also took a formal approach to laying mines and they kept detailed records of the locations of mines. In the Second Battle of El Alamein in 1942, the Germans prepared for an Allied attack by laying about half a million mines in two fields running across the entire battlefield and five miles deep. Nicknamed the "Devil's gardens", they were covered by 88 mm anti-tank guns and small-arms fire. The Allies prevailed, but at the cost of over half their tanks; 20 percent of the losses were caused by mines. The Soviets learned the value of mines from their war with Finland, and when Germany invaded they made heavy use of them, manufacturing over 67 million. At the Battle of Kursk, which put an end to the German advance, they laid over a million mines in eight belts with an overall depth of 35 kilometres. Mines forced tanks to slow down and wait for soldiers to go ahead and remove the mines. The main method of breaching minefields involved prodding the dirt with a bayonet or stick at an angle of 30 degrees (to avoid putting pressure on the top of the mine and detonating it). Since all mines at the beginning of the war had metal casings, metal detectors could be used to speed up the locating of mines. A Polish officer, Józef Kosacki, developed a portable mine detector known as the Polish mine detector. To counter the detector, Germans developed mines with wooden casings, the Schu-mine 42 (antipersonnel) and Holzmine 42 (anti-tank). Effective, cheap and easy to make, the schu mine became the most common mine in the war. Mine casings were also made of glass, concrete and clay. The Russians developed a mine with a pressed-cardboard casing, the PMK40, and the Italians made an anti-tank mine out of bakelite.
In 1944, the Germans created the Topfmine, an entirely non-metallic mine. They ensured that they could detect their own mines by covering them with radioactive sand; the Allies did not find this out until after the war. Several mechanical methods for clearing mines were tried. Heavy rollers were attached to tanks or cargo trucks, but they did not last long and their weight made the tanks considerably slower. Tanks and bulldozers pushed ploughs that shifted any mines aside to a depth of 30 cm. The Bangalore torpedo, a long thin tube filled with explosives, was invented in 1912 and used to clear barbed wire; larger versions such as the Snake and the Conger were developed for clearing mines, but were not very effective. One of the best options was the flail, which had weights attached by chains to rotating drums. The first version, the Scorpion, was attached to the Matilda tank and used in the Second Battle of El Alamein. The Crab, attached to the Sherman tank, was faster, at 2 kilometers per hour; it was used during D-Day and the aftermath. History: Cold War During the Cold War, the members of NATO were concerned about massive armored attacks by the Soviet Union. They planned for a minefield stretching across the entire West German border, and developed new types of mine. The British designed an anti-tank mine, the Mark 7, to defeat rollers by detonating the second time it was pressed. It also had a 0.7-second delay so the tank would be directly over the mine. They also developed the first scatterable mine, the No. 7 ("Dingbat"). The Americans used the M6 antitank mine and tripwire-operated bounding antipersonnel mines such as the M2 and M16. In the Korean War, land mine use was dictated by the steep terrain, narrow valleys, forest cover and lack of developed roads. This made tanks less effective and more easily stopped by mines. However, mines laid near roads were often easy to spot. In response to this problem, the US developed the M24, a mine that was placed off to the side of the road. When triggered by a tripwire, it fired a rocket. However, the mine was not available until after the war. The Chinese had a lot of success with massed infantry attacks. The extensive forest cover limited the range of machine guns, but anti-personnel mines were effective. However, mines were poorly recorded and marked, often becoming as much a hazard to allies as enemies. Tripwire-operated mines were not defended by pressure mines; the Chinese were often able to disable them and reuse them against UN forces. Looking for more destructive mines, the Americans developed the Claymore, a directional fragmentation mine that hurls steel balls in a 60-degree arc at a lethal speed of 1,200 metres per second. They also developed a pressure-operated mine, the M14 ("toe-popper"). These, too, were ready too late for the Korean War. History: In 1948, the British developed the No. 6 antipersonnel mine, a minimum-metal mine with a narrow diameter, making it difficult to detect with metal detectors or prodding. Its three-pronged pressure piece inspired the nickname "carrot mine". However, it was unreliable in wet conditions. In the 1960s the Canadians developed a similar, but more reliable mine, the C3A1 ("Elsie"), and the British army adopted it. The British also developed the L9 bar mine, a wide anti-tank mine with a rectangular shape, which covered more area, allowing a minefield to be laid four times as fast as previous mines.
They also upgraded the Dingbat to the Ranger, a plastic mine that was fired from a truck-mounted discharger that could fire 72 mines at a time. In the 1950s, the US Operation Doan Brook studied the feasibility of delivering mines by air. This led to three types of air-delivered mine. Wide area anti-personnel mines (WAAPMs) were small steel spheres that discharged tripwires when they hit the ground; each dispenser held 540 mines. The BLU-43 Dragontooth was small and had a flattened W shape to slow its descent, while the gravel mine was larger. Both were packed by the thousand into bombs. All three were designed to inactivate after a period of time, but any that failed to inactivate presented a safety challenge. Over 37 million Gravel mines were produced between 1967 and 1968, and when they were dropped in places like Vietnam their locations were unmarked and unrecorded. A similar problem was presented by unexploded cluster munitions. The next generation of scatterable mines arose in response to the increasing mobility of war. The Germans developed the Skorpion system, which scattered AT2 mines from a tracked vehicle. The Italians developed a helicopter delivery system that could rapidly switch between SB-33 anti-personnel mines and SB-81 anti-tank mines. The US developed a range of systems called the Family of Scatterable Mines (FASCAM) that could deliver mines by fast jet, artillery, helicopter and ground launcher. History: Middle eastern conflicts The Iran-Iraq War, the Gulf War, and the Islamic State have all contributed to land mine saturation in Iraq from the 1980s through 2020. Iraq is now the most saturated country in the world with landmines. Countries that provided land mines during the Iran-Iraq War included Belgium, Canada, Chile, China, Egypt, France, Italy, Romania, Singapore, the former Soviet Union and the U.S.; the mines were concentrated in the Kurdish areas of northern Iraq. During the Gulf War, the U.S. deployed 117,634 mines, with 27,967 being anti-personnel mines and 89,667 being anti-vehicle mines. The U.S. did not use land mines during the Iraq War. History: Invasion of Ukraine During the 2022 Russian Invasion of Ukraine, both Russian and Ukrainian forces have used land mines. Ukrainian officials claim Russian forces planted thousands of land mines or other explosive devices during their withdrawal from Ukrainian cities, including in civilian areas. Russian forces have also utilized remotely delivered anti-personnel mines such as the POM-3. Chemical and nuclear: In the First World War, the Germans developed a device, nicknamed the "Yperite Mine" by the British, that they left behind in abandoned trenches and bunkers. It was detonated by a delayed charge, spreading mustard gas ("Yperite"). In the Second World War they developed a modern chemical mine, the Sprüh-Büchse 37 (Bounding Gas Mine 37), but never used it. The United States developed the M1 chemical mine, which used mustard gas, in 1939; and the M23 chemical mine, which used the VX nerve agent, in 1960. The Soviets developed the KhF, a "bounding chemical mine". The French had chemical mines and the Iraqis were believed to have them before the invasion of Kuwait. In 1997, the Chemical Weapons Convention came into force, prohibiting the use of chemical weapons and mandating their destruction. As of April 30, 2019, 97% of the declared stockpiles of chemical weapons were destroyed. For a few decades during the Cold War, the U.S. developed atomic demolition munitions, often referred to as nuclear land mines. 
These were portable nuclear bombs that could be placed by hand, and could be detonated remotely or with a timer. Some of these were deployed in Europe. Governments in West Germany, Turkey and Greece wanted to have nuclear minefields as a defense against attack from the Warsaw Pact. However, such weapons were politically and tactically infeasible, and by 1989 the last of these munitions was retired. The British also had a project, codenamed Blue Peacock, to develop nuclear mines to be buried in Germany; the project was cancelled in 1958. Characteristics and function: A conventional land mine consists of a casing that is mostly filled with the main charge. It has a firing mechanism such as a pressure plate; this triggers a detonator or igniter, which in turn sets off a booster charge. There may be additional firing mechanisms in anti-handling devices. Characteristics and function: Firing mechanisms and initiating actions A land mine can be triggered by a number of things including pressure, movement, sound, magnetism and vibration. Anti-personnel mines commonly use the pressure of a person's foot as a trigger, but tripwires are also frequently employed. Most modern anti-vehicle mines use a magnetic trigger to enable them to detonate even if the tires or tracks do not touch them. Advanced mines are able to sense the difference between friendly and enemy types of vehicles by way of a built-in signature catalog. This will theoretically enable friendly forces to use the mined area while denying the enemy access. Characteristics and function: Many mines combine the main trigger with a touch or tilt trigger to prevent enemy engineers from defusing them. Land mine designs tend to use as little metal as possible to make searching with a metal detector more difficult; land mines made mostly of plastic have the added advantage of being very inexpensive. Some types of modern mines are designed to self-destruct, or chemically render themselves inert, after a period of weeks or months to reduce the likelihood of civilian casualties at the conflict's end. These self-destruct mechanisms are not absolutely reliable, and most land mines laid historically are not equipped in this manner. There is a common misperception, often used for dramatic tension in movies, that a landmine is armed by stepping on it and only triggered by stepping off. In fact, the initial pressure will detonate the mine, as mines are designed to kill or maim, not to make someone stand very still until the mine can be disarmed. Characteristics and function: Anti-handling devices Anti-handling devices detonate the mine if someone attempts to lift, shift or disarm it. The intention is to hinder deminers by discouraging any attempts to clear minefields. There is a degree of overlap between the function of a boobytrap and an anti-handling device insofar as some mines have optional fuze pockets into which standard pull or pressure-release boobytrap firing devices can be screwed. Alternatively, some mines may mimic a standard design, but actually be specifically intended to kill deminers, such as the MC-3 and PMN-3 variants of the PMN mine. Anti-handling devices can be found on both anti-personnel mines and anti-tank mines, either as an integral part of their design or as improvised add-ons. For this reason, the standard render-safe procedure for mines is often to destroy them on site without attempting to lift them. Characteristics and function: Smart mines "Smart mines" utilize a number of advanced technologies developed in the late 20th and early 21st century. 
Most commonly, this includes mechanisms to deactivate or self-destruct the mine after a preset period of time. This is intended to reduce civilian casualties and simplify demining. Other innovations include "self-healing" minefields, which detect gaps in the field and can direct the mines to rearrange their positions to close the gap. Anti-tank mines: Anti-tank mines were created not long after the invention of the tank in the First World War. At first improvised, they were soon followed by purpose-built designs. Set off when a tank passes, they attack the tank at one of its weaker areas – the tracks. They are designed to immobilize or destroy vehicles and their occupants. In U.S. military terminology, destroying the vehicle is referred to as a catastrophic kill, while only disabling its movement is referred to as a mobility kill. Anti-tank mines: Anti-tank mines are typically larger than anti-personnel mines and require more pressure to detonate. The high trigger pressure, normally requiring 100 kilograms (220 lb), prevents them from being set off by infantry or smaller vehicles of lesser importance. More modern anti-tank mines use shaped charges to focus and increase the armor penetration of the explosives. Anti-personnel mines: Anti-personnel mines are designed primarily to kill or injure people, as opposed to vehicles. They are often designed to injure rather than kill in order to increase the logistical support (evacuation, medical) burden on the opposing force. Some types of anti-personnel mines can also damage the tracks or wheels of armored vehicles. Anti-personnel mines: In the asymmetric warfare conflicts and civil wars of the 21st century, improvised explosives, known as IEDs, have partially supplanted conventional landmines as the source of injury to dismounted (pedestrian) soldiers and civilians. IEDs are used mainly by insurgents and terrorists against regular armed forces and civilians. The injuries from anti-personnel IEDs were recently reported in BMJ Open to be far worse than those from landmines, resulting in multiple limb amputations and lower body mutilation. Warfare: Land mines were designed for two main uses: To create defensive tactical barriers, channelling attacking forces into predetermined fire zones or slowing an invading force's progress to allow reinforcements to arrive. Warfare: To act as passive area-denial weapons (to deny the enemy use of valuable terrain, resources or facilities when active defense of the area is not desirable or possible). Land mines are currently used in large quantities mostly for the first purpose, hence their widespread use in the demilitarized zones (DMZs) of likely flashpoints such as Cyprus, Afghanistan and Korea. As of 2013, the only governments that still laid land mines were Myanmar in its internal conflict, and Syria in its civil war. In military science, minefields are considered a defensive or harassing weapon, used to slow the enemy down, to help deny certain terrain to the enemy, to focus enemy movement into kill zones, or to reduce morale by randomly attacking materiel and personnel. In some engagements during World War II, anti-tank mines accounted for half of all vehicles disabled. Warfare: Since combat engineers with mine-clearing equipment can clear a path through a minefield relatively quickly, mines are usually considered effective only if covered by fire. Warfare: The extents of minefields are often marked with warning signs and cloth tape, to prevent friendly troops and non-combatants from entering them. 
Sometimes terrain can also be denied using dummy minefields. Most forces carefully record the location and disposition of their own minefields, because warning signs can be destroyed or removed, and minefields should eventually be cleared. Minefields may also have marked or unmarked safe routes to allow friendly movement through them. Warfare: Placing minefields without marking and recording them for later removal is considered a war crime under Protocol II of the Convention on Certain Conventional Weapons, which is itself an annex to the Geneva Conventions. Warfare: Artillery and aircraft scatterable mines allow minefields to be placed in front of moving formations of enemy units, including the reinforcement of minefields or other obstacles that have been breached by enemy engineers. They can also be used to cover the retreat of forces disengaging from the enemy, or for interdiction of supporting units to isolate front line units from resupply. In most cases these minefields consist of a combination of anti-tank and anti-personnel mines, with the anti-personnel mines making removal of the anti-tank mines more difficult. Mines of this type used by the United States are designed to self-destruct after a preset period of time, reducing the requirement for mine clearing to only those mines whose self-destruct system did not function. Some designs of these scatterable mines require an electrical charge (capacitor or battery) to detonate. After a certain period of time, either the charge dissipates, leaving them effectively inert, or the circuitry is designed such that, upon reaching a low charge level, the device is triggered, destroying the mine. Warfare: Guerrilla warfare None of the conventional tactics and norms of mine warfare applies when they are employed in a guerrilla role: The mines are not used in defensive roles (for a specific position or area). Mined areas are not marked. Mines are usually placed singly and not in groups covering an area. Warfare: Mines are often left unattended (not covered by fire). Land mines were commonly deployed by insurgents during the South African Border War, leading directly to the development of the first dedicated mine-protected armoured vehicles in South Africa. Namibian insurgents used anti-tank mines to throw South African military convoys into disarray before attacking them. To discourage detection and removal efforts, they also laid anti-personnel mines directly parallel to the anti-tank mines. This initially resulted in heavy South African military and police casualties, as the vast stretches of road network vulnerable to insurgent sappers each day made comprehensive detection and clearance efforts impractical. The only other viable option was the adoption of mine-protected vehicles which could remain mobile on the roads with little risk to their passengers even if a mine was detonated. South Africa is widely credited with inventing the v-hull, a vee-shaped hull for armoured vehicles which deflects mine blasts away from the passenger compartment. During the ongoing Syrian Civil War, Iraqi Civil War (2014–2017) and Yemeni Civil War (2015–present), landmines have been used for both defensive and guerrilla purposes. Warfare: Laying mines Minefields may be laid by several means. The preferred, but most labour-intensive, way is to have engineers bury the mines, since this will make the mines practically invisible and reduce the number of mines needed to deny the enemy an area. Mines can be laid by specialized mine-laying vehicles. 
Mine-scattering shells may be fired by artillery from a distance of several tens of kilometers. Warfare: Mines may be dropped from helicopters or airplanes, or ejected from cluster bombs or cruise missiles. Warfare: Anti-tank minefields can be scattered with anti-personnel mines to make clearing them manually more time-consuming; and anti-personnel minefields are scattered with anti-tank mines to prevent the use of armored vehicles to clear them quickly. Some anti-tank mine types are also able to be triggered by infantry, giving them a dual purpose even though their main and official intention is to work as anti-tank weapons. Warfare: Some minefields are specifically booby-trapped to make clearing them more dangerous. Mixed anti-personnel and anti-tank minefields, anti-personnel mines under anti-tank mines, and fuses separated from mines have all been used for this purpose. Often, single mines are backed by a secondary device, designed to kill or maim personnel tasked with clearing the mine. Multiple anti-tank mines have been buried in stacks of two or three with the bottom mine fuzed, to multiply the penetrating power. Since the mines are buried, the ground directs the energy of the blast in a single direction: through the bottom of the target vehicle or on the track. Warfare: Another specific use is to mine an aircraft runway immediately after it has been bombed to delay or discourage repair. Some cluster bombs combine these functions. One example was the British JP233 cluster bomb, which included munitions to damage (crater) the runway as well as anti-personnel mines in the same cluster bomb. As a result of the anti-personnel mine ban, it was withdrawn from British Royal Air Force service, and the last stockpiles of the mine were destroyed on October 19, 1999. Demining: Metal detectors were first used for demining, after their invention by the Polish officer Józef Kosacki. His invention, known as the Polish mine detector, was used by the Allies alongside mechanical methods to clear the German minefields during the Second Battle of El Alamein, when 500 units were shipped to Field Marshal Montgomery's Eighth Army. The Nazis used captured civilians who were chased across minefields to detonate the explosives. According to Laurence Rees, "Curt von Gottberg, the SS-Obergruppenführer who, during 1943, conducted another huge anti-partisan action called Operation Kottbus on the eastern border of Belarus, reported that 'approximately two to three thousand local people were blown up in the clearing of the minefields'." Whereas the placing and arming of mines is relatively inexpensive and simple, the process of detecting and removing them is typically expensive, slow, and dangerous. This is especially true of irregular warfare, where mines were used on an ad hoc basis in unmarked and undocumented areas. Anti-personnel mines are the most difficult to find, due to their small size and the fact that many are made almost entirely of non-metallic materials specifically to evade metal detectors. Demining: Manual clearing remains the most effective technique for clearing minefields, although hybrid techniques involving the use of animals and robots are being developed. Many animals are desirable due to having a strong sense of smell capable of detecting a land mine. Animals such as rats and dogs can be trained to detect the explosive agent. Other techniques involve the use of geolocation technologies. 
As of 2008, a joint team of researchers at the University of New South Wales and Ohio State University was working to develop a system based on multi-sensor integration. Furthermore, defence firms have been increasingly competing on the creation of unmanned demining systems. In addition to conventional remote-control mine-defusing robots that operate through precise mechanical dismantling, electronic destabilization or kinetic triggering methods, fully autonomous methods are in development. Notably, these autonomous methods utilize unmanned ground systems, or more recently subterranean systems such as the EMC Operations Termite, using either outward pressure differentials along the system bodies or corkscrew mechanisms. Demining: The laying of land mines has inadvertently led to a positive development in the Falkland Islands. Minefields laid near the sea during the Falklands War have become favorite places for penguins, which do not weigh enough to detonate the mines. Therefore, they can breed safely, free of human intrusion. These odd sanctuaries have proven so popular and lucrative for ecotourism that efforts existed to prevent removal of the mines, but the area has since been demined. International treaties: The use of land mines is controversial because they are indiscriminate weapons, harming soldier and civilian alike. They remain dangerous after the conflict in which they were deployed has ended, killing and injuring civilians and rendering land impassable and unusable for decades. To make matters worse, many factions have not kept accurate records (or any at all) of the exact locations of their minefields, making removal efforts painstakingly slow. These facts pose serious difficulties in many developing nations where the presence of mines hampers resettlement, agriculture, and tourism. The International Campaign to Ban Landmines campaigned successfully to prohibit their use, culminating in the 1997 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, known informally as the Ottawa Treaty. International treaties: The Treaty came into force on March 1, 1999. The treaty was the result of the leadership of the Governments of Canada, Norway, South Africa and Mozambique working with the International Campaign to Ban Landmines, launched in 1992. The campaign and its leader, Jody Williams, won the Nobel Peace Prize in 1997 for their efforts. International treaties: The treaty does not include anti-tank mines, cluster bombs or claymore-type mines operated in command mode and focuses specifically on anti-personnel mines, because these pose the greatest long-term (post-conflict) risk to humans and animals, since they are typically designed to be triggered by any movement or pressure of only a few kilograms, whereas anti-tank mines require much more weight (or a combination of factors that would exclude humans). Existing stocks must be destroyed within four years of signing the treaty. International treaties: Signatories of the Ottawa Treaty agree that they will not use, produce, stockpile or trade in anti-personnel land mines. In 1997, there were 122 signatories; as of early 2016, 162 countries had joined the Treaty. Thirty-six countries, including the People's Republic of China, the Russian Federation and the United States, which together may hold tens of millions of stockpiled antipersonnel mines, are not party to the Convention. Another 34 have yet to sign on. 
The United States did not sign because the treaty lacks an exception for the Korean Demilitarized Zone. International treaties: There is a clause in the treaty, Article 3, which permits countries to retain land mines for use in training or development of countermeasures. Sixty-four countries have taken this option. International treaties: As an alternative to an outright ban, 10 countries follow regulations that are contained in a 1996 amendment of Protocol II of the Convention on Conventional Weapons (CCW). The countries are China, Finland, India, Israel, Morocco, Pakistan, South Korea and the United States. Sri Lanka, which had adhered to this regulation, announced in 2016 that it would join the Ottawa Treaty. Submunitions and unexploded ordnance from cluster munitions can also function as landmines, in that they continue to kill and maim indiscriminately long after conflicts have ended. The Convention on Cluster Munitions (CCM) is an international treaty that prohibits the use, distribution, or manufacture of cluster munitions. The CCM entered into force in 2010, and has been ratified by over 100 countries. Manufacturers: Before the Ottawa Treaty was adopted, the Arms Project of Human Rights Watch identified "almost 100 companies and government agencies in 48 countries" that had manufactured "more than 340 types of antipersonnel landmines in recent decades". Five to ten million mines were produced per year with a value of $50 to $200 million. The largest producers were probably China, Italy and the Soviet Union. The companies involved included giants such as Daimler-Benz, the Fiat Group, the Daewoo Group, RCA and General Electric. As of 2017, the Landmine & Cluster Munition Monitor identified four countries that were "likely to be actively producing" land mines: India, Myanmar, Pakistan and South Korea. Another seven states reserved the right to make them but were probably not doing so: China, Cuba, Iran, North Korea, Russia, Singapore, and Vietnam. In recent years, arms industry manufacturers have been developing non-static mines that can be specifically targeted, in order to remove the imprecision of antipersonnel devices, promoting the use of movable underground systems, movable above-ground systems and systems that can be expired (automatically or manually via strategic operators). Development of systems such as Termite, by the arms firm EMC Operations, has drawn criticism from proponents of past multilateral agreements against the placement of landmines and submunitions: after it was announced that the vehicles would likely be armed to destroy static targets rather than focus purely on demining efforts, critics expected similar long-dormancy problems once the systems break or fail. Impacts: Throughout the world there are millions of hectares that are contaminated with land mines. Impacts: Casualties From 1999 to 2017, the Landmine Monitor has recorded over 120,000 casualties from mines, IEDs and explosive remnants of war; it estimates that another 1,000 per year go unrecorded. The estimate for all time is over half a million. In 2017, at least 2,793 people were killed and 4,431 injured. 87% of the casualties were civilians and 47% were children (less than 18 years old). The largest numbers of casualties were in Afghanistan (2,300), Syria (1,906), and Ukraine (429). Impacts: Environmental Natural disasters can have a significant impact on efforts to demine areas of land. 
For example, the floods that occurred in Mozambique in 1999 and 2000 may have displaced hundreds of thousands of land mines left from the war. Uncertainty about their locations delayed recovery efforts. Impacts: Land degradation According to a study by Asmeret Asefaw Berhe, land degradation caused by land mines "can be classified into five groups: access denial, loss of biodiversity, micro-relief disruption, chemical composition, and loss of productivity". The effects of an explosion depend on: "(i) the objectives and methodological approaches of the investigation; (ii) concentration of mines in a unit area; (iii) chemical composition and toxicity of the mines; (iv) previous uses of the land and (v) alternatives that are available for the affected populations". Impacts: Access denial The most prominent ecological issue associated with landmines (or fear of them) is denial of access to vital resources (where "access" refers to the ability to use resources, in contrast to "property", the right to use them). The presence, and fear of the presence, of even a single landmine can discourage access for agriculture, water supplies and possibly conservation measures. Reconstruction and development of important structures such as schools and hospitals are likely to be delayed, and populations may shift to urban areas, increasing overcrowding and the risk of spreading diseases. Access denial can have positive effects on the environment. When a mined area becomes a "no-man's land", plants and vegetation have a chance to grow and recover. For example, formerly arable lands in Nicaragua returned to forests and remained undisturbed after the establishment of landmines. Similarly, the penguins of the Falkland Islands have benefited because they are not heavy enough to trigger the mines present. However, these benefits can only last as long as animals, tree limbs, etc. do not detonate the mines. In addition, long idle periods could "potentially end up creating or exacerbating loss of productivity", particularly within land of low quality. Impacts: Loss of biodiversity Landmines can threaten biodiversity by wiping out vegetation and wildlife during explosions or demining. This extra burden can push threatened and endangered species to extinction. They have also been used by poachers to target endangered species. Displaced people and refugees hunt animals for food and destroy habitat by making shelters. Shrapnel, or abrasions of bark or roots caused by detonated mines, can cause the slow death of trees and provide entry sites for wood-rotting fungi. When landmines make land unavailable for farming, residents resort to the forests to meet all of their survival needs. This exploitation furthers the loss of biodiversity. Impacts: Chemical contamination Near mines that have exploded or decayed, soils tend to be contaminated, particularly with heavy metals. Products produced from the explosives, both organic and inorganic substances, are most likely to be "long lasting, water-soluble and toxic even in small amounts". They can be introduced either "directly or indirectly into soil, water bodies, microorganisms and plants with drinking water, food products or during respiration". Toxic compounds can also find their way into bodies of water and accumulate in land animals, fish and plants. They can act "as a nerve poison to hamper growth", with deadly effect.
**GP Andromedae** GP Andromedae: GP Andromedae (often abbreviated to GP And) is a Delta Scuti variable star in the constellation Andromeda. It is a pulsating star, with its brightness varying with an amplitude of 0.55 magnitudes around a mean magnitude of 10.7. System: GP Andromedae is a main sequence Population I star of spectral type A3, placing it in the instability strip of the Hertzsprung-Russell diagram where Delta Scuti variables lie. A visual companion star 11 arcseconds away, named TYC 1739-1526-2, shares a common proper motion and has a similar distance (measured by parallax) to GP Andromedae. There is no proof, however, that the two stars are gravitationally bound. Variability: The observed variability of GP Andromedae is typical for a Delta Scuti variable; it is a purely monoperiodic, radially pulsating star with a period of 0.0787 days. The period of pulsations is slowly and continuously increasing, matching the predictions of stellar evolution models for Delta Scuti variables.
**Hyphema** Hyphema: Hyphema is the medical condition of bleeding in the front (anterior) chamber of the eye, between the iris and the cornea. People usually first notice a loss or decrease in vision. The eye may also appear to have a reddish tinge, or it may appear as a small pool of blood at the bottom of the iris or in the cornea. A traumatic hyphema is caused by a blow to the eye. A hyphema can also occur spontaneously. Presentation: A decrease in vision or a loss of vision is often the first sign of a hyphema. People with microhyphema may have slightly blurred or normal vision. A person with a full hyphema may not be able to see at all (complete loss of vision). The person's vision may improve over time as the blood moves by gravity lower in the anterior chamber of the eye, between the iris and the cornea. In many people the vision will improve; however, some people may have other injuries related to trauma to the eye or complications related to the hyphema. A microhyphema, in which red blood cells are suspended in the anterior chamber of the eye, is less severe. A layered hyphema, in which fresh blood is seen lower in the anterior chamber, is moderately severe. A full hyphema (total hyphema), when blood fills up the chamber completely, is the most severe. Presentation: Complications While the vast majority of hyphemas resolve on their own without issue, sometimes complications occur. Traumatic hyphema may lead to increased intraocular pressure (IOP), peripheral anterior synechiae, atrophy of the optic nerve, staining of the cornea with blood, re-bleeding, and impaired accommodation. Secondary hemorrhage, or rebleeding of the hyphema, is thought to worsen outcomes in terms of visual function and lead to complications such as glaucoma, corneal staining, optic atrophy, or vision loss. Rebleeding occurs in 4–35% of hyphema cases and is a risk factor for glaucoma. Young children with traumatic hyphema are at an increased risk of developing amblyopia, an irreversible condition. Causes: Hyphemas are frequently caused by injury, and may partially or completely block vision. The most common causes of hyphema are intraocular surgery, blunt trauma, and lacerating trauma. Hyphemas may also occur spontaneously, without any inciting trauma. Spontaneous hyphemas are usually caused by the abnormal growth of blood vessels (neovascularization), tumors of the eye (retinoblastoma or iris melanoma), uveitis, or vascular anomalies (juvenile xanthogranuloma). Additional causes of spontaneous hyphema include: rubeosis iridis, myotonic dystrophy, leukemia, hemophilia, and von Willebrand disease. Conditions or medications that cause thinning of the blood, such as aspirin, warfarin, or drinking alcohol, may also cause hyphema. In hyphema caused by blunt trauma to the eye, the source of bleeding is the circulus iridis major artery. Treatment: The main goals of treatment are to decrease the risk of re-bleeding within the eye, corneal blood staining, and atrophy of the optic nerve. Small hyphemas can usually be treated on an outpatient basis. There is little evidence that most of the commonly used treatments for hyphema (antifibrinolytic agents [oral and systemic aminocaproic acid, tranexamic acid, and aminomethylbenzoic acid], corticosteroids [systemic and topical], cycloplegics, miotics, aspirin, conjugated estrogens, traditional Chinese medicine, monocular versus bilateral patching, elevation of the head, and bed rest) are effective at improving visual acuity after two weeks. 
Surgery may be necessary for non-resolving hyphemas, or for hyphemas that are associated with high pressure that does not respond to medication. Surgery can be effective for cleaning out the anterior chamber and preventing corneal blood staining. If pain management is necessary, acetaminophen can be used. Aspirin and ibuprofen should be avoided, because they interfere with platelets' ability to form a clot and consequently increase the risk of additional bleeding. Sedation is not usually necessary for patients with hyphema. Treatment: Aminocaproic or tranexamic acids are often prescribed for hyphema on the basis that they reduce the risk of rebleeding by inhibiting the conversion of plasminogen to plasmin, and thereby keeping clots stable. However, the evidence for their effectiveness is limited, and aminocaproic acid may actually cause hyphemas to take longer to clear. Prognosis: Hyphemas require urgent assessment by an optometrist or ophthalmologist, as they may result in permanent visual impairment. A long-standing hyphema may result in hemosiderosis and heterochromia. Blood accumulation may also cause an elevation of the intraocular pressure. On average, the increased pressure in the eye remains for six days before dropping. Most uncomplicated hyphemas resolve within 5–6 days. Epidemiology: As of 2012, the rate of hyphemas in the United States is about 20 cases per 100,000 people annually. The majority of people with a traumatic hyphema are children and young adults. 60% of traumatic hyphemas are sports-related, and more cases occur in males than in females.
**ReactiveX** ReactiveX: ReactiveX (also known as Reactive Extensions) is a software library originally created by Microsoft that allows imperative programming languages to operate on sequences of data regardless of whether the data is synchronous or asynchronous. It provides a set of sequence operators that operate on each item in the sequence. It is an implementation of reactive programming and provides a blueprint for the tools to be implemented in multiple programming languages. Overview: ReactiveX is an API for asynchronous programming with observable streams. Asynchronous programming allows programmers to call functions and then have the functions "callback" when they are done, usually by giving the function the address of another function to execute when it is done. Programs designed in this way often avoid the overhead of having many threads constantly starting and stopping. Observable streams (i.e. streams that can be observed) in the context of Reactive Extensions are like event emitters that emit three events: next, error, and complete. An observable emits next events until it either emits an error event or a complete event. However, at that point it will not emit any more events, unless it is subscribed to again. Motivation: For sequences of data, it combines the advantages of iterators with the flexibility of event-based asynchronous programming. It also works as a simple promise, eliminating the pyramid of doom that results from multiple layers of callbacks. Overview: Observables and observers ReactiveX is a combination of ideas from the observer and the iterator patterns and from functional programming. An observer subscribes to an observable sequence. The sequence then sends the items to the observer one at a time, usually by calling the provided callback function. The observer handles each one before processing the next one. If many events come in asynchronously, they must be stored in a queue or dropped. In ReactiveX, an observer will never be called with an item out of order or (in a multi-threaded context) called before the callback has returned for the previous item. Asynchronous calls remain asynchronous and may be handled by returning an observable. Overview: It is similar to the iterator pattern in that if a fatal error occurs, it notifies the observer separately (by calling a second function). When all the items have been sent, it completes (and notifies the observer by calling a third function). The Reactive Extensions API also borrows many of its operators from iterator operators in other programming languages. Overview: Reactive Extensions is different from functional reactive programming as the Introduction to Reactive Extensions explains: It is sometimes called "functional reactive programming" but this is a misnomer. ReactiveX may be functional, and it may be reactive, but "functional reactive programming" is a different animal. One main point of difference is that functional reactive programming operates on values that change continuously over time, while ReactiveX operates on discrete values that are emitted over time. (See Conal Elliott's work for more-precise information on functional reactive programming.) Reactive operators: An operator is a function that takes one observable (the source) as its first argument and returns another observable (the destination, or outer observable). Then for every item that the source observable emits, it will apply a function to that item, and then emit it on the destination observable. 
It can even emit another Observable on the destination observable. This is called an inner observable. Overview: An operator that emits inner observables can be followed by another operator that in some way combines the items emitted by all the inner observables and emits the item on its outer observable. Examples include: switchAll – subscribes to each new inner observable as soon as it is emitted and unsubscribes from the previous one. mergeAll – subscribes to all inner observables as they are emitted and outputs their values in whatever order it receives them. concatAll – subscribes to each inner observable in order and waits for it to complete before subscribing to the next observable. Operators can be chained together to create complex data flows that filter events based on certain criteria. Multiple operators can be applied to the same observable. Some of the operators that can be used in Reactive Extensions may be familiar to programmers who use functional programming languages, such as map, reduce, group, and zip. There are many other operators available in Reactive Extensions, though the operators available in a particular implementation for a programming language may vary. Overview: Reactive operator examples Here is an example of using the map and reduce operators. We create an observable from a list of numbers. The map operator will then multiply each number by two and return an observable. The reduce operator will then sum up all the numbers provided to it (the value of 0 is the starting point). Calling subscribe will register an observer that will observe the values from the observable produced by the chain of operators. With the subscribe method, we are able to pass in an error-handling function, called whenever an error is emitted in the observable, and a completion function when the observable has finished emitting items. Overview: The above example uses the RxJS implementation of Reactive Extensions for the JavaScript programming language. History: Reactive Extensions (Rx) was created by the Cloud Programmability Team at Microsoft around 2011, as a byproduct of a larger effort called Volta. It was originally intended to provide an abstraction for events across different tiers in an application to support tier splitting in Volta. The project's logo represents an electric eel, which is a reference to Volta. The extensions suffix in the name is a reference to the Parallel Extensions technology which was invented around the same time; the two are considered complementary. History: The initial implementation of Rx was for .NET Framework and was released on June 21, 2011. Later, the team started the implementation of Rx for other platforms, including JavaScript and C++. The technology was released as open source in late 2012, initially on CodePlex. Later, the code moved to GitHub.
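A minimal sketch of the map/reduce example described in the Overview above, assuming RxJS 6 or later (the `from`, `map`, `reduce`, `pipe` and observer-style `subscribe` calls are standard RxJS API; the numbers are arbitrary sample data):

```typescript
import { from } from 'rxjs';
import { map, reduce } from 'rxjs/operators';

// Create an observable from a list of numbers, double each value,
// then sum the results starting from 0.
from([1, 2, 3, 4, 5])
  .pipe(
    map(x => x * 2),                 // emits 2, 4, 6, 8, 10
    reduce((sum, x) => sum + x, 0)   // emits a single value: 30
  )
  .subscribe({
    next: total => console.log(total),           // prints 30
    error: err => console.error('error:', err),  // called on an error event
    complete: () => console.log('completed'),    // called on completion
  });
```

The observer object passed to subscribe mirrors the three events described earlier: next, error, and complete.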
**International Centre for Genetic Engineering and Biotechnology** International Centre for Genetic Engineering and Biotechnology: The International Centre for Genetic Engineering and Biotechnology (ICGEB) was established as a project of the United Nations Industrial Development Organization (UNIDO) in 1983. The Organisation has three Component laboratories, in Trieste, Italy, New Delhi, India, and Cape Town, South Africa, with over 45 ongoing research projects in Infectious and Non-communicable Diseases and in Medical, Industrial and Plant Biology and Biotechnology. On February 3, 1994, under the direction of Arturo Falaschi, the ICGEB became an autonomous International Organisation and now has over 65 Member States across world regions. Its main pillars of action comprise: Research, Advanced Education through PhD and Postdoctoral Fellowships, International Scientific Meetings and Courses, competitive Grants for scientists in Member States, and Technology Transfer to industry.
**SEC61B** SEC61B: Protein transport protein Sec61 subunit beta is a protein that in humans is encoded by the SEC61B gene. The Sec61 complex is the central component of the protein translocation apparatus of the endoplasmic reticulum (ER) membrane. The Sec61 complex forms a transmembrane channel through which proteins are translocated across, and integrated into, the ER membrane. This complex consists of three membrane proteins: alpha, beta, and gamma. This gene encodes the beta-subunit protein. The Sec61 subunits are also observed in the post-ER compartment, suggesting that these proteins can escape the ER and recycle back. There is evidence for multiple polyadenylation sites for this transcript.
**Spine with fluid (hieroglyph)** Spine with fluid (hieroglyph): The Spine with fluid hieroglyph is used in words denoting "length", as opposed to 'breadth' (Egyptian usekh, meaning breadth or width; for example, the Usekh collar). Some example words for 'length' are: to be long, length, to extend, extended; and, in the related sense of to expand or to dilate, words such as joy, gladness, pleasure, and delight.
**Logic gate** Logic gate: A logic gate is an idealized or physical device that performs a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has, for instance, zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device (see ideal and real op-amps for comparison). In the real world, the primary way of building logic gates uses diodes or transistors acting as electronic switches. Today, most logic gates are made from MOSFETs (metal–oxide–semiconductor field-effect transistors). They can also be constructed using vacuum tubes, electromagnetic relays with relay logic, fluidic logic, pneumatic logic, optics, molecules, or even mechanical elements. With amplification, logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the algorithms and mathematics that can be described with Boolean logic. Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors, which may contain more than 100 million logic gates. Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates. In reversible logic, Toffoli or Fredkin gates are used. Electronic gates: A functionally complete logic system may be composed of relays, valves (vacuum tubes), or transistors. The simplest family of logic gates uses bipolar transistors, and is called resistor–transistor logic (RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes, resulting in diode–transistor logic (DTL). Transistor–transistor logic (TTL) then supplanted DTL. As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve a high speed with low power dissipation. Electronic gates: For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack many mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has reduced the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed. Other types of logic gates also exist; several logic families are discussed below. Electronic logic gates differ significantly from their relay-and-switch equivalents. 
They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate. Electronic gates: Another important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered. The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fan-out limit'. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed synchronous circuits. Additional delay can be caused when many inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the finite amount of current that each output can provide. History and development: The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), influenced by the ancient I Ching's binary system. Leibniz established that using the binary system combined the principles of arithmetic and logic. History and development: In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Early electro-mechanical computers were constructed from switches and relay logic rather than the later innovations of vacuum tubes (thermionic valves) or transistors (from which later electronic computers were constructed). Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, received part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate, built in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938). History and development: From 1934 to 1936, NEC engineer Akira Nakashima, Claude Shannon and Victor Shestakov introduced switching circuit theory in a series of papers showing that two-valued Boolean algebra, which they discovered independently, can describe the operation of switching circuits. Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Switching circuit theory became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II, with theoretical rigor superseding the ad hoc methods that had prevailed previously. Metal–oxide–semiconductor (MOS) devices in the forms of PMOS and NMOS were demonstrated by Bell Labs engineers Mohamed M. 
Atalla and Dawon Kahng in 1960. Both types were later combined and adapted into complementary MOS (CMOS) logic by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. Active research is taking place in molecular logic gates. Symbols: There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and derives from United States Military Standard MIL-STD-806 of the 1950s and 1960s. It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on ANSI Y32.14 and other early industry standards as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols. The IEC standard, IEC 60617-12, has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United Kingdom, and DIN EN 60617-12:1998 in Germany. Symbols: The mutual goal of IEEE Std 91-1984 and IEC 617-12 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates. They could range from medium-scale circuits, such as a 4-bit counter, to large-scale circuits, such as a microprocessor. Symbols: IEC 617-12 and its renumbered successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them. These are, however, shown in ANSI/IEEE Std 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard." IEC 60617-12 correspondingly contains the note (Section 2.1) "Although non-preferred, the use of other symbols recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form complex symbols (for example, use as embedded symbols) is discouraged." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another. Symbols: In the 1980s, schematics were the predominant method to design both circuit boards and custom ICs known as gate arrays. Today custom ICs and the field-programmable gate array are typically designed with Hardware Description Languages (HDL) such as Verilog or VHDL. Truth tables: Output comparison of 1-input logic gates. Output comparison of 2-input logic gates. Universal logic gates: Charles Sanders Peirce (during 1880–1881) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on it was unpublished until 1933. The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called Sheffer stroke; the logical NOR is sometimes called Peirce's arrow. Consequently, these gates are sometimes called universal logic gates. De Morgan equivalent symbols: By use of De Morgan's laws, an AND function is identical to an OR function with negated inputs and outputs. Likewise, an OR function is identical to an AND function with negated inputs and outputs. 
A NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs. De Morgan equivalent symbols: This leads to an alternative set of symbols for basic gates that use the opposite core symbol (AND or OR) but with the inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active high output to an active low input or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends. When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams – thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated. De Morgan equivalent symbols: A De Morgan symbol can show more clearly a gate's primary logical purpose and the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs is brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and output in the polarity that will drive the motor. De Morgan equivalent symbols: De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as combinations of only NOR gates, for economic reasons. Data storage and sequential logic: Logic gates can also be used to hold a state, allowing data storage. A storage element can be constructed by connecting several gates in a "latch" circuit. Latching circuitry is used in static random-access memory. More complicated designs that use clock signals and that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". Formally, a flip-flop is called a bistable circuit, because it has two stable states which it can maintain indefinitely. The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups, the overall system has memory; it is then called a sequential logic system since its output can be influenced by its previous state(s), i.e. by the sequence of input states. In contrast, the output from combinational logic is purely a combination of its present inputs, unaffected by the previous input and output states. Data storage and sequential logic: These logic circuits are used in computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application. 
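As a minimal sketch (illustrative TypeScript, not taken from any hardware library), the following models two of the ideas above: basic gates built from NAND alone, and a cross-coupled NAND latch that holds one bit of state.

```typescript
type Bit = 0 | 1;

// NAND is functionally complete: every other gate below is derived from it
// (the AND/OR constructions are De Morgan's laws in action).
const nand = (a: Bit, b: Bit): Bit => (a && b ? 0 : 1);

const not = (a: Bit): Bit => nand(a, a);
const and = (a: Bit, b: Bit): Bit => not(nand(a, b));
const or = (a: Bit, b: Bit): Bit => nand(not(a), not(b));
const nor = (a: Bit, b: Bit): Bit => not(or(a, b)); // another derived gate

// Cross-coupled NAND (set-reset) latch with active-low inputs.
// Re-evaluating the two gate equations until they stop changing stands in
// for the feedback path of the physical circuit.
function srLatch(setN: Bit, resetN: Bit, q: Bit, qN: Bit): [Bit, Bit] {
  for (let i = 0; i < 4; i++) {
    const nextQ = nand(setN, qN);
    const nextQN = nand(resetN, q);
    if (nextQ === q && nextQN === qN) break;
    q = nextQ;
    qN = nextQN;
  }
  return [q, qN];
}

let q: Bit = 0;
let qN: Bit = 1;
[q, qN] = srLatch(0, 1, q, qN); // pulse "set" low: Q becomes 1
[q, qN] = srLatch(1, 1, q, qN); // both inputs inactive: the state is held
console.log(q, qN);             // 1 0 -> the latch "remembers" being set
```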
Three-state logic gates: A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used on buses of the CPU to allow multiple chips to send data. A group of three-state buffers driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards. Three-state logic gates: In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive voltage). A low output would mean the output is sinking current to the negative power terminal (zero voltage). High impedance would mean that the output is effectively disconnected from the circuit. Manufacturing: Since the 1990s, most logic gates have been made in CMOS (complementary metal oxide semiconductor) technology that uses both NMOS and PMOS transistors. Often millions of logic gates are packaged in a single integrated circuit. Manufacturing: Non-electronic logic gates Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electro-mechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay, or mechanical logic gates, including on a molecular scale. Various types of fundamental logic gates have been constructed using molecules (molecular logic gates), which are based on chemical inputs and spectroscopic outputs. Logic gates have been made out of DNA (see DNA nanotechnology) and used to create a computer called MAYA (see MAYA-II). Logic gates can be made from quantum mechanical effects; see quantum logic gate. Photonic logic gates use nonlinear optical effects. Manufacturing: In principle any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND gate) can be used to make any kind of digital logic circuit. Note that the use of 3-state logic for bus systems is not needed, and can be replaced by digital multiplexers, which can be built using only simple logic gates (such as NAND gates, NOR gates, or AND and OR gates), as sketched below. Manufacturing: Logic families There are several logic families with different characteristics (power consumption, speed, cost, size) such as: RDL (resistor–diode logic), RTL (resistor-transistor logic), DTL (diode–transistor logic), TTL (transistor–transistor logic) and CMOS. There are also sub-variants, e.g. standard CMOS logic vs. advanced types that still use CMOS technology but include some optimizations for avoiding loss of speed due to slower PMOS transistors.
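To illustrate the note above that three-state bus drivers can be replaced by multiplexers built from ordinary gates, here is a small sketch of a 2-to-1 multiplexer assembled from AND, OR and NOT (illustrative TypeScript; the helper names are hypothetical, not from any library):

```typescript
type Bit = 0 | 1;

const not = (x: Bit): Bit => (x ? 0 : 1);
const and = (x: Bit, y: Bit): Bit => (x && y ? 1 : 0);
const or = (x: Bit, y: Bit): Bit => (x || y ? 1 : 0);

// 2-to-1 multiplexer: out = (a AND NOT sel) OR (b AND sel).
// sel = 0 routes input a to the output; sel = 1 routes input b,
// so several sources can share one output line without tri-state drivers.
const mux2 = (a: Bit, b: Bit, sel: Bit): Bit =>
  or(and(a, not(sel)), and(b, sel));

console.log(mux2(1, 0, 0)); // 1 (selects a)
console.log(mux2(1, 0, 1)); // 0 (selects b)
```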
**Transcendental anatomy** Transcendental anatomy: Transcendental anatomy, also known as philosophical anatomy, was a form of comparative anatomy that sought to find ideal patterns and structures common to all organisms in nature. The term originated from naturalist philosophy in the German provinces and culminated in Britain, especially in the work of Robert Knox and Richard Owen, who drew from Goethe and Lorenz Oken. From the 1820s to 1859, it persisted as the medical expression of natural philosophy before the Darwinian revolution. Amongst its various definitions, transcendental anatomy has four main tenets: (1) the presupposition of an Ideal Plan among the multiplicity of visible structures in the animal and plant kingdoms, and that this Plan determines function; (2) that the Ideal Plan acted as a force for the maintenance of anatomical uniformity (as opposed to the diversity-inducing forces of Nature); (3) the belief that this a priori Plan was discoverable; and (4) the desire to discover universal Laws underlying anatomical differences. History: Johann Wolfgang Goethe was one of many naturalists and anatomists in the nineteenth century who was in search of an Ideal Plan in nature. In Germany, this was known as the Urpflanze for the plant kingdom and the Urtier for animals. He popularized the term "morphology" for this search. Transcendental anatomy first derived from the naturalist philosophy known as Naturphilosophie. In the 1820s, French anatomist Etienne Reynaud Augustin Serres (1786–1868) popularized the term transcendental anatomy to refer to the collective morphology of animal development. Synonymous expressions such as philosophical anatomy, higher anatomy, and transcendental morphology also arose at this time. History: Some advocates regarded transcendental anatomy as the ultimate explanation for biological structures, while others saw it as one of several necessary explanatory devices. Vertebral theory: Transcendental anatomists theorized that the bones of the skull were "cranial vertebrae", or modified vertebral bones. Owen ardently supported the theory as major evidence for his theory of homology. The theory has since been discredited.
**OONI** OONI: The Open Observatory of Network Interference (OONI) is a project that monitors internet censorship globally. It relies on volunteers to run software that detects blocking and reports the findings to the organization. As of June 2023, OONI has analyzed 1,468.4 million network connections in 241 countries. Development: OONI was officially launched in 2012 as a free software project under The Tor Project, aiming to study and showcase global internet censorship. In 2017, OONI launched OONI Probe, a mobile app that runs a series of network measurements. These measurements detect blocked websites, apps, and other tools, in addition to the presence of middleboxes. Results of these tests can be accessed through the OONI Explorer and API. Up to 2018, the project had received $1,286,070 in funding from the Open Technology Fund. Tests: The current tests deployed by OONI are: Web connectivity; DNS consistency; HTTP host; HTTP requests; Facebook Messenger access; Telegram access; WhatsApp access; Signal access; HTTP header field manipulation detection; HTTP invalid request line detection; Meek fronted requests; Tor bridge access; Vanilla Tor access; Lantern access; Psiphon access; Dynamic Adaptive Streaming over HTTP (DASH) streaming; and NDT (Network Diagnostic Test). Notable cases: OONI collected data confirming the 2019 Internet blackout in Iran. On 24 February 2019, the Cuban independent news outlet Tremenda Nota confirmed the blocking of its website a few hours before a referendum in Cuba, in which a new constitution was put to a vote for the first time in decades. OONI network measurement data confirmed the blocking of the site along with several other independent media websites during the referendum. The project had previously confirmed the blocking of 41 websites in the country in 2017. Cases of internet censorship and network disruptions during elections have also been detected in Benin, Zambia, and Togo. In May 2019, OONI reported that the Chinese government had blocked all language editions of Wikipedia. Following the 2022 Russian invasion of Ukraine, OONI confirmed that most Russian Internet Service Providers started blocking access to Twitter, Facebook, Instagram, BBC, Deutsche Welle, Radio Free Europe, Voice of America, Interfax, Meduza, Dozhd, The New Times and 200rf (a website launched by the Ministry of Internal Affairs of Ukraine to enable Russians to find their family members who were captured or killed during the war).
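For readers who want to pull OONI results programmatically, the following is a minimal sketch. The endpoint and query parameters shown (api.ooni.io/api/v1/measurements with probe_cc, test_name and limit) are assumptions drawn from OONI's public API documentation and may change, so treat this as illustrative rather than authoritative.

```python
# Minimal sketch of listing recent OONI measurement metadata with `requests`.
# The endpoint and parameters (api.ooni.io/api/v1/measurements, probe_cc,
# test_name, limit) are assumed from OONI's public API docs; verify against
# the current documentation before relying on them.
import requests

def list_measurements(country_code="RU", test_name="web_connectivity", limit=5):
    resp = requests.get(
        "https://api.ooni.io/api/v1/measurements",
        params={"probe_cc": country_code, "test_name": test_name, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for m in list_measurements():
        # Each result summarises one probe run: when it ran, what it tested,
        # and whether an anomaly (possible blocking) was flagged.
        print(m.get("measurement_start_time"), m.get("input"), m.get("anomaly"))
```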
**Golden sombrero** Golden sombrero: In baseball, a golden sombrero is a player's inglorious feat of striking out four times in a single game. Etymology: The term derives from hat trick, and since four is greater than three, the rationale was that a four-strikeout performance should be referred to by a bigger hat, such as a sombrero. Though one account credits San Diego Padres player Carmelo Martínez with inventing the term in the 1980s, "sombrero" was already in use to describe a four-strikeout game as early as 1977, and "golden sombrero" appeared in print in a 1979 article about slang used by the minor league Jackson Mets. The "Olympic Rings" or platinum sombrero applies to a player striking out five times in a game. A horn refers to a player striking out six times in a game; the term was coined by pitcher Mike Flanagan after teammate Sam Horn of the Baltimore Orioles accomplished the feat in an extra-inning game in 1991. Alternate names for this accomplishment are titanium sombrero or double platinum sombrero. Major League Baseball: Notable recent four-strikeout games: On August 8, 2023, the San Diego Padres' Juan Soto recorded his first golden sombrero. Major League Baseball: On August 4, 2009, Tampa Bay Rays third baseman Evan Longoria went 2-for-6, recording a golden sombrero and two home runs; the second was a walk-off home run. This feat was also accomplished by Brandon Moss of the Oakland Athletics on April 30, 2013, in a 19-inning game against the Los Angeles Angels. On May 29, 2015, San Diego Padres catcher Derek Norris struck out swinging in his first four plate appearances, then hit a walk-off grand slam, becoming the first MLB player in the modern era to achieve a golden sombrero and a walk-off grand slam in the same game. On July 30, 2016, New York Yankees player Alex Rodriguez became the first MLB player to earn a golden sombrero after the age of 40 while having earned one before the age of 20. On October 11, 2017, Chicago Cubs third baseman Kris Bryant (0-for-4) and New York Yankees right fielder Aaron Judge (0-for-5) each recorded golden sombreros. Judge's sombrero was his third in the ALDS; he became the only player since 1903 to have three four-strikeout games in the same postseason. Prior to the start of the 2017 World Series, golden sombreros in the 2017 postseason had already tied the record set in 1997. An increase in the use of starting pitchers as relievers has been suggested as a cause. Major League Baseball: Major league players with the most four-strikeout games. Notable five-strikeout games: Sammy Sosa, Ray Lankford, and Javier Báez are the only players to earn a platinum sombrero more than twice. On March 31, 1996, Ron Karkovice became the first player to earn a platinum sombrero on Opening Day. On March 30, 2023, Max Muncy of the Los Angeles Dodgers recorded five strikeouts in an opening day game against the Arizona Diamondbacks. Major League Baseball: On July 25, 2017, Chicago Cubs infielder Javier Báez went 0-for-5, recording a platinum sombrero. On the same day, Seattle Mariners designated hitter Nelson Cruz went 0-for-6 with five strikeouts, also recording a platinum sombrero. This marked the first time in major league history in which two players from two different games achieved platinum sombreros on the same day. Major League Baseball: On April 3, 2018, Giancarlo Stanton recorded a platinum sombrero in his home debut for the New York Yankees. Stanton was booed as he left the field after his fifth strikeout.
Five days later, he became the first player to record two platinum sombreros in one season when he went 0-for-7 and struck out to end the game with two runners on and the Yankees down by one run. Stanton later recorded a golden sombrero in Game 1 of the 2018 American League Division Series, his second career playoff game. Major League Baseball: On June 22, 2016, Washington Nationals outfielder Michael A. Taylor recorded a platinum sombrero in a game against the Los Angeles Dodgers. In a performance one sportswriter suggested might be "the worst game in baseball history", Taylor went 0-for-5 with five Ks while leaving five men on base, and committed an error in the ninth inning that lost his team the game. On June 4, 2018, New York Yankees outfielder Aaron Judge earned a platinum sombrero and struck out a total of eight times over the course of a doubleheader against the Detroit Tigers, setting an MLB record. On May 26, 2019, Colorado Rockies shortstop Trevor Story recorded a platinum sombrero in a nine-inning game against the Baltimore Orioles. The next day, May 27, Chicago Cubs shortstop Javier Báez also recorded a platinum sombrero against the Houston Astros, his second. On June 18, 2019, Boston Red Sox designated hitter J. D. Martinez and Minnesota Twins third baseman Miguel Sanó recorded platinum sombreros in a seventeen-inning game. On September 30, 2020, St. Louis Cardinals outfielder Harrison Bader earned his platinum sombrero in Game One of the 2020 National League Wild Card Series between the Cardinals and the San Diego Padres, joining George Pipgras of the 1932 New York Yankees, Reggie Sanders of the 1995 Cincinnati Reds, Andrés Giménez of the Cleveland Guardians, and José Siri of the Tampa Bay Rays as the only players in MLB history to accomplish this feat in the playoffs. He finished 0-for-5 with six men left on base, though his Cardinals won 7–4. On October 8, 2022, Cleveland Guardians second baseman Andrés Giménez and Tampa Bay Rays centerfielder José Siri recorded platinum sombreros in a 15-inning playoff game, the only known occasion in which two players each had five strikeouts in the same playoff game. Giménez finished 0-for-5 and was on deck when the game ended 1–0 with a walk-off home run by Oscar Gonzalez. On April 1, 2023, Baltimore Orioles second baseman Ramón Urías recorded a platinum sombrero against the Boston Red Sox on exclusively swinging strikeouts, three times against starting pitcher Chris Sale, a fourth against reliever Josh Winckowski, and the fifth against reliever Chris Martin. On July 30, 2023, New York Yankees first baseman Anthony Rizzo earned his first platinum sombrero in a 9–3 loss to the Baltimore Orioles in which Yankees batters struck out eighteen times. This was one of the lowest points of a slump reaching back to May 29; during this stretch, Rizzo batted .168/.272/.224 with only one home run and nine RBIs over 44 games. Major League Baseball: Major league players with six strikeouts in a game: Only eight players have struck out six times in one game. All eight instances occurred in games that went to extra innings; the record for strikeouts in a nine-inning game is five. Minor League Baseball: The professional baseball record for strikeouts in a single game belongs to Khalil Lee, who, as a member of the minor league Lexington Legends, the Class A affiliate of the Kansas City Royals, struck out eight times in a 21-inning game in 2017.
College baseball: University of Texas catcher Cameron Rupp struck out six times in the Texas Longhorns' record-setting 25-inning game against the Boston College Eagles on May 30, 2009.
**Inch of mercury** Inch of mercury: The inch of mercury (inHg or ″Hg) is a non-SI unit of measurement for pressure. It is used for barometric pressure in weather reports, refrigeration and aviation in the United States. Inch of mercury: It is the pressure exerted by a column of mercury 1 inch (25.4 mm) in height at the standard acceleration of gravity. Conversion to metric units depends on the temperature of the mercury, and hence its density; a typical conversion factor is 1 inHg = 3,386.39 pascals (mercury at 0 °C). In older literature, an "inch of mercury" is based on the height of a column of mercury at 60 °F (15.6 °C). Inch of mercury: 1 inHg (60 °F) = 3,376.85 pascals (33.7685 hPa). In imperial units: 1 inHg (60 °F) = 0.489771 psi, or 2.041771 inHg (60 °F) = 1 psi. Applications: Aircraft and automobiles: Aircraft altimeters measure the relative pressure difference between the lower ambient pressure at altitude and a calibrated reading on the ground. In the United States, Canada and Japan, these altimeter readings are provided in inches of mercury, but most other nations use hectopascals. Ground readings vary with weather and along the route of the aircraft as it travels, so current readings are relayed periodically by air traffic control. Aircraft operating at higher altitudes (at or above what is called the transition altitude, which varies by country) set their barometric altimeters to a standard pressure of 29.92 inHg (1 atm = 29.92 inHg) or 1013.25 hPa (1 hPa = 1 mbar) regardless of the actual sea level pressure. The resulting altimeter readings are known as flight levels. Applications: Piston engine aircraft with constant-speed propellers also use inches of mercury to measure manifold pressure, which is indicative of engine power produced in engines equipped with a supercharger or turbosupercharger (naturally aspirated engines measure manifold vacuum instead). In automobile racing, particularly United States Auto Club and Champ Car Indy car racing, inches of mercury was the unit used to measure turbocharger inlet pressure. However, the inch of mercury is still used today in car performance modification to measure the amount of vacuum or pressure within the engine's intake manifold. This can be seen on boost gauges (forced induction) or vacuum gauges (natural induction), which give a rough indication of the relative power being produced at any given time. Applications: Cooling systems: In air conditioning and refrigeration, inHg is often used to describe "inches of mercury vacuum", or pressures below ambient atmospheric pressure, for recovery of refrigerants from air conditioning and refrigeration systems, as well as for leak testing of systems while under a vacuum, and for dehydration of refrigeration systems. The low-side gauge in a refrigeration gauge manifold indicates pressures below ambient in "inches of mercury vacuum" (inHg), down to a 30 inHg vacuum. Applications: Inches of mercury is also used in automotive cooling system vacuum test and fill tools. A technician will use this tool to remove air from modern automotive cooling systems, test the system's ability to hold vacuum, and subsequently refill using the vacuum as suction for the new coolant. Typical minimum vacuum values are between 22 and 27 inHg. Vacuum brakes: Inches of mercury was the usual unit of pressure measurement in railway vacuum brakes.
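The conversion factors quoted above can be turned into a small worked example. The sketch below simply encodes the stated constants; note how the choice of mercury temperature (0 °C versus 60 °F) shifts the hectopascal equivalent of the standard 29.92 inHg altimeter setting.

```python
# Worked conversion sketch using the factors quoted above: the conventional
# factor for mercury at 0 °C and the older 60 °F factor, illustrating why the
# conversion depends on the assumed mercury temperature.
PA_PER_INHG_0C   = 3386.39    # pascals per inch of mercury, mercury at 0 °C
PA_PER_INHG_60F  = 3376.85    # pascals per inch of mercury, mercury at 60 °F
PSI_PER_INHG_60F = 0.489771   # psi per inch of mercury at 60 °F

def inhg_to_hpa(inhg, pa_per_inhg=PA_PER_INHG_0C):
    return inhg * pa_per_inhg / 100.0        # 1 hPa = 100 Pa

# The standard altimeter setting of 29.92 inHg corresponds to roughly
# 1013.2 hPa with the 0 °C factor, but only about 1010.4 hPa with the 60 °F one.
print(round(inhg_to_hpa(29.92), 1))                    # ~1013.2
print(round(inhg_to_hpa(29.92, PA_PER_INHG_60F), 1))   # ~1010.4
print(round(29.92 * PSI_PER_INHG_60F, 3))              # ~14.654 psi
```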
**Red Hen Systems** Red Hen Systems: Red Hen Systems, Inc. is a technology company that develops integrated hardware and software solutions for multimedia asset mapping. Its spatial digital video recorders (DVRs) and still cameras geotag video and still photos at the time of data collection, and corresponding mapping software provides the capability to view photographs, video, and audio using GPS coordinates. History: In fall 1997, Neil Havermale and Ken Burgess founded Red Hen Systems, Inc. as a software engineering and systems manufacturer, developing in-field data collection, harvest yield recording systems, and input prescription mapping software for agricultural applications. Red Hen Systems, Inc. became an early leader in using GPS (Global Positioning System) data to sample and map field data, as well as to process and monitor crop yields. A year later, the company was searching for a way to edge-detect entry into and exit from a field at harvest to better align crop yield maps. The result was the use of hand-held video recorders that evolved into a video mapping system, developed by Ken Burgess, to help crop scouts identify and locate weeds and pests in the field. This system soon proved there would be new uses for geo-referenced imagery beyond the agricultural field. History: Today, Red Hen Systems provides technology to a wide variety of markets including GIS professionals, aerial video, the military, natural resource and environmental management, transportation, and utilities. Products: Photo Mapping Technology: Red Hen Systems produces two products for certain Nikon DSLR cameras capable of geotagging digital still images. The Blue2CAN connects the camera with a Bluetooth-enabled GPS unit, allowing the camera's software to geotag photos as they are taken. For non-Bluetooth applications, the DX-GPS includes a Garmin Geko unit designed to connect directly to the camera. Red Hen Systems is also a pioneer in smartphone integration, providing Blue2CAN users the ability to link their Android smartphone's GPS with the Blue2CAN and its Nikon hosts as of September 2010. Products: Video Mapping Technology: The integrated hardware and software solutions produced by Red Hen Systems create Geospatially Enabled Media (GEM). GEM is a full-motion video metadata recording system which associates the video (the "what") with the who, when, where, and why via UTC time, GPS, and other encoding. This is done by DAC and ADC recording of digital GPS and other metadata as an audible modem squelch into the left audio channel of a video camera's stereo recordings. The significance of this technology is that it allows for the editing of video while maintaining the integrity of its corresponding metadata. The encoding device that does this is referred to as a video mapping system, or VMS. Currently, there are multiple versions of this device suited to many different applications. In addition to individual VMS units, Red Hen Systems produces packaged digital video recorder (DVR) solutions that store geo-referenced video as it is produced. Products: Software: The development of the VMS led to the creation of software to decode this audio signal. In 2001, Red Hen Systems received a patent on a method to match this metadata with the recorded images. The original software designed to do this was made to work with tape-based media (as opposed to digital), and was called VMS player.
This early version was noted for its ease of use and flexibility since it supported maps from ESRI, a major producer of GIS products. Red Hen software has continued to evolve with video technology, moving from tape to mini-DVD to digital media, with each iteration allowing for more information. Products: Current offerings include hardware for mapping photo and video, and software for ESRI and non-ESRI applications, as well as a server for organization-wide content sharing.
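As a loose illustration of the photo-geotagging workflow described above (embedding GPS coordinates in an image at capture time), the following hypothetical sketch writes GPS EXIF tags into a JPEG using the third-party piexif library. The file name and coordinates are invented for the example, and this is not Red Hen's actual software.

```python
# Hypothetical sketch (not Red Hen's software): write GPS EXIF tags into a
# JPEG with the third-party `piexif` library to geotag a still photo.
import piexif

def to_dms_rational(deg_float):
    """Convert decimal degrees to the EXIF degrees/minutes/seconds rational form."""
    deg = int(deg_float)
    minutes_float = (deg_float - deg) * 60
    minutes = int(minutes_float)
    seconds = round((minutes_float - minutes) * 60 * 100)
    return ((deg, 1), (minutes, 1), (seconds, 100))

def geotag_jpeg(path, lat, lon):
    gps_ifd = {
        piexif.GPSIFD.GPSLatitudeRef: b"N" if lat >= 0 else b"S",
        piexif.GPSIFD.GPSLatitude: to_dms_rational(abs(lat)),
        piexif.GPSIFD.GPSLongitudeRef: b"E" if lon >= 0 else b"W",
        piexif.GPSIFD.GPSLongitude: to_dms_rational(abs(lon)),
    }
    exif_bytes = piexif.dump({"GPS": gps_ifd})
    piexif.insert(exif_bytes, path)   # rewrites the file's EXIF segment in place

# Example call with made-up file name and coordinates:
# geotag_jpeg("field_photo.jpg", 40.5853, -105.0844)
```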
**C16orf82** C16orf82: C16orf82 is a protein that, in humans, is encoded by the C16orf82 gene. C16orf82 encodes a 2,285-nucleotide mRNA transcript which is translated into a 154-amino-acid protein using a non-AUG (CUG) start codon. The gene has been shown to be predominantly expressed in the testis, tibial nerve, and pituitary gland, although expression has been observed across a majority of tissue types. The function of C16orf82 is not fully understood by the scientific community. Gene: Locus: C16orf82 is located in humans at locus 16p12.1 on the positive strand. Gene: General features: The gene encodes a 2,285-nucleotide mRNA transcript that is intronless. Human intronless genes represent a unique subset of the genome that are often involved in signaling, sperm formation, immune responses, or development; C16orf82 being such a gene indicates it may play a role in one of these processes. Translation of C16orf82 initiates at a non-AUG (CUG) start codon. The presence of the non-canonical start codon suggests possible increased regulation of C16orf82 translation and/or could allow for the translation of protein products that start with leucine instead of methionine, as seen in proteins coded for by some genes present in the major histocompatibility complex. DNA level regulation: Promoter: The C16orf82 promoter region has been predicted to contain a number of transcription factor binding sites, including binding sites for transcription factors within the SOX family. The presence of the SOX family binding sites suggests that C16orf82 may play a role in sex determination. Functional transcription factor studies show binding of the C16orf82 promoter by ARNT, ELF5, SMAD4, and STAT3. DNA level regulation: Expression: C16orf82 expression in humans has been observed at a constant level in major organ systems including the heart, liver, brain, and kidney. The tissue in which C16orf82 is most highly expressed is the testis, both by microarray experiments and by RNA-seq. C16orf82 expression is also highly variable between individuals, with some expressing the gene in large amounts while others barely express the gene within the same tissue type. MicroRNA (miR-483) overexpression has been shown to knock down C16orf82 expression. Protein: General features: The C16orf82 protein is 154 amino acids in length, with an approximate molecular weight of 16.46 kDa and a predicted isoelectric point of 6.06. There are no known variants or isoforms of C16orf82. Domains: C16orf82 contains one domain, DUF4694, whose function is currently uncharacterized. The domain spans from amino acid 8 to amino acid 153. DUF4694 contains an SSGY (serine-serine-glycine-tyrosine) sequence motif that is found in a majority of the protein's orthologs. There is no transmembrane domain, so the protein is not a transmembrane protein. Cellular localization: The localization of C16orf82 within a cell has been predicted to be nuclear. A bipartite nuclear localization signal can be found starting at Arg107. Protein: Post-translational modifications: The human C16orf82 protein has been predicted to be phosphorylated at a number of serine residues. O-linked glycosylation has also been predicted to occur at a number of sites, including some that overlap with the aforementioned phosphorylation sites.
The sites of overlap between the two types of post-translational modifications could play important regulatory roles in the activity and lifespan of the human C16orf82 protein. Protein: Secondary structure: The secondary structure of the human C16orf82 protein has been predicted to be largely disordered by a number of modeling programs. Evolution/homology: Paralogs: No paralogs of C16orf82 exist within humans. Evolution/homology: Orthologs: C16orf82 has over 100 predicted orthologs, all of which reside in the class Mammalia, and more precisely in the subclass Eutheria. All of the orthologs contain the domain DUF4694. The most distant ortholog detected was in the nine-banded armadillo (Dasypus novemcinctus), within the order Cingulata. Evolution/homology: Rate of evolution: C16orf82's rate of evolution was determined to be relatively fast, even in comparison to the gene for fibrinogen, which has been shown to evolve quickly. Clinical significance: Behavioral disorders: C16orf82 has been associated with schizophrenia through a genome-wide association study, and with autism based on copy number variation analysis. Currently, research has not shown whether C16orf82 plays any direct role in either of these disorders.
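Predicted protein parameters like the length, molecular weight and isoelectric point quoted above are typically computed from the amino acid sequence. The sketch below shows the general approach with Biopython's ProtParam module; the sequence used is a placeholder, not the real C16orf82 sequence, so the printed numbers will not match the published values.

```python
# Hypothetical sketch of how protein parameters such as molecular weight and
# isoelectric point are predicted from a sequence, using Biopython's ProtParam.
# The sequence below is filler, NOT the real 154-aa C16orf82 sequence; fetch
# the actual sequence from UniProt/NCBI before interpreting any numbers.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_sequence = "MSSGYARNDCEQGHILKMFPSTWYV" * 6   # 150 aa of filler
analysis = ProteinAnalysis(placeholder_sequence)

print("length:", len(placeholder_sequence))
print("molecular weight (Da):", round(analysis.molecular_weight(), 2))
print("isoelectric point:", round(analysis.isoelectric_point(), 2))
```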
**Conodont Alteration Index** Conodont Alteration Index: The Conodont Alteration Index (CAI) is used to estimate the maximum temperature reached by a sedimentary rock using thermal alteration of conodont fossils. Conodonts in fossiliferous carbonates are prepared by dissolving the matrix with weak acid, since the conodonts are composed of apatite and thus do not dissolve as readily as the carbonate. The fossils are then compared to the index under a microscope. The index was first developed by Anita Epstein and colleagues at the United States Geological Survey. The CAI ranges from 1 to 6. The CAI is commonly used by paleontologists due to its ease of measurement and the abundance of conodonts throughout marine carbonates of the Paleozoic. However, the group disappears from the fossil record after the Triassic period, so the CAI cannot be used to analyze rocks younger than about 200 million years. Additionally, the index can be positively skewed in regions of hydrothermal alteration.
**Network intelligence** Network intelligence: Network intelligence (NI) is a technology that builds on the concepts and capabilities of deep packet inspection (DPI), packet capture and business intelligence (BI). It examines, in real time, IP data packets that cross communications networks by identifying the protocols used and extracting packet content and metadata for rapid analysis of data relationships and communications patterns. It is also sometimes referred to as network acceleration. NI is used as middleware to capture and feed information to network operator applications for bandwidth management, traffic shaping, policy management, charging and billing (including usage-based and content billing), service assurance, revenue assurance, market research mega panel analytics, lawful interception and cyber security. It is currently being incorporated into a wide range of applications by vendors who provide technology solutions to Communications Service Providers (CSPs), governments and large enterprises. NI extends network controls, business capabilities, security functions and data mining for new products and services needed since the emergence of Web 2.0 and wireless 3G and 4G technologies. Background: The evolution and growth of Internet and wireless technologies offer possibilities for new types of products and services, as well as opportunities for hackers and criminal organizations to exploit weaknesses and perpetrate cyber crime. Network optimization and security solutions therefore need to address the exponential increases in IP traffic, methods of access, types of activity and volume of content generated. Traditional DPI tools from established vendors have historically addressed specific network infrastructure applications such as bandwidth management, performance optimization and quality of service (QoS). DPI focuses on recognizing different types of IP traffic as part of a CSP's infrastructure. NI provides more granular analysis: it enables vendors to create an information layer with metadata from IP traffic to feed multiple applications for more detailed and expansive visibility into network-based activity. Background: NI technology goes beyond traditional DPI, since it not only recognizes protocols but also extracts a wide range of valuable metadata. NI's value-add to solutions traditionally based on DPI has attracted the attention of industry analysts who specialize in DPI market research. For example, Heavy Reading now includes NI companies on its Deep Packet Inspection Semi-Annual Market Tracker. Business Intelligence for data networks: In much the same way that BI technology synthesizes business application data from a variety of sources for business visibility and better decision-making, NI technology correlates network traffic data from a variety of data communication vehicles for network visibility, enabling better cyber security and IP services. With ongoing changes in communications networks and how information can be exchanged, people are no longer linked exclusively to physical subscriber lines. The same person can communicate in multiple ways – FTP, Webmail, VoIP, instant messaging, online chat, blogs, social networks – and from different access points via desktops, laptops and mobile devices. Business Intelligence for data networks: NI provides the means to quickly identify, examine and correlate interactions involving Internet users, applications, and protocols, whether or not the protocols are tunneled or follow the OSI model.
The technology enables a global understanding of network traffic for applications that need to correlate information such as who contacts whom, when, where and how, or who accesses what database, when, and what information was viewed. When combined with traditional BI tools that examine service quality and customer care, NI creates a powerful nexus of subscriber and network data. Use in telecommunications: Telcos, Internet Service Providers (ISPs) and Mobile Network Operators (MNOs) are under increasing competitive pressures to move to smart pipe business models. The cost savings and revenue opportunities driving smart pipe strategies also apply to Network Equipment Providers, Software Vendors and Systems Integrators that serve the industry. Because NI captures detailed information from the hundreds of IP applications that cross mobile networks, it provides the required visibility and analysis of user demand to create and deliver differentiating services, as well as to manage usage once deployed. NI as enabling technology for smart pipe applications: Customer metrics are especially important for telecom companies to understand consumer behaviors and create personalized IP services. NI enables faster and more sophisticated Audience Measurement, User Behavior Analysis, Customer Segmentation, and Personalized Services. Real-time network metrics are equally important for companies to deliver and manage services. NI classifies protocols and applications from layers 2 through 7, generates metadata for communication sessions, and correlates activity between all layers, applicable for bandwidth and resource optimization, QoS, Content-Based Billing, quality of experience, VoIP Fraud Monitoring and regulatory compliance. Use in cloud computing: The economics and deployment speed of cloud computing are fueling rapid adoption by companies and government agencies. Among the concerns, however, are risks of information security, e-discovery, regulatory compliance and auditing. NI mitigates these risks by providing Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) vendors with real-time situational awareness of network activity, and critical transparency to allay the fears of potential customers. A vendor can demonstrate hardened network security to prevent Data Leakage or Data Theft, and an irrefutable audit trail of all network transactions – communication and content – related to a customer's account, assuming compliance with regulations and standards. Use in government: NI extracts and correlates information such as who contacts whom, when, where and how, providing situational awareness for Lawful Interception and Cyber Security. Real-time data capture, extraction and analysis allow security specialists to take preventive measures and protect network assets in real time, as a complement to post-mortem analysis after an attack. Use in business: Because NI combines real-time network monitoring with IP metadata extraction, it enhances the effectiveness of applications for Database Security, Database Auditing and Network Protection. The network visibility afforded by NI can also be used to build enhancements and next-generation solutions for Network Performance Management, WAN Optimization, Customer Experience Management, Content Filtering, and internal billing of networked applications.
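The per-packet "who contacts whom, when and how" metadata that NI systems build on can be illustrated with a short capture-analysis sketch. This is a minimal example using the third-party scapy library and a placeholder capture file, not any vendor's product; real NI platforms add timing, volume and application-layer attributes on top of records like these.

```python
# Minimal sketch (not any vendor's product) of per-packet metadata extraction:
# who talks to whom, over which protocol and destination port. Uses the
# third-party `scapy` library to read a capture file; "traffic.pcap" is a
# placeholder path.
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP

def summarize(pcap_path="traffic.pcap"):
    flows = Counter()
    for pkt in rdpcap(pcap_path):
        if not pkt.haslayer(IP):
            continue
        ip = pkt[IP]
        if pkt.haslayer(TCP):
            proto, dport = "TCP", pkt[TCP].dport
        elif pkt.haslayer(UDP):
            proto, dport = "UDP", pkt[UDP].dport
        else:
            proto, dport = ip.proto, None
        # The (src, dst, proto, dst-port) tuple is the minimal metadata record.
        flows[(ip.src, ip.dst, proto, dport)] += 1
    return flows

if __name__ == "__main__":
    for flow, count in summarize().most_common(10):
        print(count, flow)
```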
**Phenotypic plasticity** Phenotypic plasticity: Phenotypic plasticity refers to changes in an organism's behavior, morphology and physiology in response to a particular environment. Fundamental to the way in which organisms cope with environmental variation, phenotypic plasticity encompasses all types of environmentally induced changes (e.g. morphological, physiological, behavioural, phenological) that may or may not be permanent throughout an individual's lifespan. The term was originally used to describe developmental effects on morphological characters, but is now more broadly used to describe all phenotypic responses to environmental change, such as acclimation (acclimatization), as well as learning. The special case when differences in environment induce discrete phenotypes is termed polyphenism. Phenotypic plasticity: Generally, phenotypic plasticity is more important for immobile organisms (e.g. plants) than mobile organisms (e.g. most animals), as mobile organisms can often move away from unfavourable environments. Nevertheless, mobile organisms also have at least some degree of plasticity in at least some aspects of the phenotype. One mobile organism with substantial phenotypic plasticity is Acyrthosiphon pisum of the aphid family, which exhibits the ability to interchange between asexual and sexual reproduction, as well as growing wings between generations when plants become too populated. Phenotypic plasticity: Water fleas (Daphnia magna) have shown both phenotypic plasticity and the ability to genetically evolve to deal with the heat stress of warmer, urban pond waters. Examples: Plants: Phenotypic plasticity in plants includes the timing of transition from vegetative to reproductive growth stage, the allocation of more resources to the roots in soils that contain low concentrations of nutrients, the size of the seeds an individual produces depending on the environment, and the alteration of leaf shape, size, and thickness. Leaves are particularly plastic, and their growth may be altered by light levels. Leaves grown in the light tend to be thicker, which maximizes photosynthesis in direct light, and have a smaller area, which cools the leaf more rapidly (due to a thinner boundary layer). Conversely, leaves grown in the shade tend to be thinner, with a greater surface area to capture more of the limited light. Dandelions are well known for exhibiting considerable plasticity in form when growing in sunny versus shaded environments. The transport proteins present in roots also change depending on the concentration of the nutrient and the salinity of the soil. Some plants, Mesembryanthemum crystallinum for example, are able to alter their photosynthetic pathways to use less water when they become water- or salt-stressed. Because of phenotypic plasticity, it is hard to explain and predict traits when plants are grown in natural conditions unless an explicit environment index can be obtained to quantify the environment. Identification of explicit environment indices over critical growth periods that are highly correlated with sorghum and rice flowering time enables such predictions. Additional work is being done to support the agricultural industry, which faces severe challenges in predicting crop phenotypic expression in changing environments. Since many crops supporting the global food supply are grown in a wide variety of environments, understanding and the ability to predict crop genotype-by-environment interaction will be essential for future food stability.
Examples: Phytohormones and leaf plasticity: Leaves are very important to a plant in that they are where photosynthesis and thermoregulation occur. Evolutionarily, the environmental contribution to leaf shape allowed for a myriad of different types of leaves to be created. Leaf shape can be determined by both genetics and the environment. Environmental factors, such as light and humidity, have been shown to affect leaf morphology, giving rise to the question of how this shape change is controlled at the molecular level. This means that different leaves could have the same gene but present a different form based on environmental factors. Plants are sessile, so this phenotypic plasticity allows the plant to take in information from its environment and respond without changing its location. In order to understand how leaf morphology works, the anatomy of a leaf must be understood. The main part of the leaf, the blade or lamina, consists of the epidermis, mesophyll, and vascular tissue. The epidermis contains stomata, which allow for gas exchange and control transpiration of the plant. The mesophyll contains most of the chloroplasts, where photosynthesis occurs. Developing a wide blade or lamina can maximize the amount of light hitting the leaf, thereby increasing photosynthesis; however, too much sunlight can damage the plant. A wide lamina can also catch wind easily, which can cause stress to the plant, so finding a happy medium is imperative to the plant's fitness. A genetic regulatory network is responsible for creating this phenotypic plasticity and involves a variety of genes and proteins regulating leaf morphology. Examples: Phytohormones have been shown to play a key role in signaling throughout the plant, and changes in the concentration of phytohormones can cause a change in development. Studies on the aquatic plant species Ludwigia arcuata have examined the role of abscisic acid (ABA), as L. arcuata is known to exhibit phenotypic plasticity and has two different types of leaves, the aerial type (leaves that touch the air) and the submerged type (leaves that are underwater). When ABA was added to the underwater shoots of L. arcuata, the plant was able to produce aerial-type leaves underwater, suggesting that increased concentrations of ABA in the shoots, likely caused by air contact or a lack of water, trigger the change from the submerged type of leaf to the aerial type. This suggests ABA's role in leaf phenotypic change and its importance in regulating stress through environmental change (such as adapting from being underwater to above water). In the same study, another phytohormone, ethylene, was shown to induce the submerged leaf phenotype, unlike ABA, which induced the aerial leaf phenotype. Because ethylene is a gas, it tends to stay endogenously within the plant when underwater – this increase in the concentration of ethylene induces a change from aerial to submerged leaves and has also been shown to inhibit ABA production, further increasing the growth of submerged-type leaves. These factors (temperature, water availability, and phytohormones) contribute to changes in leaf morphology throughout a plant's lifetime and are vital to maximizing plant fitness. Examples: Animals: The developmental effects of nutrition and temperature have been demonstrated. The gray wolf (Canis lupus) has wide phenotypic plasticity. Additionally, male speckled wood butterflies have two morphs: one with three dots on its hindwing, and one with four dots on its hindwings.
The development of the fourth dot is dependent on environmental conditions – more specifically, location and the time of year. In amphibians, Pristimantis mutabilis shows remarkable phenotypic plasticity, as does Agalychnis callidryas, whose embryos exhibit plasticity by hatching early in response to disturbance to protect themselves. Another example is the southern rockhopper penguin. Rockhopper penguins are present in a variety of climates and locations: Amsterdam Island's subtropical waters, and the Kerguelen and Crozet archipelagos' subantarctic coastal waters. Due to the species' plasticity, they are able to express different strategies and foraging behaviors depending on the climate and environment. A main factor that has influenced the species' behavior is where food is located. Examples: Temperature: Plastic responses to temperature are essential among ectothermic organisms, as all aspects of their physiology are directly dependent on their thermal environment. As such, thermal acclimation entails phenotypic adjustments that are found commonly across taxa, such as changes in the lipid composition of cell membranes. Temperature change influences the fluidity of cell membranes by affecting the motion of the fatty acyl chains of glycerophospholipids. Because maintaining membrane fluidity is critical for cell function, ectotherms adjust the phospholipid composition of their cell membranes such that the strength of van der Waals forces within the membrane is changed, thereby maintaining fluidity across temperatures. Examples: Diet: Phenotypic plasticity of the digestive system allows some animals to respond to changes in dietary nutrient composition, diet quality, and energy requirements. Changes in the nutrient composition of the diet (the proportion of lipids, proteins and carbohydrates) may occur during development (e.g. weaning) or with seasonal changes in the abundance of different food types. These diet changes can elicit plasticity in the activity of particular digestive enzymes on the brush border of the small intestine. For example, in the first few days after hatching, nestling house sparrows (Passer domesticus) transition from an insect diet, high in protein and lipids, to a seed-based diet that contains mostly carbohydrates; this diet change is accompanied by a two-fold increase in the activity of the enzyme maltase, which digests carbohydrates. Acclimatizing animals to high-protein diets can increase the activity of aminopeptidase-N, which digests proteins. Poor-quality diets (those that contain a large amount of non-digestible material) have lower concentrations of nutrients, so animals must process a greater total volume of poor-quality food to extract the same amount of energy as they would from a high-quality diet. Many species respond to poor-quality diets by increasing their food intake, enlarging digestive organs, and increasing the capacity of the digestive tract (e.g. prairie voles, Mongolian gerbils, Japanese quail, wood ducks, mallards). Poor-quality diets also result in lower concentrations of nutrients in the lumen of the intestine, which can cause a decrease in the activity of several digestive enzymes. Animals often consume more food during periods of high energy demand (e.g. lactation or cold exposure in endotherms); this is facilitated by an increase in digestive organ size and capacity, which is similar to the phenotype produced by poor-quality diets.
During lactation, common degus (Octodon degus) increase the mass of their liver, small intestine, large intestine and cecum by 15–35%. Increases in food intake do not cause changes in the activity of digestive enzymes because nutrient concentrations in the intestinal lumen are determined by food quality and remain unaffected. Intermittent feeding also represents a temporal increase in food intake and can induce dramatic changes in the size of the gut; the Burmese python (Python molurus bivittatus) can triple the size of its small intestine just a few days after feeding. AMY2B (Alpha-Amylase 2B) is a gene that codes for a protein that assists with the first step in the digestion of dietary starch and glycogen. An expansion of this gene in dogs would enable early dogs to exploit a starch-rich diet as they fed on refuse from agriculture. Data indicated that wolves and the dingo had just two copies of the gene, and the Siberian Husky, which is associated with hunter-gatherers, had just three or four copies, whereas the Saluki, which is associated with the Fertile Crescent where agriculture originated, had 29 copies. The results show that, on average, modern dogs have a high copy number of the gene, whereas wolves and dingoes do not. The high copy number of AMY2B variants likely already existed as a standing variation in early domestic dogs, but expanded more recently with the development of large agriculturally based civilizations. Examples: Parasitism: Infection with parasites can induce phenotypic plasticity as a means to compensate for the detrimental effects caused by parasitism. Commonly, invertebrates respond to parasitic castration or increased parasite virulence with fecundity compensation in order to increase their reproductive output, or fitness. For example, water fleas (Daphnia magna) exposed to microsporidian parasites produce more offspring in the early stages of exposure to compensate for future loss of reproductive success. A reduction in fecundity may also occur as a means of re-directing nutrients to an immune response, or to increase longevity of the host. This particular form of plasticity has been shown in certain cases to be mediated by host-derived molecules (e.g. schistosomin in snails Lymnaea stagnalis infected with trematodes Trichobilharzia ocellata) that interfere with the action of reproductive hormones on their target organs. A change in reproductive effort during infection is also thought to be a less costly alternative to mounting resistance or defence against invading parasites, although it can occur in concert with a defence response. Hosts can also respond to parasitism through plasticity in physiology aside from reproduction. House mice infected with intestinal nematodes experience decreased rates of glucose transport in the intestine. To compensate for this, mice increase the total mass of mucosal cells, the cells responsible for glucose transport, in the intestine. This allows infected mice to maintain the same capacity for glucose uptake and body size as uninfected mice. Phenotypic plasticity can also be observed as changes in behaviour. In response to infection, both vertebrates and invertebrates practice self-medication, which can be considered a form of adaptive plasticity. Various species of non-human primates infected with intestinal worms engage in leaf-swallowing, in which they ingest rough, whole leaves that physically dislodge parasites from the intestine.
Additionally, the leaves irritate the gastric mucosa, which promotes the secretion of gastric acid and increases gut motility, effectively flushing parasites from the system. The term "self-induced adaptive plasticity" has been used to describe situations in which a behavior under selection causes changes in subordinate traits that in turn enhance the ability of the organism to perform the behavior. For example, birds that engage in altitudinal migration might make "trial runs" lasting a few hours that would induce physiological changes that would improve their ability to function at high altitude. Woolly bear caterpillars (Grammia incorrupta) infected with tachinid flies increase their survival by ingesting plants containing toxins known as pyrrolizidine alkaloids. The physiological basis for this change in behaviour is unknown; however, it is possible that, when activated, the immune system sends signals to the taste system that trigger plasticity in feeding responses during infection. Reproduction: The red-eyed tree frog, Agalychnis callidryas, is an arboreal frog (hylid) that resides in the tropics of Central America. Unlike many frogs, the red-eyed tree frog has arboreal eggs, which are laid on leaves hanging over ponds or large puddles and, upon hatching, the tadpoles fall into the water below. One of the most common predators encountered by these arboreal eggs is the cat-eyed snake, Leptodeira septentrionalis. In order to escape predation, the red-eyed tree frogs have developed a form of adaptive plasticity, which can also be considered phenotypic plasticity, when it comes to hatching age; the clutch is able to hatch prematurely and survive outside of the egg five days after oviposition when faced with an immediate threat of predation. The egg clutches take in important information from the vibrations felt around them and use it to determine whether or not they are at risk of predation. In the event of a snake attack, the clutch identifies the threat by the vibrations given off, which, in turn, stimulate hatching almost instantaneously. In a controlled experiment conducted by Karen Warkentin, hatching rate and ages of red-eyed tree frogs were observed in clutches that were and were not attacked by the cat-eyed snake. When a clutch was attacked at six days of age, the entire clutch hatched at the same time, almost instantaneously. However, when a clutch is not presented with the threat of predation, the eggs hatch gradually over time, with the first few hatching around seven days after oviposition and the last of the clutch hatching around day ten. Karen Warkentin's study further explores the benefits and trade-offs of hatching plasticity in the red-eyed tree frog. Evolution: Plasticity is usually thought to be an evolutionary adaptation to environmental variation that is reasonably predictable and occurs within the lifespan of an individual organism, as it allows individuals to 'fit' their phenotype to different environments. If the optimal phenotype in a given environment changes with environmental conditions, then the ability of individuals to express different traits should be advantageous and thus selected for. Hence, phenotypic plasticity can evolve if Darwinian fitness is increased by changing phenotype. A similar logic should apply in artificial evolution attempting to introduce phenotypic plasticity to artificial agents. However, the fitness benefits of plasticity can be limited by the energetic costs of plastic responses (e.g.
synthesizing new proteins, adjusting the expression ratio of isozyme variants, maintaining sensory machinery to detect changes) as well as the predictability and reliability of environmental cues (see Beneficial acclimation hypothesis). Evolution: Freshwater snails (Physa virgata) provide an example of when phenotypic plasticity can be either adaptive or maladaptive. In the presence of a predator, bluegill sunfish, these snails make their shell shape more rotund and reduce growth. This makes them more crush-resistant and better protected from predation. However, these snails cannot tell the difference in chemical cues between the predatory and non-predatory sunfish. Thus, the snails respond inappropriately to non-predatory sunfish by producing an altered shell shape and reducing growth. These changes, in the absence of a predator, make the snails susceptible to other predators and limit fecundity. Therefore, these freshwater snails produce either an adaptive or maladaptive response to the environmental cue depending on whether predatory sunfish are present or not. Given the profound ecological importance of temperature and its predictable variability over large spatial and temporal scales, adaptation to thermal variation has been hypothesized to be a key mechanism dictating the capacity of organisms for phenotypic plasticity. The magnitude of thermal variation is thought to be directly proportional to plastic capacity, such that species that have evolved in the warm, constant climate of the tropics have a lower capacity for plasticity compared to those living in variable temperate habitats. Termed the "climatic variability hypothesis", this idea has been supported by several studies of plastic capacity across latitude in both plants and animals. However, recent studies of Drosophila species have failed to detect a clear pattern of plasticity over latitudinal gradients, suggesting this hypothesis may not hold true across all taxa or for all traits. Some researchers propose that direct measures of environmental variability, using factors such as precipitation, are better predictors of phenotypic plasticity than latitude alone. Selection experiments and experimental evolution approaches have shown that plasticity is a trait that can evolve when under direct selection and also as a correlated response to selection on the average values of particular traits. Plasticity and climate change: Unprecedented rates of climate change are predicted to occur over the next 100 years as a result of human activity. Phenotypic plasticity is a key mechanism with which organisms can cope with a changing climate, as it allows individuals to respond to change within their lifetime. This is thought to be particularly important for species with long generation times, as evolutionary responses via natural selection may not produce change fast enough to mitigate the effects of a warmer climate. Plasticity and climate change: The North American red squirrel (Tamiasciurus hudsonicus) has experienced an increase in average ambient temperature of almost 2 °C over the last decade. This increase in temperature has caused an increase in abundance of white spruce cones, the main food source for winter and spring reproduction. In response, the mean lifetime parturition date of this species has advanced by 18 days. Food abundance showed a significant effect on the breeding date of individual females, indicating a high amount of phenotypic plasticity in this trait.
**Benz plane** Benz plane: In mathematics, a Benz plane is a type of 2-dimensional geometrical structure, named after the German mathematician Walter Benz. The term was applied to a group of objects that arise from a common axiomatization of certain structures and split into three families, which were introduced separately: Möbius planes, Laguerre planes, and Minkowski planes. Möbius plane: Starting from the real Euclidean plane and merging the set of lines with the set of circles to form a set of blocks results in an inhomogeneous incidence structure: three distinct points determine one block, but lines are distinguishable as the blocks that pairwise intersect in exactly one point without being tangent (or in no point, when parallel). Adding to the point set a new point ∞, defined to lie on every line, results in every block being determined by exactly three points, as well as the intersection of any two blocks following a uniform pattern (intersecting at two points, tangent, or non-intersecting). This homogeneous geometry is called classical inversive geometry or a Möbius plane. The inhomogeneity of the description (lines, circles, new point) can be seen to be non-substantive by using a 3-dimensional model: via stereographic projection, the classical Möbius plane is isomorphic to the geometry of plane sections (circles) on a sphere in Euclidean 3-space. Analogously to the (axiomatic) projective plane, an (axiomatic) Möbius plane defines an incidence structure. Möbius planes may similarly be constructed over fields other than the real numbers. Laguerre plane: Starting again from R^2 and taking the curves with equations y = ax^2 + bx + c (parabolas and lines) as blocks, the following homogenization is effective: add to the curve y = ax^2 + bx + c the new point (∞, a). Hence the set of points is (R ∪ {∞}) × R. This geometry of parabolas is called the classical Laguerre plane. (Originally it was conceived as the geometry of oriented lines and circles; the two geometries are isomorphic.) As for the Möbius plane, there exists a 3-dimensional model: the geometry of the elliptic plane sections on an orthogonal cylinder (in R^3). An abstraction leads (analogously to the Möbius plane) to the axiomatic Laguerre plane. Minkowski plane: Starting from R^2 and merging the lines y = mx + d (m ≠ 0) with the hyperbolas y = a/(x − b) + c (a ≠ 0) in order to get the set of blocks, the following idea homogenizes the incidence structure: add to any line the point (∞, ∞) and to any hyperbola y = a/(x − b) + c (a ≠ 0) the two points (b, ∞) and (∞, c). Hence the point set is (R ∪ {∞})^2. This geometry of the hyperbolas is called the classical Minkowski plane. Minkowski plane: Analogously to the classical Möbius and Laguerre planes, there exists a 3-dimensional model: the classical Minkowski plane is isomorphic to the geometry of plane sections of a hyperboloid of one sheet (non-degenerate quadric of index 2) in 3-dimensional projective space. Similar to the first two cases, we get the (axiomatic) Minkowski plane. Planar circle geometries or Benz planes: Because of the essential role of the circle (considered as the non-degenerate conic in a projective plane) and the plane description of the original models, the three types of geometries are subsumed under the name planar circle geometries or, in honor of Walter Benz, who considered these geometric structures from a common point of view, Benz planes.
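The Laguerre-plane construction above implies that three points with pairwise distinct x-coordinates determine exactly one block y = ax^2 + bx + c. A small numerical sketch (the sample points are chosen arbitrarily for illustration) makes this concrete by solving the corresponding Vandermonde system.

```python
# Illustrative sketch: in the classical Laguerre plane model described above,
# three points with pairwise distinct x-coordinates determine exactly one
# block y = a*x**2 + b*x + c. Solving the 3x3 Vandermonde system recovers the
# unique (a, b, c); a = 0 corresponds to a line rather than a parabola.
import numpy as np

def block_through(p1, p2, p3):
    xs = np.array([p1[0], p2[0], p3[0]], dtype=float)
    ys = np.array([p1[1], p2[1], p3[1]], dtype=float)
    if len(np.unique(xs)) < 3:
        raise ValueError("points must have pairwise distinct x-coordinates")
    # Rows [x**2, x, 1] form the Vandermonde matrix of the quadratic model.
    V = np.column_stack([xs**2, xs, np.ones(3)])
    a, b, c = np.linalg.solve(V, ys)
    return a, b, c

a, b, c = block_through((0, 1), (1, 2), (2, 5))
print(a, b, c)   # 1.0 0.0 1.0 -> the unique block through these points is y = x**2 + 1
```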
**Aerotolerant anaerobe** Aerotolerant anaerobe: Aerotolerant anaerobes use fermentation to produce ATP. They do not use oxygen, but they can protect themselves from reactive oxygen molecules; in contrast, obligate anaerobes can be harmed by reactive oxygen molecules. There are three categories of anaerobes: whereas obligate aerobes require oxygen to grow, obligate anaerobes are damaged by oxygen, aerotolerant organisms cannot use oxygen but tolerate its presence, and facultative anaerobes use oxygen if it is present but can grow without it. Most aerotolerant anaerobes have superoxide dismutase and (non-catalase) peroxidase but lack catalase. More specifically, they may use a NADH oxidase/NADH peroxidase (NOX/NPR) system or a glutathione peroxidase system. An example of an aerotolerant anaerobe is Cutibacterium acnes.
**Tiddlywinks** Tiddlywinks: Tiddlywinks is a game played on a flat felt mat with sets of small discs called "winks", a pot, which is the target, and a collection of squidgers, which are also discs. Players use a "squidger" (nowadays made of plastic) to shoot a wink into flight by flicking the squidger across the top of a wink and then over its edge, thereby propelling it into the air. The offensive objective of the game is to score points by sending one's own winks into the pot. The defensive objective is to prevent opponents from potting their winks by "squopping" them: shooting one's own winks to land on top of the opponents' winks. As part of strategic gameplay, players often attempt to squop their opponents' winks and to develop, maintain and break up large piles of winks.

Tiddlywinks: Tiddlywinks is sometimes considered a simple-minded, frivolous children's game rather than a sophisticated strategic game. However, the modern competitive game of tiddlywinks made a strong comeback at the University of Cambridge in 1955. The modern game uses far more complex rules and a consistent set of high-grade equipment.

Etymology: Tiddlywinks derives from British rhyming slang for an unlicensed public house or a small inn licensed only to sell beer and cider (tiddlywink, kiddlywink). Tiddly was slang for an alcoholic drink. It may be related to pillywinks.

Rules: Tiddlywinks is a competitive game involving four colours of winks. Each player controls the winks of one colour, the colours being blue, green, red and yellow. Red and blue are always partners against green and yellow. There are six winks of each colour, which begin the game in the corners of a felt mat measuring 6 feet by 3 feet. This mat is ordinarily placed on a table, and a pot is placed at its centre. There are two primary methods of play with the four colours of winks: a pairs game and a singles game. The pairs game involves four players, playing in partnerships, with each winker playing a single colour. The singles game involves a single winker playing against another single winker, each playing two colours of winks in alternation.

Rules: The players take turns, and there are two basic aims: to cover (or squop) opponent winks, and to get one's own winks into the pot. As in pool or snooker, if a player pots a wink of their own colour, they are entitled to an extra shot, and this enables a skilled player to pot all of their winks in one turn. The point of squopping, which is the key element distinguishing the modern competitive game from the children's game (though recognized even in the earliest rules from 1890), is that a wink that is covered (even partially) may not be played by its owner. The wink on top may be played, though, and sophisticated play involves shots manipulating large piles of winks.

Rules: The game ends in one of two ways: either all the winks of one colour are potted (a pot-out), or play continues up to a specified time limit (usually 25 minutes), after which each colour has a further five turns. Then a scoring system is used to rank the players, based on the numbers of potted and unsquopped winks of each colour.

Strategy: For many players, the appeal of the game lies in the combination of manual dexterity, strategic thought and tactics that it requires. Tiddlywinkers often claim that the game combines physical skill (such as in snooker or golf) with the strategy of chess. Tiddlywinks is unique in the combination of skill and strategy it requires.
Strategy in tiddlywinks is often rather deep, since winks can be captured by squopping (covering) them. Strategic and tactical planning involves anticipating opponents' moves rather than just building a sequence of one's own moves. Another factor that complicates the game is the time limit on play; the game does not merely run until some objective has been met.

Strategy: All in all, tiddlywinks goes beyond the purely cerebral nature of a game such as chess. The fact that shots can be made or missed, together with the continuum of possible outcomes, makes strategy much less rigid than in chess, and prevents planning more than seven or eight shots in advance.

Equipment: The winks and pot used in competitive play are standard, and are supplied by the English Tiddlywinks Association. The pots are made of moulded plastic (historically always red), with specified diameters at the top and the base, and a specified height. The winks are made to specified measurements, and are made by slicing an extruded cylinder rather than by moulding, and then smoothing them in a tumbler. Although this leads to some minor variation in thickness, it produces a much smoother edge to the wink than that seen on cheap moulded winks.

Equipment: The mats are made of thick felt. Mats obtained from different suppliers have different characteristics, and part of the skill of a tournament player is to adjust to different mats.

Equipment: Squidgers are custom-made by their owners or purchased from squidger makers. A player may use as many as they like, selecting an appropriate squidger for each shot. Top players may carry up to twenty different squidgers, but will not typically use all of them in one game. The rules governing squidgers permit a range of dimensions, and the material is not specified, except for the condition that squidgers must not damage either the winks or the mat. Modern squidgers are predominantly made from different types of plastic, though antique ones were made from bone, vegetable ivory, and other materials. Squidgers are usually filed or sanded to form a sharp edge and then polished.

Terminology: Selected terms used in the game include:
Blitz: an attempt to pot all six winks of a given player's colour early in the game
Bomb: to send a wink at a pile, usually from distance, in the hope of significantly disturbing it
Boondock: to free a squopped wink by sending it a long way away, leaving the squopping wink free in the battle area
Bristol: a shot which moves a pile of two or more winks as a single unit; the shot is played by holding the squidger at a right angle to its normal plane
Carnovsky (US)/Penhaligon (UK): potting a wink from the baseline (i.e., from 3 feet away)
Cracker (UK): a simultaneous knock-off and squop, i.e. a shot which knocks one wink off the top of another while simultaneously squopping it
Crud (UK): a forceful shot whose purpose is to destroy a pile completely
Good shot: named after John Good; the shot consists of playing a flat wink (one not involved in a pile) through a nearby pile with the intent of destroying the pile
Gromp: an attempt to jump a pile onto another wink (usually with the squidger held in a conventional rather than a Bristol fashion)
John Lennon memorial shot: a simultaneous boondock and squop
Lunch: to pot a squopped wink (usually belonging to an opponent)
Scrunge (UK): to bounce out of the pot
Squidger: the disc used to shoot a wink
Squop: to play a wink so that it comes to rest above another wink
Sub: to play a wink so that it (unintentionally) ends up under another wink
Tiddlies: points calculated when determining the finishing placement of winkers in a tiddlywinks game

History: Nineteenth century. The game began as a parlour game in Victorian England. Bank clerk Joseph Assheton Fincher (1863–1900) filed the original patent application for the game in 1888 and applied for the trademark Tiddledy-Winks in 1889. John Jaques and Son were the exclusive distributors of the game named Tiddledy-Winks. However, competition was quite fierce, and for several years starting in 1888 other game publishers came out with their own versions of the game using other names, including Spoof, Flipperty Flop, Jumpkins, Golfette, Maro, Flutter, and many others. It became one of the most popular crazes of the 1890s, played by adults and children alike. Throughout its history, many different varieties were produced to meet marketplace demands, including those combining tiddledy-winks principles with tennis, basketball, baseball, croquet, cricket, football, golf, and other popular sports and endeavours. Throughout the first half of the twentieth century, the public perception of the game changed.

History: Competition organisations. There are two national associations, the English Tiddlywinks Association (ETwA) and the North American Tiddlywinks Association (NATwA), the Scottish Tiddlywinks Association having disbanded in the late 1990s. These organisations are responsible for conducting tournaments and maintaining the rules of the game. International competition is overseen by the International Federation of Tiddlywinks Associations (IFTwA), founded on 16 June 1963, though in practice it is rarely called upon to intervene.

History: Although tiddlywinks nowadays is a singles or pairs game, competition from the 1950s until the 2000s centred on team competition, with teams consisting of several (two to four) pairs. There were a number of university teams, and international matches were also played. More recently, singles and pairs tournaments have come to be the focus of competitive tiddlywinks, with only a few team matches being played each year. The four most prestigious tournaments are the National Singles and National Pairs tournaments held in England and the United States. The World Singles and World Pairs championships operate on a challenge basis; anyone winning a national tournament (or being the highest-placed home player behind a foreign winner) is entitled to challenge the current champion.

History: There are several other less prestigious tournaments in England and the United States throughout the year, often with a format designed to encourage inexperienced players. The results of tournaments and world championship matches are used to calculate Tiddlywinks Ratings, which give a ranking of players.

History: 1950s. The birth of the modern game can be traced to a group of Cambridge University undergraduates meeting in Christ's College on 16 January 1955.
Their aim was to devise a sport at which they could represent the university. Within three years the Oxford University Tiddlywinks Society was formed, although the two universities had been playing matches since 1946. In 1957, an article appeared in The Spectator entitled "Does Prince Philip cheat at tiddlywinks?" Sensing a good publicity opportunity, the Cambridge University Tiddlywinks Club (CUTwC) challenged Prince Philip (later to become Chancellor of the University in 1976) to a tiddlywinks match to defend his honour. The Duke of Edinburgh appointed The Goons as his Royal champions. The Duke presented a trophy, the Silver Wink, designed and made by Robert Welch, for the British Universities Championship. The English Tiddlywinks Association (ETwA) was founded on 12 June 1958 with the Reverend Edgar "Eggs" Ambrose Willis as its first Secretary-General.

History: 1960s. During the 1960s, as many as 37 universities were playing the game in Great Britain.

History: In 1962, the Oxford University Tiddlywinks Society (OUTS) toured the United States for several weeks under the sponsorship of Guinness. They were undefeated against teams from various American colleges, including Harvard, and from newspapers. A match against the New York Giants was scheduled, but the football players backed out at the last moment. A very prominent article appeared in Life magazine on 14 December 1962 with coverage of the Harvard team. Harvard's Gargoyle Undergraduate Tiddlywinks Society (GUTS) dominated winks in this era. In the next few years, Harvard and other colleges continued to play, though at a low ebb. From 1962 to 1966, tiddlywinks play in the United States was governed by the National Undergraduate Tiddlywinks Association (NUTS).

History: The North American Tiddlywinks Association (NATwA) was formed on 27 February 1966, replacing NUTS, with founders from both American (Harvard University and Harvard Medical School) and Canadian (University of Waterloo and Waterloo Lutheran University) teams.

History: In the meantime, in the fall of 1965, Severin Drix started a team at Cornell and challenged his friend Ferd Wulkan of MIT to start a tiddlywinks team. MIT and Cornell played in NATwA's tiddlywinks tournaments starting in February 1967, and became dominant. The Harvard and Waterloo teams disappeared from the scene by 1968. The game took particularly strong root at MIT, and the early development of most American players can still be traced to MIT today.

History: While the basic elements of the modern strategic game were devised by CUTwC in its early years, the rules have continued to be modified under the auspices of the various national tiddlywinks associations. ETwA coordinated the game throughout the boom period of the 1960s, when winks flourished. A decline in interest within the UK in 1969-1970 led to the establishment of the three national competitions which have been contested to date, namely the National Singles, National Pairs, and the Teams of Four. There are also annual Open Competitions, notably in Oxford, Cambridge and London.

History: 1970s. The first serious trans-Atlantic contact was established in 1972, when a team from MIT including Dave Lockwood toured the UK. The success of the Americans shocked complacent Britons. Competition started at the highest level, the World Singles, in 1973. A challenge system was agreed between ETwA and NATwA. The supreme ruling body in world contests is the International Federation of Tiddlywinks Associations (IFTwA).
To challenge at the world level, a player must win one of the national titles, or finish as the highest-placed home player behind a foreign winner. There have been over 65 World Singles contests to date. The Americans dominated all the early matches, and it was not until the 22nd contest that a Briton won for the first time. Since then the top Britons and Americans have been closely matched. After the establishment of the World Singles, a World Pairs event followed, and there have since been over 40 World Pairs contests. International matches have been played since 1972.

History: Twenty-first century. During its history, winks has enjoyed variable levels of interest. The game has never taken a strong hold outside the UK and North America. The focus of British tiddlywinks is still at Cambridge, and CUTwC's 50th anniversary celebrations in 2005 were well attended. The Oxford University Tiddlywinks Society has fallen out of existence. Despite this, there has been some recent resurgence in the game, with new clubs formed at the University of York and at Shrewsbury School.

History: In America, there has been a tradition of tiddlywinks in Washington D.C., Boston, Eastern Ohio, and Ithaca, New York. There was a renewal of winks in 2007 through the MIT Tiddlywinks Association. National competitions are well attended, with a group of enthusiastic young players joining the stock of veteran players who have proved themselves at the highest level in world competition. In the US, the game had a firm footing in certain high schools, since the children of many of the players who took up the game in the late 1960s and early 1970s played when they were in high school. These players are now looking to revive university tiddlywinks in the United States.

History: On 1 March 2008, there was a Royal Match in Cambridge to commemorate the 50th anniversary of the original Royal Match played against The Goons in 1958 (see above). CUTwC players took on HRH Prince Philip's Royal Champions, the Savage Club, with members of the original 1958 CUTwC team in attendance. Cambridge repeated their victory from 1958 by winning the match 24-18. Since 2000, the World Singles championship has been dominated by Larry Kahn and Patrick Barrie, with each player having won seven matches (as of December 2019).
**Kidney ischemia** Kidney ischemia: Kidney ischemia is a disease with a high morbidity and mortality rate. Blood vessels shrink and undergo apoptosis, which results in poor blood flow in the kidneys. Further complications arise when failure of kidney function results in toxicity in various parts of the body, which may cause septic shock, hypovolemia, and a need for surgery. What causes kidney ischemia is not entirely known, but several pathophysiological mechanisms relating to this disease have been elucidated. Possible causes of kidney ischemia include the activation of IL-17C and hypoxia due to surgery or transplant. Signs and symptoms include injury to the microvascular endothelium, apoptosis of kidney cells due to overstress in the endoplasmic reticulum, dysfunction of the mitochondria, autophagy, inflammation of the kidneys, and maladaptive repair. Kidney ischemia can be diagnosed by checking the levels of several biomarkers such as clusterin and cystatin C. While the duration of ischemia has been used as a marker, it has significant flaws in predicting renal function outcomes. Emerging treatments in clinical trials include Bendavia, which targets mitochondrial dysfunction, and mesenchymal stem cell therapy. Several receptor agonists and antagonists have shown promise in animal studies; however, they have not been proven clinically yet.

Causes: Little is known as to what causes ischemic injury in the kidneys; however, several physical insults have been implicated. Physical stress such as infarction, surgery and transplant may produce kidney ischemia. Dietary habits and genetics could cause ischemic injury as well, and diseases such as sepsis can also cause kidney ischemia.

Causes: Infarction or Physical Injury. Infarction is defined as the blockage of blood flow in tissues or organs, which may cause necrosis, or death of a group of cells, in the tissue. In mouse models, clamping of the kidney may result in kidney ischemia.

Causes: Renal Surgery and Transplant. Renal surgery and coronary artery bypass grafting can produce renal ischemia and reperfusion injury. This could lead to an acute kidney injury. Moreover, renal ischemia can delay graft function after renal transplant and can cause rejection of the transplant.

Causes: Dietary Habits. In mouse models, a high-fat diet induces greater injury to the kidney after renal ischemia-reperfusion than a normal diet. This is because in a high-fat diet model, accumulation of phospholipids results in enlarged lysosomes within proximal tubular cells. This accumulation of phospholipids leads to increased aggregation of ubiquitin in the kidney cells. When this happens, autophagy becomes exaggerated and results in malfunction of the mitochondria and inflammation of the tissue.

Causes: Atherosclerosis. A common cause of ischemic renal disease is atherosclerosis. Atherosclerosis is a specific type of arteriosclerosis. Arteriosclerosis is defined as the thickening or stiffening, or both, of the blood vessels; more specifically, atherosclerosis refers to the buildup of cholesterol and fats in the artery walls. Because the blood vessels carry oxygen and nutrients throughout the body, atherosclerosis restricts blood flow and consequently prevents necessary nutrients from reaching the kidneys. It accounts for 60-97% of renal arterial lesions, which can lead to occlusion of the renal artery and ischemic atrophy of the kidneys.
Causes: Genetics. Several genetic pathways that lead to apoptosis of kidney cells have been implicated in mouse models and in-vitro assays. These proapoptotic genes can be categorized into two pathways: extrinsic and intrinsic. The extrinsic pathway is directly induced upon renal ischemic injury, while the intrinsic pathway depends on mitochondrial signaling. Moreover, several genes have been implicated as risk factors in the development of ischemic injury.

Causes: Extrinsic Pathway. Activation of pro-caspase 8 initiates apoptosis via signaling from cell-surface death receptors such as Fas, together with the adaptor proteins FADD and DAXX. This signaling cascade generally regulates programmed cell death, or apoptosis. Upregulation of Fas and FADD protein has been observed in mouse models after a 24-hour period of ischemic injury. This is also shown in cell-based assays wherein tubule cells are monitored after ischemic-like injury. This suggests that the Fas pathway may play a role in the pathogenesis of the apoptosis of tubule cells during the early ischemia-reperfusion period. The role of DAXX is still unclear; however, DAXX mediates both Fas-dependent and TGF-beta-induced apoptosis, and renal induction of TGF-beta is well documented in renal ischemia studies.

Causes: Intrinsic Pathway. Activation of pro-caspase 9 is dependent on mitochondrial signaling pathways, which are regulated by the Bcl-2 family of proteins. Activation of Bcl-2-family proteins such as Bax and Bak triggers a signaling cascade that results in the release of cytochrome c into the cytosol. This then activates pro-caspase 9 and results in apoptosis of the cells.

Causes: Genetic Risk Factors. Polymorphisms in several genes have been shown to increase or decrease the risk of renal ischemic injury. Genes such as apolipoprotein E (APO E), which controls cholesterol metabolism; NADPH oxidase, which regulates oxidative stress; angiotensin-converting enzyme (ACE), involved in vasomotor regulation; HSP72, which helps in tolerance of ischemic injury; interleukin cytokines, which modulate inflammation; and VEGF, which regulates angiogenesis (the formation of blood vessels), have all been shown to have significant effects in acute kidney injury.

Causes: Apolipoprotein E. Apolipoprotein E proteins metabolize fats in the body. In studies of patients undergoing coronary artery bypass grafting, carriers of the APO-E e4 allele were found to have a decreased risk of acute kidney injury compared to non-carriers of the allele.

Causes: NADPH Oxidase. NADPH oxidase regulates oxidative stress by conjugating with reactive oxygen species in cells. A polymorphism in the NADPH oxidase subunit p22phox, in carriers of the T allele, has been associated with a greater risk of dialysis and mortality.

Causes: Angiotensin-converting Enzyme. Angiotensin-converting enzyme regulates vasomotor tone by controlling the blood pressure going through the kidneys. As with the APO-E polymorphism, risk depends on the allele: patients with the D allele of ACE have an increased risk of acute kidney injury after coronary artery bypass grafting.

Causes: HSP72. In infant studies, the G allele of the HSP72 gene was shown to give an increased risk of acute kidney injury.

Causes: Interleukin. Researchers have found that IL-17C is activated in kidney injury. In hypoxia-induced studies of mice, upregulation of IL-17C synthesis was evident upon oxygen loss in the kidney. Moreover, knockout variants of IL-17C decreased the inflammation caused by activation of IL-17C. Using antibodies and siRNA against IL-17C also provided the same results.
Also, studies of the IL-6 −174GG polymorphism showed that carriers have higher creatinine levels in the blood; however, carriers of the G allele of IL-10 have a decreased risk of death after organ failure.

Causes: VEGF. Unlike the HSP72 polymorphism, infant studies show that a homozygous A allele of VEGF resulted in a reduced risk of acute kidney injury.

Pathophysiology: Several pathophysiological conditions that change when the kidney is undergoing ischemic injury are listed below. These include changes in the vasculature, endoplasmic reticulum stress, dysfunction of the mitochondria, autophagy of cells, inflammation, and incorrect or maladaptive repair.

Pathophysiology: Vasculature. Normal kidney function requires a large amount of oxygen; as such, the oxygen supply to the kidney is tightly regulated. Production of adenosine triphosphate and nitric oxide requires a high concentration of oxygen. These compounds, as well as some reactive oxygen species, are required for the kidney to function properly. With an injury, cellular respiration is compromised. This leads to an imbalance between the supply of oxygen and the products of cellular respiration. When that happens, the kidney undergoes oxidative stress, and injury to the microvascular endothelium promotes the recruitment of leukocytes and platelets. This leads to changes in perfusion and oxygen delivery.

Pathophysiology: Endoplasmic Reticulum Stress. Misfolded and unfolded proteins accumulate in the endoplasmic reticulum. This triggers the unfolded protein response (UPR). The unfolded protein response is an adaptive mechanism to restore cell and tissue homeostasis. If the stress is too severe, the maladaptive response is activated and the C/EBP homologous protein (CHOP) pathway is induced. This leads to apoptosis.

Pathophysiology: Mitochondrial Dysfunction. In acute kidney ischemia, the proximal tubules are vulnerable to mitochondrial dysfunction because they rely on aerobic metabolism and are in a more oxidized state than the distal tubules. When mitochondrial dysfunction happens, cellular respiration is disrupted. This leads to the mitochondria releasing pro-apoptotic proteins such as cytochrome c and ends in the death of kidney cells.

Pathophysiology: Autophagy. During ischemic stress, cross-talk between the mitochondria and the UPR is activated. This results in autophagy, by which proteins, organelles, and cytoplasmic components are recycled and degraded by the lysosomes. The process of autophagy helps in removing unnecessary components of the cells to maintain more important functions. In this case, autophagy is induced in kidneys in response to hypoxia to protect against further kidney injury.

Pathophysiology: Inflammation. The renal inflammatory process involves events that lead to injury or death of renal cells. When the kidneys mount inflammatory responses, they produce mediators such as bradykinin, histamine, and pro-inflammatory cytokines such as interleukin-1 and tumor necrosis factor-alpha. In mouse models, removal of these mediators from plasma has been shown to be beneficial.

Pathophysiology: Maladaptive Repair. When an injury is severe, the adaptive responses that are activated to restore normal cell and tissue homeostasis become maladaptive. This leads to cell and tissue malfunction and can lead to chronic kidney disease progression.

Pathophysiology: Physical Symptoms. Certain kidney features can be clinically suggestive of renal ischemia.
Because renal failure can be correlated with hypertension, both of these conditions have been observed together. In general, kidney sizes differ in patients with acute kidney ischemia. Hypertension, acute renal failure, progressive azotemia, and acute pulmonary edema are also signs of a developing ischemic injury in hypertensive patients.

Pathophysiology: Kidney size differences. In normal patients, the lengths of the two kidneys differ by less than 1.5 cm; however, hypertensive patients tend to have asymmetric kidney sizes. This strongly suggests ischemic renal disease.

Pathophysiology: Renovascular hypertension. Renovascular hypertension, or renal artery stenosis, is characterized as an increase in blood pressure through the arteries to the kidneys. This is due to an abnormal narrowing of the arteries.

Pathophysiology: Acute renal failure caused by the treatment of hypertension. In patients with hypertension, treatment of the disease using angiotensin-converting enzyme inhibitors (ACEIs) is sometimes necessary. The glomerular filtration rate (GFR) in patients is regulated by vasoconstriction of the efferent arteriole. When an ACEI is taken by the patient, this vasoconstrictor effect of the efferent arteriole is blocked. This leads to a decrease in GFR and can lead to acute renal failure. Studies have shown that 6-38% of patients with renal vascular disease or hypertension will develop acute renal ischemia through acute renal failure.

Pathophysiology: Progressive azotemia (with renovascular hypertension, refractory or severe hypertension, or atherosclerotic diseases). Azotemia is characterized as an increase of creatinine and blood urea nitrogen (BUN) in the plasma. Patients who have renovascular hypertension often suffer a deterioration of renal function. Likewise, patients who are being treated with an antihypertensive drug for renovascular, refractory or severe hypertension exhibit progressive azotemia. Acute kidney ischemia may result from taking ACEIs due to the alteration of intrarenal hemodynamics.

Pathophysiology: Acute pulmonary edema. Acute pulmonary edema is characterized as a fluid collection in the air sacs of the lungs. This makes it difficult for patients to breathe. Patients with poorly controlled hypertension and renal insufficiency usually also have recurrent acute pulmonary edema. While patients may have other risk factors for pulmonary edema, volume-dependent renovascular hypertension appears to be the dominant factor.

Diagnosis and Screening: Screening of biomarkers is one way to assess whether a patient's kidneys are functioning normally.

Diagnosis and Screening: Biomarkers.
Creatinine - Serum creatinine is a standard biomarker used to define acute kidney injury. However, it is insensitive and nonspecific; it is also a fairly late marker of damage, and is highly dependent on diet, skeletal muscle function, and kidney stability.
Clusterin - Clusterin is a protein ubiquitously expressed in different cell lines. Secreted clusterin is involved in lipid transport. Cells release clusterin as a response to cell stress, since clusterin can protect the cell by reducing oxidative stress and by binding to misfolded proteins.
Cystatin C - Cystatin C is produced by cells throughout the body and is used as a biomarker. The level of cystatin C is used to determine whether the kidney is functioning well, since it is removed from the blood by the kidney through glomerular filtration. Therefore, a high amount of cystatin C in the blood is a determinant of kidney injury.
EGF - Lower levels of EGF mRNA and protein in the kidneys are indicative of injury after kidney ischemia and reperfusion.
KIM-1 - Kidney injury molecule-1 is a protein that is highly expressed upon kidney injury. Higher levels of KIM-1 therefore signify injury to the proximal tubules due to ischemia.
IL-6 - Interleukin-6 expression is a response to ischemia and reperfusion injury linked to renal dysfunction. It is also highly expressed when a kidney transplant is rejected.
Endothelin-1 - Endothelin-1 is a vasoconstrictor and can be detected in the urine. More specifically, urinary endothelin-1 levels are used as an acute marker in cold ischemic reperfusion and injury.
NGAL - Neutrophil gelatinase-associated lipocalin 2 is expressed in neutrophils and at low levels in the kidney, prostate, and epithelia of the respiratory and alimentary tracts. NGAL is used as a biomarker for kidney injury because high NGAL excretion correlates with ischemic insult. NGAL is secreted at high levels in the blood and urine within two hours of injury.
IMA - Ischemia-modified albumin can be used as an early biomarker for ischemic injury. Moreover, the amount of IMA in the blood is proportional to the duration of ischemic injury and necrosis, so it can be used to estimate how long the injury has persisted.
TIMP-2 and IGFBP7 - In the distal tubular regions, elevated expression of TIMP-2 and IGFBP7 is linked to early tubular damage in ischemic injury and reperfusion, as well as acute kidney injury.

Diagnosis and Screening: Imaging Tests - Duplex Doppler Sonography. Duplex Doppler sonography (DDS) is an imaging test for evaluating blood flow in the kidney or the renal system. B-mode ultrasonography is combined with Doppler ultrasonography to locate and assess the renal artery and the velocity of blood flowing through it. This test is useful even in the presence of azotemia, and for patients with hypertension it is not necessary to suspend the administration of ACEIs. By assessing the velocity of blood flow, doctors can determine whether the kidney is receiving enough blood and nutrients to function normally.

Diagnosis and Screening: Magnetic Resonance Angiography. Similar to DDS, magnetic resonance angiography (MRA) also images blood vessels. MRA uses magnetic resonance and, unlike a traditional angiogram, does not require inserting a catheter. This test can be used to evaluate stenosis and occlusions in the kidney. It can also be used to detect aneurysms in the brain, and MRA is used clinically to check blood vessels in other parts of the body, such as the thorax, lower limbs, and heart.

Diagnosis and Screening: Functional Tests - Plasma renin activity. Plasma renin activity is also known as the renin assay. This assay measures the activity of renin, also known as angiotensinogenase, which plays a role in blood pressure regulation and urine output. This is considered a non-invasive test, and patients who are taking ACEIs should opt to take it. It is useful in detecting renovascular hypertension, one of the signs of kidney ischemia, with sensitivity of up to 90%. However, renal failure may diminish the accuracy of this test.

Diagnosis and Screening: Renography after administration of ACEI. Renography uses radioisotopes to diagnose renovascular disease.
This test compares the function of a normal kidney against a stenotic kidney by measuring the amount of radionuclide reaching the kidney and being excreted by it. Two radionuclides are used in renography: Tc99m-MAG3 (mercaptoacetyltriglycine) and Tc99m-DTPA (diethylenetriaminepentaacetate). In this test, the radionuclides are injected intravenously. The compound then progresses through the renal system and is tracked with a gamma camera. The camera takes images at intervals and a measurement of the radioactivity is taken. By performing this scan, doctors can differentiate between kidney ischemia and intrinsic renal disease by checking the time taken for the radioactivity to peak and decline. The test is highly accurate for renovascular hypertension, with a specificity of 95% and a sensitivity of 96%.

Treatments: Traditional Treatments. Knowledge of renal ischemia comes largely from animal studies. Based on these studies, kidney transplants and retrospective partial nephrectomy series indicate that the risk of renal function impairment rises the longer the ischemic injury persists. However, based on historical studies, the use of the duration of ischemia as a dichotomous marker has been found to have significant flaws in predicting renal function outcomes. The duration of kidney ischemia does not affect kidney function either in the short term or the long term.

Treatments: Ischemic Preconditioning. In patients who receive a kidney transplant or a coronary artery bypass, ischemic preconditioning may be given. In ischemic preconditioning, the kidney is exposed to a tolerable amount of ischemia. This preconditions the kidney to tolerate subsequent ischemia-induced injuries. It reduces cell lysis and apoptosis of kidney cells and improves overall renal function post-ischemia compared with not having the preconditioning.

Treatments: Furosemide to Promote Post-perfusion Diuresis. Furosemide is a common diuretic and is used to prevent or reverse acute kidney injury. A diuretic is a substance that promotes excretion of water from the body. When the kidneys undergo ischemia, it is followed by reperfusion, a return of blood supply to the organ. As such, using diuretics helps in getting rid of excess water in the kidneys after reperfusion. Furosemide, taken as a tablet, as a liquid solution, or via injection, used either as a preventative measure or as a treatment of kidney ischemia, has been shown to reduce the severity of renal failure, reduce apoptosis induced by ischemia, and speed the recovery of renal function. This has also reduced the need for surgical renal replacement in some patients.

Treatments: Fenoldopam Mesylate. Fenoldopam is used postoperatively in treating acute kidney injury, provided it is given before kidney damage occurs. Similar to furosemide, it can be taken orally or intravenously; however, its bioavailability, or the amount of the drug that reaches the blood circulation, is reduced if taken orally. Fenoldopam is used as a vasodilator and can increase blood flow to the kidneys, as well as renin secretion. Thus, it can be used to regulate the blood pressure in the arteries and reduce injury due to ischemia.

Treatments: Emerging Treatments - Bendavia. Bendavia is currently in clinical studies targeting mitochondrial dysfunction. It is protective in rat models of kidney ischemia when administered before the injury. Bendavia binds to cardiolipin on the inner mitochondrial membrane, and this inhibits cytochrome c peroxidase activity.
This protects respiration during early reperfusion and accelerates the recovery of ATP. In the animal models, tubular cell death and dysfunction were found to be reduced.

Treatments: Therapeutic Gases: CO, NO, and H2S. Carbon monoxide (CO) helps stabilize HIF, which helps regulate autophagy and the hypoxic response; through this, inflammation and tissue injury are limited. Nitric oxide (NO) is a byproduct of the metabolism of arginine to citrulline by NO synthase. This gas is present in all cells, and inhalation of NO has been found to be therapeutically active: it reduces pulmonary vasoconstriction and lessens apoptosis during renal ischemia. Hydrogen sulfide (H2S) is also an endogenous product of metabolic activity in cells, a byproduct of the metabolism of cysteine by cystathionine-beta-synthase. As with NO, inhalation of H2S has been found to be therapeutic and has been shown to induce hypothermia and stabilize cardiovascular hemodynamics, which protects against ischemic injury.

Treatments: Mesenchymal Stem Cell Therapy. Mesenchymal stem cells (MSCs) are multipotent adult stem cells capable of differentiating into different types of cells. This is a promising line of therapy, as regenerative medicine has shown benefits in the restoration of the kidneys. MSCs have anti-inflammatory properties and have been applied in animal models and human patients. Because of their regenerative capabilities, the kidney can benefit from their transdifferentiation into kidney cells. Moreover, they confer anti-inflammatory and immunomodulatory effects, thereby protecting the kidney as well as helping repair it after ischemic injury.

Outcome: Ischemic kidney injury might result in fibrosis, irreversible renal dysfunction, and a need for renal replacement therapy. Acute kidney ischemia is associated with high mortality. Chronic ischemic kidney disease (CIKD) usually involves loss of renal parenchyma or reduction of GFR caused by gradual vascular obstruction. Clinically, the term "ischemic renal disease" most often describes CIKD, which contributes to 6–27% of end-stage kidney disease, particularly among patients older than 50 years.
**Super NES CD-ROM** Super NES CD-ROM: The Super NES CD-ROM System (commonly shortened to the SNES-CD), known as the Super Famicom CD-ROM Adapter in Japan, is an unreleased add-on for the Super Nintendo Entertainment System (SNES) video game console. It built upon the functionality of the cartridge-based SNES by adding support for a CD-ROM-based format known as Super Disc. The SNES-CD was developed in a joint venture between Nintendo and Sony. As well as the SNES add-on, Sony planned to release a hybrid console, the PlayStation, similar to Sharp's Twin Famicom and NEC's TurboDuo. Another partnership, with Philips, yielded a few Nintendo-themed games for the CD-i platform instead of the SNES-CD. After the SNES-CD was canceled, Sony developed its own console using the PlayStation name. The first PlayStation console became the chief competitor of Nintendo's next console, the Nintendo 64.

History: Sony engineer Ken Kutaragi became interested in working with video games after seeing his daughter play games on Nintendo's Famicom video game console. He took on a contract at Sony for developing hardware that would drive the audio subsystem of Nintendo's next console, the Super NES. Kutaragi secretly developed the chip, the Sony SPC 700. As Sony was uninterested in the video game business, most of his superiors did not approve of the project, but Kutaragi found support in Sony executive Norio Ohga and the project was allowed to continue. The success of the project spurred Nintendo to enter into a partnership with Sony to develop both a CD-ROM add-on for the Super NES and a Sony-branded console that would play both SNES cartridges and games released for the new Super Disc format. Development of the format started in 1988, when Nintendo signed a contract with Sony to produce a CD-ROM add-on for the SNES. The system was to be compatible with existing SNES games as well as games released for the Super Disc format. Under their agreement, Sony would develop and retain control over the Super Disc format, with Nintendo thus effectively ceding a large amount of control of software licensing to Sony. Further, Sony would also be the sole beneficiary of licensing related to the music and movie software that it had been aggressively pursuing as a secondary application. Nintendo president Hiroshi Yamauchi was already wary of Sony at this point and deemed the arrangement unacceptable, as Sony was the sole provider of the audio chip, the S-SMP, used in the SNES, and required developers to pay for an expensive development tool from Sony. Furthermore, Yamauchi started to see a more favorable partner in Philips, one of Sony's largest competitors. To counter the proposed agreement, Yamauchi sent Nintendo of America president Minoru Arakawa (his son-in-law) and executive Howard Lincoln to the Netherlands to negotiate a more favorable contract with Philips. As described by David Sheff in his book Game Over, "[The Philips deal] was meant to do two things at once: give Nintendo back its stranglehold on software and gracefully fuck Sony." At the June 1991 Consumer Electronics Show, Sony announced its SNES-compatible cartridge/CD console, the PlayStation. The next day, Nintendo revealed its partnership with Philips at the show, a surprise to the entire audience, including Sony. While Nintendo and Sony attempted to resolve their differences, between two and three hundred prototypes of the PlayStation were created, and software for the system was being developed.
In 1992, a deal was reached allowing Sony to produce SNES-compatible hardware, with Nintendo retaining control of and profit from the games. The organizations never repaired their rift. By the next year, Sony had dropped further development of the Super NES CD-ROM to develop its own console for the next generation, the PlayStation.

Prototype: In July 2015, it was reported that one of the original Nintendo PlayStation prototypes had been found. The prototype was reportedly left behind by former Sony Computer Entertainment CEO Ólafur Jóhann Ólafsson during his time at Advanta. A former Advanta worker, Terry Diebold, acquired the device as part of a lot during Advanta's 2009 bankruptcy auction. The system was later confirmed as operational: the unit plays Super Famicom cartridges as well as the test cartridge that accompanied the unit, although the audio output and CD drive were non-functional. Some groups attempted to develop homebrew software for the console, such as Super Boss Gaiden, as there were no known games that used the CD drive. The prototype came with a Sony/PlayStation-branded version of the standard Super Famicom controller (model number SHVC-005). In March 2016, retro-gaming website RetroCollect reported that they (and influential members of online emulation communities) had received, from an anonymous source, a functional disc boot ROM for the SNES-CD. Diebold had given the unit to Benjamin Heckendorn, a console modder, to examine around 2017. Heckendorn produced a tear-down video of the system, through which he identified faults in several on-board components that he subsequently replaced, indirectly fixing the audio and CD drive issues. Heckendorn showed audio CDs working on the system, as there were no known game CDs, but affirmed that homebrew games worked. The prototype was put up for auction by Diebold in February 2020 with an initial asking price of US$15,000, but bidding quickly exceeded US$350,000 within two days. It was auctioned off at US$360,000 to Greg McLemore, an entrepreneur and founder of Pets.com, who has a large collection of other video game hardware and plans to establish a permanent museum for this type of hardware.

Prototype: Technical specifications. Heckendorn's July 2016 teardown video provides technical specifications of the prototype. Heckendorn said the system would probably have been as powerful as a standard Super NES, but not as powerful as the Sega CD. The standalone unit has the following connectors: two Super NES controller ports, a cartridge slot, a dual-speed CD-ROM drive, RCA composite jacks, S-Video, RFU DC OUT (similar to the PlayStation SCPH-1001), a proprietary multi-out AV output port (the same one featured on the Super NES, Nintendo 64, and GameCube), a headphone jack on the front, a serial port labelled "NEXT" (probably for debugging), and one expansion port under the unit. The specifications from the prototype are different from those published in the March 1993 edition of Electronic Gaming Monthly.

Legacy: After the original contract with Sony failed, Nintendo continued its partnership with Philips. This contract provisioned Philips with the right to feature Nintendo's characters in a few games for its CD-i multimedia device, but never resulted in a CD-ROM add-on for the SNES. Witnessing the poor reception of the Sega CD, Nintendo cancelled plans for the add-on. The Nintendo-themed CD-i games were very poorly received, and the CD-i itself is considered a commercial failure.
The main game in development for the SNES-CD platform launch was Square's Secret of Mana, whose planned content was cut down to a size suitable for a cartridge and released on that medium instead. Kutaragi and Sony continued to develop their own console and released the PlayStation in December 1994 in Japan and in September of the following year in North America and Europe. The CD-based console successfully competed with Nintendo's cartridge-based Nintendo 64 and other CD-based console systems such as the Fujitsu FM Towns Marty, the NEC PC-FX, the SNK Neo Geo CD, the Panasonic 3DO Interactive Multiplayer and the Sega Saturn, and it became the leading console of its generation. The broken partnership with Sony has often been cited as a mistake on Nintendo's part, effectively creating a formidable rival in the video game market. Nintendo would not release an optical disc-based console of its own until the GameCube in 2001.
**Analytic philosophy** Analytic philosophy: Analytic philosophy is a branch and tradition of philosophy using analysis, popular in the Western world and particularly the Anglosphere, which began around the turn of the 20th century in the United Kingdom, United States, Canada, Australia, New Zealand, and Scandinavia, and continues today. Analytic philosophy is often contrasted with continental philosophy, coined as a catch-all term for other methods prominent in Europe. Central figures in this historical development of analytic philosophy are Gottlob Frege, Bertrand Russell, G. E. Moore, and Ludwig Wittgenstein. Other important figures in its history include the logical positivists (particularly Rudolf Carnap), W. V. O. Quine, and Karl Popper. After the decline of logical positivism, Saul Kripke, David Lewis, and others led a revival in metaphysics. Elizabeth Anscombe, Peter Geach, Anthony Kenny, and others developed an analytic approach to Thomism.

Analytic philosophy: Analytic philosophy is characterized by an emphasis on language, known as the linguistic turn, and by its clarity and rigor in arguments, making use of formal logic and mathematics and, to a lesser degree, the natural sciences. It also takes things piecemeal, in "an attempt to focus philosophical reflection on smaller problems that lead to answers to bigger questions". Analytic philosophy is often understood in contrast to other philosophical traditions, most notably continental philosophies such as existentialism, phenomenology, and Hegelianism. The analytic tradition has been critiqued for ahistoricism.

History: The history of analytic philosophy (taken in the narrower sense of "20th-/21st-century analytic philosophy") is usually thought to begin with the rejection of British idealism, a neo-Hegelian movement. British idealism, as taught by philosophers such as F. H. Bradley (1846–1924) and T. H. Green (1836–1882), dominated English philosophy in the late 19th century. Since its beginning, a basic goal of analytic philosophy has been conceptual clarity, in the name of which Moore and Russell rejected Hegelianism for being obscure; see, for example, Moore's "A Defence of Common Sense" and Russell's critique of the doctrine of internal relations. Inspired by developments in modern formal logic, the early Russell claimed that the problems of philosophy can be solved by showing the simple constituents of complex notions. An important aspect of British idealism was logical holism, the opinion that there are aspects of the world that can be known only by knowing the whole world. This is closely related to the opinion that relations between items are internal relations, that is, properties of the nature of those items. Russell, along with Wittgenstein, in response promulgated logical atomism and the doctrine of external relations, the belief that the world consists of independent facts. During his early career, Russell, along with his collaborator Alfred North Whitehead, was much influenced by Gottlob Frege (1848–1925), who developed predicate logic, which allowed a much greater range of sentences to be parsed into logical form than was possible using the ancient Aristotelian logic. Frege was also influential as a philosopher of mathematics in Germany at the beginning of the 20th century.
In contrast to Edmund Husserl's 1891 book Philosophie der Arithmetik, which argued that the concept of the cardinal number derived from psychical acts of grouping objects and counting them, Frege argued that mathematics and logic have their own validity, independent of the judgments or mental states of individual mathematicians and logicians (which were the basis of arithmetic according to the "psychologism" of Husserl's Philosophie). Frege further developed his philosophy of logic and mathematics in The Foundations of Arithmetic (1884) and The Basic Laws of Arithmetic (German: Grundgesetze der Arithmetik, 1893–1903), where he provided an alternative to psychologistic accounts of the concept of number.

History: Like Frege, Russell argued that mathematics is reducible to logical fundamentals in The Principles of Mathematics (1903). Later, his book written with Whitehead, Principia Mathematica (1910–1913), encouraged many philosophers to renew their interest in the development of symbolic logic. Additionally, Russell adopted Frege's predicate logic as his primary philosophical method, a method Russell thought could expose the underlying structure of philosophical problems. For example, the English word "is" has three distinct meanings, which predicate logic can express as follows: for the sentence 'the cat is asleep', the "is" of predication means that "x is P" (denoted as P(x)).

History: For the sentence 'there is a cat', the "is" of existence means that "there is an x" (∃x). For the sentence 'three is half of six', the "is" of identity means that "x is the same as y" (x=y). Russell sought to resolve various philosophical problems by applying such logical distinctions, most famously in his analysis of definite descriptions in "On Denoting" (1905).

History: Ideal language. From about 1910 to 1930, analytic philosophers like Russell and Ludwig Wittgenstein emphasized creating an ideal language for philosophical analysis, which would be free from the ambiguities of ordinary language that, in their opinion, often made philosophy invalid. During this phase, Russell and Wittgenstein sought to understand language (and hence philosophical problems) by using logic to formalize how philosophical statements are made.

History: Logical atomism. Russell became an advocate of logical atomism. Wittgenstein developed a comprehensive system of logical atomism in his Tractatus Logico-Philosophicus (German: Logisch-Philosophische Abhandlung, 1921). He thereby argued that the universe is the totality of actual states of affairs and that these states of affairs can be expressed by the language of first-order predicate logic. Thus a picture of the universe can be constructed by expressing facts in the form of atomic propositions and linking them using logical operators.

History: Logical positivism. During the late 1920s to 1940s, a group of philosophers of the Vienna Circle and the Berlin Circle developed Russell and Wittgenstein's formalism into a doctrine known as "logical positivism" (or logical empiricism). Logical positivism used formal logical methods to develop an empiricist account of knowledge. Philosophers such as Rudolf Carnap and Hans Reichenbach, along with other members of the Vienna Circle, claimed that the truths of logic and mathematics were tautologies, and those of science were verifiable empirical claims. These two constituted the entire universe of meaningful judgments; anything else was nonsense.
The claims of ethics, aesthetics, and theology were consequently reduced to pseudo-statements, neither empirically true nor false and therefore meaningless. In reaction to what he considered excesses of logical positivism, Karl Popper insisted on the role of falsification in the philosophy of science, although his general method was also part of the analytic tradition. With the coming to power of Adolf Hitler and Nazism in 1933, many members of the Vienna and Berlin Circles fled to Britain and the US, which helped to reinforce the dominance of logical positivism and analytic philosophy in anglophone countries.

History: Logical positivists typically considered philosophy as having a minimal function. For them, philosophy concerned the clarification of thoughts, rather than having a distinct subject matter of its own. The positivists adopted the verification principle, according to which every meaningful statement is either analytic or is capable of being verified by experience. This caused the logical positivists to reject many traditional problems of philosophy, especially those of metaphysics or ontology, as meaningless.

History: Ordinary language. After World War II, during the late 1940s and 1950s, analytic philosophy became involved with ordinary-language analysis. This resulted in two main trends. One continued Wittgenstein's later philosophy, which differed dramatically from his early work of the Tractatus. The other, known as "Oxford philosophy", involved J. L. Austin. In contrast to earlier analytic philosophers (including the early Wittgenstein) who thought philosophers should avoid the deceptive trappings of natural language by constructing ideal languages, ordinary-language philosophers claimed that ordinary language already represents many subtle distinctions not recognized in the formulation of traditional philosophical theories or problems. While schools such as logical positivism emphasize logical terms, supposed to be universal and separate from contingent factors (such as culture, language, and historical conditions), ordinary-language philosophy emphasizes the use of language by ordinary people. The most prominent ordinary-language philosophers during the 1950s were the aforementioned Austin and Gilbert Ryle.

History: Ordinary-language philosophers often sought to dissolve philosophical problems by showing them to be the result of misunderstanding ordinary language. Examples include Ryle, who tried to dispose of "Descartes' myth", and Wittgenstein.

Contemporary analytic philosophy: Although contemporary philosophers who self-identify as "analytic" have widely divergent interests, assumptions, and methods, and have often rejected the fundamental premises that defined analytic philosophy before 1960, analytic philosophy today is usually considered to be determined by a particular style, characterized by precision and thoroughness about a specific topic, and resistance to "imprecise or cavalier discussions of broad topics". During the 1950s, logical positivism was challenged influentially by Wittgenstein in the Philosophical Investigations, Quine in "Two Dogmas of Empiricism", and Sellars in Empiricism and the Philosophy of Mind. After 1960, anglophone philosophy began to incorporate a wider range of interests, opinions, and methods. Still, many philosophers in Britain and America consider themselves "analytic philosophers".
They have done so largely by expanding the notion of "analytic philosophy" from the specific programs that dominated anglophone philosophy before 1960 to a much more general notion of an "analytic" style.Many philosophers and historians have attempted to define or describe analytic philosophy. Those definitions often include an emphasis on conceptual analysis: A.P. Martinich draws an analogy between analytic philosophy's interest in conceptual analysis and analytic chemistry, which aims to determine chemical compositions. Steven D. Hales described analytic philosophy as one of three types of philosophical method practiced in the West: "[i]n roughly reverse order by number of proponents, they are phenomenology, ideological philosophy, and analytic philosophy".Scott Soames agrees that clarity is important: analytic philosophy, he says, has "an implicit commitment—albeit faltering and imperfect—to the ideals of clarity, rigor and argumentation" and it "aims at truth and knowledge, as opposed to moral or spiritual improvement [...] the goal in analytic philosophy is to discover what is true, not to provide a useful recipe for living one's life". Soames also states that analytic philosophy is characterized by "a more piecemeal approach. There is, I think, a widespread presumption within the tradition that it is often possible to make philosophical progress by intensively investigating a small, circumscribed range of philosophical issues while holding broader, systematic questions in abeyance".A few of the most important and active topics and subtopics of analytic philosophy are summarized by the following sections. Contemporary analytic philosophy: Philosophy of mind and cognitive science Motivated by the logical positivists' interest in verificationism, logical behaviorism was the most prominent theory of mind of analytic philosophy for the first half of the 20th century. Behaviorists tended to opine either that statements about the mind were equivalent to statements about behavior and dispositions to behave in particular ways or that mental states were directly equivalent to behavior and dispositions to behave. Behaviorism later became much less popular, in favor of type physicalism or functionalism, theories that identified mental states with brain states. During this period, topics of the philosophy of mind were often related strongly to topics of cognitive science such as modularity or innateness. Finally, analytic philosophy has featured a certain number of philosophers who were dualists, and recently forms of property dualism have had a resurgence; the most prominent representative is David Chalmers.John Searle suggests that the obsession with the philosophy of language during the 20th century has been superseded by an emphasis on the philosophy of mind, in which functionalism is currently the dominant theory. In recent years, a central focus of research in the philosophy of mind has been consciousness. While there is a general consensus for the global neuronal workspace model of consciousness, there are many opinions as to the specifics. The best known theories are Daniel Dennett's heterophenomenology, Fred Dretske and Michael Tye's representationalism, and the higher-order theories of either David M. Rosenthal—who advocates a higher-order thought (HOT) model—or David Armstrong and William Lycan—who advocate a higher-order perception (HOP) model. An alternative higher-order theory, the higher-order global states (HOGS) model, is offered by Robert van Gulick. 
Contemporary analytic philosophy: Ethics in analytic philosophy Due to the commitments to empiricism and symbolic logic in the early analytic period, early analytic philosophers often thought that inquiry in the ethical domain could not be made rigorous enough to merit any attention. It was only with the emergence of ordinary language philosophers that ethics started to become an acceptable area of inquiry for analytic philosophers. Philosophers working with the analytic tradition have gradually come to distinguish three major types of moral philosophy. Contemporary analytic philosophy: Meta-ethics which investigates moral terms and concepts; Normative ethics which examines and produces normative ethical judgments; Applied ethics, which investigates how existing normative principles should be applied to difficult or borderline cases, often cases created by new technology or new scientific knowledge. Meta-ethics Twentieth-century meta-ethics has two origins. The first is G.E. Moore's investigation into the nature of ethical terms (e.g., good) in his Principia Ethica (1903), which identified the naturalistic fallacy. Along with Hume's famous is/ought distinction, the naturalistic fallacy was a major topic of investigation for analytical philosophers. Contemporary analytic philosophy: The second is in logical positivism and its attitude that unverifiable statements are meaningless. Although that attitude was adopted originally to promote scientific investigation by rejecting grand metaphysical systems, it had the side effect of making (ethical and aesthetic) value judgments (as well as religious statements and beliefs) meaningless. But because value judgments are of significant importance in human life, it became incumbent on logical positivism to develop an explanation of the nature and meaning of value judgments. As a result, analytic philosophers avoided normative ethics and instead began meta-ethical investigations into the nature of moral terms, statements, and judgments. Contemporary analytic philosophy: The logical positivists opined that statements about value—including all ethical and aesthetic judgments—are non-cognitive; that is, they cannot be objectively verified or falsified. Instead, the logical positivists adopted an emotivist theory, which was that value judgments expressed the attitude of the speaker. For example, in this view, saying, "Killing is wrong", is equivalent to saying, "Boo to murder", or saying the word "murder" with a particular tone of disapproval. Contemporary analytic philosophy: While analytic philosophers generally accepted non-cognitivism, emotivism had many deficiencies. It evolved into more sophisticated non-cognitivist theories such as the expressivism of Charles Stevenson, and the universal prescriptivism of R.M. Hare, which was based on J.L. Austin's philosophy of speech acts. These theories were not without their critics. Philippa Foot contributed several essays attacking all these theories. J.O. Urmson's article "On Grading" called the is/ought distinction into question. Contemporary analytic philosophy: As non-cognitivism, the is/ought distinction, and the naturalistic fallacy began to be called into question, analytic philosophers showed a renewed interest in the traditional questions of moral philosophy. Perhaps the most influential being Elizabeth Anscombe, whose monograph Intention was called by Donald Davidson "the most important treatment of action since Aristotle". 
A favorite student and friend of Ludwig Wittgenstein, her 1958 article "Modern Moral Philosophy" introduced the term "consequentialism" into the philosophical lexicon, declared the "is-ought" impasse to be unproductive, and resulted in a revival of virtue ethics. Contemporary analytic philosophy: Normative ethics The first half of the 20th century was marked by skepticism toward and neglect of normative ethics. Related subjects, such as social and political philosophy, aesthetics, and philosophy of history, became only marginal topics of English-language philosophy during this period. Contemporary analytic philosophy: During this time, utilitarianism was the only non-skeptical type of ethics to remain popular. However, as the influence of logical positivism began to decrease mid-century, analytic philosophers had renewed interest in ethics. G.E.M. Anscombe's 1958 "Modern Moral Philosophy" sparked a revival of Aristotle's virtue ethical approach and John Rawls's 1971 A Theory of Justice restored interest in Kantian ethical philosophy. Today, contemporary normative ethics is dominated by three schools: consequentialism, virtue ethics, and deontology. Contemporary analytic philosophy: Applied ethics A significant feature of analytic philosophy since approximately 1970 has been the emergence of applied ethics—an interest in the application of moral principles to specific practical issues. The philosophers following this orientation view ethics as involving humanistic values, which involve practical implications and applications in the way people interact and lead their lives socially.Topics of special interest for applied ethics include environmental issues, animal rights, and the many challenges created by advancing medical science. In education, applied ethics addressed themes such as punishment in schools, equality of educational opportunity, and education for democracy. Contemporary analytic philosophy: Analytic philosophy of religion In Analytic Philosophy of Religion, James Franklin Harris noted that analytic philosophy has been a very heterogeneous 'movement'.... some forms of analytic philosophy have proven very sympathetic to the philosophy of religion and have provided a philosophical mechanism for responding to other more radical and hostile forms of analytic philosophy.: 3  As with the study of ethics, early analytic philosophy tended to avoid the study of philosophy of religion, largely dismissing (as per the logical positivists) the subject as part of metaphysics and therefore meaningless. The demise of logical positivism renewed interest in philosophy of religion, prompting philosophers like William Alston, John Mackie, Alvin Plantinga, Robert Merrihew Adams, Richard Swinburne, and Antony Flew not only to introduce new problems, but to re-study classical topics such as the nature of miracles, theistic arguments, the problem of evil, (see existence of God) the rationality of belief in God, concepts of the nature of God, and many more.Plantinga, Mackie and Flew debated the logical validity of the free will defense as a way to solve the problem of evil. Alston, grappling with the consequences of analytic philosophy of language, worked on the nature of religious language. Adams worked on the relationship of faith and morality. Analytic epistemology and metaphysics has formed the basis for some philosophically sophisticated theistic arguments, like those of the reformed epistemologists like Plantinga. 
Contemporary analytic philosophy: Analytic philosophy of religion has also been preoccupied with Wittgenstein, as well as his interpretation of Søren Kierkegaard's philosophy of religion. Drawing on Wittgenstein's first-hand remarks (which were later published in Philosophical Investigations, Culture and Value, and other works), philosophers such as Peter Winch and Norman Malcolm developed what has come to be known as contemplative philosophy, a Wittgensteinian school of thought rooted in the "Swansea tradition" and including Wittgensteinians such as Rush Rhees, Peter Winch, and D.Z. Phillips, among others. The name "contemplative philosophy" was coined by D.Z. Phillips in Philosophy's Cool Place, which rests on an interpretation of a passage from Wittgenstein's Culture and Value. This interpretation was first labeled "Wittgensteinian Fideism" by Kai Nielsen, but those who consider themselves Wittgensteinians in the Swansea tradition have relentlessly and repeatedly rejected this construal as a caricature of Wittgenstein's considered position; this is especially true of D.Z. Phillips. Through this dispute, Kai Nielsen and D.Z. Phillips became two of the most prominent philosophers writing on Wittgenstein's philosophy of religion. Contemporary analytic philosophy: Political philosophy Liberalism Current analytic political philosophy owes much to John Rawls, who, in a series of papers from the 1950s onward (most notably "Two Concepts of Rules" and "Justice as Fairness") and in his 1971 book A Theory of Justice, produced a sophisticated defense of a generally liberal egalitarian account of distributive justice. This was soon followed by Rawls's colleague Robert Nozick's book Anarchy, State, and Utopia, a defence of free-market libertarianism. Isaiah Berlin also had a lasting influence on both analytic political philosophy and liberalism with his lecture "Two Concepts of Liberty". Contemporary analytic philosophy: During recent decades there have also been several critiques of liberalism, including the feminist critiques of Catharine MacKinnon and Andrea Dworkin, the communitarian critiques of Michael Sandel and Alasdair MacIntyre (although neither of them endorses the term), and the multiculturalist critiques of Amy Gutmann and Charles Taylor. Although not an analytic philosopher, Jürgen Habermas is another prominent—if controversial—author of contemporary analytic political philosophy, whose social theory is a blend of social science, Marxism, neo-Kantianism, and American pragmatism. Contemporary analytic philosophy: Consequentialist libertarianism also derives from the analytic tradition. Contemporary analytic philosophy: Analytical Marxism Another development of political philosophy was the emergence of the school of analytical Marxism. Members of this school seek to apply techniques of analytic philosophy and modern social science, such as rational choice theory, to clarify the theories of Karl Marx and his successors. The best-known member of this school is G. A. Cohen, whose 1978 work, Karl Marx's Theory of History: A Defence, is generally considered to represent the genesis of this school. In that book, Cohen used logical and linguistic analysis to clarify and defend Marx's materialist conception of history. Other prominent analytical Marxists include the economist John Roemer, the social scientist Jon Elster, and the sociologist Erik Olin Wright.
These later philosophers have furthered Cohen's work by bringing to bear modern social science methods, such as rational choice theory, to supplement Cohen's use of analytic philosophical techniques in the interpretation of Marxian theory. Contemporary analytic philosophy: Cohen himself would later engage directly with Rawlsian political philosophy to advance a socialist theory of justice that contrasts with both traditional Marxism and the theories advanced by Rawls and Nozick. In particular, he invokes Marx's principle of "from each according to his ability, to each according to his need". Contemporary analytic philosophy: Communitarianism Communitarians such as Alasdair MacIntyre, Charles Taylor, Michael Walzer, and Michael Sandel advance a critique of liberalism that uses analytic techniques to isolate the main assumptions of liberal individualists, such as Rawls, and then challenges these assumptions. In particular, communitarians challenge the liberal assumption that the individual can be considered as fully autonomous from the community in which he lives and is brought up. Instead, they argue for a conception of the individual that emphasizes the role that the community plays in forming his or her values, thought processes, and opinions. Communitarianism has a complex relationship with the analytic tradition, as its major exponents often engage at length with figures generally considered continental, notably G. W. F. Hegel and Friedrich Nietzsche. Contemporary analytic philosophy: Analytic metaphysics One striking difference with respect to early analytic philosophy was the revival of metaphysical theorizing during the second half of the 20th century. Philosophers such as David Kellogg Lewis and David Armstrong developed elaborate theories on a range of topics such as universals, causation, possibility and necessity, and abstract objects. Among the developments that resulted in the revival of metaphysical theorizing were Quine's attack on the analytic–synthetic distinction, which was generally considered to weaken Carnap's distinction between existence questions internal to a framework and those external to it. Important also for the revival of metaphysics was the further development of modal logic, including the work of Saul Kripke, who argued in Naming and Necessity and elsewhere for the existence of essences and the possibility of necessary a posteriori truths. Metaphysics remains a fertile topic of research, having recovered from the attacks of A.J. Ayer and the logical positivists. Although many discussions are continuations of old ones from previous decades and centuries, the debate remains active. The philosophy of fiction, the problem of empty names, and the debate over existence's status as a property have all become major concerns, while perennial issues such as free will, possible worlds, and the philosophy of time have been revived. Science has also had an increasingly significant role in metaphysics. The theory of special relativity has had a profound effect on the philosophy of time, and quantum physics is routinely discussed in the free will debate. The weight given to scientific evidence is largely due to widespread commitments among philosophers to scientific realism and naturalism. Contemporary analytic philosophy: Philosophy of language Philosophy of language is a topic that has decreased in activity during the last four decades, as evidenced by the fact that few major philosophers today treat it as a primary research topic.
Indeed, while the debate remains fierce, it is still strongly influenced by those authors from the first half of the century: Gottlob Frege, Bertrand Russell, Ludwig Wittgenstein, J.L. Austin, Alfred Tarski, and W.V.O. Quine. Contemporary analytic philosophy: In Saul Kripke's publication Naming and Necessity, he argued influentially that flaws in common theories of proper names are indicative of larger misunderstandings of the metaphysics of necessity and possibility. By wedding the techniques of modal logic to a causal theory of reference, Kripke was widely regarded as reviving theories of essence and identity as respectable topics of philosophical discussion. Contemporary analytic philosophy: Another influential philosopher, Pavel Tichý initiated Transparent Intensional Logic, an original theory of the logical analysis of natural languages—the theory is devoted to the problem of saying exactly what it is that we learn, know and can communicate when we come to understand what a sentence means. Contemporary analytic philosophy: Philosophy of science Reacting against both the verificationism of the logical positivists as well as the critiques of the philosopher of science Karl Popper, who had suggested the falsifiability criterion on which to judge the demarcation between science and non-science, discussions of philosophy of science during the last 40 years were dominated by social constructivist and cognitive relativist theories of science. Thomas Samuel Kuhn with his formulation of paradigm shifts and Paul Feyerabend with his epistemological anarchism are significant for these discussions. The philosophy of biology has also undergone considerable growth, particularly due to the considerable debate in recent years over the nature of evolution, particularly natural selection. Daniel Dennett and his 1995 book Darwin's Dangerous Idea, which defends Neo-Darwinism, stand at the foreground of this debate. Contemporary analytic philosophy: Epistemology Owing largely to Gettier's 1963 paper "Is Justified True Belief Knowledge?", epistemology resurged as a topic of analytic philosophy during the last 50 years. A large portion of current epistemological research is intended to resolve the problems that Gettier's examples presented to the traditional justified true belief model of knowledge, including developing theories of justification to deal with Gettier's examples, or giving alternatives to the justified true belief model. Other and related topics of contemporary research include debates between internalism and externalism, basic knowledge, the nature of evidence, the value of knowledge, epistemic luck, virtue epistemology, the role of intuitions in justification, and treating knowledge as a primitive concept. Contemporary analytic philosophy: Aesthetics As a result of what seemed like rejections of the traditional aesthetic notions of beauty and sublimity from post-modern thinkers, analytic philosophers were slow to consider art and aesthetic judgment. Susanne Langer and Nelson Goodman addressed these problems in an analytic style during the 1950s and 1960s. Since Goodman, aesthetics as a discipline for analytic philosophers has flourished. Rigorous efforts to pursue analyses of traditional aesthetic concepts were performed by Guy Sircello in the 1970s and 1980s, resulting in new analytic theories of love, sublimity, and beauty.
**Elysium Space** Elysium Space: Elysium Space is a space burial company. The burial options the company offers are Earth orbit followed by reentry burnup, and delivery to the lunar surface. The company was the first to offer burial on the Moon. History: Elysium Space was founded by Thomas Civeit in 2013. In 2015, a launch aboard a USAF Super Strypi rocket failed to reach orbit; the remains are to be reflown on a second launch. The remains were to have orbited for 2 years before reentering and burning up. The company will offer a service to launch the ashes of dead people into space aboard a SpaceX Falcon 9 rocket launching from Vandenberg Air Force Base in California, United States. This rocket rideshare will carry the ashes into a Sun-synchronous orbit about the Earth. The orbit of the Earth-orbiting ashes will eventually decay, and they will return to Earth like a shooting star. Memorial spacecraft: Elysium Space launches the cremated remains aboard its Elysium Star space mausoleum satellites, a series of 1U cubesats. The Earth-orbiting satellites are designed to remain in space for 2 years before orbital decay brings them back to Earth, burning up in a blazing reentry like a shooting star. Elysium Space plans to use Astrobotic's Peregrine lunar lander for its lunar mausoleums. Elysium Space is in the early stages of planning for deep-space burials. Missions: Lunar missions are yet to be scheduled; extrasolar missions are yet to be scheduled.
**Poisson ring** Poisson ring: In mathematics, a Poisson ring is a commutative ring on which an anticommutative and distributive binary operation [⋅,⋅] satisfying the Jacobi identity and the product rule is defined. Such an operation is then known as the Poisson bracket of the Poisson ring. Poisson ring: Many important operations and results of symplectic geometry and Hamiltonian mechanics may be formulated in terms of the Poisson bracket and, hence, apply to Poisson algebras as well. This observation is important in studying the classical limit of quantum mechanics—the non-commutative algebra of operators on a Hilbert space has the Poisson algebra of functions on a symplectic manifold as a singular limit, and properties of the non-commutative algebra pass over to corresponding properties of the Poisson algebra. Definition: The Poisson bracket must satisfy the identities [f, g] = −[g, f] (skew symmetry); [f + g, h] = [f, h] + [g, h] (distributivity); [fg, h] = f[g, h] + [f, h]g (derivation); and [f, [g, h]] + [g, [h, f]] + [h, [f, g]] = 0 (Jacobi identity), for all f, g, h in the ring. A Poisson algebra is a Poisson ring that is also an algebra over a field. In this case, add the extra requirement [sf, g] = s[f, g] for all scalars s. For each g in a Poisson ring A, the operation ad_g defined as ad_g(f) = [f, g] is a derivation. If the set {ad_g | g ∈ A} generates the set of derivations of A, then A is said to be non-degenerate. If a non-degenerate Poisson ring is isomorphic as a commutative ring to the algebra of smooth functions on a manifold M, then M must be a symplectic manifold and [⋅,⋅] is the Poisson bracket defined by the symplectic form.
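As a numerical illustration of the first two identities, the following is a minimal sketch (not part of the standard treatment) of the canonical Poisson bracket {f, g} = ∂f/∂q ∂g/∂p − ∂f/∂p ∂g/∂q on smooth functions of two variables, with the partial derivatives approximated by central finite differences; the sample functions, the step size, and the helper name bracket are illustrative assumptions only.

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

// A smooth function of the canonical coordinates (q, p).
using Fn = std::function<double(double, double)>;

// Canonical Poisson bracket {f, g} = df/dq * dg/dp - df/dp * dg/dq,
// with the partial derivatives approximated by central differences.
double bracket(const Fn& f, const Fn& g, double q, double p, double step = 1e-5) {
    auto dq = [&](const Fn& u) { return (u(q + step, p) - u(q - step, p)) / (2 * step); };
    auto dp = [&](const Fn& u) { return (u(q, p + step) - u(q, p - step)) / (2 * step); };
    return dq(f) * dp(g) - dp(f) * dq(g);
}

int main() {
    Fn f = [](double q, double p) { return q * q * p; };
    Fn g = [](double q, double p) { return std::sin(q) + p; };
    Fn h = [](double q, double p) { return q * p * p; };
    double q = 0.7, p = 1.3;

    // Skew symmetry: [f, g] = -[g, f], so the sum should be near zero.
    std::printf("[f,g] + [g,f]            = %+.2e\n",
                bracket(f, g, q, p) + bracket(g, f, q, p));

    // Derivation (product rule): [fg, h] = f[g, h] + [f, h]g.
    Fn fg = [&](double a, double b) { return f(a, b) * g(a, b); };
    std::printf("[fg,h] - f[g,h] - [f,h]g = %+.2e\n",
                bracket(fg, h, q, p)
                    - f(q, p) * bracket(g, h, q, p)
                    - bracket(f, h, q, p) * g(q, p));
    return 0;
}
```

Both printed quantities should be close to zero, up to finite-difference error; the Jacobi identity could be checked in the same way, at the cost of nesting the finite-difference bracket.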
**CryptoPunks** CryptoPunks: CryptoPunks is a non-fungible token (NFT) collection on the Ethereum blockchain. The project was launched in June 2017 by the Larva Labs studio, a two-person team consisting of Canadian software developers Matt Hall and John Watkinson. The experimental project was inspired by the London punk scene, the cyberpunk movement, and the electronic music duo Daft Punk. The crypto art blockchain project was an inspiration for the ERC-721 standard for NFTs and the modern crypto art movement, which has since become a part of the cryptocurrency and decentralized finance ecosystems on multiple blockchains. CryptoPunks: CryptoPunks are commonly credited with starting the NFT craze of 2021, along with other early projects including CryptoKitties, Bored Ape Yacht Club, and the sale of Beeple's Everydays: The First 5000 Days. There are 10,000 CryptoPunk tokens in total. On March 2, 2022, an anonymous user donated CryptoPunk #5364 to Ukraine's government Ethereum wallet public address to help fund the Ukrainian government against the Russian invasion of Ukraine. On March 11, 2022, it was announced that all of the CryptoPunks IP had been acquired by Yuga Labs (parent company and creators of the Bored Ape Yacht Club project) for an undisclosed sum. Immediately, Yuga Labs announced it was giving full commercial rights to CryptoPunks owners. On 7 May 2022 the transfer was completed, and the whole CryptoPunks marketplace was moved to the new Yuga Labs-owned website. Concept: There are 10,000 unique CryptoPunks (6,039 male and 3,840 female). Each one was algorithmically generated through computer code and thus no two characters are exactly alike, with some traits being rarer than others. They were originally released for free and could be claimed by anyone with an Ethereum wallet by paying only "gas fees", which were low at the time. Most of the 10,000 CryptoPunks represent humans, but there are also three special types: Zombie (88), Ape (24), and Alien (9). Controversies: Flash loan: In October 2021, a single NFT transaction was made for 124,457 Ether (US$532 million at the time of the sale) regarding CryptoPunk #9998, much higher than all previous NFT sales, leading to speculation on social media that this could have been some kind of scam, a security exploit, or money laundering. Larva Labs said that the purchase was made with a flash loan, whereby the NFT's owner bought the item from themselves with borrowed money, taking out and repaying the loan within a single blockchain transaction; the sale was subsequently removed from the asset's history and from all the related statistics. Controversies: Sotheby's 104 CryptoPunks auction: In early 2022, a Sotheby's auction for a single lot of 104 CryptoPunks was announced. The auction took place on 23 February 2022, but its seller (0x650d) changed their mind 23 minutes after the auction began and decided to withdraw the auction to keep the whole lot.
**Isosorbide dinitrate** Isosorbide dinitrate: Isosorbide dinitrate is a medication used for heart failure, esophageal spasms, and to treat and prevent chest pain from not enough blood flow to the heart. It has been found to be particularly useful in heart failure due to systolic dysfunction together with hydralazine. It is taken by mouth or under the tongue.Common side effects include headache, lightheadedness with standing, and blurred vision. Severe side effects include low blood pressure. It is unclear if use in pregnancy is safe for the baby. It should not be used together with PDE5 Inhibitors. Isosorbide dinitrate is in the nitrate family of medications and works by dilating blood vessels.Isosorbide dinitrate was first written about in 1939. It is on the World Health Organization's List of Essential Medicines. Isosorbide dinitrate is available as a generic medication. A long-acting form exists. In 2020, it was the 299th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses: It is used for angina, in addition to other medications for congestive heart failure, and for esophageal spasms. It is available as an oral tablet both in extended release and slow release. The onset of action for Isosorbide Dinitrate is thirty minutes and the onset of action for oral extended release is 12–24 hours. Long-acting nitrates can be more useful as they are generally more effective and stable in the short term. Side effects: Tolerance After long-term use for treating chronic conditions, tolerance may develop in patients, reducing its effectiveness. The mechanisms of nitrate tolerance have been thoroughly investigated in the last 30 years and several hypotheses have been proposed. These include: Impaired biotransformation of isosorbide dinitrate to its active principle NO (or a NO-related species) Neurohormonal activation, causing sympathetic activation and release of vasoconstrictors such as endothelin and angiotensin II which counteract the vasodilation induced by isosorbide dinitrate Plasma volume expansion The oxidative stress hypothesis (proposed by Munzel et al. in 1995)The last hypothesis might represent a unifying hypothesis, and an isosorbide dinitrate-induced inappropriate production of oxygen free radicals might induce a number of abnormalities which include the ones described above. Furthermore, nitrate tolerance is shown to be associated with vascular abnormalities which have the potential to worsen patients prognosis: these include endothelial and autonomic dysfunction. Side effects: Other side effects In the short run, isosorbide dinitrate can cause severe headaches, necessitating analgesic administration for relief of pain, as well as severe hypotension, and, in certain cases, bradycardia. Rarely occurring are allergic reactions (rash; hives; itching; difficulty breathing; tightness in the chest; swelling of the mouth, face, lips, or tongue); fainting; fast or slow heartbeat; nausea; new or worsening chest pain; vomiting. Mechanism of action: Similar to other nitrites and organic nitrates, isosorbide dinitrate is converted to nitric oxide (NO), an active intermediate compound which activates the enzyme guanylate cyclase (atrial natriuretic peptide receptor A). 
This stimulates the synthesis of cyclic guanosine 3',5'-monophosphate (cGMP) which then activates a series of protein kinase-dependent phosphorylations in the smooth muscle cells, eventually resulting in the dephosphorylation of the myosin light chain of the smooth muscle fiber. The subsequent sequestration of calcium ions results in the relaxation of the smooth muscle cells and vasodilation. Society and culture: Isosorbide dinitrate is sold in the US under the brand names Dilatrate-SR by Schwarz and Isordil by Valeant, according to FDA Orange Book. It is sold under the trade name Isoket in the United Kingdom, Argentina, and Hong Kong. It is also a component of BiDil.
**Erick Jones** Erick Jones: Dr. Erick Christopher Jones Sr. is dean of the College of Engineering at the University of Nevada, Reno, joining the college in September 2022. Jones is a fellow of the American Association for the Advancement of Science, among other organizations, and is a former senior science advisor in the Office of the Chief Economist at U.S. State Department. In addition to his experience in academia and government, Jones has worked in the private sector as an industrial engineer, director of engineering, consultant and project manager and executive manager. Erick Jones: Before joining the University of Nevada, Reno, Jones was a professor and associate dean for Graduate Studies at the College of Engineering at the University of Texas at Arlington (UTA). Prior to that, he was an associate professor at the University of Nebraska — Lincoln. Jones is considered an expert in radio-frequency identification (RFID), quality engineering, and Lean Six Sigma, a process improvement approach. He previously served as the deputy director of the UTA's homeland security-focused University Center Security Advances via Applied Nanotechnology (SAVANT) Center. He also served as the director of the Radio Frequency & Auto Identification (RAID) labs at UTA. Jones was the program director of The National Science Foundation's (NSF) Engineering Research Centers. He is currently Chair of the Supply Chain Technology Committee of International Supply Chain Education Alliance's (ISCEA) International Standards Board (IISB) and Editor in Chief of the International Supply Chain Technology Journal (ISCTJ).Jones's background led him to be invited to the National Science Foundation as program officer for the largest engineering investment in the country, the Engineering Research Center (ERC). He also worked in the largest fellowship program in the country, the NSF's Graduate Research Fellowships Program (GRFP). Jones served as a rotating program director at the NSF. Education: Jones graduated from Texas A&M University with a bachelor's degree in industrial engineering in May 1993. He later earned a master's degree from University of Houston, where his thesis was "Turnover of Part-Time Hourly Employees in an Industrial Service Company" under the guidance of Dr. Christopher Chung in May 1996. He further went on to obtain a PhD in industrial engineering from the University of Houston while concurrently working in the industry. Under the guidance of his advisor Chung, he worked on the topic "A Predictive SPC Model for Determining Cognitive Voluntary Turnover before Physical Departure" and successfully conferred the Ph.D. in August 2003. Industry background: He has held positions in industry that include Industrial Engineering Specialist, Director of Engineering, Consultant and Project Manager, and Executive Manager of a "Big 5" Accounting firm, and executive manager for United Parcel Service (UPS), Tompkins Associates, Academy Sports and Outdoors, and Arthur Andersen. He managed teams and operations as small as 3 people and as large as 500 people. He has managed projects implementing warehouse management systems (WMS) and enterprise resources planning (ERP) system, designing and constructing new facilities and re-engineering Fortune 1000 organizations. Operations managed include strategic systems deployment, teams of large-scale distribution operation, and human resources at an executive level. He is an expert in the field of supply chain optimization, distribution logistics, and inventory control. 
His contribution has laid foundation for our modern understanding of the Internet of Things (IOT), Blockchain, RFID, Auto ID and Supply Chain Technologies. Research: Jones' research interests are mainly in the field of RFID and its applications and Lean Six Sigma. However, Jones's research also covers various other topics like supply-chain technology, logistics, operations research, engineering, training, transportation and healthcare. He has received external funding from agencies like the NSF NASA, TexasMRC and internal funding in support of his research pursuits. He has also worked with undergraduate, graduate students and other professors on different research projects under RAID LABS at The University of Texas at Arlington. Research: Development of RFID technologies Jones has emerged as an industry leader in the RFID space as a result of his extensive work in forwarding the engineering aspects of the technology and the public adoption of the same in various socially relevant sectors. Having spearheaded a myriad of innovations in the field including but not limited to boosting applications of additive manufacturing in cyber-enabled manufacturing systems and integrating RFID in linear asset management, Jones now works on expanding RFID technologies towards satellites, cameras, and other data capturing technologies to create a "smart" planet. Research: Jones has also been active in enriching the RFID technological toolkit with his focused research on RFID integration in cell phones and other automated monitoring systems, resulting in numerous potential new intellectual property developments. Furthermore, his presence in academia remains prominent through his personal body of work which includes several journal publications, book authorships and conference presentations. He also serves as the Editor in Chief of the International Supply Chain Technology Journal (ISCTJ). Research: Process and supply-chain automation Jones' work heavily centers around integrated supply chain systems engineering with particular emphasis on manufacturing, automation for inventory control, and facilities and logistics planning. Through his contributions to Six Sigma Manufacturing and Management and Knowledge Worker productivity coupled with his involvement at RAID labs, he has brought new-age manufacturing and cyber solutions to many industries. Most notable of which are his collaborations with NASA towards implementing a ‘first of its kind’ RFID integration leveraged towards reducing inventory loss and record better replenishment accuracy. The technology was later ported onto the International space station which was met with positive affirmations from the astronauts on board. Additionally, Jones’ work has also been used in other socially relevant supply chains including ones in the essential manufacturing and healthcare system. Books: He has also published over 241 transcripts, books and publications and has written, edited, and published dozens of peer-reviewed articles and conference papers. Some of his most notable books are RFID and Auto-ID in Planning and Logistics: A Practical Guide for Military UID Applications, RFID in Logistics: A Practical Introduction, and Quality Management for Organizations Using Lean Six Sigma Techniques. Books: "RFID and Auto-ID in Planning and Logistics", E. C. Jones and Christopher A. Chung "Modern Quality for Organizations Using Lean Six Sigma Techniques", E. C. Jones. "RFID in Logistics:A Practical Introduction",. E. C. Jones and Christopher A. Chung. 
"Supply Chain Engineering and Logistics Handbook: Inventory and Production Control", E.C. Jones. Books: Industrial handbooks Tracked, What You Should Know About RFID, Internet of Things, Big Data and Data Security: The Official RFIDSCM Certification Handbook; Engineering Version by Jones, E. C., Gray, B., Wijemanne, M and Bolton, J, Tracked, What Everyone Should Know About Invisible Inventory, Monitoring and Tracking, The Official RFIDSCM Certification Handbook; Engineering Version by Jones, E. C., Gray, B and Armstrong, H. Books: The Six Sigma Trap, What you should know about Six Sigma that your company is not telling you: The Official ISCEA CLSSYB Certification Book by Jones, E. C., and Armstrong, H.A. Clampitt, H.G., and Edited by E. C. Jones, “RFID Certification Textbook,” PWD Group, January 2006. Second Edition, May 2006, Third Edition, American RFID Solutions, Arlington Heights, IL, May 2007 Leadership positions in international and national organizations: Committee member, National Academics of Science, Engineering and Medicine (NASEM), "Potential Impacts of COVID – 19 on the Careers of Women in Science, Engineering and Medicine", August 18, 2020 – March 31, 2021 President of ISCEA International Standards Board (IISB) – ISCEA, July 2020 – Present President, IISE Work Systems Division, July 2020 – Present Chairperson, Program - International Supply Chain Education Alliance RFID Supply Chain Manager (ISCEA RFID SCM) Certification Committee, 2007 – Present Board, Chief Diversity Equity and Inclusion Delegate (ASEE CDEI) - American Society of Engineering Educators, Engineering Economics Division, 2015 – 2023 Program Chair, Division -American Society of Engineering Educators, Engineering Management Division, 2006 through 2007 Fellowships: American Association for the Advancement of Science, Fellow, 2022 National Academies of Science, Engineering, and Mathematics, Jefferson Science Fellow, 2021 Sigma Xi, Fellow, June 2020 Institute of Industrial and Systems Engineers, Fellow, May 2020 African Scientific Institute (ASI), notable Fellow, 2019 International Supply Chain Education Alliance (ISCEA) Fellow, August 10, 2017 William J. Fulbright Foundation Specialist Scholar, 2011 Alfred P. Sloan Foundation, MPS Scholar, 2003 Awards and honors: The Academy of Medicine, Engineering, and Science of Texas (TAMEST) Protégé, 2020 National Role Model Administrator Award, Minority Access Inc., 2018. George and Elizabeth Pickett Endowed Professor, University of Texas at Arlington, 2017. William J. Fulbright Scholar in Mexico 2013 Fulbright Mexico Scholars Specialist, Engineering Education in Mexico, 2011. Innovative Use of Instructional Technology Teaching Award, 2007. College of Engineering Teaching Award Assistant Professor, 2007 College of Engineering Service Award Assistant Professor, 2006. Omaha World Herald Featured article about RFID Lab, March 18, 2006. College of Engineering Research Award Assistant Professor, 2006. Alfred P. Sloan Underrepresented Minority Ph.D. Program Fellow, 2001. NACME Undergraduate Award, 1990, 1991, 1992. Presidential Achievement Award (Undergraduate), 1988–1992.
**SEC63** SEC63: Translocation protein SEC63 homolog is a protein that in humans is encoded by the SEC63 gene. Function: The Sec61 complex is the central component of the protein translocation apparatus of the endoplasmic reticulum (ER) membrane. The protein encoded by this gene and SEC62 protein are found to be associated with ribosome-free SEC61 complex. It is speculated that Sec61-Sec62-Sec63 may perform post-translational protein translocation into the ER. The Sec61-Sec62-Sec63 complex might also perform the backward transport of ER proteins that are subject to the ubiquitin-proteasome-dependent degradation pathway. The encoded protein is an integral membrane protein located in the rough ER. Clinical significance: Mutations of this gene have been linked with autosomal dominant polycystic liver disease.
**Cast net** Cast net: A casting net, also called a throw net, is a net used for fishing. It is a circular net with small weights distributed around its edge. Cast net: The net is cast or thrown by hand in such a manner that it spreads out while it's in the air before it sinks into the water. This technique is called net casting or net throwing. Fish are caught as the net is hauled back in. This simple device is particularly effective for catching small bait or forage fish, and has been in use, with various modifications, for thousands of years. Construction and technique: Contemporary cast nets have a radius which ranges from 4 to 12 feet (1.2 to 3.6 metres). Standard nets for recreational fishing have a four-foot hoop. Weights are usually distributed around the edge at about one pound per foot (1.5 kilograms per metre). Attached to the net is a handline, one end of which is held in the hand as the net is thrown. When the net is full, a retrieval clamp, which works like a wringer on a mop, closes the net around the fish. The net is then retrieved by pulling on this handline. The net is lifted into a bucket and the clamp is released, dumping the caught fish into the bucket.Cast nets work best in water no deeper than their radius. Casting is best done in waters free of obstructions. Reeds cause tangles and branches can rip nets. The net caster may choose to stand with one hand holding the handline, and with the net draped over the other arm so that the weights dangle, or, with most of the net being held in one hand and only a part of the lead line held in the other hand so the weights dangle in a staggered fashion (approximately half of the weights in the throwing hand being held higher than the rest of the weights). The line is then thrown out to the water, using both hands, in a circular motion rather as in hammer throwing. The net can be cast from a boat, or from the shore, or by wading. Construction and technique: There are also optional net throwers that can make casting easier. These look like a lid from a trash can, including the handle on top. The outside circumference has a deep gutter. The net is loaded along the gutter and the weights are placed inside the gutter. The net is then tossed into the water using the thrower. Regulations and Guidelines: The use of a cast net may be restricted in some areas when stated by use of sign or authorities. Some government entities in states like Florida state that most shrimp and fish under 8 inches can be caught using, “Cast nets having a stretched mesh size not greater than 1 inch in fresh waters of the state unless specifically prohibited.”. While in a state such as Illinois the law “Provides that all casting nets shall be legal, without size limits, for the capture of shad, minnow,” and many other bait fish. Along with this, different states may also require a fishing license to cast net in waters. In some states, such as Texas, "it is legal only for non-game fish" and there are also regulations in place to protect endangered species. If the guidelines outlined by the states and localities are disobeyed, fines and penalties can be brought against the individual who broke the rules. An example of this being that, "The use of cast net or throw net in any other Commonwealth waters is a violation of the Fish and Boat Code and is punishable by a fine and may result in the loss of fishing privileges" Biology: Net-casting spiders (or retiarius spiders) are stick-like spiders that build webs suspended between their front legs. 
When prey approaches, the spider stretches its net till it is much larger, and then propels itself onto its prey, entangling it in the web. History: In Ancient Rome, in a parody of fishing, a type of gladiator called a retiarius or "net fighter" was armed with a trident and a cast net. The retiarius was traditionally pitted against a secutor.Between 177 and 180 the Greek author Oppian wrote the Halieutica, a didactic poem about fishing. He described various means of fishing including the use of nets cast from boats. History: In Norse mythology the sea giantess Rán cast a fishing net to trap lost sailors.There is a reference in the New Testament to cast netting. Per John 21:6: "He said, “Throw your net on the right side of the boat and you will find some.” When they did, they were unable to haul the net in because of the large number of fish."
**Molecular fragmentation methods** Molecular fragmentation methods: Molecular fragmentation (mass spectrometry), or molecular dissociation, occurs both in nature and in experiments. It occurs when a complete molecule is rendered into smaller fragments by some energy source, usually ionizing radiation. The resulting fragments can be far more chemically reactive than the original molecule, as in radiation therapy for cancer, and their study is thus a useful field of inquiry. Different molecular fragmentation methods have been developed to break apart molecules, some of which are listed below. Background: A major objective of theoretical chemistry and computational chemistry is the calculation of the energy and properties of molecules so that chemical reactivity and material properties can be understood from first principles. As a practical matter, the aim is to complement the knowledge gained from experiments, particularly where experimental data may be incomplete or very difficult to obtain. Background: High-level ab initio quantum chemistry methods are an invaluable tool for understanding the structure, energy, and properties of small to medium-sized molecules. However, the computational time for these calculations grows rapidly with the size of the molecule. One way of dealing with this problem is the molecular fragmentation approach, which provides a hierarchy of approximations to the molecular electronic energy. In this approach, large molecules are divided in a systematic way into small fragments, for which high-level ab initio calculations can be performed in acceptable computational time. Background: The defining characteristic of an energy-based molecular fragmentation method is that the molecule (or a cluster of molecules, or a liquid or solid) is broken up into a set of relatively small molecular fragments, in such a way that the electronic energy E_F of the full system F is given by a sum of the energies of these fragment molecules: E_F = ∑_{i=1}^{N_frag} c_i E_i + ε_F, where E_i is the energy of a relatively small molecular fragment F_i. The c_i are simple coefficients (typically integers), and N_frag is the number of fragment molecules. Some of the methods also require a correction to the energies evaluated from the fragments; where necessary, this correction, ε_F, is easily computed. Methods: Different methods have been devised to fragment molecules. Among them are the following energy-based methods: Electrostatically Embedded Generalized Molecular Fractionation with Conjugate Caps (EE-GMFCC); Generalized Energy-Based Fragmentation (GEBF); Molecular Tailoring Approach (MTA); Systematic Molecular Fragmentation (SMF); Combined Fragmentation Method (CFM); Kernel Energy Method (KEM); Many-Overlapping-Body (MOB) Expansion; and the Generalized Many-Body Expansion (GMBE) Method.
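To make the energy expression above concrete, here is a minimal sketch of how the fragment sum E_F = ∑ c_i E_i + ε_F could be assembled once the fragment energies are known; the numerical values and the function name fragment_energy are hypothetical placeholders, since in a real calculation each E_i would come from an ab initio run on the corresponding fragment.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Energy-based fragmentation: the total electronic energy is assembled as
// E_F = sum_i c_i * E_i + eps_F, where the E_i are fragment energies, the c_i
// are (typically integer) coefficients, and eps_F is an optional correction.
double fragment_energy(const std::vector<double>& E,
                       const std::vector<int>& c,
                       double eps_F = 0.0) {
    double total = eps_F;
    for (std::size_t i = 0; i < E.size() && i < c.size(); ++i)
        total += c[i] * E[i];
    return total;
}

int main() {
    // Hypothetical fragment energies (in hartree) and coefficients, chosen
    // only to illustrate an overlap-corrected combination of fragments.
    std::vector<double> E = {-76.42, -76.41, -152.80};
    std::vector<int>    c = {1, 1, -1};
    std::printf("E_F = %.4f hartree\n", fragment_energy(E, c));
    return 0;
}
```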
**Punycode** Punycode: Punycode is a representation of Unicode with the limited ASCII character subset used for Internet hostnames. Using Punycode, host names containing Unicode characters are transcoded to a subset of ASCII consisting of letters, digits, and hyphens, which is called the letter–digit–hyphen (LDH) subset. For example, München (German name for Munich) is encoded as Mnchen-3ya. Punycode: While the Domain Name System (DNS) technically supports arbitrary sequences of octets in domain name labels, the DNS standards recommend the use of the LDH subset of ASCII conventionally used for host names, and require that string comparisons between DNS domain names should be case-insensitive. The Punycode syntax is a method of encoding strings containing Unicode characters, such as internationalized domain names (IDNA), into the LDH subset of ASCII favored by DNS. It is specified in IETF Request for Comments 3492. Encoding procedure: As stated in RFC 3492, "Punycode is an instance of a more general algorithm called Bootstring, which allows strings composed from a small set of 'basic' code points to uniquely represent any string of code points drawn from a larger set." Punycode defines parameters for the general Bootstring algorithm to match the characteristics of Unicode text. This section demonstrates the procedure for Punycode encoding, using the example of the string "bücher" (Bücher is German for books), which is translated into the label "bcher-kva". Encoding procedure: Separation of ASCII characters First, all ASCII characters in the string are copied from input to output, skipping over any other characters. For example, "bücher" is copied to "bcher". If any characters were copied, i.e. if there was at least one ASCII character in the input, an ASCII hyphen is appended to the output (e.g., "bücher" → "bcher-", but "ü" → ""). Encoding procedure: Note that hyphens are themselves ASCII characters. Thus, they can be present in the input and, if so, they will be copied to the output. This causes no ambiguity: if the output contains hyphens, the one that got added is always the last one. It marks the end of the ASCII characters. Encoding the non-ASCII characters For each non-ASCII character in the input, the encoder calculates two numbers: i is the 0-indexed position of the non-ASCII character in the input string (e.g. "0" means that the non-ASCII character is the input string's first character). n is the numeric code point, in Unicode, of the non-ASCII character, minus 127 (the last character code of ASCII).The encoder then calculates i*n, and encodes the resulting number into a sequence of base-36 digits. It renders those in ASCII, and appends the result to the output string. The ASCII rendering is: 0 → 'a', ..., 25 → 'z', 26 → '0', ..., 35 → '9', with the number's digits arranged in little-endian order. The base-36 encoding process is more complex. It outputs variable-length integers. These have the property that each number's most significant digit (e.g. the digit "1" in the number "123") is recognizable without context. Thus, the digits from multiple numbers can be concatenated, with nothing separating them, yet the original numbers can still be recognized and extracted. ACE prefix for internationalized domain names To prevent hyphens in non-international domain names from triggering a Punycode decoding, the string xn-- is prepended to Punycode sequences in internationalized domain names. 
This is called ACE (ASCII Compatible Encoding).Thus the domain name "bücher.tld" would be represented in ASCII as "xn--bcher-kva.tld". The decoder The decoder is a finite-state machine with two state variables i and n. i is an index into the string, ranging from zero (representing a potential insertion at the start) to the current length of the extended string (representing a potential insertion at the end). i starts at zero. n starts at 128 (the first non-ASCII code point). The state progression is a monotonic function. A state transition either increments i or, if i is at its maximum, resets i to zero and increments n. At the next state transition, we resume incrementing i. At each state, the code point denoted by n either gets inserted or not. The numbers generated by the encoder represent how many possibilities to skip before an insertion is made. Encoding procedure: There are six possible places to insert a character in the string "bcher" (including before the first character and after the last one). There are 124 code points between the last ASCII code point (127 = 0x7F, the end of ASCII) and "ü" (code point 252 = 0xFC, see Unicode's Latin-1 Supplement). There is one insertion position for the "ü" that must be skipped (position zero: before the 'b'). Encoding procedure: Thus, the decoder will skip a total of (6 × 124) + 1 = 745 possible insertions before reaching the required one. Once the character is inserted, there are now seven possible places to insert another character. Encoding procedure: Re-encoding of code numbers as ASCII sequences Punycode uses generalized variable-length integers to represent these values. For example, this is how "kva" is used to represent the code number 745: A number system with little-endian ordering is used which allows variable-length codes without separate delimiters: a digit lower than a threshold value marks that it is the most-significant digit, hence the end of the number. The threshold value depends on the position in the number and also on previous insertions, to increase efficiency. Correspondingly the weights of the digits vary. Encoding procedure: In this case a number system with 36 symbols is used, with the case-insensitive 'a' through 'z' equal to the decimal numbers 0 through 25, and '0' through '9' equal to the decimal numbers 26 through 35. Thus "kva", corresponds to the decimal number string "10 21 0". Encoding procedure: To decode this string of symbols, a sequence of thresholds will be needed, in this case it's (1, 1, 26, 26, ...). The weight (or place value) of the least-significant digit is always 1: 'k' (=10) with a weight of 1 equals 10. After this, the weight of the next digit depends on the first threshold: generally, for any n, the weight of the (n+1)-th digit is the weight of the previous one times (36 − threshold of the n-th digit). So the second symbol has a place value of 36 minus the previous threshold value, in this case, 35. Therefore, the sum of the first two symbols 'k' (=10) and 'v' (=21) is 10 × 1 + 21 × 35. Since the second symbol is not less than its threshold value of 1, there is more to come. However, since the third symbol in this example is 'a' (=0), we may ignore calculating its weight. Therefore, "kva" represents the decimal number (10 × 1) + (21 × 35) = 745. Encoding procedure: The thresholds themselves are determined for each successive encoded character by an algorithm keeping them between 1 and 26 inclusive. The case can then be used to provide information about the original case of the string. 
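The arithmetic just described can be sketched in a few lines of code. The helper below decodes one little-endian, variable-length base-36 integer given an explicit threshold sequence, reproducing the "kva" → 745 example; the function name and the hard-coded threshold list are illustrative simplifications, since a full Punycode decoder derives each threshold (kept between 1 and 26, as noted above) from its bias-adaptation step.

```cpp
#include <cctype>
#include <cstdio>
#include <string>
#include <vector>

// Decode one little-endian, variable-length base-36 integer as described
// above: digit values are a..z = 0..25 and 0..9 = 26..35, the weight of the
// first digit is 1, each later weight is the previous weight times
// (36 - threshold), and a digit below its threshold ends the number.
int decode_vli(const std::string& digits, const std::vector<int>& thresholds) {
    int value = 0, weight = 1;
    for (std::size_t k = 0; k < digits.size(); ++k) {
        int c = std::tolower(static_cast<unsigned char>(digits[k]));
        int d = std::isdigit(c) ? c - '0' + 26 : c - 'a';
        value += d * weight;
        int t = (k < thresholds.size()) ? thresholds[k] : 26;
        if (d < t) break;              // most-significant digit reached
        weight *= 36 - t;
    }
    return value;
}

int main() {
    // Threshold sequence quoted above for this example: (1, 1, 26, 26, ...).
    std::printf("%d\n", decode_vli("kva", {1, 1, 26, 26}));  // prints 745
    return 0;
}
```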
Encoding procedure: Because special characters are sorted by their code points by encoding algorithm, for the insertion of a second special character in "bücher", the first possibility is "büücher" with code "bcher-kvaa", the second "bücüher" with code "bcher-kvab", etc. After "bücherü" with code "bcher-kvae" comes codes representing insertion of ý, the Unicode character following ü, starting with "ýbücher" with code "bcher-kvaf" (different from "übücher" coded "bcher-jvab"), etc. Encoding procedure: To make the encoding and decoding algorithms simple, no attempt has been made to prevent some encoded values from encoding inadmissible Unicode values: however, these should be checked for and detected during decoding. Encoding procedure: Punycode is designed to work across all scripts, and to be self-optimizing by attempting to adapt to the character set ranges within the string as it operates. It is optimized for the case where the string is composed of zero or more ASCII characters and in addition characters from only one other script system, but will cope with any arbitrary Unicode string. Note that for DNS use, the domain name string is assumed to have been normalized using nameprep and (for top-level domains) filtered against an officially registered language table before being punycoded, and that the DNS protocol sets limits on the acceptable lengths of the output Punycode string. Examples: The following table shows examples of Punycode encodings for different types of input.
**Seekg** Seekg: In the C++ programming language, seekg is a function in the fstream library (part of the standard library) that allows you to seek to an arbitrary position in a file. This function is defined for the ifstream class; for the ofstream class there is a similar function, seekp (the two names avoid conflicts in classes that derive from both istream and ostream, such as iostream). Seekg: position is the new position in the stream buffer. This parameter is an object of type streampos. offset is an integer value of type streamoff representing the offset in the stream's buffer. It is relative to the dir parameter. dir is the seeking direction. It is an object of type ios_base::seekdir that can take any of the following constant values: ios_base::beg (offset from the beginning of the stream's buffer), ios_base::cur (offset from the current position in the stream's buffer), ios_base::end (offset from the end of the stream's buffer). Note: If the stream has previously reached end of file, many implementations of seekg will not reset that state and will instead return an error; use the clear() method to clear the end-of-file bit first. This is a relatively common mistake, and if seekg() is not performing as expected, it is wise to clear the fail bit, as shown below. Example of seekg and clearing the fail bit:
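The example referred to above is not reproduced in this text. A minimal sketch might look like the following, which reads a file to its end and then seeks back to the beginning; the file name example.txt is a placeholder. The call to clear() before seekg() is the point being illustrated (since C++11, seekg clears the eofbit itself, but clearing explicitly remains the portable habit the text describes).

```cpp
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream file("example.txt");    // placeholder file name
    if (!file) {
        std::cerr << "could not open file\n";
        return 1;
    }

    std::string line;
    while (std::getline(file, line)) {
        // ... process each line; after this loop, eofbit and failbit are set
    }

    file.clear();                          // clear the end-of-file and fail bits
    file.seekg(0, std::ios_base::beg);     // seek back to the start of the stream

    if (std::getline(file, line)) {        // the first line can now be read again
        std::cout << line << '\n';
    }
    return 0;
}
```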
**Standard two-wheel motor vehicle (Japan)** Standard two-wheel motor vehicle (Japan): A standard two-wheel motor vehicle (普通自動二輪車, futsū jidō nirinsha), sometimes referred to as an ordinary motorcycle, is one of the vehicle categories in the Road Traffic Act of Japan. Such vehicles (motorcycles) have a displacement of more than 50 cc but no more than 400 cc. Standard two-wheel motor vehicle (Japan): In contrast, in the same act, such vehicle with a displacement of 125 cc or less is called a small two-wheel motor vehicle (小型二輪車, kogata nirinsha) (small motorcycle), and is a subcategory of a standard two-wheel motor vehicle. Meanwhile, one with a displacement of over 400 cc is called a large two-wheel motor vehicle (大型自動二輪車, ōgata jidō nirinsha) (heavy motorcycle). Overview: A standard two-wheel motor vehicle can be operated with an ordinary motorcycle license or a heavy motorcycle license. It is defined in the Road Traffic Act's Enforcement Regulations as a "two-wheeled vehicle (including one with a side car) other than a large special vehicle, heavy motorcycle and small special vehicle". Three-wheeled vehicles are treated as ordinary vehicles, and are not classed as motorcycles. However, the safety standard for motorcycles is applied to a three-wheeled vehicle that has coaxial wheels with a width of less than 46 cm, where the body or wheels incline when turning. License: A license to ride an ordinary motorcycle can be obtained from the age of 16 onward. Categories: Some of the vehicle categories under Japanese law are as follows:
**Burdock piling** Burdock piling: Burdock piling (牛蒡積み, gobouzumi) is an advanced Japanese technique for building stone walls, named after the resemblance of the rough stones used to the ovate shapes of the blossoms of Japanese burdock plants. It was used to build ishi gaki (石垣), sloped stone walls which make up the foundations of many Japanese castles, such as Osaka Castle. Large rocks are fitted together over a mound of earth, and the remaining cracks are filled in with pebbles. This stone fill is called kuri ishi (栗石, chestnut stones) because of the stones' small size. No mortar was used in the building of castle walls, which allowed the individual stones to move slightly during earthquakes without causing significant wall damage. Burdock piling: This technique grew from an earlier Japanese wall-building technique known as disordered piling.
**Function (computer programming)** Function (computer programming): In computer programming, a function or subroutine is a sequence of program instructions that performs a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed. Function (computer programming): Functions may be defined within programs, or separately in libraries that can be used by many programs. In different programming languages, a function may be called a routine, subprogram, subroutine, or procedure; in object-oriented programming (OOP), it may be called a method. Technically, these terms all have different definitions, and the nomenclature varies from language to language. The generic umbrella term callable unit is sometimes used.A function is often coded so that it can be started several times and from several places during one execution of the program, including from other functions, and then branch back (return) to the next instruction after the call, once the function's task is done. Function (computer programming): The idea of a subroutine was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC, and recorded in a January 1947 Harvard symposium on "Preparation of Problems for EDVAC-type Machines". Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine, contrasted with an open subroutine or macro. However, Alan Turing had discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack.Functions are a powerful programming tool, and the syntax of many programming languages includes support for writing and using subroutines. Judicious use of functions (for example, through the structured programming approach) will often substantially reduce the cost of developing and maintaining a large program, while increasing its quality and reliability. Functions, often collected into libraries, are an important mechanism for sharing and trading software. The discipline of OOP is based on objects and methods (which are functions attached to these objects or object classes). Main concepts: The content of a function is its body, which is the piece of program code that is executed when the function is called or invoked. Main concepts: A function may be written so that it expects to obtain one or more data values from the calling program (to replace its parameters or formal parameters). The calling program provides actual values for these parameters, called arguments. Different programming languages may use different conventions for passing arguments: A function call may also have side effects such as modifying data structures in a computer memory, reading from or writing to a peripheral device, creating a file, halting the program or the machine, or even delaying the program's execution for a specified time. A subprogram with side effects may return different results each time it is called, even if it is called with the same arguments. An example is a random number function, available in many languages, that returns a different pseudo-random number each time it is called. The widespread use of functions with side effects is a characteristic of imperative programming languages. Main concepts: A function can be coded so that it may call itself recursively, at one or more places, to perform its task. 
This method allows direct implementation of functions defined by mathematical induction and recursive divide and conquer algorithms. Main concepts: A function whose purpose is to compute one boolean-valued function (that is, to answer a yes/no question) is sometimes called a predicate. In logic programming languages, often all functions are called predicates, since they primarily determine success or failure.A function that returns no value or returns a null value is sometimes called a procedure. Procedures usually modify their arguments and are a core part of procedural programming. Terminology: A subroutine is a function that doesn't return a value. The primary purpose of functions is to break up complicated computations into meaningful chunks and name them. The function may return a computed value to its caller (its return value), or provide various result values or output parameters. Indeed, a common use of functions is to implement mathematical functions, in which the purpose of the function is purely to compute one or more results whose values are entirely determined by the arguments passed to the function. (Examples might include computing the logarithm of a number or the determinant of a matrix.) In some languages the syntax for a procedure that returns a value is essentially the same as the syntax for a procedure that does not return a value, except for the absence of, e.g., RETURNS clause. In some languages a procedure may dynamically choose to return with or without a value, depending on its arguments. Language support: High-level programming languages usually include specific constructs to: Delimit the part of the program (body) that makes up the function Assign an identifier (name) to the function Specify the names and data types of its parameters and return values Provide a private naming scope for its temporary variables Identify variables outside the function that are accessible within it Call the function Provide values to its parameters The main program contains the address of the subprogram The subprogram contains the address of the next instruction of the function call in the main program Specify the return values from within its body Return to the calling program Dispose of the values returned by a call Handle any exceptional conditions encountered during the call Package functions into a module, library, object, or classSome programming languages, such as Pascal, Fortran, Ada and many dialects of BASIC, distinguish between functions or function subprograms, which provide an explicit return value to the calling program, and subroutines or procedures, which do not. In those languages, function calls are normally embedded in expressions (e.g., a sqrt function may be called as y = z + sqrt(x)). Procedure calls either behave syntactically as statements (e.g., a print procedure may be called as if x > 0 then print(x) or are explicitly invoked by a statement such as CALL or GOSUB (e.g., call print(x)). Other languages, such as C and Lisp, do not distinguish between functions and subroutines. Language support: In strictly functional programming languages such as Haskell, subprograms can have no side effects, which means that various internal states of the program will not change. Functions will always return the same result if repeatedly called with the same arguments. Such languages typically only support functions that return a value, since functions that do not return a value have no use unless they can cause a side effect. 
Language support: In programming languages such as C, C++, and C#, functions that return a value and functions that return no value are both called "functions" (not to be confused with mathematical functions or functional programming, which are different concepts). A language's compiler will usually translate procedure calls and returns into machine instructions according to a well-defined calling convention, so that functions can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue. Advantages: The advantages of breaking a program into functions include: Decomposing a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures Reducing duplicate code within a program Enabling reuse of code across multiple programs Dividing a large programming task among various programmers or various stages of a project Hiding implementation details from users of the function Improving readability of code by replacing a block of code with a function call where a descriptive function name serves to describe the block of code. This makes the calling code concise and readable even if the function is not meant to be reused. Advantages: Improving traceability (i.e. most languages offer ways to obtain the call trace which includes the names of the involved functions and perhaps even more information such as file names and line numbers); by not decomposing the code into functions, debugging would be severely impaired Disadvantages: Compared to using in-line code, invoking a function imposes some computational overhead in the call mechanism.A function typically requires standard housekeeping code – both at the entry to, and exit from, the function (function prologue and epilogue – usually saving general purpose registers and return address as a minimum). History: The idea of a subroutine was worked out after computing machines had already existed for some time. The arithmetic and conditional jump instructions were planned ahead of time and have changed relatively little, but the special instructions used for procedure calls have changed greatly over the years. The earliest computers and microprocessors, such as the Manchester Baby and the RCA 1802, did not have a single subroutine call instruction. Subroutines could be implemented, but they required programmers to use the call sequence—a series of instructions—at each call site. History: Subroutines were implemented in Konrad Zuse's Z4 in 1945. History: In 1945, Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines.In January 1947 John Mauchly presented general notes at 'A Symposium of Large Scale Digital Calculating Machinery' under the joint sponsorship of Harvard University and the Bureau of Ordnance, United States Navy. Here he discusses serial and parallel operation suggesting ...the structure of the machine need not be complicated one bit. 
It is possible, since all the logical characteristics essential to this procedure are available, to evolve a coding instruction for placing the subroutines in the memory at places known to the machine, and in such a way that they may easily be called into use.In other words, one can designate subroutine A as division and subroutine B as complex multiplication and subroutine C as the evaluation of a standard error of a sequence of numbers, and so on through the list of subroutines needed for a particular problem. ... All these subroutines will then be stored in the machine, and all one needs to do is make a brief reference to them by number, as they are indicated in the coding. History: Kay McNulty had worked closely with John Mauchly on the ENIAC team and developed an idea for subroutines for the ENIAC computer she was programming during World War II. She and the other ENIAC programmers used the subroutines to help calculate missile trajectories.Goldstine and von Neumann wrote a paper dated 16 August 1948 discussing the use of subroutines.Some very early computers and microprocessors, such as the IBM 1620, the Intel 4004 and Intel 8008, and the PIC microcontrollers, have a single-instruction subroutine call that uses a dedicated hardware stack to store return addresses—such hardware supports only a few levels of subroutine nesting, but can support recursive subroutines. Machines before the mid-1960s—such as the UNIVAC I, the PDP-1, and the IBM 1130—typically use a calling convention which saved the instruction counter in the first memory location of the called subroutine. This allows arbitrarily deep levels of subroutine nesting but does not support recursive subroutines. The IBM System/360 had a subroutine call instruction that placed the saved instruction counter value into a general-purpose register; this can be used to support arbitrarily deep subroutine nesting and recursive subroutines. The PDP-11 (1970) is one of the first computers with a stack-pushing subroutine call instruction; this feature also supports both arbitrarily deep subroutine nesting and recursive subroutines. Language support: In the very early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. Some assemblers would offer predefined macros to generate the call and return sequences. By the 1960s, assemblers usually had much more sophisticated support for both inline and separately assembled subroutines that could be linked together. Language support: One of the first programming languages to support user-written subroutines and functions was FORTRAN II. The IBM FORTRAN II compiler was released in 1958. ALGOL 58 and other early programming languages also supported procedural programming. Libraries: Even with this cumbersome approach, subroutines proved very useful. They allowed the use of the same code in many different programs. Memory was a very scarce resource on early computers, and subroutines allowed significant savings in the size of programs. Libraries: Many early computers loaded the program instructions into memory from a punched paper tape. Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program (or "mainline"); and the same subroutine tape could then be used by many different programs. A similar approach applied in computers that used punched cards for their main input. 
The name subroutine library originally meant a library, in the literal sense, which kept indexed collections of tapes or card-decks for collective use. Return by indirect jump: To remove the need for self-modifying code, computer designers eventually provided an indirect jump instruction, whose operand, instead of being the return address itself, was the location of a variable or processor register containing the return address. On those computers, instead of modifying the function's return jump, the calling program would store the return address in a variable so that when the function completed, it would execute an indirect jump that would direct execution to the location given by the predefined variable. Jump to subroutine: Another advance was the jump to subroutine instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly. Jump to subroutine: In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction, by convention register 14. To return, the subroutine had only to execute an indirect branch instruction (BR) through that register. If the subroutine needed that register for some other purpose (such as calling another subroutine), it would save the register's contents to a private memory location or a register stack. Jump to subroutine: In systems such as the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch. Execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, one would write, for example to call a subroutine called MYSUB from the main program. The subroutine would be coded as The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB, I which branched to the location stored at location MYSUB. Jump to subroutine: Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls. Jump to subroutine: Incidentally, a similar method was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the return address. Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC. Call stack: Most modern implementations of a function call use a call stack, a special case of the stack data structure, to implement function calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address. 
Call stack: The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing (RISC) and very long instruction word (VLIW) architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose. Call stack: The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter.Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible. Call stack: When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, which have been called but haven't returned yet). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of functions, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management. Call stack: However, another advantage of the call stack method is that it allows recursive function calls, since each nested call to the same procedure gets a separate instance of its private data. In a multi-threaded environment, there is generally more than one stack. An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store their activation records. Call stack: Delayed stacking One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both. Call stack: This overhead is most obvious and objectionable in leaf procedures or leaf functions, which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. If the procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) that will be needed after Q returns. 
Examples: C and C++ In the C and C++ programming languages, subprograms are termed functions (further classified as member functions when associated with a class, or free functions when not). These languages use the special keyword void to indicate that a function does not return any value. Note that C/C++ functions can have side-effects, including modifying any variables whose addresses are passed as parameters. Examples: The function does not return a value and has to be called as a stand-alone function, e.g., Function1(); This function returns a result (the number 5), and the call can be part of an expression, e.g., x + Function2() This function converts a number between 0 and 6 into the initial letter of the corresponding day of the week, namely 0 to 'S', 1 to 'M', ..., 6 to 'S'. The result of calling it might be assigned to a variable, e.g., num_day = Function3(number);. Examples: This function does not return a value but modifies the variable whose address is passed as the parameter; it would be called with Function4(&variable_to_increment);. BASIC dialects Microsoft Small Basic In the example above, Example() calls the subroutine. To define the actual subroutine, the Sub keyword must be used, with the subroutine name following Sub. After content has followed, EndSub must be typed. Examples: Visual Basic (classic) In the Visual Basic (classic) language, subprograms are termed functions or subs (or methods when associated with a class). Visual Basic 6 uses various terms called types to define what is being passed as a parameter. By default, an unspecified variable is registered as a variant type and can be passed as ByRef (default) or ByVal. Also, when a function or sub is declared, it is given a public, private, or friend designation, which determines whether it can be accessed outside the module or project that it was declared in. Examples: By value [ByVal] – a way of passing the value of an argument to a procedure by passing a copy of the value, instead of passing the address. As a result, the variable's actual value can't be changed by the procedure to which it is passed. Examples: By reference [ByRef] – a way of passing the value of an argument to a procedure by passing an address of the variable, instead of passing a copy of its value. This allows the procedure to access the actual variable. As a result, the variable's actual value can be changed by the procedure to which it is passed. Unless otherwise specified, arguments are passed by reference. Examples: Public (optional) – indicates that the function procedure is accessible to all other procedures in all modules. If used in a module that contains an Option Private, the procedure is not available outside the project. Private (optional) – indicates that the function procedure is accessible only to other procedures in the module where it is declared. Friend (optional) – used only in a class module. Indicates that the Function procedure is visible throughout the project, but not visible to a controller of an instance of an object. Examples: The function does not return a value and has to be called as a stand-alone function, e.g., Function1 This function returns a result (the number 5), and the call can be part of an expression, e.g., x + Function2() This function converts a number between 0 and 6 into the initial letter of the corresponding day of the week, namely 0 to 'M', 1 to 'T', ..., 6 to 'S'. The result of calling it might be assigned to a variable, e.g., num_day = Function3(number). 
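The C/C++ functions described earlier in this section (Function1 through Function4) are not reproduced in this text; a plausible reconstruction in C++, with hypothetical bodies chosen only to match their descriptions, might look like this:

```cpp
#include <iostream>

// Performs an action but returns no value; called as a stand-alone statement.
void Function1() {
    std::cout << "Function1 was called\n";
}

// Returns a result (the number 5); a call can appear inside an expression.
int Function2() {
    return 5;
}

// Converts a number between 0 and 6 into the initial letter of the
// corresponding day of the week: 0 -> 'S', 1 -> 'M', ..., 6 -> 'S'.
char Function3(int number) {
    const char initials[] = {'S', 'M', 'T', 'W', 'T', 'F', 'S'};
    return initials[number];
}

// Returns no value but modifies the variable whose address is passed to it.
void Function4(int* variable_to_increment) {
    ++*variable_to_increment;
}

int main() {
    Function1();                      // stand-alone call
    int x = 2;
    int y = x + Function2();          // call as part of an expression
    char num_day = Function3(1);      // 'M'
    int counter = 0;
    Function4(&counter);              // counter is now 1
    std::cout << y << ' ' << num_day << ' ' << counter << '\n';
}
```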
Examples: This function does not return a value but modifies the variable whose address is passed as the parameter; it would be called with "Function4(variable_to_increment)". Examples: PL/I In PL/I a called procedure may be passed a descriptor providing information about the argument, such as string lengths and array bounds. This allows the procedure to be more general and eliminates the need for the programmer to pass such information. By default PL/I passes arguments by reference. A (trivial) function to change the sign of each element of a two-dimensional array might look like: change_sign: procedure(array); declare array(*,*) float; array = -array; end change_sign; This could be called with various arrays as follows: /* first array bounds from -5 to +10 and 3 to 9 */ declare array1 (-5:10, 3:9)float; /* second array bounds from 1 to 16 and 1 to 16 */ declare array2 (16,16) float; call change_sign(array1); call change_sign(array2); Python In Python, the keyword def is used to define a function. The statements that form the body of the function must either continue on the same line or start on the next line and be indented. The following example program prints "Hello world!" followed by "Wikipedia" on the next line. Local variables, recursion and reentrancy: A subprogram may find it useful to make use of a certain amount of scratch space; that is, memory used during the execution of that subprogram to hold intermediate results. Variables stored in this scratch space are termed local variables, and the scratch space is termed an activation record. An activation record typically has a return address that tells it where to pass control back to when the subprogram finishes. Local variables, recursion and reentrancy: A subprogram may have any number and nature of call sites. If recursion is supported, a subprogram may even call itself, causing its execution to suspend while another nested execution of the same subprogram occurs. Recursion is a useful means to simplify some complex algorithms and break down complex problems. Recursive languages generally provide a new copy of local variables on each call. If the programmer desires the value of local variables to stay the same between calls, they can be declared static in some languages, or global values or common areas can be used. Here is an example of a recursive function in C/C++ to find Fibonacci numbers: Early languages like Fortran did not initially support recursion because variables were statically allocated, as well as the location for the return address. Early computer instruction sets made storing return addresses and variables on a stack difficult. Machines with index registers or general-purpose registers, e.g., CDC 6000 series, PDP-6, GE 635, System/360, UNIVAC 1100 series, could use one of those registers as a stack pointer. Local variables, recursion and reentrancy: Modern languages after ALGOL such as PL/I and C almost invariably use a stack, usually supported by most modern computer instruction sets to provide a fresh activation record for every execution of a subprogram. That way, the nested execution is free to modify its local variables without concern for the effect on other suspended executions in progress. As nested calls accumulate, a call stack structure is formed, consisting of one activation record for each suspended subprogram. In fact, this stack structure is virtually ubiquitous, and so activation records are commonly termed stack frames. 
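The recursive Fibonacci function referred to a few sentences above is likewise missing from this text; a minimal C/C++ sketch of such a function might be:

```cpp
#include <iostream>

// Returns the n-th Fibonacci number, defined recursively:
// fib(0) = 0, fib(1) = 1, fib(n) = fib(n-1) + fib(n-2).
unsigned long fib(unsigned int n) {
    if (n < 2) {
        return n;                    // base cases: fib(0) = 0, fib(1) = 1
    }
    return fib(n - 1) + fib(n - 2);  // each call suspends while two nested calls run
}

int main() {
    for (unsigned int i = 0; i < 10; ++i) {
        std::cout << fib(i) << ' ';  // prints 0 1 1 2 3 5 8 13 21 34
    }
    std::cout << '\n';
}
```

Each activation of fib gets its own copy of n in a fresh stack frame, which is exactly the per-call activation record behaviour described above.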
Local variables, recursion and reentrancy: Some languages such as Pascal, PL/I, and Ada also support nested functions, which are functions callable only within the scope of an outer (parent) function. Inner functions have access to the local variables of the outer function that called them. This is accomplished by storing extra context information within the activation record, also termed a display. Local variables, recursion and reentrancy: If a subprogram can be executed properly even when another execution of the same subprogram is already in progress, that subprogram is said to be reentrant. A recursive subprogram must be reentrant. Reentrant subprograms are also useful in multi-threaded situations since multiple threads can call the same subprogram without fear of interfering with each other. In the IBM CICS transaction processing system, quasi-reentrant was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads. Overloading: In strongly typed languages, it is sometimes desirable to have a number of functions with the same name, but operating on different types of data, or with different parameter profiles. For example, a square root function might be defined to operate on reals, complex values or matrices. The algorithm to be used in each case is different, and the return result may be different. By writing three separate functions with the same name, the programmer has the convenience of not having to remember different names for each type of data. Further, if a subtype can be defined for the reals, to separate positive and negative reals, two functions can be written for the reals, one to return a real when the parameter is positive, and another to return a complex value when the parameter is negative. Overloading: In object-oriented programming, when a series of functions with the same name can accept different parameter profiles or parameters of different types, each of the functions is said to be overloaded. Overloading: Here is an example of function overloading in C++, demonstrating the implementation of two functions with the same name (Area) but different parameters: As another example, a function might construct an object that will accept directions, and trace its path to these points on screen. There are a plethora of parameters that could be passed in to the constructor (colour of the trace, starting x and y co-ordinates, trace speed). If the programmer wanted the constructor to be able to accept only the color parameter, then he could call another constructor that accepts only color, which in turn calls the constructor with all the parameters passing in a set of default values for all the other parameters (X and Y would generally be centered on screen or placed at the origin, and the speed would be set to another value of the coder's choosing). Overloading: PL/I has the GENERIC attribute to define a generic name for a set of entry references called with different types of arguments. Example: DECLARE gen_name GENERIC( name WHEN(FIXED BINARY), flame WHEN(FLOAT), pathname OTHERWISE ); Multiple argument definitions may be specified for each entry. A call to "gen_name" will result in a call to "name" when the argument is FIXED BINARY, "flame" when FLOAT", etc. If the argument matches none of the choices "pathname" will be called. Closures: A closure is a subprogram together with the values of some of its variables captured from the environment in which it was created. 
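Returning briefly to overloading: the Area example mentioned above is not reproduced in this text, so the following reconstruction is a sketch in which the particular shapes (a circle and a rectangle) are assumptions made only for illustration.

```cpp
#include <iostream>

// Area of a circle, selected when the call supplies a single radius.
double Area(double radius) {
    return 3.14159265358979 * radius * radius;
}

// Area of a rectangle, selected when the call supplies two arguments.
double Area(double width, double height) {
    return width * height;
}

int main() {
    // The compiler picks the overload from the number and types of arguments.
    std::cout << Area(2.0) << '\n';       // circle of radius 2
    std::cout << Area(3.0, 4.0) << '\n';  // 3-by-4 rectangle
}
```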
Closures were a notable feature of the Lisp programming language, introduced by John McCarthy. Depending on the implementation, closures can serve as a mechanism for side-effects. Conventions: A wide number of conventions for the coding of functions have been developed. Pertaining to their naming, many developers have adopted the approach that the name of a function should be a verb when it does a certain task, and adjective when it makes some inquiry, and a noun when it is used to substitute variables. Some programmers suggest that a function should perform only one task, and if a function does perform more than one task, it should be split up into more functions. They argue that functions are key components in code maintenance, and their roles in the program must remain distinct. Conventions: Proponents of modular programming (modularizing code) advocate that each function should have minimal dependency on other pieces of code. For example, the use of global variables is generally deemed unwise by advocates for this perspective, because it adds tight coupling between the function and these global variables. If such coupling is not necessary, their advice is to refactor functions to accept passed parameters instead. However, increasing the number of parameters passed to functions can affect code readability. Return codes: Besides its main or normal effect, a subroutine may need to inform the calling program about exceptional conditions that may have occurred during its execution. In some languages and programming standards, this is often done through a return code, an integer value placed by the subprogram in some standard location, which encodes the normal and exceptional conditions. Return codes: In the IBM System/360, where return code was expected from the subroutine, the return value was often designed to be a multiple of 4—so that it could be used as a direct branch table index into a branch table often located immediately after the call instruction to avoid extra conditional tests, further improving efficiency. In the System/360 assembly language, one would write, for example: Optimization of function calls: There is a significant runtime overhead in calling a function, including passing the arguments, branching to the subprogram, and branching back to the caller. The overhead often includes saving and restoring certain processor registers, allocating and reclaiming call frame storage, etc.. In some languages, each function call also implies automatic testing of the function's return code or the handling of exceptions that it may raise. A significant source of overhead in object-oriented languages is the intensively used dynamic dispatch for method calls. Optimization of function calls: There are some seemingly obvious optimizations of procedure calls that cannot be applied if the procedures may have side effects. For example, in the expression (f(x)-1)/(f(x)+1), the function f must be called twice, because the two calls may return different results. Moreover, in the few languages which define the order of evaluation of the division operator's operands, the value of x must be fetched again before the second call, since the first call may have changed it. Determining whether a subprogram may have a side effect is very difficult (indeed, undecidable by virtue of Rice's theorem). So, while those optimizations are safe in purely functional programming languages, compilers of typical imperative programming usually have to assume the worst. 
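A small sketch (illustrative only) shows why the two calls in (f(x)-1)/(f(x)+1) cannot safely be collapsed into one when f may have side effects:

```cpp
#include <iostream>

// f has a side effect: it advances an internal counter, so two calls with the
// same argument can return different results.
int f(int x) {
    static int calls = 0;
    return x + calls++;
}

int main() {
    int x = 10;
    // The compiler must evaluate f(x) twice; reusing the first result would
    // change the program's meaning. (In C++ the order in which the two calls
    // are evaluated is itself unspecified.)
    double value = (f(x) - 1.0) / (f(x) + 1.0);
    std::cout << value << '\n';
}
```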
Optimization of function calls: Inlining A method used to eliminate this overhead is inline expansion or inlining of the subprogram's body at each call site (versus branching to the function and back). Not only does this avoid the call overhead, but it also allows the compiler to optimize the procedure's body more effectively by taking into account the context and arguments at that call. The inserted body can be optimized by the compiler. Inlining, however, will usually increase the code size, unless the program contains only one call to the function.
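As a sketch of what inline expansion does (not actual compiler output), the call square(3) below can be replaced by the function body with the argument substituted, after which the compiler is free to fold the whole expression to a constant:

```cpp
#include <iostream>

// A small function that is a good candidate for inlining; the 'inline'
// keyword is only a hint, and modern compilers decide for themselves.
inline int square(int x) {
    return x * x;
}

int main() {
    int a = square(3) + 1;   // after inlining: int a = (3 * 3) + 1;
    std::cout << a << '\n';

    int b = (3 * 3) + 1;     // the hand-inlined equivalent, with no call overhead
    std::cout << b << '\n';
}
```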
**Adam's Wrath** Adam's Wrath: Adam's Wrath is an adventure module for the 2nd edition of the Advanced Dungeons & Dragons fantasy role-playing game. Plot summary: Adam's Wrath is an adventure which pits mid-level PCs against Doctor Victor Mordenheim and his minions. The adventure includes a visit to a haunted mansion, a showdown with living snow, and a climax on the Isle of Agony. Publication history: Adam's Wrath was written by Lisa Smedman, and published by TSR, Inc. Doug Stewart did editing and additional development. Reception: Rick Swan reviewed Adam's Wrath for Dragon magazine #207 (July 1994). He reviewed this adventure with the supplement Van Richten's Guide to the Created, and commented that the "AD&D game meets Frankenstein in these first-rate supplements for the Ravenloft setting." He suggested considering "Guide to the Created a warm-up for the Adam's Wrath adventure". According to Swan, the "most unforgettable moment comes early, when the party regains consciousness in Mordenheim's lightning tower. I don't want to give it away, but suffice to say that when you awaken, 'Something is wrapped around your head, covering your eyes...'"
**Service Design Package (ITIL)** Service Design Package (ITIL): The Service Design Package (SDP) contains the core documentation of a service and is attached to its entry in the ITIL Service Portfolio. The SDP is described in the book Service Design, one of the five books that comprise the core of ITIL. The SDP follows the lifecycle of a service from when it is first suggested as a possibility to when it is finally retired. It is the central reference point for all documentation of a service, so it contains many links to other documents. Service Design Package (ITIL): A description of the sort of information that should be kept in an SDP is found in Appendix A of the Service Design book. The main categories described are: the service lifecycle plan; the service programme; the service transition plan; the service operational acceptance plan; the overall operational strategy, objectives, policy, risk assessment and plans; and the service acceptance criteria. At major stages through the life of a service, the Service Design Package (SDP) will contain project plans, project progress and project outcomes, as well as the business case that justified the service or the transition of the service from one status to another. Definition of a 'Service': In the ITIL model, a 'Service' is defined as "A means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs and risks." The meaning is thus highly business-focused and assumes some degree of outsourcing, although this may just be outsourcing from within the functional business unit to some IT services group within the same overall business. 'Service' in this context should not be confused with the IT meanings of 'service', such as a web service. This is somewhat confused by ITIL also recommending the adoption of service-oriented architecture (SOA), as expounded by OASIS. In most technical contexts, SOA is widely assumed to imply the provision and interconnection of technical services. Although SOA is a fashionable buzzword for ITIL to have incorporated, ITIL does not use the term according to its general meaning.
**Adhyāsa** Adhyāsa: Adhyāsa (Sanskrit: अध्यास, superimposition) is a concept in Hindu philosophy referring to the superimposition of an attribute, quality, or characteristic of one entity onto another entity. In Advaita Vedanta, Adhyasa means a false superimposition of the characteristics of the physical body (birth, death, skin color, etc.) onto the atman, and also the false superimposition of the characteristics of Atman (sentiency, existence) onto the physical body. Origin: The first mention of Adhyasa is found within the Brahma Sutra Bhasya of Adi Shankara. Adi Shankara begins his commentary on the Brahma Sutras by explaining what Adhyasa is and its nature. Shankara lists different views about Adhyasa from different philosophical schools, which suggests that the concept of Adhyasa certainly existed before Shankara. Definition: In his introduction to the commentary on the Brahma Sutras, Shankara defines Adhyasa thus: आह कोऽयमध्यासो नामेति। उच्यते स्मृतिरूपः परत्र पूर्वदृष्टावभासः। तं केचित् अन्यत्रान्यधर्माध्यास इति वदन्ति। केचित्तु यत्र यदध्यासः तद्विवेकाग्रहनिबन्धनो भ्रम इति। अन्ये तु यत्र यदध्यासः तस्यैव विपरीतधर्मत्वकल्पनामाचक्षते। सर्वथापि तु अन्यस्यान्यधर्मावभासतां न व्यभिचरति। तथा च लोकेऽनुभवः शुक्तिका हि रजतवदवभासते एकश्चन्द्रः सद्वितीयवदिति।। Swami Gambhirananda translates it as: If it be asked, "What is it that is called Superimposition?" - the answer is - "It is awareness, similar in nature to memory, that arises on a foreign (different) location as a result of some past experience. With regard to this, some say that it consists in the superimposition of the attributes of one thing on another. But others assert that wherever a superimposition on anything occurs, there is only a confusion arising from the absence of distinction between them. Others say that the superimposition of anything on any other substratum consists in fancying some opposite attribute on the very basis. From every point of view, however, there is no difference as regards the appearance of one thing as something else. And in accord with this, we find in common experience that the nacre appears as silver, and a single moon appears as two." Karl H. Potter translates it as: Now what is superimposition? It is the appearance (ābhāsa), in the form of a memory, of something previously experienced in some other place. Other philosophers define superimposition in slightly different ways. Some say it involves the nongrasping of the distinction of two things, leading to one being superimposed on the other. Others say it is the attribution to a thing of properties contrary to those belonging to that thing. In any case all agree that it involves the appearance of the properties of one thing in another. And this agrees with ordinary experience as reflected in the reports of illusions such as "the shell appears as silver" or "the single moon appears double."
**Holdover in synchronization applications** Holdover in synchronization applications: Two independent clocks, once synchronized, will walk away from one another without limit. To have them display the same time it would be necessary to re-synchronize them at regular intervals. The period between synchronizations is referred to as holdover, and performance under holdover relies on the quality of the reference oscillator, the PLL design, and the correction mechanisms employed. Importance: "Synchronization is as important as power at the cell site." This quote suggests that one can think of holdover in synchronization applications as analogous to running on backup power. Importance: Modern wireless communication systems require at least knowledge of frequency, and often knowledge of phase as well, in order to work correctly. Base stations need to know what time it is, and they usually get this knowledge from the outside world somehow (from a GPS time and frequency receiver, or from a synchronization source somewhere in the network they are connected to). Importance: But if the connection to the reference is lost, then the base station is on its own to establish what time it is. The base station needs a way to establish accurate frequency and phase (to know what time it is) using internal (or local) resources, and that is where the function of holdover becomes important. The importance of GPS-derived timing: A key application for GPS in telecommunications is to provide synchronization in wireless base stations. Base stations depend on timing to operate correctly, particularly for the handoff that occurs when a user moves from one cell to another. In these applications holdover is used in base stations to ensure continued operation while GPS is unavailable and to reduce the costs associated with emergency repairs, since holdover allows the site to continue to function correctly until maintenance can be performed at a convenient time. Some of the most stringent requirements come from the newer generation of wireless base stations, where phase accuracy targets as low as 1 μs need to be maintained for correct operation. However, the need for accurate timing has been an integral part of the history of wireless communication systems as well as wireline systems, and it has been suggested that the search for reliable and cost-effective timing solutions was spurred on by the need for CDMA to compete with lower-cost solutions. Within the base station, besides standard functions, accurate timing and the means to maintain it through holdover are vitally important for services such as E911. GPS as a source of timing is a key component not just in synchronization in telecommunications but in critical infrastructure in general. Of the 18 Critical Infrastructure and Key Resources (CIKR) sectors, 15 use GPS-derived timing to function correctly. One notable application where highly accurate timing (and the means to maintain it through holdover) is important is the use of synchrophasors in the power industry to detect line faults. Another is low-latency trading applications in capital markets. How GPS-derived timing can fail: GPS is sensitive to jamming and interference because the signal levels are so low that they can easily be swamped by other sources, whether accidental or deliberate. Also, since GPS depends on line-of-sight signals, it can be disrupted by urban canyon effects, which may make GPS available at some locations only at certain times of the day.
How GPS-derived timing can fail: A GPS outage, however, is not initially an issue, because clocks can go into holdover, allowing the effects of the interference to be ridden out for as long as the stability of the oscillator providing holdover will allow. The more stable the oscillator, the longer the system can operate without GPS. Defining holdover: In synchronization in telecommunications applications, holdover is defined by ETSI as: An operating condition of a clock which has lost its controlling input and is using stored data, acquired while in locked operation, to control its output. The stored data are used to control phase and frequency variations, allowing the locked condition to be reproduced within specifications. Holdover begins when the clock output no longer reflects the influence of a connected external reference, or transition from it. Holdover terminates when the output of the clock reverts to the locked mode condition. Defining holdover: One can regard holdover, then, as a measure of the accuracy or error acquired by a clock when there is no controlling external reference to correct for any errors. MIL-PRF-55310 defines clock accuracy as: $T(t) = T_0 + \int_0^t R(\tau)\,d\tau + \epsilon(t) = T_0 + \left( R_0 t + \tfrac{1}{2} A t^2 + \ldots \right) + \int_0^t E(\tau)\,d\tau + \epsilon(t)$, where $T_0$ is the synchronization error at $t = 0$; $R(t)$ is the fractional frequency difference between the two clocks under comparison; $\epsilon(t)$ is the error due to random noise; $R_0$ is $R(t)$ at $t = 0$; $A$ is the linear aging rate; and $E(t)$ is the frequency difference due to environmental effects. Similarly, ITU G.810 defines time error as: $x(t) = x_0 + y_0 t + \tfrac{D}{2} t^2 + \frac{\phi(t)}{2 \pi \nu_{nom}}$, where $x(t)$ is the time error; $x_0$ is the time error at $t = 0$; $y_0$ is the fractional frequency error at $t = 0$; $D$ is the linear fractional frequency drift rate; $\phi(t)$ is the random phase deviation component; and $\nu_{nom}$ is the nominal frequency. Implementing holdover: In applications that require synchronization (such as wireless base stations) GPS clocks are often used, and in this context they are often known as a GPSDO (GPS disciplined oscillator) or GPS TFS (GPS time and frequency source). NIST defines a disciplined oscillator as: An oscillator whose output frequency is continuously steered (often through the use of a phase locked loop) to agree with an external reference. For example, a GPS disciplined oscillator (GPSDO) usually consists of a quartz or rubidium oscillator whose output frequency is continuously steered to agree with signals broadcast by the GPS satellites. Implementing holdover: In a GPSDO a GPS or GNSS signal is used as the external reference that steers an internal oscillator. In a modern GPSDO the GPS processing and steering functions are both implemented in a microprocessor, allowing a direct comparison between the GPS reference signal and the oscillator output. Implementing holdover: Amongst the building blocks of a GPS time and frequency solution the oscillator is a key component, and it is typically built around an oven-controlled crystal oscillator (OCXO) or a rubidium-based clock. The dominant factors influencing the quality of the reference oscillator are taken to be aging and temperature stability. However, depending upon the construction of the oscillator, barometric pressure and relative humidity can have at least as strong an influence on the stability of the quartz oscillator. What is often referred to as "random walk" instability is actually a deterministic effect of environmental parameters. These can be measured and modeled to vastly improve the performance of quartz oscillators.
Adding a microprocessor to the reference oscillator can improve temperature stability and aging performance. During holdover, any remaining clock error caused by aging and temperature instability can be corrected by control mechanisms. A combination of a quartz-based reference oscillator (such as an OCXO) and modern correction algorithms can achieve good results in holdover applications. The holdover capability, then, is provided either by a free-running local oscillator, or by a local oscillator that is steered with software that retains knowledge of its past performance. The earliest documentation of such an effort comes from the then National Bureau of Standards in 1968 [Allan, Fey, Machlan and Barnes, "An Ultra Precise Time Synchronization System Designed By Computer Simulation", Frequency], where an analog computer consisting of ball-disk integrators implemented a third-order control loop to correct for the frequency aging of an oscillator. The first microprocessor implementation of this concept occurred in 1983 [Bourke, Penrod, "An Analysis of a Microprocessor Controlled Disciplined Frequency Standard", Frequency Control Symposium], where Loran-C broadcasts were used to discipline very high quality quartz oscillators as a caesium replacement in telecommunications wireline network synchronization. The basic aim of a steering mechanism is to improve the stability of a clock or oscillator while minimizing the number of times it needs calibration. In holdover, the learned behavior of the OCXO is used to anticipate and correct for future behavior. Effective aging and temperature compensation can be provided by such a mechanism, and the system designer is faced with a range of choices of algorithms and techniques to do this correction, including extrapolation, interpolation and predictive filters (including Kalman filters). Once the barriers of aging and environmental effects are removed, the only theoretical limitation to holdover performance in such a GPSDO is irregularity or noise in the drift rate, which is quantified using a metric like Allan deviation or time deviation. The complexity of trying to predict the effects on holdover of systematic effects like aging and temperature stability and stochastic influences like random walk noise has resulted in tailor-made holdover oscillator solutions being introduced in the market.
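A short sketch (illustrative only; the oscillator figures below are assumed values, not taken from the text) shows how the deterministic part of the G.810 time-error model quoted above can be used to estimate how long a free-running oscillator stays within a phase target such as 1 μs during holdover, ignoring the random component φ(t):

```cpp
#include <cmath>
#include <iostream>

// Deterministic part of the ITU G.810 time-error model:
//   x(t) = x0 + y0*t + (D/2)*t^2
// x0: initial time error [s], y0: initial fractional frequency error,
// D: linear fractional frequency drift rate [1/s].
double timeError(double x0, double y0, double D, double t) {
    return x0 + y0 * t + 0.5 * D * t * t;
}

int main() {
    // Assumed (illustrative) figures for a disciplined OCXO entering holdover:
    const double x0 = 10e-9;             // 10 ns residual time error at the start
    const double y0 = 1e-11;             // residual fractional frequency error
    const double D  = 1e-10 / 86400.0;   // aging of 1e-10 per day, as a rate per second
    const double limit = 1e-6;           // 1 microsecond phase target

    for (double t = 0.0; t <= 7 * 86400.0; t += 3600.0) {   // step one hour for a week
        double x = timeError(x0, y0, D, t);
        if (std::fabs(x) > limit) {
            std::cout << "1 us exceeded after about " << t / 3600.0 << " hours\n";
            return 0;
        }
    }
    std::cout << "still within 1 us after one week of holdover\n";
}
```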
**Axelopran** Axelopran: Axelopran (INN, USAN) (developmental code name TD-1211) is a drug under development by Theravance Biopharma and licensed to Glycyx for all indications. It acts as a peripherally acting μ-opioid receptor antagonist and also acts on κ- and δ-opioid receptors, with similar affinity for the μ- and κ-opioid receptors and about an order of magnitude lower affinity for the δ-opioid receptor. Recent data suggest that μ-opioid antagonists have a direct effect on overall survival in patients with advanced cancer. μ-Opioid agonists (e.g., morphine) have been shown to have multiple pro-tumor effects in vivo and in vitro that can be blocked with μ-opioid antagonists, including promoting angiogenesis, accelerating tumor cell proliferation, and modifying the response to chemotherapeutics. An extensive body of literature has shown diverse and profound immunosuppressive effects of μ-opioid activation in vivo and in vitro. Recent data for axelopran in three different pre-clinical models of cancer show that a μ-opioid antagonist isolates distinct effects of the endogenous opioid system on tumor growth and works in combination with checkpoint inhibitors. The study of axelopran in melanoma in a zebrafish embryo model, with an immature immune system and no microbiome, tested axelopran's direct effects on tumor growth and metastasis. The study of breast cancer in chicken eggs, with a functional immune system and no microbiome, tested the direct effect of axelopran on tumor weight, tumor immune infiltration, metastasis and angiogenesis. The study of axelopran in MC-38 syngeneic colorectal cancer in mice, in combination with murine anti-PD-1 antibody, tested the effect of μ-opioid blockade on tumor volume and survival in a full in vivo model with both a fully functional immune system and mature gut function and enteric microbiome. Axelopran: All three pre-clinical studies showed a significant impact of axelopran on their respective endpoints, suggesting that μ-opioid blockade is useful across different tumor types and has multiple mechanisms of action, including direct suppression of tumor cell proliferation, angiogenesis and metastasis, and immune surveillance. Furthermore, axelopran and murine anti-PD-1 antibody were synergistic in slowing tumor growth and increasing survival in the syngeneic mouse model. Axelopran: Axelopran has potent μ-opioid receptor antagonist activity on the gastrointestinal tract in vivo, and thus it produces a dose-dependent inhibition of opioid-induced delay in gastric emptying in mice and rats following subcutaneous or oral administration.
**GOT2** GOT2: Aspartate aminotransferase, mitochondrial is an enzyme that in humans is encoded by the GOT2 gene. Glutamic-oxaloacetic transaminase is a pyridoxal phosphate-dependent enzyme which exists in cytoplasmic and inner-membrane mitochondrial forms, GOT1 and GOT2, respectively. GOT plays a role in amino acid metabolism and in the urea and Krebs cycles. GOT2 is also a major participant in the malate-aspartate shuttle, which is a passage from the cytosol to the mitochondria. The two enzymes are homodimeric and show close homology. GOT2 has been seen to have a role in cell proliferation, especially in terms of tumor growth. Structure: GOT2 is a dimer containing two identical subunits that hold overlapping subunit regions. The top and sides of the enzyme are made up of helices, while the bottom is formed by strands of beta sheets and extended hairpin loops. The subunit itself can be categorized into four different parts: a large domain, which binds pyridoxal-P, a small domain, an NH2-terminal arm, and a bridge across the two domains, which is formed by residues 48-75 and 301-358. Virtually ubiquitous in eukaryotic cells, GOT2 nucleic acid and protein sequences are highly conserved, and its 5′ regulatory regions in genomic DNA resemble those of typical housekeeping genes in that, e.g., they lack a TATA box. The GOT2 gene is located at 16q21 and has an exon count of 10. Function: To produce the energy needed for everyday activities, the body needs to go through the process of glycolysis, which breaks down glucose into pyruvate. One very important part of this pathway is the reduction of NAD+ to NADH and then the rapid oxidation of NADH back into NAD+. The oxidation phase mainly occurs in the mitochondria as part of the electron transport chain, but the transfer of NADH into the mitochondria from the cytosol is impossible, due to the impermeability of the inner mitochondrial membrane to NADH. Therefore, the malate-aspartate shuttle is needed to transfer reducing equivalents across the mitochondrial membrane for energy production. GOT2 and another enzyme, MDH, are essential for the functioning of the shuttle. GOT2 converts oxaloacetate into aspartate by transamination. This aspartate, as well as alpha-ketoglutarate, returns to the cytosol, where they are converted back to oxaloacetate and glutamate, respectively. Another function of GOT2 is that it is believed to transaminate kynurenine into kynurenic acid (KYNA) in the brain. The KYNA made by GOT2 is thought to be an important factor in brain pathology. It is suggested that KYNA synthesized by GOT2 could constitute a common, and mechanistically relevant, feature of the neurotoxicity caused by mitochondrial poisons, such as rotenone, malonate, 1-methyl-4-phenylpyridinium, and 3-nitropropionic acid. Clinical Significance: In nearly all cancer cells, glycolysis has been seen to be highly elevated to meet their increased energy, biosynthesis, and redox needs. Therefore, the malate-aspartate shuttle promotes the net transfer of cytosolic NADH into mitochondria to ensure a high rate of glycolysis in diverse cancer cell lines. In a study completed in 2008, inhibiting the malate-aspartate shuttle was found to impair the glycolysis process and essentially decreased breast adenocarcinoma cell proliferation.
Furthermore, knocking down GOT2 and GOT1 has also been reported to inhibit cell proliferation and colony formation in pancreatic cancer cell lines, suggesting that the GOT enzymes are essential for maintaining a high rate of glycolysis to support rapid tumor cell growth. In addition, both glucose and glutamine increase GOT2 3K acetylation in PANC-1 cells, and GOT2 3K acetylation plays a critical role in coordinating glucose and glutamine uptake to provide energy and support cell proliferation and tumor growth. This implies that inhibiting GOT2 3K acetylation may merit exploration as a therapeutic approach, especially for pancreatic cancer. Mutations in this gene have been associated with an early-onset infantile encephalopathy. Interactions: GOT2 has been seen to interact with oxaloacetate, kynurenine, aspartate and alpha-ketoglutarate.
**Spread (projective geometry)** Spread (projective geometry): A frequently studied problem in discrete geometry is to identify ways in which an object can be covered by other simpler objects such as points, lines, and planes. In projective geometry, a specific instance of this problem that has numerous applications is determining whether, and how, a projective space can be covered by pairwise disjoint subspaces which have the same dimension; such a partition is called a spread. Specifically, a spread of a projective space PG(d,K), where d≥1 is an integer and K a division ring, is a set of r-dimensional subspaces, for some 0<r<d, such that every point of the space lies in exactly one of the elements of the spread. Spread (projective geometry): Spreads are particularly well studied in projective geometries over finite fields, though some notable results apply to infinite projective geometries as well. In the finite case, the foundational work on spreads appears in André and independently in Bruck-Bose in connection with the theory of translation planes. In these papers, it is shown that a spread of r-dimensional subspaces of the finite projective space PG(d,q) exists if and only if r+1 divides d+1. Spreads and translation planes: For all integers n≥1, the projective space PG(2n+1,q) always has a spread of n-dimensional subspaces, and in this section the term spread refers to this specific type of spread; spreads of this form may (and frequently do) occur in infinite projective geometries as well. These spreads are the most widely studied in the literature, due to the fact that every such spread can be used to create a translation plane using the André/Bruck-Bose construction. Spreads and translation planes: Reguli and regular spreads. Let Σ be the projective space PG(2n+1,K) for n≥1 an integer and K a division ring. A regulus R in Σ is a collection of pairwise disjoint n-dimensional subspaces with the following properties: R contains at least 3 elements; every line meeting three elements of R, called a transversal, meets every element of R; and every point of a transversal to R lies on some element of R. Any three pairwise disjoint n-dimensional subspaces in Σ lie in a unique regulus. A spread S of Σ is regular if for any three distinct n-dimensional subspaces of S, all the members of the unique regulus determined by them are contained in S. Regular spreads are significant in the theory of translation planes, in that they generate Moufang planes in general, and Desarguesian planes in the finite case when the order of the ambient field is greater than 2. All spreads of PG(2n+1,2) are trivially regular, since a regulus only contains three elements. Spreads and translation planes: Constructing a regular spread. Construction of a regular spread is most easily seen using an algebraic model. Letting V be a (2n+2)-dimensional vector space over a field F, one can model the k-dimensional subspaces of PG(2n+1,F) using the (k+1)-dimensional subspaces of V; this model uses homogeneous coordinates to represent points and hyperplanes. Incidence is defined by intersection, with subspaces intersecting in only the zero vector considered disjoint; in this model, the zero vector of V is effectively ignored. Spreads and translation planes: Let F be a field and E an n-dimensional extension field of F. Consider V=E⊕E as a 2n-dimensional vector space over F, which provides a model for the projective space PG(2n−1,F) as above. Each element of V can be written uniquely as (x,y) where x,y∈E.
A regular spread is given by the set of n-dimensional projective spaces defined by J(k)={(x,kx):x∈E} for each k∈E, together with J(∞)={(0,y):y∈E}. Constructing spreads: Spread sets. The construction of a regular spread above is an instance of a more general construction of spreads, which uses the fact that field multiplication is a linear transformation over E when considered as a vector space. Since E is a finite n-dimensional extension over F, a linear transformation from E to itself can be represented by an n×n matrix with entries in F. A spread set is a set S of n×n matrices over F with the following properties: S contains the zero matrix and the identity matrix; for any two distinct matrices X and Y in S, X−Y is nonsingular; and for each pair of elements a,b∈E, there is a unique X∈S such that aX=b. In the finite case, where E is the field of order qⁿ for some prime power q, the last condition is equivalent to the spread set containing qⁿ matrices. Given a spread set S, one can create a spread as the set of n-dimensional projective spaces defined by J(M)={(x,xM):x∈E} for each M∈S, together with J(∞)={(0,y):y∈E}. As a specific example, the following nine matrices (written row by row) represent GF(9) as 2 × 2 matrices over GF(3) and so provide a spread set for AG(2,9): [0 0; 0 0], [1 0; 0 1], [2 0; 0 2], [0 1; 2 0], [1 1; 2 1], [2 1; 2 2], [0 2; 1 0], [1 2; 1 1], [2 2; 2 1]. Another example of a spread set yields the Hall plane of order 9: [0 0; 0 0], [1 0; 0 1], [2 0; 0 2], [1 1; 1 2], [2 2; 2 1], [0 1; 2 0], [0 2; 1 0], [1 2; 2 2], [2 1; 1 1]. Modifying spreads. One common approach to creating new spreads is to start with a regular spread and modify it in some way. The techniques presented here are some of the more elementary examples of this approach. Constructing spreads: Spreads of 3-space. One can create new spreads by starting with a spread and looking for a switching set, a subset of its elements that can be replaced with an alternate set of pairwise disjoint subspaces of the correct dimension. In PG(3,K), a regulus forms a switching set, as the set of transversals of a regulus R also forms a regulus, called the opposite regulus of R. Removing the lines of a regulus in a spread and replacing them with the opposite regulus produces a new spread which is often non-isomorphic to the original. This process is a special case of a more general process called derivation or net replacement. Starting with a regular spread of PG(3,q) and reversing any regulus produces a spread that yields a Hall plane. In more generality, the process can be applied independently to any collection of reguli in a regular spread, yielding a subregular spread; the resulting translation plane is called a subregular plane. The André planes form a special subclass of subregular planes, of which the Hall planes are the simplest examples, arising by replacing a single regulus in a regular spread. Constructing spreads: More complex switching sets have been constructed. Bruen has explored the concept of a chain of reguli in a regular spread of PG(3,q), q odd, namely a set of (q+3)/2 reguli which pairwise meet in exactly 2 lines, so that every line contained in a regulus of the chain is contained in exactly two distinct reguli of the chain. Bruen constructed an example of a chain in the regular spread of PG(3,5), and showed that it could be replaced by taking the union of exactly half of the lines from the opposite regulus of each regulus in the chain. Numerous examples of Bruen chains have appeared in the literature since, and Heden has shown that any Bruen chain is replaceable using opposite half-reguli.
Chains are known to exist in a regular spread of PG(3,q) for all odd prime powers q up to 37, except 29, and are known not to exist for q = 29, 41, 43, 47 and 49. It is conjectured that no additional Bruen chains exist. Constructing spreads: Baker and Ebert generalized the concept of a chain to a nest, which is a set of reguli in a regular spread such that every line contained in a regulus of the nest is contained in exactly two distinct reguli of the nest. Unlike a chain, two reguli in a nest are not required to meet in a pair of lines. Unlike chains, a nest in a regular spread need not be replaceable; however, several infinite families of replaceable nests are known. Constructing spreads: Higher-dimensional spreads. In higher dimensions a regulus cannot be reversed because the transversals do not have the correct dimension. There exist analogs to reguli, called norm surfaces, which can be reversed. The higher-dimensional André planes can be obtained from spreads obtained by reversing these norm surfaces, and there also exist analogs of subregular spreads which do not give rise to André planes. Constructing spreads: Geometric techniques. There are several known ways to construct spreads of PG(3,q) from other geometrical objects without reference to an initial regular spread. Some well-studied approaches are given below. Constructing spreads: Flocks of quadratic cones. In PG(3,q), a quadratic cone is the union of the set of lines containing a fixed point P (the vertex) and a point on a conic in a plane not passing through P. Since a conic has q+1 points, a quadratic cone has q(q+1)+1 points. As with traditional geometric conic sections, a plane of PG(3,q) can meet a quadratic cone in either a point, a conic, a line or a line pair. A flock of a quadratic cone is a set of q planes whose intersections with the quadratic cone are pairwise disjoint conics. The classic construction of a flock is to pick a line m that does not meet the quadratic cone, and take the q planes through m that do not contain the vertex of the cone; such a flock is called linear.
A much more robust source of hyperbolic fibrations was identified by Baker, et al., where the authors developed a correspondence between flocks of quadratic cones and hyperbolic fibrations; interestingly, the spreads generated by a flock of a quadratic cone are not generally isomorphic to the spreads generated from the corresponding hyperbolic fibration. Constructing spreads: Subgeometry partitions. Hirschfeld and Thas note that for any odd integer n≥3, a partition of PG(n−1,q²) into subgeometries isomorphic to PG(n−1,q) gives rise to a spread of PG(2n−1,q), where each subgeometry of the partition corresponds to a regulus of the new spread. Constructing spreads: The "classical" subgeometry partitions of PG(n−1,q²) can be generated using suborbits of a Singer cycle, but this simply generates a regular spread. Yff published the non-classical subgeometry partition, namely a partition of PG(2,9) into 7 copies of PG(2,3), that admits a cyclic group permuting the subplanes. Baker, et al. provide several infinite families of partitions of PG(2,q²) into subplanes, with the same cyclic group action. Partial spreads: A partial spread of a projective space PG(d,K) is a set of pairwise disjoint r-dimensional subspaces in the space; hence a spread is just a partial spread where every point of the space is covered. A partial spread is called complete or maximal if there is no larger partial spread that contains it; equivalently, there is no r-dimensional subspace disjoint from all members of the partial spread. As with spreads, the most well-studied case is partial spreads of lines of the finite projective space PG(3,q), where a full spread has size q²+1. Mesner showed that any partial spread of lines in PG(3,q) with size greater than q²−q cannot be complete; indeed, it must be a subset of a unique spread. For a lower bound, Bruen showed that a partial spread of lines in PG(3,q) with at most q+√q lines cannot be complete; there will necessarily be a line that can be added to a partial spread of this size. Bruen also provides examples of complete partial spreads of lines in PG(3,q) with sizes q²−q+1 and q²−q+2 for all q>2. Spreads of classical polar spaces: The classical polar spaces are all embedded in some projective space PG(d,K) as the set of totally isotropic subspaces of a sesquilinear or quadratic form on the vector space underlying the projective space. A particularly interesting class of partial spreads of PG(d,K) are those that consist strictly of maximal subspaces of a classical polar space embedded in the projective space. Such partial spreads that cover all of the points of the polar space are called spreads of the polar space. Spreads of classical polar spaces: From the perspective of the theory of translation planes, the symplectic polar space is of particular interest, as its set of points are all of the points in PG(2n+1,K), and its maximal subspaces are of dimension n. Hence a spread of the symplectic polar space is also a spread of the entire projective space, and can be used as noted above to create a translation plane. Several examples of symplectic spreads are known; see Ball, et al.
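As a small illustration of the spread-set conditions from the section on constructing spreads, the sketch below (an illustrative check, not a construction from the cited literature) builds the nine 2×2 matrices over GF(3) of the form aI + bM with M² = 2I, the companion-matrix representation of GF(9) matching the example list above, and verifies that the zero and identity matrices are present and that every difference of distinct members is nonsingular modulo 3.

```python
import itertools
import numpy as np

q = 3
I = np.array([[1, 0], [0, 1]])
M = np.array([[0, 1], [2, 0]])   # M @ M = 2*I (mod 3), so M plays the role of a square root of 2

# The nine matrices a*I + b*M with a, b in GF(3) represent GF(9) over GF(3).
spread_set = [(a * I + b * M) % q for a in range(q) for b in range(q)]

def det_mod(X, m):
    """Determinant of a 2x2 integer matrix reduced mod m."""
    return (X[0, 0] * X[1, 1] - X[0, 1] * X[1, 0]) % m

# Spread-set axioms: the zero and identity matrices are present, and X - Y
# is nonsingular for every pair of distinct members.
assert any(np.array_equal(X, np.zeros((2, 2), dtype=int)) for X in spread_set)
assert any(np.array_equal(X, I) for X in spread_set)
assert all(det_mod((X - Y) % q, q) != 0
           for X, Y in itertools.combinations(spread_set, 2))
print(f"{len(spread_set)} matrices satisfy the spread-set conditions")
```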
**Der Urologe** Der Urologe: Der Urologe is a peer-reviewed scientific journal of urology published by Springer Medizin. It was established in 1962. The current editor-in-chief is Bernd Wullich. In 1970, the journal split into Der Urologe. Ausg. A and Der Urologe. Ausg. B (which continued the short-lived Der Urologische Facharzt, published from 1968 to 1969). Der Urologe. Ausg. B ceased publication in 2002, upon which Der Urologe. Ausg. A was renamed Der Urologe.
**Theta graph** Theta graph: In computational geometry, the Theta graph, or Θ-graph, is a type of geometric spanner similar to a Yao graph. The basic method of construction involves partitioning the space around each vertex into a set of cones, which themselves partition the remaining vertices of the graph. Like Yao graphs, a Θ-graph contains at most one edge per cone; where they differ is how that edge is selected. Whereas Yao graphs select the nearest vertex according to the metric space of the graph, the Θ-graph defines a fixed ray contained within each cone (conventionally the bisector of the cone) and selects the nearest neighbor with respect to orthogonal projections to that ray. The resulting graph exhibits several good spanner properties. Theta graph: Θ-graphs were first described by Clarkson in 1987 and independently by Keil in 1988. Construction: Θ-graphs are specified with a few parameters which determine their construction. The most obvious parameter is k, which corresponds to the number of equal-angle cones that partition the space around each vertex. In particular, for a vertex p, a cone about p can be imagined as two infinite rays emanating from it with angle θ=2π/k between them. With respect to p, we can label these cones as C1 through Ck in a counterclockwise pattern from C1, which conventionally opens so that its bisector has angle 0 with respect to the plane. As these cones partition the plane, they also partition the remaining vertex set of the graph (assuming general position) into the sets V1 through Vk, again with respect to p. Every vertex in the graph gets the same number of cones in the same orientation, and we can consider the set of vertices that fall into each. Construction: Considering a single cone, we need to specify another ray emanating from p, which we will label l. For every vertex v∈Vi, we consider the orthogonal projection of v onto l. Suppose that r is the vertex with the closest such projection; then the edge {p,r} is added to the graph. This is the primary difference from Yao graphs, which always select the nearest vertex in the cone rather than the vertex whose projection onto l is nearest. Construction: Construction of a Θ-graph is possible with a sweepline algorithm in O(n log n) time. Properties: Θ-graphs exhibit several good geometric spanner properties. When the parameter k is a constant, the Θ-graph is a sparse spanner. As each vertex generates at most one edge per cone, most vertices will have small degree, and the overall graph will have at most k⋅n=O(n) edges. Properties: The stretch factor between any pair of points in a spanner is defined as the ratio between their metric space distance and their distance within the spanner (i.e. from following edges of the spanner). The stretch factor of the entire spanner is the maximum stretch factor over all pairs of points within it. Recall from above that θ=2π/k; then when k≥9, the Θ-graph has a stretch factor of at most 1/(cos θ − sin θ). If the orthogonal projection line l in each cone is chosen to be the bisector, then for k≥7, the spanning ratio is at most 1/(1 − 2 sin(π/k)). For k=1, the Θ-graph forms a nearest neighbor graph. For k=2, it is easy to see that the graph is connected, as each vertex will connect to something to its left, and something to its right, if they exist. For k = 3, 4, 5, 6, and k ≥ 7, the Θ-graph is known to be connected. Many of these results also give upper and/or lower bounds on their spanning ratios.
Properties: When k is an even number, we can create a variant of the Θk-graph known as the half-Θk-graph, where the cones themselves are partitioned into even and odd sets in an alternating fashion, and edges are only considered in the even cones (or only the odd cones). Half-Θk-graphs are known to have some very nice properties of their own. For example, the half-Θ6-graph (and, consequently, the Θ6-graph, which is just the union of two complementary half-Θ6-graphs) is known to be a 2-spanner. Software for drawing Theta graphs: A tool written in Java; the Cone-based Spanners package in the Computational Geometry Algorithms Library (CGAL).
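The sketch below builds a Θ-graph directly from the definition: for every vertex it assigns the remaining vertices to k cones of angle 2π/k (oriented so that the first cone is bisected by the positive x-axis) and keeps, per cone, the vertex whose orthogonal projection onto the cone's bisector is closest. It is a straightforward O(n²k) illustration, not the O(n log n) sweepline algorithm mentioned above, and the example point set is arbitrary.

```python
import math

def theta_graph(points, k):
    """Edge set of the Theta graph on `points` (a list of (x, y) pairs) with k cones per vertex."""
    cone = 2 * math.pi / k
    edges = set()
    for i, p in enumerate(points):
        best = {}   # cone index -> (projection length onto the bisector, vertex index)
        for j, v in enumerate(points):
            if i == j:
                continue
            dx, dy = v[0] - p[0], v[1] - p[1]
            # Shift by half a cone so that cone 0 is bisected by the positive x-axis.
            a = (math.atan2(dy, dx) + cone / 2) % (2 * math.pi)
            c = int(a // cone) % k
            bisector = c * cone
            proj = dx * math.cos(bisector) + dy * math.sin(bisector)
            if c not in best or proj < best[c][0]:
                best[c] = (proj, j)
        for _, j in best.values():
            edges.add((min(i, j), max(i, j)))
    return edges

# Tiny example with k = 6 cones; a real spanner experiment would use many more points.
pts = [(0, 0), (2, 1), (1, 3), (-1, 2), (-2, -1), (1, -2)]
print(sorted(theta_graph(pts, 6)))
```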
**Perchang** Perchang: Perchang is a physics-based game about getting little balls into a funnel. It was released on iOS on June 22, 2016 by Perchang Games. Gameplay: In Perchang the player is tasked with getting a certain number of balls from a designated starting point into a funnel located somewhere else in the environment. The player must get the required number of balls into the funnel within a certain time limit to complete the level. In most levels the path between the start and end points is like a Rube Goldberg machine, with elements (environmental objects) such as magnets and flippers that need to be properly operated to get the balls to the funnel. The environmental objects are colored red or blue. Tapping the red button activates the objects colored red and tapping the blue button activates the objects colored blue. The player can change the color of an environmental object by tapping on it (if the object was red it will become blue and vice versa). As the player progresses through the levels new elements are introduced such as magnets, flippers, fans, portals and more. The game has 60 levels, with 50 being regular levels (arranged in 10 areas) and 10 being bonus levels unlocked by getting a gold medal (awarded for completing the level in a certain amount of time or less) on all levels in an area. On October 27, 2016, a black mode was added with new zero-g levels. Reception: Perchang got mostly positive reviews. 148 Apps called the game a "beautifully minimalist puzzle game" and rated it 4 stars out of 5. A reviewer at Pocket Tactics said that the game "reminds me surprisingly strongly of playing Angry Birds". They gave the game 3 out of 5 stars, saying that it was "attractive and engaging to think about, less enjoyable for which to perfect solutions". Apple'N'Apps gave the game a score of 4.0 out of 5, praising that the game "combines thought and timing skill", has "varied contraptions", and has a "high quality design", while criticizing the fact that "some levels can be tedious" and "solutions can also drag out a bit". TouchArcade described the game as like "Marble Madness meets Lemmings". The game also reached the Top Charts for paid apps in the US and UK App Stores (it was in the #3 position on July 5, 2016).
**Granolithic** Granolithic: Granolithic screed, also known as granolithic paving and granolithic concrete, is a type of construction material composed of cement and fine aggregate such as granite or other hard-wearing rock. It is generally used as flooring, or as paving (such as for sidewalks). It has a similar appearance to concrete, and is used to provide a durable surface where texture and appearance are usually not important (such as outdoor pathways or factory floors). It is commonly laid as a screed. Screeds are a type of flooring laid on top of the structural element (like reinforced concrete) to provide a level surface on which the "wearing flooring" (the flooring which people see and walk on) is laid. A screed can also be laid bare, as it provides a long-lasting surface. The aggregate mixed with the cement can be of various size, shape, and material, depending on the texture of the surface needed and how long-lasting it must be. The aggregate is usually sifted so that the particles are roughly the same size, which helps reduce air pockets in the material (which can weaken it). Generally, the mix of aggregate to cement is 2.5 to 1 by volume. Granolithic screed or paving can be problematic. Because it is made with a high cement content and requires a great deal of water to mix, it may crack while drying. It can also come loose from the material below (especially if the lower material is not properly prepared). Pouring the material in layers is generally avoided. Cracking and curling can be reduced by dividing the area to be covered into smaller sections and then pouring the material. Debonding of the granolithic material can also be significantly avoided by using bonding agents like epoxy resins or polymer latex. Granolithic: A high degree of skill in pouring and finishing the material is needed to prevent problems. Sealers and hardeners can be added to the granolithic material to improve its resistance to wear.
**TAF2** TAF2: Transcription initiation factor TFIID subunit 2 is a protein that in humans is encoded by the TAF2 gene. Initiation of transcription by RNA polymerase II requires the activities of more than 70 polypeptides. The protein that coordinates these activities is transcription factor IID (TFIID), which binds to the core promoter to position the polymerase properly, serves as the scaffold for assembly of the remainder of the transcription complex, and acts as a channel for regulatory signals. TFIID is composed of the TATA-binding protein (TBP) and a group of evolutionarily conserved proteins known as TBP-associated factors, or TAFs. TAFs may participate in basal transcription, serve as coactivators, function in promoter recognition, or modify general transcription factors (GTFs) to facilitate complex assembly and transcription initiation. This gene encodes one of the larger subunits of TFIID that is stably associated with the TFIID complex. It contributes to interactions at and downstream of the transcription initiation site, interactions that help determine transcription complex response to activators.
**X32 ABI** X32 ABI: The x32 ABI is an application binary interface (ABI) and one of the interfaces of the Linux kernel. The x32 ABI provides 32-bit integers, longs and pointers (ILP32) on Intel and AMD 64-bit hardware. The ABI allows programs to take advantage of the benefits of the x86-64 instruction set (larger number of CPU registers, better floating-point performance, faster position-independent code, shared libraries, function parameters passed via registers, faster syscall instruction) while using 32-bit pointers and thus avoiding the overhead of 64-bit pointers. Details: Though the x32 ABI limits the program to a virtual address space of 4 GiB, it also decreases the memory footprint of the program by making pointers smaller. This can allow it to run faster by fitting more code and more data into cache. The best results during testing were with the 181.mcf SPEC CPU 2000 benchmark, in which the x32 ABI version was 40% faster than the x86-64 version. On average, x32 is 5–8% faster on the SPEC CPU integer benchmarks compared to x86-64. There is no speed advantage over x86-64 in the SPEC CPU floating-point benchmarks. There are also some application benchmarks that demonstrate the advantages of the x32 ABI. History: Running a userspace that consists mostly of programs compiled in ILP32 mode, and which also have principal access to 64-bit CPU instructions, has not been uncommon, especially in the field of "classic RISC" chips. For example, the Solaris operating system does so for both SPARC and x86-64. On the Linux side, Debian also ships an ILP32 userspace. The underlying reason is the somewhat "more expensive" nature of LP64 code, just as has been shown for x86-64. In that regard, the x32 ABI extends the ILP32-on-64-bit concept to the x86-64 platform. History: Several people had discussed the benefits of an x86-64 ABI with 32-bit pointers in the years since the Athlon 64's release in 2003, notably Donald Knuth in 2008. There was little publicly visible progress towards implementing such a mode until August 27, 2011, when Hans Peter Anvin announced to the Linux kernel mailing list that he and H. J. Lu had been working on the x32 ABI. That same day, Linus Torvalds replied with a concern that the use of 32-bit time values in the x32 ABI could cause problems in the future, because 32-bit time values would overflow in the year 2038. Following this concern, the developers of the x32 ABI changed the time values to 64-bit. A presentation at the Linux Plumbers Conference on September 7, 2011, covered the x32 ABI. The x32 ABI was merged into the Linux kernel for the 3.4 release, with support being added to the GNU C Library in version 2.16. In December 2018 there was discussion as to whether to deprecate the x32 ABI, which has not happened as of April 2023.
**Niven's theorem** Niven's theorem: In mathematics, Niven's theorem, named after Ivan Niven, states that the only rational values of θ in the interval 0° ≤ θ ≤ 90° for which the sine of θ degrees is also a rational number are: sin 0° = 0, sin 30° = 1/2, and sin 90° = 1. In radians, one would require that 0 ≤ x ≤ π/2, that x/π be rational, and that sin x be rational. The conclusion is then that the only such values are sin 0 = 0, sin π/6 = 1/2, and sin π/2 = 1. Niven's theorem: The theorem appears as Corollary 3.12 in Niven's book on irrational numbers. The theorem extends to the other trigonometric functions as well. For rational values of θ, the only rational values of the sine or cosine are 0, ±1/2, and ±1; the only rational values of the secant or cosecant are ±1 and ±2; and the only rational values of the tangent or cotangent are 0 and ±1.
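The following numerical sketch illustrates the statement (it is not a proof): it scans rational angles p/q · 180° with small denominators and reports those whose sine is, to floating-point accuracy, a fraction with a small denominator. Only 0, 1/2 and 1 turn up, as the theorem predicts; the denominator bounds are arbitrary choices.

```python
import math
from fractions import Fraction

hits = set()
for q in range(1, 25):                 # angle = p/q * 180 degrees
    for p in range(0, q // 2 + 1):     # restrict to 0 <= angle <= 90 degrees
        angle = Fraction(p, q) * 180
        s = math.sin(math.radians(float(angle)))
        approx = Fraction(s).limit_denominator(50)
        if abs(s - approx) < 1e-12:    # sine is (numerically) a small-denominator rational
            hits.add((angle, approx))

for angle, value in sorted(hits):
    print(f"sin({angle} degrees) = {value}")   # prints only 0, 1/2 and 1
```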
**Rice noodles** Rice noodles: Rice noodles, or simply rice noodle, are noodles made with rice flour and water as the principal ingredients. Sometimes ingredients such as tapioca or corn starch are added in order to improve the transparency or increase the gelatinous and chewy texture of the noodles. Rice noodles are most common in the cuisines of China, India and Southeast Asia. They are available fresh, frozen, or dried, in various shapes, thicknesses and textures. Fresh noodles are also highly perishable; their shelf life may be just several days. History: The origin of rice noodles dates back to China during the Qin dynasty, when people from northern China invaded the south. Due to climatic conditions, the northern Chinese have traditionally preferred wheat and millet, which grew in cold weather, while the southern Chinese preferred rice, which grew in hot weather. Noodles are traditionally made out of wheat and eaten throughout northern China, so to adapt, northern cooks tried to prepare "noodles" using rice, thus inventing rice noodles. Over time rice noodles and their processing methods have been introduced around the world, becoming especially popular in Southeast Asia. In India, idi-appam, strings of cooked rice, was known in ancient Tamil country around the 1st century AD, as per references in the Sangam literature, according to food historian K. T. Achaya. The shelf life may be extended by drying and removing the moisture content. Studies of drying rice noodles were conducted by the International Food Research Journal. Varieties: Round thick varieties Bánh canh – thick Vietnamese noodles. The Vietnamese word bánh refers to items such as noodles or cakes that are made from flour, and canh means "soup." Lai fun – a short and thick variety of Chinese noodles, also referred to as bánh canh by the Vietnamese. Nan gyi – large thick round rice noodles used in Burma. Nan lat – medium thick round rice noodles used in Burma. Silver needle noodles – a variety of Chinese noodles. It is short, about 5 cm long and 5 mm in diameter. Similar to Lai Fun but has a tapering end resembling a rat's tail. More commonly known as silver needle noodle in Hong Kong and Taiwan, and rat noodle or "mouse tail noodles" in Malaysia and Singapore, and Locupan in Indonesia. They are also known as pin noodles. In Thailand they are known as Giam Ee noodles. Varieties: Flat thick varieties Bánh phở – thick fresh rice noodle used in popular Vietnamese phở noodle soups. Varieties: Shahe fen/chao fen/chow fun – wide Chinese noodles. Also known as shahe fen / he fen (Mandarin), ho fun, hofoen, hor fun, sar hor fun, etc. (Cantonese), kway teow (literally "ricecake strips" in Minnan Chinese) or sen yai. Migan – a type of rice noodle from the Dai people, a Tai cultural group from Yunnan Province, China. It is made from ordinary non-glutinous rice. It is primarily defined by its relatively broad and flat shape. Juanfen – similar to Migan. Sen lek – narrow flat rice noodle in Thailand, used in such dishes as pad thai, Sukhothai rice noodles and in noodle soups. Its full name is kuaitiao sen lek. Nan byar – flat rice noodles used in Burma; byar/pyar means flat. Varieties: San see – sticky flat rice noodles from Shan State of Burma. Guay jub / kuay jab / kuai chap – Thai rolled rice chips or rice flake sheets. Thin varieties Khanom chin – fresh, thin rice noodles in Thai cuisine which are made from rice sometimes fermented for three days, boiled, and then made into noodles by extruding the resulting dough through a sieve into boiling water.
Burmese mont bat (မုန့်ဖတ်) or mont di (မုန့်တီ) are similar to this. Varieties: Rice vermicelli – thin strips, sometimes referred to as rice sticks. Also known as bí-hún, bīfun, bíjon or bihon, bee hoon, bihun, num banh chok, bún, mee hoon, Sevai and Sen Mee. Others Mixian – a type of rice noodle from the Yunnan Province, China, made from ordinary non-glutinous rice. In many areas there are at least two distinct thicknesses produced, a thinner form (roughly 1.5 mm or 0.059 inches in diameter) and a thicker form (roughly 3.5–4 mm or 0.14–0.16 inches in diameter). Pasta made from brown rice flour is also available (in health food stores in Western nations) as an alternative to wheat flour-based noodles for individuals who react poorly to gluten. Dishes: Burmese Baik kut kyee kaik Kat kyi kaik Kyar san kyaw – 'Kyar zan'/'Kyar san' means 'thin noodles' in Burmese, and 'kyaw' means fried. It is made with thin rice noodles, vermicelli and various vegetables, chicken, pork and seafood. Dishes: Kyay oh Meeshay Mohinga Mont di Nan gyi thohk Nanbyagyi thoke Rakhine kyarzan thoke Shan khauk swè (similar to Yunnan mi xian) – a "soup version" of meeshay without gel, and fish sauce instead of soy sauce, with flat or round noodles, where the soup is part of the dish itself, rather than served as consommé. Also known as Khaut sew or Shan-style noodles, these are thin noodles served with a peppery soup topped with either chicken or pork and pickled vegetables. Dishes: Cambodian Kuyteav Num banhchok Chinese Beef chow fun Cart noodle Chao fen – also known as Chow Fun in many Chinese restaurants in North America. Clay-Pot Lao Shu Fen Crossing-the-bridge noodles Laoyou rice noodles Luosifen Mixian (noodle) Rice noodle roll Singapore-style noodles Filipino Mami bihon Pancit bihon Pancit choca Pancit luglug Pancit Malabon Pancit miki-bihon – round egg noodles with bihon, a hybrid type of stir-fried noodle. Dishes: Pancit palabok Pancit sinanta – consists of flat egg noodles, bihon, clams and chicken, with broth colored with annatto and served with pinakufu, a variant of dango. Indonesian Bihun Bihun goreng Bakso Ketoprak (dish) Kwetiau ayam Kwetiau goreng Kwetiau kuah Kwetiau Medan kwetiau sapi Lakso Soto Lao Feu or Fer Khao piak sen Khao poon Khao soi Khua mee / pad lao – savory, sweet, caramelized fried noodles, traditionally topped with a fried egg omelette. It is the equivalent of pad thai in Thai cuisine. Mee ka tee Malaysian Char kway teow Kway chap Laksa Mee siam South Indian/Sri Lankan Idiyappam Sevai Singapore Beef kway teow Crab bee hoon Hokkien mee Katong laksa Satay bee hoon Thai Khanom chin Khao soi Kuai chap – a soup of pork broth with rolled-up rice noodle sheets (resulting in rolls about the size of Italian penne), pork intestines, "blood tofu", and boiled egg. Dishes: Kuai tiao khua kai Kuai tiao nam tok – noodle soup darkened with raw blood. Kuaitiao ruea, aka boat noodles – boat noodles have been served since the period of Plaek Phibunsongkhram in 1942, and were originally served from boats that traversed Bangkok's canals. Several types of noodles are used for boat noodles: thin rice noodles, egg noodles, sen yai (wide broad rice noodles), and sen lek (narrow flat rice noodles).
Dishes: Mi krop Nam ngiao Phat khi mao Pad thai Phat si-io – stir-fried noodles in dark soy sauce Rat na – gravy noodles Sukhothai rice noodles Vietnamese Bánh canh – Vietnamese soup with thick rice noodles Bánh cuốn – sheet of rice flour filled with spiced minced pork and mushroom Bánh hỏi Bún chả Bún bò Huế – rice vermicelli in soup with beef, lemon grass and other ingredients Bún kèn Bún mắm Bún ốc Bún riêu – rice vermicelli in soup with crab meat Bún thịt nướng Bún quậy Cao lầu Gỏi cuốn / Summer roll Hủ tiếu – A version of kuay teow that became popular in the 1960s in Southern Vietnam, especially in Saigon. There are different types of noodles for Hu Tieu, such as soft rice noodles, egg noodles, or chewy tapioca noodles.
**Plasma activation** Plasma activation: Plasma activation (or plasma functionalization) is a method of surface modification employing plasma processing, which improves surface adhesion properties of many materials including metals, glass, ceramics, a broad range of polymers and textiles and even natural materials such as wood and seeds. Plasma functionalization also refers to the introduction of functional groups on the surface of exposed materials. It is widely used in industrial processes to prepare surfaces for bonding, gluing, coating and painting. Plasma processing achieves this effect through a combination of reduction of metal oxides, ultra-fine surface cleaning from organic contaminants, modification of the surface topography and deposition of functional chemical groups. Importantly, plasma activation can be performed at atmospheric pressure using air or typical industrial gases including hydrogen, nitrogen and oxygen. Thus, the surface functionalization is achieved without expensive vacuum equipment or wet chemistry, which positively affects its costs, safety and environmental impact. Fast processing speeds further facilitate numerous industrial applications. Introduction: The quality of adhesive bonding such as gluing, painting, varnishing and coating depends strongly on the ability of the adhesive to efficiently cover (wet) the substrate area. This happens when the surface energy of the substrate is greater than the surface energy of the adhesive. However, high-strength adhesives have high surface energy. Thus, their application is problematic for low surface energy materials such as polymers. To solve this problem, surface treatment is used as a preparation step before adhesive bonding. It cleans the surface from organic contaminants, removes a weak boundary layer, chemically bonds to the substrate a strong layer with high surface energy and chemical affinity to the adhesive, and modifies the surface topography enabling capillary action by the adhesive. Importantly, surface preparation provides a reproducible surface allowing consistent bonding results. Many industries employ surface preparation methods including wet chemistry, exposure to UV light, flame treatment and various types of plasma activation. The advantage of plasma activation lies in its ability to achieve all necessary activation objectives in one step without the use of chemicals. Thus, plasma activation is simple, versatile and environmentally friendly. Types of plasmas used for surface activation: Many types of plasmas can be used for surface activation. However, for economic reasons, atmospheric pressure plasmas have found the most applications. They include the arc discharge, the corona discharge, the dielectric barrier discharge and its variation, the piezoelectric direct discharge. Types of plasmas used for surface activation: Arc discharge. Arc discharges at atmospheric pressure are self-sustained DC electric discharges with large electric currents, typically higher than 1 A, in some cases reaching up to 100,000 A, and relatively low voltages, typically of the order of 10 – 100 V. Due to high collision frequencies of plasma species, atmospheric pressure arcs are in thermal equilibrium, having temperatures of the order of 6,000 – 12,000 °C. Most of the arc volume is electrically neutral except for thin anode and cathode layers where strong electric fields are present. These typically collision-less layers have voltage drops of about 10 – 20 V.
Ions, which are produced within the cathode layer, accelerate in this voltage and impact the cathode surface with high energies. This process heats the cathode, stimulating thermal electron emission, which sustains the high discharge currents. On the cathode surface the electric currents concentrate at fast-moving spots with sizes of 1 – 100 μm. Within these spots, the cathode material reaches local temperatures of 3000 °C, leading to its evaporation and slow cathode erosion. Pulsed atmospheric arc technology improves the arc stability at low electric currents, maximizes the discharge volume, and together with it the production of reactive species for plasma activation, while at the same time reducing the size of the driving high voltage electronics. These factors make it economically very attractive for industrial applications. Types of plasmas used for surface activation: There are two ways of using electric arcs for surface activation: non-transferred and transferred electric arcs. In the non-transferred technique, both electrodes are part of the plasma source. One of them also acts as a gas nozzle producing a stream of plasma. After the plasma stream leaves the arc region, the ions recombine quickly, leaving a hot gas with high concentrations of chemically active hydrogen, nitrogen and oxygen atoms and compounds, which is also called remote plasma. The temperature of this gas stream is of the order of 200 – 500 °C. The gas is very reactive, allowing high surface treatment speeds, as only a short-time contact with the substrate is sufficient to achieve the activation effect. This gas can activate all materials, including temperature-sensitive plastics. Moreover, it is electrically neutral and free from electric potentials, which is important for activation of sensitive electronics. Types of plasmas used for surface activation: In the transferred technique of using the electric arcs, the substrate itself plays the role of the cathode. In this case, the substrate is subject not only to the reactive chemical species, but also to their ions with energies of up to 10 – 20 eV, to high temperatures reaching 3000 °C within the cathode spots, and to UV light. These additional factors lead to even greater activation speeds. This treatment method is suitable for conductive substrates such as metals. It reduces metal oxides by their reactions with hydrogen species and leaves the surface free from organic contaminants. Moreover, the fast-moving multiple cathode spots create a microstructure on the substrate, improving mechanical binding of the adhesive. Types of plasmas used for surface activation: Corona discharge. Corona discharges appear at atmospheric pressures in strongly non-uniform electric fields. Sharp edges of high voltage electrodes produce such fields in their vicinity. When the field in the rest of the space is negligible – this happens at large distances to the electric grounds – the corona discharge can be ignited. Otherwise, the high voltage electrodes may spark to the ground. Depending on the polarity of the high voltage electrode one distinguishes the negative corona, formed around the cathode, and the positive corona, formed around the anode. Negative corona is similar to the Townsend discharge, where the electrons, emitted by the cathode, accelerate in the electric field, ionize the gas in collisions with its atoms and molecules, releasing more electrons and thus creating an avalanche. Secondary processes include electron emission from the cathode and photoionization within the gas volume.
Negative corona creates a uniform plasma glowing around the sharp edges of the electrodes. On the other hand, electrons initiating the avalanches in the positive corona are produced by the photoionization of the gas surrounding the high voltage anode. The photons are emitted in the more active region of the anode vicinity. Then the electron avalanches propagate towards the anode. The plasma of the positive corona consists of many constantly moving filaments. Types of plasmas used for surface activation: Corona discharges produce electric currents of the order of 1 – 100 μA at high voltages of the order of several kV. These currents and the corresponding discharge power are low compared to the currents and the power of the arc and the dielectric barrier discharges. However, the advantage of the corona discharge is the simplicity of the DC high voltage electronics. While electric sparks limit the high voltage, and thus the corona power, the latter can be further increased with the help of pulse-periodic high voltages. However, this complicates the high voltage system. Types of plasmas used for surface activation: Dielectric barrier discharge. Dielectric barrier discharge occurs between two electrodes separated by a dielectric. Due to the presence of the dielectric barrier, such plasma sources operate only with sine-wave or pulsed high voltages. The physical principles of the discharge do not limit the operating frequency range. The typical frequencies of commonly used solid-state high voltage supplies are 0.05 – 500 kHz. Voltage amplitudes of the order of 5 – 20 kV produce electric currents in the range of 10 – 100 mA. The power of the dielectric barrier discharge is significantly higher than that of the corona discharge, but smaller compared to the arc discharge. The discharge generally consists of multiple micro-discharges, although in some cases uniform discharges are created too. To increase the uniformity and the discharge gap in the case of VBDB, a pre-ionization system can be used. Other types of DBD used for functionalization are plasma jets. The processed area is smaller compared with the surface or volume DBD discharges. Micro plasma jets produced in capillary tubes with a tip diameter of less than 1 μm are ultrafine atmospheric pressure plasma jets and have proved to be great tools in micro-size processing and functionalization of materials such as carbon nanotubes or polymers. Types of plasmas used for surface activation: Piezoelectric direct discharge. Piezoelectric direct discharge can be considered as a special technical realization of the dielectric barrier discharge, which combines the alternating current high voltage generator, high voltage electrode and the dielectric barrier into a single element. Namely, the high voltage is generated with a piezo-transformer, the secondary circuit of which acts also as the high voltage electrode. Since the piezoelectric material of the transformer, such as lead zirconate titanate, is often a dielectric, the produced electric discharge resembles the properties of the dielectric barrier discharge. In addition, when operated far from the electric ground, it also produces corona discharges on the sharp edges of the piezo-transformer. Types of plasmas used for surface activation: Due to its unique construction principles, the piezoelectric barrier discharge is an economical and compact source of dielectric barrier and corona plasmas.
Although its power is limited to about 10 W per unit, the low cost and small size of the units allow construction of large arrays optimized for particular applications. Further types of plasmas. Plasmas suitable for surface activation have also been created using inductive heating at RF and microwave frequencies, spark discharges, resistive barrier discharges and various types of micro-discharges. Physical and chemical activation mechanisms: The goal of the plasma generators is to convert the electric energy into the energy of charged and neutral particles – electrons, ions, atoms and molecules – which then produce large quantities of chemical compounds of hydrogen, nitrogen and oxygen, in particular short-lived highly reactive species. Bombardment of the substrate with all constituent plasma species cleans and chemically activates the surface. In addition, at the contact points of discharge filaments the surface can locally reach high temperatures. This modifies the topography of the surface, improving mechanical binding of the adhesive. Physical and chemical activation mechanisms: Processes within the plasma volume. At atmospheric pressure, the high collision frequency between the electrons and the gas molecules precludes the electrons from reaching high energies. Typical electron energies are of the order of 1 eV, except in the electrode layers of 10 – 30 μm thickness, where they can reach 10 – 20 eV. Due to the low electric currents of individual filaments in corona and dielectric barrier discharges, the gas present within the discharge volume does not reach thermal equilibrium with the electrons and remains cold. Its temperature typically rises by only a few tens of degrees Celsius above room temperature. On the other hand, due to the high electric currents of the arc discharge, the whole arc volume thermally equilibrates with the electrons, reaching temperatures of 6,000 – 12,000 °C. However, after leaving the arc volume, this gas quickly cools down to a few hundred degrees Celsius before it contacts the substrate. Physical and chemical activation mechanisms: Although it is not correct to speak of temperatures of non-equilibrium electron and ion gases, the temperature concept is illustrative of the physical conditions of the discharges, as the temperature defines the average energy of the particles. The average electron energy of 1 eV, realized typically within the plasma volume, is equal to the average electron energy at a temperature of 10,000 °C. In the thin cathode and anode layers, the ions and the electrons reach average energies up to 10 times higher, corresponding to temperatures of 100,000 °C. At the same time, the molecular gas can remain cold. Physical and chemical activation mechanisms: Due to the high electron-ion and electron-molecule collision energies, the plasma volume acts as an efficient chemical reactor enabling fast production of chemical compounds of hydrogen, nitrogen and oxygen. Among those, the short-lived highly reactive species are the main agents of the plasma activation of surfaces. They include atomic H, N and O species, OH and ON radicals, ozone, nitrous and nitric acids, as well as various other molecules in metastable excited states. Moreover, when the discharge directly contacts the substrate, the ions of these species as well as the electrons, both having high energies, bombard the surface.
Physical and chemical activation mechanisms: Surface processes. The plasma of the atmospheric discharges, or its product gas rich in highly reactive chemical species, initiates a multitude of physical and chemical processes upon contact with the surface. It efficiently removes organic surface contaminants, reduces metal oxides, creates a mechanical microstructure on the surface and deposits functional chemical groups. All of these effects can be adjusted by selecting discharge types, their parameters and the working gas. The following processes result in surface activation: Ultra-fine cleaning. Reactive chemical species efficiently oxidize organic surface contaminants, converting them into carbon dioxide and water, which evaporate from the surface, leaving it in an ultra-fine clean state. Physical and chemical activation mechanisms: Removal of weak boundary layers. Plasma removes surface layers with the lowest molecular weight, while at the same time oxidising the uppermost atomic layer of the polymer. Cross-linking of surface molecules. Oxygen radicals (and UV radiation, if present) help break up bonds and promote the three-dimensional cross-bonding of molecules. Physical and chemical activation mechanisms: Reduction of metal oxides. Plasma discharges, ignited in forming gas, typically containing 5% hydrogen and 95% nitrogen, produce large quantities of reactive hydrogen species. On contact with oxidized metal surfaces, these react with the metal oxides, reducing them to metal atoms and water. This process is particularly efficient in electric arcs burning directly on the substrate surface. It leaves the surface clean of oxides and contaminants. Physical and chemical activation mechanisms: Modification of the surface topography. Electric discharges having direct contact with the substrate erode the substrate surface on the micrometer scale. This creates microstructures that are filled by the adhesives due to capillary action, improving the mechanical binding of the adhesive. Physical and chemical activation mechanisms: Deposition of functional chemical groups. Short-lived chemical species produced within the plasma volume, as well as the ions produced within the thin layer where the discharge contacts the surface, bombard the substrate, initiating a number of chemical reactions. Reactions depositing functional chemical groups onto the substrate surface are in many cases the most important mechanism of plasma activation. In the case of plastics, which usually have low surface energy, polar OH and ON groups significantly increase the surface energy, improving the surface wettability by the adhesives. In particular, this increases the strength of the dispersive adhesion. Moreover, by employing specialized working gases, which produce chemical species that can form strong chemical bonds with both the substrate surface and the adhesive, one can achieve very strong bonding between chemically dissimilar materials. The balance of the chemical reactions on the substrate surface depends on the plasma gas composition, the velocity of the gas flow, as well as the temperature. The effect of the latter two factors depends on the probability of the reaction. Here one distinguishes two regimes. In the diffusion regime, with a high reaction probability, the speed of the reaction depends on the velocity of the gas flow, but does not depend on the gas temperature.
In the other, kinetic regime, with a low reaction probability, the speed of the reaction depends strongly on the gas temperature according to the Arrhenius equation. Surface characterization methods: One of the main objectives of the plasma activation is to increase the surface energy. The latter is characterized by the wettability of the surface—the ability of the liquid to cover the surface. There are several methods to assess the wettability of the surface: In the wetting tension test, several liquids of different surface energies are applied to the surface. The liquid with the lowest surface energy, which wets the tested surface, defines the surface energy of the latter. Surface characterization methods: A drop of liquid with known surface energy, e.g. distilled water, is applied to the tested surface. The contact angle of the liquid drop surface, with respect to the substrate surface, determines the substrate surface energy. A defined amount of distilled water is spilled on the surface. The area covered by the water determines the surface energy. A drop of distilled water is placed on the surface, which is being tilted. The maximum tilt angle of the surface with respect to the horizontal plane, at which the drop is still held in place, determines the surface energy.
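The contact-angle methods listed above rest on the standard Young relation between the interfacial energies meeting at the three-phase contact line; the relation is not stated in the text, so it is added here only for reference.

```latex
% Young's equation: balance of interfacial tensions at the contact line
\gamma_{\mathrm{SV}} \;=\; \gamma_{\mathrm{SL}} + \gamma_{\mathrm{LV}}\cos\theta
```

A higher solid surface energy drives cos θ toward 1, i.e. a smaller contact angle and better wetting, which is why successful plasma activation shows up as a drop in the measured contact angle.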
**Triangular tiling** Triangular tiling: In geometry, the triangular tiling or triangular tessellation is one of the three regular tilings of the Euclidean plane, and is the only such tiling where the constituent shapes are not parallelogons. Because the internal angle of the equilateral triangle is 60 degrees, six triangles at a point occupy a full 360 degrees. The triangular tiling has Schläfli symbol {3,6}. Triangular tiling: English mathematician John Conway called it a deltille, named from the triangular shape of the Greek letter delta (Δ). The triangular tiling can also be called a kishextille by a kis operation that adds a center point and triangles to replace the faces of a hextille. It is one of three regular tilings of the plane. The other two are the square tiling and the hexagonal tiling. Uniform colorings: There are 9 distinct uniform colorings of a triangular tiling. (Naming the colors by indices on the 6 triangles around a vertex: 111111, 111112, 111212, 111213, 111222, 112122, 121212, 121213, 121314.) Three of them can be derived from others by repeating colors: 111212 and 111112 from 121213 by combining 1 and 3, while 111213 is reduced from 121314. There is one class of Archimedean colorings, 111112 (marked with a *), which is not 1-uniform, containing alternate rows of triangles where every third is colored. The example shown is 2-uniform, but there are infinitely many such Archimedean colorings that can be created by arbitrary horizontal shifts of the rows. A2 lattice and circle packings: The vertex arrangement of the triangular tiling is called an A2 lattice. It is the 2-dimensional case of a simplectic honeycomb. The A*2 lattice (also called A32) can be constructed as the union of all three A2 lattices, and is equivalent to the A2 lattice. A2 lattice and circle packings: The vertices of the triangular tiling are the centers of the densest possible circle packing. Every circle is in contact with 6 other circles in the packing (kissing number). The packing density is π/√12, or 90.69%. The Voronoi cell of a triangular tiling is a hexagon, and so the Voronoi tessellation, the hexagonal tiling, has a direct correspondence to the circle packings. Geometric variations: Triangular tilings can be made with the equivalent {3,6} topology as the regular tiling (6 triangles around every vertex). With identical faces (face-transitivity) and vertex-transitivity, there are 5 variations. Symmetry given assumes all faces are the same color. Related polyhedra and tilings: The planar tilings are related to polyhedra. Putting fewer triangles on a vertex leaves a gap and allows it to be folded into a pyramid. These can be expanded to Platonic solids: five, four and three triangles on a vertex define an icosahedron, octahedron, and tetrahedron respectively. This tiling is topologically related as part of a sequence of regular polyhedra with Schläfli symbols {3,n}, continuing into the hyperbolic plane. It is also topologically related as part of a sequence of Catalan solids with face configuration Vn.6.6, also continuing into the hyperbolic plane. Wythoff constructions from hexagonal and triangular tilings: Like the uniform polyhedra, there are eight uniform tilings that can be based on the regular hexagonal tiling (or the dual triangular tiling). Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms, 7 of which are topologically distinct. 
(The truncated triangular tiling is topologically identical to the hexagonal tiling.) Related regular complex apeirogons: There are 4 regular complex apeirogons sharing the vertices of the triangular tiling. Regular complex apeirogons have vertices and edges, where edges can contain 2 or more vertices. Regular apeirogons p{q}r are constrained by: 1/p + 2/q + 1/r = 1. Edges have p vertices, and vertex figures are r-gonal. The first is made of 2-edges, the next two have triangular edges, and the last has overlapping hexagonal edges. Related regular complex apeirogons: Other triangular tilings: There are also three Laves tilings made of a single type of triangle.
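The packing density quoted above, π/√12 ≈ 90.69%, follows from a short calculation on a single triangle of the tiling: three mutually tangent circles of radius r have their centers at the corners of an equilateral triangle of side 2r, and each corner contributes a 60° circular sector.

```latex
% Fraction of an equilateral triangle of side 2r covered by three
% 60-degree sectors of radius r placed at its corners
\eta \;=\; \frac{3\cdot\frac{60^{\circ}}{360^{\circ}}\,\pi r^{2}}{\sqrt{3}\,r^{2}}
     \;=\; \frac{\pi}{2\sqrt{3}}
     \;=\; \frac{\pi}{\sqrt{12}}
     \;\approx\; 0.9069
```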
**Interstitial television show** Interstitial television show: In television programming, an interstitial television show (or wraparound program or wraparound segment) refers to a short program that is often shown between movies or other events, e.g. cast interviews after movies on premium channels. The term can also refer to a narrative bridge between segments within a program, such as the live action introductions to the animated segments in the Disney films Fantasia and Fantasia 2000, or the Simpson family's interludes during their annual Treehouse of Horror episodes. Interstitial television show: Sometimes, if a program finishes earlier than expected, a short extra program may be inserted in the schedule to fill the time until the next scheduled program is due to start. American cable channel TBS commonly aired TV's Bloopers & Practical Jokes after shorter-than-average Braves games. Interstitial television show: For U.S. telecasts of the film The Wizard of Oz between 1959 and 1968, celebrity hosts appeared in wraparound segments. Opening credits especially designed by the network were shown in CBS's own format, followed by the host's first appearance, in which he made comments (often humorous, though never derogatory) about the film. Immediately following this, and without a commercial pause, the film itself would begin with all of its original 1939 opening credits. Halfway through the picture, the host would reappear and introduce the second half of the film. When the film ended, however, its closing credits would not be shown in their original format. Instead, the host would appear once more, bid farewell to the viewing audience, and the closing credits would be shown in CBS's own format. Interstitial television show: Among the notable interstitial programs shown between or during Saturday morning cartoons in the United States were In the News, shown on CBS starting in 1971, and Schoolhouse Rock!, shown on ABC starting in 1973.Raidió Teilifís Éireann in the Republic of Ireland used a variety of material as interstitials; often animation, including Roger Mainwood's video of Kraftwerk's hit "Autobahn", Halas and Batchelor shorts, and stop-motion Soviet cartoons; also rhythmic gymnastics performances, instrumental music, or sometimes simply a test card. Interstitial television show: Japanese public broadcasting organization NHK's Minna no Uta is something of a national institution, commissioning makers of usually animated films and famous or upcoming music acts to collaborate on exclusive music videos used to plug schedule gaps in lieu of advertisements. Interstitial television show: In Canada, short film series such as Canada Vignettes, Hinterland Who's Who, and Heritage Minutes were often used on CBC Television and other broadcasters.In Australia, it is common for the Australia Broadcasting corporation (ABC) to play these, as the ABC is government-funded and doesn't need as much time for commercial breaks. This means that TV shows made for commercial networks finish earlier and not on the hour.
**C2 (classification)** C2 (classification): In Paralympic sports, C2 is a para-cycling classification. The UCI recommends this be coded as MC2 or WC2. Definition: PBS defined this classification as "Riders with upper or lower limb impairments and moderate to severe neurological disfunction [sic?]." The Telegraph defined this classification in 2011 as "C 1–5: Athletes with cerebral palsy, limb impairments and amputitions [sic?]." Classification history: Cycling first became a Paralympic sport at the 1988 Summer Paralympics.In September 2006, governance for para-cycling passed from the International Paralympic Committee's International Cycling Committee to UCI at a meeting in Switzerland. When this happened, the responsibility of classifying the sport also changed. At the Paralympic Games: For the 2016 Summer Paralympics in Rio, the International Paralympic Committee had a zero classification at the Games policy. This policy was put into place in 2014, with the goal of avoiding last minute changes in classes that would negatively impact athlete training preparations. All competitors needed to be internationally classified with their classification status confirmed prior to the Games, with exceptions to this policy being dealt with on a case-by-case basis. Becoming classified: Classification is handled by Union Cycliste Internationale. Classification for the UCI Para-Cycling World Championships is completed by at least two classification panels. Members of the classification panel must not have a relationship with the cyclist and must not be involved in the World Championships in any other role than as classifier. In national competitions, the classification is handled by the national cycling federation. Classification often has three components: physical, technical and observation assessment. Rankings: This classification has UCI rankings for elite competitors.
**Accelerated testing of adhesives** Accelerated testing of adhesives: Accelerated testing of adhesives is used to predict the long-term performance of adhesives exposed to a variety of environmental factors. Adhesives are sometimes used in load-bearing and sealing joints, which puts great stress on them. In accelerated testing, factors such as temperature, moisture, vibration, voltage, and UV light are greatly increased over a short period so that long-term predictions can be made about the effect of the aforementioned factors. Adhesive failure prediction: Accelerated testing may induce reaction kinetics that are not applicable to the actual service environment of an adhesive, which could cause greater concern than is necessary for certain adhesives. High temperatures are often avoided because they frequently cause new reactions to occur. Adhesive stability: Adhesives commonly react with oxygen at low temperatures, which leads to a slow breakdown of polymer chains. The breakdown of polymer chains is often undetectable until the adhesive has reached a critical point, after which the stability of the remainder of the adhesive rapidly degrades. High-temperature accelerated testing often cannot be used to estimate stability in oxygen environments, since high temperatures often lead to new reaction pathways that would not typically exist at the temperatures at which the adhesive is actually used. Moisture sensitivity accelerated tests involve either increased temperatures or increased surface area of a sample. The surface area of a sample is increased by applying the adhesive to a single surface, rather than placing it between two surfaces, and placing the sample in a water bath. Chemiluminescence: When polymers are oxidized, unstable alkyl radicals are formed which react further with oxygen to form peroxy bonds. The excitation and stabilization of the peroxy radical causes chemiluminescence. The light produced by this reaction is typically low-wavelength infrared light. The amount of light emitted is used to determine the oxidation rate of an adhesive. Chemiluminescence (CL) light intensity can be measured over various isothermal oxidation cycles; the temperature need not be raised to high levels. The light intensity is correlated with oxidation process parameters such as the oxidation induction temperature (OIT). By obtaining measurements at different temperatures, an accelerated oxidation progression correlation can be established, and the oxidation expected during service life can then be predicted.
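The article above describes correlating chemiluminescence measurements taken at several temperatures in order to predict oxidation over the service life, without naming a specific model. Such extrapolations are commonly done with an Arrhenius rate law; the Python sketch below is only an illustration of that approach, and the activation energy and temperatures in it are placeholder values, not data from the article.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def acceleration_factor(ea_j_per_mol: float, t_service_c: float, t_test_c: float) -> float:
    """Arrhenius acceleration factor between a service and a test temperature.

    Assumes the oxidation rate follows k(T) = A * exp(-Ea / (R * T)), so the
    ratio of the rate at the test temperature to the rate at the service
    temperature is exp(Ea/R * (1/T_service - 1/T_test)).
    """
    t_service = t_service_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp(ea_j_per_mol / R * (1.0 / t_service - 1.0 / t_test))

# Hypothetical numbers for illustration only: 80 kJ/mol apparent activation
# energy, 25 degC service temperature, 90 degC accelerated-ageing temperature.
af = acceleration_factor(80e3, 25.0, 90.0)
print(f"one hour at 90 degC ~ {af:.0f} hours at 25 degC")
```

Under these made-up inputs one hour of ageing at the test temperature stands in for a few hundred hours at the service temperature; the realism of any such factor depends entirely on the measured activation energy and on the caveat above that high temperatures can open new reaction pathways.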
**COM Structured Storage** COM Structured Storage: COM Structured Storage (variously also known as COM structured storage or OLE structured storage) is a technology developed by Microsoft as part of its Windows operating system for storing hierarchical data within a single file. Strictly speaking, the term structured storage refers to a set of COM interfaces that a conforming implementation must provide, and not to a specific implementation, nor to a specific file format (in fact, a structured storage implementation need not store its data in a file at all). In addition to providing a hierarchical structure for data, structured storage may also provide a limited form of transactional support for data access. Microsoft provides an implementation that supports transactions, as well as one that does not (called simple-mode storage, the latter implementation is limited in other ways as well, although it performs better). COM Structured Storage: Structured storage is widely used in Microsoft Office applications, although newer releases (starting with Office 2007) use the XML-based Office Open XML by default. It is also an important part of both COM and the related Object Linking and Embedding (OLE) technologies. Other notable applications of structured storage include SQL Server, the Windows shell, and many third-party CAD programs. Motivation: Structured storage addresses some inherent difficulties of storing multiple data objects within a single file. One difficulty arises when an object persisted in the file changes in size due to an update. If the application that is reading/writing the file expects the objects in the file to remain in a certain order, everything following that object's representation in the file may need to be shifted backward to make room if the object grows, or forward to fill in the space left over if the object shrinks. If the file is large, this could result in a costly operation. Of course, there are many possible solutions to this difficulty, but often the application programmer does not want to deal with low level details such as binary file formats. Motivation: Structured storage provides an abstraction known as a stream, represented by the interface IStream. A stream is conceptually very similar to a file, and the IStream interface provides methods for reading and writing similar to file input/output. A stream could reside in memory, within a file, within another stream, etc., depending on the implementation. Another important abstraction is that of a storage, represented by the interface IStorage. A storage is conceptually very similar to a directory on a file system. Storages can contain streams, as well as other storages. Motivation: If an application wishes to persist several data objects to a file, one way to do so would be to open an IStorage that represents the contents of that file and save each of the objects within a single IStream. One way to accomplish the latter is through the standard COM interface IPersistStream. OLE depends heavily on this model to embed objects within documents. Format: Microsoft's implementation uses a file format known as compound files, and all of the widely deployed structured storage implementations read and write this format. Compound files use a FAT-like structure to represent storages and streams. Chunks of the file, known as sectors (these may or may not correspond to sectors of the underlying file system), are allocated as needed to add new streams and to increase the size of existing streams. 
If streams are deleted or shrink, leaving unallocated sectors, those sectors can be reused for new streams. Format: The following applications use the OLE Structured Storage (Compound Document Format) Microsoft Office 97–2003 documents: Word documents (.DOC, .DOT) Excel spreadsheets (.XLS, .XLT) PowerPoint presentations (.PPT, .POT) Publisher files (.PUB) Visio files (.VSD) Project files (.MPP) Microsoft PhotoDraw files (.MIX) Microsoft Outlook files (.MSG) Windows Installer files (.MSI, .MSP, .MST) Microsoft Picture It! / Microsoft Digital Image files (.MIX) Internet Explorer RSS Feeds Windows RSS Platform files (.feed-ms) Windows 7 StickyNotes (.SNT) Windows 7 jumplists files Thumbs.db Microsoft SQL 2000 Server DTS packages Autodesk Revit Autodesk Inventor FlashPix Altium Designer Native Structured Storage: During the beta testing phase of Windows 2000, it included a feature titled Native Structured Storage (NSS) for storage of Structured Storage documents (like the binary Microsoft Office formats and the thumbs.db file Windows Explorer uses to cache thumbnails) with each Stream that makes up a document stored in a separate NTFS data stream. It included utilities that automatically split up the streams in a regular Structured Storage document into NTFS data streams and vice versa. However, the feature was withdrawn after Beta 3 due to incompatibilities with other OS components, and any NSS files automatically converted to the single data stream format. Implementations: For Microsoft .NET: OpenMCDF – Free .NET component for accessing OLE structured storage files, MPL licensed. For Linux: GNOME Structured File Library – Can read Microsoft structured storage files. POLE. Cross platform C++ for Window/MacOSX/Linux: POLE v3 and up. For Java: POIFS – Java implementation of the OLE 2 Compound Document format, part of Apache POI. For Perl: LAOLA Binary Structures For JavaScript: js-cfb – JavaScript implementation of the OLE 2 Compound Document format. For Python: compoundfiles – Python implementation of the Microsoft Compound File Binary (CFB) format.
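To make the storage/stream hierarchy concrete, here is a minimal sketch that walks a compound file with the third-party Python package olefile, one of several readers for this format (it is not in the implementation list above, and the file name is hypothetical).

```python
# Minimal sketch: list the streams of a compound file and read one of them.
# Requires the third-party "olefile" package (pip install olefile).
import olefile

path = "example.doc"  # hypothetical input; any .doc/.xls/.msi/Thumbs.db works

if olefile.isOleFile(path):
    ole = olefile.OleFileIO(path)
    try:
        # Each entry is a list of path components: nested storages ending
        # in a stream, mirroring the IStorage/IStream model described above.
        for entry in ole.listdir():
            print("/".join(entry))
        # Streams are opened by name and expose a file-like read() method.
        if ole.exists("WordDocument"):
            data = ole.openstream("WordDocument").read()
            print(f"WordDocument stream: {len(data)} bytes")
    finally:
        ole.close()
else:
    print(f"{path} is not an OLE compound file")
```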
**Guard Mount** Guard Mount: Guard mount may refer to two things: The first is the actual forming (mounting) of a military security group called the Guard; the second is the bugle call which was used to signal the formation of the group. Military formation: Guard mount is a pre-shift official formation among designated United States Air Force Security Forces or United States Army Military Police Corps members. This meeting is usually held after all security personnel have "armed up" or received their weapons and equipment for the shift. During the formation, the senior NCO (non-commissioned officer) such as the flight leader (known as the flight chief) or the First Sergeant will discuss topics such as current unit, base or other service (air force or army) level news, current terrorism warnings or military intelligence, and weapons, vehicle and ground safety. Members also will be inspected and assessed on their appearance, uniform and checked for required equipment. When held in a deployed combat threat environment, guard mount is somewhat akin to US Army pre-combat checks and pre-combat inspections, PCC and PCI respectively. Bugle call: "Guard Mount" is a bugle call which sounds as a warning that the guard is about to be assembled for guard mount.
**Defensive fighting position** Defensive fighting position: A defensive fighting position (DFP) is a type of earthwork constructed in a military context, generally large enough to accommodate anything from one soldier to a fire team (or similar sized unit). Terminology: Tobruk type positions are named after the system of defensive positions constructed, initially, by the Italian Army at Tobruk, Libya. After Tobruk fell to the Allies in January 1941, the existing positions were modified and significantly expanded by the Australian Army which, along with other Allied forces, reused them in the Siege of Tobruk. Terminology: A foxhole is one type of defensive strategic position. It is a "small pit used for cover, usually for one or two personnel, and so constructed that the occupants can effectively fire from it".It is known more commonly within United States Army slang as a "fighting position" or as a "ranger grave". It is known as a "fighting hole" in the United States Marine Corps, a "gun-pit" in Australian Army terminology, and a "fighting pit" in the New Zealand Army. Terminology: In British and Canadian military argot it equates to a range of terms including slit trench, or fire trench (a trench deep enough for a soldier to stand in), a sangar (sandbagged fire position above ground) or shell scrape (a shallow depression that affords protection in the prone position), or simply—but less accurately—as a "trench". During the American Civil War the term "rifle pit" was recognized by both U.S. Army and Confederate Army forces. A protected emplacement or concealed post in which one or several machine guns are set up is known in U.S. English as a machine gun nest. History: During the fighting in North Africa (1942–43), U.S. forces employed the shell scrape. This was a very shallow excavation allowing one soldier to lie horizontally while shielding his body from nearby shell bursts and small arms fire. The shell scrape soon proved inadequate in this role, as the few inches of dirt above the soldier's body could often be penetrated by bullets or shell fragments. It also exposed the user to assault by enemy tanks, which could crush a soldier inside a shallow shell scrape by driving into it, then making a simple half-turn.After the Battle of Kasserine Pass (early 1943), U.S. troops increasingly adopted the modern foxhole, a vertical, bottle-shaped hole that allowed a soldier to stand and fight with head and shoulders exposed. The foxhole widened near the bottom to allow a soldier to crouch down while under intense artillery fire or tank attack. Foxholes could be enlarged to two-soldier fighting positions, as well as excavated with firing steps for crew-served weapons or sumps for water drainage or live enemy grenade disposal. History: Tobruks The Germans used hardened fortifications in North Africa and later in other fortifications, such as the Atlantic Wall, that were in essence foxholes made from concrete. The Germans knew them officially as Ringstände; the Allies called them "Tobruks" because they had first encountered the structures during the fighting in Africa.Frequently, the Germans put a turret from an obsolete French or German tank on the foxhole. This gave the Tobruk enhanced firepower and the gunner protection from shrapnel and small arms. Modern designs: Modern militaries publish and distribute elaborate field manuals for the proper construction of DFPs in stages. Initially, a shallow "shell scrape" is dug, often called a ranger grave, which provides very limited protection. 
Each stage develops the fighting position, gradually increasing its effectiveness, while always maintaining functionality. In this way, a soldier can improve the position over time, while being able to stop at any time and use the position in a fight. Modern designs: Typically, a DFP is a pit or trench dug deep enough to stand in, with only the head exposed, and a small step at the bottom, called a fire step, that allows the soldier to crouch on to avoid fire and tank treads. The fire step usually slopes down into a deeper narrow slit called a grenade sump at the bottom to allow for live grenades to be kicked in to minimize damage from grenade fragments. Modern designs: When possible, DFPs are revetted with corrugated iron, star pickets and wire or local substitutes. Ideally, the revetting will also be dug in below ground level so as to minimise damage from fire and tank tracks. The revetting helps the DFP resist cave-in from near misses from artillery or mortars and tank tracks. Time permitting, DFPs can be enlarged to allow a machine gun crew and ammunition to be protected, as well as additional overhead cover via timbers. In training, DFPs are usually dug by hand or in some cases by mechanical trench diggers. On operations, explosives, especially shaped charges ("beehives"), may be used to increase the speed of development. Developing and maintaining DFPs is a constant and ongoing task for soldiers deployed in combat areas. For this reason, in some armies, infantry soldiers are referred to as "gravel technicians", as they spend so much time digging. Modern designs: Because of the large expenditure in effort and materials required to build a DFP, it is important to ensure that the DFP is correctly sited. In order to site the DFP, the officer in charge ("OIC") should view the ground from the same level that the intended user's weapons will be sighted from. Normally, the OIC will need to lie on his belly to obtain the required perspective. This ensures that the position will be able to cover the desired sector.
**Quarter stick** Quarter stick: A quarter stick is a large firecracker that falls within a certain range of dimensions. Typically, a quarter stick consists of a thick walled cardboard tube containing approximately 1 oz (28 g) of pyrotechnic flash powder, with a short length of Visco fuse protruding from the side or end of the device. No true standard for dimensions and construction exists, as these devices are products of bootleg manufacturers. Quarter stick: The term quarter stick is based on a quarter-stick of dynamite, which it somewhat resembles. However, quarter stick firecrackers do not contain nitroglycerin as dynamite does, and have far less explosive power. In the United States, quarter sticks and similar large firecrackers are illegal to manufacture or possess without an ATF High Explosives Manufacturing License. They are sometimes colloquially known as M-1000s or "Block Busters". The smaller M100 is also known as a "Silver Salute". The M250 is also known as a "Pineapple".
**CARD11** CARD11: Caspase recruitment domain-containing protein 11, also known as CARD-containing MAGUK protein 1 (CARMA1), is a protein in the CARD-CC protein family that in humans is encoded by the CARD11 gene. CARD11 is a membrane-associated protein that is found in various human tissues, including the thymus, spleen, liver, and peripheral blood leukocytes. CARD11 is also found in abundance in various cancer cell lines. Function: The protein encoded by this gene belongs to the membrane-associated guanylate kinase (MAGUK) family, a class of proteins that function as molecular scaffolds for the assembly of multiprotein complexes at specialized regions of the plasma membrane. This protein is also a member of the CARD protein family, which is defined by carrying a characteristic caspase recruitment domain (CARD). CARD11 (CARMA1) has a domain structure similar to that of CARD10 (CARMA3) and CARD14 (CARMA2), as a member of the CARD-CC family with a C-terminal MAGUK domain (the so-called CARMA proteins). The CARD domain of proteins in the CARD-CC family has been shown to specifically interact with BCL10, a protein known to function as a positive regulator of NF-κB activation by recruitment and activation of MALT1. When overexpressed in cells, this protein family activates NF-κB and induces the phosphorylation of BCL10. CARD11 is critical for T cell and B cell function and is activated after T cell receptor or B cell receptor stimulation. After receptor stimulation, CARD11 is phosphorylated by PKC-θ (in T cells) or PKC-β (in B cells). The phosphorylation induces the formation of filamentous CARD11 multimers that recruit BCL10 and MALT1, which in turn activates NF-κB. Loss-of-function mutations in CARD11 cause severe combined immunodeficiency (SCID), since the function of cells critical for adaptive immunity is disrupted. Structure: The structure of CARD11 involves multiple domains that affect the protein's ability to activate BCL10 and NF-κB activity. CARD11 has a CARD domain and a serine-threonine-rich region associated with the N-terminus, which are essential for NF-κB signaling activity. The region following the CARD domain is highly coiled. Deleting the CARD domain prevents all NF-κB signaling activity. The CARD domain on CARD11 interacts with the CARD domain on BCL10 to initiate the signaling pathway. On the C-terminus of CARD11 there is the MAGUK domain, which is associated with the cell membrane. This domain is often referred to as the inhibitory domain. Protein kinase C activates CARD11 by phosphorylating serine residues within the inhibitory domain. Interactions: CARD11 has been shown to interact with BCL10. This interaction occurs between the CARD domain on BCL10 and the CARD domain on CARD11, and results in signal propagation and NF-κB activation.
**NicVAX** NicVAX: NicVAX is an experimental conjugate vaccine intended to reduce or eliminate physical dependence to nicotine. According to the U.S. National Institute of Drug Abuse, NicVAX can potentially be used to inoculate against nicotine addiction. This proprietary vaccine is being developed by Nabi Biopharmaceuticals of Rockville, MD. with the support from the U.S. National Institute on Drug Abuse. NicVAX consists of the hapten 3'-aminomethylnicotine which has been conjugated (attached) to Pseudomonas aeruginosa exotoxin A.Early trials of NicVax were promising; two successive phase III trials showed results no better than placebo, and a more recent study showed that the drug decreased subjects' cravings for cigarettes. Mechanism: Nicotine is a small molecule that, after inhalation into the lungs, quickly passes into the bloodstream, subsequently crossing the blood–brain barrier. Once in the brain, it binds to specific nicotine receptors, resulting in the release of neurotransmitters, such as dopamine and norepinephrine. Mechanism: NicVAX is a relapse prevention therapy designed to stimulate the immune system to produce antibodies that bind to nicotine in the bloodstream and prevent and/or slow it from crossing the blood–brain barrier and entering the brain. With a reduced amount of nicotine reaching the brain, neurotransmitter release is greatly lessened and the pleasurable, positive-reinforcing effects of nicotine are diminished. Pre-clinical studies with the vaccine have shown that vaccination slows and decreases the amount of nicotine that reaches the brain and blocks the effects of nicotine, including effects that can reinforce and maintain addiction in animals. Therefore, if a recently quit tobacco smoker is vaccinated and has a tobacco cigarette after the immunization series is completed, the antibodies generated by the vaccine bind nicotine and alter its distribution into the brain. Because not enough nicotine enters the brain, addiction-relevant neural pathways are not activated. No pleasure is derived from the tobacco cigarette and the vaccinated subject does not relapse and begin smoking again. Mechanism: NicVAX is administered by injection into the arm; the 3'-aminomethylnicotine molecule found in the vaccine instigates an immune response in which 'nicotine' antibodies are created. The antibodies bind to nicotine molecules, causing the nicotine-antibody complexes to be too large to enter the brain; this prevents nicotine from being able to affect addiction-relevant pathways in the brain. The idea behind the drug is that since often even a single cigarette can deliver enough nicotine to the brain to reinstate the addiction, blocking the entry of nicotine into the brain might prevent this renewed dependence. This treatment works for nicotine addiction from any source. Clinical Pharmacology & Therapeutics considers this method "attractive," since the antibody does not enter the brain; as a result, side effects on the central nervous system are minimal, if any. Additionally, the antibodies produced bind only to nicotine and not to nicotine metabolites or any similar endogenous structures such as acetylcholine.However, while this drug may curtail addiction, it does not prevent psychological cravings; a user could potentially smoke heavily to compensate for the nullifying effects of NicVAX. However, a study performed indicated that this did not happen among the test subjects. 
Studies: Initial tests, involving injections of nicotine-specific immunoglobulin G into laboratory rats in the early 2000s, resulted in nicotine levels in the brain cut by up to 65%. Studies: Phase II trials An early study in 2005 for 38 weeks by the University of Minnesota Cancer Center's Transdisciplinary Tobacco Use Research Center and published in Clinical Pharmacology & Therapeutics, involved 68 smokers, none of whom had known health problems or intended to quit smoking in the next month. Subjects were not instructed to quit smoking during the study. The test subjects were injected four times throughout the trial: when it began, and after four, eight and 26 weeks with either one of three dosages of NicVAX or a placebo. At the conclusion of the study, it was concluded that NicVAX was "safe and well tolerated", with side effects including headaches, colds, and upper respiratory tract infections. While most of the test subjects continued to smoke, six people from the high dosage group, one person from the medium dosage group, no one from the low dosage group, and two people from the placebo group quit smoking. They did not start again for at least thirty days.In a further Phase IIb trial, a statistically significant number of patients with a high anti-nicotine antibody response met the primary endpoint of eight weeks of continuous abstinence between weeks 19–26. The top 30% of antibody responders (61 of the total 201 patients receiving drug) were examined in detail. A statistically significant number of these patients, (24.6%; p=0.04) showed continuous abstinence between weeks 19-26 compared to only 13.0% for the 100 patients receiving placebo. The quit rate of those patients who did not have a high antibody response was not statistically significant from placebo. The trial enrolled a total of 301 heavy smokers who smoked an average of 24 cigarettes per day prior to enrollment. Nabi issued a press release indicating that phase IIB testing was a success, showing statistically significant rates of smoking cessation and continuous long-term smoking abstinence during the trial period. Studies: Phase III trials Nabi Biopharmaceuticals conducted two Phase III trials of the drug. The first started in November 2009 and the second in March 2010. Nabi issued press releases announcing the start of these trials. In July 2011 it was announced that the first of two planned phase III trials for NicVAX failed, sending the market capitalization of NABI Biopharmaceuticals to below the value of its cash holdings. In November 2011, NABI announced that NicVAX had failed the second phase III clinical trial, performing no better than a placebo.
**Fundamental attribution error** Fundamental attribution error: In social psychology, fundamental attribution error, also known as correspondence bias or attribution effect, is a cognitive attribution bias where observers underemphasize situational and environmental factors for the behavior of an actor while overemphasizing dispositional or personality factors. In other words, observers tend to overattribute the behaviors of others to their personality (e.g., he is late because he's selfish) and underattribute them to the situation or context (e.g. he is late because he got stuck in traffic). Although personality traits and predispositions are considered to be observable facts in psychology, the fundamental attribution error is an error because it misinterprets their effects. Origin: Etymology The phrase was coined by Lee Ross 10 years after an experiment by Edward E. Jones and Victor Harris in 1967. Ross argued in a popular paper that the fundamental attribution error forms the conceptual bedrock for the field of social psychology. Jones wrote that he found Ross's phrase "overly provocative and somewhat misleading", and also joked: "Furthermore, I'm angry that I didn't think of it first." Some psychologists, including Daniel Gilbert, have used the phrase "correspondence bias" for the fundamental attribution error. Other psychologists have argued that the fundamental attribution error and correspondence bias are related but independent phenomena, with the former being a common explanation for the latter. Origin: 1967 demonstration study Jones and Harris hypothesized, based on the correspondent inference theory, that people would attribute apparently freely chosen behaviors to disposition and apparently chance-directed behaviors to situation. The hypothesis was confounded by the fundamental attribution error.Subjects in an experiment read essays for and against Fidel Castro. Then they were asked to rate the pro-Castro attitudes of the writers. When the subjects believed that the writers freely chose positions for or against Castro, they would normally rate the people who liked Castro as having a more positive attitude towards Castro. However, contradicting Jones and Harris' initial hypothesis, when the subjects were told that the writers' positions were determined by a coin toss, they still rated writers who spoke in favor of Castro as having, on average, a more positive attitude towards Castro than those who spoke against him. In other words, the subjects were unable to properly see the influence of the situational constraints placed upon the writers; they could not refrain from attributing sincere belief to the writers. The experimental group provided more internal attributions towards the writer. Criticism: The hypothesis that people systematically overattribute behavior to traits (at least for other people's behavior) is contested. A 1986 study tested whether subjects over-, under-, or correctly estimated the empirical correlation among behaviors (i.e., traits, see trait theory). They found that estimates of correlations among behaviors correlated strongly with empirically-observed correlations among these behaviors. Subjects were sensitive to even very small correlations, and their confidence in the association tracked how far they were discrepant (i.e., if they knew when they did not know), and was higher for the strongest relations. Subjects also showed awareness of the effect of aggregation over occasions and used reasonable strategies to arrive at decisions. 
Epstein concluded that "Far from being inveterate trait believers, as has been previously suggested, [subjects'] intuitions paralleled psychometric principles in several important respects when assessing relations between real-life behaviors."A 2006 meta-analysis found little support for a related bias, the actor–observer asymmetry, in which people attribute their own behavior more to the environment, but others' behavior to individual attributes. The implications for the fundamental attribution error, the author explained, were mixed. He explained that the fundamental attribution error has two versions: Observers make person-focused attributions more than environmental attributions for actor behavior; Observers will mistakenly overestimate the influence of personal factors on actor behavior.The meta-analysis concluded that existing weight of evidence does not support the first form of the fundamental attribution error, but does support the second. Explanations: Several theories predict the fundamental attribution error, and thus both compete to explain it, and can be falsified if it does not occur. Some examples include: Just-world fallacy. The belief that people get what they deserve and deserve what they get, the concept of which was first theorized by Melvin J. Lerner in 1977. Attributing failures to dispositional causes rather than situational causes—which are unchangeable and uncontrollable—satisfies our need to believe that the world is fair and that we have control over our lives. We are motivated to see a just world because this reduces our perceived threats, gives us a sense of security, helps us find meaning in difficult and unsettling circumstances, and benefits us psychologically. However, the just-world hypothesis also results in a tendency for people to blame and disparage victims of an accident or a tragedy, such as rape and domestic abuse, to reassure themselves of their insusceptibility to such events. People may even blame the victim's faults in a "past life" to pursue justification for their bad outcome. Explanations: Salience of the actor. We tend to attribute an observed effect to potential causes that capture our attention. When we observe other people, the person is the primary reference point while the situation is overlooked as if it is nothing but mere background. As such, attributions for others' behavior are more likely to focus on the person we see, not the situational forces acting upon that person that we may not be aware of. (When we observe ourselves, we are more aware of the forces acting upon us. Such a differential inward versus outward orientation accounts for the actor–observer bias.) Lack of effortful adjustment. Sometimes, even though we are aware that the person's behavior is constrained by situational factors, we still commit the fundamental attribution error. This is because we do not take into account behavioral and situational information simultaneously to characterize the dispositions of the actor. Initially, we use the observed behavior to characterize the person by automaticity. We need to make deliberate and conscious effort to adjust our inference by considering the situational constraints. Therefore, when situational information is not sufficiently taken into account for adjustment, the uncorrected dispositional inference creates the fundamental attribution error. This would also explain why people commit the fundamental attribution error to a greater degree when they're under cognitive load; i.e. 
when they have less motivation or energy for processing the situational information. Explanations: Culture. It has been suggested cultural differences occur in attribution error: people from individualistic (Western) cultures are reportedly more prone to the error while people from collectivistic cultures are less prone. Based on cartoon-figure presentations to Japanese and American subjects, it has been suggested that collectivist subjects may be more influenced by information from context (for instance being influenced more by surrounding faces in judging facial expressions). Alternatively, individualist subjects may favor processing of focal objects, rather than contexts. Others suggest Western individualism is associated with viewing both oneself and others as independent agents, therefore focusing more on individuals rather than contextual details. Versus correspondence bias: The fundamental attribution error is commonly used interchangeably with "correspondence bias" (sometimes called "correspondence inference"), although this phrase refers to a judgment which does not necessarily constitute a bias, which arises when the inference drawn is incorrect, e.g. dispositional inference when the actual cause is situational). However, there has been debate about whether the two terms should be distinguished from each other. Three main differences between these two judgmental processes have been argued: They seem to be elicited under different circumstances, as both correspondent dispositional inferences and situational inferences can be elicited spontaneously. Attributional processing, however, seems to only occur when the event is unexpected or conflicting with prior expectations. This notion is supported by a 1994 study, which found that different types of verbs invited different inferences and attributions. Correspondence inferences were invited to a greater degree by interpretative action verbs (such as "to help") than state action or state verbs, thus suggesting that the two are produced under different circumstances. Versus correspondence bias: Correspondence inferences and causal attributions also differ in automaticity. Inferences can occur spontaneously if the behavior implies a situational or dispositional inference, while causal attributions occur much more slowly. Versus correspondence bias: It has also been suggested that correspondence inferences and causal attributions are elicited by different mechanisms. It is generally agreed that correspondence inferences are formed by going through several stages. Firstly, the person must interpret the behavior, and then, if there is enough information to do so, add situational information and revise their inference. They may then further adjust their inferences by taking into account dispositional information as well. Causal attributions however seem to be formed either by processing visual information using perceptual mechanisms, or by activating knowledge structures (e.g. schemas) or by systematic data analysis and processing. 
Hence, due to the difference in theoretical structures, correspondence inferences are more strongly related to behavioral interpretation than causal attributions.Based on the preceding differences between causal attribution and correspondence inference, some researchers argue that the fundamental attribution error should be considered as the tendency to make dispositional rather than situational explanations for behavior, whereas the correspondence bias should be considered as the tendency to draw correspondent dispositional inferences from behavior. With such distinct definitions between the two, some cross-cultural studies also found that cultural differences of correspondence bias are not equivalent to those of fundamental attribution error. While the latter has been found to be more prevalent in individualistic cultures than collectivistic cultures, correspondence bias occurs across cultures, suggesting differences between the two phrases. Further, disposition correspondent inferences made to explain the behavior of nonhuman actors (e.g., robots) do not necessarily constitute an attributional error because there is little meaningful distinction between the interior dispositions and observable actions of machine agents.
**Postal codes in the Cayman Islands** Postal codes in the Cayman Islands: Postal codes in the Cayman Islands are used by the Cayman Islands Postal Service to route inbound mail to groups of post office boxes in the country. A postal code typically consists of an island code, a hyphen separator, and a section code. They were introduced in 2006. There are only three island codes: KY1 for Grand Cayman, KY2 for Cayman Brac, and KY3 for Little Cayman. Each of these is subdivided into section codes according to which local post office handles a particular group of boxes. Postal codes in the Cayman Islands: A single-sheet portable document (PDF) listing all postal codes in the archipelago is available from CIPS.
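As a small illustration of the format just described, the sketch below splits a code into its island and section parts. The three island codes come from the text; the assumption that the section code is a short alphanumeric string after the hyphen is mine, since the source does not specify its exact form.

```python
import re

# Island codes from the text above; the section-code pattern (\w+) is an
# assumption, not an official rule.
ISLANDS = {"KY1": "Grand Cayman", "KY2": "Cayman Brac", "KY3": "Little Cayman"}

def parse_cayman_postcode(code: str):
    """Return (island name, section code) or None if the code does not parse."""
    match = re.fullmatch(r"(KY[123])-(\w+)", code.strip().upper())
    if not match:
        return None
    island, section = match.groups()
    return ISLANDS[island], section

print(parse_cayman_postcode("KY1-1201"))  # hypothetical example code
```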
**Amsterdam Density Functional** Amsterdam Density Functional: Amsterdam Density Functional (ADF) is a program for first-principles electronic structure calculations that makes use of density functional theory (DFT). ADF was first developed in the early seventies by the group of E. J. Baerends from the Vrije Universiteit in Amsterdam, and by the group of T. Ziegler from the University of Calgary. Nowadays many other academic groups contribute to the software. Software for Chemistry & Materials (SCM), formerly known as Scientific Computing & Modelling, is a spin-off company from the Baerends group. SCM has been coordinating the development and distribution of ADF since 1995. Together with the rise in popularity of DFT in the nineties, ADF has become a popular computational chemistry software package used in industrial and academic research. ADF excels at spectroscopy, transition-metal, and heavy-element problems. A periodic-structure counterpart of ADF named BAND is available to study bulk crystals, polymers, and surfaces. The Amsterdam Modeling Suite has expanded beyond DFT since 2010, with the semi-empirical MOPAC code, the Quantum ESPRESSO plane wave code, a density-functional based tight binding (DFTB) module, a reactive force field module ReaxFF, and an implementation of Klamt's COSMO-RS method, which also includes COSMO-SAC, UNIFAC, and QSPR. Specific features and capabilities: See the ADF website for a comprehensive listing. Slater-type orbitals (STOs) as basis functions for both molecular and periodic calculations, in contrast to the Gaussian orbitals (GTOs) and plane waves used in other codes. Basis sets and relativistic methods (zeroth-order regular approximation to the Dirac equation (ZORA), X2C; scalar relativistic and spin-orbit coupling) for all the chemical elements up to no. 118. Various molecular properties: IR, Raman, VCD, UV, XAS spectra; NMR and EPR (ESR) parameters. Solvent and environmental effects via COSMO, QM/MM, DRF, subsystem DFT. Specific features and capabilities: Many chemical analysis tools (energy decomposition analysis, transfer integrals, (partial) density of states, etc.); periodic DFT with atomic orbitals in 1D, 2D and 3D, and a graphical interface to the plane wave code Quantum ESPRESSO; thermodynamic properties of solvents and solutions (solubility, LogP, VLE, LLE) with COSMO-RS; semi-empirical modules MOPAC and DFTB; parallelized ReaxFF with a GUI for reactive molecular dynamics. Integrated graphical user interface (GUI) for all modules to set up calculations and visualize the results. Specific features and capabilities: Out-of-the-box parallel calculations via IntelMPI, OpenMPI or native MPI. Limited GPU support.
**Niobium dioxide** Niobium dioxide: Niobium dioxide is the chemical compound with the formula NbO2. It is a bluish-black non-stoichiometric solid with a composition range of NbO1.94–NbO2.09. It can be prepared by reducing Nb2O5 with H2 at 800–1350 °C. An alternative method is the reaction of Nb2O5 with Nb powder at 1100 °C. Properties: The room-temperature form of NbO2 has a tetragonal, rutile-like structure with short Nb-Nb distances, indicating Nb-Nb bonding. The high-temperature form also has a rutile-like structure with short Nb-Nb distances. Two high-pressure phases have been reported: one with a rutile-like structure (again with short Nb-Nb distances), and another, at higher pressure, with a baddeleyite-related structure. NbO2 is insoluble in water and is a powerful reducing agent, reducing carbon dioxide to carbon and sulfur dioxide to sulfur. In an industrial process for the production of niobium metal, NbO2 is produced as an intermediate by the hydrogen reduction of Nb2O5. The NbO2 is subsequently reacted with magnesium vapor to produce niobium metal.
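The two preparations described above can be written as nominal balanced equations, idealized to the stoichiometric composition NbO2 (the actual product is non-stoichiometric, as noted); the balancing is added here for illustration and is not taken from the source.

```latex
% Hydrogen reduction of the pentoxide, and comproportionation with Nb metal
\begin{align}
  \mathrm{Nb_2O_5} + \mathrm{H_2} &\longrightarrow 2\,\mathrm{NbO_2} + \mathrm{H_2O}\\
  2\,\mathrm{Nb_2O_5} + \mathrm{Nb} &\longrightarrow 5\,\mathrm{NbO_2}
\end{align}
```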
**60S ribosomal protein L12** 60S ribosomal protein L12: 60S ribosomal protein L12 is a protein that in humans is encoded by the RPL12 gene. Function: Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L11P family of ribosomal proteins. It is located in the cytoplasm. The protein binds directly to the 26S rRNA. This gene is co-transcribed with the U65 snoRNA, which is located in its fourth intron. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome. Interactions: RPL12 has been shown to interact with CDC5L.
**Torticollis** Torticollis: Torticollis, also known as wry neck, is an extremely painful, dystonic condition defined by an abnormal, asymmetrical head or neck position, which may be due to a variety of causes. The term torticollis is derived from the Latin words tortus, meaning "twisted", and collum, meaning "neck".The most common case has no obvious cause, and the pain and difficulty with turning the head usually goes away after a few days, even without treatment in adults. Signs and symptoms: Torticollis is a fixed or dynamic tilt, rotation, with flexion or extension of the head and/or neck. The type of torticollis can be described depending on the positions of the head and neck. Signs and symptoms: laterocollis: the head is tipped toward the shoulder rotational torticollis: the head rotates along the longitudinal axis towards the shoulder anterocollis: forward flexion of the head and neck and brings the chin towards the chest retrocollis: hyperextension of head and neck backward bringing the back of the head towards the backA combination of these movements may often be observed. Torticollis can be a disorder in itself as well as a symptom in other conditions. Signs and symptoms: Other signs and symptoms include: Neck pain Occasional formation of a mass Thickened or tight sternocleidomastoid muscle Tenderness on the cervical spine Tremor in head Unequal shoulder heights Decreased neck movement Causes: A multitude of conditions may lead to the development of torticollis including: muscular fibrosis, congenital spine abnormalities, or toxic or traumatic brain injury. Causes: A rough categorization discerns between congenital torticollis and acquired torticollis.Other categories include: Osseous Traumatic CNS/PNS Ocular Non-muscular soft tissue Spasmodic Drug induced Oral ties (lip and tongue ties) Congenital muscular torticollis Congenital muscular torticollis is the most common torticollis that is present at birth. Congenital muscular torticollis is the third most common congenital musculoskeletal deformity in children. The cause of congenital muscular torticollis is unclear. Birth trauma or intrauterine malposition is considered to be the cause of damage to the sternocleidomastoid muscle in the neck. Other alterations to the muscle tissue arise from repetitive microtrauma within the womb or a sudden change in the calcium concentration in the body that causes a prolonged period of muscle contraction.Any of these mechanisms can result in a shortening or excessive contraction of the sternocleidomastoid muscle, which curtails its range of motion in both rotation and lateral bending. The head is typically tilted in lateral bending toward the affected muscle and rotated toward the opposite side. In other words, the head itself is tilted in the direction of the shortened muscle, with the chin tilted in the opposite direction.Congenital torticollis is presented at 1–4 weeks of age, and a hard mass usually develops. It is normally diagnosed using ultrasonography and a color histogram or clinically by evaluating the infant's passive cervical range of motion.Congenital torticollis constitutes the majority of cases seen in paediatric clinical practice. The reported incidence of congenital torticollis is 0.3-2.0%. Sometimes a mass, such as a sternocleidomastoid tumor, is noted in the affected muscle. Congenital Muscular Torticollis is also defined by a fibrosis contracture of the sternocleidomastoid muscle on one side of the neck. 
Congenital torticollis may not resolve on its own, and can result in rare complications including plagiocephaly. Secondary complications associated with Congenital Muscular Torticollis include visual dysfunctions, facial asymmetry, delayed development, cervical scoliosis, and vertebral wedge degeneration, which will have a serious impact on the child's appearance and even mental health. Benign paroxysmal torticollis is a rare disorder affecting infants. Recurrent attacks may last up to a week. The condition improves by age 2. The cause is thought to be genetic. Causes: Acquired torticollis: Noncongenital muscular torticollis may result from muscle spasm, trauma, scarring or disease of cervical vertebrae, adenitis, tonsillitis, rheumatism, enlarged cervical glands, retropharyngeal abscess, or cerebellar tumors. It may be spasmodic (clonic) or permanent (tonic). The latter type may be due to Pott's Disease (tuberculosis of the spine). Causes: A self-limiting, spontaneously occurring form of torticollis with one or more painful neck muscles is by far the most common ('stiff neck') and will pass spontaneously in 1–4 weeks. Usually the sternocleidomastoid muscle or the trapezius muscle is involved. Sometimes draughts, colds, or unusual postures are implicated; however, in many cases, no clear cause is found. These episodes are commonly seen by physicians. Most commonly this self-limiting form relates to an untreated dental occlusal dysfunction, which is brought on by clenching and grinding the teeth during sleep. Once the occlusion is treated it will completely resolve. Treatment is accomplished with an occlusal appliance and equilibration of the dentition. Causes: Tumors of the skull base (posterior fossa tumors) can compress the nerve supply to the neck and cause torticollis, and these problems must be treated surgically. Infections in the posterior pharynx can irritate the nerves supplying the neck muscles and cause torticollis, and these infections may be treated with antibiotics if they are not too severe, but could require surgical debridement in intractable cases. Ear infections and surgical removal of the adenoids can cause an entity known as Grisel's syndrome, a subluxation of the upper cervical joints, mostly the atlantoaxial joint, due to inflammatory laxity of the ligaments caused by an infection. The use of certain drugs, such as antipsychotics and neuroleptic-class antiemetics (phenothiazines), can cause torticollis. There are many other rare causes of torticollis. A very rare cause of acquired torticollis is fibrodysplasia ossificans progressiva (FOP), the hallmark of which is malformed great toes. Spasmodic torticollis: Torticollis with recurrent but transient contraction of the muscles of the neck, and especially of the sternocleidomastoid, is called spasmodic torticollis. Synonyms are "intermittent torticollis", "cervical dystonia" or "idiopathic cervical dystonia", depending on cause. Causes: Trochlear torticollis: Torticollis can be caused by damage to the trochlear nerve (fourth cranial nerve), which supplies the superior oblique muscle of the eye. The superior oblique muscle is involved in depression, abduction, and intorsion of the eye. When the trochlear nerve is damaged, the eye is extorted because the superior oblique is not functioning. The affected person will have vision problems unless they turn their head away from the side that is affected, causing intorsion of the eye and balancing out the extorsion of the eye. 
This can be diagnosed by the Bielschowsky test, also called the head-tilt test, in which the head is turned to the affected side. A positive test occurs when the affected eye elevates, seeming to float up. Anatomy: The underlying anatomical distortion causing torticollis is a shortened sternocleidomastoid muscle. This is the muscle of the neck that originates at the sternum and clavicle and inserts on the mastoid process of the temporal bone on the same side. There are two sternocleidomastoid muscles in the human body, and when they both contract, the neck is flexed. The main blood supply for these muscles comes from the occipital artery, superior thyroid artery, transverse scapular artery and transverse cervical artery. The main innervation is from cranial nerve XI (the accessory nerve), but the second, third and fourth cervical nerves are also involved. Pathologies in these blood and nerve supplies can lead to torticollis. Diagnosis: Evaluation of a child with torticollis begins with history taking to determine circumstances surrounding birth and any possibility of trauma or associated symptoms. Physical examination reveals decreased rotation and bending to the side opposite from the affected muscle. Some say that congenital cases more often involve the right side, but there is not complete agreement about this in published studies. Evaluation should include a thorough neurologic examination, and the possibility of associated conditions such as developmental dysplasia of the hip and clubfoot should be examined. Radiographs of the cervical spine should be obtained to rule out obvious bony abnormality, and MRI should be considered if there is concern about structural problems or other conditions. Diagnosis: Ultrasonography can be used to visualize muscle tissue, with a colour histogram generated to determine cross-sectional area and thickness of the muscle. Evaluation by an optometrist or an ophthalmologist should be considered in children to ensure that the torticollis is not caused by vision problems (IV cranial nerve palsy, nystagmus-associated "null position", etc.). The differential diagnosis for torticollis includes cranial nerve IV palsy, spasmus nutans, Sandifer syndrome, myasthenia gravis, and cerebrospinal fluid leak. Cervical dystonia appearing in adulthood has been believed to be idiopathic in nature, as specific imaging techniques most often find no specific cause. Treatment: Initially, the condition is treated with physical therapies, such as stretching to release tightness, strengthening exercises to improve muscular balance, and handling to stimulate symmetry. A TOT collar is sometimes applied. Early initiation of treatment is very important for full recovery and to decrease the chance of relapse. Treatment: Physical therapy Physical therapy is an option for treating torticollis in a non-invasive and cost-effective manner. In children above 1 year of age, surgical release of the tight sternocleidomastoid muscle is indicated, along with aggressive therapy and appropriate splinting. Occupational therapy rehabilitation in congenital muscular torticollis concentrates on observation, orthosis, gentle stretching, myofascial release techniques, parents' counseling and training, and a home exercise program. While outpatient infant physiotherapy is effective, home therapy performed by a parent or guardian is just as effective in reversing the effects of congenital torticollis.
It is important for physical therapists to educate parents on the importance of their role in the treatment and to create a home treatment plan together with them for the best results for their child. Five components have been recognized as the "first choice intervention" in physical therapy for torticollis: neck passive range of motion, neck and trunk active range of motion, development of symmetrical movement, environmental adaptations, and caregiver education. In therapy, parents or guardians should expect their child to be provided with these important components, explained in detail below. Lateral neck flexion and overall range of motion can be regained more quickly in newborns when parents conduct physical therapy exercises several times a day. Physical therapists should teach parents and guardians to perform the following exercises: Stretching the neck and trunk muscles actively. Parents can help promote this stretching at home with infant positioning. For example, prone positioning will encourage the child to lift their chin off the ground, thereby strengthening their bilateral neck and spine extensor muscles and stretching their neck flexor muscles. Active rotation exercises in supine, sitting or prone position, using toys, lights and sounds to attract the infant's attention so that they turn the neck and look toward the non-affected side. Treatment: Stretching the muscle passively in a prone position. Passive stretching is manual and does not include infant involvement. Two people can be involved in these stretches, one person stabilizing the infant while the other holds the head and slowly brings it through the available range of motion. Passive stretching should not be painful to the child and should be stopped if the child resists. Also, discontinue the stretch if changes in breathing or circulation are seen or felt. Treatment: Stretching the muscle in a lateral position supported by a pillow (have the infant lie on the side with the neck supported by a pillow). The affected side should be against the pillow to deviate the neck towards the non-affected side. Treatment: Environmental adaptations can control posture in strollers, car seats and swings (using a U-shaped neck pillow or blankets to hold the neck in a neutral position); passive cervical rotation (much like stretching when supported by a pillow, with the affected side down); and positioning the infant in the crib with the affected side by the wall, so they must turn to the non-affected side to face out. Physical therapists often encourage parents and caregivers of children with torticollis to modify the environment to improve neck movements and position. Modifications may include adding neck supports to the car seat to attain optimal neck alignment, reducing time spent in a single position, using toys to encourage the child to look in the direction of limited neck movement, alternating sides when bottle- or breastfeeding, and encouraging prone playtime (tummy time). Although the Back to Sleep campaign promotes infants sleeping on their backs to avoid sudden infant death syndrome during sleep, parents should still ensure that their infants spend some waking hours on their stomachs. Treatment: Manual therapy A systematic review looked into the possible benefits of using manipulation techniques to counteract infant torticollis. The study also considered the impact of manipulation on an infant's sleep, crying, and restlessness. This review did not report any adverse effects of using manipulation techniques.
It was shown that manipulation techniques used on their own had little to no statistically significant difference from a placebo group in the immediate term. When manipulation techniques were combined with physical therapy, there was a change in symptoms compared to the use of physical therapy alone. When targeting the cervical spine, manipulation techniques were shown to shorten treatment duration in infants with head asymmetries. Treatment: Microcurrent therapy A Korean study has recently introduced an additional treatment called microcurrent therapy that may be effective in treating congenital torticollis. For this therapy to be effective, the children should be under three months of age and have torticollis involving the entire sternocleidomastoid muscle, with a palpable mass and a muscle thickness over 10 mm. Microcurrent therapy sends minute electrical signals into tissue to restore the normal frequencies in cells. It is completely painless, and children can only feel the probe from the machine on their skin. Microcurrent therapy is thought to increase ATP and protein synthesis as well as enhance blood flow, reduce muscle spasms, and decrease pain along with inflammation. It should be used in addition to regular stretching exercises and ultrasound diathermy. Ultrasound diathermy generates heat deep within body tissues to help with contractures, pain and muscle spasms, as well as decrease inflammation. This combination of treatments shows remarkable outcomes in the duration of time children are kept in rehabilitation programs: microcurrent therapy can cut the length of a rehabilitation program almost in half, with a full recovery seen after 2.6 months. About 5–10% of cases fail to respond to stretching and require surgical release of the muscle. Treatment: Surgery Surgical release involves the two heads of the sternocleidomastoid muscle being dissected free. This surgery can be minimally invasive and done laparoscopically. Usually surgery is performed on those who are over 12 months old. The surgery is for those who do not respond to physical therapy or botulinum toxin injection, or have a very fibrotic sternocleidomastoid muscle. After surgery the child will be required to wear a soft neck collar (also called a Callot's cast). There will be an intense physiotherapy program for 3–4 months, as well as strengthening exercises for the neck muscles. Treatment: Other treatments Other treatments include rest and analgesics for acute cases, diazepam or other muscle relaxants, botulinum toxin, encouraging active movements for children 6–8 months of age, and ultrasound diathermy. Prognosis: Studies and evidence from clinical practice show that 85–90% of cases of congenital torticollis are resolved with conservative treatment such as physical therapy. Earlier intervention is shown to be more effective and faster than later treatments. More than 98% of infants with torticollis treated before 1 month of age recover by 2.5 months of age. Infants between 1 and 6 months usually require about 6 months of treatment. After that point, therapy will take closer to 9 months, and it is less likely that the torticollis will be fully resolved. It is possible that torticollis will resolve spontaneously, but relapse is possible. For this reason, infants should be reassessed by their physical therapist or other provider 3–12 months after their symptoms have resolved.
Other animals: In veterinary literature, usually only the lateral bend of the head and neck is termed torticollis, whereas the analogue of rotatory torticollis in humans is called a head tilt. The most frequently encountered form of torticollis in domestic pets is the head tilt, but occasionally a lateral bend of the head and neck to one side is encountered. Other animals: Head tilt Causes of a head tilt in domestic animals are either diseases of the central or peripheral vestibular system or a relieving posture due to neck pain. Known causes of head tilt in domestic animals include: Encephalitozoon cuniculi (E. cuniculi) infection in rabbits; parasitic infestation by the nematode (roundworm) Baylisascaris procyonis in rabbits; inner ear infection; hypothyroidism in dogs; disease of the VIIIth cranial nerve (the vestibulocochlear nerve) through trauma, infection, inflammation or neoplasia; disease of the brain stem through stroke, trauma or neoplasia; damage to the vestibular organ due to toxicity, inflammation or impaired blood supply; and geriatric vestibular syndrome in dogs.
**LiveBench** LiveBench: LiveBench is a continuously running benchmark project for assessing the quality of protein structure prediction and secondary structure prediction methods. LiveBench focuses mainly on homology modeling and protein threading but also includes secondary structure prediction, comparing publicly available webserver output to newly deposited protein structures in the Protein Data Bank. Like the EVA project and unlike the related CASP and CAFASP experiments, LiveBench is intended to study the accuracy of predictions that would be obtained by non-expert users of publicly available prediction methods. A major advantage of LiveBench and EVA over CASP projects, which run once every two years, is their comparatively large data set.
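LiveBench's comparisons come down to aligning each server's prediction with the corresponding newly released experimental structure and scoring the agreement. As a simple illustration of that idea for the secondary structure track only, the sketch below computes the standard three-state (Q3) accuracy. This is a hedged, hypothetical example: the function name and sequences are invented, and it is not LiveBench's actual scoring code, which relies on its own battery of model-quality measures.

```python
def q3_accuracy(predicted: str, observed: str) -> float:
    """Fraction of residues whose 3-state secondary-structure label
    (H = helix, E = strand, C = coil) matches the label derived from
    the experimentally solved structure. Both strings must be aligned
    and of equal length; positions with unknown structure ('X') are skipped."""
    if len(predicted) != len(observed):
        raise ValueError("prediction and observation must be the same length")
    scored = [(p, o) for p, o in zip(predicted, observed) if o != "X"]
    if not scored:
        return 0.0
    return sum(p == o for p, o in scored) / len(scored)

# Hypothetical example: a server's prediction vs. a DSSP-style assignment
# derived from a newly deposited PDB entry.
prediction  = "CCHHHHHHHCCCEEEEECCCC"
observation = "CCHHHHHHCCCCEEEEECCCX"
print(f"Q3 = {q3_accuracy(prediction, observation):.2f}")
```

A continuous benchmark would run a comparison like this (and structure-level scores) for every new target as it appears, then aggregate per server over time.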
**Jeopardy!** Jeopardy!: Jeopardy! is an American game show created by Merv Griffin. The show is a quiz competition that reverses the traditional question-and-answer format of many quiz shows. Rather than being given questions, contestants are instead given general knowledge clues in the form of answers and they must identify the person, place, thing, or idea that the clue describes, phrasing each response in the form of a question. Jeopardy!: The original daytime version debuted on NBC on March 30, 1964, and aired until January 3, 1975. A nighttime syndicated edition aired weekly from September 1974 to September 1975, and a revival, The All-New Jeopardy!, ran on NBC from October 1978 to March 1979 on weekdays. The syndicated show familiar to modern viewers and produced daily (currently by Sony Pictures Television) premiered on September 10, 1984. Jeopardy!: Art Fleming served as host for all versions of the show between 1964 and 1979. Don Pardo served as announcer until 1975, and John Harlan announced for the 1978–1979 season. The daily syndicated version premiered in 1984 with Alex Trebek as host and Johnny Gilbert as announcer. Trebek hosted until his death, with his last episode airing January 8, 2021, after over 36 years in the role. Following his death, a variety of guest hosts completed the season beginning with consulting producer and former contestant Ken Jennings, each hosting for a few weeks before passing the role onto someone else. Then-executive producer Mike Richards initially assumed the position of permanent host in September 2021, but relinquished the role within a week. Since then, Jennings and Mayim Bialik have served as permanent rotating hosts of the syndicated series. While Bialik was originally arranged to host additional primetime specials on ABC, and spin-offs, the announcement of Jeopardy! Masters in 2023 meant these duties were shared as well. Jeopardy!: Currently in its 39th season, Jeopardy! is one of the longest-running game shows of all time. The show has consistently enjoyed a wide viewership and received many accolades from professional television critics. With over 8,000 episodes aired, the daily syndicated version of Jeopardy! has won a record 39 Daytime Emmy Awards as well as a Peabody Award. In 2013, the program was ranked No. 45 on TV Guide's list of the 60 greatest shows in American television history. Jeopardy! has also gained a worldwide following with regional adaptations in many other countries. Gameplay: Each game of Jeopardy! features three contestants competing in three rounds: Jeopardy!, Double Jeopardy!, and Final Jeopardy! In each round, contestants are presented trivia clues phrased as answers, to which they must respond in the form of a question that correctly identifies whatever the clue is describing. For example, if a contestant were to select "Presidents for $200", the resulting clue could be "This 'Father of Our Country' didn't really chop down a cherry tree", to which the correct response is "Who is/was George Washington?" The Jeopardy! and Double Jeopardy! rounds each feature large electronic game boards consisting of six categories with five clues each. The clues are valued by dollar amounts from lowest to highest, ostensibly by difficulty. The values of the clues increased over time, with those in the Double Jeopardy! round always being double the range of the Jeopardy! round. On the original Jeopardy! series, clue values in the first round ranged from $10 to $50 in the Jeopardy! round and $20 to $100 in Double Jeopardy! 
On The All-New Jeopardy!, they ranged from $25 to $125 and $50 to $250. The 1984 series' first round originally ranged from $100 to $500 in Jeopardy! and $200 to $1,000 in Double Jeopardy! These ranges were increased to $200–$1,000 and $400–$2,000, respectively, on November 26, 2001.Gameplay begins when the returning champion (or in Tournament of Champions play, the highest seeded player, or in all tournaments' second or final round play, the player with the highest score in the previous round) selects a clue by indicating its category and dollar value on the game board. The two (or if there is no returning champion, three) challengers, or in non-Tournament of Champions play, first round tournament contestants, participate in a random draw prior to taping to determine contestant order, and if there is no returning champion or in first round play of regular tournaments, the contestant who drew the first lectern starts first. The underlying clue is revealed and read aloud by the host, after which any contestant may ring in using a lock-out device. The first contestant to ring in successfully is prompted to respond to the clue by stating a question containing the correct answer to the clue. Any grammatically coherent question with the correct answer within it counts as a correct response. If the contestant responds correctly, its dollar value is added to the contestant's score, and they may select a new clue from the board. An incorrect response or a failure to respond within five seconds deducts the clue's value from the contestant's score and allows the other contestants the opportunity to ring in and respond. If the response is not technically incorrect but otherwise judged too vague to be correct, the contestant is given additional time to provide a more specific response. Whenever none of the contestants ring in and respond correctly, the host gives the correct response, and the player who selected the previous clue chooses the next clue. Gameplay continues until the board is cleared or the round's time length expires, which is typically indicated by a beeping sound. The contestant who has the lowest score selects the first clue to start the Double Jeopardy! round. If there is a tie for the contestant with the lowest score, the contestant with the last correct question among the tied players will select first in the round, a rule change since season 38 (2021) and made public on an August 2022 show podcast.A "Daily Double" clue is hidden behind one clue in the Jeopardy! round, and two in Double Jeopardy! The name and inspiration were taken from a horse-racing term. Daily Double clues with a sound component are known as "Audio Daily Doubles", and clues with a video component are known as "Video Daily Doubles". Before the clue is revealed, the contestant who has selected the Daily Double must declare a wager, from a minimum of $5 to a maximum of their entire score (known as a "true Daily Double") or the highest clue value available in the round, whichever is greater. Only the contestant who chooses the Daily Double is allowed to answer the clue and they must provide a response. A correct response adds the value of the wager to the contestant's score while an incorrect response (or failure to provide any response at all) deducts the same value. Whether or not the contestant responds correctly, they choose the next clue.During the Jeopardy! 
round, contestants are not penalized for forgetting to phrase their response in the form of a question, although the host will remind them to watch their phrasing in future responses if they do. In the Double Jeopardy! round and in the Daily Double in the Jeopardy! round, the phrasing rule is followed more strictly, with a response only able to be ruled as correct if it is phrased properly in question form. A contestant who initially does not phrase a response in the form of a question must re-phrase it before the host rules against them.Contestants are encouraged to select the clues in order from lowest to highest value, as the clues are sometimes written in each category to flow from one to the next, as is the case with game shows that ask questions in a linear string. Deviating from this is known as the "Forrest Bounce", a strategy in which contestants randomly pick clues to confuse opponents that was first used in 1985 by Chuck Forrest, who won over $70,000 in his initial run as champion. Trebek expressed that this strategy not only annoyed him but the staffers as well since it also disrupts the rhythm that develops when revealing the clues and increases the potential for error. Another strategy used by some contestants is to play all of the higher-valued clues first and build up a substantial lead, starting at the bottom of the board. James Holzhauer, whose April–June 2019 winning streak included the ten highest single-day game totals, regularly used this strategy, in conjunction with the Forrest Bounce and aggressive Daily Double wagering.From the premiere of the original Jeopardy! until the end of the 1984–85 syndicated season, contestants were allowed to ring in as soon as the clue was revealed. Since September 1985, contestants have been required to wait until the clue is read before ringing in. To accommodate the rule change, lights were added to the game board (unseen by home viewers) to signify when it is permissible for contestants to signal. Attempting to signal before the light goes on locks the contestant out for half of a second. The change was made to allow the home audience to play along more easily and to keep an extremely fast contestant from potentially dominating the game. In pre-1985 episodes, a sound accompanied a contestant ringing in. According to Trebek, the sound was eliminated because it was "distracting to the viewers" and presented a problem when contestants rang in while Trebek was still reading the clue. Contestants who are visually impaired or blind have been given a card with the category names printed in Braille before each round begins.To ensure fairness in competition and accuracy in scores, the judges double-check their own rulings throughout the production of each episode. If it is determined at any point that a previous response was wrongly ruled correct or incorrect during the taping of an episode, the scores are adjusted at the first available opportunity, typically either at the start of the next round/segment or immediately after a Daily Double is found, with the host providing any necessary explanation regarding the changes. If an error that may have affected the result is not discovered until after taping of an episode is completed, the affected contestant(s) are invited back to compete on a future show, complying with federal quiz show regulations. However, this is rare, as most errors are found in the course of an episode's taping itself. Gameplay: Contestants who finish Double Jeopardy! 
with $0 or a negative score are automatically eliminated from the game at that point and awarded a consolation prize. On at least one episode hosted by Art Fleming, all three contestants finished Double Jeopardy! with $0 or less, and as a result, no Final Jeopardy! round was played. This rule is still in place for the syndicated version, although staff have suggested that it is not set in stone and they may decide to display the clue for home viewers' play if such a situation were ever to occur. Gameplay: Final Jeopardy! The Final Jeopardy! round features a single clue. At the end of the Double Jeopardy! round, the host announces the Final Jeopardy! category and a commercial break follows. Contestants who finish Double Jeopardy! with less than $1 do not participate in this round. During the break, partitions are placed between the contestant lecterns, and each contestant makes a final wager; they may wager any amount of their earnings, but may not wager certain numbers with connotations that are deemed inappropriate. Contestants write their wagers using a light pen on an electronic display on their lectern, and are limited to five minutes (although the limit may be adjusted if production issues delay the resumption of taping). During this time, contestants also pre-write the phrasing of their question. After the break, the Final Jeopardy! clue is revealed and read by the host. The contestants have 30 seconds to write their responses on the electronic display while the show's "Think!" music plays. If either the display or the pen malfunctions, contestants can manually write their responses and wagers using an index card and marker, although the index card has the required phrasing pre-printed on each side ("Who/What"). Visually impaired or blind contestants typically type their responses and wagers with a computer keyboard. Contestants' responses are revealed in order of their pre-Final Jeopardy! scores, from lowest to highest. Once a correct response is revealed, the host confirms it; otherwise, the host reveals the correct response if all contestants responded incorrectly. A correct response adds the amount of the contestant's wager to their score. A miss, failure to respond, insufficiently specific response, misspelling that affects the pronunciation of the answer, or failure to phrase the response as a question (even if correct) deducts it. The contestant with the highest score at the end of the round is that day's winner. If there is a tie for second place, consolation prizes are awarded based on the scores going into the Final Jeopardy! round. If all three contestants finish with $0, no one returns as champion for the next show, and based on scores going into the Final Jeopardy! round, the two contestants who were first and second receive the second-place prize, and the contestant in third receives the third-place prize. Gameplay: Various researchers have studied Final Jeopardy! wagering strategies. If the leader's score is more than twice the second-place contestant's score (a situation known as a "runaway game"), the leader can guarantee victory by making a sufficiently small wager. Otherwise, according to Jeopardy! College Champion Keith Williams, the leader usually wagers an amount that would bring their total to a dollar more than twice the second-place contestant's score, guaranteeing a win with a correct response. Writing about Jeopardy!
wagering in the 1990s, mathematicians George Gilbert and Rhonda Hatcher said that "most players wager aggressively." Gameplay: Winnings The top scorer in each game is paid their winnings in cash and returns to play in the next match. Non-winners receive consolation prizes instead of their winnings in the game. Since May 16, 2002, consolation prizes have been awarded in cash: $2,000 for the second-place contestant(s) and $1,000 for the third-place contestant. Since travel and lodging are generally not provided for contestants, cash consolation prizes offset these costs. Production covers the cost of travel for returning champions and for players invited back because of errors, who must make multiple trips to Los Angeles. Production also covers the cost of travel if a tournament travels (does not stay in Los Angeles) in its second week. Starting in Season 40, according to the official podcast in August 2023, consolation prizes were raised by $1,000 each as a result of inflation, to $3,000 for second place and $2,000 for third. Gameplay: During Art Fleming's hosting run, all three contestants received their winnings in cash where applicable. This was changed at the start of Trebek's hosting run to avoid the problem of contestants who stopped participating in the game, or avoided wagering in Final Jeopardy!, rather than risk losing the money they had already won. This also allowed clue values to increase, since only one contestant's score is paid instead of three. From 1984 to 2002, non-winning contestants on the Trebek version received vacation packages and merchandise, which were donated by manufacturers as promotional consideration. Since 2004, a presenting sponsor has provided cash prizes to the losing contestants. Gameplay: Returning champions The winner of each episode returns to compete against two new contestants on the next episode. Originally, a contestant who won five consecutive days retired undefeated and was guaranteed a spot in the Tournament of Champions. The five-day limit was eliminated on September 8, 2003. In rare instances, contestants tie for first place. The rules related to ties have changed over time. Since November 24, 2014, ties for first place following Final Jeopardy! are broken with a tie-breaker clue, resulting in only one champion being named, keeping their winnings, and returning to compete in the next show. The tied contestants are given the single clue, and the first contestant to buzz in must give the correct question. A contestant cannot win by default if the opponent gives an incorrect question or forgets to phrase the response as a question (even if correct); the contestant must give a correct question to win the game. If neither player gives the correct question, another clue is given. Previously, if two or all three contestants tied for first place, they were declared "co-champions", and each retained his or her winnings and (unless one was a five-time champion who retired prior to 2003) returned on the following episode. A tie occurred on the January 29, 2014, episode when Arthur Chu, leading at the end of Double Jeopardy!, wagered to tie challenger Carolyn Collins rather than win outright. Chu followed Jeopardy! College Champion Keith Williams's advice to wager for the tie to increase the leader's chances of winning. A three-way tie for first place has only occurred once on the syndicated version hosted by Trebek, on March 16, 2007, when Scott Weiss, Jamey Kirby, and Anders Martinson all ended the game with $16,000.
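To make the wagering arithmetic described above concrete, here is a minimal sketch of the leader's standard Final Jeopardy! options: the runaway check, the one-dollar "cover" bet, and the wager-for-the-tie that Chu used. The function name and example scores are hypothetical, and the sketch deliberately ignores the third player's score; it is an illustration of the arithmetic, not an official formula.

```python
def final_jeopardy_wagers(leader: int, second: int) -> dict:
    """Illustrative wagering arithmetic for the Final Jeopardy! leader,
    considering only the top two pre-Final scores (hypothetical example)."""
    if leader > 2 * second:
        # "Runaway game": even if the trailer doubles up, a small enough
        # wager keeps the leader ahead no matter what.
        return {"runaway": True,
                "max_safe_wager": leader - 2 * second - 1}
    # Standard "cover" bet: if correct, the leader finishes one dollar
    # above twice the trailer's pre-Final score.
    cover_bet = 2 * second + 1 - leader
    # Wager-for-the-tie variant (as Arthur Chu played it): if correct,
    # the leader finishes exactly at twice the trailer's score.
    tie_bet = 2 * second - leader
    return {"runaway": False, "cover_bet": cover_bet, "tie_bet": tie_bet}

# Leader $19,600 vs. second place $8,400: a runaway; any wager up to $2,799 is safe.
print(final_jeopardy_wagers(19_600, 8_400))
# Leader $12,000 vs. second place $8,000: not a runaway; cover bet $4,001, tie bet $4,000.
print(final_jeopardy_wagers(12_000, 8_000))
```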
Until March 1, 2018, no regular game had ended in a tie-breaker. Gameplay: If no contestant finishes Final Jeopardy! with a positive total, there is no winner and three new contestants compete on the next episode. This has happened on several episodes, including the second episode hosted by Trebek.A winner unable to return as champion because of a change in personal circumstances – for example, illness or a job offer – may be allowed to appear as a co-champion (now a rare occurrence since the co-champion rule was disestablished in early Season 31) in a later episode. Gameplay: Variations for tournament play Throughout each season, Jeopardy! features various special tournaments for particular groups, including among others college students, teenagers, and teachers. Each year at the Tournament of Champions, the players who had won the most games and money in the previous season come back to compete against each other for a large cash prize. Tournaments generally feature 15 contestants and run for 10 consecutive episodes. They generally take place across three rounds: the quarterfinal round (five games), the semifinal round (three games), and the final round (two games). Gameplay: The first five episodes, the quarterfinals, feature three new contestants each day. Other than in the Tournament of Champions, the quarterfinals are unseeded and contestants participate in a random draw to determine playing order and lectern positions over the course of the five games. The Tournament of Champions is seeded based on total winnings in regular games to determine playing order and lectern positions, with the top five players occupying the champion's lectern for the quarterfinal games. Since the removal of the five-game limit in regular gameplay, in the unlikely case of a tie in total winnings between two Tournament of Champions players, the player who won the most games receives the higher seed. If still tied, seeding is determined by comparing the tied players' aggregate Double Jeopardy! and (if still tied) Jeopardy! round scores. Gameplay: The winners of the five quarterfinal games and the four highest-scoring non-winners ("wild cards") advance to the semifinals, which run for three days. The semifinals are seeded with the quarterfinal winners being seeded 1–5 based on their quarterfinal scores, and the wild cards being seeded 6–9. The winners of the quarterfinal games with the three highest scores occupy the champion's lectern for the semifinals. The winners of the three semifinal games advance to play in a two-game final match, in which the scores from both games are combined to determine the overall standings. This format has been used since the first Tournament of Champions in 1985 and was devised by Trebek himself.To prevent later contestants from playing to beat the earlier wild card scores instead of playing to win, contestants are "completely isolated from the studio until it is their time to compete".If none of the contestants in a quarterfinal end with a positive score, no contestant automatically qualifies from that game, and an additional wild card contestant advances instead. 
This occurred in the quarterfinals of the 1991 Seniors Tournament and the semifinals of the 2013 Teen Tournament, where the rule was in effect during the semifinals, but after that tournament the rule has changed for semifinals and finals.As the players are not isolated during the semifinals the way they are during the quarterfinals, show officials discovered a flaw after the 2013 Teen Tournament, because the triple zero loss happened in the second semifinal that allowed the third semifinal of the 2013 Teen Tournament to be played differently than the first (which was played before the triple zero loss). Starting with the 2013 Tournament of Champions, semifinal games, like the two-game finals, must have a winner. Players who participate in Final Jeopardy! will participate in the standard tie-breaker, regardless of the score being zero or a positive score. Similarly, if all three players have a zero score at the end of a two-game match, a normal tournament finals format will proceed to a tie-breaker. In a tournament format where a player must win multiple games to win the tournament, such as the 2020 Greatest of All Time or 2022 Tournament of Champions, the tie-breaker will be used regardless of the score being zero or positive for players to win the game and receive the point. Gameplay: In the standard tournament finals format, contestants who finish Double Jeopardy! with a $0 or negative score on either day do not play Final Jeopardy! that day. Their score for that leg is recorded as $0. Conception and development: In a 1963 Associated Press profile released shortly before the original Jeopardy! series premiered, Merv Griffin offered the following account of how he created the quiz show: My wife Julann just came up with the idea one day when we were in a plane bringing us back to New York City from Duluth. I was mulling over game show ideas, when she noted that there had not been a successful 'question and answer' game on the air since the quiz show scandals. Why not do a switch, and give the answers to the contestant and let them come up with the question? She fired a couple of answers to me: "5,280"—and the question of course was 'How many feet in a mile?'. Another was '79 Wistful Vista'; that was Fibber and Mollie McGee's address. I loved the idea, went straight to NBC with the idea, and they bought it without even looking at a pilot show. Conception and development: Griffin's first conception of the game used a board comprising ten categories with ten clues each, but after finding that this board could not easily be shown on camera, he reduced it to two rounds of thirty clues each, with five clues in each of six categories. He originally intended requiring grammatically correct phrasing (e.g., only accepting "Who is..." for a person), but after finding that grammatical correction slowed the game down, he decided to accept any correct response that was in question form. Griffin discarded his initial title of What's the Question? when skeptical network executive Ed Vane rejected his original concept of the game, claiming, "It doesn't have enough jeopardies."The format of giving contestants the answers and requiring the questions had previously been used by the Gil Fates-hosted program CBS Television Quiz, which aired from July 1941 until May 1942. Personnel: Hosts Art Fleming was the original host of the show throughout both NBC runs and its brief weekly syndicated run, between 1964 and 1979. 
Alex Trebek served as host of the daily syndicated version from its premiere in 1984 until his death in 2020, except when he switched places with Wheel of Fortune host Pat Sajak as an April Fool's joke on April 1, 1997.On a Fox News program in July 2018, Trebek said the odds of his retirement in 2020 were 50/50 "and a little less". He added that he might continue if he's "not making too many mistakes" but would make an "intelligent decision" as to when he should give up the emcee role. In November 2018, Trebek renewed his contract as host through 2022, stating in January 2019 that the work schedule consisting of 46 taping sessions each year was still manageable for a man of his age. On March 6, 2019, Trebek announced he had been diagnosed with stage IV pancreatic cancer (a disease from which Fleming also died on April 25, 1995). In a prepared video statement announcing his diagnosis, Trebek noted that his prognosis was poor but that he would aggressively fight the cancer in hopes of beating the odds and would continue hosting Jeopardy! for as long as he was able, joking that his contract obligated him to do so for three more years regardless of health.Trebek was still serving as host, having taped his last episode on October 29, 2020, for an intended Christmas Day broadcast, when contingency plans were made for him to miss the next taping, scheduled for November 9–10, 2020. In an October 13, 2022, interview for New York magazine's Vulture section, Ken Jennings noted supervising producers Lisa Broffman and Rocky Schmidt had named him interim host for that taping and remembered his last conversation with Trebek days before rehearsal was to commence. Personnel: I was scheduled to come into the studio to rehearse for some games; even if Alex bounced back as he had before, he wanted somebody to fill in for him for a little while. A producer set up a call, and his voice was notably weaker than we’d ever heard it on the air, which really struck me at first. It was a tough moment. But once you got over the timbre of the voice, he was still very much Alex — going down conversational side paths about old movies he liked. At one point, he started talking about tennis players he compared to various Jeopardy! champions. But the thing that stuck with me is he thanked me for coming in to fill in for him. That just broke me. I said, “Alex, are you kidding? We should be thanking you. I’d take a bullet for you, Alex. I’m happy to help.” In an August 2, 2023, podcast by Sony Pictures Television and Sony Music, This is Jeopardy!: The Story of America's Favorite Quiz Show, supervising producer Lisa Broffman noted the rehearsal for Jennings was scheduled November 8, 2020, but cancelled when Rocky Schmidt gave staff the news Alex Trebek had died that day. Personnel: We had planned on having Ken (Jennings) come in to rehearse on November 8th, 2020. So we had a crew in, and that morning, (supervising producer) Rocky Schmidt called me and said, "He's gone." So we canceled the rehearsal day. 
At the time of Trebek's death, producers publicly declined to discuss any plans to introduce his successor, stating that they had enough new episodes with Trebek as host to run through Christmas Day, even though the show's official podcast in 2023 acknowledged that Ken Jennings had already been scheduled as interim host, with his first taping cancelled on the news of Trebek's death. On November 9, 2020, the first episode to air after Trebek's death, executive producer Mike Richards paid tribute to Trebek after a few seconds of silence in which the lights on the Jeopardy! set (which had been set up for Jennings to host before Trebek's death) slowly dimmed. That episode, as well as subsequent episodes that aired after Trebek's death, also included a dedication screen at the end of the credits through the remainder of the season. To compensate for concerns over pre-emptions caused by holiday week specials and sports, Sony announced on November 23, 2020, that the air dates of Trebek's final week would be pushed back, with episodes scheduled for the week of December 21–25 moved to January 4–8, 2021. Reruns of episodes in which Trebek recorded clues on location aired from December 21, 2020, to January 1, 2021, before his final episodes aired January 4–8, 2021. Jennings took over hosting when production resumed on November 30, 2020, three weeks after he had been scheduled to host. The six weeks of episodes began airing January 11, 2021. Sony announced the hosts would come from "within the Jeopardy! family". Between January and February 2021, additional guest hosts were announced, including executive producer Mike Richards; television news personalities Katie Couric, Bill Whitaker, Savannah Guthrie, Sanjay Gupta, and Anderson Cooper; athlete Aaron Rodgers; talk show host Mehmet Oz; and actress and neuroscientist Mayim Bialik. An April 2021 announcement listed the final group of guest hosts, including television news personalities George Stephanopoulos and Robin Roberts; Reading Rainbow host LeVar Burton; Squawk on the Street co-host David Faber; and Fox Sports broadcaster Joe Buck. In addition, Buzzy Cohen, the 2017 Jeopardy! Tournament of Champions winner, hosted the 2021 Tournament of Champions. On August 11, 2021, it was announced that Richards would succeed Trebek as host of the daily show and Bialik would host Jeopardy! primetime specials and spin-offs. On August 20, 2021, following a report from The Ringer exposing controversial remarks made on his podcast in the past, resurfaced controversies from Richards's time on The Price Is Right, and accusations of self-dealing regarding his executive producer position, Richards stepped down as host after taping the first week of episodes while remaining executive producer, before being dismissed from the latter role on August 31. Richards's five episodes as host aired in September 2021. Bialik and Jennings then alternated hosting the show for the rest of season 38, through the end of July 2022. Bialik also hosted the season's various tournaments and primetime specials. In July 2022, it was announced that Bialik and Jennings would continue splitting hosting duties for the 39th season of the syndicated version. Jennings would also host the Tournament of Champions and the new Second Chance Tournament, while Bialik would again host primetime specials and spinoffs, including a new celebrity edition of Jeopardy!, which premiered in September 2022. However, in January 2023, ABC announced Jennings would host a Jeopardy!
Masters spinoff, indicating a change of arrangement. In May 2023, Bialik opted not to host the final episodes of the season in support of writers during the 2023 Writers Guild of America strike, with Jennings stepping in to host the remaining episodes. Personnel: Announcers Don Pardo held the role of announcer on the NBC version and weekly syndicated version, while John Harlan replaced him for The All-New Jeopardy! In the daily syndicated version's first pilot, from 1983, Jay Stewart served as the announcer, but Johnny Gilbert took over the role at Trebek's recommendation when that version was picked up as a series. Personnel: Clue Crew The Jeopardy! Clue Crew, introduced on September 24, 2001, was a team of roving correspondents who appeared in videos, recorded around the world, to narrate some clues. Explaining why the Clue Crew was added, executive producer Harry Friedman said, "TV is a visual medium, and the more visual we can make our clues, the more we think it will enhance the experience for the viewer."Following the initial announcement of auditions for the team, over 5,000 people applied for Clue Crew posts. The original Clue Crew members were Cheryl Farrell, Jimmy McGuire, Sofia Lidskog, and Sarah Whitcomb Foss. Jon Cannon and Kelly Miyahara joined the Clue Crew in 2005. Farrell recorded clues until October 2008, and Cannon until July 2009. Miyahara, who also served as announcer for the Sports Jeopardy! spin-off series, left in 2019.The Clue Crew was eliminated, beginning with the 39th season in September 2022; Foss became a producer for the show and McGuire a stage manager. Foss also serves as in-studio announcer when Johnny Gilbert is unable to attend a taping. In such cases, her voice is replaced with Gilbert's in post-production.The Clue Crew traveled to over 300 cities worldwide, spanning all 50 of the United States and 46 other countries. Occasionally, they visited schools to showcase the educational game Classroom Jeopardy! Production staff Robert Rubin served as the producer of the original Jeopardy! series for most of its run and later became its executive producer. Following Rubin's promotion, the line producer was Lynette Williams.Griffin was the daily syndicated version's executive producer until his retirement in 2000. Trebek served as producer as well as host until 1987, when he began hosting NBC's Classic Concentration for the next four years. At that time, he handed producer duties to George Vosburgh, who had formerly produced The All-New Jeopardy! In 1997, Harry Friedman, Lisa Finneran, and Rocky Schmidt succeeded Vosburgh as producers of the show. Beginning in 1999, Friedman became executive producer, and Gary Johnson became the third producer. In 2006, Deb Dittmann and Brett Schneider became producers, while Finneran, Schmidt, and Johnson became supervising producers.The original Jeopardy! series was directed at different times by Bob Hultgren, Eleanor Tarshis, and Jeff Goldstein. Dick Schneider, who directed episodes of The All-New Jeopardy!, returned as director from 1984 to 1992. From 1992 to 2018, Kevin McCarthy served as director, who had previously served as associate director under Schneider. McCarthy announced his retirement after 26 years on June 26, 2018, and was succeeded as director by Clay Jacobsen.As of 2012, Jeopardy! employs nine writers and five researchers to create and assemble the categories and clues. 
Billy Wisse is the editorial producer and Michele Loud is the editorial supervisor. Previous writing and editorial supervisors have included Jules Minton, Terrence McDonnell, Harry Eisenberg, and Gary Johnson. Trebek himself also contributed to writing clues and categories. Naomi Slodki is the production designer for the program. Previous art directors have included Henry Lickel, Dennis Roof, Bob Rang, and Ed Flesh (who also designed sets for other game shows such as The $25,000 Pyramid, Name That Tune, and Wheel of Fortune). On August 1, 2019, Sony Pictures Television announced that Friedman would retire as executive producer of both Jeopardy! and Wheel of Fortune at the end of the 2019–20 season. On August 29, 2019, it was announced that Mike Richards would replace Friedman in 2020. On August 31, 2021, after Richards had resigned as permanent host earlier in the month, he was fired from his executive producer position at both Jeopardy! and Wheel, with Sony executives citing continued internal turmoil that Richards's resignation as host had failed to quell as they had hoped. Michael Davies from Embassy Row, which also produces the Sony game show Who Wants to Be a Millionaire, was selected as interim executive producer through the 2021–22 season. On April 14, 2022, Davies accepted the role on a permanent basis. Production: The daily syndicated version of Jeopardy! is produced by Sony Pictures Television (previously known as Columbia TriStar Television, the successor company to original producer Merv Griffin Enterprises). The copyright holder is Jeopardy Productions, which, like SPT, operates as a subsidiary of Sony Pictures Entertainment. The rights to distribute the program worldwide are owned by CBS Media Ventures, which absorbed original distributor King World Productions in 2007. The original Jeopardy! series was taped in Studio 6A at NBC Studios at 30 Rockefeller Plaza in New York City, and The All-New Jeopardy! was taped in Studio 3 at NBC's Burbank Studios at 3000 West Alameda Avenue in Burbank, California. The Trebek version was initially taped at Metromedia Stage 7, KTTV, on Sunset Boulevard in Hollywood, but moved its production facilities to Hollywood Center Studios' Stage 1 in 1985. In 1994, the Jeopardy! production facilities moved to Sony Pictures Studios' Stage 10 on Washington Boulevard in Culver City, California, where production has remained since. Stage 10 was dedicated in Trebek's honor when episodes for the 38th season began taping in August 2021, with the stage being renamed "The Alex Trebek Stage" with help from the Trebek family (Alex's wife, Jean; son, Matthew; and daughters, Emily and Nicky). Five episodes are taped each day, with two days of taping every other week. However, taping slowed after Alex Trebek's health issues in 2019 until his last taping day on October 29, 2020; some weeks had three episodes taped within a single day, while others had two. Production: Set Various technological and aesthetic changes have been made to the Jeopardy! set over the years. The original game board was exposed from behind a curtain and featured clues printed on cardboard pull cards which were revealed as contestants selected them. The All-New Jeopardy!'s game board was exposed from behind double-slide panels and featured pull cards with the dollar amount in front and the clue behind it. When the Trebek version premiered in 1984, the game board used individual television monitors for each clue within categories.
The original monitors were replaced with larger and sleeker ones in 1991. In 2006, these monitors were discarded in favor of a nearly seamless projection video wall, which was replaced in 2009 with 36 high-definition flat-panel monitors manufactured by Sony Electronics.From 1985 to 1997, the sets were designed to have a background color of blue for the Jeopardy! round and red for the Double Jeopardy! and Final Jeopardy! rounds. In 1991 a brand new set was introduced that resembled a grid. On the episode aired November 11, 1996, Jeopardy! introduced the first of several sets designed by Naomi Slodki, who intended the set to resemble "the foyer of a very contemporary library, with wood and sandblasted glass and blue granite".In 2002, another new set was introduced, which was given slight modifications when Jeopardy! and sister show Wheel of Fortune transitioned to high-definition broadcasting in 2006. During this time, virtual tours of the set began to be featured on the official web site. The various HD improvements for Jeopardy! and Wheel represented a combined investment of approximately $4 million, 5,000 hours of labor, and 6 miles (9.7 km) of cable. Both programs had been shot using HD cameras for several years before beginning to broadcast in HD. On standard-definition television broadcasts, episodes continue displaying with an aspect ratio of 4:3. Production: In 2009, Jeopardy! updated its set once again. The new set debuted with special episodes taped at the 42nd annual International CES technology trade show, hosted at the Las Vegas Convention Center in Winchester (Las Vegas Valley), Nevada, and became the primary set for Jeopardy! when the 2009–2010 season began.In 2013, Jeopardy! introduced another new set. This set underwent several modifications in 2020, with a wider studio without any studio audience (the last episodes of the 2019–2020 season were also taped without an audience), and new lecterns for contestants and the host. The lecterns are spaced considerably apart to comply with California state regulations imposed when filming resumed after the coronavirus pandemic ended the 2020 season early. Although the modified COVID-era set from the previous two seasons was kept, the live studio audience fully returned for season 39, which began airing on September 12, 2022. Production: Theme music Since the debut of Jeopardy! in 1964, several songs and arrangements have been used as the theme music, most of which were composed by Griffin. The main theme for the original Jeopardy! series was "Take Ten", composed by Griffin's wife Julann. The All-New Jeopardy! opened with "January, February, March" and closed with "Frisco Disco", both of which were composed by Griffin himself.The best-known theme song on Jeopardy! is "Think!", originally composed by Griffin under the title "A Time for Tony", as a lullaby for his son. "Think!" has always been used for the 30-second period in Final Jeopardy! when the contestants write down their responses, and since the syndicated version debuted in 1984, a rendition of that tune has been used as the main theme song. "Think!" has become so popular that it has been used in many different contexts, from sporting events to weddings; "its 30-second countdown has become synonymous with any deadline pressure". Griffin estimated that the use of "Think!" had earned him royalties of over $70 million throughout his lifetime. "Think!" led Griffin to win the Broadcast Music, Inc. 
(BMI) President's Award in 2003, and during GSN's 2009 Game Show Awards special, it was named "Best Game Show Theme Song". In 1997, the main theme (later rearranged in 2001) and Final Jeopardy! "Think!" cue were rearranged by Steve Kaplan, who served as music director until his December 2003 death. Then in 2008, the Jeopardy! music package was rearranged again, this time by Chris Bell Music & Sound Design. The fully-synthesized 2021 version of the main theme, which draws elements from the 2008 arrangement, was composed by Bleeding Fingers Music and has been used since season 38. Production: Audition process For the original Jeopardy! series, prospective contestants contacted the production office in New York to arrange an appointment and to preliminarily determine eligibility. They were briefed and auditioned together in groups of ten to thirty individuals, participating in both a written test and mock games. Individuals who were successful at the audition were invited to appear on the program within approximately six weeks.Since 1984, prospective contestants begin with a written exam comprising 50 questions. This exam is administered online periodically, as well as being offered at regional contestant search events. Since 1998, a Winnebago recreational vehicle dubbed the "Jeopardy! Brain Bus" travels to conduct regional events throughout the United States and Canada. Participants who correctly answer at least 35 out of 50 questions advance in the audition process and are invited to attend in-person group auditions throughout the country. At these auditions, a second written exam is administered, followed by a mock game and interviews. Those who are approved are notified at a later time and invited to appear as contestants.Contestants are required to travel to the production location, which since 1994, has been Culver City, California, making travel and lodging arrangements at their own expense when doing so. This requirement has been criticized by contestant Ben Goldstein, who stated in June 2023, in reference to The Jeopardy! Fan website, 'that the show would be more accessible to a wider range of contestants if the production paid for these expenses, saying, "Not everyone can afford a trip to L.A. with no guarantee of payback." The website's creator Andy Saunders, replied, "This has been a longstanding Jeopardy! policy and has generally been presented as an issue of fairness by the show. A 1994 Oakland Tribune article quotes then–contestant coordinator Kelley Carpenter as saying, 'Because we have both out-of-towners and locals appearing on the show, if we were to pay for an airfare and a hotel, we would have technically given away money to some contestants coming from the East Coast, which wouldn't be fair to someone who only lives 20 minutes away.'" Eligibility is limited to people who have not previously appeared as contestants, and have not been to an in-person audition for at least 18 months.Many of the contestants who appear on the series, including a majority of Teen Tournament contestants and nearly half of all College Tournament contestants, participated in quiz bowl competitions during their time in high school. The National Academic Quiz Tournaments has been described by Ken Jennings as a de facto "minor league" for game shows such as Jeopardy! Broadcast history: The original Jeopardy! series premiered on NBC on March 30, 1964, and by the end of the 1960s was the second-highest-rated daytime game show, behind only The Hollywood Squares. 
The program was successful until 1974, when Lin Bolen, then NBC's Vice President of Daytime Programming, moved the show out of the noontime slot where it had been located for most of its run, as part of her effort to boost ratings among the 18–34 female demographic. After 2,753 episodes, the original Jeopardy! series ended on January 3, 1975. To compensate Griffin for its cancellation, NBC purchased Wheel of Fortune, another show that he had created, and premiered it the following Monday. A syndicated edition of Jeopardy!, distributed by Metromedia and featuring many contestants who were previously champions on the original series, aired in primetime from 1974 to 1975. The NBC daytime series was later revived as The All-New Jeopardy!, which premiered on October 2, 1978, and aired 108 episodes, ending on March 2, 1979. This revival featured significant rule changes, including progressive elimination of contestants over the course of the main game, and a Super Jeopardy! bonus round (based loosely on bingo) instead of Final Jeopardy!The daily syndicated version debuted on September 10, 1984, and was launched in response to the success of the syndicated version of Wheel and the installation of electronic trivia games in pubs and bars. This version of the program has outlived 300 other game shows and has become the second most popular game show in syndication (behind Wheel), averaging 25 million viewers per week. The most recent renewal, in January 2023, extends it through the 2027–28 season. Broadcast history: Jeopardy! has spawned versions in many foreign countries throughout the world, including Canada, the United Kingdom, Germany, Sweden, Russia, Denmark, Israel, and Australia. The American syndicated version of Jeopardy! is also broadcast throughout the world, with international distribution rights handled by CBS Studios International.Three spin-off versions of Jeopardy! have been created. Rock & Roll Jeopardy! debuted on VH1 in 1998 and ran until 2001. The format centered around post-1950s popular music trivia and was hosted by Jeff Probst. Jep!, which aired on GSN during the 1998–1999 season, was a special children's version hosted by Bob Bergen and featured various rule changes from the original version. Sports Jeopardy!, a sports-themed version hosted by Dan Patrick, premiered in 2014 on the Crackle digital service and eventually moved to the cable sports network NBCSN in 2016.In March 2020, taping halted as a result of the COVID-19 pandemic. Originally, the production team taped episodes without an audience, until production was shut down altogether. In May 2020, Sony announced new episodes would air until June 12, 2020, including the Teacher's Tournament. In July 2020, Jeopardy! began rerunning a package of 20 classic episodes, including the first two from the syndicated run.Production resumed in August 2020 with new safety measures in place following government guidelines to protect contestants, staff, crew and talent. New expanded lecterns, designed to allow social distancing during gameplay, are spaced apart from one another. Until the COVID-19 pandemic in the United States is over, only essential staff and crew are allowed on stage. Personal protective equipment is provided for everyone behind the scenes and all staff and crew are tested regularly, while contestants are also tested before they step onto the set. Social distancing measures are also enforced off-stage. 
Ken Jennings joined production in an on-air role in 2020. Following Trebek's death, an announcement noted that the pre-taped episodes would air posthumously until December 25, 2020. After a late start to tapings caused by the pandemic and the cancellation of November tapings, officials added a two-week lineup of classic episodes to avoid preemptions from NFL, NBA, and local Christmas programming; this moved Trebek's final episode to January 8, 2021. The first episode with an interim host aired January 11, 2021. Broadcast history: Archived episodes Only a small number of episodes of the first three Jeopardy! versions survive. From the original NBC daytime version, archived episodes mostly consist of black-and-white kinescopes of the original color videotapes. Various episodes from 1967, 1971, 1973, and 1974 are listed among the holdings of the UCLA Film and Television Archive. The 1964 "test episode", Episode No. 2,000 (from February 21, 1972, in color), and a June 1975 episode of the weekly syndicated edition exist at the Paley Center for Media. The test episode, of which only a few limited clips had previously been released, was released to the public in full on the Jeopardy! YouTube account on March 30, 2022, along with an audiotape containing approximately five minutes (including introductions and Final Jeopardy!) from the first aired episode; both were released to celebrate the 58th anniversary of the show's debut. The 1975 series finale, also in color and containing two short clips from the 1967 "College Scholarship Tournament" and Gene Shalit's appearance on an early version of Celebrity Jeopardy!, also exists in its entirety. Incomplete paper records of the NBC-era games exist on microfilm at the Library of Congress. GSN holds The All-New Jeopardy!'s premiere and finale in broadcast quality, and aired the latter on December 31, 1999, as part of its "Y2Play" marathon. The UCLA Archive holds a copy of a pilot taped for CBS in 1977, and the premiere exists among the Paley Center's holdings. GSN, which, like Jeopardy!, is an affiliate of Sony Pictures Television, has rerun episodes since the channel's launch in 1994. Copies of 43 Trebek-hosted syndicated Jeopardy! episodes aired between 1989 and 2004 have been collected by the UCLA Archive, and the premiere and various other episodes are included in the Paley Center's collection. In July 2022, Vulture reported that vintage episodes of the daily syndicated version would air on a dedicated channel on Pluto TV (owned by distributor Paramount Global) beginning in August. The channel, named Jeopardy! Hosted by Alex Trebek, launched on August 1. Reception and legacy: By 1994, the press called Jeopardy! "an American icon". It has won a record 39 Daytime Emmy Awards. The program holds the record for the Daytime Emmy Award for Outstanding Game/Audience Participation Show, with seventeen awards won in that category. Trebek won seven awards for Outstanding Game Show Host. Twelve other awards were won by the show's directors and writers in the categories of Outstanding Direction for a Game/Audience Participation Show and Outstanding Special Class Writing before these categories were removed in 2006. On June 17, 2011, Trebek shared the Lifetime Achievement Award with Sajak at the 38th Annual Daytime Emmy Awards ceremony. The following year, the program was honored with a Peabody Award for its role in encouraging, celebrating, and rewarding knowledge. In its April 17–23, 1993, issue, TV Guide named Jeopardy! 
the best game show of the 1970s as part of a celebration of the magazine's 40th anniversary. In January 2001, the magazine ranked the program number 2 on its "50 Greatest Game Shows" list, second only to The Price Is Right. In 2013, it ranked Jeopardy! number 45 on its list of the 60 Best TV Series of All Time, calling it "habit-forming" and saying that the program "always makes [its viewers] feel smarter". Also in 2013, the program ranked number 1 on TV Guide's list of the 60 Greatest Game Shows. In the summer of 2006, the program was ranked number 2 on GSN's list of the 50 Greatest Game Shows of All Time, second only to Match Game. A hall of fame honoring Jeopardy! was added to the Sony Pictures Studios tour on September 20, 2011. It features the show's Emmy Awards as well as retired set pieces, classic merchandise, video clips, photographs, and other memorabilia related to Jeopardy!'s history. In 1989, Fleming expressed dissatisfaction with the daily syndicated Jeopardy! series in an essay published in Sports Illustrated. He confessed that he watched the Trebek version only infrequently, and only for a handful of questions, and criticized this iteration mainly for its Hollywood setting. Fleming believed that moving the show to Hollywood, away from the New Yorkers he considered more intelligent and authentic, brought both an unrealistic glamour and a dumbing-down of the program that he disdained. He also disliked the decision not to award losing contestants their cash earnings (believing the parting gifts offered instead were cheap) and expressed surprise that what he considered a parlor game had transformed into such a national phenomenon under Trebek. In television interviews, Fleming expressed similar sentiments while also noting that he approved of Trebek's approach to hosting, that Fleming and Trebek were personal friends, and that, despite the modern show's flaws, it was still one of the best television shows. Jeopardy!'s answer-and-question format has become widely entrenched: Fleming observed that other game shows had contestants phrasing their answers in question form, leading hosts to remind them that they are not competing on Jeopardy! Tournaments and other events: Regular events Since 1985, the show has held an annual Tournament of Champions featuring the top fifteen champions who have appeared on the show since the last tournament. The top prize awarded to the winner was originally valued at $100,000, and increased to $250,000 in 2003. Other regular tournaments include the Teen Tournament, with a $100,000 top prize; the College Championship, in which undergraduate students from American colleges and universities compete for a $100,000 top prize; and the Teachers Tournament, where educators compete for a $100,000 top prize. Each tournament runs for ten consecutive episodes in a format devised by Trebek himself, consisting of five quarter-final games, three semi-finals, and a final consisting of two games with the scores totaled. Winners of the College Championship and Teachers Tournament are invited to participate in the Tournament of Champions. Tournaments and other events: Non-tournament events held regularly on the show include Celebrity Jeopardy!, in which celebrities and other notable individuals compete for charitable organizations of their choice, and Kids Week, a special competition for school-age children aged 10 through 12. 
Tournaments and other events: Special events Three International Tournaments, held in 1996, 1997, and 2001, featured one-week competitions among champions from each of the international versions of Jeopardy! Each of the countries that aired their own version of the show in those years could nominate a contestant. The format was identical to the semi-finals and finals of other Jeopardy! tournaments. In 1996 and 1997, the winner received $25,000; in 2001, the top prize was doubled to $50,000. The 1997 tournament was recorded in Stockholm on the set of the Swedish version of Jeopardy!, and is significant for being the first week of Jeopardy! episodes taped in a foreign country. Magnus Härenstam, the host of the Swedish version of Jeopardy! at the time, opened the first episode of the 1997 tournament and introduced Trebek. In addition, prior to Final Jeopardy! each day, a video clip of Härenstam with Trebek in Stockholm was shown. Tournaments and other events: There have been several special tournaments featuring the greatest contestants in Jeopardy! history. The first of these "all-time best" tournaments, Super Jeopardy!, aired in the summer of 1990 on ABC, and featured 35 top contestants from the previous seasons of the Trebek version and one notable champion from the original Jeopardy! series competing for a top prize of $250,000. In 1993, that year's Tournament of Champions was followed by a Tenth Anniversary Tournament conducted over five episodes. In May 2002, to commemorate the Trebek version's 4,000th episode, the show invited fifteen champions to play for a $1 million prize in the Million Dollar Masters tournament, which took place at Radio City Music Hall in New York City. The Ultimate Tournament of Champions aired in 2005 and pitted 145 former Jeopardy! champions against each other, with two winners moving on to face Ken Jennings in a three-game final for $2,000,000, the largest prize in the show's history. Overall, the tournament spanned 15 weeks and 76 episodes, starting on February 9 and ending on May 25. In 2014, Jeopardy! commemorated the 30th anniversary of the Trebek version with a Battle of the Decades tournament, in which 15 champions apiece from the first, second, and third decades of Jeopardy!'s daily syndicated history competed for a grand prize of $1,000,000. On November 18, 2019, it was announced that Jeopardy! would return to ABC for a primetime "Greatest of All Time" tournament beginning January 7, 2020, featuring Ken Jennings, Brad Rutter, and James Holzhauer. The event used a multi-night format, with each episode featuring a two-game match. The contestant with the higher cumulative point total across both games was declared the winner of the match, and the first to win three matches received a $1,000,000 prize. The tournament concluded on January 14, 2020, after four matches, with Jennings winning three matches to Holzhauer's one and Rutter's none. Rutter and Holzhauer each received $250,000 for their participation. Tournaments and other events: In November 1998, Jeopardy! traveled to Boston to reassemble 12 past Teen Tournament contestants for a special Teen Reunion Tournament. In 2008, fifteen contestants from the first two Kids Weeks competed in a special reunion tournament. During 2009–2010, a special edition of Celebrity Jeopardy!, called the Million Dollar Celebrity Invitational, was played in which twenty-seven contestants from past celebrity episodes competed for a grand prize of $1,000,000 for charity. 
The grand prize was won by Michael McKean. The IBM Challenge aired February 14–16, 2011, and featured IBM's Watson computer facing off against Ken Jennings and Brad Rutter in a two-game match played over three shows. This was the first man-vs.-machine competition in Jeopardy!'s history. Watson won both the first game and the overall match to win the grand prize of $1 million, which IBM divided between two charities (World Vision International and World Community Grid). Jennings, who won $300,000 for second place, and Rutter, who won the $200,000 third-place prize, both pledged to donate half of their winnings to charity. The competition brought the show its highest ratings since the Ultimate Tournament of Champions. In 2019, the All-Star Games featured six teams of three former champions each; each team member played one of the three rounds in each game. Rutter, David Madden, and Larissa Kelly won the tournament. Record holders: Jeopardy!'s record for the longest winning streak is held by Ken Jennings, who competed on the show from June 2 through November 30, 2004, winning 74 matches before being defeated by Nancy Zerg in his 75th appearance. He amassed $2,522,700 over his 75 episodes, for an average of $33,636 per episode. At the time, he held the record as the highest money-winner ever on American game shows, and his winning streak increased the show's ratings and popularity to the point where it became TV's highest-rated syndicated program. In addition to these winnings on the daily Jeopardy! series, Jennings returned for a number of Jeopardy! special tournaments, taking home the following: the second-place prize of $500,000 in the 2005 Jeopardy! Ultimate Tournament of Champions, the $300,000 second-place prize in the 2011 Jeopardy! IBM Challenge, the $123,600 second-place prize in the 2014 Jeopardy! Battle of the Decades, a $100,000 prize (one-third of the $300,000 second-place prize awarded to his three-player team) in the 2019 Jeopardy! All-Star Games, and the $1,000,000 first-place prize in the 2020 Jeopardy! The Greatest of All Time tournament. Record holders: The record holder for lifetime Jeopardy!-related winnings is Brad Rutter, who has won nearly $5.2 million in cash and prizes across five episodes of the regular series (when the rules stipulated that a contestant who won five consecutive days retired undefeated) and seven Jeopardy! tournaments and events (winning five of those specials, along with two third-place finishes). Counting all prizes that he won, he has achieved a cumulative total of $5,129,036 in winnings, which included: the $55,102 prize over five regular episodes in 2000 (including the value of two cars won, worth $45,000), the $100,000 first-place prize in the 2001 Jeopardy! Tournament of Champions, the $1,000,000 first-place prize in the 2002 Jeopardy! Million Dollar Masters Tournament, the $2,000,000 first-place prize (plus $115,000 in preliminary rounds) in the 2005 Jeopardy! Ultimate Tournament of Champions, the $200,000 third-place prize in the 2011 Jeopardy! IBM Challenge, the $1,030,600 first-place prize in the 2014 Jeopardy! Battle of the Decades, $333,334 (one-third of the $1,000,000 first-place prize, shared with his three-player team) in the 2019 Jeopardy! All-Star Games, and a $250,000 prize in the 2020 Jeopardy! The Greatest of All Time tournament. Record holders: The holder of the all-time record for single-day winnings on Jeopardy! is James Holzhauer. 
Holzhauer first surpassed the record of $77,000, held since 2010 by Roger Craig, when he earned $110,914 on the episode that aired on April 9, 2019. Holzhauer pushed his own single-day record to $131,127 on the episode that aired April 17, 2019, by amassing $71,114 over the episode's first two rounds, then successfully wagering an additional $60,013 in the Final Jeopardy! round. Holzhauer's total of 32 consecutive games won was second of all time in regular game play at the time and remains fourth overall after Matt Amodio and Amy Schneider surpassed him in 2021 and 2022, respectively. When he departed the show, he held the top 16 spots for highest single-day regular-game winnings, and he is the only player to win more than $100,000 in a single episode in regular play (achieved six times). On April 15, 2019, Holzhauer moved into second place for regular play Jeopardy! winnings (behind Jennings) and third place for all Jeopardy!-related winnings (behind Rutter and Jennings). On April 23, 2019, Holzhauer joined Rutter and Jennings as the third Jeopardy!-made millionaire (Amodio eventually became the fourth). The next day, Holzhauer moved into the top ten list for all-time American game show winnings at No. 10, joining Rutter (#1) and Jennings (#2) on that list. Holzhauer was defeated on the June 3, 2019, episode, finishing in second place. His winnings on Jeopardy! totaled $2,464,216, $58,484 behind Jennings' record. Counting over $58,000 from a 2014 appearance on The Chase along with his $2.96 million from Jeopardy! (including his Tournament of Champions and The Greatest of All Time prizes), he is #3 on the list of all-time American game show winnings. Record holders: The record-holder among women on Jeopardy! for regular series winnings is Amy Schneider, with a total of $1,382,800 earned in 40 episodes between 2021 and 2022. Schneider is currently ranked second all-time in consecutive games won, behind only Jennings (74). Mattea Roach, whose winning streak earned $560,983 over 23 games in April and May 2022, has been the most successful Canadian contestant to have competed on the program. Following their run, Roach ranked fifth for both consecutive games won and regular play Jeopardy! winnings. The highest single-day winnings in a Celebrity Jeopardy! tournament were achieved by comedian Andy Richter during a first-round game of the 2009–2010 "Million Dollar Celebrity Invitational", in which he finished with $68,000 for his selected charity, the St. Jude Children's Research Hospital. Four contestants on the Trebek version share the record for winning a game with the lowest amount possible, at $1. The first was U.S. Air Force Lieutenant Colonel Darryl Scott, on the episode that aired January 19, 1993. The second was Benjamin Salisbury, on a Celebrity Jeopardy! episode that aired April 30, 1997. The third was Brandi Chastain, on the Celebrity Jeopardy! episode that aired February 9, 2001. The fourth was U.S. Navy Lieutenant Manny Abell, on the episode that aired October 17, 2017. Other media: Portrayals and parodies Jeopardy! has been featured in several films, television shows, and books over the years, mostly with one or more characters participating as contestants, or viewing and interacting with the game show from their own homes. The sitcoms The Golden Girls, Mama's Family, and Cheers are among the shows which have featured primary characters participating in a fictionalized version of the show (the latter in the episode "What Is... Cliff Clavin?"). 
The animated television shows Family Guy, The Simpsons, and Scooby-Doo and Guess Who? have done likewise, all three times with Trebek providing his own voice. On the series The Conners, Jackie Harris plays on a fictionalized version of the show during the guest host run of Aaron Rodgers. Other media: From 1996 to 2015, Saturday Night Live featured a recurring Celebrity Jeopardy! sketch in which Trebek, portrayed by Will Ferrell, has to deal with the exasperating ineptitude of the show's celebrity guests and the constant taunts of antagonists Sean Connery (played by Darrell Hammond) and Burt Reynolds (Norm Macdonald). The show has also parodied Jeopardy! by way of the recurring sketch Black Jeopardy!, in which the host and two of the three contestants are stereotypical black Americans and the categories and clues likewise reflect black American culture; the third contestant provides a contrast to the others. The 1992 film White Men Can't Jump features a subplot in which Gloria Clemente (played by Rosie Perez) attempts to pass the show's auditions. In the David Foster Wallace short story "Little Expressionless Animals", first published in The Paris Review and later reprinted in Wallace's collection Girl with Curious Hair, the character Julie Smith competes and wins on every Jeopardy! game for three years (a total of 700 episodes) and then uses her winnings to pay for the care of her brother, who has autism. American musician "Weird Al" Yankovic satirized the Art Fleming incarnation of the show with his 1984 single "I Lost on Jeopardy", a parody of Greg Kihn's 1983 hit song "Jeopardy". Released months before the Trebek version of the show, the song's accompanying music video featured a re-creation of the 1960s-era set, along with cameos from Fleming, Pardo and, at the end of the video, Kihn himself. At the DEF CON hacker conference in Las Vegas, a variant called "Hacker Jeopardy" has been organized; in 2004, it was won by Kevin Mitnick. Other media: Merchandise Over the years, the Jeopardy! brand has been licensed for various products. From 1964 through 1976, with one release in 1982, Milton Bradley issued annual board games based on the original Fleming version. The Trebek version has been adapted into board games released by Pressman Toy Corporation, Tyco Toys, and Parker Brothers. In addition, Jeopardy! has been adapted into a number of video games released on various consoles and handhelds spanning multiple hardware generations, starting with a Nintendo Entertainment System game released in 1987. The show has also been adapted for personal computers (starting in 1987 with Apple II, Commodore 64, and DOS versions), Facebook, Twitter, Android, and the Roku Channel Store. A DVD titled Jeopardy!: An Inside Look at America's Favorite Quiz Show, released by Sony Pictures Home Entertainment on November 8, 2005, features five curated episodes of the Trebek version (the 1984 premiere, Jennings' final game, and the three-game finals of the Ultimate Tournament of Champions) and three featurettes discussing the show's history and question selection process. Other products featuring the Jeopardy! brand include a collectible watch, a series of daily desktop calendars, and various slot machine games for casinos and the Internet. Other media: Internet Jeopardy!'s official website, active as early as 1998, receives over 400,000 monthly visitors. 
The website features videos, photographs, and other information related to each week's contestants, as well as mini-sites promoting remote tapings and special tournaments. The Jeopardy! website is regularly updated to align with producers' priorities for the show. In its 2012 "Readers Choice Awards", About.com praised the official Jeopardy! website for featuring "everything [visitors] need to know about the show, as well as some fun interactive elements", and for having a humorous error page. In November 2009, Jeopardy! launched a viewer loyalty program called the "Jeopardy! Premier Club", which allowed home viewers to identify Final Jeopardy! categories from episodes for a chance to earn points, and to play a weekly Jeopardy! game featuring categories and clues from the previous week's episodes. Every three months, contestants were selected randomly to advance to one of three quarterly online tournaments; after these tournaments were played, the three highest-scoring contestants would play one final online tournament for the chance to win $5,000 and a trip to Los Angeles to attend a taping of Jeopardy! The Premier Club was discontinued by July 2011.
**Disney pin trading** Disney pin trading: Pin trading is the hobby of buying, selling, and exchanging collectible pins (most often lapel pins associated with a particular common theme) along with related items, such as lanyards, bags, and hats, used to store and display the pins. Collectible pins used in pin trading are often found in amusement parks and resorts; the Walt Disney World and Disneyland resorts, for example, are venues where Disney pin trading has become a popular activity, and similar pin trading activities are popular at comparable venues such as SeaWorld, Universal Resorts, and Six Flags theme parks. They are also found at events that are recurring or share a common theme, such as the Olympic Games and other sporting events. The pins collected and traded are often of a limited edition and thus more highly valued in pin trading, and are sometimes marked or distributed by companies such as The Coca-Cola Company that sponsor the events and venues associated with the traded pins. Pin trading at particular venues and events is often governed by rules of etiquette particular to the venue or occasion. Disney pin trading: Pin trading is also an annual tradition of the Pasadena Tournament of Roses Game and Parade. Participating teams, marching bands, floats, sponsors, and the parade's Grand Marshal each have their own custom pin. Many clubs, sports teams, events, and churches trade and collect custom pins made specifically for their organization. Quick manufacturing processes allow these pins to be produced at a low cost and in small quantities. Collections of these pins are often worn by the collector on an article of clothing such as a hat, vest, or scarf. Disney pin trading: Pin trading and collecting may have originated with the sport of curling, as some of the oldest pins that could be described as trading pins are from curling clubs dating back to the mid-nineteenth century. Pin trading also has a long-standing history in baseball; it is common for Little League teams to trade pins during the Little League World Series. Pin trading often has specialist categories of pins to trade in: some popular categories include Golly pins from Robertson's Jam, Hard Rock Café, Disney, political pins, sports pins (including bowling, rugby, the Olympic Games, and soccer), military pins, and other categories including flags of countries. Motorcycling is also a popular theme for pin trading. While most trading pins are flat with a glossy finish, there are many trading pin accessories to accommodate any sports team or corporate brand. Common features include blinking lights, hanging charms or "danglers", spinners, bobble heads, and more. Pin manufacturers are known to use a large variety of these features when designing pins. Disney pin trading: Disney pin trading is the buying and trading of collectible pins and related items featuring Disney characters, attractions, icons, events, and other elements. The practice is a hobby officially supported and promoted by Disney. Disney pin trading: Cost Many thousands of unique pins have been created over the years. Pins are available for a limited time; the base price for a pin is US$9.99. Limited edition pins and special pins (e.g. pins that have a dangle, pin-on-pin, flocking, lenticular, light-up, moving element, 3-D element, etc.) cost up to $17.99. Featured Artist and Jumbo Pins cost between $20 and $35, and Super Jumbo pins cost upwards of $125, sometimes considerably more. 
Pins are frequently released at special events, movie premieres, pin trading events, or to commemorate the opening day of a new attraction. Some pins have appreciated well on the secondary market and have reached prices of over US$2000 at venues such as eBay, though Disney fans debate the ethics of people who buy pins from the parks in bulk and then inflate the price to sell later on platforms like eBay. Most Disney pins are enamel or enamel cloisonné with a metal base. The post on the back of each pin is very sharp, so young collectors should handle pins with care. Disney pin trading: Pin trading history Pins have always been present at Disney parks, but it was not until 1999, as part of the Millennium Celebration, that Disney Pin Trading was introduced at the Walt Disney World Resort. This followed an Odyssey of the Mind function at the resort in which pins were being traded, inspiring the pin trading idea. The next year, the craze spread to the Disneyland Resort, which has become the home of most Pin Trading events, although the activity remains most popular at Walt Disney World. Since then, Pin Trading has spread to Aulani, Disneyland Paris, Tokyo Disney Resort, Hong Kong Disneyland Resort, and Disney Cruise Lines, with each location creating its own pins and traditions. Although the trading of pins has been suspended in Tokyo Disney Resort due to pin traders and their pin display mats taking over the park, pins are still offered as prizes at carnival games, and a relatively small number of pins are available. Disney pin trading: Current pin trading In all Disney resorts, a large variety of pins are available for purchase and trade. Most merchandise cast members wear pins on lanyards around their necks, or on a pin display card or hip lanyard (a 4 by 5 in (10 by 13 cm) piece of colored nylon fabric) clipped to their belt. Additional cast members may wear lanyards if pin trading does not distract from their responsibilities; some managers choose to wear lanyards, but ride operators are not permitted to. Some cast members wear a teal-colored lanyard at Disneyland and a green lanyard at Walt Disney World with pins tradable to children and adults of all ages. Disney pin trading: Each lanyard contains around a dozen unique pins, and cast members must trade with guests if they are presented with an acceptable pin. The cast members may not decline a particular trade based on preference or rarity of the pin, but may decline if the pin is not acceptable or pin trading rules are not being observed. Disney pin trading: Cast members may have differently colored lanyards that determine what age group can trade for those pins. For example, a green lanyard worn by a cast member at Walt Disney World in Florida means that only children twelve years of age and younger can trade for pins on that lanyard. Other than this restriction, people of all ages can enjoy this activity. Each guest may only trade two pins with the same cast member in one day. If the cast member gives his or her lanyard to a different cast member, a guest may trade again with the new cast member even though the physical lanyard is the same. Disney pin trading: The specifics of what make a pin acceptable for trading vary from park to park. At the Disneyland and California Adventure parks, cast members are instructed not to accept pins that have a clasp or brooch-type backing (as with jewelry). This limitation is new as of 2008, and notable because it bars cast members from accepting pins that Disney itself designed and made in the 1980s. 
The new rule about the pin backing type is printed on brochures and certain informational boards. Disney pin trading: In Disneyland Paris, the cast members are instructed not to accept pins with any of the following origins: Euro Disney, Kodak, Arthus-Bertrand, Disney Store, Spain (also called sedesma pins), or Germany (also called ProPins). This is a partial list of the Disneyland Paris cast member instructions; the full instructions are in French and worn on the cast members' trading lanyards. Disney pin trading: Pin collectors can customize displaying their pins because of the wide variety of pin products Disney produces. Lanyards are available in a wide variety of colors and designs, as are lanyard medals. There are many ways to store and display a collector's pins: pin bags, notebooks, frames, and cork boards. Collectors can be very creative in displaying their pins and are often easy to spot in the parks with their pin-covered vests, hats, lanyards, and fanny packs. Disney pin trading: Pin etiquette Disney has published a pamphlet on how to trade pins, with tips on pin etiquette. These tips include: To trade a pin with a Disney cast member, the pin must be made of metal and have a representation of a Disney character, park, attraction, icon, or other official affiliation. Additionally, the pin must have a Disney copyright on its back. Disney pin trading: Guests must trade with cast members one pin at a time, with the pin back in place (pins have functional sharp posts). Guests can make up to two pin trades per cast member per day. Refrain from touching another person's pins or lanyard; instead, ask to see a pin so that its owner can bring it into closer view. The pin traded to a cast member cannot be a duplicate of any pin they already have on their lanyard. No money can change hands on Disney property in exchange for a pin. Note that this pin etiquette pamphlet is only a partial list of restrictions; restrictions as indicated in the section "Current pin trading" above also exist. Official Disney pin release locations There are many official locations where a guest can find Disney pins for purchase. Disney pin trading: Disneyland Resort Anaheim (DLR): Disneyland, Disney's California Adventure, Downtown Disney Traders. Walt Disney World (WDW): Magic Kingdom, Epcot, Disney's Hollywood Studios, Disney's Animal Kingdom, Disney Springs. Disneyland Paris (DLP): Disneyland Park, Walt Disney Studios Park, Disney Village, Hotels. Hong Kong Disneyland Resort (HKDLR): Hong Kong Disneyland. Tokyo Disney Resort*. Disney Cruise Lines. Walt Disney Studios, Burbank. Walt Disney Imagineering (WDI). Disney Studio Store Hollywood (next to the El Capitan Theatre). Since 2008, trading pins are no longer sold in stores outside of those located at the theme parks, and are only available by ordering them from the online Disney Store. Disney Shopping has offered limited edition pins on their website since Disney Auctions was closed. Recently, some Disney stores have added open edition pins themed for their location (examples include the Honolulu, Hawaii and San Francisco, California stores). * Note: Pin trading is not available in Tokyo Disney Resort; visitors can only purchase pins in the resort and win them from games. Disney pin trading: Pin terms General Artist Proof – Artist Proof pins (or AP pins) are created during a manufacturing run to verify quality. AP pins have an AP stamped on their back. Generally 20–24 AP pins are made of each pin per run. Some collectors may value AP pins more than others. 
Back Stamp – A pin's back stamp contains information about the pin and can include copyright information and edition size. Chaser – A pin in a series that is rarer or more difficult to acquire; chasers are often color variants of a known pin. Cloisonné – A French word meaning "partitioned." It refers to a style of pin in which the surface decoration is set in designated sections, one color at a time. Cloisonné also refers to a pin type in which crushed minerals and pigments are used to create coloring on a pin. Dangle Pins – Dangle pins have an extension to the base of the pin that dangles (hangs) from one or more small loops or chains. Die Cast – Die Cast pins are cast from a brass-zinc alloy using high-quality hand-engraved dies that create an eye-catching, three-dimensional image. Epoxy Coating – Epoxy coating is a glassy, opaque substance used as a decorative or protective coating. When the coating dries, it forms a smooth, glossy surface. Flocking – A flocked pin has an area that is fuzzy. Disney pin trading: Hard Enamel – Hard Enamel is sometimes called the new cloisonné. It not only retains the characteristics of classic cloisonné, but also provides a much wider selection of colors. Just as with cloisonné, each pin is hand-crafted in a process that begins with a flat piece of brass which is die-struck and then filled with enamel colors. The surface is then hand polished to give it a smooth finish. Disney pin trading: Lenticular – A Lenticular pin has two or more images that change when it is tilted back and forth. Light-Up Pin – A Light-up pin has lights in its design that flash when activated. The light-up element has been used less in recent years due to difficulties in battery replacement and metal corrosion. Disney pin trading: Pre Production/Prototype Pin – Pre Production/Prototype pins (or PP pins) are received by product developers prior to a pin being manufactured. These pins sometimes contain different coloring, fills, or features than the final production pin. The number made depends on what the final product will be, as these pins may differ in size, texture, color, etc. The developers use these "test" pins to determine what the final product will be. Pins from late 2007 onward contain a PP stamp on the back; pins prior to late 2007 may instead carry a Pro Products label signifying that they are pre-production pins. Some pins carry no pre-production identification at all. Disney pin trading: Scrapper Pin – A Scrapper pin is an unauthorized pin. Many of the molds Disney uses to make pins are not destroyed after the creation of its pin order, and bootlegs are created. This practice has flooded the Disney parks and secondary markets like eBay with cheap imitations, mostly of Cast lanyard pins and mystery release pins. Some are sold on eBay or found in the parks before the real pins are even released. Disney pin trading: Slider Pin – A Slider pin has a movable piece that slides back and forth across the base of a pin. Spinner Pin – A Spinner pin has a spinning mechanism that moves a piece of the pin 360 degrees. Soft Enamel – A soft enamel pin has the design stamped into the base metal. These pins are filled with enamel colors and baked for durability. A final clear epoxy dome is applied to protect the finish. Soft enamel pins are typically thinner than cloisonné pins. Exclusive to Disney pin trading The following are specialized terms specific to Disney pin trading: Build-A-Pin – The Build-A-Pin program was introduced in 2002 and retired in Summer 2004. 
Guests could personalize pin bases with character add-ons. After a guest selected a base and add-on, the pin was assembled with a special machine. Continuing the Pin Trading Tradition Pin – Also known as a CTT pin, these annual pins were created for guest recognition by cast members. Guests may be awarded a Continuing The Pin Trading Tradition pin for demonstrating positive Disney Pin Trading etiquette and promoting Disney Pin Trading. Disney pin trading: Fantasy Pin – A pin commissioned or produced by Disney pin collectors that contains similarities to Disney pins, but has not been created or endorsed by Disney. These pins are not allowed to be traded with cast members, although collectors may trade for these pins amongst themselves. From time to time, Disney will produce a pin that is very similar to a fantasy pin. Disney pin trading: FREE-D – Free-D stands for Fastened Rubber Element on a pin for Extra Dimension. Pins that feature Free-D elements sometimes have discoloring issues, and extra precautions should be taken to make sure that the Free-D element is not dirtied. Disney pin trading: GWP – A GWP (Gift with Purchase) pin is a bonus pin given to guests who buy at least $25 of pin merchandise in one transaction. The Disneyland Resort designates the first Sunday of every month GWP Sunday, and has two collections each year of six pins each. The pins are often traded as lanyard fodder, and as a result they are not valuable initially. Walt Disney World has promotions where GWPs are available for $1 each with a $30 purchase; their current promotion involves surplus Mystery Machine Pins. Disney pin trading: HHG – HHG, or the Hitchhiking Ghosts, are the most famous residents of the Haunted Mansion. HM – HM denotes either a Haunted Mansion or Hidden Mickey pin depending on the context. Disney pin trading: Jumbo Pins – Jumbo Pins are larger and often more intricately designed than a regular-size pin; as such, the pins cost between US$20 and US$35. Featured Artist (Jumbo) Pins are currently released at DLR, while WDW released a monthly Jumbo Monorail Collection for 2008. Traditionally, Jumbo Pins were released monthly with an edition size of 750 and available for $25. Recently, Jumbo Pins have been sold in editions of 1000 for US$20 or, at the Disneyland Resort, in editions of 500 for US$35. Disney pin trading: Limited Edition Pins – Limited Edition pins are just that: limited. This means there will be a finite number of pins manufactured and sold. The back stamp (the text on the back of a pin) will list the edition size. Sometimes, a Limited Edition pin will be individually numbered, meaning it will be #XXX of XXXX (depending upon edition size). Disney pin trading: Mickey's Mystery Pin Machine – Debuting at Mouse Gear in Epcot at WDW in late 2007, the machines were modified Gravity Hill arcade machines that dispensed a pin regardless of outcome. The pins were part of small collections consisting of five pins each. Although the pins originally cost $5 and were distributed randomly, remaining pins were sold as GWP pins, and the machines have since been designated as inactive and removed. Disney pin trading: Name Pins – Name Pins are pins that have a name engraved on them, and may not be traded with cast members. Disney pin trading: Piece of History (POH) – The 2005 Piece of History pin set is considered one of the rarest series in Disney Pin Trading. Each pin contains a minuscule piece of a prop from a WDW attraction. 
The first pin in the series, the 20,000 Leagues Under the Sea pin with a sliver of a porthole, has sold for over $275 on eBay. The success of the series led to 2006 and 2008 sets, as well as 2009 and 2010 sets for the Disneyland Resort. Disney pin trading: Pin Traders Delight (PTD) – The Pin Trader Delight is an ice cream sundae that comes with a limited edition pin as a gift with purchase. This sundae is only available at the Ghirardelli Studio Store located in Hollywood, California. Each pin depicts the featured character eating an ice cream sundae and is highly sought after, as the pins typically have an edition size between 300 and 750. Sundaes are limited to 2 per person, provided that the gift pins for each sundae are not the same. Disney pin trading: Pin Trading Night (PTN) – Pin Trading Nights are monthly meetings of Disney Pin Traders at the DLR, WDW, or Disneyland Paris resorts. The Pin Trading Team provides pin games and gives traders the opportunity to trade and socialize. Often, an LE pin is released to commemorate the occasion. Disney pin trading: Pin With Purchase/Purchase With Purchase (PWP) – Similar to GWP, except that the pin is not a "gift" but must be purchased. Typically the pin price is $3.95, and a $30 purchase is required to qualify. At one time cast members occasionally allowed guests to combine multiple receipts (including those from Disney-owned restaurants at the resort) to reach the $30 requirement, but as of 2016 this is no longer permitted; the pin must be purchased at the same time as the qualifying transaction. Disney pin trading: Rack Pins – Rack pins, also called Open Edition (OE) or core pins, are pins introduced and sold until they are discontinued or retired. These pins are re-ordered for up to several consecutive years. The starting retail price for these pins is typically $6.95 (for a flat pin). Depending upon the number of features on the pin (such as pin-on-pin), the retail price will increase to either $8.95 or $10.95. Some OE pins have a high secondary value, such as the Soda Pop Series pins, which each sell in the $20 range. Disney pin trading: Retired Pins – Retired (or discontinued) pins are pins that are no longer in production. Disney periodically "retires" pins so it can introduce new pins. Disney pin trading: RSP – The Random Selection Process is the method by which LE pins are distributed at Pin Events. Each guest submits a form with slots for the Limited Edition merchandise items offered. Each slot is filled in order based on pin availability. If 1000 forms were submitted and 50 forms had an LE 25 framed set in their first slot, the first 25 forms would be given the purchase, with the remaining 25 given the opportunity to purchase their second-slot pin. Typically, there are three rounds of the RSP process, with the smaller editions becoming unavailable to purchase in subsequent rounds. A given style of pin may appear only once on each RSP form, so that each person has a better, fairer chance of getting one pin. Disney pin trading: Scrapper – An unauthorized Disney pin. These pins are literally scrap pins. Sometimes they are seconds from the factory runs, or sometimes they have errors in color, design, or the imprint on the back. Scrappers can also be the result of extra unauthorized production runs. These pins often make it onto the secondary market where they are sold, often in lots, at much lower than market price. 
Scrapper pins can then be traded with cast members, as cast members do not decline a trade based on suspected scrapper status. Recent Hidden Mickey pins, DLR pins especially, have flooded the market months before their initial introductions. Disney pin trading: Surprise or Mystery Pins – These pins usually feature a low Limited Edition size. Typically, the back stamp will include the words "Surprise Pin". The release of this pin happens randomly at various merchandise locations within the Disney Theme Parks and Resorts. Although Surprise pins have continued at the Disneyland Resort (as evidenced by their current Resort Sign set), WDW only rarely releases Surprise pins at PTNs. Disney pin trading: Popular themes Because there are over 100,000 Disney pins available, many themes and characters are collected. Characters include the Mickey Mouse universe (Mickey Mouse, Minnie Mouse, Donald Duck, Daisy Duck, Goofy, Pluto, Chip 'n Dale), Pinocchio, Jiminy Cricket, Winnie the Pooh, Lilo & Stitch characters (Stitch, Lilo Pelekai, Angel/Experiment 624), Roger Rabbit, Jessica Rabbit, Figment (primarily at WDW), and Tinker Bell. Films include Aladdin, Alice in Wonderland, Cinderella, Finding Nemo, Beauty and the Beast, Peter Pan, Snow White and the Seven Dwarfs, Song of the South, Tangled, The Jungle Book, The Little Mermaid, The Princess and the Frog, and Toy Story. Attractions include Space Mountain, Big Thunder Mountain Railroad, Splash Mountain, Soarin', It's a Small World, The Twilight Zone Tower of Terror, Rock 'n' Roller Coaster starring Aerosmith, The Haunted Mansion and the Hitchhiking Ghosts, Pirates of the Caribbean, Star Tours – The Adventures Continue, and the Disneyland or Walt Disney World Monorail Series. Other collected series include Cast lanyard pins (now known as the Hidden Mickey series), Disney Auctions LE 100 pins (limited production run of only 100 pins), Piece of History Pins, and the Soda Pop Series. Cast lanyard and Hidden Mickey pins The WDW Cast Lanyard Collection was introduced in 1999 to encourage guests to trade pins with cast members. The first series of Lanyard pins consisted of just under 100 pins. Previews of the next year's Lanyard pins are shown at each September Event, with the pins officially distributed a few weeks later. "Disney's Cast Lanyard Collection" is on the back stamp of each pin in the first two series. Beginning with the third series, pin designers placed Hidden Mickeys on the pins after guests complained that it was difficult to discern Lanyard pins from the other pins on lanyards. In 2007, with the release of the fifth Lanyard series, the name of the series was officially changed to the Hidden Mickey Collection, and a collection of 94 of the most popular earlier designs was reissued. When asked about the change, the Pin Team responded, "The name change is based on the current identifier found on Hidden Mickey pins, a small Mickey Mouse icon. Those icons of Mickey Mouse, commonly referred to as a Hidden Mickey, are also incorporated into many attractions and locations at Disney Theme Parks and Resorts. We felt this change would complement something fun many guests were already seeking." In 2007, the second WDW Hidden Mickey set was released as a collection of 75 new designs, followed by the third Hidden Mickey set in late 2008. Disney pin trading: Disneyland Resort has had its own Lanyard Pin Series since 2002. DLR Lanyard Pin Collections have fewer styles than the WDW series, with most DLR series consisting of around 50 pins. Additionally, sets of 12 Hotel Lanyard Pins have been released biannually to DLR hotel guests, who receive two pins at check-in to trade. 
For the 2007 and 2008 Hidden Mickey Collections, pins have been released monthly by series. Scrappers of past DLR Hidden Mickey pins have appeared on the secondary market months before their official release dates. In an effort to combat this practice, designs for the 2008 and 2009 series, although previously shown at DLR Pin Trading Nights, have been released monthly. Disney pin trading: Pin events Pins have been available as merchandise at WDW and DLR hard ticket events since the late 1990s. After the Millennium Celebration, annual Pin Events were established to provide event-exclusive pins and opportunities for traders to socialize. The largest and most notable event is the September Event, held at Epcot annually. The 2008 event was Disney's Pin Celebration 2008 – Pin Trading University, held September 5–7, 2008. Occasionally, special events are planned at the Walt Disney World Resort beyond the September Event; Expedition Pins allowed Pin Traders to take over Disney's Animal Kingdom after hours. Disney pin trading: Disneyland Resort offers pin events as well, although not as frequently. Their "Camp Pin-e-ha-ha" event was well received, and the Disney Day Campin' Event on June 21 was part of their annual summer-long Pin Festival. 2008 saw Mickey's Pin Odyssey, and in 2009 Disneyland hosted the Haunted Mansion O'Pin House; both events featured weekly releases of themed pins. Next up was the Disney Summer Pin Festival 2010 – Dateline: Disneyland. Disneyland Resort Paris also stages semiannual events, including the DroPIN event to celebrate the opening of their Tower of Terror. All of the events feature pin games, exclusive pins, and children's activities, and most have pin gifts to remember the event by. Each resort also offers monthly Pin Trading Nights with pin boards, games, and kids' areas. Disney pin trading: Hong Kong Disneyland introduced its first Pin Trading Fun Day in 2007. The park continues to organize the pin event once a year during a weekend in the Easter holiday period. The event features activities such as "Magical Moment", "Surprise Moment", and other games. A special limited edition pin box set is also released during the event. Disney pin trading: Unofficial pin events take place regularly both inside and outside of the parks.
**Stardom in Showcase vol.3** Stardom in Showcase vol.3: Stardom in Showcase vol.3 (ショーケースのスターダムvol.3, Shōkēsu no sutādamu vol. 3) was a professional wrestling event promoted by World Wonder Ring Stardom. The event took place on November 26, 2022, in Kawasaki at the Kawasaki City Todoroki Arena, with a limited attendance due in part to the ongoing COVID-19 pandemic at the time. Background: Stardom in Showcase is a series of pay-per-views which mainly focuses on a diversity of gimmick matches, differing from the usual singles match stipulation. Billed as respiro shows, the main tagline of these events is "Anything can happen". The show featured seven professional wrestling matches that resulted from scripted storylines, in which wrestlers portrayed villains, heroes, or less distinguishable characters in scripted events that built tension and culminated in a wrestling match or series of matches. Background: Event The preshow match, in which 7Upp (Nanae Takahashi & Yuu) defeated wing★gori (Hanan & Saya Iida) in one of the Goddess of Stardom Tag League matches, was broadcast live on Stardom's YouTube channel. The first main card bout saw AZM, Koguma, Starlight Kid, and Ram Kaicho facing off in a soccer-themed four-way match resembling the events from the group stages of the 2022 FIFA World Cup. Each wrestler wore a jersey of a different footballing nation; Koguma won the match and kept all the other wrestlers' jerseys as a prize. The third match saw Lady C getting a win over the Wonder of Stardom Champion Saya Kamitani, Himeka, and Momo Kohgo in a shampoo scramble to win a haircut from the main sponsor of the event, the hairdressing salon "ZEST". The next bout presented a "judo rules" match in which Mayu Iwatani, Hanan & Maika picked up a victory over Utami Hayashishita, Hina & Mirai; the match alluded to the wrestlers' real-life judo backgrounds. The fifth match presented Natsuko Tora & Saki Kashima vs. Hazuki and a returning Sumire Natsu in a No Holds Barred Tag Team Match. The match was ruled a no contest when additional members of Oedo Tai and STARS would not stop brawling in the ring. After the match, Sumire attacked Hazuki and declared that she would continue to invade Stardom as a freelancer. The sixth bout saw Risa Sera, Suzu Suzuki & Hiragi Kurumi defeating the Goddess of Stardom Champions Tam Nakano & Natsupoi, and a returning Unagi Sayaka, in a Six-Woman Hardcore Tag Team Match. After the bout concluded, the Prominence members challenged Starlight Kid, Saki Kashima & Momo Watanabe for the Artist of Stardom Championship at Dream Queendom on December 29, 2022, a challenge the champion team accepted. The main event presented Nanae Takahashi, Yuu, and a masked reaper facing Donna Del Mondo's Giulia, Thekla & Mai Sakurai in an Exploding Coffin Six-Woman Tag Team Match. The Neo Stardom Army picked up the win after putting all the DDM members into the casket alongside Rossy Ogawa, who was once again a victim, as at the previous Showcase events. The masked reaper was announced to have joined the Neo Stardom Army, but their identity was not revealed after the show.
**OpenRC** OpenRC: OpenRC is a dependency-based init system for Unix-like computer operating systems. It was created by Roy Marples, a NetBSD developer who was also active in the Gentoo project. It became more broadly adopted as an init system outside of Gentoo following the decision by some Linux distributions not to adopt systemd. Adoption: OpenRC is the default init system and/or process supervisor for Alpine Linux, Funtoo, Gentoo Linux, Hyperbola GNU/Linux-libre, Maemo Leste, and Nitrux. OpenRC is an available init system and/or process supervisor for Artix Linux, Devuan, and Parabola GNU/Linux-libre. Design: OpenRC is made up of several modular components, the main ones being an init (optional), the core dependency management system, and a daemon supervisor (optional). It is written in C and POSIX-compliant shell, making it usable on BSD and Linux systems. Design: The core part of OpenRC handles dependency management and init script parsing. OpenRC works by scanning the runlevels, building a dependency graph, and then starting the needed service scripts. It exits once the scripts have been started. By default, OpenRC uses a modified version of start-stop-daemon for daemon management. Init scripts share similarities with scripts used in sysvinit, but offer several features to simplify their creation. Scripts are assumed to provide start(), stop(), and status() functions, and the system uses already-declared variables to generate the default implementations. The depend function is used to declare dependencies on other services, a role filled by LSB headers in sysvinit. Configuration and mechanism are separated, with configuration files in the conf.d directory and init files in the init.d directory. A minimal sketch of such a service script is shown after the feature list below. Design: Openrc-init first appeared in version 0.25 as an optional replacement for /sbin/init. Several other inits are supported, including sysvinit and Busybox. Supervise-daemon first appeared in version 0.21, giving OpenRC supervision capabilities. It can be enabled in an init script so that supervise-daemon starts and monitors the daemon. Several other daemon supervisors are supported, including runit and s6. Features: Portable between Linux, FreeBSD, and NetBSD; parallel service startup (off by default); dependency-based boot-up; process segregation through cgroups; per-service resource limits (ulimit); separation of code and configuration (init.d / conf.d); extensible startup scripts; stateful init scripts (is it started already?); complex init scripts to start multiple components (Samba [smbd and nmbd], NFS [nfsd, portmap, etc.]); automatic dependency calculation and service ordering; modular architecture and separation of optional components (cron, syslog); expressive and flexible network handling (including VPN, bridges, etc.); verbose debug mode.
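The following is a minimal sketch of an OpenRC service script in the POSIX-shell format described above. The daemon name "mydaemon", its binary path, and the MYDAEMON_OPTS variable are hypothetical placeholders used only for illustration; the overall structure (command variables, an optional supervise-daemon switch, and a depend() block) follows OpenRC's documented script conventions.

```sh
#!/sbin/openrc-run
# Hypothetical service "mydaemon": the mechanism lives here in /etc/init.d/mydaemon,
# while configuration such as MYDAEMON_OPTS would live in /etc/conf.d/mydaemon.

name="mydaemon"
description="Example daemon managed by OpenRC"

command="/usr/sbin/mydaemon"
command_args="${MYDAEMON_OPTS}"
command_background="yes"
pidfile="/run/mydaemon.pid"

# Uncommenting the next line would hand the process to supervise-daemon
# instead of start-stop-daemon, enabling restart-on-crash supervision.
#supervisor="supervise-daemon"

depend() {
    # Dependency declarations, the role LSB headers play under sysvinit.
    need net
    use logger dns
    after firewall
}

# start(), stop() and status() are deliberately omitted: OpenRC builds
# default implementations from the variables declared above.
```

Under these assumptions, running `rc-update add mydaemon default` would add the service to the default runlevel, and `rc-service mydaemon start` would start it through the dependency engine.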
**People Make Games** People Make Games: People Make Games (PMG) is a British investigative video game journalism YouTube channel. The channel focuses on the developers and people who make video games. People Make Games has reported on topics like video game crunch, outsourcing, and worker exploitation. History: The group was created by Chris Bratt and Anni Sayers in 2018, both of whom were previously journalists at Eurogamer. Sayers creates the graphics. Quintin Smith, a journalist from Rock Paper Shotgun, joined in 2020. The channel is viewer-funded via Patreon; in June 2022, the Patreon made US$17,409 per month. Additional funding comes from Loading Bar, a chain of bars in London and Brighton. Notable reports: Roblox In a video published in August 2021, Smith accused Roblox's parent company, Roblox Corporation, of exploiting the platform's young game developers. Smith argues the revenue split is significantly less favourable toward developers than on other video game marketplaces, and that players are incentivized to keep all in-game currency, which Smith likened to scrip, on Roblox through high minimum withdrawal amounts and low exchange rates. In a follow-up video released in December 2021, titled "Roblox Pressured Us to Delete Our Video. So We Dug Deeper.", he further accused the platform of having child safety issues and criticized its "collectibles stock market" by likening it to gambling. Notable reports: Annapurna Interactive In March 2022, the channel reported on three video game studios publishing under Annapurna Interactive: Mountains, Fullbright, and Funomena. In all three cases, employees reportedly reached out to Annapurna Interactive, addressing concerns regarding abuse and a toxic work environment created by the studio founders. Employees had hoped Annapurna Interactive would mediate, but stated that the publisher mostly sided with the founders in question. According to one former studio employee, representatives of Annapurna Interactive had been quoted responding that "without strong personalities, games don't get made." Bratt described these incidents as part of a greater pattern of auteur culture that can be found across the independent film and video game industry. Following the video, Robin Hunicke, one of the heads of Funomena, issued a Twitter apology, before stating to staff, alongside Funomena co-founder Martin Middleton, that there would be layoffs at Funomena and that the studio would likely close due to the video and its impact on the studio's ability to secure outside funding. Notable reports: VRChat and the Metaverse In their video titled "Making Sense of VRChat, the "Metaverse" People Actually Like", released in May 2022, PMG praised VRChat's ability to provide a social space for transgender and disabled people as well as furries, while criticising the approach of Meta Platforms to virtual reality and its "sexless, Zuckerbergian, brand-friendly presentation". CS:GO skin gambling In November 2022, PMG reported on skin gambling in Counter-Strike: Global Offensive and argued that Valve generally avoided taking action against gambling websites using their game, thus "facilitating unregulated gambling by children".
**David Hyerle** David Hyerle: David Hyerle is an author and creator of a thought-organization methodology called "Thinking Maps" that is popular in public schools in the United States. Thinking Maps: In 1988, David Hyerle wrote Expand Your Thinking and introduced Thinking Maps. These are a set of techniques used in primary and secondary education with the intention of providing a common visual language to information structure. There are eight types of maps: the Circle Map, used for defining in context; the Bubble Map, used for describing with adjectives; the Flow Map, used for sequencing and ordering events; the Brace Map, used for identifying part/whole relationships; the Tree Map, used for classifying or grouping; the Double Bubble Map, used for comparing and contrasting; the Multi-Flow Map, used for analysing causes and effects; and the Bridge Map, used for illustrating analogies. He believed that all K-12 educators teach the same thought processes regardless of grade level and regardless of what terms were used to refer to them. Thinking Maps were intended to standardize the language and visual organization used in education, which the company believed would close the achievement gap by establishing common ground. The idea was that if all children have the same background knowledge, less time would be spent teaching and re-teaching thought processes. Hyerle also thought these techniques would promote metacognition and continuous cognitive development over the course of a student's academic career.
**NRG (file format)** NRG (file format): An NRG file is a proprietary optical disc image file format originally created by Nero AG for the Nero Burning ROM utility. It is used to store disc images. Other than Nero Burning ROM, however, a variety of software titles can use these image files. For example, Alcohol 120% or Daemon Tools can mount NRG files onto virtual drives for reading. NRG (file format): Contrary to popular belief, NRG files are not ISO images with a .nrg extension and a header attached. They can store audio tracks for Audio CDs, which ISO images cannot. Nero's NRG format is one of the few formats besides BIN/CUE, Alcohol 120%'s MDF/MDS and CloneCD's CCD/IMG/SUB disc image formats to support Mixed Mode CDs, which contain audio CD tracks as well as data tracks. File format: The file format specification below is unofficial and as such is lacking some data. There may also be errors. The NRG file format uses a variation of the Interchange File Format (IFF) and stores data in a chain of "chunks". All integer values are stored unsigned in big-endian byte order. The version 1 NRG format stores values as 32-bit integers. Nero Burning ROM v5.5 introduced a new NRG file format, version 2, with support for 64-bit integers. File format: Header The NRG format does not store its data as a header at the beginning of the file. It is instead attached at the end of the file like a footer. Image information is stored as a serialized chain of IFF chunks. To get the offset of the first chunk one must read the NRG footer from the last 8 or 12 bytes of the file. File format: Chunks (CUES) Cue Sheet Available in all versions of the NRG file format. The CUEX chunk is the concatenation of fixed-size blocks, each one representing a cue point. File format: The index0 points are present even when they are identical to the index1 ones. The index0 points in audio tracks are incorrect if Nero has been asked to record all the sub-channel data (in that case the sector size is 2448 bytes). No index other than 0 or 1 has been encountered, although the chunk format allows for such cue points to be recorded; thus the number of cue blocks seems to always be 2*(#track + 1): two indices for each track, an index0 for the lead-in and an index1 for the lead-out. File format: (DAOI) DAO Information Available in all versions of the NRG file format. DAOI chunks store disc-at-once session-specific information in two parts. The first part contains data that is specific to the session only. The second part repeats track-specific information once for each track. Parse the SINF chunks to get the number of tracks for a specific session. (CDTX) CD-text Available in version 2 of the NRG file format. The CDTX chunk is the concatenation of raw CD-text packs of 18 bytes each. (ETNF) Extended Track Information Available in all versions of the NRG file format. ETNF chunks are used to store track information for track-at-once sessions. The data is repeated once for each track. Parse the SINF chunks to get the number of tracks for a specific session. (SINF) Session Information Available in all versions of the NRG file format. Session information chunks should be used to quickly scan the image for session and track count. SINF chunks are always listed in sequential order corresponding to the session order. To get more detailed information about a specific session one must parse the corresponding DAOI or ETNF chunk. (MTYP) Media Type? Available in all versions of the NRG file format. The purpose of this chunk is unknown.
A value of 1 (big endian) was found in images of several CDs (audio or data; CD-ROM or CD-R). (DINF) Disc Information? Found in TAO images in version 2 of the NRG file format. Also found in DAO images of the NRG file format, but only if Nero was asked not to close the disc. The purpose of this chunk is unknown. (TOCT) TOC T? Found in TAO images in version 2 of the NRG file format. The purpose of this chunk is unknown. (RELO) Found in TAO images in version 2 of the NRG file format. The purpose of this chunk is unknown. (END!) End of chain Available in all versions of the NRG file format. The end-of-chain chunk signals that there are no more chunks to be read.
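Based on the unofficial layout sketched above (a footer at the end of the file holding the offset of the first chunk, and chunks identified by four-character IDs with big-endian lengths), a minimal Python sketch for locating and walking the chunk chain might look as follows. The footer signatures "NERO" (version 1, 8-byte footer, 32-bit offset) and "NER5" (version 2, 12-byte footer, 64-bit offset), and the 4-byte ID plus 4-byte length chunk header, are taken from the unofficial specification and should be treated as assumptions rather than verified facts.

```python
# Minimal sketch for locating the chunk chain of an NRG image, following the
# unofficial layout described above. Footer signatures and field widths are
# assumptions from the unofficial specification, not Nero documentation.

import struct

def first_chunk_offset(path):
    """Read the footer from the last 8 or 12 bytes and return the first chunk offset."""
    with open(path, "rb") as f:
        f.seek(-12, 2)                 # last 12 bytes cover both footer variants
        tail = f.read(12)
    if tail[:4] == b"NER5":            # assumed version 2 signature: 64-bit offset
        return struct.unpack(">Q", tail[4:12])[0]
    if tail[4:8] == b"NERO":           # assumed version 1 signature in the last 8 bytes
        return struct.unpack(">I", tail[8:12])[0]
    raise ValueError("not a recognizable NRG footer")

def iter_chunks(path, offset):
    """Walk the chunk chain: 4-byte ID, 4-byte big-endian length, then payload."""
    with open(path, "rb") as f:
        f.seek(offset)
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, length = header[:4], struct.unpack(">I", header[4:8])[0]
            payload = f.read(length)
            yield chunk_id, payload
            if chunk_id == b"END!":    # end-of-chain chunk, per the description above
                break
```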
**NFAT** NFAT: Nuclear factor of activated T-cells (NFAT) is a family of transcription factors shown to be important in immune response. One or more members of the NFAT family is expressed in most cells of the immune system. NFAT is also involved in the development of cardiac, skeletal muscle, and nervous systems. NFAT was first discovered as an activator for the transcription of IL-2 in T cells (as a regulator of T cell immune response) but has since been found to play an important role in regulating many more body systems. NFAT transcription factors are involved in many normal body processes as well as in the development of several diseases, such as inflammatory bowel diseases and several types of cancer. NFAT is also being investigated as a drug target for several different disorders. Family members: The NFAT transcription factor family consists of five members: NFATc1, NFATc2, NFATc3, NFATc4, and NFAT5. NFATc1 through NFATc4 are regulated by calcium signalling, and are known as the classical members of the NFAT family. NFAT5 is a more recently discovered member of the NFAT family that has special characteristics that differentiate it from other NFAT members. Calcium signalling is critical to activation of NFATc1-4 because calmodulin (CaM), a well-known calcium sensor protein, activates the serine/threonine phosphatase calcineurin (CN). Activated CN binds to its binding site located in the N-terminal regulatory domain of NFATc1-4 and rapidly dephosphorylates the serine-rich region (SRR) and SP-repeats, which are also present in the N-terminus of the NFAT proteins. This dephosphorylation results in a conformational change that exposes a nuclear localization signal, which promotes nuclear translocation. On the other hand, NFAT5 lacks a crucial part of the N-terminal regulatory domain which in the aforementioned group harbours the essential CN binding site. This makes NFAT5 activation completely independent of calcium signalling. It is, however, controlled by MAPK during osmotic stress. When a cell encounters a hypertonic environment, NFAT5 is transported into the nucleus, where it activates transcription of several osmoprotective genes. Therefore, it is expressed in the kidney medulla, skin and eyes, but it can also be found in the thymus and activated lymphocytes. Signalling and binding: Canonical signalling Although phosphorylation and dephosphorylation are key to controlling NFAT function by masking and unmasking nuclear localization signals, as shown by the high number of phosphorylation sites in the NFAT regulatory domain, this dephosphorylation cannot occur without an influx of calcium ions. Classical signalling relies on activation of PLC through different receptors like the TCR (PLCG1) or BCR (PLCG2). This activation leads to release of inositol-1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 is especially important for calcium influx because it binds to an IP3 receptor located in the ER membrane. This causes a short, sharp increase in calcium concentration in the cytosol as the ions leave the ER through the IP3 receptor. Signalling and binding: However, this is not enough to activate NFAT signalling. The release of calcium ions from the ER is sensed by STIM proteins, which are ER transmembrane proteins. Under normal circumstances the STIM proteins bind calcium ions, but if most of the ions are released from the ER, they are released from the STIM proteins as well. This causes the STIM proteins to oligomerize and subsequently interact with ORAI1, an indispensable protein of the CRAC complex.
This complex serves as a channel that selectively allows the influx of calcium ions from outside the cell. This phenomenon is called store-operated calcium entry (SOCE). Only this longer inflow of calcium ions is capable of fully activating NFAT through the CaM/CN-mediated dephosphorylation described above. Signalling and binding: Alternative signalling Although SOCE is the main activation mechanism for most proteins of the NFAT family, they can also be activated by an alternative pathway. This pathway has so far been demonstrated only for NFATc2. In this alternative activation SOCE is not required, as shown by the fact that cyclosporine (CsA), which inhibits CN-mediated dephosphorylation, does not abrogate this pathway. The reason is that the pathway is activated through IL7R, which leads to phosphorylation of a single tyrosine in NFAT, mediated by the Jnk3 kinase, a member of the MAPK kinase subfamily. Signalling and binding: DNA binding Nuclear import of NFAT and its subsequent export depend on the calcium level inside the cell. If the calcium level drops, the exporting kinases in the nucleus, such as PKA, CK1 or GSK-3β, rephosphorylate NFAT. This causes NFAT to revert to its inactive state and be exported back to the cytosol, where maintenance kinases finish the rephosphorylation in order to keep it in the inactivated state. NFAT proteins have weak DNA-binding capacity. Therefore, to effectively bind DNA, NFAT proteins must cooperate with other nuclear-resident transcription factors, generically referred to as NFATn. This important feature of NFAT transcription factors enables integration and coincidence detection of calcium signals with other signalling pathways such as ras-MAPK or PKC. In addition, this signalling integration is involved in tissue-specific gene expression during development. A screen of ncRNA sequences identified in EST sequencing projects discovered an 'ncRNA repressor of the nuclear factor of activated T cells' called NRON. NFAT-dependent promoters and enhancers tend to have 3-5 NFAT binding sites, which indicates that higher-order synergistic interactions between relevant proteins in a cooperative complex are needed for effective transcription. The best known class of these complexes is composed of NFAT and AP-1 or other bZIP proteins. This NFAT:AP-1 complex binds to the DNA binding sites of conventional Rel-family proteins and is involved in gene transcription in immune cells. NFAT function in different cell types: T cells TCR stimulation, as stated above, causes the dephosphorylation of NFAT, which in almost every kind of T cell then forms a complex with AP-1 (except in Tregs). This complex, depending on the cytokine context, then activates the key transcription factors of the distinct T cell subpopulations: T-bet for Th1, GATA3 for Th2, RORγ for Th17 and BATF for Tfh. T cells express almost all NFAT family members (except NFAT3). However, not every NFAT has the same significance for each subpopulation of T cells. Upon TCR stimulation and after subsequent activation of T-bet under Th1 cytokine conditions, a complex consisting of the transcription factor T-bet and NFAT stimulates production of IFN-γ, the most prominent cytokine of Th1 cells. TCR activation also triggers, through the NFAT:AP-1 complex, production of NFAT2/αA, a short isoform of NFATc2 that lacks the C-terminal domain and acts as an autoregulator because it further enhances the activation of all effector T cells.
For the Th1 response NFATc1 seems to be the most indispensable, since knockout of NFATc1 in mice leads to an extremely skewed Th2 response. Under Th2-stimulating conditions GATA3 is activated. It subsequently also interacts with NFAT and triggers production of Th2-typical cytokines like IL-4, IL-5 and IL-13. NFATc2 seems to be the most important for the Th2-mediated response, since its impairment lowers the amount of the aforementioned cytokines and also decreases the amount of IgG1 and IgE. NFATc1 also plays an essential role, as it forms a complex with GATA3 just like NFATc2. It further mediates the production of Th2 cytokines indirectly through regulation of CRTh2. In line with the Th1 and Th2 responses, stimulation of the TCR under Th17 conditions elicits expression of RORγ. It subsequently binds to NFAT and stimulates the production of Th17-specific cytokines like IL-17A, IL-17F, IL-21 and IL-22. In the Th17 response NFATc2 probably plays a key role, since mice with NFATc2 knockout show a reduction in RORγ as well as in IL-17A, IL-17F, and IL-21. NFAT function in different cell types: Treg cells are the only exception to the NFAT:AP-1 complex formation, since after their TCR stimulation NFAT binds to SMAD3 instead of AP-1. This complex then activates FOXP3 transcription, a master gene regulator in Tregs. The NFAT:FOXP3 complex then regulates Treg-specific cytokine production. There are two main populations of Treg cells: natural Treg (nTreg) cells, which develop in the thymus, and induced Treg (iTreg) cells, which develop from naive CD4+ T cells in the periphery after their stimulation. iTreg cells seem to be highly dependent on NFATc1, 2 and 4, since deletion of any of these genes or their combination causes an almost complete loss of iTreg cells but not nTreg cells. In Tfh cells, just like in Th1, Th2 and Th17 cells, the NFAT:AP-1 complex is formed. This complex afterwards activates transcription of BATF, which then also binds to NFAT and, together with other proteins like IRF4, commences production of molecules indispensable to Tfh cells: CXCR5, ICOS, Bcl6 and IL-21. Tfh cells express high levels of NFATc1 and especially NFATc2 and NFAT2/αA, which suggests an important role of NFATc2. Deletion of NFATc2 in T cells leads to an increased number of Tfh cells and a higher germinal center response, probably due to dysregulation of CXCR5 and a decreased number of T follicular regulatory (Tfr) cells. Since Tfh cells are tightly connected with the humoral response, any defect in them will also affect B cells. Therefore, it is not surprising that lymphocyte-specific ablation of NFAT2 causes a defect in BCR-mediated proliferation, but whether this phenotype is caused by dysregulation of Tfh cells alone, of B cells alone, or a combination of both is uncertain. NFAT function in different cell types: B cells Although NFAT was discovered in T cells, it is becoming clear that it is also expressed in other cell types. In B cells, mainly NFATc1 and, after activation, also NFATc2 and NFAT2/αA are expressed, and they fulfil important functions like antigen presentation, proliferation, and apoptosis. Although impairment of the NFAT pathway has serious consequences in T cells, in B cells the consequences seem to be rather mild. If, for instance, a B cell-specific knockout of both STIM proteins is carried out, SOCE is completely abolished and therefore NFAT signalling as well. Although the resulting humoral response in these knockout B cells is very similar to that of B cells without the knockout, the complete abolishment of NFAT also brought about a decrease in IL-10.
However, some studies suggest a more important role of NFAT in B cells, so this topic is still not well understood and warrants further research. NFAT function in different cell types: T cell anergy and exhaustion T cell anergy is induced by suboptimal stimulation conditions, when, for instance, the TCR is stimulated without appropriate costimulatory signals. Because of the missing co-stimulation, AP-1 is absent and an NFAT:NFAT complex is formed. This complex activates anergy-associated genes like E3 ubiquitin ligases (Cbl-b, ITCH, and GRAIL), diacylglycerol kinase α (DGKα), and caspase 3, which promote the induction of T-cell anergy. T cell exhaustion is similar to T cell anergy in that it is also caused by impaired formation of the NFAT:AP-1 complex, but the exhausted state is induced by chronic stimulation rather than suboptimal stimulation. In both anergy and exhaustion NFATc1 seems to play a key role. Conversely, NFATc2 together with NFAT2/αA are needed to revert the state of anergy or exhaustion. NFAT signalling in neural development: The Ca2+-dependent calcineurin/NFAT signalling pathway has been found to be important in neuronal growth and axon guidance during vertebrate development. Each different class of NFAT contributes to different steps in neural development. NFAT works with neurotrophic signalling to regulate axon outgrowth in several neuronal populations. Additionally, NFAT transcription complexes integrate neuronal growth with guidance cues such as netrin to facilitate the formation of new synapses, helping to build neural circuits in the brain. NFAT is a known important player in both the developing and adult nervous system. Clinical significance: Inflammation NFAT plays a role in regulating the inflammation seen in inflammatory bowel disease (IBD). A susceptibility locus for IBD was found in the gene that encodes LRRK2 (leucine-rich repeat kinase 2). The kinase LRRK2 is an inhibitor of NFATc2, so in mice lacking LRRK2 increased activation of NFATc2 was found in macrophages. This led to an increase in the NFAT-dependent cytokines that spark severe colitis attacks. Clinical significance: NFAT also plays a role in rheumatoid arthritis (RA), an autoimmune disease that has a strong pro-inflammatory component. TNF-α, a pro-inflammatory cytokine, activates the calcineurin-NFAT pathway in macrophages. Additionally, inhibiting the mTOR pathway decreases joint inflammation and erosion, so the known interaction between the mTOR pathway and NFAT presents a key to the inflammatory process of RA. As a drug target: Due to its essential role in the production of the T cell proliferative cytokine IL-2, NFAT signalling is an important pharmacological target for the induction of immunosuppression. CN inhibitors, which prevent the activation of NFAT, including CsA and tacrolimus (FK506), are used in the treatment of rheumatoid arthritis, multiple sclerosis, Crohn's disease, and ulcerative colitis, and to prevent the rejection of organ transplants. However, there is toxicity associated with these drugs due to their ability to inhibit CN in non-immune cells, which limits their use in other situations that may call for immunosuppressing drug therapy, including allergy and inflammation. There are other compounds that target NFAT directly, as opposed to targeting the phosphatase activity of calcineurin, that may have broad immunosuppressive effects but lack the toxicity of CsA and FK506.
Because individual NFAT proteins exist in specific cell types or affect specific genes, it may be possible to inhibit individual NFAT protein functions for an even more selective immune effect.
**Spanish nursery rhymes** Spanish nursery rhymes: Nursery rhymes (Spanish: rimas infantiles) in the Spanish language have been passed down by oral tradition. They may be classified according to their amusing, educative or soothing qualities. History and context: Nursery rhymes are short songs written for small children. The lyrics are usually simple and repetitive for easy comprehension and memorization. Although they are meant to be lighthearted and fun, they also function as an introduction to music and to certain basic concepts learned through repetition and song. Traditionally, nursery rhymes are taught through oral tradition, where knowledge, stories, and songs are learned through generational repetition as part of familial or popular culture. In more recent decades, specialized artists have worked within the infant market. Nursery rhymes are activities through which children can learn and play with different melodies. They also introduce children to popular themes that help with early socialization. Many Latin American nursery rhymes are based in the context of the farm or rural life. After the Spanish conquest of the continent, much of the oral tradition derived from religious and superstitious traditions with the goal of introducing children to formative social concepts. Classification by function: One possible method of nursery rhyme classification is that of function. Although it is possible that one song may fall under more than one category, each has a different goal or purpose: play songs (de juego); lullabies (nanas or canciones de cuna); tongue-twisters (de habilidad); and teaching songs (didácticas). Examples: Los Pollitos Dicen ("Little Chickens") is a classic Spanish nursery rhyme de juego, and also falls under the nana or canción de cuna category. Many Spanish-speaking countries, such as Ecuador and Spain, lay claim to this song, but its author is the Chilean musician and poet Ismael Parraguez. Its popularity is similar to that of "Twinkle Twinkle Little Star" in English.
**Wale mark** Wale mark: A wale mark, red wale sign or wale sign is an endoscopic sign suggestive of recent hemorrhage, or propensity to bleed, seen in individuals with esophageal varices at the time of endoscopy. The mark has the appearance of a longitudinal red streak located on an esophageal varix. It derives its name from the visual similarity to patterns seen in the textile corduroy. Similar lesions that are suggestive of recent or impending bleeding from esophageal varices include the cherry-red spot, which is circular and red in colour. Bleeding risk of esophageal varices can be ascertained at the time of endoscopy by evaluating for the presence of these markers.
**60S ribosomal protein L15** 60S ribosomal protein L15: 60S ribosomal protein L15 is a protein that in humans is encoded by the RPL15 gene. Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L15E family of ribosomal proteins. It is located in the cytoplasm. This gene shares sequence similarity with the yeast ribosomal protein YL10 gene. Although this gene has been referred to as RPL10, its official symbol is RPL15. This gene has been shown to be overexpressed in some esophageal tumors compared to normal matched tissues. Transcript variants utilizing alternative polyA signals exist. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
**Slater determinant** Slater determinant: In quantum mechanics, a Slater determinant is an expression that describes the wave function of a multi-fermionic system. It satisfies anti-symmetry requirements, and consequently the Pauli principle, by changing sign upon exchange of two electrons (or other fermions). Only a small subset of all possible fermionic wave functions can be written as a single Slater determinant, but those form an important and useful subset because of their simplicity. Slater determinant: The Slater determinant arises from the consideration of a wave function for a collection of electrons, each with a wave function known as the spin-orbital $\chi(\mathbf{x})$, where $\mathbf{x}$ denotes the position and spin of a single electron. A Slater determinant containing two electrons with the same spin orbital would correspond to a wave function that is zero everywhere. The Slater determinant is named for John C. Slater, who introduced the determinant in 1929 as a means of ensuring the antisymmetry of a many-electron wave function, although the wave function in the determinant form first appeared independently in Heisenberg's and Dirac's articles three years earlier. Definition: Two-particle case The simplest way to approximate the wave function of a many-particle system is to take the product of properly chosen orthogonal wave functions of the individual particles. For the two-particle case with coordinates $\mathbf{x}_1$ and $\mathbf{x}_2$, we have $\Psi(\mathbf{x}_1,\mathbf{x}_2)=\chi_1(\mathbf{x}_1)\chi_2(\mathbf{x}_2)$. Definition: This expression is used in the Hartree method as an ansatz for the many-particle wave function and is known as a Hartree product. However, it is not satisfactory for fermions because the wave function above is not antisymmetric under exchange of any two of the fermions, as it must be according to the Pauli exclusion principle. An antisymmetric wave function can be mathematically described as follows: $\Psi(\mathbf{x}_1,\mathbf{x}_2)=-\Psi(\mathbf{x}_2,\mathbf{x}_1)$. Definition: This does not hold for the Hartree product, which therefore does not satisfy the Pauli principle. This problem can be overcome by taking a linear combination of both Hartree products: $\Psi(\mathbf{x}_1,\mathbf{x}_2)=\frac{1}{\sqrt{2}}\bigl\{\chi_1(\mathbf{x}_1)\chi_2(\mathbf{x}_2)-\chi_1(\mathbf{x}_2)\chi_2(\mathbf{x}_1)\bigr\}=\frac{1}{\sqrt{2}}\begin{vmatrix}\chi_1(\mathbf{x}_1)&\chi_2(\mathbf{x}_1)\\\chi_1(\mathbf{x}_2)&\chi_2(\mathbf{x}_2)\end{vmatrix}$, where the coefficient $1/\sqrt{2}$ is the normalization factor. This wave function is now antisymmetric and no longer distinguishes between fermions (that is, one cannot indicate an ordinal number to a specific particle, and the indices given are interchangeable). Moreover, it also goes to zero if any two spin orbitals of two fermions are the same. This is equivalent to satisfying the Pauli exclusion principle. Definition: Multi-particle case The expression can be generalised to any number of fermions by writing it as a determinant. For an N-electron system, the Slater determinant is defined as $\Psi(\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_N)=\frac{1}{\sqrt{N!}}\begin{vmatrix}\chi_1(\mathbf{x}_1)&\chi_2(\mathbf{x}_1)&\cdots&\chi_N(\mathbf{x}_1)\\\chi_1(\mathbf{x}_2)&\chi_2(\mathbf{x}_2)&\cdots&\chi_N(\mathbf{x}_2)\\\vdots&\vdots&\ddots&\vdots\\\chi_1(\mathbf{x}_N)&\chi_2(\mathbf{x}_N)&\cdots&\chi_N(\mathbf{x}_N)\end{vmatrix}\equiv|\chi_1,\chi_2,\cdots,\chi_N\rangle\equiv|1,2,\ldots,N\rangle$, where the last two expressions use a shorthand for Slater determinants: the normalization constant is implied by noting the number N, and only the one-particle wavefunctions (first shorthand) or the indices for the fermion coordinates (second shorthand) are written down. All skipped labels are implied to behave in ascending sequence. The linear combination of Hartree products for the two-particle case is identical with the Slater determinant for N = 2. The use of Slater determinants ensures an antisymmetrized function at the outset. In the same way, the use of Slater determinants ensures conformity to the Pauli principle.
Indeed, the Slater determinant vanishes if the set $\{\chi_i\}$ is linearly dependent. In particular, this is the case when two (or more) spin orbitals are the same. In chemistry one expresses this fact by stating that no two electrons with the same spin can occupy the same spatial orbital. Example: Matrix elements in a many-electron problem: Many properties of the Slater determinant come to life with an example in a non-relativistic many-electron problem. The one-particle terms of the Hamiltonian contribute in the same manner as for the simple Hartree product, namely the energy is summed and the states are independent, whereas the multi-particle terms of the Hamiltonian, i.e. the exchange terms, introduce a lowering of the energy of the eigenstates. Starting from the many-electron Hamiltonian, where the $\mathbf{r}_i$ are the electron coordinates, the $\mathbf{R}_I$ are the nuclear coordinates, and the electron-nucleus attraction is $v_\text{nucl}(\mathbf{r})=-\sum_I \frac{Z_I e^2}{|\mathbf{r}-\mathbf{R}_I|}$, for simplicity we freeze the nuclei at their equilibrium positions and remain with the simplified electronic Hamiltonian $\hat{H}_e=\sum_i^N \hat{h}(\mathbf{r}_i)+\frac{1}{2}\sum_{i\neq j}^N \frac{e^2}{r_{ij}}$, where $\hat{h}(\mathbf{r}_i)$ collects the one-electron kinetic energy together with the potential $v_\text{nucl}(\mathbf{r}_i)$, and where we distinguish in the Hamiltonian between the first set of terms, $\hat{G}_1$ (the one-particle terms), and the last term, $\hat{G}_2$, which is the two-particle or exchange term: $\hat{G}_1=\sum_i^N \hat{h}(\mathbf{r}_i)$ and $\hat{G}_2=\frac{1}{2}\sum_{i\neq j}^N \frac{e^2}{r_{ij}}$. The two parts behave differently when they act on a Slater determinant wave function. We start by computing the expectation value of the one-particle part, $\langle\Psi_0|G_1|\Psi_0\rangle=\frac{1}{N!}\langle\det\{\psi_1\ldots\psi_N\}|G_1|\det\{\psi_1\ldots\psi_N\}\rangle$. In the above expression we can select just the identical permutation in the determinant in the left part, since all the other $N!-1$ permutations would give the same result as the selected one. We can thus cancel the $N!$ in the denominator: $\langle\Psi_0|G_1|\Psi_0\rangle=\langle\psi_1\ldots\psi_N|G_1|\det\{\psi_1\ldots\psi_N\}\rangle$. Because of the orthonormality of the spin-orbitals it is also evident that only the identical permutation survives in the determinant on the right part of the above matrix element: $\langle\Psi_0|G_1|\Psi_0\rangle=\langle\psi_1\ldots\psi_N|G_1|\psi_1\ldots\psi_N\rangle$. This result shows that the anti-symmetrization of the product does not have any effect for the one-particle terms, which behave as they would in the case of the simple Hartree product. Finally we remain with the trace over the one-particle Hamiltonians, $\langle\Psi_0|G_1|\Psi_0\rangle=\sum_i\langle\psi_i|h|\psi_i\rangle$, which tells us that, as far as the one-particle terms are concerned, the wave functions of the electrons are independent of each other and the energy is given by the sum of the single-particle energies. For the exchange part, instead, $\langle\Psi_0|G_2|\Psi_0\rangle=\frac{1}{N!}\langle\det\{\psi_1\ldots\psi_N\}|G_2|\det\{\psi_1\ldots\psi_N\}\rangle=\langle\psi_1\ldots\psi_N|G_2|\det\{\psi_1\ldots\psi_N\}\rangle$. Looking at the action of one two-particle term, say $e^2/r_{12}$, it selects only the wave functions with the corresponding coordinates kept or exchanged, giving for each pair a direct contribution $\langle\psi_i\psi_j|\tfrac{e^2}{r_{12}}|\psi_i\psi_j\rangle$ and an exchanged contribution $\langle\psi_i\psi_j|\tfrac{e^2}{r_{12}}|\psi_j\psi_i\rangle$. Finally, $\langle\Psi_0|G_2|\Psi_0\rangle=\frac{1}{2}\sum_{ij}\bigl(\langle\psi_i\psi_j|\tfrac{e^2}{r_{12}}|\psi_i\psi_j\rangle-\langle\psi_i\psi_j|\tfrac{e^2}{r_{12}}|\psi_j\psi_i\rangle\bigr)$; the first contribution is called the "Coulomb" term and the second the "exchange" term, and the sum can be written using $\sum_{ij}$ or $\sum_{i\neq j}$, since the Coulomb and exchange contributions exactly cancel each other for $i=j$. It is important to notice explicitly that the electron-electron repulsive energy $\langle\Psi_0|G_2|\Psi_0\rangle$ on the antisymmetrized product of spin-orbitals is always lower than the electron-electron repulsive energy on the simple Hartree product of the same spin-orbitals. The difference is just represented by the second term on the right-hand side, without the self-interaction terms $i=j$.
Since exchange bielectronic integrals are positive quantities, different from zero only for spin-orbitals with parallel spins, we link the decrease in energy with the physical fact that electrons with parallel spin are kept apart in real space in Slater determinant states. As an approximation: Most fermionic wavefunctions cannot be represented as a Slater determinant. The best Slater approximation to a given fermionic wave function can be defined to be the one that maximizes the overlap between the Slater determinant and the target wave function. The maximal overlap is a geometric measure of entanglement between the fermions. A single Slater determinant is used as an approximation to the electronic wavefunction in Hartree–Fock theory. In more accurate theories (such as configuration interaction and MCSCF), a linear combination of Slater determinants is needed. Discussion: The word "detor" was proposed by S. F. Boys to refer to a Slater determinant of orthonormal orbitals, but this term is rarely used. Unlike fermions that are subject to the Pauli exclusion principle, two or more bosons can occupy the same single-particle quantum state. Wavefunctions describing systems of identical bosons are symmetric under the exchange of particles and can be expanded in terms of permanents.
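The antisymmetry and Pauli-principle properties discussed above are easy to check numerically: the sketch below evaluates $\Psi(\mathbf{x}_1,\ldots,\mathbf{x}_N)=\det[\chi_j(\mathbf{x}_i)]/\sqrt{N!}$ for a few toy spin-orbitals. The orbitals used here are arbitrary illustrative functions chosen only to make the example runnable, not physically meaningful ones.

```python
# Numerical sketch of the Slater determinant wave function
#   Psi(x_1,...,x_N) = (1/sqrt(N!)) * det[ chi_j(x_i) ],
# illustrating the antisymmetry under exchange of two particle coordinates.
# The spin-orbitals chi_j below are arbitrary toy functions (hypothetical).

import math
import numpy as np

def slater_amplitude(orbitals, coords):
    """orbitals: list of callables chi_j; coords: list of particle coordinates x_i."""
    n = len(orbitals)
    matrix = np.array([[chi(x) for chi in orbitals] for x in coords])
    return np.linalg.det(matrix) / math.sqrt(math.factorial(n))

# Toy spin-orbitals, for illustration only
orbitals = [lambda x: math.exp(-x),
            lambda x: x * math.exp(-x),
            lambda x: x**2 * math.exp(-x)]

coords = [0.3, 1.1, 2.4]
swapped = [1.1, 0.3, 2.4]            # exchange particles 1 and 2

print(slater_amplitude(orbitals, coords))    # some value  Psi
print(slater_amplitude(orbitals, swapped))   # -Psi: the sign flips under exchange

# If two particles share the same coordinate (two identical rows), the
# determinant, and hence the wave function, vanishes: the Pauli principle.
print(slater_amplitude(orbitals, [0.3, 0.3, 2.4]))   # ~0
```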
**Boulder wall** Boulder wall: A boulder wall, also spelled boulder-wall or bowlder-wall, is a kind of wall built of round flints and pebbles laid in a strong mortar. It is used where the sea has cast up a beach, or where flints are plentiful.
**Distributed Proofreaders** Distributed Proofreaders: Distributed Proofreaders (commonly abbreviated as DP or PGDP) is a web-based project that supports the development of e-texts for Project Gutenberg by allowing many people to work together in proofreading drafts of e-texts for errors. As of March 2021, the site had digitized 41,000 titles. History: Distributed Proofreaders was founded by Charles Franks in 2000 as an independent site to assist Project Gutenberg. Distributed Proofreaders became an official Project Gutenberg site in 2002. History: On 8 November 2002, Distributed Proofreaders was slashdotted, and more than 4,000 new members joined in one day, causing an influx of new proofreaders and software developers, which helped to increase the quantity and quality of e-text production. Distributed Proofreaders posted its 5,000th text to Project Gutenberg in October 2004; the 10,000th DP-produced e-text was posted in March 2007, the 15,000th in May 2009, the 20,000th in April 2011, and the 30,000th in July 2015. DP-contributed e-texts comprised more than half of the works in Project Gutenberg as of July 2015. History: On 31 July 2006, the Distributed Proofreaders Foundation was formed to provide Distributed Proofreaders with its own legal entity and not-for-profit status. IRS approval of section 501(c)(3) status was granted retroactive to 7 April 2006. Proofreading process: Public domain works, typically books with expired copyright, are scanned by volunteers or sourced from digitization projects, and the images are run through optical character recognition (OCR) software. Since OCR software is far from perfect, many errors often appear in the resulting text. To correct them, pages are made available to volunteers via the Internet; the original page image and the recognized text appear side by side. This thereby distributes the time-consuming error-correction process, akin to distributed computing. Proofreading process: Each page is proofread and formatted several times, and then a post-processor combines the pages and prepares the text for uploading to Project Gutenberg. Besides custom software created to support the project, DP also runs a forum and a wiki for project coordinators and participants. Related projects: DP Europe In January 2004, Distributed Proofreaders Europe started, hosted by Project Rastko, Serbia. This site had the ability to process text in Unicode UTF-8 encoding. The books proofread centered on European culture, with a considerable proportion of non-English texts including Hebrew, Arabic, Urdu, and many others. As of October 2013, DP Europe had produced 787 e-texts, the last of these in November 2011. Related projects: The original DP is sometimes referred to as "DP International" by members of DP Europe. However, DP servers are located in the United States, and therefore works must be cleared by Project Gutenberg as being in the public domain according to U.S. copyright law before they can be proofread and eventually published at DP. Related projects: DP Canada In December 2007, Distributed Proofreaders Canada launched to support the production of e-books for Project Gutenberg Canada and take advantage of shorter Canadian copyright terms. Although it was established by members of the original Distributed Proofreaders site, it is a separate entity.
All its projects are posted to Faded Page, its book archive website. In addition, it supplies books to Project Gutenberg Canada (which launched on Canada Day 2007) and (where copyright laws are compatible) to the original Project Gutenberg. Related projects: In addition to preserving Canadiana, DP Canada is notable because it is the first major effort to take advantage of Canada's copyright laws, which may allow more works to be preserved. Unlike copyright law in some other countries, Canada has a "life plus 50" copyright term. This means that works by authors who died more than fifty years ago may be preserved in Canada, whereas in other parts of the world those works may not be distributed because they are still under copyright. Related projects: Notable authors whose works may be preserved in Canada but not in other parts of the world include Clark Ashton Smith, Dashiell Hammett, Ernest Hemingway, Carl Jung, A. A. Milne, Dorothy Sayers, Nevil Shute, Walter de la Mare, Sheila Kaye-Smith and Amy Carmichael. Milestones: 10,000th E-book On 9 March 2007, Distributed Proofreaders announced the completion of more than 10,000 titles. In celebration, a collection of fifteen titles was published: Slave Narratives, Oklahoma (A Folk History of Slavery in the United States From Interviews with Former Slaves) by the U.S. Work Projects Administration (English) Eighth annual report of the Bureau of ethnology. (1891 N 08 / 1886–1887) edited by John Wesley Powell (English) R. Caldecott's First Collection of Pictures and Songs by Randolph Caldecott [Illustrator] (English) Como atravessei Àfrica (Volume II) by Serpa Pinto (Portuguese) Triplanetary by E. E. "Doc" Smith (English) Heidi by Johanna Spyri (English) Heimatlos by Johanna Spyri (German) October 27, 1920 issue of Punch (English) Sylva, or, A Discourse of Forest-Trees by John Evelyn (English) Encyclopedia of Needlework by Therese de Dillmont (English) The annals of the Cakchiquels by Francisco Ernantez Arana (fl. 1582), translated and edited by Daniel G. Brinton (1837–1899) (English with Central American Indian) The Shanty Book, Part I, Sailor Shanties (1921) by Richard Runciman Terry (1864–1938) (English) Le marchand de Venise by William Shakespeare, translated by François Guizot (French) Agriculture for beginners, Rev. ed. by Charles William Burkett (English) Species Plantarum (Part 1) by Carl Linnaeus (Carl von Linné) (Latin) 20,000th E-book On April 10, 2011, the 20,000th book milestone was celebrated as a group release of bilingual books: The Renaissance in Italy–Italian Literature, Vol 1, John Addington Symonds (English with Italian) Märchen und Erzählungen für Anfänger; erster Teil, H. A. Guerber (German with English) Gedichte und Sprüche, Walther von der Vogelweide (Middle High German (ca. 
1050-1500) with German) Studien und Plaudereien im Vaterland, Sigmon Martin Stern (German with English) Caos del Triperuno, Teofilo Folengo (Italian with Latin) Niederländische Volkslieder, Hoffmann von Fallersleben (German with Dutch) A "San Francisco", Salvatore Di Giacomo (Italian with Neapolitan) O' voto, Salvatore Di Giacomo (Italian with Neapolitan) De Latino sine Flexione & Principio de Permanentia, Giuseppe Peano (1858-1932) (Latin with Latino sine Flexione) Cappiddazzu paga tuttu—Nino Martoglio, Luigi Pirandello (Italian with Sicilian) The International Auxiliary Language Esperanto, George Cox (English with Esperanto) Lusitania: canti popolari portoghesi, Ettore Toci (Italian with French) 30,000th E-book On 7 July 2015, the 30,000th book milestone was celebrated with a group of thirty texts. One was numbered 30,000: Graded literature readers - Fourth book, editors: Harry Pratt Judson and Ida C. Bender, 1900
**Deaf cinema** Deaf cinema: Deaf cinema is a movement that includes all works produced and directed by deaf people or members of the deaf community and led by deaf actors. These works tend to nurture and develop the culture's self-image and to correctly reflect the core of Deaf culture and language. Deaf Cinema vs Cinema of the deaf: "Deaf cinema" is a movement that dissociates itself from the "Cinema of the deaf". "The two are worlds apart": while the Cinema of the deaf is "a mainstream cinema in need of character types as grist for its mercantile mill", Deaf Cinema is "an outsider cinema serving to nurture and develop a culture's self-image". Deaf Cinema vs Cinema of the deaf: Cinema of the deaf The "Cinema of the deaf" includes any film in which deafness is the main subject but which is written or directed by anyone, without regard to their relationship to, or knowledge of, deaf culture and language. It also includes any film in which deaf roles are played by hearing actors pretending to know sign language, or by deaf people who do not know sign language and learn it quickly, making it look ridiculous. Often these films contain erroneous messages such as negative stereotypes, continued misrepresentations or incorrect usage of sign language, and "are letting deaf people down". In regard to deaf performers, the #DeafTalent movement spread like wildfire across social media in 2015. "Using this hashtag, members of the Deaf community publicly spoke out against the cultural appropriation of deafness in movies and TV". "Deaf parts belong to deaf performers — people who understand the experience of hearing loss and can accurately portray deaf characters. Just as blackface is not an acceptable way to depict a black character, having a non-deaf actor pretend to be deaf is irresponsible, unethical, and offensive." However the challenge remains, since the scripts are written by those who are not actually familiar with deaf culture and language, and production then lies in the hands of hearing directors who have received the budget from big production houses; deaf actors therefore often find themselves in a dilemma when the script does not align with the core values of Deaf culture. Deaf Cinema vs Cinema of the deaf: "Deaf people's culture and experiences have long been appropriated for the fascination and entertainment of others, and in the process kneaded into a bastardisation bearing no resemblance to real-life experiences, because it is rare that deaf people are actually involved in the production process," explains Rebecca Atkinson in The Guardian, "but films and TV shows about deaf characters, told through a hearing lens are demeaning, depressing and cause more damage than good". In 2013 a deaf storyline on BBC1 "caused outrage among deaf viewers, with the depiction of the nine-year-old daughter of a deaf man (this time played by a deaf actor) interpreting complex medical information about his upcoming heart surgery". As one deaf blogger said: "5.3 million viewers will now think that deaf people should be looked after by our kids." Deaf Cinema "Deaf Cinema" emerged as a response to the continued misrepresentation of deaf people in the media. It includes films written, produced or directed by deaf people whose leading actors are deaf. Thanks to the advent of digital technology in the 2000s, there has been a rise in short film productions by and with deaf people, the formation of deaf film production companies, and deaf film festivals.
Computer software has made filmmaking more affordable, and the proliferation of the internet and the ability to stream in sign language freely and internationally have contributed to an increase in Deaf Cinema. Lately there has also been an increase in feature-length films. Deaf Cinema vs Cinema of the deaf: The following films have been directed by deaf directors: Deafula (1975), Think Me Nothing (1975), See What I'm Saying: The Deaf Entertainers Documentary (2009), Lake Windfall (2013), No Ordinary Hero: The SuperDeafy Movie (2013) and Sign Gene (2017).
**Estrone/progesterone/testosterone** Estrone/progesterone/testosterone: Estrone/progesterone/testosterone (E1/P4/T), sold under the brand name Tristeron or Tristerone, is an injectable combination medication of estrone (E1), an estrogen, progesterone (P4), a progestogen, and testosterone (T), an androgen/anabolic steroid, which was used in the treatment of functional uterine bleeding in women. It contained 6 mg estrone, 50 mg progesterone, and 25 mg testosterone in microcrystalline aqueous suspension and was administered by intramuscular injection. The medication was manufactured by Wyeth and was marketed by 1951. It is no longer available.