**Potassium trichloridocuprate(II)**
Potassium trichloridocuprate(II):
Potassium trichloridocuprate(II) is a salt with chemical formula KCuCl3, more properly [K+]2[Cu2Cl6]2−, reflecting the discrete [Cu2Cl6]2− anions it contains. It is a member of the "halide" sub-family of perovskite materials with general formula ABX3, where A is a monovalent cation, B is a divalent cation, and X is a halide anion. The compound occurs in nature as the bright red mineral sanguite. The compound is also called potassium trichlorocuprate(II), potassium copper(II) trichloride, potassium cupric chloride and other similar names; the last of these names is also used for potassium tetrachloridocuprate(II), K2CuCl4.
Preparation and properties:
The compound can be obtained by evaporation of a solution of potassium chloride KCl and copper(II) chloride CuCl2 in a 1:1 mole ratio. The anhydrous form is garnet-red; it can be crystallized from a molten mixture of potassium chloride and copper(II) chloride, or by evaporation of a solution of the two salts in ethanol. It is very hygroscopic, and soluble in methanol and ethanol. It is antiferromagnetic below 30 K, and pleochroic, with maximum visible absorption when the electric vector is parallel to the Cu–Cu vector of the dimer.
Structure:
Anhydrous:
The anhydrous mineral form (sanguite) has a monoclinic crystal structure, with symmetry group P21/c and lattice parameters a = 402.81 pm, b = 1379.06 pm, c = 873.35 pm, and β = 97.137°, cell volume V = 0.48138 nm3, and formula units per cell Z = 4. The measured density is 2.86 g/cm3, close to the calculated value of 2.88 g/cm3. It contains discrete, almost planar anions [Cu2Cl6]2−, each with the two copper atoms connected by two bridging chlorine atoms. These anions are arranged in columns consisting of distorted edge-sharing CuCl6 octahedra, stacked in double chains parallel to the a axis. The columns occupy the edges and the centre of the cell's projection on the bc plane. The potassium atoms are located between these columns; each K+ cation is surrounded by nine chlorine atoms. The mineral is optically biaxial (negative), with α = 1.653, β = 1.780, γ = 1.900, 2V = 85°. The mineral is named from the Latin sanguis (blood), alluding to its color. Theoretical calculations for this topology give the lattice parameters as a = 1388.1 pm, b = 427.7 pm, c = 896.5 pm, α = 79.855°, cell volume V = 0.523891 nm3, and calculated density 2.65 g/cm3.
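The quoted calculated density follows directly from the cell volume, the number of formula units per cell, and the molar mass of KCuCl3. A minimal Python sketch of that arithmetic (density = Z·M / (N_A·V); standard atomic masses, not part of the source) is:

```python
# Check of the reported calculated density from the published cell data.
N_A = 6.02214e23                          # Avogadro's number, 1/mol

M_KCuCl3 = 39.098 + 63.546 + 3 * 35.453   # molar mass of one KCuCl3 formula unit, g/mol

# Monoclinic sanguite cell: Z = 4 formula units, V = 0.48138 nm^3
V_cm3 = 0.48138e-21                       # 1 nm^3 = 1e-21 cm^3
rho = 4 * M_KCuCl3 / (N_A * V_cm3)
print(f"calculated density: {rho:.2f} g/cm^3")   # ~2.88, matching the quoted value
```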
Structure:
Theoretical:
An alternative theoretical structure for the compound has a cubic crystal system, symmetry group Pm3m (No. 221), with the copper atoms arranged as the corners of a cubic grid, a potassium atom at the center of each cube, and a chlorine atom at the midpoint of each edge. The lattice parameters are a = b = c = 485.8 pm, V = 0.114684 nm3, with predicted density 3.03 g/cm3.
**Penis envy**
Penis envy:
Penis envy (German: Penisneid) is an idea in psychoanalytic theory. This is a stage theorized by Sigmund Freud regarding female psychosexual development, in which young girls experience anxiety upon realization that they do not have a penis. Freud considered this realization a defining moment in a series of transitions toward a mature female sexuality. In Freudian theory, the penis envy stage begins the transition from an attachment to the mother to competition with the mother for the attention, recognition and affection of the father. The parallel reaction of a boy's realization that women do not have a penis is castration anxiety.
Penis envy:
Freud's theory on penis envy was criticized and debated by other psychoanalysts, such as Karen Horney, Ernest Jones, Helene Deutsch, and Melanie Klein, specifically on the treatment of penis envy as a fixed operation as opposed to a formation constructed or used in a secondary manner to fend off earlier wishes.
Freud's theory:
Freud introduced the concept of interest in, and envy of, the penis in his 1908 article "On the Sexual Theories of Children". It was not mentioned in the first edition of Freud's earlier Three Contributions to the Theory of Sex (1905), but a synopsis of the 1908 article was added to the third edition in 1915. In On Narcissism (1914) he described how some women develop a masculine ideal as "a survival of the boyish nature that they themselves once possessed". The term grew in significance as Freud gradually refined his views of sexuality, coming to describe a mental process he believed occurred as one went from the phallic stage to the latency stage (see Psychosexual development).
Psychosexual development:
Child:
Penis envy stems from Freud's concept of the Oedipus complex, in which the phallic conflict arises for males as well as for females. Though Carl Jung made the distinction between the Oedipus complex for males and the Electra complex for females in his work The Theory of Psychoanalysis, Freud rejected this latter term, stating that the feminine Oedipus complex is not the same as the male Oedipus complex because "It is only in the male child that we find the fateful combination of love for the one parent and simultaneous hatred of the other as a rival." According to Freud, the development of the female Oedipus complex begins when the girl compares herself with a male and perceives the difference not as a sex characteristic, but rather assumes that she had previously possessed a penis and had lost it by castration. This leads to the essential difference between the male and female Oedipus complex: the female accepts castration as a fact, while the boy fears it happening. Freud felt that penis envy may lead to: resentment towards the mother, who failed to provide the daughter with a penis; depreciation of the mother, who appears to be castrated; giving up on phallic activity (clitoral masturbation) and adopting passivity (vaginal intercourse); and a symbolic equivalence between penis and child. This envy towards the penis leads to various psychical consequences according to Freud, so long as it does not form into a reaction-formation of a masculinity complex. One such consequence is a sense of inferiority after becoming aware of the wound inflicted upon her narcissism. After initially attempting to explain this lack of a penis as a punishment directed at her, she later realizes the universality of her female situation, and as a result begins to share the contempt that men have towards women as lesser (in the important respect of lacking a penis), and so insists upon being like a man. A second consequence of penis envy involves the formation of the character-trait of jealousy through displacement of the abandoned penis envy upon maturation. Freud concludes this from considering the common female fantasy of a child being beaten to be a confession of masturbation, with the child representing the clitoris. A third consequence of penis envy involves the discovery of the inferiority of the clitoris, suggested through the observation that masturbation is further removed from females than from males.
This is, according to Freud, because clitoral masturbation is a masculine activity that is slowly repressed throughout puberty (and shortly after the discovery of penis envy) in an attempt to make room for the female's femininity by transitioning the erotogenic zone from the clitoris to the vagina. The result of these anxieties culminates in the girl giving up her desire for the penis and putting in its place the wish for a child; with that goal in mind, she takes her father as the love-object and makes the mother into the object of her jealousy.
Freud's theory:
Adult:
Freud considered that in normal female development penis envy transforms into the wish for a man and/or a baby. Karl Abraham differentiated two types of adult women in whom penis envy remained intense: the wish-fulfilling and the vindictive types. The former were dominated by fantasies of having or becoming a penis, as with singing, dancing or performing women who felt that in their acts they magically incorporated the (parental) phallus. The latter sought revenge on the male through humiliation or deprivation (whether by removing the man from the penis or the penis from the man).
Society and culture:
Within psychoanalytic circles Freud's theories regarding psychosexual development, and in particular the phallic stage, were challenged early by other psychoanalysts, such as Karen Horney, Otto Fenichel and Ernest Jones, though Freud did not accept their view of penis envy as a secondary, rather than a primary, female reaction. Later psychologists, such as Erik Erikson and Jean Piaget, challenged the Freudian model of child psychological development as a whole.
Society and culture:
Jacques Lacan, however, took up and developed Freud's theory of the importance of what he called "penisneid in the unconscious of women" in linguistic terms, seeing what he called the phallus as the privileged signifier of humanity's subordination to language: "the phallus (by virtue of which the unconscious is language)". He thereby opened up a new field of debate around phallogocentrism, with some figures like Juliet Mitchell endorsing a view of penis envy which "uses, not the man, but the phallus to which the man has to lay claim, as its key term", and others strongly repudiating it. Ernest Jones attempted to remedy Freud's initial theory of penis envy by giving it three alternative meanings: the wish to acquire a penis, usually by swallowing it and retaining it within the body, often converting it there into a baby; the wish to possess a penis in the clitoral region; and the adult wish to enjoy a penis in intercourse.
Feminist criticisms:
In Freud's theory, the female sexual center shifts from the clitoris to the vagina during a heterosexual life event. Freud believed in a duality between how genders construct mature sexuality in terms of the opposite gender, whereas feminists reject the notion that female sexuality can only be defined in relation to the male. Feminist development theorists instead believe that the clitoris, not the vagina, is the mature center of female sexuality, because it allows a construction of mature female sexuality independent of the penis. Karen Horney, a German psychoanalyst who also placed great emphasis on childhood experiences in psychological development, was a particular advocate of this view. She asserted the concept of "womb envy", and saw "masculine narcissism" as underlying the mainstream Freudian view.
Society and culture:
Some feminists argue that Freud's developmental theory is heteronormative and denies women a mature sexuality independent of men; they also criticize it for privileging the vagina over the clitoris as the center of women's sexuality. They criticize the sociosexual theory for privileging heterosexual sexual activity and penile penetration in defining women's "mature state of sexuality". Others claim that the concept explains how, in a patriarchal society, women might envy the power accorded to those with a phallus. In her academic paper "Women and Penis Envy" (1943), Clara Thompson reformulated the latter as social envy for the trappings of the dominant gender, a sociological response to female subordination under patriarchy. Betty Friedan referred to penis envy as a purely parasitic social bias typical of Victorianism and particularly of Freud's own biography, and showed how the concept played a key role in discrediting alternative notions of femininity in the early to mid twentieth century: "Because Freud's followers could only see woman in the image defined by Freud – inferior, childish, helpless, with no possibility of happiness unless she adjusted to being man's passive object – they wanted to help women get rid of their suppressed envy, their neurotic desire to be equal. They wanted to help women find sexual fulfillment as women, by affirming their natural inferiority". A small but influential number of feminist philosophers, working in psychoanalytic feminism, including Luce Irigaray, Julia Kristeva, and Hélène Cixous, have taken varying post-structuralist views on the question, inspired or at least challenged by figures such as Jacques Lacan and Jacques Derrida.
**Hypotrichosis with juvenile macular dystrophy**
Hypotrichosis with juvenile macular dystrophy:
Hypotrichosis with juvenile macular dystrophy (HJMD or CDH3) is an extremely rare congenital disease characterized by sparse hair growth (hypotrichosis) from birth and progressive macular corneal dystrophy.
Signs and symptoms:
Hair growth on the head is noticeably less full than normal, and the hairs are very weak; the rest of the body shows normal hair. The macular degeneration comes on slowly with deterioration of central vision, leading to a loss of reading ability. Those affected may otherwise develop in a completely healthy manner; life expectancy is normal.
Cause:
Hypotrichosis with juvenile macular dystrophy is an autosomal recessive hereditary disease. It is caused by a combination of mutations (compound heterozygosity) in the CDH3 gene, which codes for Cadherin-3 (also known as P-Cadherin), a calcium-binding protein that is responsible for cellular adhesion in various tissues.
Diagnosis:
The markedly anomalous hair growth should lead to a retinal examination by school entry at the latest, since weak vision will not necessarily be detected in the course of normal medical check-ups. Confirmation of a diagnosis, which is necessary for any future therapeutic options, is only possible by means of a molecular genetic diagnosis in the context of genetic counseling.
Diagnosis:
Examination method:
The extent of retinal damage is assessed by fluorescein angiography, retinal scanning and optical coherence tomography; electrophysiological examinations such as electroretinography (ERG) or multifocal electroretinography (mfERG) may also be used.
Differential diagnosis:
Anomalies of the hair shaft caused by ectodermal dysplasia should be ruled out. Mutations in the CDH3 gene can also appear in EEM syndrome.
Treatment:
There is no treatment for the disorder. A number of studies are looking at gene therapy, exon skipping and CRISPR interference to offer hope for the future. Accurate determination through confirmed diagnosis of the genetic mutation that has occurred also offers potential approaches beyond gene replacement for a specific group, namely in the case of diagnosis of a so-called nonsense mutation, a mutation where a stop codon is produced by the changing of a single base in the DNA sequence. This results in premature termination of protein biosynthesis, resulting in a shortened and either functionless or function-impaired protein. In what is sometimes called "read-through therapy", translational skipping of the stop codon, resulting in a functional protein, can be induced by the introduction of specific substances. However, this approach is only conceivable in the case of narrowly circumscribed mutations, which cause differing diseases.
Treatment:
Life planning:
A disease that threatens the eyesight and additionally produces a hair anomaly that is apparent to strangers causes harm beyond the physical. It is therefore not surprising that learning the diagnosis is a shock to the patient. This is as true of the affected children as of their parents and relatives. They are confronted with a statement that there are at present no treatment options. They probably have never felt so alone and abandoned in their lives. The question comes to mind, "Why me/my child?" However, there is always hope, and especially for affected children, the first priority should be a happy childhood. Too many examinations and doctor appointments take up time and cannot practically solve the problem of a genetic mutation within a few months. It is therefore advisable for parents to treat their child with empathy, but to raise him or her to be independent and self-confident by the teenage years. Openness about the disease and talking with those affected about their experiences, even though its rarity makes it unlikely that others will be personally affected by it, will together assist in managing life.
Epidemiology:
It is estimated to affect less than one in a million people. Only 50 to 100 cases have so far been described.
History:
The disease was first described in 1935 by Hans Wagner, a German physician.
**MechQuest**
MechQuest:
MechQuest is an online Flash-based single-player sci-fi role-playing video game developed by Artix Entertainment. MechQuest centers on mecha combat and was updated on a weekly basis. Players can play for free or pay a one-time fee that grants access to additional game content such as a starship, missions/events, and special mechas.
Gameplay:
MechQuest is a single-player RPG; however, character data is stored on a server. Players control their character by pointing and clicking on the screen: the player character walks to the point that is clicked. Most items are activated either by running into them or by pressing a button that appears when that point is reached (outside of battle). Battles come in two forms, mecha battles and energy blade battles; both are similar to a traditional RPG in that much of the gameplay revolves around fighting enemies in a turn-based system. Mecha battles feature many types of attacks, but the player must spend energy points to use them.
Gameplay:
G.E.A.R.S. University Houses:
G.E.A.R.S. University Houses are groups that players can join so they can participate in competitive activities. There are three houses available: house WolfBlade holds G.E.A.R.S. warriors and heroes, house RuneHawk is a refuge for science and magic alike, and house MystRaven is for tricksters who enjoy pranks and shenanigans.
Plot:
The player controls a mecha pilot from an unknown location. The game begins with the player on a starship heading towards the planet Loreon, where the player will attend G.E.A.R.S. University in Soluna City. After joining the university, the player is educated and trained in the art of mecha and energy blade combat. The player soon discovers an alien empire called the Shadowscythe, which plans to assimilate the entire galaxy, and uses their newly obtained skills to stop the empire's evil plans.
Plot:
Holiday events:
MechQuest has several recurring holiday events. These include New Year's Day, Valentine's Day, April Fools' Day, Halloween (named "Mogloween" in game), Christmas (named "Frostval" in game), Friday the 13th, Talk Like a Pirate Day and Thanksgiving.
Critical reception:
Nic Stransky complimented the graphics and simplicity of the game, but wrote that melee could feel inconsistent and that players may wish for more strategy. MMOHuts praised MechQuest for running on Flash, for its classic RPG-style turn-based combat, and for the plentiful gear available for purchase.
**MicroKORG**
MicroKORG:
The microKORG is a MIDI-capable digital synthesizer/vocoder from Korg featuring DSP-based analog modelling. The synthesizer is built in such a way that it is essentially a Korg MS-2000 with a programmable step arpeggiator (the MS-2000 has only six simple patterns), a less advanced vocoder (8 bands instead of 16 bands on the MS-2000), lack of motion sequencing (MS-2000 had three motion sequences), lack of an XLR microphone input, and in a smaller case with fewer real-time control knobs.
MicroKORG:
The microKORG was released in 2002 and is still in production as of 2022. It is considered one of the most popular music synthesizers in recent history, with an estimated 100,000 units sold as of May 2009. In September 2007 Korg released a limited edition of the microKORG with reverse-color keys, although the functionality was otherwise unchanged. At NAMM 2008, a successor dubbed the microKORG XL was introduced. Available since early 2009, it uses Korg's MMT (Multi Modeling Technology) engine, borrowed from the newer and more powerful Radias/R3 synthesizers. In late 2016, a slightly updated version was released, dubbed the microKORG S. This edition retains the same sound engine as the original microKORG, but offers an integrated speaker system (stereo + sub), an updated color scheme and twice the patch memory. In 2022, a VST version was released as part of the Korg Collection.
Synthesis:
The microKORG features a DSP-based synthesis engine, designed around the same engine found in the Korg MS-2000. In Korg's terminology, the fundamental unit of sound is referred to as the "timbre". Each timbre consists of a pair of multi-function oscillators. Two timbres can be combined in one patch to create a four-oscillator "layer", which can in turn be used to create more complex sounds (although doing so halves the polyphony from four notes to two). Oscillator one (OSC1) can produce one of several virtual analog-style waveforms, including sawtooth, square, triangle, and sine waves. Alternatively, OSC1 can produce a so-called "VOX" wave (which simulates human vocal formants), white noise, or one of 64 different digital waveforms created via harmonic additive synthesis. Some of these 64 waveforms (which are really single-cycle wavetables) were originally featured in the Korg DW-6000 and DW-8000 digital-analog hybrid synthesizers of the mid-1980s. The second oscillator (OSC2) is limited to sawtooth, square, and triangle waveforms.
Synthesis:
Each waveform on OSC1 has a unique modulation feature, including wave morphing, pulse-width modulation, and FM. OSC2 can be detuned, synchronized, and/or ring-modulated with OSC1 in order to create more complex sounds. OSC1 can also be replaced with the signal from one of the line-level inputs on the back of the unit, allowing external signals to be processed as if they were an oscillator (via the filters, effects, or even ring modulation with OSC2).
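As a rough illustration of what detuning and ring modulation between two oscillators do to a signal, here is a generic DSP sketch (not Korg's actual implementation; the sample rate, frequencies and naive sawtooth are arbitrary choices for the example):

```python
import numpy as np

SR = 48_000                      # sample rate in Hz (arbitrary choice)
t = np.arange(SR) / SR           # one second of sample time stamps

def saw(freq, t):
    """Naive (non-band-limited) sawtooth in the range [-1, 1)."""
    return 2.0 * ((freq * t) % 1.0) - 1.0

osc1 = saw(220.0, t)             # OSC1 at 220 Hz
osc2 = saw(220.0 * 1.01, t)      # OSC2 detuned about 1% sharp

layered = 0.5 * (osc1 + osc2)    # detuned layer: slow beating / chorus-like thickening
ring    = osc1 * osc2            # ring modulation: output contains sum and difference partials

print(layered[:4], ring[:4])
```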
Synthesis:
For further shaping of the sound, the microKORG offers several types of digital filters, including low-pass (-12 dB/oct and -24 dB/oct), band-pass (-12 dB/oct), and high-pass (-12 dB/oct) modes. Additionally, the unit provides a number of built-in effects, such as flanger, ensemble (chorus), phaser, and digital delay, all of which can be applied to external signals. For modulation, there are two independent LFOs with six different waveforms, allowing for the creation of more complex, time-varying patches.
Synthesis:
When playing a single timbre, the keyboard is limited to four-voice polyphony. In layer mode it generally has only two-voice polyphony, although one combination of polyphonic/mono layers allows for effective three-voice polyphony of the second timbre.
Synthesis:
The microKORG groups its 128 factory preset sound patches into 8 groups: Trance, Techno/House, Electronica, D'n'B/Breaks, Hip hop/Vintage, Retro, Special Effects/Hit, and Vocoder. A large knob changes the selected sound group. Each group has 16 different patches (two banks of eight); the active patch is selected by the eight LED-illuminated buttons on the front panel, while the accompanying A/B switch toggles between the two banks. All patches are user-editable, and do not necessarily have to align with the genre groupings listed on the faceplate.
microKORG S:
In 2016, Korg reissued the microKORG as the modified 'microKORG S'. This edition retains the engine and features of the original microKORG (as opposed to the XL/XL+, see below), but includes a new lighter-colored housing, built-in speakers, twice the original patch memory (256 slots) and a Favorites feature to assign 8 patches to the program buttons for easier selection.
microKORG XL:
The direct successor to the microKORG, the 'microKORG XL', utilizes the MMT (Multi Modelling Technology) engine, and is based on Korg's own R3 synthesizer. The XL features a brand-new LCD display and two large Program Select knobs for easier patch access, though has fewer real-time controls than the original microKORG.
microKORG XL:
The microKORG XL groups its 128 factory preset sound patches into 8 groups: Vintage Synth, Rock/Pop, R&B/Hip Hop, Jazz/Fusion, Techno/Trance, House/Disco, D'N'B/Breaks, and Favourite, and into several sub-categories: Poly Synth, Bass, Lead, Arp/Motion, Pad/Strings, Keyboard/Bell, Special Effects/Hit, and Vocoder.
New features specific to the microKORG XL:
Notably, the microKORG XL features 17 different KAOSS-derived effects, including phaser, flange, decimation, vibrato, tremolo and retrigger. The XL also features several included PCM waveforms, including piano, brass ensemble, nine electric piano and clavinet sounds, seven organ sounds (one of which emulates the Korg M1 organ), a full string orchestra, two variable formant waves and more than 32 digitally generated waveforms (SYNWAVE 6 is a ramp wave/inverted sawtooth). The XL adds two additional waveform modulation types: phase modulation and unison (in which five stacked oscillators within one oscillator can be detuned and phased to achieve a richer sound). The unison simulator is similar to the Supersaw waveform on the Roland JP-8000. The included "OSC MOD WAVEFORM" and "OSC2 SYNC" controls are reminiscent of the Poly-Mod feature in the Sequential Circuits Prophet-5.
microKORG XL:
The microKORG XL also includes a waveshaper (uncommon in most synthesizers) which will morph the current waveform into an approximation of the desired waveform, resulting in a harsh sound. The waveshaper section also includes a third oscillator (sub-oscillator). Additional improvements include: polyphony increased to eight notes; the vocoder increased to 16 bands, though still supporting 4-note polyphony; a USB connector for MIDI-over-USB operation; "Split" and "Multi" added to the voice modes; and the option to use ten scales, including one defined by the user.
microKORG XL:
"Analog Tune" simulates the pitch instability and oscillator “drift” that was characteristic of vintage analog synthesizers
Korg RK-100S:
In 2014, Korg announced the RK-100S keytar, which is essentially a 37-key "keytar" version of the microKORG XL+, with many external differences and only two internal differences. Internally, it sports the exact same features as the microKORG XL+, except that it has 200 program storage slots instead of 128 and allows the long ribbon controller to serve as a modulation source. Externally, the RK-100S is radically different: it lacks the ability to edit programs from the unit itself. Editing may only be done via a control app available for Mac and PC, although it is technically possible to create one's own editor using the available MIDI messages chart. microKORG XL and XL+ patches may be downloaded into the unit one by one or en masse, allowing patch editing to be done on a microKORG XL+.
Korg RK-100S:
Notable external differences of the RK-100S:
- Lacks the XLR mic input and dual quarter-inch mono output jacks of the microKORG XL+, instead featuring a stereo 1/4" TRS jack and a mono 1/8" audio input jack, switchable between three gain levels (Line, Mic1, Mic2)
- Adds a short and a long ribbon controller, and buttons that toggle the behavior of the ribbons (e.g. between modulation of pitch or frequency, although other things can be modulated)
- Sports a 37-key keyboard of "mini" keys as on the MS-20 Mini; these are larger than the microKORG's keys but much thinner than traditional keys
- Has five banks of "favorites" selectable with five LED-backlit buttons; these buttons serve as a level meter for output volume during normal performance
- A multipurpose up/down lever switch used for selecting between programs and banks, adjusting tempo, etc.
Korg RK-100S:
- Wooden body with very fragile glossy lacquer paint that is easy to chip or crack should the unit bump into anything hard
- Double the battery life on a set of 4 alkaline AA batteries (8 hours instead of 4 on the microKORG XL+), according to Korg's documentation
Even though the RK-100S is not marketed as a microKORG, the fact that its synthesis engine is identical makes it well suited to microKORG users wishing to perform live without a keyboard stand restricting their movement around a stage.
Competing products:
The microKORG was released during the same period as several similar products: the Alesis Micron, the Novation KS Series and Novation Xio-Synth (discontinued), the Dave Smith Evolver, and the Akai Miniak. The microKORG shared several features with the earlier, discontinued Quasimidi Sirius, in particular a built-in vocoder, although the Sirius used a distinctive analog modeling and sample-playback hybrid synthesis.
**Leica CL (Typ 7323)**
Leica CL (Typ 7323):
The Leica CL is an APS-C mirrorless system camera announced by Leica Camera AG in November 2017. The CL is a member of Leica's L-mount family of cameras, which began with the discontinued T/TL and is currently shared with the TL2 and SL cameras. It shares the same sensor as the TL2, and is primarily differentiated from its sister model by its user interface, which focuses on physical controls as opposed to the touchscreen, and by the presence of an integrated viewfinder.
**Potion**
Potion:
A potion is a liquid "that contains medicine, poison, or something that is supposed to have magic powers". It derives from the Latin word potio, which refers to a drink or the act of drinking. The term philtre is also used, often specifically for a love potion, a potion that is supposed to create feelings of love or attraction in the one who drinks it.
Potion:
Throughout history there have been several types of potions for a range of purposes. Reasons for taking potions ranged from curing an illness, to securing immortality, to trying to induce love. These potions, while often ineffective or poisonous, occasionally had some degree of medicinal success depending on what they sought to fix and the type and amount of ingredients used. Some popular ingredients used in potions across history include Spanish fly, nightshade plants, cannabis, and opium. During the 17th to 19th centuries, it was common in Europe to see peddlers offering potions for ailments ranging from heartbreak to the plague. These were eventually dismissed as quackery. Prostitutes, courtesans, enchanters and midwives were also known to distribute potions.
Etymology:
The word potion has its origins in the Latin word potus, an irregular past participle of potare, meaning "to drink". This evolved into the word potionem (nominative potio), meaning either "a potion, a drinking" or a "poisonous draught, magic potion". In Ancient Greek, the word for both drugs and potions was "pharmaka" or "pharmakon". In the 12th century, the French had the word pocion, meaning "potion", "draught", or "medicine". By the 13th century, this word became pocioun, referring to either a medicinal drink or a dose of liquid medicine (or poison).
Etymology:
The word "potion" is also cognate with the Spanish word pocion, with the same meaning, and ponzoña, meaning "poison". The word pozione was originally the same word for both "poison" and "potion" in Italian, but by the early 15th century in Italy, potion began to be known specifically as a magical or enchanted drink.
Administrators of potions:
The practice of administering potions has had a long history of being illegalised. Despite these laws, there have been several different administrators of potions across history.
Administrators of potions:
Quacks:
Quacks or charlatans are people who sell "medical methods that do not work and are only intended to make money". In Europe in the 15th century it was common to see long-distance peddlers who sold supposedly magical healing potions and elixirs. During the Great Plague of London in the 17th century, quacks sold many fake potions promising either cures or immunity. Because pills looked less trustworthy to the public, potions were often the top sellers of quacks. These potions often included bizarre ingredients such as floral pomanders and the smoke of fragrant woods. The well-known Wessex quack Vilbert was known to sell love potions made of pigeon hearts. By the 18th century in England, it was common for middle-class households to stock potions that claimed to solve a variety of ailments. Quackery grew to its height in the 19th century.
Administrators of potions:
Pharmacists:
In 18th- and 19th-century Britain, pharmacies or apothecaries were often a cheaper, more accessible option for medical treatment than doctors. Potions distributed by chemists for illnesses were often derived from herbs and plants, and based on old beliefs and remedies. Prior to the Pharmacy Act 1868, anybody could become a pharmacist or chemist. Since the practice was unregulated, potions were often made from scratch. Potions were additionally used to cure illness in livestock; one potion found in a 19th-century pharmacist's recipe book was to be used for "lambs of about 7 years old" and contains chalk, pomegranate and opium.
Administrators of potions:
The role of women in distributing potions:
There was a strict hierarchy in the medical community of Europe during the 12th to 15th centuries. Male doctors were the most respected and best paid, followed by female apothecaries, barber-surgeons and surgeons. Women were often the main way that individuals who could not afford doctors or apothecaries could obtain medical treatment. Potions, in addition to calming teas or soup, were a common homemade treatment made by women. When unable to go to a female house member, early modern people would often go to the wise women of their village. Wise women (who were often supposed witches) were knowledgeable in health care and could administer potions, lotions or salves in addition to performing prayers or chants. This was often free of charge or significantly less expensive than the potions of apothecaries. The limited jobs available to women during the 17th to 18th centuries in Europe often involved a knowledge of potions as an additional way to gain a financial income. Jobs that often involved the selling of love potions included prostitutes, courtesans, enchanters and midwives. These practices varied by region. In Rome, up until the period of the civil wars, the only physicians were drug-sellers, enchanters and midwives. In Greece, retired courtesans often both created potions and worked as midwives. Prostitutes in Europe were often expected to be experts in magic and to administer love potions.
Administrators of potions:
Self-administration:
In the Middle Ages and the early modern period, using potions to induce sterility and abortion was widely practiced in Europe. The majority of abortive potions were made using emmenagogue herbs (herbs used to stimulate menstruation), which were intended to cause a period and end a pregnancy. Abortive potions could also be prepared by infusion of a herb or tree; the willow tree was a common ingredient in these potions, as it was fabled to cause sterility. Several key theological and legal texts of the time condemned this practice, including Visigothic law and the Church. Many herbal potions containing emmenagogues did not contain abortifacients (substances that induce abortion) and were instead used to cure amenorrhoea (a lack of periods). There are several different types of literature in the humoral tradition that propose the use of herbal potions or suppositories to provoke menstruation.
Famous potion makers:
Giulia Tofana and Gironima Spana:
Giulia Tofana (1581-1651) was an Italian poisoner, known as the inventor of the famous poison Aqua Tofana. Born in Sicily, she invented and started to sell the poison in Palermo. She later established herself in Rome, where she continued the business, specialising in selling to women in abusive marriages who wanted to become widows. She died peacefully in 1651 and left the business to her stepdaughter Gironima Spana, who expanded it into a substantial enterprise in the 1650s. The organization was exposed in 1659, resulting in the famous Spana Prosecution, which became a subject of sensationalistic mythologization for centuries.
Famous potion makers:
Paula de Eguiluz:
Paula de Eguiluz was born into slavery in Santo Domingo, in what is now the Dominican Republic, in the 17th century. In the area in which she lived, sickness and disease ravaged the towns and major cities. Paula de Eguiluz decided to research and find her own cures to these maladies, and because of this she is widely known for being involved in health care and healing.
Famous potion makers:
Once her healing and health care practice took off, she started to sell potions and serums to clients. De Eguiluz's business attracted a following and gradually got her into trouble.
Due to her healing practice, Paula was arrested approximately three times. During these inquisitions, she was forced to tell the jury that she performed witchcraft. In response to these false confessions, she was imprisoned and whipped several times.
Catherine Monvoisin:
Catherine Monvoisin, better known as La Voisin, was born around 1640 in France.
She married Antoine Monvoisin, a jeweler in Paris. When his business failed, Catherine had to find work for her and her family to survive. She had a knack for reading people very accurately, coupled with a practice of chiromancy, and used these skills to make money.
La Voisin would read people's horoscopes and perform abortions, but she also sold potions and poisons to her clients. Her work quickly became well known throughout France and attracted many clients. Around 1665, her fortune telling was questioned by Saint Vincent de Paul's Order, but she was quick to dismiss the allegations of witchcraft.
Catherine then began making potions, whether for love, murder, or everyday life. Her love potion consisted of bones, the teeth of moles, human blood, Spanish fly beetles, and even small amounts of human remains. Her predecessor and major influence was Giulia Tofana.
On March 12, 1679, Catherine was arrested at the church of Notre-Dame de Bonne-Nouvelle due to a string of incidents involving her and her potions. She confessed her crimes of murder and told the authorities most of what they needed to know about the people she had knowingly murdered.
On February 22, 1680, La Voisin was sentenced to a public death, in which she was to be burned at the stake for witchcraft.
Famous potion makers:
Jacqueline Felicie:
Jacqueline Felice de Almania was tried in Paris in 1322 for the unlicensed practice of medicine. She was mainly accused of doing a learned male physician's job and accepting a fee. This work involved "examining urine by its physical appearance; touching the body; and prescribing potions, digestives, and laxatives." Eight witnesses testified to her medical experience and wisdom; however, as she had not attended university, her knowledge was dismissed. Jacqueline Felice was found guilty, fined, and excommunicated from the church.
Popular types of potions:
Emotions such as anger, fear and sadness are universal, and potions have been created across history and cultures in response to these human emotions.
Popular types of potions:
Love potion:
Love potions have been used throughout history and across cultures. Scandinavians often used love-philtres, as documented in the Norse poem The Lay of Gudrun. In 17th-century Cartagena, Afro-Mexican curers (curanderos/as) and other Indigenous healers could gain an income and status from selling spells and love potions to women trying to secure men and financial stability. These love potions were sold to women of all social classes, who often wished to gain sexual agency.
Popular types of potions:
Restorative potion:
Confectio Alchermes:
In the early 9th century, the Arab physician Yuhanna ibn Masawaih used the dye kermes to create a potion called Confectio Alchermes. The potion was "intended for the caliph and his court and not for commoners", and was meant to cure heart palpitations, restore strength and cure madness and depression. During the Renaissance in Europe, Confectio Alchermes was used widely. Recipes for the potion appeared in the work of the popular English apothecary Nicholas Culpeper and in the official pharmacopoeia handbooks of London and Amsterdam. Queen Elizabeth's French ambassador was even treated with the remedy, although the recipe was altered to include a "unicorn's horn" (possibly a ground-up narwhal tusk) in addition to the traditional ingredients. The ingredients for the potion mainly included ambergris, cinnamon, aloes, gold leaf, musk, pulverized lapis lazuli, and white pearls.
Popular types of potions:
St Paul's potion:
St Paul's potion was intended to cure epilepsy, catalepsy and stomach problems. Many ingredients used in the potion had medicinal value. According to Toni Mount, the list of ingredients included "liquorice, sage, willow, roses, fennel, cinnamon, ginger, cloves, cormorant blood, mandrake, dragon's blood and three kinds of pepper". Many of these ingredients still have medicinal value in the 21st century. Liquorice can be used to treat coughs and bronchitis. Sage can help memory and improve blood flow to the brain. Willow contains salicylic acid, a component of aspirin. Fennel, cinnamon and ginger are all carminatives, which help relieve gas in the intestines. The cormorant blood adds iron to treat anemia. Used in small doses, mandrake is a good sleeping draught (though in large doses it can be poisonous). Dragon's blood refers to the bright red resin of the tree Dracaena draco; according to Toni Mount, "it has antiseptic, antibiotic, anti-viral and wound-healing properties, and it is still used in some parts of the world to treat dysentery."
Immortality potion:
Creating a potion for immortality was a common pursuit of alchemists throughout history. The Elixir of Life is a famous potion that aimed to create eternal youth. During the Chinese dynasties, this elixir of life was often recreated and drunk by emperors, nobles and officials. In India, there is a myth of the potion amrita, a drink of immortality made out of nectar.
Popular types of potions:
Psychedelic potions:
Ayahuasca:
Ayahuasca is a hallucinogenic plant-based potion used in many parts of the world. It was first created by indigenous South Americans from the Amazon basin as a spiritual medicine, and was often administered by a shaman during a ceremony. The potion contains the boiled stems of the ayahuasca vine and leaves from the chacruna plant. Chacruna contains dimethyltryptamine (also known as DMT), a psychedelic drug. The potion caused users to vomit or 'purge' and induced hallucinations.
Folklore:
Potions or mixtures are common within many local mythologies. In particular, references to love potions are common in many cultures. Yusufzai witches, for example, would bathe a recently deceased leatherworker and sell the water to those seeking a male partner; this practice is said to exist in a modified form in modern times.
Famous potions in literature:
Potions have played a critical role in many pieces of literature. Shakespeare wrote potions into many of his plays, including a love potion in A Midsummer Night's Dream, poison in Hamlet, and the potion Juliet takes to fake her death in Romeo and Juliet. In the Harry Potter series, potions also play a main role: the students are required to attend potions classes, and knowledge of potions often becomes a factor for many of the characters.
Famous potions in literature:
In the fairy tale "The Little Mermaid" by Hans Christian Andersen, the Little Mermaid wishes to become human and have an immortal soul. She visits the Sea Witch, who sells her a potion in exchange for which she cuts out the Little Mermaid's tongue. The Sea Witch makes the potion using her own blood, which she cuts from her breast. She warns the Little Mermaid that it will feel as if she had been cut with a sword when her fin becomes legs, that she will never be able to become a mermaid again, and that she risks turning into sea foam and losing an immortal soul if she fails to win the Prince's love. The Little Mermaid decides to take the potion, which successfully turns her into a human, so that she can try to win the love of the Prince and an immortal soul. In the novella The Strange Case of Dr. Jekyll and Mr. Hyde by Robert Louis Stevenson, Dr. Henry Jekyll creates a potion that transforms him into an evil version of himself called Edward Hyde. Dr. Jekyll does not explain how he created this potion, because he felt his "discoveries were incomplete"; he only indicates that it requires a "particular salt". He uses the potion successfully to go back and forth between his normal self, Dr. Jekyll, and his evil self, Mr. Hyde.
Popular ingredients used in potions:
Solanaceous plants:
In the 11th century, plants belonging to the nightshade family Solanaceae were often used as ingredients in the potions - aphrodisiac or otherwise - and flying ointments of witches. The specific nightshades used in such concoctions were usually tropane alkaloid-containing species belonging to the Old World tribes Hyoscyameae and Mandragoreae. These potions were known as pharmaka diabolika ("devilish drugs").
Popular ingredients used in potions:
The root of Mandragora officinarum, the celebrated mandrake, fabled in legend to shriek when uprooted, was often used to prepare sleeping potions, although it could prove poisonous in excess due to its tropane alkaloid content. M. officinarum is native to the Mediterranean region. Administered in small doses, mandrake root has been used in folk medicine as an analgesic, an aphrodisiac and a remedy for infertility. Larger doses act as an entheogen of the deliriant class, having the potential to cause profound confusion and dysphoria characterised by realistic hallucinations of an unpleasant character.
Popular ingredients used in potions:
Classical and Renaissance authors have left certain accounts of the use of the plant by witches in the preparation of potions intended variously to excite love, cause insanity or even kill. Scopolamine, a toxic, deliriant alkaloid present in (and named after) Scopolia carniolica and also present in Mandragora, Hyoscyamus and other Solanaceae, was used by the infamous Dr. Crippen to kill his wife.
Popular ingredients used in potions:
Spanish fly:
In ancient Greece, the Spanish fly (also known as cantharides) was crushed with herbs and used in love potions. It was believed to be effective due to the bodily warmth that resulted from ingesting it; however, this warmth was actually a result of inflammation caused by toxins in the tissues of the beetle. Ferdinand II of Aragon drank many potions and elixirs containing Spanish fly.
Popular ingredients used in potions:
Cochineal:
Cochineal, another type of dye, replaced kermes as an ingredient in Confectio Alchermes in the 17th and 18th centuries. Cochineal was also heavily used as an ingredient in potions for jaundice; jaundice potions were a mix of cochineal, cream of tartar and Venetian soap, and patients were directed to take them three times a day.
Popular ingredients used in potions:
Cannabis and opium:
Opium and cannabis have been used in potions throughout human history. Potions containing cannabis and/or opium were particularly popular in Arabia, Persia, and Muslim India after the arrival of the drugs around the 9th century. Cannabis and opium were common ingredients in potions and tinctures sold by apothecaries in 19th-century Europe, as the ingredients made patients feel better, and the addictive nature of the drugs meant they sold well. Nepenthes pharmakon is a famous magical potion recorded in Homer's Odyssey, intended to cure sorrow; pharmakon was the word for medicine in Ancient Greek. The potion was recreated in the 18th century, and contains both the plant nepenthe and opium.
**Academic grading in Portugal**
Academic grading in Portugal:
In Portuguese middle schools, a five-point grading scale is used, where:
- 5 (very good or excellent) is the best possible grade (90-100%),
- 4 (good) (70-89%),
- 3 (satisfactory) indicates "average" performance (50-69%),
- 2 (unsatisfactory) (20-49%),
- 1 (poor) is the lowest possible grade (0-19%).
In high schools and universities, a 20-point grading scale is used. In the case of the final grade of an academic degree, each grade is also assigned a qualitative mark by degree.
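A minimal sketch of the middle-school percentage-to-grade mapping described above (band boundaries taken from the text; the function name is illustrative, not an official conversion tool):

```python
def middle_school_grade(percent: float) -> int:
    """Map a 0-100% score to the Portuguese middle-school 1-5 scale (bands from the text)."""
    if percent >= 90:
        return 5   # very good / excellent
    if percent >= 70:
        return 4   # good
    if percent >= 50:
        return 3   # satisfactory
    if percent >= 20:
        return 2   # unsatisfactory
    return 1       # poor

print(middle_school_grade(72))   # -> 4
```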
**GPAC Project on Advanced Content**
GPAC Project on Advanced Content:
GPAC Project on Advanced Content (GPAC, a recursive acronym) is an implementation of the MPEG-4 Systems standard written in ANSI C. GPAC provides tools for media playback, vector graphics and 3D rendering, MPEG-4 authoring and distribution. GPAC provides three sets of tools based on a core library called libgpac: a multimedia player (the cross-platform command-line MP4Client, or Osmo4 with a GUI); a multimedia packager (MP4Box); and some server tools for multiplexing and streaming (under development). GPAC is cross-platform. It is written in (almost 100%) ANSI C for portability reasons, attempting to keep the memory footprint as low as possible. It currently runs under Windows, Linux, Solaris, Windows CE (SmartPhone, PocketPC 2002/2003), iOS, Android, Embedded Linux (familiar 8, GPE) and recent Symbian OS systems.
GPAC Project on Advanced Content:
The project is intended for a wide audience, ranging from end-users or content creators with development skills who want to experiment with new standards for interactive technologies or convert files for mobile devices, to developers who need players and/or servers for multimedia streaming applications.
The GPAC framework is being developed at École nationale supérieure des télécommunications (ENST) as part of research work on digital media.
History and standards:
GPAC was founded in New York City in 1999. In 2003, it became an open-source project, with the initial goal of developing, from scratch and in ANSI C, clean software compliant with the MPEG-4 Systems standard, as a small and flexible alternative to the MPEG-4 reference software. In parallel, the project has evolved and now supports many other multimedia standards, with support for X3D, W3C SVG Tiny 1.2, and OMA/3GPP/ISMA and MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) features. 3D support is available on embedded platforms through OpenGL ES. The MPEG-DASH feature can be used to reconstruct .mp4 files from videos streamed and cached in this format (e.g., YouTube). Various research projects have used or use GPAC. Since 2013, GPAC Licensing has offered business support and closed-source licenses.
Multimedia content features:
Packaging:
GPAC features encoders and multiplexers, publishing and content distribution tools for MP4 files, and many tools for scene descriptions (BIFS/VRML/X3D converters, SWF/BIFS, SVG/BIFS, etc.). MP4Box provides all these tools in a single command-line application, albeit with extremely arcane syntax. Currently supported features are:
- MP4/3GP conversion from MP3, AVI, MPEG-2 TS, MPEG-PS, AAC, H263, H264, AMR, and many others
- 3GPP DIMS packaging from SVG Tiny 1.2 files
- File layout: fragmentation or interleaving, and cleaning
- File hinting for RTP/RTSP and QTSS/DSS servers (MPEG-4/ISMA/3GP/3GP2 files)
- File splitting by size or time, extraction from file, and file concatenation
- XML information dumping for MP4 and RTP hint tracks
- Media track extraction
- ISMA E&A encryption and decryption
- 3GPP timed text tools (SUB/SRT/TTXT/TeXML)
- VobSub import/export
- BIFS codec and scene conversion between MP4, BT and XMT-A
- LASeR codec and scene conversion between MP4, SAF, SVG and XSR (XML LASeR)
- XML scene statistics for BIFS scenes (BT, XMT-A and MP4)
- Conversion to and from BT, XMT-A, WRL, X3D and X3DV with support for gzip
Multimedia content features:
Even simple operations, such as concatenating three files into one new file, can require non-obvious syntax; a scripted example is sketched below.
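As a rough illustration (a sketch, assuming MP4Box is installed and on the PATH; the -add, -cat and -hint options are part of MP4Box's command-line interface, but exact behavior varies by version, and the file names are placeholders), packaging and concatenation can be driven from Python:

```python
import subprocess

# Package an H.264 video track and an AAC audio track into a new MP4 file,
# then append the contents of two more MP4 files with -cat.
subprocess.run(["MP4Box", "-add", "video.264", "-add", "audio.aac", "out.mp4"], check=True)
for extra in ["part2.mp4", "part3.mp4"]:
    subprocess.run(["MP4Box", "-cat", extra, "out.mp4"], check=True)

# Prepare the result for RTP/RTSP streaming by adding hint tracks.
subprocess.run(["MP4Box", "-hint", "out.mp4"], check=True)
```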
Multimedia content features:
Playing:
GPAC supports many protocols and standards, among which:
- BIFS scenes (2D, 3D and mixed 2D/3D scenes)
- VRML 2.0 (VRML97) scenes (without GEO or NURBS extensions)
- X3D scenes (not complete) in X3D (XML) and X3DV (VRML) formats
- SVG Tiny 1.2 scenes (including packaged in 3GP DIMS files)
- LASeR and SAF (partial) support
- Progressive loading/rendering of SVG, X3D and XMT files
- HTTP reading of all scene descriptions
- GZIP support for all textual formats of MPEG-4/X3D/VRML/SVG
- MP4 and 3GPP file reading (local and HTTP)
- MP3 and AAC files (local and HTTP) and HTTP streaming (ShoutCast/ICEcast radios)
- Most common media codecs for image, audio and video
- Most common media containers
- 3GPP Timed Text / MPEG-4 Streaming Text
- MPEG-2 TS demultiplexer (local/UDP/RTP) with DVB support (Linux only)
- Streaming support through RTP/RTCP (unicast and multicast) and RTSP/SDP
- Plugins for Mozilla (osmozilla, Win32 and Linux) and Internet Explorer (GPAX, Win32 and PPC 2003)
Multimedia content features:
Streaming:
As of version 0.4.5, GPAC has some experimental server-side and streaming tools:
- MP4/3GP file RTP streamer (unicast and multicast)
- RTP streamer with service timeslicing (DVB-H) simulation
- MPEG-2 TS broadcaster using MP4/3GP files or RTP streams as inputs
- BIFS RTP broadcaster tool performing live encoding and RandomAccessPoint generation
Contributors:
The project is hosted at ENST, a leading French engineering school also known as Télécom Paris. Current main contributors to GPAC are Jean Le Feuvre, Cyril Concolato, Romain Bouqueau and Jérôme Gorin. Other (current or past) contributors from ENST include Pierre Souchay, Jean-Claude Moissinac, Jean-Claude Dufourd, Benoit Pellan and Philippe de Cuetos. Additionally, GPAC is used at ENST for pedagogical purposes; students regularly participate in the development of the project.
**AKR1**
AKR1:
Aldo-keto reductase family 1 (AKR1) is a family of aldo-keto reductase enzymes that is involved in steroid metabolism. It includes the AKR1C and AKR1D subgroups, which respectively consist of AKR1C1–AKR1C4 and AKR1D1. Together with short-chain dehydrogenase/reductases (SDRs), these enzymes catalyze oxidoreductions, act on the C3, C5, C11, C17 and C20 positions of steroids, and function as 3α-HSD, 3β-HSDs, 5β-reductases, 11β-HSDs, 17β-HSDs, and 20α-HSDs, respectively. The AKR1C enzymes act as 3-, 17- and 20-ketosteroid reductases, while AKR1D1 acts as the sole 5β-reductase in humans.
Members:
AKR1A1; AKR1B1; AKR1B10; AKR1C1; AKR1C2; AKR1C3; AKR1C4; AKR1D1; Others
**Conductor of an abelian variety**
Conductor of an abelian variety:
In mathematics, in Diophantine geometry, the conductor of an abelian variety defined over a local or global field F is a measure of how "bad" the bad reduction at some prime is. It is connected to the ramification in the field generated by the torsion points.
Definition:
For an abelian variety $A$ defined over a field $F$ as above, with ring of integers $R$, consider the Néron model of $A$, which is a 'best possible' model of $A$ defined over $R$. This model may be represented as a scheme over $\mathrm{Spec}(R)$ (cf. spectrum of a ring) for which the generic fibre, constructed by means of the morphism $\mathrm{Spec}(F) \to \mathrm{Spec}(R)$, gives back $A$. Let $A^0$ denote the open subgroup scheme of the Néron model whose fibres are the connected components. For a maximal ideal $P$ of $R$ with residue field $k$, $A^0_k$ is a group variety over $k$, hence an extension of an abelian variety by a linear group. This linear group is an extension of a torus by a unipotent group. Let $u_P$ be the dimension of the unipotent group and $t_P$ the dimension of the torus. The order of the conductor at $P$ is
$$f_P = 2u_P + t_P + \delta_P,$$
where $\delta_P \in \mathbb{N}$ is a measure of wild ramification. When $F$ is a number field, the conductor ideal of $A$ is given by
$$\mathfrak{f} = \prod_P P^{f_P}.$$
Properties:
A has good reduction at P if and only if u_P = t_P = 0 (which implies f_P = δ_P = 0).
A has semistable reduction if and only if u_P = 0 (then again δ_P = 0).
If A acquires semistable reduction over a Galois extension of F of degree prime to p, the residue characteristic at P, then δP = 0.
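As a concrete illustration of these properties, consider the standard special case of an elliptic curve, so d = 1 (this example is not taken from the text above, but follows directly from the definitions); the δ_P = 0 claim for p ≥ 5 is the case p > 2d + 1 = 3 of the criterion stated below.

```latex
% Conductor exponent of an elliptic curve (d = 1) at a place P of residue characteristic p.
\[
f_P = 2u_P + t_P + \delta_P =
\begin{cases}
0 & \text{good reduction } (u_P = t_P = 0),\\
1 & \text{multiplicative reduction } (u_P = 0,\ t_P = 1,\ \delta_P = 0),\\
2 + \delta_P & \text{additive reduction } (u_P = 1,\ t_P = 0),\ \text{with } \delta_P = 0 \text{ for } p \ge 5.
\end{cases}
\]
```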
If p > 2d + 1, where d is the dimension of A, then δ_P = 0. If p ≤ 2d + 1 and F is a finite extension of Q_p of ramification degree e(F/Q_p), there is an upper bound expressed in terms of the function L_p(n), which is defined as follows: write n = ∑_{k≥0} c_k p^k with 0 ≤ c_k < p and set L_p(n) = ∑_{k≥0} k c_k p^k. Then
(∗) f_P ≤ 2d + e(F/Q_p) · ( p·⌊2d/(p−1)⌋ + (p−1)·L_p(⌊2d/(p−1)⌋) ).
Further, for every d, p, e with p ≤ 2d + 1 there is a field F/Q_p with e(F/Q_p) = e and an abelian variety A/F of dimension d so that (∗) is an equality. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**John Carbon**
John Carbon:
John A. Carbon is a professor emeritus of molecular and cellular biology at the University of California, Santa Barbara.
Biography:
He earned his B.S. degree in chemistry in 1952 at the University of Illinois, and his Ph.D. degree in biochemistry in 1955 from Northwestern University. He did basic research developing new anticancer drugs at Abbott Laboratories (North Chicago, IL) for 12 years (1956-1968). He joined the faculty of the University of California, Santa Barbara in 1968, and became professor emeritus in 1999. His research contributions include elucidation of the mechanism of genetic missense suppression in bacteria, the development of techniques to make genomic libraries using recombinant DNA, techniques for using yeast for DNA cloning, characterization of centromere DNA, and construction of the first artificial chromosomes. Many of his later research contributions were carried out in collaboration with his wife, Professor Louise B. Clarke. He was elected to membership in the United States National Academy of Sciences and the American Academy of Arts and Sciences in 1986. Carbon was among the founding scientific advisors of the Amgen Corporation. An endowed chair in Biochemistry and Molecular Biology at UC Santa Barbara was named for Carbon; the chair is currently held by Jamey Marth. Carbon and Louise Clarke published the Carbon-Clarke equation in 1976, which is used when constructing a clone library to calculate the number of clones required to ensure a given probability (usually > 99%) that any given sequence is represented, given the size of the genome and the average size of a clone. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
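A minimal sketch of the Carbon-Clarke calculation described above, assuming the usual form N = ln(1 − P) / ln(1 − f), where f is the fraction of the genome carried by one average clone; the genome and insert sizes used below are illustrative, not taken from the text.

```python
import math

def clarke_carbon_clones(genome_size_bp, avg_insert_bp, probability=0.99):
    """Number of clones N needed so that any given sequence is represented with
    the stated probability: N = ln(1 - P) / ln(1 - f), where
    f = avg_insert_bp / genome_size_bp is the fraction of the genome per clone."""
    f = avg_insert_bp / genome_size_bp
    return math.ceil(math.log(1 - probability) / math.log(1 - f))

# Illustrative numbers: a 4.6 Mb genome, 20 kb average inserts, 99% probability.
print(clarke_carbon_clones(4_600_000, 20_000, 0.99))  # about 1057 clones
```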
**Live event support**
Live event support:
Live event support includes staging, scenery, mechanicals, sound, lighting, video, special effects, transport, packaging, communications, costume and makeup for live performance events including theater, music, dance, and opera. These elements all serve the same goal: to convince live audience members that there is no better place they could be at that moment. This is achieved by establishing a bond between performer and audience. Live performance events have a long history of visual scenery, lighting and costume, and a shorter history of visual projection and sound amplification and reinforcement.
Visual support:
Live event visual amplification Introduction Live event visual amplification is the display of live and pre-recorded images as a part of a live stage event. Visual amplification began when films, projected onto a stage, added characters or background information to a production.
35 mm motion picture projectors became available in 1910, but it is not known which theatre or opera company first used a movie in a stage production. In 1935, less costly 16 mm film equipment allowed many other performance groups and school theaters to use motion pictures in productions.
In 1970, closed-circuit video cameras and videocassette machines became available and live event visual amplification came of age. For the first time, live closeups of stage performers could be displayed in real time. These systems also made it possible to show pre-recorded videos that added information and visual intensity to a live event.
One of the first video touring systems was created by video designer TJ McHose in 1975 for the rock band The Tubes using black and white television monitors. In 1978, TJ McHose designed a touring color video system that enlarged performers at the Kool Jazz Festivals in sports stadiums across the United States.
Live event visual reinforcement Introduction Live event visual reinforcement is the addition of projected lighting effects and images onto any type of performance venue.
Visual support:
Visual reinforcement began more than 2000 years ago. In China during the Han Dynasty, shadow puppetry was invented to "bring back to life" Emperor Wu's favorite concubine. Mongolian troops spread shadow play throughout Asia and the Middle East in the 13th century. Shadow puppetry reached Taiwan in 1650, and missionaries brought it to France in 1767. The next major advance in visual reinforcement for events was the magic lantern, first conceptualized by Giovanni Battista della Porta in his 1558 work Magiae naturalis. The magic lantern became practical by 1750 with the oil lamp and glass lenses. Special effect animation attachments were added in the 1830s. In 1854, the Ambrotype positive photographic process on glass made magic lantern slide creation much less expensive.
Visual support:
Magic lanterns were greatly improved by the application of limelight to live stage production in 1837 at Covent Garden Theatre and improved again when electric arc lighting became available in 1880.
In 1910, Adolf Linnebach invented the Linnebach lantern, a lensless wide-angle glass slide projector. In 1933, the Gobo metal shadow pattern for the ellipsoidal spotlight allowed images to appear and disappear by dimmer control.
In 1935, 16 mm Kodachrome film projectors added the first fully animated visual reinforcement to live events.
Visual support:
Timeline
1600: Shadow play - leather or paper puppets cast shadows on a translucent screen
1760: Magic lantern - painted slide projector; Phantasmagoria ghost effects projector
1905: Linnebach lantern - Munich Opera
1933: Gobo - metal shadow mask adds patterns to ellipsoidal spotlights
1940: Overhead projector - later used for psychedelic light shows
1950: Slide projector - 35 mm Kodak Carousel
1965: Thomas Wilfred describes a highly detailed system to create event scenery using rear projections
1967: Liquid projector - psychedelic liquid light shows; Joshua Light Show at The Fillmore for The Grateful Dead, Big Brother and the Holding Company and many other Summer of Love bands
Audio support:
Live event sound reinforcement Introduction A sound reinforcement system is a form of professional audio; such systems were first developed for movie theatres in 1927, when the first talking picture, The Jazz Singer, was released. Movie theatre sound was greatly improved in 1937 when the Shearer Horn system debuted. One of the first large-scale outdoor public address systems was at the 1939 New York World's Fair.
Audio support:
In the 1960s, rock and roll concerts promoted by Bill Graham at The Fillmore created a need for quickly changeable sound systems. In the early 1970s, Graham founded FM Productions to provide touring sound and light systems. By 1976 in San Francisco, the technical debate over infinite baffle vs horn-loaded enclosures, and line arrays vs distributed driver arrays, was ongoing at FM because of the proximity of The Grateful Dead and their scene Ultrasound, John Meyer, and others. But at that time there were parallel developments in other parts of the United States - Showco (Dallas) and Clair Bros (Philadelphia) had different approaches; Clair in particular was moving in the direction of modular full-range enclosures. They would rig as many as needed (or clients like Bruce Springsteen could afford) in whatever configuration they thought would cover a particular venue. Stanal Sound in southern California used fiberglass futuristic looking equipment for artists like Kenny Rogers.
Audio support:
Timeline
1876: Loudspeaker - Alexander Graham Bell
1878: Carbon microphone / amplifier
1924: Loudspeaker - moving-coil patent - Chester W. Rice & E. Kellogg
1924: Loudspeaker - ribbon - Walter H. Schottky
1930: Vacuum tube amplifier
1937: Loudspeaker - Shearer Horn movie theatre system
1939: Public address outdoor system - 1939 New York World's Fair
1945: Loudspeaker - coaxial - Altec "Voice of the Theatre"
1953: Loudspeaker - electrostatic patent - Arthur Janszen
1953: Microphone - wireless
1965: Loudspeaker - woofer
1965: Loudspeaker - subwoofer
1970: Microphone - condenser
1974: Loudspeaker - Sensurround movie sound system for "Earthquake"
1974: Loudspeaker - Dolby Stereo 70 mm Six Track
1975: Loudspeaker - touring - McCune JM-3 - John Meyer
1979: Loudspeaker - Meyer Sound Laboratories - Grateful Dead wall of sound
1983: Loudspeaker - THX movie sound system for Star Wars
Transportation support:
Efficient and timely transportation is essential for live event productions.
Transportation support:
Touring packaging Well-designed touring systems unload from the truck gently, roll easily into their stage location, and connect to each other quickly. A well-designed system includes duplicates of critical components and "field-replaceable" items such as cables, switches and fuses. Every component should be protected by a well-padded road case that has room for all connector cables and allows easy access to the components, both for fast cable re-patching to bypass a bad component and for repairs during a tour. The road cases need good ventilation and, for outdoor use, should be white to minimize solar heat buildup. Road case sizes should be modular so they pack tightly together on the truck. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ABNT NBR 15605**
ABNT NBR 15605:
The ABNT NBR 15605 is the technical document of the SBTVD standards that describes in detail aspects regarding content security issues and copy protection, also known as Digital Rights Management (DRM). It's a detailed reference for manufacturers and content providers that aim to coordinate transmission and reception protection systems in a transparent and effective way for mass viewing.
The standard was written by telecommunications and television experts from many countries, with their work coordinated by the SBTVD Forum, and covers in detail the aspects of content security and copy protection that apply to SBTVD. The complete document can be found and downloaded freely in English, Spanish and Portuguese at ABNT's website.
Introduction:
The security aspects of the Brazilian Digital Terrestrial Television Standards are described in a document published by ABNT, the Brazilian Association of Technical Standards (Associação Brasileira de Normas Técnicas), the ABNT NBR 15605:2008 – Digital terrestrial television – Security issues – Copy control.
The standard addresses the protection of transmitted content against inappropriate and unauthorized use through protected communication protocols and interfaces. The document also elaborates on the security aspects required for applications transmitted over the air and for access to specific portions of a receiver's hardware.
Document technical overview:
In order to protect the contents of digital terrestrial television broadcasting, the standard defines rules regarding interfaces and recording media. This means the content protection information transmitted by broadcasting stations shall be reflected on all the interfaces between receiver units and peripheral equipment.
Internationally available copy-protection tools are defined for the digital video output, audio output and high-speed interfaces. All digital outputs (e.g. HDMI, DVI) shall be protected by HDCP and DTCP. Additionally, the resolution of the analogue video output must be limited to 350,000 pixels, equivalent to standard definition, whenever copy-protection signaling is transmitted.
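A purely illustrative sketch of the output-control logic just described; the class and function names below are hypothetical and are not defined by ABNT NBR 15605, and a real receiver enforces these rules in firmware rather than application code.

```python
# Illustrative only: Output and configure_output are hypothetical names, not from the
# standard. The logic mirrors the rules described above: digital outputs require link
# protection (HDCP/DTCP), and analogue video is constrained to roughly standard
# definition whenever copy-protection signaling is present.
from dataclasses import dataclass

SD_PIXEL_LIMIT = 350_000  # standard-definition cap for analogue video output

@dataclass
class Output:
    kind: str                     # "digital" (e.g. HDMI, DVI) or "analogue"
    link_protected: bool = False  # HDCP/DTCP available and negotiated

def configure_output(out: Output, copy_protection_signalled: bool) -> dict:
    if not copy_protection_signalled:
        return {"enabled": True, "constraint": None}
    if out.kind == "digital":
        # Unprotected digital outputs must not carry the protected content.
        return {"enabled": out.link_protected, "constraint": "HDCP/DTCP required"}
    return {"enabled": True, "constraint": f"limit to {SD_PIXEL_LIMIT} pixels"}

print(configure_output(Output("digital", link_protected=True), True))
print(configure_output(Output("analogue"), True))
```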
Defined implementation criteria ensure that receiver units are designed and manufactured in such a way that attempts to defeat or bypass the functional requirements are effectively prevented.
These documents are also officially available at ABNT website.
Summary:
The requirements established for security in the Brazilian digital television standard are in line with the current set of technical protection measures commonly used worldwide for security of free-to-air high definition content.
The ABNT NBR 15605:2008 – Digital terrestrial television – Security issues – Copy control describes in detail the required security features and limitations that must be applied on the receivers side in order to allow for protection against unauthorized use of information and content. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Neyman–Pearson lemma**
Neyman–Pearson lemma:
In statistics, the Neyman–Pearson lemma was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The Neyman-Pearson lemma is part of the Neyman-Pearson theory of statistical testing, which introduced concepts like errors of the second kind, power function, and inductive behavior. The previous Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman-Pearsonian flavor of statistical testing allows investigating the two types of errors. The trivial cases where one always rejects or accepts the null hypothesis are of little interest, but considering them shows that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly proceeded to restrict their attention to the class of all α-level tests while subsequently minimizing the type II error, traditionally denoted by β. Their seminal paper of 1933, including the Neyman-Pearson lemma, comes at the end of this endeavor, not only showing the existence of tests with the most power that retain a prespecified level of type I error (α), but also providing a way to construct such tests. The Karlin-Rubin theorem extends the Neyman-Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.
Statement:
Consider a test with hypotheses H0: θ = θ0 and H1: θ = θ1, where the probability density function (or probability mass function) is ρ(x∣θi) for i = 0, 1. For any hypothesis test with rejection set R, and any α ∈ [0,1], we say that it satisfies condition Pα if α = Pr_{θ0}(X ∈ R). That is, the test has size α (that is, the probability of falsely rejecting the null hypothesis is α).
Statement:
∃η ≥ 0 such that
x ∈ R ∖ A ⟹ ρ(x∣θ1) > η ρ(x∣θ0)
x ∈ R^c ∖ A ⟹ ρ(x∣θ1) < η ρ(x∣θ0)
where A is a set ignorable in both the θ0 and θ1 cases: Pr_{θ0}(X ∈ A) = Pr_{θ1}(X ∈ A) = 0. That is, we have a strict likelihood ratio test, except on an ignorable subset. For any α ∈ [0,1], let the set of level α tests be the set of all hypothesis tests with size at most α. That is, letting its rejection set be R, we have Pr_{θ0}(X ∈ R) ≤ α. In practice, the likelihood ratio is often used directly to construct tests (see likelihood-ratio test). However it can also be used to suggest particular test-statistics that might be of interest or to suggest simplified tests; for this, one considers algebraic manipulation of the ratio to see if there are key statistics in it related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).
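A minimal numerical sketch of this construction, assuming two simple Gaussian hypotheses about a single observation (H0: N(0,1) versus H1: N(1,1)); the threshold η is calibrated by simulation under θ0 so that the test has size approximately α, and the power against θ1 is then estimated the same way.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood_ratio(x, mu0=0.0, mu1=1.0, sigma=1.0):
    # log[ rho(x | theta1) / rho(x | theta0) ] for the two Gaussian densities
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

alpha = 0.05
# Calibrate eta: the (1 - alpha) quantile of the log-ratio under H0 gives size ~ alpha.
llr_under_h0 = log_likelihood_ratio(rng.normal(0.0, 1.0, size=200_000))
log_eta = np.quantile(llr_under_h0, 1 - alpha)

def reject_h0(x):
    return log_likelihood_ratio(x) > log_eta   # the likelihood-ratio rejection rule

power = reject_h0(rng.normal(1.0, 1.0, size=200_000)).mean()   # estimated power vs H1
print(f"log eta ~ {log_eta:.3f}, estimated power ~ {power:.3f}")
```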
Example:
Let X1, …, Xn be a random sample from the N(μ, σ²) distribution where the mean μ is known, and suppose that we wish to test H0: σ² = σ0² against H1: σ² = σ1². The likelihood for this set of normally distributed data is, up to a factor that does not depend on the data, exp{ −∑_{i=1}^{n} (x_i − μ)² / (2σ²) }.
We can compute the likelihood ratio to find the key statistic in this test and its effect on the test's outcome; up to a constant factor it is exp{ −(1/2)(σ0^{−2} − σ1^{−2}) ∑_{i=1}^{n} (x_i − μ)² }.
Example:
This ratio only depends on the data through ∑_{i=1}^{n} (x_i − μ)². Therefore, by the Neyman–Pearson lemma, the most powerful test of this type of hypothesis for this data will depend only on ∑_{i=1}^{n} (x_i − μ)². Also, by inspection, we can see that if σ1² > σ0², then Λ(x) is a decreasing function of ∑_{i=1}^{n} (x_i − μ)². So we should reject H0 if ∑_{i=1}^{n} (x_i − μ)² is sufficiently large. The rejection threshold depends on the size of the test. In this example, the test statistic can be shown to be a scaled chi-squared random variable under H0, and an exact critical value can be obtained.
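A short sketch of that exact critical value, assuming SciPy is available: under H0 the key statistic divided by σ0² is chi-squared with n degrees of freedom, so for σ1² > σ0² the size-α test rejects when the statistic exceeds σ0² times the (1 − α) quantile.

```python
import numpy as np
from scipy.stats import chi2

def variance_np_test(x, mu, sigma0_sq, alpha=0.05):
    """Most powerful size-alpha test of H0: sigma^2 = sigma0^2 against a larger
    alternative: reject when sum((x_i - mu)^2) > sigma0^2 * chi2.ppf(1 - alpha, n)."""
    statistic = float(((x - mu) ** 2).sum())          # the key statistic from the ratio
    critical = sigma0_sq * chi2.ppf(1 - alpha, df=len(x))
    return statistic, critical, statistic > critical  # last entry True => reject H0

x = np.random.default_rng(1).normal(loc=0.0, scale=1.5, size=30)  # simulated data, sigma^2 = 2.25
print(variance_np_test(x, mu=0.0, sigma0_sq=1.0))
```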
Application in economics:
A variant of the Neyman–Pearson lemma has found an application in the seemingly unrelated domain of the economics of land value. One of the fundamental problems in consumer theory is calculating the demand function of the consumer given the prices. In particular, given a heterogeneous land-estate, a price measure over the land, and a subjective utility measure over the land, the consumer's problem is to calculate the best land parcel that they can buy – i.e. the land parcel with the largest utility, whose price is at most their budget. It turns out that this problem is very similar to the problem of finding the most powerful statistical test, and so the Neyman–Pearson lemma can be used.
Uses in electrical engineering:
The Neyman–Pearson lemma is quite useful in electronics engineering, namely in the design and use of radar systems, digital communication systems, and in signal processing systems. In radar systems, the Neyman–Pearson lemma is used in first setting the rate of missed detections to a desired (low) level, and then minimizing the rate of false alarms, or vice versa.
Neither the false-alarm rate nor the missed-detection rate can be made arbitrarily low (let alone zero) without sacrificing the other. All of the above also applies to many systems in signal processing.
Uses in particle physics:
The Neyman–Pearson lemma is applied to the construction of analysis-specific likelihood ratios, used, for example, to test for signatures of new physics against the nominal Standard Model prediction in proton-proton collision datasets collected at the LHC.
Discovery of the lemma:
Neyman wrote about the discovery of the lemma as follows. Paragraph breaks have been inserted.
Discovery of the lemma:
I can point to the particular moment when I understood how to formulate the undogmatic problem of the most powerful test of a simple statistical hypothesis against a fixed simple alternative. At the present time [probably 1968], the problem appears entirely trivial and within easy reach of a beginning undergraduate. But, with a degree of embarrassment, I must confess that it took something like half a decade of combined effort of E. S. P. [Egon Pearson] and myself to put things straight. The solution of the particular question mentioned came on an evening when I was sitting alone in my room at the Statistical Laboratory of the School of Agriculture in Warsaw, thinking hard on something that should have been obvious long before. The building was locked up and, at about 8 p.m., I heard voices outside calling me. This was my wife, with some friends, telling me that it was time to go to a movie. My first reaction was that of annoyance. And then, as I got up from my desk to answer the call, I suddenly understood: for any given critical region and for any given alternative hypothesis, it is possible to calculate the probability of the error of the second kind; it is represented by this particular integral. Once this is done, the optimal critical region would be the one which minimizes this same integral, subject to the side condition concerned with the probability of the error of the first kind. We are faced with a particular problem of the calculus of variation, probably a simple problem.
Discovery of the lemma:
These thoughts came in a flash, before I reached the window to signal to my wife. The incident is clear in my memory, but I have no recollections about the movie we saw. It may have been Buster Keaton. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Safotibant**
Safotibant:
Safotibant (INN) also known by the research code LF22-0542 is a non-peptide bradykinin B1 antagonist. It displayed binding Ki values of 0.35 and 6.5 nM at cloned human and mouse B1 receptors, respectively, while having no affinity for either human, mouse, or rat B2 receptors at concentrations up to 10 μM. This means that LF22-0542 is at least 4000 times selective for the B1 receptor over the B2 receptor. Systemic administration of LF22-0542 inhibited acute pain induced by acetic acid, formalin, and a hot plate. It also reversed acute inflammatory pain induced by carrageenan, and persistent inflammatory pain induced by CFA. In a neuropathic pain model, LF22-0542 reversed the thermal hyperalgesia, but not the mechanical hyperalgesia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Redfish (specification)**
Redfish (specification):
The Redfish standard is a suite of specifications that deliver an industry standard protocol providing a RESTful interface for the management of servers, storage, networking, and converged infrastructure.
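A minimal sketch of what that RESTful interface looks like in practice: /redfish/v1/ is the standard service root and /redfish/v1/Systems the standard systems collection, while the host name, credentials, and use of HTTP basic authentication below are placeholders (real services may require session-based authentication and valid TLS certificates).

```python
# Minimal Redfish client sketch using plain HTTPS + JSON.
# Host and credentials are placeholders; verify=False is only for self-signed BMC certs.
import requests

BMC = "https://bmc.example.com"
AUTH = ("admin", "password")

root = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False).json()
print(root.get("RedfishVersion"))

systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    system = requests.get(BMC + member["@odata.id"], auth=AUTH, verify=False).json()
    print(system.get("Name"), system.get("PowerState"))
```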
History:
The Redfish standard was developed under the SPMF umbrella at the DMTF, starting in 2014. The first specification with base models (1.0) was published in August 2015. In 2016, models for BIOS, disk drives, memory, storage, volume, endpoint, fabric, switch, PCIe device, zone, software/firmware inventory and update, multi-function NICs, host interface (KCS replacement) and privilege mapping were added. In 2017, models for composability and location, as well as errata, were added. There is work in progress for Ethernet switching, DCIM, and OCP.
History:
In August 2016, SNIA released a first model for network storage services (Swordfish), an extension of the Redfish specification.
Industry adoption:
Redfish support on servers:
Advantech SKY Server BMC
Dell iDRAC BMC with minimum iDRAC 7/8 FW 2.40.40.40, iDRAC9 FW 3.00.00.0
Fujitsu iRMC S5 BMC
HPE iLO BMC with minimum iLO4 FW 2.30, iLO5
HPE Moonshot BMC with minimum FW 1.41
Lenovo XClarity Controller (XCC) BMC with minimum XCC FW 1.00
Supermicro X10 BMC with minimum FW 3.0 and X11 with minimum FW 1.0
IBM Power Systems BMC with minimum OpenPOWER (OP) firmware level OP940
IBM Power Systems Flexible Service Processor (FSP) with minimum firmware level FW860.20
Cisco Integrated Management Controller with minimum IMC SW Version 3.0
Redfish support on BMC firmware:
Insyde Software Supervyse BMC
OpenBMC, a Linux Foundation collaborative open-source BMC firmware stack
American Megatrends MegaRAC Remote Management Firmware
Vertiv Avocent Core Insight Embedded Management Systems
Software using Redfish APIs:
The OpenStack Ironic bare metal deployment project has a Redfish driver.
Industry adoption:
Ansible has multiple Redfish modules for remote management, including redfish_info, redfish_config, and redfish_command.
ManageIQ
Redfish libraries and tools:
DMTF libraries and tools
GoLang gofish
Mojo::Redfish::Client
python-redfish
Sushy
Redfish is used by both proprietary software (such as HPE OneView) and FLOSS software (such as OpenBMC).
Benefits of Redfish:
Redfish offers several benefits for admins, such as:
Easy integration with commonly used technology such as REST or JSON
Better performance and security than other platform management solutions
The ability to manage data center components remotely | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Variations of golf**
Variations of golf:
Variations of golf include methods of scoring, starting procedures, playing formats, golf games, and activities based on or similar to the sport of golf which involve golf-like skills or goals.
Variations of golf:
Some variations are essentially identical to golf, but with only minor differences or focusing on a specific aspect of the game, while others are more distant and arguably not simple variations but distinct games. Many of these variations are played in non-professional settings, without the presence of officials and sometimes without strict adherence to any official rules. Sometimes the rules are in place to provide a structure for side-betting that is independent of the final "traditional" score.
Scoring formats:
Stroke play In stroke play, the score is derived by counting the total number of strokes taken.
Match play In match play, the score is derived by counting the total number of holes "won" and subtracting the number of holes "lost".
Scoring formats:
Stableford Under the Stableford scoring system the player gains points according to the number of strokes taken on each hole in relation to par. Standard scoring is 1 point for a bogey, 2 points for a par, 3 points for a birdie, 4 points for an eagle. The points achieved for each hole of the round or tournament are added to produce the total points score, and the player with the highest score wins.
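A minimal sketch of the standard points table just described (1 for a bogey, 2 for a par, 3 for a birdie, 4 for an eagle, 0 for anything worse than a bogey); the hole scores below are made up for illustration.

```python
def stableford_points(strokes, par):
    """Standard Stableford: 2 points for par, +1 per stroke under par, -1 per stroke
    over par, never going below zero (so a double bogey or worse scores 0)."""
    return max(0, 2 - (strokes - par))

holes = [(5, 4), (3, 3), (2, 3), (6, 4)]                 # (strokes, par) for four sample holes
print(sum(stableford_points(s, p) for s, p in holes))    # 1 + 2 + 3 + 0 = 6
```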
Scoring formats:
Par and bogey In par and bogey competitions each participant competes in match play against the course. On each hole, the player competes against par or bogey (in the traditional sense), and "wins" if they score a birdie or better, "lose" if they score a bogey or worse, and "halve" by scoring par. The player with the best win–loss differential is the winner.
Playing formats:
In addition to playing as an individual, golf affords the opportunity to play in many pairs and team formats.
Playing formats:
Foursomes Foursomes, or alternate shot, is a pairs format. Each pair has only one ball in play and players alternate playing strokes until the hole is completed. Foursomes can be played as match play or stroke play. A variant of foursomes is greensomes, also called Scotch Foursomes or modified alternate shot. In greensomes, both players tee off and then select the ball with which to complete the hole. The player who did not hit the chosen first shot plays the second shot, and play then alternates as in foursomes. A variant of greensomes, often referred to as gruesomes or bloodsomes, is sometimes played where the opposing team chooses which of their opponents' tee shots they should use, usually the worse one, which may even be unplayable. Play then continues as in greensomes. Another variation of foursomes is Chapman, also known as Pinehurst or American Foursomes. Under Chapman rules, both players tee off and then play their partner's ball for the second shot before alternately taking strokes, having selected the ball with which to complete the hole; the next (third) stroke is played by the player who hit the chosen ball from the tee.
Playing formats:
Four-ball Four-ball (also known as better-ball, and sometimes best-ball) is a pairs format. Each player plays their own ball, with the better of the two scores on each hole counting as the pair's score. Four-ball can be played as match play or stroke play.
Best ball In best ball, each member of the team plays their own ball as normal, but the lowest/best score of all the players on the team counts as the team's score on each hole.
Variations of best ball include Bowmaker, 1-2-3 Best Ball (or ChaChaCha), Fourball Alliance, Arizona Shuffle and Low Ball/High Ball; in each of these formats a set number of the players' scores count for the team on each hole. The term best ball is also sometimes used when referring to four-ball.
Playing formats:
Scramble In a scramble each player in a team tees off on each hole, and the players decide which shot was best. Every player then plays their second shot from within a club length of where the best ball has come to rest, and the procedure is repeated until the hole is finished. The format is used in the PGA Tour's QBE Shootout and Father/Son Challenge, titled since 2020 as the PNC Championship. There are many variations on the scramble format. Commonly played ones include Ambrose, which uses net scoring with a team handicap; Florida scramble, where after each stroke the player whose ball is selected does not play the next one; and Texas scramble, in which a set number of each team member's tee shots must be used. In a champagne scramble or shamble, each player tees off on each hole before selecting the best drive and completing the hole using a variation of the best-ball format.
Playing formats:
Patsome Patsome is played in pairs with holes being played in a rotation of four-ball, greensomes and foursomes formats. Typically, the first six holes will be four-ball, the next six greensomes, and the final six foursomes.
Golf games and betting:
Nassau The Nassau is three bets in one: best score on the front nine, best score on the back nine and best score over the full 18. The Nassau is perhaps the most common bet among golfers and can be applied to all standard scoring formats.
Golf games and betting:
Skins In a skins game, golfers compete on each hole as a separate contest. The player with the best outright score on each hole wins the "skin", which is prize money in the professional game or a wager for amateurs. If the hole is tied by any number of the competitors, the skin rolls over to the next hole so that it is then worth two skins. It is common for the value of the skins to increase as the round progresses.
Golf games and betting:
Nines Nines, or 9-points, is a variant of match play typically played among threesomes, where each hole is worth a total of nine points. The player with the lowest score on a hole receives five points, the next-lowest score three, and the highest score one. Ties are generally resolved by summing the points contested and dividing them among the tying players; a two-way tie for first is worth four points to both players, a two-way tie for second is worth two points to both players, and a three-way tie is worth three points to each player. The player with the highest score after 18 holes (in which there are 162 points to be awarded) wins the game. This format can be used to wager on the game systematically; players each contribute the same amount of money to the pot, and a value is assigned to each point scored (or each point after 18) based on the amount of money in the pot, with any overage going to the overall winner. A variation on nines is sixes, or split sixes, in which six points are available on each hole, awarded 4-2-0 with ties resolved as in nines.
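A minimal sketch of the 5-3-1 allocation with the tie rule described above (tied players split the points they contest equally); the function name and hole scores are illustrative.

```python
def nines_points(scores):
    """scores: hole scores for the three players; returns their points for the hole."""
    prizes = [5, 3, 1]
    order = sorted(range(3), key=lambda i: scores[i])         # best score first
    points = [0.0, 0.0, 0.0]
    pos = 0
    while pos < 3:
        tied = [i for i in order if scores[i] == scores[order[pos]]]
        share = sum(prizes[pos:pos + len(tied)]) / len(tied)  # split the contested prizes
        for i in tied:
            points[i] = share
        pos += len(tied)
    return points

print(nines_points([4, 4, 5]))   # two-way tie for first  -> [4.0, 4.0, 1.0]
print(nines_points([3, 5, 5]))   # two-way tie for second -> [5.0, 2.0, 2.0]
print(nines_points([4, 4, 4]))   # three-way tie          -> [3.0, 3.0, 3.0]
```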
Golf games and betting:
Bingo Bango Bongo Bingo Bango Bongo is a points-based game that can be played by two or more players or teams. In Bingo Bango Bongo, three types of achievements are rewarded with a point: first player to get their ball on the green (bingo), closest to the hole once all balls are on the green (bango), first to hole out (bongo). The player with the lowest outright score on the hole wins 2 points; if two or more players tie, no points are given out. At the end of the game the player with the most points wins. Bingo Bango Bongo is considered a game for skilled players, and its point-based scoring makes it a popular side-game for wagering.
Golf games and betting:
Wolf Wolf is a golf game for groups of four. It is scored individually but played as 2-on-2 better-ball or 3-on-1 best-ball in teams that are determined at the start of each hole. The order of play from the tee is decided prior to the start and is kept throughout the round, except the starting player (the "Wolf") rotates each hole, i.e. if the order for hole 1 is ABCD, the order for hole 2 would then be BCDA, etc. Everyone plays individually, with each of the players on the team with the lowest individual score on each hole earning a point. After hole 16 the rotation has completed four times, and it is usual for the player in last place to be designated as the Wolf for the final two holes. The player with the most points at the end of the round wins.
Golf games and betting:
At the start of each hole, the Wolf decides whether or not they want each of the other players as their team-mate for the hole immediately after each of them tee off. The Wolf may choose to reject all the other players, in which case the hole is played as 3 against 1 and the points are doubled. The wolf can also elect to be a "Lone Wolf" before their own tee shot, in which case the points are multiplied by 4, or after they have played but before the others, in which case the points are multiplied by 3.
Golf games and betting:
Acey deucey Aces and deuces, or acey deucey, is a bet in which there is a winner, two modest losers, and one big loser on each hole. A game for groups of four, the low scorer ("ace") on each hole wins a certain amount from each of the other three players; while the high scorer ("deuce") on each hole owes each of the other three. The ace is usually worth twice the deuce, and there is nothing for ties.
Golf games and betting:
Round robin Round robin, also known as Hollywood or sixes, is a game for groups of four. Players compete against each other in pairs, rotating partners every six holes.
Criers and whiners Criers and whiners is known by many different names including No Alibis, Replay, Play it Again, and Mulligans. As the latter would suggest, it's a game of mulligans with handicaps being translated into the number of do-overs golfers are allowed during the round.
Side bets Sandies A betting game whereby any player making par after having been in a bunker on the hole wins points or money. The bunker can be at any spot on the hole, yet particulars are dependent on local rules.
Barkies Barkies, sometimes called Woodies or Seves (as in Seve Ballesteros), are paid automatically to any player who makes par on a hole on which they hit a tree. The value of a Barkie is determined before the round.
Arnies Arnies are side bets whose value should be determined prior to the round. They are won automatically by any golfer who makes a par without having managed to get their ball into the fairway. Named in honor of Arnold Palmer, who made quite a few "Arnies" in his time.
Starting procedures:
Competition format and organization sometimes necessitate variations on the usual starting procedure, where everyone begins from the first tee and plays all holes in order though to the eighteenth, in order for the course to accommodate all competitors effectively.
Starting procedures:
Two-tee start Some 18-hole courses are configured in loops, usually of 9 holes, that start and end close to the clubhouse which facilitate two or more starting points. In large field tournaments, especially on professional tours before the field is reduced by a cut, a two tee start is commonplace with the field being split between starting on the first tee and the tenth tee (sometimes the ninth or eleventh depending on proximity to the clubhouse).
Starting procedures:
Shotgun start Shotgun starts are mainly used for amateur tournament or society play, and allows all players to start and finish their round at roughly the same time. In this variant, each of the playing groups starts their game simultaneously on a different hole, for example a group starting on hole 5 will play through to the 18th hole and continue with hole 1, ending their round on hole 4.
Golf based games:
Golf based games may be minor adaptations of the sport, games focused on a specific skill, or hybrid games that integrate skill-sets and equipment from other sports or games. The term indoor golf encompasses a wide array of different golf related activities, including simulators and various practice facilities. Some games retain most of the characteristics of golf but make specific adaptations. For example, pitch and putt is played on courses made up of very short holes; hickory golf eschews much modern technology; beach golf and snow golf are played on very different surfaces to a normal golf course; park golf uses a special club, plastic resin ball and course; urban golf does not use a traditional golf course; and speed golf is simply golf against the clock, but played with a limited number of clubs.
Golf based games:
Activities that focus on a single aspect of golf include miniature golf which is a putting-based game, long drive where players compete to hit the ball the farthest, target golf where points are awarded corresponding to proximity to a target, and clock golf in which players putt to a single hole from each of 12 points arranged in a circle.
Golf based games:
Games based on golf but using items other than clubs and a golf ball, often incorporating skills from other activities, include disc golf, footgolf, fungo golf, codeball, dart golf, GolfCross, Sholf and Swingolf. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Phreaking box**
Phreaking box:
A phreaking box is a device used by phone phreaks to perform various functions normally reserved for operators and other telephone company employees.
Most phreaking boxes are named after colors, due to folklore surrounding the earliest boxes which suggested that the first ones of each kind were housed in a box or casing of that color. However, very few physical specimens of phreaking boxes are actually the color for which they are named.
Phreaking box:
Most phreaking boxes are electronic devices which interface directly with a telephone line and manipulate the line or the greater system in some way, either by generating audible tones that invoke switching functions (for example, a blue box), or by manipulating the electrical characteristics of the line to disrupt normal line function (for example, a black box). However, a few boxes use mechanical or acoustic methods; for example, it is possible to use a pair of properly tuned whistles as a red box.
List of phreaking box types:
This is not a comprehensive list. Many text files online describe various "boxes" in a long list of colors, some of which are fictional (parodies or concepts which never worked), minor variants of boxes already listed or aftermarket versions of features (line in use indicators, 'hold' and 'conference' buttons) commonly included in standard multi-line phones.
This list of boxes does not include wiretapping "bugs", pirate broadcasting apparatus or exploits involving computer security. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Safe bottle lamp**
Safe bottle lamp:
The Safe bottle lamp, called sudeepa or sudipa for good lamp, is a safer kerosene lamp designed by Wijaya Godakumbura of Sri Lanka. The safety comes from heavier glass, a secure screw-on metal lid, and two flat sides which prevent it from rolling if knocked over.
History:
As a surgeon, Dr. Godakumbura saw many burn cases caused by kerosene lamp fires. Over 1 million homes in Sri Lanka do not have electricity and rely on kerosene lamps for illumination, often improvised lamps made from bottles. These tall lamps tip easily, and when they do, the wick holder often falls out and starts a sudden, intense fire. Often the fuel falls on a nearby person, setting them ablaze and resulting in severe, often fatal, burns.
History:
In 1992, Dr. Godakumbura set out to design a new lamp that was both safer, and inexpensive enough to be affordable by the impoverished Sri Lankans at risk for these fires. The resulting lamp is a small, flattened sphere, which resists tipping and rolling. It is made of thick glass to resist breaking, and has a screw-on metal cap that holds the wick in place and prevents spilling.
History:
In 1993, with contributions from numerous sources, including science fiction writer and Sri Lanka resident Arthur C. Clarke, and the Canadian High Commission, the lamp was put into production.
Available for a cost of less than US$0.25 each, over half a million of the new lamps have been sold, and Dr. Godakumbura hopes to continue producing the new lamps until use of improvised lamps drops to a small percentage of lamp use in Sri Lanka.
The Foundation:
Having received a Rolex Award for Enterprise in 1998, Dr. Godakumbura established the Safe Bottle Lamp Foundation (SBLF), a non-profit organization. The Foundation is governed by a board of directors and employs two full-time staff.
The Foundation:
In addition to the Rolex Award, the foundation and Dr. Godakumbura have received a range of other local and international awards and grants. Among these are a Lindbergh Foundation Grant and a BBC World Challenge Award. The project has been featured in many international publications such as TIME, Newsweek, Science and Nature, National Geographic and Le Figaro. Dr. Godakumbura has represented the foundation in many international conferences on burn and accident prevention as a speaker or as a participant. The foundation and the Sudeepa lamp have been promoted as a replicable solution for other developing countries where accidental burns due to unsafe lamps are prevalent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Internet Invention**
Internet Invention:
Internet Invention is a book by Gregory Ulmer. The book describes Ulmer's definition of "electracy" (as opposed to orality or literacy) and leads readers through activities that ask them to examine their interactions with four discourses, which Ulmer labels career, family, entertainment, and community.
Mystory:
"To approach knowledge from the side of not knowing what it is, from the side of one who is learning, not from that of one who already knows, is mystory."—Gregory Ulmer, Teletheory Mystory is the name of a new pedagogical genre created by Ulmer in his book Teletheory. It was a response to a suggestion by Hayden White that if the concept of history had been invented in the twentieth century rather than the nineteenth, it would be quite different. The idea was that if people had begun serious study of the past in the twentieth century rather than the nineteenth, the result would be quite different from what it is today.
Mystory:
Mystory is Ulmer's new approach towards learning in general. A mystory itself is a website created by the student that explores the four discourses (career, family, entertainment, and community) and then has the students find links between the various discourses to gain a new understanding of how they think. The final synthesis of these ideas is represented by a self-created emblem that represents the student, their history, and their method of thinking.
Mystory:
The final product of the mystory is a new approach to learning that allows students to learn better than they would by following the standard methods usually propagated within colleges and universities. By learning to learn based on themselves and their own culture rather than the culture of the school, students gain a deeper understanding of what they are learning. In addition, this new vantage point towards knowledge aids in thinking about problems and issues in society, which is part of the EmerAgency concept.
Mystory:
The idea behind the mystory comes from Ulmer's attempt to create electracy and the EmerAgency. The mystory site allows for students to re-evaluate their ways of thinking and then apply that to situations and ideas that the EmerAgency would deal with.
EmerAgency:
When Gregory Ulmer announced to his father and uncle that he was changing his major in college from economics and political science to English, they were astonished, believing that such a degree would have no practical value. However, Ulmer contends that no issue in the world can be solved without considering the human aspect of it. The EmerAgency is Ulmer's real life study of whether English can help to solve issues by considering them in light of humanity. He believes that this can be done through electracy, and if so, he posits that there is practical value in the field of English.
EmerAgency:
The EmerAgency is composed of those individuals who are building a widesite based on the exercises in Internet Invention. Through application in those individuals’ widesites, Ulmer is building the EmerAgency, which he describes as being “a conceptual consulting agency.” Because the EmerAgency was created to build a mass of consultants who are all working on this same question, the members of the EmerAgency take on the slogan, “Problems B Us."
Style:
Internet Invention is roughly divided into four sections, each one covering one of the four discourses. Each chapter is further divided into smaller topics that loosely build upon each other with a series of concepts, examples, and exercises for the reader. Keeping in line with the concept of electracy, Ulmer borrows concepts and terms from many different sources to describe his ideas, although the background information describing these terms is often kept to a bare minimum or absent. Likewise, the examples he offers to illustrate the concepts are taken from many different writers and outside sources. Apart from the handful of images that appear on the title pages of major sections of the book, Internet Invention contains no images, despite the interplay between images and text being a major focus in the book and the concept of electracy itself.
Style:
Ulmer's prose is complex, and the sheer number of specialized terms and prerequisite knowledge required to understand all of the concepts offered within make Internet Invention more accessible to those who have adequate knowledge of rhetoric and writing. The book is laid out in a way that makes it ideal for study in a class or for individual reading.
Postmodern Influences:
In Internet Invention, Ulmer draws heavily on a number of theorists who are considered postmodern or poststructuralist. Specifically, Jacques Derrida’s work is referenced throughout the book, and Derrida plays a central role in Ulmer's own Mystory, as it takes shape throughout the book. In some senses, Internet Invention is at least in part an attempt to apply several Derridean ideas to the field of communication and technology. To this extent, the book is a successor to one of Ulmer's earlier works, Applied Grammatology, his 1985 book that attempts to frame a practical pedagogy based largely on Derrida's Of Grammatology.
Postmodern Influences:
Another central figure is Roland Barthes, particularly in context of Barthes discussion of the photographic image. Barthes's concepts of the studium and the punctum figure heavily in Ulmer's description of the mechanism by which images are read.
Other significant postmodern figures whom Ulmer references include Martin Heidegger, Michel Foucault, Algirdas Greimas, Terry Eagleton, Gilles Deleuze, and Giorgio Agamben.
Postmodern Influences:
Internet Invention goes beyond simply using references to specific authors who fall under the umbrella of postmodernism. Both the concepts and the presentation of the book owe much to postmodern thought. Ulmer's thought and presentation throughout the work relies on highly idiosyncratic juxtaposition of concepts as well as the use of extended excerpts from other texts worked into each chapter, resulting in a type of verbal collage.
Postmodern Influences:
Reviewers have seen Ulmer's use of postmodern theory and style in Internet Invention as both a strong point of the work as well as a potential liability, particularly for use among undergraduate students unfamiliar with the figures and concepts he draws on. In reviewing Internet Invention for Enculturation, Jenny Edbauer writes: “Users will necessarily find Internet Invention’s language dis/re/orienting, for electracy itself is a reorientation of literacy . . . Though I hope Internet Invention is indeed the first of a new generation of writing texts, as Michael Salvo’s blurb says on the back cover, I fear that its squeals, stammers, and uncoordinated leaps will scare away many instructors.” Chidsey Dickson, in a review for Kairos, suggests that a cross-referenced glossary would be a valuable addition given the wide array of names, terms, and concepts used in the book.At the same time, the book has also been praised as an excellent example of making key concepts of postmodernity relevant and practical. Julie Kearney, in her review of the book for Computers and Composition Online, states that: “Ulmer tackles the complexities of the cutting edge theory and practice of electronic discourse from a detailed, innovative, and intelligent perspective.” | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Grimshaw (chess)**
Grimshaw (chess):
A Grimshaw is a device found in chess problems in which two pieces arriving on a particular square mutually interfere with each other. It is named after the 19th-century problem composer Walter Grimshaw. The Grimshaw is one of the most common devices found in directmates.
Examples with description:
The theme can be understood by reference to the displayed example by A. G. Corrias (published in Good Companion, 1917).
A. G. Corrias example The problem is a mate in two (White must move first and checkmate Black in two moves against any defense). The key is 1.Qb1, which threatens 2.Qb7#. Black has three ways to defend against this.
One is to play 1...c3, giving his king a new flight square at c4, but this unguards d3, allowing White to mate with 2.Qd3#. It is the other two black defenses, however, which show the Grimshaw theme.
Black can play 1...Bb2, thus cutting off the white queen's path to b7. However, the bishop on b2 interferes with the a2 rook and stops it moving along the rank - this allows White to play 2.Qh1# (after a different black move, this would not be possible because of 2...Rg2, blocking the check).
Examples with description:
Black can instead play 1...Rb2, cutting off the white queen with the rook rather than the bishop. However, just as the bishop on b2 interferes with the rook, so the rook on b2 interferes with the bishop, allowing White to play 2.Qf5# (a mate not otherwise possible, because of 2...Be5, blocking the check).It is this mutual interference between two black pieces on the one square (in this case, a rook and a bishop on b2) that constitutes a Grimshaw.
Examples with description:
Second Example The key in the puzzle on the right is 1.Qd2. This move has no threat, but it leaves Black in zugzwang: Black must either move one of his bishops or rooks, or move a pawn. However, any bishop or rook move must unguard one of the squares d5, d6, d7 or d8, allowing White to mate on d5, d6 or d7 with the queen, and on d8 with the knight. The lines are:
1.Qd2!
1...Bxd2 2.Nd8#
1...Rb7 2.Qd5# (2...Bxd5 not possible)
1...Rc6 2.Qd5# (2...Bxd5 not possible)
1...Bb6 2.Qd6# (2...Rxd6 not possible; 2...exd6 not possible due to pin)
1...Bc6 2.Qd6# (2...Rxd6 not possible; 2...exd6 not possible due to pin)
1...Bb7 2.Qd7# (2...Rxd7 not possible)
1...Bc7 2.Qd7# (2...Rxd7 not possible)
1...Rb6 2.Nd8# (2...Bxd8 not possible)
1...Rc7 2.Nd8# (2...Bxd8 not possible)
1...e3 2.Bf5#
1...f3 2.Qxh6#
1...h5 2.Ng5#
Grimshaws involving pawns:
The pieces involved in Grimshaws are usually rook and bishop, as in the previous example, although Grimshaws involving pawns are also seen, as in this mate in two example by Frank Janet (published in the St. Louis Globe-Democrat, 1916): The key is 1.Qd7, threatening 2.Qf5#. As in the previous example, Black can defend by cutting White's queen off from its intended destination square, but two of these defences have fatal flaws in that they interfere with other pieces: 1...Be6 interferes with the pawn on e7, allowing 2.Qxc7# (2...e5 would be possible were the bishop not on e6) and 1...e6 interferes with the bishop, allowing 2.Qxa4# (2...Bc4 would be possible were the pawn not on e6). It is this mutual interference between bishop and pawn on e6 which constitutes the pawn Grimshaw. There are several other non-thematic black defences in this problem; see below for them all.
Grimshaws involving pawns:
1.Qd7 (threatening 2.Qf5#)
1...Be6 2.Qxc7#
1...e6 2.Qxa4#
1...Ne6 2.Nd5#
1...Ra5 2.Qd4#
1...Nxe3 2.fxe3#
1...Ng3 2.fxg3#
Multiple Grimshaws:
Sometimes, multiple Grimshaws can be combined in one problem. Here are two examples by Lev Ilych Loshinsky each with three Grimshaws.
Multiple Grimshaws:
First example This was first published in L'Italia Scacchistica, 1930. It is a mate in two. The key is 1.Rb1, with the threat 2.d4#. Each of Black's defences produces a Grimshaw interference which stops him from capturing White's mating piece. Black's defences, with White's replies, are:
1...Re6 2.Nd7# (2...Bxd7 not possible)
1...Be6 2.Bd6# (2...Rxd6 not possible)
1...Rg4 2.Ne6# (2...Bxe6 not possible)
1...Bg4 2.Bg1# (2...Rxg1 not possible)
1...Rb2 2.Qxc3# (2...Bxc3 not possible)
1...Bb2 2.Qf2# (2...Rxf2 not possible)
There is one other black defence: 1...Rd6 leading to the simple recapture 2.Bxd6# (this is essentially the same mate as that which follows 1...Be6).
Multiple Grimshaws:
Second example This second Loshinsky example, also a mate in two, is from Tijdschrift v.d. Nederlandse Schaakbond, 1930, and is one of the most famous of all chess problems. It is a complete block (if White could pass his first move, then he could reply to every black move with a mate), and White's key, 1.Bb3, holds this block, making no threat, but putting Black in zugzwang. Black has six defences leading to three Grimshaws, one of them a pawn Grimshaw:
1...Rb7 2.Rc6# (2...Bxc6 not possible)
1...Bb7 2.Re7# (2...Rxe7 not possible)
1...Rg7 2.Qe5# (2...Bxe5 not possible)
1...Bg7 2.Qxf7# (2...Rxf7 not possible)
1...Bf6 2.Qg4# (2...f5 not possible)
1...f6 2.Qe4# (2...Be5 not possible)
After other black moves, White can play one of the above moves to mate; the three exceptions are 1...f5, which takes away that square from the king and allows 2.Qd6#, and two recaptures: 1...Rxc7 2.Nxc7# and 1...Bxd4 2.Nxd4#.
Novotny:
A close relative of the Grimshaw is the Novotny, which is essentially a Grimshaw brought about by a white sacrifice on a square where it can be captured by two different black pieces – whichever black piece captures the white piece, it interferes with the other. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Data stream clustering**
Data stream clustering:
In computer science, data stream clustering is defined as the clustering of data that arrive continuously such as telephone records, multimedia data, financial transactions etc. Data stream clustering is usually studied as a streaming algorithm and the objective is, given a sequence of points, to construct a good clustering of the stream, using a small amount of memory and time.
History:
Data stream clustering has recently attracted attention for emerging applications that involve large amounts of streaming data. For clustering, k-means is a widely used heuristic but alternate algorithms have also been developed such as k-medoids, CURE and the popular BIRCH. For data streams, one of the first results appeared in 1980 but the model was formalized in 1998.
Definition:
The problem of data stream clustering is defined as: Input: a sequence of n points in metric space and an integer k. Output: k centers in the set of the n points so as to minimize the sum of distances from data points to their closest cluster centers.
This is the streaming version of the k-median problem.
Algorithms:
STREAM STREAM is an algorithm for clustering data streams described by Guha, Mishra, Motwani and O'Callaghan which achieves a constant factor approximation for the k-Median problem in a single pass and using small space.
To understand STREAM, the first step is to show that clustering can take place in small space (not caring about the number of passes). Small-Space is a divide-and-conquer algorithm that divides the data, S, into ℓ pieces, clusters each one of them (using k-means) and then clusters the centers obtained.
Algorithms:
Algorithm Small-Space(S):
Step 1: Divide S into ℓ disjoint pieces X1, …, Xℓ.
Step 2: For each piece Xi, run a clustering algorithm (e.g. k-means) to obtain O(k) centers, assigning each point in Xi to its closest center.
Step 3: Form the set X′ of all centers obtained in Step 2, weighting each center by the number of points assigned to it.
Step 4: Cluster the weighted set X′ to obtain the final k centers.
If in Step 2 we run a bicriteria (a,b)-approximation algorithm which outputs at most ak medians with cost at most b times the optimum k-Median solution, and in Step 4 we run a c-approximation algorithm, then the approximation factor of the Small-Space() algorithm is 2c(1+2b)+2b. We can also generalize Small-Space so that it recursively calls itself i times on a successively smaller set of weighted centers and achieves a constant factor approximation to the k-median problem.
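A minimal sketch of the two-level idea behind Small-Space, assuming scikit-learn is available and substituting plain k-means for the k-median approximation algorithms used in the analysis above; it illustrates the structure of the algorithm, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def small_space(S, k, n_pieces):
    pieces = np.array_split(S, n_pieces)                       # Step 1: divide S into pieces
    centers, weights = [], []
    for piece in pieces:                                       # Step 2: cluster each piece
        km = KMeans(n_clusters=k, n_init=10).fit(piece)
        centers.append(km.cluster_centers_)
        weights.append(np.bincount(km.labels_, minlength=k))   # Step 3: weight the centers
    X = np.vstack(centers)
    w = np.concatenate(weights)
    # Step 4: cluster the weighted intermediate centers to obtain the final k centers
    return KMeans(n_clusters=k, n_init=10).fit(X, sample_weight=w).cluster_centers_

rng = np.random.default_rng(0)
stream = np.vstack([rng.normal(c, 0.3, size=(500, 2)) for c in ([0, 0], [4, 4], [0, 4])])
print(small_space(stream, k=3, n_pieces=5))
```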
Algorithms:
The problem with Small-Space is that the number of subsets ℓ that we partition S into is limited, since it has to store the intermediate medians in memory. So, if M is the size of memory, we need to partition S into ℓ subsets such that each subset fits in memory (roughly n/ℓ points each) and so that the weighted ℓk centers also fit in memory, i.e. ℓk < M. But such an ℓ may not always exist.
Algorithms:
The STREAM algorithm solves the problem of storing intermediate medians and achieves better running time and space requirements. The algorithm works roughly as follows: the stream is read in chunks that fit in memory, each chunk is clustered and replaced by its weighted cluster centers, and the retained weighted centers are themselves re-clustered whenever they become too numerous, so that only a small set of weighted centers is kept at any time; a final clustering of these centers yields the k output centers. Other algorithms Other well-known algorithms used for data stream clustering are: BIRCH: builds a hierarchical data structure to incrementally cluster the incoming points using the available memory and minimizing the amount of I/O required. The complexity of the algorithm is O(N) since one pass suffices to get a good clustering (though results can be improved by allowing several passes).
Algorithms:
COBWEB: is an incremental clustering technique that keeps a hierarchical clustering model in the form of a classification tree. For each new point COBWEB descends the tree, updates the nodes along the way and looks for the best node to put the point on (using a category utility function).
Algorithms:
C2ICM: builds a flat partitioning clustering structure by selecting some objects as cluster seeds/initiators; a non-seed object is assigned to the seed that provides the highest coverage. The addition of new objects can introduce new seeds and falsify some existing old seeds; during incremental clustering, new objects and the members of the falsified clusters are assigned to one of the existing new or old seeds.
Algorithms:
CluStream: uses micro-clusters that are temporal extensions of the BIRCH cluster feature vector, so that it can decide whether a micro-cluster should be newly created, merged or forgotten, based on an analysis of the squared and linear sums of the current micro-clusters' data points and timestamps. At any point in time one can then generate macro-clusters by clustering these micro-clusters with an offline clustering algorithm such as k-means, producing a final clustering result. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
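A minimal sketch of the kind of additive summary a CluStream micro-cluster keeps is shown below (field and method names are our own; a real implementation also uses the timestamp statistics to decide which micro-clusters to age out):

```python
import numpy as np

class MicroCluster:
    """Additive summary of the points absorbed by one micro-cluster:
    a BIRCH-style cluster feature extended with timestamp sums."""
    def __init__(self, dim):
        self.n = 0                   # number of points absorbed
        self.ls = np.zeros(dim)      # linear sum of the points
        self.ss = np.zeros(dim)      # squared sum of the points (per dimension)
        self.t_sum = 0.0             # sum of timestamps
        self.t_sq = 0.0              # sum of squared timestamps

    def add(self, x, t):
        self.n += 1
        self.ls += x
        self.ss += x * x
        self.t_sum += t
        self.t_sq += t * t

    def merge(self, other):
        # Every field is additive, so merging two micro-clusters is a plain sum.
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss
        self.t_sum += other.t_sum
        self.t_sq += other.t_sq

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # RMS deviation from the centroid, recoverable from the two sums alone.
        var = self.ss / self.n - (self.ls / self.n) ** 2
        return float(np.sqrt(np.maximum(var, 0.0).mean()))
```

Because every field is additive, micro-clusters can be created, merged or discarded online, and a snapshot of them can later be handed to the offline clustering step that produces the macro-clusters.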
**Walking on water**
Walking on water:
Walking on water is an example of a superhuman task associated with some cultures. It may refer to: A Japanese myth about ninja, thought to be associated with Mizugumo.
Jesus walking on water, in the Christian gospels
Animal locomotion on the water surface
Walk on the Water, Walk on Water or Walking on Water may also refer to:
Film and television:
Summer's End (film) or Walk on Water, 1999 film
Walk on Water (film), 2004 Israeli film
Walking on Water (2002 film), Australian film
Walking on Water (2018 film), documentary film
"Walk on Water" (Grey's Anatomy), 2007 episode of Grey's Anatomy
Music:
Walk on Water (band), a Swedish contemporary Christian music band
Albums
Walk on the Water (album), a 1980 album by Gerry Mulligan
Walk on Water (Jerry Harrison album) (1990)
Walk on Water (Katrina and the Waves album) (1997)
Walk on Water (UFO album) (1995)
Songs
"Walk on the Water", a 1968 song by Creedence Clearwater Revival on the band's eponymous album
Walk on the Water (song), a 2009 song by Britt Nicole
"Walk on Water" (Aerosmith song) (1994)
"Walk on Water" (Basshunter song) (2009)
"Walk on Water" (Eddie Money song) (1988)
"Walk on Water" (Eminem song) (2017)
"Walk on Water" (Ira Losco song) (2016)
"Walk on Water" (Thirty Seconds to Mars song) (2017)
"Walk on Water", a 1972 song by Neil Diamond from Moods
"Walk on Water", a 1990 song by Dio from Lock Up the Wolves
"Walk on Water", a 1995 song by Audio Adrenaline from Bloom
"Walk on Water", a 1996 song by Ozzy Osbourne from the Beavis and Butt-Head Do America soundtrack
"Walk on Water", a 2000 song by Milk Inc. from Land Of The Living
"Walk on Water", a 2017 song by ASAP Mob from Cozy Tapes Vol. 2: Too Cozy
"Walk on Water", a 2015 song by Kat Dahlia from My Garden | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Male chest reconstruction**
Male chest reconstruction:
Male chest reconstruction refers to any of various surgical procedures to masculinise the chest by removing breast tissue or altering the nipples and areolae. Male chest reconstruction may be performed in cases of gynecomastia and gender dysphoria. Transmasculine people may pursue chest reconstruction, also known as top surgery, as part of transitioning.
Male chest reconstruction:
The removal of breast tissue in male chest reconstruction is a type of mastectomy called a subcutaneous (under the skin) mastectomy. This type of mastectomy removes tissue from inside the breast (subcutaneous tissue), as well as excess skin. The surgeon then contours the chest into a masculine shape, altering the size and position of the areolae and nipples as needed. People assigned female at birth who identify as genderqueer, non-binary, agender, or elsewhere within the gender-nonconforming umbrella or spectrum may opt to forgo nipple grafts, with the intent of having a completely blank, flat chest, or to have nipples tattooed on at a later date. Some patients may also request specific shapes for the nipples that will be reattached, such as hearts or stars; some surgeons may have no qualms with providing this service, while others may feel less skilled or experienced in creating 'non-binary' top surgery chests.
History:
In 1942, British physician and author Michael Dillon underwent a chest masculinizing mastectomy as part of his transition to male. This would be among the first of Dillon's 13 gender-affirming surgeries. All were performed by Harold Gillies, a New Zealand plastic surgeon, who is sometimes referred to as "the father of modern plastic surgery." It is possible this was the first top surgery performed.
History:
In the mid-1970s, Chicago surgeon Dr. Michael Brownstein (having graduated from UCSF) opened a plastic surgery practice in San Francisco. In 1978, Brownstein conducted his first chest reconstructive surgery at the request of an FTM identified as "John L." The surgery was successful, and shortly thereafter, "FTMs were 'flocking to him,' including some who had not had any so-called gender counseling." Brownstein continued to provide the surgery until Paul Walker contacted him, stating that he was violating the Standards of Care. Following this, Brownstein requested referrals from trans patients and became known for his "outstanding results." Brownstein became a "world renowned" surgeon, with patients including Lou Sullivan in 1980 and Chaz Bono in 2009. Brownstein retired in 2013, "after 35 years of serving the transgender and gender-non-conforming communities." Canadian actor Elliot Page underwent the surgery circa March 2021; he stated, "It has completely transformed my life... [It's] not only life-changing but lifesaving."
Patients:
Male chest reconstruction surgery candidates desire a flat chest that appears masculine. These candidates may include cisgender men with gynecomastia; transgender men who are medically transitioning and have chest dysphoria; and non-binary people with breasts. All of the above may experience chest dysphoria and a desire to masculinize the chest's size or shape. Gynecomastia is a common breast deformity that can occur in cisgender men and may require surgical intervention. Causes of gynecomastia vary but may include drug side effects or genetics. People assigned female at birth who are transitioning to male, masculine, or non-binary genders may experience gender dysphoria caused by their chest and/or gender euphoria after the surgical recovery.
Procedures:
Inverted "T" A transverse inframammary incision with free nipple areolar grafts may be one approach. If there is too much blousing of the skin, the alternatives are to extend the incision laterally (chasing a dog ear) or to make a vertical midline incision (inverted T).The areola is trimmed to a pre-agreed-upon diameter and the nipple sectioned with a pie-shaped excision and reconstituted. There may be varying sensory loss because of nerve disruption.
Procedures:
Double incision
One of the most common male chest reconstructive procedures, double incision involves an incision above and below the breast mass, the removal of the fatty and glandular tissue, and the closure of the skin. This method leaves scars under the pectoral muscles, stretching from the underarms to the medial pectoral region. Double incision is usually accompanied by free nipple grafts to make male-looking nipples. The areola and nipple are removed from the breast tissue, cutting away along the circumference and removing the top layer of flesh from the rest of the tissue. After the chest has been reconstructed, the nipples are grafted on in the appropriate male position. The areolae are often sized down, as are the nipples themselves, since female areolae are often larger in circumference and the nipples protrude farther.
Procedures:
Nipple grafts are generally associated with double incision style chest reconstruction, but may be used in any reconstruction procedure if necessary.
With nipple grafts comes the possibility of rejection. In such cases, the nipple is often tattooed back on cosmetically or further surgical procedures may be applied.
Some sensation will usually return to the grafted nipples over time. However, the procedure severs the nerves that go into the nipple-areola and there is a substantial likelihood for loss of sensation.
Keyhole
To remove the glandular and fatty tissue which constitute the breast mass and the added skin that drapes the mass, there are three basic approaches.
Procedures:
For petite breasts, such as an A or a small B, a peri-areolar incision can be done. That is a circular incision around the areola, combined with an inner circular incision to remove some of the excess areola. Drawing the skin into the center will result in some puckering, but this often smooths out with time. There will be significant tension on the scar line, and to prevent spreading of the scar, a permanent fixation suture is needed. Leaving outer dermis (raw skin) underneath the marginalized areola helps in its survival.
Procedures:
The keyhole incision (i.e., skeleton key) augments the periareolar incision further by making a vertical closure underneath (lollipop), which results after the unwanted skin is pulled in from side to side and the excess is removed. An anchor incision adds to that a transverse incision, usually in the inframammary fold, to further remove excess skin. Draping or blousing is not desirable. This approach is reserved for much larger breasts, or topographically a larger surface area, as seen in women with postpartum breast atrophy.
Procedures:
The nipple areolar complex may be supported by a pedicle, which has the advantage of leaving some sensation and blood supply intact, but has the disadvantage that, when the pedicle has sufficient bulk, it may not provide the flat look most FTM patients desire.
Procedures:
"Dog ear" Occasionally, the side limbs may be quite long, and the expression doctors use is "chasing a dog ear" into the axilla (or underarm). A dog ear may occur when the skin at the edge or corner of an incision 'flows over,' when there is too much gathering, usually at an angle greater than 30 degrees. This usually becomes more apparent after several months of healing, and can be caused by things like weight gain (excess skin or fat changing the shape in areas like torso, hips, stomach, or buttocks, may also occur along the incision line), or due to 'poor surgical planning and execution.' Using a curved incision can reduce the chances of dog ears developing because it requires less gathering of skin to be done, but some patients dislike the appearance of the curved scar as it can mimic the appearance of breasts.
Procedures:
Not uncommonly, a surgeon may revise the incision lines after 3 or more months of settling shows some residual problem areas. Other revisions may include correcting 'slight irregularities,' such as reshaping a nipple that may have stretched 'out of shape' due to too much upper-arm or over-the-head arm movement, or general 'overextension' during the healing process (which may also cause asymmetry); bulges or puckering (typically along incision lines); failed nipple grafts (which may result in one or both nipples failing to 'take' to the patient's healing chest); or scarring patterns a patient may not be happy with. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Angiopoietin 1**
Angiopoietin 1:
Angiopoietin 1 is a type of angiopoietin and is encoded by the gene ANGPT1.
Angiopoietin 1:
Angiopoietins are proteins with important roles in vascular development and angiogenesis. All angiopoietins bind with similar affinity to an endothelial cell-specific tyrosine-protein kinase receptor. The protein encoded by this gene is a secreted glycoprotein that activates the receptor by inducing its tyrosine phosphorylation. It plays a critical role in mediating reciprocal interactions between the endothelium and surrounding matrix and mesenchyme. The protein also contributes to blood vessel maturation and stability, and may be involved in early development of the heart. During pregnancy, angiopoietins act complementary to the VEGF system and contribute to endothelial cell survival and the remodeling of vessels. Few studies have examined the role of angiopoietins in human pregnancy complications like preeclampsia and intrauterine growth restriction (IUGR).
Angiopoietin 1:
A knockout model of ANGPT1 was introduced in mouse embryos. Results showed that embryos began to appear abnormal by day 11 and were dead by day 12.5 of pregnancy. The embryos showed prominent defects in endocardial and myocardial development as well as a less complex vascular network.
Interactions:
Angiopoietin 1 has been shown to interact with TEK tyrosine kinase.
Placental Malaria:
Recently, studies in malaria-endemic areas suggest that placental malaria (PM) may be associated with a dysregulation in angiopoietins. Increased levels of angiopoietin-1 appear to be associated with a decrease in placental weight and placental barrier thickness in women infected with Plasmodium (the causative agent of malaria). In a mouse model of PM, Plasmodium infection of pregnant mice led to decreased angiopoietin-1, increased angiopoietin-2, and an elevated ratio of angiopoietin-2/angiopoietin-1 in the placenta. This suggests that angiopoietin levels could be clinically significant biomarkers to identify mothers infected with PM. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fidget spinner**
Fidget spinner:
A fidget spinner is a toy that consists of a ball bearing in the center of a multi-lobed (typically three) flat structure made from metal or plastic designed to spin along its axis with pressure. Fidget spinners became trending toys in 2017, although similar devices had been invented as early as 1993. The toy has been promoted as helping people who have trouble focusing or those who may need to fidget to relieve nervous energy, anxiety, or psychological stress. There are claims that a fidget spinner can help calm down people who have anxiety or neurodivergences, such as ADHD and autism, though peer-reviewed studies for this notion are lacking. A spinner consists of a round, flat central bearing (usually a ball bearing) that allows the arms connected to it to rotate; around this central axis, there are usually three weighted arms, but their number varies depending on the model. They can be rotated for up to several minutes, depending on the model.
Development:
In October 2017, inspired by the Fidget Cube Kickstarter campaign, Allan Maman used his Byram Hills High School's 3-D printers to make Fidget360 with the help of his physics teacher, Eric Savino, and worked with Cooper Weiss to promote the toy. In an interview appearing on 4 May 2017 on NPR, Scott McCoskery described how he invented a metal spinning device, the Torqbar, in 2014 to cope with his own fidgeting in IT meetings and conference calls. In response to requests from an online community, he began selling the device online.
Popularity and usage:
With the rapid increase in the popularity of fidget spinners in 2017, many children and teenagers began using them in school, and some schools also reported that students were trading and selling the spinner toys. As a result of their frequent use by children at school, many school districts banned the toy. Some teachers argued that the spinners distracted students from their schoolwork. According to a survey conducted by Alexi Roy and published in May 2017, 32% of the largest 200 American public and private high schools had banned spinners on campus. When fidget spinners rose in popularity in 2017, many publications in the popular press discussed the marketing claims made about them for people with ADHD, autism, or anxiety. However, there has not been research proving this notion. They quickly fell in popularity and sales after peaking in May 2017.
Patent status:
As of 2017, the patent status of the various fidget spinners on the market was unclear. Catherine Hettinger, a chemical engineer by training, was initially credited by some news stories as having been the inventor of the fidget spinner, including by media outlets such as The Guardian, The New York Times, and the New York Post. Hettinger filed a patent application for a "spinning toy" in 1993 and a patent was issued, but Hettinger allowed the patent to lapse in 2005 after she could not find a commercial partner. However, a May 2017 Bloomberg News article showed that Hettinger was not the inventor of the fidget spinner, and Hettinger agreed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Subparhelic circle**
Subparhelic circle:
The subparhelic circle is a rare halo, an optical phenomenon, located below the horizon. It passes through both the subsun (below the Sun) and the antisolar point (opposite to the Sun). The subparhelic circle is the subhorizon counterpart to the parhelic circle, located above the horizon.
Located on the subparhelic circle are several relatively rare optical phenomena: the subsun, the subparhelia, the 120° subparhelia, Liljequist subparhelia, the diffuse arcs, and the Parry antisolar arcs. On the accompanying photo centred at the antisolar point, the subparhelic circle appears as a gently curved horizontal line intercepted by anthelic arcs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fotemustine**
Fotemustine:
Fotemustine is a nitrosourea alkylating agent used in the treatment of metastatic melanoma. It is available in Europe but has not been approved by the United States FDA. A study has shown that fotemustine produces improved response rates but does not increase survival over dacarbazine (DTIC) in the treatment of disseminated cutaneous melanoma. Median survival was 7.3 months with fotemustine versus 5.6 months with DTIC (P=.067). Toxicity was also more prevalent in the fotemustine arm: the main toxicity was grade 3 to 4 neutropenia (51% with fotemustine v 5% with DTIC) and thrombocytopenia (43% v 6%, respectively). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lindiwe Majele Sibanda**
Lindiwe Majele Sibanda:
Lindiwe Majele Sibanda (born 1963) is a Zimbabwean professor, scientist, policy advocate and influencer on food systems. She currently serves as director and chair of the ARUA Centre of Excellence in Sustainable Food Systems (ARUA-SFS) at the University of Pretoria in Pretoria, South Africa, as well as founder and managing director of Linds Agricultural Services Pvt Ltd. in Harare, Zimbabwe. She is currently a board member of Nestlé, where she is also a member of the Sustainability Committee.
Life:
Prof Lindiwe Majele Sibanda is a food systems scientist, policy advocate and trusted key influencer on food systems. She has over 25 years of trans-disciplinary work experience in agriculture and rural development, public and private sector policy reforms and management, 15 of them at senior level in academic, scientific, private and public institutions. She is a renowned technical leader and diplomat. Globally, Prof Sibanda is a recognized leader and has served as trustee and adviser to numerous international food security related initiatives. She is a serving member of the SDG Target by 2030 Champions 12.3, co-chair of the Global Alliance for Climate-Smart Agriculture, member of the World Vegetable Board, and a commissioner for the EAT-Lancet report on Sustainable Healthy Food Systems. Previously, she has served as a member of the United Nations (UN) Committee for Development Policy (CDP) and the African Union Commission (AUC) Leadership Council; she has been a university professor in agriculture, animal sciences and veterinary sciences and is a regular guest lecturer at several universities. She is a recipient of numerous awards for her contribution towards agriculture and food security in Africa, including the Science Diplomacy Award by the Government of South Africa (2015), the FARA Award for Exemplary Leadership (2014), and the Yara Prize (2013 Laureate). She holds a BSc degree in Animal Production (First Class Honours) from the University of Alexandria in Egypt and MSc and PhD degrees from the University of Reading, UK. She is currently Director and Chairwoman of the African Research Universities Alliance Centre of Excellence in Sustainable Food Systems (ARUA-SFS).
Awards:
Science Diplomacy Award by the Government of South Africa (2015): https://www.fanrpan.org/archive/documents/d01934/
FARA Award for Exemplary Leadership (2014): https://www.weforum.org/people/lindiwe-majele-sibanda
Yara Prize Laureate (2013): https://www.weforum.org/people/lindiwe-majele-sibanda
Food Tank recognition as one of the women working to change the food system in honor of International Women's Day: https://champions123.org/person/lindiwemajele-sibanda
Nominated as Global Citizen (2012): https://www.fanrpan.org/archive/documents/d01426/
Mandates:
CGIAR System Board, voting board member, 3-year term (2021)
Nestlé Board of Directors (2021)
Advisory Board Member of Infinite Foods.
AGRA VP for Country Support, Policy and Delivery (2017)
Mandates: Not for profit:
World Vegetable Centre Board of Directors: https://avrdc.org/new-members-appointed-in-2018-to-worldvegetable-center-board-of-directors/
ILRI board chair (2012): https://newsarchive.ilri.org/index.php/archives/9996
Board member and Chair of the nominations committee, World Vegetable Board.
Serving member, Champions UN-SDG 12.3, accelerating progress toward reducing food loss and waste towards achieving SDG Target 12.3 by 2030.
Co-Chair, UN-Global Alliance for Climate Smart Agriculture (GACSA).
Presidential advisory council member on Agriculture in Zimbabwe.
Deputy Chair of Council for the National University of Science and Technology (NUST).
Presidential advisory council member: she is currently serving as one of the presidential advisory council members on Agriculture in Zimbabwe.
International Advisory Panel Member, Regional Universities Forum for Capacity Building in Agriculture (RUFORUM).
2020 Food Planet Prize: Jury Member.
Cornell University–Nature Sustainability expert panel on "Innovations to build sustainable, equitable, inclusive food value chains", through the Cornell Atkinson Center for Sustainability's food security working group, with the journal Nature Sustainability and its sister journal, Nature Food.
Advisory board member of the ARUA UKRI GCRF- Partnership Programme for Capacity building in Food Security for Africa (CaBFoodS-Africa) hosted by the University of Pretoria, in collaboration with the University of Nairobi, and the University of Ghana, Legon.
Deputy Chairperson of the Iam4Byo Fighting Covid-19 Trust responsible for communications, public relations and research coalition.
Publications:
2020. Socio-technical Innovation Bundles for Agri-food Systems Transformation, Report of the International Expert Panel on Innovations to Build Sustainable, Equitable, Inclusive Food Value Chains. Barrett, Christopher B., Tim Benton, Jessica Fanzo, Mario Herrero, Rebecca J. Nelson, Elizabeth Bageant, Edward Buckler, Karen Cooper, Isabella Culotta, Shenggen Fan, Rikin Gandhi, Steven James, Mark Kahn, Laté Lawson-Lartego, Jiali Liu, Quinn Marshall, Daniel Mason-D’Croz, Alexander Mathys, Cynthia Mathys, Veronica Mazariegos-Anastassiou, Alesha (Black) Miller, Kamakhya Misra, Andrew G. Mude, Jianbo Shen, Lindiwe Majele Sibanda, Claire Song, Roy Steiner, Philip Thornton, and Stephen Wood. Ithaca, NY, and London: Cornell Atkinson Center for Sustainability and Springer Nature, 2020 2020. Men’s nutrition knowledge is important for women’s and children’s nutrition in Ethiopia. Ambikapathi R, Passarelli S, Madzorera I, et al. Matern Child Nutr. 2021;17e13062.
Publications:
2020. A Chicken Production Intervention and Additional Nutrition Behavior Change Component Increased Child Growth in Ethiopia: A Cluster-Randomized Trial Simone Passarelli, Ramya Ambikapathi, Nilupa S Gunaratna, Isabel Madzorera, Chelsey R Canavan, Abdallah R Noor, Amare Worku, Yemane Berhane, Semira Abdelmenan, Simbarashe Sibanda, Bertha Munthali, Tshilidzi Madzivhandila, Lindiwe M Sibanda, Kumlachew Geremew, Tadelle Dessie, Solomon Abegaz, Getnet Assefa, Christopher Sudfeld, Margaret McConnell, Kirsten Davison, Wafaie Fawzi The Journal of Nutrition, nxaa181. 2018 EAT Lancet report with Johan Rockström and Walter Willett. Healthy diets from sustainable food systems. Food in the Anthropocene. Willett W, Rockström J, Loken B, Springmann M, Lang T, Vermeulen S, Garnett T, Tilman D, DeClerck F, Wood A, Jonell M, Clark M, Gordon LJ, Fanzo J, Hawkes C, Zurayk R, Rivera JA, De Vries W, Majele Sibanda L, Afshin A, Chaudhary A, Herrero M, Agustina R, Branca F, Lartey A, Fan S, Crona B, Fox E, Bignet V, Troell M, Lindahl T, Singh S, Cornell SE, Srinath Reddy K, Narain S, Nishtar S, Murray CJL. Lancet. PMID 30660336.
Publications:
2018. Sustainable and Equitable Increases in Fruit and Vegetable Productivity and Consumption are Needed to Achieve Global Nutrition Security. Position Paper resulting from a workshop organized by the Aspen Global Change Institute and hosted at the Keystone Policy Center July 30 – August 3, 2018 https://www.agci.org/sites/default/files/pdfs/lib/publications/AGCI-FV-Position-Paper.pdf Sibanda, L. M and Mwamakamba, S.N. (2016) Africa’s Rainbow Revolution: Feeding a Continent and the World in a Changing Climate. Solutions, May–June 2016 in press. https://thesolutionsjournal.com/2016/06/17/africas-rainbow-revolution-feeding-continent-world-changing-climate/ Johan Rockstro ̈m, John Williams, Gretchen Daily, Andrew Noble, Nathanial Matthews, Line Gordon, Hanna Wetterstrand, Fabrice DeClerck, Mihir Shah, Pasquale Steduto, Charlotte de Fraiture, Nuhu Hatibu, Olcay Unver, Jeremy Bird, Lindiwe Sibanda, Jimmy Smith. 2016 .Sustainable intensification of agriculture for human prosperity and global sustainability. Springerlink.com Editor in Chief for FANRPAN’s AgriDeal Magazine (2015). Climate Smart Agriculture (CSA). Vol 3, ISSN 2304-8824 Douglas J. Merrey and Lindiwe Sibanda. 2014. Options for Policy Reforms to Enhance the Development Impact of Public and Private Investments in Smallholder Agricultural Water Management. FANRPAN Zinyengere N., Crespo O., Hachigonta S. Sibanda L. (2013). Climate Change Adaptation in Southern Africa: Linking science studies and policy decisions to drive evidence-based action. FANRPAN Policy Brief Issue 1 Volume XIII February 2013.
Publications:
Hachigonta, S; Nelson, G.C; Thomas, T.S; Sibanda, L.M (2013) Southern African Agriculture and Climate Change, A comprehensive analysis. Published by International Food Policy Research Institute. ISBN 978-0-89629-208-6 Hachigonta, Sepo; Nelson, Gerald C.; Thomas, Timothy S. and Sibanda, Lindiwe M. 2013. Overview. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 1 pp. 1–23. Washington, D.C.: International Food Policy Research Institute (IFPRI). http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127787 Mugabe, Francis T.; Thomas, Timothy S.; Hachigonta, Sepo and Sibanda, Lindiwe M. 2013. Zimbabwe. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 10 pp. 289–323. Washington, D.C.: International Food Policy Research Institute (IFPRI) http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127792 Manyatsi, Absalom M.; Thomas, Timothy S.; Masarirambi, Michael T.; Hachigonta, Sepo and Sibanda, Lindiwe M. 2013. Swaziland. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 8 pp. 213–253. Washington, D.C.: International Food Policy Research Institute (IFPRI) http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127793 Johnston, Peter; Thomas, Timothy S.; Hachigonta, Sepo and Sibanda, Lindiwe M. 2013. South Africa. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 7 pp. 175–212. Washington, D.C.: International Food Policy Research Institute (IFPRI). http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127786 Maure, Genito A.; Thomas, Timothy S.; Hachigonta, Sepo and Sibanda, Lindiwe M. 2013. Mozambique. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 6 pp. 147–173. Washington, D.C.: International Food Policy Research Institute (IFPRI). http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127789 Kanyanga, Joseph; Thomas, Timothy S.; Hachigonta, Sepo and Sibanda, Lindiwe M. 2013. Zambia. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 9 pp. 255–287. Washington, D.C.: International Food Policy Research Institute (IFPRI). http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127790 Saka, John D.K.; Sibale, Pickford; Thomas, Timothy S.; Hachigonta, Sepo and Sibanda, Lindiwe M. 2013. Malawi. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 5 pp. 111–146. Washington, D.C.: International Food Policy Research Institute (IFPRI) http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127791 Gwimbi, Patrick; Thomas, Timothy S.; Hachigonta, Sepo and Sibanda, Lindiwe M. 2013. Lesotho. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 4 pp. 71–109. Washington, D.C.: International Food Policy Research Institute (IFPRI). http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127788 Zhou, Peter P.; Simbini, Tichakunda; Ramokgotlwane, Gorata; Thomas, Timothy S.; Hachigonta, Sepo and Sibanda, Lindiwe M. 2013. Botswana. In Southern African Agriculture and Climate Change: A comprehensive analysis. Chapter 3 pp. 41–70. Washington, D.C.: International Food Policy Research Institute (IFPRI). http://ebrary.ifpri.org/cdm/ref/collection/p15738coll2/id/127785 Editor in Chief for FANRPAN’s AgriDeal Magazine (2013). The River between. Vol 2, ISSN 2304-8824 Zinyengere, N Crespo, O, Hachigonta, S and Sibanda, L.M (2013). 
Climate Change Adaptation in Southern Africa: Linking science studies and policy decisions to drive evidence-based action. FANRPAN Policy Brief, Issue no. 1: Volume XIII Sullivan A, Mumba A, Hachigonta,S Connolly, M and Sibanda L.M (2013). Appropriate Climate Smart Technologies for Smallholder Farmers in Sub-Saharan Africa. FANRPAN Policy Brief, Issue no. 2: Volume XIIIEditor in chief for FANRPAN’s Agri Deal Magazine (2012). Women Warming Africa Vol 1, ISSN 2304-8824 Sullivan, A, Mwamakamba, S, Mumba A, Hachigonta S and Sibanda L.M (2012). Climate Smart Agriculture: More Than Technologies Are Needed to Move Smallholder Farmers toward Resilient and Sustainable Livelihoods. FANRPAN Policy Brief Issue no. 2: Volume XIII Sullivan, A. and SIBANDA, L.M (2010). Vulnerable Populations, Unreliable Water and Low Water Productivity: A Role for Institutions in the Limpopo Basin. Published in: Water International, Vol. 35, Issue 5 September 2010, pages 545 - 572 SIBANDA, L.M and Ndema, S. (2008). The Global Food Crisis: Who are the Architects of our Livelihoods? FANRPAN Policy Brief Series 01/8 August 2008.
Publications:
SIBANDA, L.M (2008). African Think Tanks and Policy Dialogues: Time to Start Talking Again. In: Global Future: The global food price crisis: ensuring food security for all. No. 3, 2008. A World Vision Journal of Human Development.
Publications:
SIBANDA, L.M (2007). Food Agriculture and Policy Analysis Network (FANRPAN) and its Evolving Partnership with the CGIAR. Alignment and Collective Action Updates. A quarterly newsletter on the CGIAR’s institutional and partnership innovations for greater impact in eastern and southern Africa. August 2007, Vol. 1:3 pg 2 SIBANDA, L.M, Kalibwani, F and Kureya, T. (2006). Silent Hunger: Policy Options for Effective Response to the Impact of HIV and AIDS on Agriculture and Food Security in the SADC Region. FANRPAN, 2006.
Publications:
MERREY D.J and SIBANDA, L.M (2006). From Rain fed Poverty to Irrigated Prosperity: Expanding Micro-Agricultural Water Management in Sub-Saharan Africa.
SIBANDA, L. M., BRYANT, M. J. and NDLOVU, L. R. (2000). Live weight and Body Condition Score Changes of Female Matebele Goats During their Breeding Cycle in a Semi- Arid Environment under Traditional Management. Small Ruminant Research, 35: 271 – 275.
SIBANDA, L. M., NDLOVU, L. R. and BRYANT, M. J. (1999). Effects of a Low Plane of Nutrition during Pregnancy and Lactation on the Performance of MatebeleDoes and their kids. Small Ruminant Research, 32: 234-250.
SIBANDA, L.M., SIMELA, L. (1997). Carcass characteristics of the marketed Matebele goat from south-western Zimbabwe. DOI: 10.1016/S0921-4488(98)00182-5 SIBANDA, L.M., BRYANT, M.J., NDLOVU, L.R. (1997). Factors Affecting the Growth and Survival of Goat Kids in a Semi-Arid Environment under Smallholder Management. Journal of Applied Science in Southern Africa, 3: 27-33.
SIBANDA, L.M., NDLOVU, L.R., BRYANT, M. J. (1997). Reproductive Performance of Matebele Goats in a Semi-Arid Environment under Smallholder Management. Journal of Applied Science in Southern Africa, 3: 35-42.
SIBANDA, L.M., NDLOVU, L.R. and BRYANT, M.J. (1997). Effects of Feeding Varying Amounts of a Grain-Forage Diet during Late Pregnancy and Lactation on the Performance of Matebele Goats. Journal of Agricultural Science (Cambridge), 128: 469-477.
SIBANDA, L.M., SIMELA, L. (1997). Milk production, processing and marketing to improve the nutrition and income generating capacity of rural households in Zimbabwe NDLOVU, L.R. and SIBANDA, L.M. (1996). The Potential of Dolichos Lablab and Acacia tortilis Pods in Smallholder Feeding Systems for Goat Kids in Semi-Arid Areas of Southern Africa. Small Ruminant Research, 21:273-276.
SIBANDA, L.M, NDLOVU L. R., BRYANT M. J. (1997). Factors Affecting the Growth and Survival of MatebeleGoat Kids in a Semi-Arid Environment under Smallholder Management. In: Journal of Applied Science in Southern Africa, Vol. 3, Nos. 1 & 2.
NDLOVU, L. R., SIBANDA, H.M., SIBANDA, L.M. and HOVE, E. (1995). Nutritive Value of Indigenous BrowsableTree Species in a Semi-Arid Area of Zimbabwe. IVthInternationalSymposium on the Nutrition of Herbivores. Clermont-Ferrand, France, 11–15 September 1995.
SIBANDA, L.M., BRYANT, M.J. and NDLOVU, L.R. (1993). Litter Size in the Matebele Goat and its Effect on Productivity. Animal Production, 56:440.
NDLOVU, L.R. and SIBANDA, L.M. (1993). Management Strategies for Minimizing Environmental Constraints to Small ruminant Production in Semi-Arid Areas of Southern Africa. In: Animal Production in Developing Countries. (Gill, M., Owen, E., Pollott, G.E. and Lawrence, T.L.J., Editors). British Society of Animal Production, Occasional Publication No. 16 pp 178.
SIBANDA, L.M., BRYANT, M.J. and NDLOVU, L.R. (1993). The Effect of Kidding Season on Productivity of Indigenous Matebele Goats of Zimbabwe. In: Animal Production in Developing Countries. (Gill, M., Owen, E., Pollott, G.E. and Lawrence, T.L.J. Editors). British Society of Animal Production Occasional Publication No. 16 pp 184–185.
NDLOVU, L. R. and SIBANDA, L.M. (1993). Improving the Productivity of Indigenous Goats in Zimbabwe. In: Improving Productivity of Indigenous Livestock using Radioimmunoassay (RIA) and other Techniques. International Atomic Energy Agency, Vienna, Austria. pp 177–189.
NDLOVU, L. R. and SIBANDA, L.M. (1993). Management Strategies for Minimizing Environmental Constraints to Small Ruminant Production in Semi-Arid Areas of Southern Africa. In: Animal Production in Developing Countries. Occasional Publication No. 16. (M. Gill, E. Owen, G.E. Pollot and T.L.J. Lawrence eds.) British Society of Animal Production. pp 178.
SIBANDA, L.M., BRYANT, M.J. and NDLOVU, L.R. (1992). Responses of Matebele Goats of Zimbabwe to Feeding Level: Lactation. Animal Production 54 (3) 472.
SIBANDA, L.M., NDLOVU, L. R. and BRYANT, M.J. (1992). Experiences in Adapting Previously Free-Ranging Traditionally Managed Matebele Goats of Zimbabwe to Individual Stall Feeding. In: Small Ruminant Research and Development in Africa (Rey, B., Lebbie, S.H. and Reynolds, L. eds) pp 345–354. ILCA, Addis Ababa, Ethiopia.
NDLOVU, L. R. and SIBANDA, L.M. (1991). Management Strategies for Minimizing Environmental Constraints to Small Ruminant Production in Semi-Arid Areas of Southern Africa. British Society of Animal Production Occasional Meeting, Kent, 2–4 September 1991.
SIBANDA, L.M. and NDLOVU, L. R. (1991). Productivity of Indigenous Matebele Goats of Zimbabwe under Traditional Management. British Society of Animal Production Occasional Meeting, Kent, 2–4 September 1991.
NDLOVU, L. R. and SIBANDA, L.M. (1991). Improving the Nutritional Status of Smallholder Livestock in Agro-Silvicultural Systems in Semi-Arid Southern Africa. In: Isotopes and Related Techniques in Animal Production and Health. International Atomic Agency, Vienna, pp 201–209.
SIBANDA, L.M., NDLOVU, L. R. and BRYANT, M.J. (1992). Veld Hay and Lucerne as Feed for Indigenous Matebele Does Kidding During the Dry Season. In: Complementarity of feed resources in African livestock production. (Stares, J.E.S., Said, A.N. and Kategile, J.A. eds). ILCA, Addis Ababa, Ethiopia, pp 205–214 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Virucide**
Virucide:
A virucide (alternatively spelled viricide, also called a biocidal agent, microbicide or biocide) is any physical or chemical agent that deactivates or destroys viruses. Such substances are not only virucidal but can also be bactericidal, fungicidal, sporicidal or tuberculocidal. Virucides are to be used outside the human body, and as such fall into the category of disinfectants (applied not to the human body) and, for those safe enough, antiseptics (applied to the surface of skin). Overall, the notion of a virucide differs from that of an antiviral drug such as aciclovir, which inhibits the proliferation of the virus inside the body. The CDC's Disinfection and Sterilization list of chemical disinfectants mentions and discusses substances such as alcohol, chlorine and chlorine compounds, formaldehyde, glutaraldehyde, hydrogen peroxide, iodophors, ortho-phthalaldehyde (OPA), peracetic acid, peracetic acid with hydrogen peroxide, phenolics, and quaternary ammonium compounds, with different, but usually potent, microbicidal activity. Other inactivating agents, such as UV light, metals, and ozone, also exist.
Definitions:
According to the Centers for Disease Control and Prevention (CDC), a virucide is "An agent that kills viruses to make them noninfective." According to a definition by the Robert Koch Institute in Germany and further institutions, virucidal means effective against enveloped and non-enveloped viruses. Due to the complexity of the subject, in Germany the Robert Koch Institute introduced sub-definitions such as "limited virucidal" or "limited virucidal plus" (translated from German) to differentiate the meaning further. Note that the meaning of virus inactivation or viral clearance is specific to the medical process industry, i.e. the removal of, for example, HIV from blood.
Functioning:
Microbicides interact with viruses in different ways, such as: alteration of the viral envelope, structural alteration, alteration of viral markers, or alteration of the viral genome. The exact mechanisms, for example of iodine (PVP-I), are still not clear, but it targets bacterial protein synthesis through disruption of electron transport, DNA denaturation, or disruptive effects on the virus membrane.
Registration:
The U.S. Environmental Protection Agency administers a regulatory framework for disinfectants and sterilants. To earn virucidal registration, extensive data on harder-to-kill viruses demonstrating long-lasting virucidal efficacy need to be provided.
Regulations
Europe: Biocidal Products Regulation (EU) No 528/2012
Testing
EN 14476:2019 (suspension test)
EN 16777:2018 (surface test)
EN 1500 (hand rub test)
ISO 18184:2019 (textile products)
ISO 21702:2019 (plastics and non-porous surfaces)
Other related tests
A specific protocol for hand-hygiene testing has been researched and established by microbiologist Prof. Graham Ayliffe.
Safety:
Virucides are not intended for use inside the body, and most are disinfectants that are not intended for use on the surface of the body. Most substances are toxic. None of the listed substances replaces vaccination or antiviral drugs, if available. Virucides are usually labeled with instructions for safe, effective use. The correct use and scope of disinfectants is very important. Potential serious side-effects with using "quats" (quaternary ammonium compounds) exist, and over-use "can have a negative impact on your customers' septic systems."
Mouth-rinsing or gargling can reduce virus load, however experts warn that "Viruses in the nose, lungs or trachea that are released when speaking, sneezing and coughing are unlikely to be reached because the effect is based on physical accessibility of the surface mucous membrane". According to the Deutsche Dermatologische Gesellschaft, medical practitioners recommend disinfectants as gentler on the skin compared to soap-washing. The disinfected hands should then also be creamed to support the regeneration of the skin barrier. Skin care does not reduce the antiseptic effect of the alcoholic disinfectants. The "explosive" use of antibacterial cleansers has led the CDC to monitor substances in adults.
On April 5, 2021, a Press Briefing by the White House COVID-19 Response Team and Public Health Officials mentions that "Cleaning with household cleaners containing soap or detergent will physically remove germs from surfaces. This process does not necessarily kill germs, but reduces the risk of infection by removing them. Disinfecting uses a chemical product, which is a process that kills the germs on the surfaces. In most situations, regular cleaning of surfaces with soap and detergent, not necessarily disinfecting those surfaces, is enough to reduce the risk of COVID-19 spread. Disinfection is only recommended in indoor settings — schools and homes — where there has been a suspected or confirmed case of COVID-19 within the last 24 hours."
The CDC issued a special report, "Knowledge and Practices Regarding Safe Household Cleaning and Disinfection for COVID-19 Prevention", due to the increased number of calls to poison centers regarding exposures to cleaners and disinfectants since the onset of the COVID-19 pandemic, concluding that "Public messaging should continue to emphasize evidence-based, safe cleaning and disinfection practices to prevent SARS-CoV-2 transmission in households, including hand hygiene and cleaning and disinfection of high-touch surfaces." The CDC provides a Guideline for Disinfection and Sterilization in Healthcare Facilities.
Microbicidal activity:
Each item mentioned in the list has different microbicidal activity, i.e. some viruses can be more or less resistant. For example, poliovirus is resistant to H2O2 even after a contact time of 10 minutes; 7.5% H2O2 takes 30 minutes to inactivate 99.9% of poliovirus. Generally, hydrogen peroxide is considered a potent virucide in appropriate concentrations, specifically in other forms such as gaseous. Another example is povidone-iodine (PVP-I), which is found to be effective against herpes simplex virus, SARS-CoV-2, and other viruses, but coxsackievirus and poliovirus were rather resistant or less sensitive to inactivation.
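For reference, the inactivation percentages quoted here map onto log-reduction values in the usual way (a standard relationship, not tied to any particular study cited above):

$$\text{log reduction} = \log_{10}\frac{N_0}{N}, \qquad 99.9\%\ \text{inactivation} \;\Leftrightarrow\; \frac{N_0}{N} = 10^{3} \;\Leftrightarrow\; \text{a } 3\text{-log}_{10}\ \text{reduction},$$

where N0 and N are the infectious titres before and after treatment.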
Microbicidal activity:
SARS-CoV-2 (COVID-19)
In the beginning of the COVID-19 pandemic, former US President Donald Trump delivered a very dangerous message to the public on the use of disinfectants, which was immediately rejected and refuted by health professionals. In essence, and as mentioned above, virucides are usually toxic depending on concentration, mixture, etc., and can be deadly not just to viruses but also to humans or animals if taken internally or applied to the surface of the body. With regard to the COVID-19 pandemic, some of the mentioned agents are still under research regarding their microbicidal activity and effectiveness against SARS-CoV-2, e.g. on surfaces, as mouth-washes, in hand-washing, etc.
Microbicidal activity:
A mixture of 62–71% ethanol, 0.5% hydrogen peroxide or 0.1% sodium hypochlorite is found to be able to deactivate the novel coronavirus on surfaces within 1 minute. A 2020 systematic review on hydrogen peroxide (H2O2) mouth-washes concludes that they do not have a virucidal effect, recommending that "dental care protocols during the COVID-19 pandemic should be revised." Additional research with relation to the coronavirus virucidal efficacy is ongoing. Various information and overviews of light-based strategies (UV-C and other types of light sources; see also Ultraviolet germicidal irradiation) to combat the COVID-19 pandemic are available. A systematic review of 16 studies by Cochrane on antimicrobial mouthwashes (gargling) and nasal sprays concludes that "there is currently no evidence relating to the benefits and risks of patients with COVID‐19 using antimicrobial mouthwashes or nasal sprays."
SARS-CoV
Treatment of SARS-CoV for 2 min with Isodine (PVP-I) is found to strongly reduce the virus infectivity.
Research:
The International Society of Antimicrobial Chemotherapy (ISAC) is one of the major umbrella organizations for education, research and development in the area of therapy of infections. Its members are national organizations (currently 86) together with over 50,000 individual members.
List of virucides:
Note that many of the substances, if sold commercially, are usually combinations and mixtures with varying molecular contents. Also note that most products have a limited virucidal efficacy. A specific test protocol is applied. The list's scope is limited; for further products refer to other lists. Other factors such as stability of the concentrate, application concentration, exposure time, timing of the solution, hydrogen ion concentration (pH value), temperature, etc. play a certain role in the effectiveness of a virucide. The EPA provides a public listing called "List N".
General substance listing of active component or compound
1-Docosanol
Alcohols: Ethanol, Isopropyl alcohol, n-Propanol
Benzalkonium chlorides, e.g.
List of virucides:
Alkyl dimethyl benzyl ammonium chlorides (C12-16)
Alkyl dimethyl benzyl ammonium chloride (C14 60%, C16 30%, C12 5%, C18 5%)
Alkyl dimethyl ethylbenzyl ammonium chloride (C12-14)
Alkyl dimethyl ethylbenzyl ammonium chlorides (C12-18)
Bleach (Sodium hypochlorite)
Sodium hypochlorite washes
Didecyldimethylammonium chloride
Hand washing (see also Surfactants)
Hand washing is a mechanical process of removing germs, viruses, and chemicals.
Hand washing with, e.g., ethanol added to a hand disinfectant shows virucidal effects, but caution is given (small children) and it is not recommended over "proper hand washing".
Hand gels are often found to not comply with EN 1500 standards to meet antimicrobial efficacy.
Prof. Graham Ayliffe's hand-cleaning and disinfection technique is promoted nowadays by the WHO and is similar to German standard DIN EN 1500 (hygienic hand disinfection).
Hydrogen peroxide
Oral rinse (see Cochrane systematic review in the case of SARS-CoV-2)
Chlorhexidine (CHX) - mainly against enveloped viruses.
Dequalinium
Povidone-iodine (Isodine, PVP-I): High potency for virucidal activity has been observed against viruses of significant global concern, including hepatitis A and influenza, as well as the Middle East Respiratory Syndrome and Severe Acute Respiratory Syndrome coronaviruses.
Application types and names: Isodine, Scrub, Isodine Nodo Fresh
Surfactants
Soap
Triton X-100
Example products
Betadine products and medical variants by Avrio Health (part of Purdue Pharma)
Ingredients: Povidone-iodine etc.
As of June 2021, not recommended by manufacturer to "kill" coronaviruses.
Bleach products: Clorox, Cyosan, Zonrox Bleach
Henkel products:
biff Hygiene Total
Ingredients: Benzalkonium chloride and formic acid
Tested against SARS-CoV-2 according to producer statement on website.
List of virucides:
Bref Power Bakterien & Schimmel
Purex
Heitmann Hygiene & Care products:
Universal Hygiene Laundry Rinse 1.5
Ingredients: Didecyldimethylammonium chloride
Hygiene Spray
Ingredients: Ethanol, 2-propanol
According to the manufacturer, effective against SARS-CoV-2
Listerine
Ingredients: Alcohol, sodium fluoride, essential oils (specifically in the case of management of inflammatory periodontal diseases)
Unknown or limited virucidal activity
Lysol
Ingredient: Benzalkonium chloride
Some of the products have been tested against SARS-CoV-2.
List of virucides:
Sterillium
Ingredients: 1-Propanol, 2-Propanol and Mecetronium ethylsulfate
By the former Bode Chemie, now Hartmann AG, one of Germany's major health-care brands available in 50 countries, and according to its website "the world's most scientifically researched hand disinfectant with approximately 60 scientific publications in trade journals in 2015."
Other substances, drugs, proteins, therapeutics, research-level topics
Antimicrobial peptides
Auriclosene (NVC-422) - see also Keratoconjunctivitis
Bacteriocin
Chlorine dioxide
Copper alloys
CLR01 (molecular tweezers), found to inhibit Ebola, Zika or possibly SARS-CoV-2
Cyanovirin-N
General so-called "drug repurposing", for example in the case of SARS-CoV-2/COVID-19
Griffithsin
Interferon
Nanomedicines
"Novel Anti-Infectives" research by the Helmholtz Centre for Infection Research
Peracetic acid
Scytovirin
Urumin
Agricultural, veterinary
V-Bind
Virkon
Turnip yellows virus (TuYV) resistant products, such as Bayer AG's DK Excited | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hydrogen sulfide**
Hydrogen sulfide:
Hydrogen sulfide is a chemical compound with the formula H2S. It is a colorless chalcogen-hydride gas, and is poisonous, corrosive, and flammable, with trace amounts in ambient atmosphere having a characteristic foul odor of rotten eggs. The underground mine gas term for foul-smelling hydrogen sulfide-rich gas mixtures is stinkdamp. Swedish chemist Carl Wilhelm Scheele is credited with having discovered the chemical composition of purified hydrogen sulfide in 1777. The British English spelling of this compound is hydrogen sulphide, a spelling no longer recommended by the Royal Society of Chemistry or the International Union of Pure and Applied Chemistry.
Hydrogen sulfide:
Hydrogen sulfide is toxic to humans and most other animals by inhibiting cellular respiration in a manner similar to hydrogen cyanide. When it is inhaled or its salts are ingested in high amounts, damage to organs occurs rapidly with symptoms ranging from breathing difficulties to convulsions and death. Despite this, the human body produces small amounts of this sulfide and its mineral salts, and uses it as a signalling molecule.
Hydrogen sulfide is often produced from the microbial breakdown of organic matter in the absence of oxygen, such as in swamps and sewers; this process is commonly known as anaerobic digestion, which is done by sulfate-reducing microorganisms. It also occurs in volcanic gases, natural gas deposits, and sometimes in well-drawn water.
Properties:
Hydrogen sulfide is slightly denser than air. A mixture of H2S and air can be explosive. In general, hydrogen sulfide acts as a reducing agent, although in the presence of a base, it can act as an acid by donating a proton and forming SH−.
Properties:
Hydrogen sulfide burns in oxygen with a blue flame to form sulfur dioxide (SO2) and water:
H2S + 3/2 O2 → SO2 + H2O
If an excess of oxygen is present, sulfur trioxide (SO3) is formed, which quickly hydrates to sulfuric acid:
H2S + 2 O2 → H2SO4
At high temperatures or in the presence of catalysts, sulfur dioxide reacts with hydrogen sulfide to form elemental sulfur and water. This reaction is exploited in the Claus process, an important industrial method to dispose of hydrogen sulfide.
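Written out as a balanced equation (with elemental sulfur shown simply as S), the Claus-process step just mentioned is:

$$2\,\mathrm{H_2S} + \mathrm{SO_2} \;\rightarrow\; 3\,\mathrm{S} + 2\,\mathrm{H_2O}$$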
Properties:
Hydrogen sulfide is slightly soluble in water and acts as a weak acid (pKa = 6.9 in 0.01–0.1 mol/litre solutions at 18 °C), giving the hydrosulfide ion HS− (also written SH−). Hydrogen sulfide and its solutions are colorless. When exposed to air, it slowly oxidizes to form elemental sulfur, which is not soluble in water. The sulfide anion S2− is not formed in aqueous solution.
Hydrogen sulfide reacts with metal ions to form metal sulfides, which are insoluble, often dark colored solids. Lead(II) acetate paper is used to detect hydrogen sulfide because it readily converts to lead(II) sulfide, which is black. Treating metal sulfides with strong acid or electrolysis often liberates hydrogen sulfide. Hydrogen sulfide is also responsible for tarnishing on various metals including copper and silver; the chemical responsible for black toning found on silver coins is silver sulfide (Ag2S), which is produced when the silver on the surface of the coin reacts with atmospheric hydrogen sulfide.
At pressures above 90 GPa (gigapascal), hydrogen sulfide becomes a metallic conductor of electricity. When cooled below a critical temperature this high-pressure phase exhibits superconductivity. The critical temperature increases with pressure, ranging from 23 K at 100 GPa to 150 K at 200 GPa. If hydrogen sulfide is pressurized at higher temperatures, then cooled, the critical temperature reaches 203 K (−70 °C), the highest accepted superconducting critical temperature as of 2015. By substituting a small part of the sulfur with phosphorus and using even higher pressures, it has been predicted that it may be possible to raise the critical temperature to above 0 °C (273 K) and achieve room-temperature superconductivity.
Hydrogen sulfide decomposes without a catalyst under atmospheric pressure at around 1200 °C into hydrogen and sulfur.
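Returning to the weak-acid behaviour noted at the start of this passage: with pKa ≈ 6.9, the Henderson–Hasselbalch relation gives (a worked illustration, assuming ideal dilute-solution conditions and only the first dissociation)

$$\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HS^-}]}{[\mathrm{H_2S}]} \quad\Rightarrow\quad \frac{[\mathrm{HS^-}]}{[\mathrm{H_2S}]} = 10^{\,\mathrm{pH} - 6.9},$$

so at pH 7.4, for example, the ratio is about 10^0.5 ≈ 3.2, i.e. roughly three-quarters of the dissolved sulfide is present as HS−.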
Production:
Hydrogen sulfide is most commonly obtained by its separation from sour gas, which is natural gas with a high content of H2S. It can also be produced by treating hydrogen with molten elemental sulfur at about 450 °C. Hydrocarbons can serve as a source of hydrogen in this process.
Sulfate-reducing (resp. sulfur-reducing) bacteria generate usable energy under low-oxygen conditions by using sulfates (resp. elemental sulfur) to oxidize organic compounds or hydrogen; this produces hydrogen sulfide as a waste product.
Production:
A standard lab preparation is to treat ferrous sulfide with a strong acid in a Kipp generator:
FeS + 2 HCl → FeCl2 + H2S
For use in qualitative inorganic analysis, thioacetamide is used to generate H2S:
CH3C(S)NH2 + H2O → CH3C(O)NH2 + H2S
Many metal and nonmetal sulfides, e.g. aluminium sulfide, phosphorus pentasulfide, and silicon disulfide, liberate hydrogen sulfide upon exposure to water:
6 H2O + Al2S3 → 3 H2S + 2 Al(OH)3
This gas is also produced by heating sulfur with solid organic compounds and by reducing sulfurated organic compounds with hydrogen.
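For scale, a worked example for the Kipp-generator preparation (the 10 g figure is chosen purely for illustration; atomic masses are rounded):

$$10.0\ \mathrm{g\ FeS} \times \frac{1\ \mathrm{mol}}{87.9\ \mathrm{g}} \approx 0.114\ \mathrm{mol\ FeS} \;\Rightarrow\; 0.114\ \mathrm{mol\ H_2S} \approx 3.9\ \mathrm{g\ H_2S}\ (\approx 2.5\ \mathrm{L\ at\ STP}),$$

assuming excess hydrochloric acid and complete reaction.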
Production:
Water heaters can aid the conversion of sulfate in water to hydrogen sulfide gas. This is because they provide a warm environment that sustains sulfur bacteria and maintains the reaction between sulfate in the water and the water heater anode, which is usually made from magnesium metal.
Production:
Biosynthesis in the body
Hydrogen sulfide can be generated in cells via enzymatic or non-enzymatic pathways. H2S in the body acts as a gaseous signaling molecule which is known to inhibit Complex IV of the mitochondrial electron transport chain, which effectively reduces ATP generation and biochemical activity within cells. Three enzymes are known to synthesize H2S: cystathionine γ-lyase (CSE), cystathionine β-synthetase (CBS) and 3-mercaptopyruvate sulfurtransferase (3-MST). These enzymes have been identified in a breadth of biological cells and tissues, and their activity has been observed to be induced by a number of disease states. It is becoming increasingly clear that H2S is an important mediator of a wide range of cell functions in health and in diseases. CBS and CSE are the main proponents of H2S biogenesis, which follows the trans-sulfuration pathway. These enzymes are characterized by the transfer of a sulfur atom from methionine to serine to form a cysteine molecule. 3-MST also contributes to hydrogen sulfide production by way of the cysteine catabolic pathway. Dietary amino acids, such as methionine and cysteine, serve as the primary substrates for the trans-sulfuration pathways and in the production of hydrogen sulfide. Hydrogen sulfide can also be synthesized by a non-enzymatic pathway, which is derived from proteins such as ferredoxins and Rieske proteins. There has been continuing interest in exploiting such knowledge of hydrogen sulfide's role in signaling through development of mechanistically related therapeutic agents.
Hydrogen sulfide has been shown to be involved in physiological processes such as vasodilation in animals, as well as in increasing seed germination and stress responses in plants. Hydrogen sulfide signaling is also innately intertwined with physiological processes that are known to be moderated by reactive oxygen species (ROS) and reactive nitrogen species (RNS). H2S has been shown to interact with NO, resulting in several different cellular effects, as well as the formation of a new signal called nitrosothiol. Hydrogen sulfide is also known to increase the levels of glutathione, which acts to reduce or disrupt ROS levels in cells. The field of H2S biology has advanced from environmental toxicology to investigate the roles of endogenously produced H2S in physiological conditions and in various pathophysiological states. According to a current classification, pathophysiological states with H2S overproduction (such as cancer and Down syndrome) and pathophysiological states with H2S deficit (e.g. vascular disease) can be identified. Although the understanding of H2S biology has significantly advanced over the last decade, many questions remain, for instance related to the quantification of endogenous H2S levels.
Uses:
Production of sulfur, thioorganic compounds, and alkali metal sulfides The main use of hydrogen sulfide is as a precursor to elemental sulfur. Several organosulfur compounds are produced using hydrogen sulfide. These include methanethiol, ethanethiol, and thioglycolic acid. Upon combining with alkali metal bases, hydrogen sulfide converts to alkali hydrosulfides such as sodium hydrosulfide and sodium sulfide:
H2S + NaOH → NaSH + H2O
NaSH + NaOH → Na2S + H2O
These compounds are used in the paper making industry. Specifically, salts of SH− break bonds between lignin and cellulose components of pulp in the Kraft process. The reaction is reversible: in the presence of acids, sodium sulfide turns back into hydrosulfides and hydrogen sulfide; this supplies hydrosulfides in organic solutions and is utilized in the production of thiophenol.
Uses:
Analytical chemistry For well over a century hydrogen sulfide was important in analytical chemistry in the qualitative inorganic analysis of metal ions. In these analyses, heavy metal (and nonmetal) ions (e.g., Pb(II), Cu(II), Hg(II), As(III)) are precipitated from solution upon exposure to H2S. The components of the resulting precipitate redissolve with some selectivity, and are thus identified.
Uses:
Precursor to metal sulfides As indicated above, many metal ions react with hydrogen sulfide to give the corresponding metal sulfides. This conversion is widely exploited. For example, gases or waters contaminated by hydrogen sulfide can be cleaned with metals, by forming metal sulfides. In the purification of metal ores by flotation, mineral powders are often treated with hydrogen sulfide to enhance the separation. Metal parts are sometimes passivated with hydrogen sulfide. Catalysts used in hydrodesulfurization are routinely activated with hydrogen sulfide, and the behavior of metallic catalysts used in other parts of a refinery is also modified using hydrogen sulfide.
Uses:
Miscellaneous applications Hydrogen sulfide is used to separate deuterium oxide, or heavy water, from normal water via the Girdler sulfide process.
Uses:
Scientists from the University of Exeter discovered that cell exposure to small amounts of hydrogen sulfide gas can prevent mitochondrial damage. When the cell is stressed with disease, enzymes are drawn into the cell to produce small amounts of hydrogen sulfide. This study could have further implications for preventing strokes, heart disease and arthritis. Depending on the level and aesthetics of the toning, toning by hydrogen sulfide and other sulfur-containing compounds may add to a coin's numismatic value. Coins can also be intentionally treated with hydrogen sulfide to induce toning, though artificial toning can be distinguished from natural toning and is generally criticised among collectors. A suspended animation-like state has been induced in rodents with the use of hydrogen sulfide, resulting in hypothermia with a concomitant reduction in metabolic rate. Oxygen demand was also reduced, thereby protecting against hypoxia. In addition, hydrogen sulfide has been shown to reduce inflammation in various situations.
Uses:
Occurrence Volcanoes and some hot springs (as well as cold springs) emit some H2S, where it probably arises via the hydrolysis of sulfide minerals, i.e. MS + H2O → MO + H2S. Hydrogen sulfide can be present naturally in well water, often as a result of the action of sulfate-reducing bacteria. Hydrogen sulfide is produced by the human body in small quantities through the bacterial breakdown of sulfur-containing proteins in the intestinal tract, and it therefore contributes to the characteristic odor of flatulence. It is also produced in the mouth (halitosis). A portion of global H2S emissions is due to human activity. By far the largest industrial source of H2S is petroleum refineries: the hydrodesulfurization process liberates sulfur from petroleum by the action of hydrogen. The resulting H2S is converted to elemental sulfur by partial combustion via the Claus process, which is a major source of elemental sulfur. Other anthropogenic sources of hydrogen sulfide include coke ovens, paper mills (using the Kraft process), tanneries and sewerage. H2S arises from virtually anywhere where elemental sulfur comes in contact with organic material, especially at high temperatures. Depending on environmental conditions, it is responsible for deterioration of material through the action of some sulfur-oxidizing microorganisms. This is called biogenic sulfide corrosion.
Uses:
In 2011 it was reported that increased concentrations of H2S were observed in the Bakken formation crude, possibly due to oil field practices, and presented challenges such as "health and environmental risks, corrosion of wellbore, added expense with regard to materials handling and pipeline equipment, and additional refinement requirements". Besides living near gas and oil drilling operations, ordinary citizens can be exposed to hydrogen sulfide by being near waste water treatment facilities, landfills and farms with manure storage. Exposure occurs through breathing contaminated air or drinking contaminated water. In municipal waste landfill sites, the burial of organic material rapidly leads to anaerobic digestion within the waste mass and, with the humid atmosphere and relatively high temperature that accompany biodegradation, biogas is produced as soon as the air within the waste mass has been depleted. If there is a source of sulfate-bearing material, such as plasterboard or natural gypsum (calcium sulfate dihydrate), sulfate-reducing bacteria convert this to hydrogen sulfide under anaerobic conditions. These bacteria cannot survive in air, but the moist, warm, anaerobic conditions of buried waste that contains a rich source of carbon – in inert landfills, the paper and glue used in the fabrication of products such as plasterboard can provide that carbon – are an excellent environment for the formation of hydrogen sulfide.
Uses:
In industrial anaerobic digestion processes, such as waste water treatment or the digestion of organic waste from agriculture, hydrogen sulfide can be formed from the reduction of sulfate and the degradation of amino acids and proteins within organic compounds. Sulfates are relatively non-inhibitory to methane forming bacteria but can be reduced to H2S by sulfate reducing bacteria, of which there are several genera.
Uses:
Removal from water A number of processes have been designed to remove hydrogen sulfide from drinking water.
Uses:
Continuous chlorination For levels up to 75 mg/L, chlorine is used in the purification process as an oxidizing chemical that reacts with hydrogen sulfide. This reaction yields insoluble solid sulfur. Usually the chlorine used is in the form of sodium hypochlorite (a rough dosing sketch based on this reaction follows below).
Aeration For concentrations of hydrogen sulfide less than 2 mg/L, aeration is an ideal treatment process. Oxygen is added to the water and reacts with the hydrogen sulfide to produce odorless sulfate.
Nitrate addition Calcium nitrate can be used to prevent hydrogen sulfide formation in wastewater streams.
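To make the chlorination option above concrete, here is a minimal sketch that computes the stoichiometric chlorine demand for the reaction H2S + Cl2 → S + 2 HCl from molar masses. The dosing safety factor and the function name are illustrative assumptions rather than values given in the source.

```python
# Minimal sketch (assumptions noted): stoichiometric chlorine demand for oxidizing
# dissolved H2S to elemental sulfur via H2S + Cl2 -> S + 2 HCl.
M_H2S = 34.08  # g/mol
M_CL2 = 70.90  # g/mol

def chlorine_dose_mg_per_l(h2s_mg_per_l: float, safety_factor: float = 1.5) -> float:
    """Approximate chlorine dose (mg/L) for a given dissolved H2S level (mg/L).

    The 1:1 molar ratio comes from the reaction above; `safety_factor` is an
    assumed allowance for other oxidant demand in real water, not a source value.
    """
    stoichiometric = h2s_mg_per_l * (M_CL2 / M_H2S)  # about 2.1 mg Cl2 per mg H2S
    return stoichiometric * safety_factor

if __name__ == "__main__":
    print(round(chlorine_dose_mg_per_l(2.0), 2))  # dose for 2 mg/L of H2S
```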
Uses:
Removal from fuel gases Hydrogen sulfide is commonly found in raw natural gas and biogas. It is typically removed by amine gas treating technologies. In such processes, the hydrogen sulfide is first converted to an ammonium salt, whereas the natural gas is unaffected.
RNH2 + H2S ⇌ [RNH3]+ + SH−
Heating the amine sulfide solution reverses this reaction, regenerating the amine and liberating hydrogen sulfide. The hydrogen sulfide generated in this process is typically converted to elemental sulfur using the Claus process.
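To tie the amine treating and Claus steps together numerically, the sketch below estimates how much elemental sulfur could be recovered from the H2S in a sour gas feed. The removal efficiency and the function name are illustrative assumptions; only the molar-mass ratio follows from the chemistry above.

```python
# Minimal sketch (assumed inputs): sulfur recoverable from an amine-treated sour gas
# stream, assuming the captured H2S is fully converted to elemental sulfur (Claus).
M_H2S = 34.08  # g/mol
M_S = 32.06    # g/mol

def sulfur_yield_kg(h2s_mass_kg: float, removal_efficiency: float = 0.98) -> float:
    """Approximate sulfur (kg) recovered from `h2s_mass_kg` of H2S in the feed gas.

    `removal_efficiency` is an assumed figure for the amine absorber, not a value
    taken from the source text.
    """
    captured = h2s_mass_kg * removal_efficiency
    return captured * (M_S / M_H2S)

if __name__ == "__main__":
    print(round(sulfur_yield_kg(100.0), 1))  # sulfur from 100 kg of H2S in the feed
```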
Safety:
Hydrogen sulfide is a highly toxic and flammable gas (flammable range: 4.3–46%). Being heavier than air, it tends to accumulate at the bottom of poorly ventilated spaces. Although very pungent at first (it smells like rotten eggs), it quickly deadens the sense of smell, creating temporary anosmia, so victims may be unaware of its presence until it is too late. Safe handling procedures are provided by its safety data sheet (SDS).
Safety:
Toxicity Hydrogen sulfide is a broad-spectrum poison, meaning that it can poison several different systems in the body, although the nervous system is most affected. The toxicity of H2S is comparable with that of carbon monoxide. It binds with iron in the mitochondrial cytochrome enzymes, thus preventing cellular respiration. Its toxic properties were described in detail in 1843 by Justus von Liebig.
Safety:
Low-level exposure Since hydrogen sulfide occurs naturally in the body, the environment, and the gut, enzymes exist to detoxify it. At some threshold level, believed to average around 300–350 ppm, the oxidative enzymes become overwhelmed. Many personal safety gas detectors, such as those used by utility, sewage and petrochemical workers, are set to alarm at as low as 5 to 10 ppm and to go into high alarm at 15 ppm. Detoxification is effected via oxidation to sulfate, which is harmless. Hence, low levels of hydrogen sulfide may be tolerated indefinitely.
Safety:
Exposure to lower concentrations can result in eye irritation, a sore throat and cough, nausea, shortness of breath, and fluid in the lungs (pulmonary edema). These effects are believed to be due to hydrogen sulfide combining with alkali present in moist surface tissues to form sodium sulfide, a caustic. These symptoms usually subside in a few weeks.
Long-term, low-level exposure may result in fatigue, loss of appetite, headaches, irritability, poor memory, and dizziness. Chronic exposure to low level H2S (around 2 ppm) has been implicated in increased miscarriage and reproductive health issues among Russian and Finnish wood pulp workers, but the reports have not (as of 1995) been replicated.
Safety:
High-level exposure Short-term, high-level exposure can induce immediate collapse, with loss of breathing and a high probability of death. If death does not occur, high exposure to hydrogen sulfide can lead to cortical pseudolaminar necrosis, degeneration of the basal ganglia and cerebral edema. Although respiratory paralysis may be immediate, it can also be delayed up to 72 hours. Diagnostic of extreme poisoning by H2S is the discolouration of copper coins in the pockets of the victim.
Safety:
Inhalation of H2S resulted in about 7 workplace deaths per year in the U.S. (2011–2017 data), second only to carbon monoxide (17 deaths per year) for workplace chemical inhalation deaths.
Safety:
Exposure thresholds Exposure limits stipulated by the United States government:
10 ppm REL-Ceiling (NIOSH): recommended permissible exposure ceiling (the recommended level that must not be exceeded, except once for 10 min. in an 8-hour shift, if no other measurable exposure occurs)
20 ppm PEL-Ceiling (OSHA): permissible exposure ceiling (the level that must not be exceeded, except once for 10 min. in an 8-hour shift, if no other measurable exposure occurs)
50 ppm PEL-Peak (OSHA): peak permissible exposure (the level that must never be exceeded)
100 ppm IDLH (NIOSH): immediately dangerous to life and health (the level that interferes with the ability to escape)
0.00047 ppm or 0.47 ppb is the odor threshold, the point at which 50% of a human panel can detect the presence of an odor without being able to identify it.
Safety:
10–20 ppm is the borderline concentration for eye irritation.
50–100 ppm leads to eye damage.
At 100–150 ppm the olfactory nerve is paralyzed after a few inhalations, and the sense of smell disappears, often together with awareness of danger.
320–530 ppm leads to pulmonary edema with the possibility of death.
530–1000 ppm causes strong stimulation of the central nervous system and rapid breathing, leading to loss of breathing.
800 ppm is the lethal concentration for 50% of humans for 5 minutes' exposure (LC50).
Concentrations over 1000 ppm cause immediate collapse with loss of breathing, even after inhalation of a single breath.
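The concentration bands listed above lend themselves to a simple lookup. The sketch below maps an H2S concentration in ppm to the corresponding described effect; the band boundaries are taken from the list above, while the function name and the handling of overlapping ranges are illustrative assumptions.

```python
# Minimal sketch (assumed structure): map an H2S concentration (ppm) to the
# effect bands described above. Boundaries follow the listed values; where the
# listed ranges overlap, the more severe band is reported.
def h2s_effect(ppm: float) -> str:
    if ppm >= 1000:
        return "immediate collapse with loss of breathing"
    if ppm >= 800:
        return "lethal to 50% of humans after 5 minutes (LC50)"
    if ppm >= 530:
        return "strong CNS stimulation and rapid breathing, leading to loss of breathing"
    if ppm >= 320:
        return "pulmonary edema, with the possibility of death"
    if ppm >= 100:
        return "olfactory nerve paralysis after a few inhalations"
    if ppm >= 50:
        return "eye damage"
    if ppm >= 10:
        return "borderline eye irritation"
    if ppm >= 0.00047:
        return "odor detectable by about half of people"
    return "below the odor threshold"

if __name__ == "__main__":
    for level in (0.1, 15, 120, 700, 1200):
        print(level, "->", h2s_effect(level))
```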
Treatment Treatment involves immediate inhalation of amyl nitrite, injections of sodium nitrite, or administration of 4-dimethylaminophenol in combination with inhalation of pure oxygen, administration of bronchodilators to overcome eventual bronchospasm, and in some cases hyperbaric oxygen therapy (HBOT). HBOT has clinical and anecdotal support.
Safety:
Incidents Hydrogen sulfide was used by the British Army as a chemical weapon during World War I. It was not considered to be an ideal war gas, partially due to its flammability and because the distinctive smell could be detected from even a small leak, alerting the enemy to the presence of the gas. It was nevertheless used on two occasions in 1916 when other gases were in short supply. On September 2, 2005, a sewage line leak in the propeller room of a Royal Caribbean cruise liner docked in Los Angeles resulted in the deaths of 3 crewmen. As a result, all such compartments are now required to have a ventilation system. A dump of toxic waste containing hydrogen sulfide is believed to have caused 17 deaths and thousands of illnesses in Abidjan, on the West African coast, in the 2006 Côte d'Ivoire toxic waste dump.
Safety:
In September 2008, three workers were killed and two suffered serious injury, including long-term brain damage, at a mushroom growing company in Langley, British Columbia. A valve to a pipe that carried chicken manure, straw and gypsum to the compost fuel for the mushroom growing operation became clogged, and as workers unclogged the valve in a confined space without proper ventilation, the hydrogen sulfide that had built up due to anaerobic decomposition of the material was released, poisoning the workers in the surrounding area. Investigators said there could have been more fatalities if the pipe had been fully cleared and/or if the wind had changed directions. In 2014, levels of hydrogen sulfide as high as 83 ppm were detected at a recently built mall in Thailand called Siam Square One in the Siam Square area. Shop tenants at the mall reported health complications such as sinus inflammation, breathing difficulties and eye irritation. After investigation it was determined that the large amount of gas originated from imperfect treatment and disposal of waste water in the building. In 2014, hydrogen sulfide gas killed workers who had climbed into a 15 ft deep chamber at the Promenade shopping center in North Scottsdale, Arizona, USA, without wearing personal protective gear. "Arriving crews recorded high levels of hydrogen cyanide and hydrogen sulfide coming out of the sewer." In November 2014, a substantial amount of hydrogen sulfide gas shrouded the central, eastern and southeastern parts of Moscow. Residents living in the area were urged by the emergencies ministry to stay indoors. Although the exact source of the gas was not known, blame had been placed on a Moscow oil refinery. In June 2016, a mother and her daughter were found deceased in their still-running 2006 Porsche Cayenne SUV against a guardrail on Florida's Turnpike, initially thought to be victims of carbon monoxide poisoning. Their deaths remained unexplained as the medical examiner waited for results of toxicology tests on the victims, until urine tests revealed that hydrogen sulfide was the cause of death. A report from the Orange-Osceola Medical Examiner's Office indicated that toxic fumes came from the Porsche's starter battery, located under the front passenger seat. In January 2017, three utility workers in Key Largo, Florida, died one by one within seconds of descending into a narrow space beneath a manhole cover to check a section of paved street. In an attempt to save the men, a firefighter who entered the hole without his air tank (because he could not fit through the hole with it) collapsed within seconds and had to be rescued by a colleague. The firefighter was airlifted to Jackson Memorial Hospital and later recovered. A Monroe County Sheriff officer initially determined that the space contained hydrogen sulfide and methane gas produced by decomposing vegetation. On May 24, 2018, two workers were killed, another seriously injured, and 14 others hospitalized by hydrogen sulfide inhalation at a Norske Skog paper mill in Albury, New South Wales. An investigation by SafeWork NSW found that the gas was released from a tank used to hold process water. The workers were exposed at the end of a 3-day maintenance period. Hydrogen sulfide had built up in an upstream tank, which had been left stagnant and untreated with biocide during the maintenance period. These conditions allowed sulfate-reducing bacteria to grow in the upstream tank, as the water contained small quantities of wood pulp and fiber.
The high rate of pumping from this tank into the tank involved in the incident caused hydrogen sulfide gas to escape from various openings around its top when pumping was resumed at the end of the maintenance period. The area above it was sufficiently enclosed for the gas to pool there, despite not being identified as a confined space by Norske Skog. One of the workers who was killed was exposed while investigating an apparent fluid leak in the tank, while the other who was killed and the worker who was badly injured were attempting to rescue the first after he collapsed on top of it. In a resulting criminal case, Norske Skog was accused of failing to ensure the health and safety of its workforce at the plant to a reasonably practicable extent. It pleaded guilty, and was fined AU$1,012,500 and ordered to fund the production of an anonymized educational video about the incident. In October 2019, an Odessa, Texas employee of Aghorn Operating Inc. and his wife were killed due to a water pump failure. Produced water with a high concentration of hydrogen sulfide was released by the pump. The worker died while responding to an automated phone call he had received alerting him to a mechanical failure in the pump, while his wife died after driving to the facility to check on him. A CSB investigation cited lax safety practices at the facility, such as an informal lockout-tagout procedure and a nonfunctioning hydrogen sulfide alert system.
Safety:
Suicides The gas, produced by mixing certain household ingredients, was used in a suicide wave in 2008 in Japan. The wave prompted staff at Tokyo's suicide prevention center to set up a special hotline during "Golden Week", as they received an increase in calls from people wanting to kill themselves during the annual May holiday. As of 2010, this phenomenon had occurred in a number of US cities, prompting warnings to those arriving at the site of a suicide. These first responders, such as emergency services workers or family members, are at risk of death or injury from inhaling the gas, or by fire. Local governments have also initiated campaigns to prevent such suicides.
Safety:
In 2020, H2S ingestion was used as a suicide method by Japanese pro wrestler Hana Kimura.
Hydrogen sulfide in the natural environment:
Microbial: The sulfur cycle Hydrogen sulfide is a central participant in the sulfur cycle, the biogeochemical cycle of sulfur on Earth. In the absence of oxygen, sulfur-reducing and sulfate-reducing bacteria derive energy from oxidizing hydrogen or organic molecules by reducing elemental sulfur or sulfate to hydrogen sulfide. Other bacteria liberate hydrogen sulfide from sulfur-containing amino acids; this gives rise to the odor of rotten eggs and contributes to the odor of flatulence.
Hydrogen sulfide in the natural environment:
As organic matter decays under low-oxygen (or hypoxic) conditions (such as in swamps, eutrophic lakes or dead zones of oceans), sulfate-reducing bacteria will use the sulfates present in the water to oxidize the organic matter, producing hydrogen sulfide as waste. Some of the hydrogen sulfide will react with metal ions in the water to produce metal sulfides, which are not water-soluble. These metal sulfides, such as ferrous sulfide FeS, are often black or brown, leading to the dark color of sludge.
Hydrogen sulfide in the natural environment:
Several groups of bacteria can use hydrogen sulfide as fuel, oxidizing it to elemental sulfur or to sulfate by using dissolved oxygen, metal oxides (e.g., iron oxyhydroxides and manganese oxides), or nitrate as electron acceptors.The purple sulfur bacteria and the green sulfur bacteria use hydrogen sulfide as an electron donor in photosynthesis, thereby producing elemental sulfur. This mode of photosynthesis is older than the mode of cyanobacteria, algae, and plants, which uses water as electron donor and liberates oxygen.
Hydrogen sulfide in the natural environment:
The biochemistry of hydrogen sulfide is a key part of the chemistry of the iron-sulfur world. In this model of the origin of life on Earth, geologically produced hydrogen sulfide is postulated as an electron donor driving the reduction of carbon dioxide.
Hydrogen sulfide in the natural environment:
Animals Hydrogen sulfide is lethal to most animals, but a few highly specialized species (extremophiles) do thrive in habitats that are rich in this compound. In the deep sea, hydrothermal vents and cold seeps with high levels of hydrogen sulfide are home to a number of extremely specialized lifeforms, ranging from bacteria to fish. Because of the absence of sunlight at these depths, these ecosystems rely on chemosynthesis rather than photosynthesis. Freshwater springs rich in hydrogen sulfide are mainly home to invertebrates, but also include a small number of fish: Cyprinodon bobmilleri (a pupfish from Mexico), Limia sulphurophila (a poeciliid from the Dominican Republic), Gambusia eurystoma (a poeciliid from Mexico), and a few Poecilia (poeciliids from Mexico). Invertebrates and microorganisms in some cave systems, such as Movile Cave, are adapted to high levels of hydrogen sulfide.
Hydrogen sulfide in the natural environment:
Interstellar and planetary occurrence Hydrogen sulfide has often been detected in the interstellar medium. It also occurs in the clouds of planets in our solar system.
Hydrogen sulfide in the natural environment:
Mass extinctions Hydrogen sulfide has been implicated in several mass extinctions that have occurred in the Earth's past. In particular, a buildup of hydrogen sulfide in the atmosphere may have caused, or at least contributed to, the Permian-Triassic extinction event 252 million years ago. Organic residues from these extinction boundaries indicate that the oceans were anoxic (oxygen-depleted) and had species of shallow plankton that metabolized H2S. The formation of H2S may have been initiated by massive volcanic eruptions, which emitted carbon dioxide and methane into the atmosphere, which warmed the oceans, lowering their capacity to absorb oxygen that would otherwise oxidize H2S. The increased levels of hydrogen sulfide could have killed oxygen-generating plants as well as depleted the ozone layer, causing further stress. Small H2S blooms have been detected in modern times in the Dead Sea and in the Atlantic Ocean off the coast of Namibia.
Additional resources:
Committee on Medical and Biological Effects of Environmental Pollutants (1979). Hydrogen Sulfide. Baltimore: University Park Press. ISBN 978-0-8391-0127-7.
Siefers, Andrea (2010). A novel and cost-effective hydrogen sulfide removal technology using tire derived rubber particles (MS thesis). Iowa State University. Retrieved 8 February 2013.
**Malate dehydrogenase (oxaloacetate-decarboxylating) (NADP+)**
Malate dehydrogenase (oxaloacetate-decarboxylating) (NADP+):
Malate dehydrogenase (oxaloacetate-decarboxylating) (NADP+) (EC 1.1.1.40) or NADP-malic enzyme (NADP-ME) is an enzyme that catalyzes the following chemical reaction in the presence of a bivalent metal ion:
(S)-malate + NADP+ ⇌ pyruvate + CO2 + NADPH
Thus, the two substrates of this enzyme are (S)-malate and NADP+, whereas its 3 products are pyruvate, CO2, and NADPH. Malate is oxidized to pyruvate and CO2, and NADP+ is reduced to NADPH.
Malate dehydrogenase (oxaloacetate-decarboxylating) (NADP+):
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of the donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (S)-malate:NADP+ oxidoreductase (oxaloacetate-decarboxylating). This enzyme participates in pyruvate metabolism and carbon fixation. NADP-malic enzyme is one of three decarboxylation enzymes used in the inorganic carbon concentrating mechanisms of C4 and CAM plants. The others are NAD-malic enzyme and PEP carboxykinase. Although one of the three photosynthetic decarboxylases often predominates, the simultaneous operation of all three has also been shown to occur.
Enzyme structure:
Based on crystallography data of homologous NADP-dependent malic enzymes of mammalian origin, a 3D model for C4 pathway NADP-ME in plants has been developed, identifying the key residues involved in substrate binding or catalysis. Dinucleotide binding involves two glycine-rich GXGXXG motifs, a hydrophobic groove involving at least six amino acid residues, and a negatively charged residue at the end of the βB-strand. The primary sequence of the first motif, 240GLGDLG245, is a consensus marker for phosphate binding, evidencing involvement in NADP binding, while the other glycine-rich motif adopts a classical Rossmann fold—also a typical marker for NADP cofactor binding. Mutagenesis experiments in maize NADP-ME have supported the current model. Valine substitution for glycine in either motif region rendered the enzyme completely inactive, while spectral analysis indicated no major changes from the wild-type form. The data suggest direct impairment of a key residue involved in binding or catalysis rather than of an inter-domain residue influencing conformational stability. Additionally, a key arginine residue at site 237 has been shown to interact with both the malate and NADP+ substrates, forming favorable electrostatic interactions with the negatively charged carboxylic-acid and phosphate groups respectively. Whether the residue plays a role in substrate binding or in positioning the substrate for catalysis has yet to be determined. Lysine residue 255 has been implicated as a catalytic base for the enzyme's reactivity; however, further studies are still required to conclusively establish its biochemical role.
Enzyme structure:
Structural studies As of 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1GQ2, 1GZ4, and 2AW5.
Biological function:
In a broader context, malic enzymes are found within a wide range of eukaryotic organisms, from fungi to mammals, and, beyond that, are shown to localize to a range of subcellular locations, including the cytosol, mitochondria, and chloroplast. C4 NADP-ME, specifically, is localized in plants to bundle sheath chloroplasts. During C4 photosynthesis, an evolved pathway to increase localized CO2 concentrations under the threat of enhanced photorespiration, CO2 is captured within mesophyll cells, fixed as oxaloacetate, converted into malate and released internally within bundle sheath cells to directly feed RuBisCO activity. This release of fixed CO2, triggered by the favorable decarboxylation of malate into pyruvate, is mediated by NADP-dependent malic enzyme. In fact, the significance of NADP-ME activity in CO2 conservation is evidenced by a study performed with transgenic plants exhibiting a NADP-ME loss-of-function mutation. Plants with the mutation showed 40% of the activity of wild-type NADP-ME and achieved significantly reduced CO2 uptake even at high intercellular levels of CO2, evidencing the biological importance of NADP-ME in regulating carbon flux towards the Calvin cycle.
Enzyme regulation:
NADP-ME expression has been shown to be regulated by abiotic stress factors. For CAM plants, drought conditions cause stomata to largely remain shut to avoid water loss by evapotranspiration, which unfortunately leads to CO2 starvation. In compensation, closed stomata activate the translation of NADP-ME to reinforce high efficiency of CO2 assimilation during the brief intervals of CO2 intake, allowing carbon fixation to continue.
Enzyme regulation:
In addition to regulation on the longer time scale by means of expression control, regulation on the short time scale can occur through allosteric mechanisms. C4 NADP-ME has been shown to be partially inhibited by its substrate, malate, suggesting two independent binding sites: one at the active site and one at an allosteric site. However, the inhibitory effect exhibits pH dependence – existent at a pH of 7 but not at a pH of 8. The control of enzyme activity by pH changes aligns with the hypothesis that NADP-ME is most active while photosynthesis is in progress: active light reactions lead to a rise in basicity within the chloroplast stroma, the location of NADP-ME, diminishing the inhibitory effect of malate on NADP-ME and thereby promoting a more active state. Conversely, slowed light reactions lead to a rise in acidity within the stroma, promoting the inhibition of NADP-ME by malate. Because the high-energy products of the light reactions, NADPH and ATP, are required for the Calvin cycle to proceed, a buildup of CO2 without them is not useful, explaining the need for the regulatory mechanism. This protein may use the morpheein model of allosteric regulation.
Evolution:
NADP-malic enzyme, like all other C4 decarboxylases, did not evolve de novo for CO2 pooling to aid RuBisCO. Rather, NADP-ME was directly recruited from the C3 photosynthetic form, which itself traces back to an ancient cytosolic ancestor. In the cytosol, the enzyme existed as a series of housekeeping isoforms purposed towards a variety of functions including malate level maintenance during hypoxia, microspore separation, and pathogen defense. With regard to the mechanism of evolution, the C4 functionality is thought to have stemmed from gene duplication errors both within promoter regions, triggering overexpression in bundle-sheath cells, and within the coding region, generating neofunctionalization. Selection for CO2 preservation function, as well as for enhanced water and nitrogen utilization under stressed conditions, was then shaped by natural pressures.
**Steroid ester**
Steroid ester:
A steroid ester is an ester of a steroid. They include androgen esters, estrogen esters, progestogen esters, and corticosteroid esters. Steroid esters may be naturally occurring/endogenous like DHEA sulfate or synthetic like estradiol valerate. Esterification is useful because it is often able to render the parent steroid into a prodrug of itself with altered chemical properties such as improved metabolic stability, water solubility, and/or lipophilicity. This, in turn, can enhance pharmacokinetics, for instance by improving the steroid's bioavailability and/or conferring depot activity and hence an extended duration with intramuscular or subcutaneous injection. Esterification of steroids with fatty acids was developed to prolong the duration of effect of steroid hormones. By 1957, more than 500 steroid esters had been synthesized, most frequently of androgens. The longer the fatty acid chain, up to a certain optimal length, the longer the duration when prepared as an oil solution and injected. Across a chain length range of 6 to 12 carbon atoms, a length of 9 or 10 carbon atoms (nonanoate or decanoate ester) was found to be optimal in rodents in the case of testosterone esters. Fatty acid esters increase the lipophilicity of steroids, with longer fatty acids resulting in greater lipophilicity. The greater solubility in oil allows the steroid esters to be dissolved in a smaller oil volume, thereby allowing for larger doses with intramuscular injection. In addition, the greater the lipophilicity of the steroid, as measured by the octanol/water partition coefficient (logP), the slower its release from the oily depot at the injection site and the longer its duration. Steroid esters can also be prepared as crystalline aqueous suspensions. Aqueous suspensions of steroid crystals result in prolongation of duration with intramuscular injection similarly to oil solutions. The duration is longer than that of oil solutions, intermediate between oil solutions and subcutaneous pellet implants. The sizes of crystals in suspensions vary and can range from 0.1 μm to some hundreds of μm. The duration of crystalline steroid suspensions increases directly with the size of the crystals. However, crystalline suspensions have an irritating effect in the body, and intramuscular injections of crystalline steroid suspensions result in painful local reactions. These reactions worsen with larger crystals, and for this reason, crystal sizes must be limited to minimize local reactions. Particle sizes of more than 300 μm in the case of estradiol benzoate by intramuscular injection have been found to be too painful for use. In some cases, crystalline steroid suspensions are used not for prolongation of effect, but because the low solubility of the steroid makes this preparation the only practical way to deliver the steroid in a reasonable injection volume. Examples include cortisone acetate and hydrocortisone and its esters. A requirement of long-lasting crystalline steroid administration is that the steroid be sufficiently water-insoluble, so that it dissolves slowly and thereby attains a prolonged therapeutic effect. The crystals in suspensions can sometimes clump together or aggregate and grow in size. This can be avoided by careful formulation. Crystalline suspensions of steroids are prepared either by precipitation or by dispersing finely divided material in an aqueous suspension medium.
Desired particle size can be achieved by grinding, for instance through the use of an atomizer. Adolf Butenandt reported in 1932 that estrone benzoate in oil solution had a prolonged duration with injection in animals. No such prolongation of action occurred if it was given by intravenous injection. Estradiol benzoate was synthesized in 1933 and was marketed for use the same year.
Sulfur-based esters:
Certain sulfur-based steroid esters have a sulfamate or sulfonamide moiety as the ester, typically at the C3 and/or C17β positions. Like many other steroid esters, they are prodrugs. Unlike other steroid esters however, they bypass first-pass metabolism with oral administration and have high oral bioavailability and potency, abolished first-pass hepatic impact, and long elimination half-lives and durations of action. They are under development for potential clinical use. Examples include the estradiol esters estradiol sulfamate (E2MATE; also a potent steroid sulfatase inhibitor) and EC508 (estradiol 17β-(1-(4-(aminosulfonyl)benzoyl)-L-proline)), the testosterone ester EC586 (testosterone 17β-(1-((5-(aminosulfonyl)-2-pyridinyl)carbonyl)-L-proline)), and sulfonamide esters of levonorgestrel and etonogestrel.
**World Action And Adventure**
World Action And Adventure:
World Action and Adventure is a role-playing game published by M.S. Kinney Corporation in 1985.
Description:
World Action and Adventure is a universal system, with character creation, skill, combat, and mass combat rules. The boxed set includes the Official Guide, a GM's screen, character record sheets, and blank forms.
Publication history:
World Action and Adventure was designed by Gregory L. Kinney, and published by M.S. Kinney Corporation in 1985 as a boxed set containing a 160-page hardcover book, a cardstock screen, a packet of blank forms, a small pad of character sheets, and dice.
Reception:
Lawrence Schick comments: "WA&A is beyond doubt the nicest RPG ever published. The three books attempt to describe everything on Earth from a staggeringly naive worldview. The rulebooks consist of a multitude of very general tables that list everything the author could think of. The descriptions that accompany the tables are so ingenuous, they're just priceless. Then there's the poem, "World Action and Adventure": 20 verses, each describing a different aspect of adventure. An excerpt: 'Pyramids are being built/the Pharaoh has a suppressed guilt/so many slaves they thirst and die/to build these things that touch the sky' ... it just goes on and on."
**Bessel–Maitland function**
Bessel–Maitland function:
In mathematics, the Bessel–Maitland function, or Wright generalized Bessel function, is a generalization of the Bessel function, introduced by Edward Maitland Wright (1934). The word "Maitland" in the name of the function seems to be the result of confusing Edward Maitland Wright's middle and last names. It is given by
J_{μ,ν}(z) = ∑_{k≥0} (−z)^k / (Γ(kμ + ν + 1) k!).
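A short numerical sketch of the series above follows. It sums the first terms directly and checks the special case μ = 1, ν = 0, for which J_{1,0}(z²/4) equals the ordinary Bessel function J_0(z); the function name, term count and check value are illustrative choices, not part of the source.

```python
# Minimal sketch: partial sums of the Wright generalized Bessel series
#   J_{mu,nu}(z) = sum_{k>=0} (-z)^k / (Gamma(k*mu + nu + 1) * k!)
# The number of terms and the reference check are assumptions for illustration.
import math

def wright_bessel(mu: float, nu: float, z: float, terms: int = 40) -> float:
    total = 0.0
    for k in range(terms):
        total += (-z) ** k / (math.gamma(k * mu + nu + 1) * math.factorial(k))
    return total

if __name__ == "__main__":
    # Special case: for mu = 1, nu = 0 the series reduces to the ordinary Bessel
    # function via J_0(x) = J_{1,0}(x^2 / 4); the tabulated value J_0(1) is about 0.7651976866.
    x = 1.0
    print(wright_bessel(1.0, 0.0, x * x / 4))  # expected to be close to 0.7651976866
```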
**Embroidery of India**
Embroidery of India:
Embroidery in India includes dozens of embroidery styles that vary by region and clothing styles. Designs in Indian embroidery are formed on the basis of the texture and the design of the fabric and the stitch. The dot and the alternate dot, the circle, the square, the triangle, and permutations and combinations of these constitute the design.
Aari:
Aari work involves a hook, plied from the top but fed by silk thread from below, with the material spread out on a frame. This movement creates loops, and repeats of these lead to a line of chain stitches. The fabric is stretched on a frame and stitching is done with a long needle ending in a hook, such as a crewel, tambour (a needle similar to a very fine crochet hook but with a sharp point) or Luneville work. The other hand feeds the thread from the underside, and the hook brings it up, making a chain stitch, but it is much quicker than chain stitch done in the usual way: it looks machine-made. The work can also be embellished with sequins and beads, which are kept on the right side; the needle goes inside their holes before plunging below, thus securing them to the fabric. Many types of materials are used, such as zari threads, embellishments, sequins, etc.
Aari:
Aari embroidery is practiced in various regions such as in Kashmir and Kutch (Gujarat).
Banjara embroidery:
Practiced by the Lambada gypsy tribes of Andhra Pradesh, Banjara embroidery is a mix of applique with mirrors and beadwork. Bright red, yellow, black and white coloured cloth is laid in bands and joined with a white criss-cross stitch. The Banjaras of Madhya Pradesh who are found in the districts of Malwa and Nimar have their own style of embroidery where designs are created according to the weave of the cloth, and the textured effect is achieved by varying colours and stitches of the geometric patterns and designs. Motifs are generally highlighted by cross-stitch.
Banni or Heer Bharat (Gujarat):
The Banni or Heer Bharat embroidery originates in Gujarat, and is practiced mainly by the Lohana community. It is done with silk floss (Heer means "silk floss") and is famous for its vibrancy and richness in color palettes and design patterns, which include shisha (mirror) work. Bagh and phulkari embroidery of the Punjab region has influenced Heer Bharat embroidery in its use of geometrical motifs and stitchery.
Chamba Rumal (Himachal Pradesh):
It originated in the Chamba kingdom of Himachal Pradesh in the 17th century. This embroidery flourished in the princely hill states of Kangra, Chamba, Basholi, and other neighbouring provinces. The Chamba region has highly skilled craftsmen. Chamba embroidery has its own distinctive style: small squares or rectangles of cloth embroidered with untwisted silk threads. While untwisted silk is most common, some Chamba embroideries make use of thin metal wires or metallic yarn. While the chamba rumal originated in the 17th century, it reached widespread popularity in the 18th century after rulers in the Himalayan region patronized Chamba Rumal embroiderers. The original Chamba embroideries were done by women or young children, and the embroideries often depicted gods or goddesses. Original Chamba embroideries were very important in marriages, as the embroideries were kept as the bride's dowry. Chamba embroideries often began by drawing an outline on the square or rectangular piece of fabric; while originally embroidered by women, at the height of popularity in the 18th century many male painters drew the outlines and embroidered the cloth themselves to ensure high-quality work. Not long after its height of popularity in the 18th century, the chamba rumal's popularity declined. The rumals began to lose their sacredness; today most rumals are made by families trying to sell them to survive, and the Chamba Rumals are not of the same quality as they were in the 17th and 18th centuries. While this art style has declined over the years and almost been lost, in 2009 Lalita Vakil was given the Shilp Guru award for her ability and skill in Chamba embroidery.
Chikankari (Uttar Pradesh):
The present form of chikan (meaning elegant patterns on fabric) work is associated with the city of Lucknow, in Uttar Pradesh. Chikan embroidery on silk is Lucknow's own innovation. The other chikan styles are those of Calcutta and Dacca. However, characteristic forms of stitch were developed in Lucknow: phanda and murri. Chikan embroidery is believed to have been introduced by Nur Jahan, the wife of Jahangir. Chikan embroidery involves the use of white thread on white muslin (tanzeb), fine cotton (mulmul), or voile, fine almost sheer fabrics which showcase shadow work embroidery the best. Other colours can also be used. The artisans usually create individual motifs or butis of animals and flowers (rose, lotus, jasmine, creepers). The designs are first printed onto the fabric not with chalk, but with a mixture of glue and indigo.
Chikankari (Uttar Pradesh):
At least 40 different stitches are documented, of which about 30 are still practiced today and include flat, raised and embossed stitches, and the open trellis-like jaali work. Some of the stitches that are used in Chikankari work include: taipchi, pechni, pashni, bakhia (ulta bakhia and sidhi bakhia), gitti, jangira, murri, phanda, jaalis etc. In English: chain stitch, buttonhole stitch, French knots and running stitch, shadow work. Another is the khatao (also called khatava or katava).
Gota (Jaipur, Rajasthan):
It is a form of appliqué in gold thread, used for women's formal attire. Small pieces of zari ribbon are applied onto the fabric with the edges sewn down to create elaborate patterns. Lengths of wider golden ribbons are stitched on the edges of the fabric to create an effect of gold zari work. Khandela in Shekhawati is famous for its manufacture. The Muslim community uses Kinari or edging, a fringed border decoration. Gota-kinari is practiced mainly in Jaipur, utilising fine shapes of birds, animals and human figures which are cut and sewn onto the material. It is very famous in Rajasthan as well as in many other parts of the world.
Kamal kadai (Andhra Pradesh):
It is an embroidery native to Andhra Pradesh. Woven trellis stitch is used to make flowers and leaves, and other stitches are done on the fabric to complete the embroidery.
Kantha (Bengal):
Naksha is embroidery on many layers of cloth (like quilting), with running stitch. It is also known as dorukha, which means the designs/motifs are equally visible on both sides: there is no right or wrong side, so both sides are usable. Traditionally, worn-out clothes and saris were piled together and stitched into quilts. Rural Bengali women still do this with cotton saris, the embroidery thread being taken from the sari border. It started as a method of making quilts, but the same type of embroidery can also be found on saris, salwar suits, stoles, napkins, etc. Themes include human beings, animals, flowers, geometric designs and mythological figures.
Karchobi - Rajasthan:
It is a raised zari metallic thread embroidery created by sewing flat stitches on cotton padding. This technique is commonly used for bridal and formal costumes as well as for velvet coverings, tent hangings, curtains and the coverings of animal carts and temple chariots.
Kasuti or Kasuthi (Karnataka):
Kasuti (Kai = hand and Suti = weave/wrap) comes from the state of Karnataka. Kasuti originated in Karnataka during the Chalukya period (6th to 12th century) [5]; it is done with a single thread and involves counting each thread on the cloth. The patterns are stitched without knots, so that both sides of the cloth look alike. Stitches like Gavanti, Murgi, Negi and Menthi form intricate patterns like gopura, chariot, palanquin, lamps and conch shells, as well as peacocks and elephants, in fixed designs and patterns.
Kathi (Gujarat):
Kathi embroidery was introduced by the Kathi, wandering cattle breeders. This technique combines chain stitch, appliqué work and mirror-like insertions.
Kaudi (Karnataka):
Kaudi (ಕೌದಿ) is a blanket or bedspread and appliqué embroidery from Karnataka. Old fabrics are cut into pieces and stitched with a simple running stitch and the Gubbi Kaalu stitch.
Khneng (Meghalaya):
It is an embroidery from Meghalaya. Mustoh village is the only known place for khneng embroidery, and the embroidery is traditionally done on eri silk shawls. [6]
Kutch or Aribharat:
The best known of the Kutch (Gujarat) embroidery techniques is Aribharat, named after the hooked needle which forms the chainstitch. It is also known as Mochibharat, as it used to be done by mochis (cobblers).
Kutchi bharat/Sindhi stitch (Gujarat):
A variation of Kutch work, this geometric embroidery starts with a foundation framework of herringbone stitch or Cretan stitch, and then this framework is completely filled with interlacing. It is said that this technique originated in the faraway land of Armenia and found its way to Gujarat with travelling nomads. Sindhi stitch or Maltese cross stitch is also similar, but the innovation of the Kutchi women has taken it beyond the traditional designs of Kutch work.
Kashmiri embroidery:
Kashmiri Kashida Kashmiri embroidery (also Kashida) originated during the Mughal period and is used for phirans (woollen kurtas) and namdahs (woollen rugs) as well as stoles. It draws inspiration from nature. Birds, blossoms and flowers, creepers, chinar leaves, ghobi, mangoes, lotus, and trees are the most common themes. The entire pattern is made with one or two embroidery stitches, mainly chain stitch, on a base of silk, wool or cotton: the colour is usually white, off-white or cream, but nowadays one can find stoles and salwar-kameez sets in many other colours such as brown, deep blue, sky blue, maroon and rani pink. Kashida is primarily done on canvas with crystal threads, but Kashida also employs pashmina and leather threads. Apart from clothes, it is found on home furnishings like bed spreads, sofa and floor cushions, and pillow covers.
Kashmiri embroidery:
The base cloth, whether wool or cotton, is generally white or cream or a similar shade. Pastel colors are also often used. The craftsmen use shades that blend with the background. Thread colors are inspired by local flowers. Only one or two stitches are employed on one fabric.
Kashmiri embroidery is known for the skilled execution of a single stitch, which is often called the Kashmiri stitch and which may comprise the chain stitch, the satin stitch, the slanted darn stitch, the stem stitch, and the herringbone stitch. Sometimes, the doori (knot) stitches are used but not more than one or two at a time.
Kashmiri embroidery:
Kashmiri stitches The stitches include sozni (satin), zalakdozi (chain) and vata chikan (button hole). Other styles include dorukha, in which the motif appears on both sides of the shawl with each side having a different color; papier-mâché; aari (hook) embroidery; shaaldaar; chinar-kaam; and samovar (the antique Kashmiri tea-pot), a very typical and popular design used in Kashmiri embroidery. The samovar pattern is then filled up with intricate flowers, leaves and twigs. Kashir-jaal implies a fine network of embroidery, particularly on the neckline and sleeves of a dress material.
Kashmiri embroidery:
Further styles include naala jaal which involves embroidery particularly on the neckline and chest/yoke: naala means neck in the Koshur dialect of Kashmiri language; jaama is a very dense embroidery covering the whole base fabric with a thick spread of vine/creepers and flowers, badaam and heart shapes, a variation of this form is neem-jaama, where neem means demi or half, because the embroidery is less dense, allowing a view of the fabric underneath; and jaal consisting of bel-buti: a fine and sparse net of vine/creepers and flowers. Variation of this form is neem-jaal, where again the work is less dense.
Mukaish Work- (similar to chikankari) -Lucknow:
Small rectangular pieces of metal are squeezed shut around some threads of the fabric. Mukesh work (known also as badla or fardi) involves women making shiny stitches amid chikan embroidery using a needle and long, thin strips of metal.
Phool Patti ka Kaam (Uttar Pradesh):
Flower embroidery of Uttar Pradesh, especially in Aligarh.
Phulkari (Punjab and Haryana):
Phulkari (Phul = flower, Kari = work) originated in the late 17th century in the Punjab region. It is the most famous rural embroidery tradition of Punjab, mentioned in the Punjabi folklore of Heer Ranjha by Waris Shah. Its present form and popularity go back to the 15th century, during Maharaja Ranjit Singh's reign. Phulkari also means headscarf, and it comes from the 19th century tradition of carrying an odhani or a head-scarf with flower patterns. Its distinctive property is that the base is a dull hand-spun or khadi cloth, with bright coloured threads that cover it completely, leaving no gaps. It uses a darn stitch done from the wrong side of the fabric using darning needles, one thread at a time, leaving a long stitch below to form the basic pattern. Famous for Phulkari are the cities of Amritsar, Jalandhar, Ambala, Ludhiana, Nabha, Jind, Faridkot, and Kapurthala. Other cities include Gurgaon (Haryana), Karnal, Hissar, Rohtak and Delhi. Bagh is an offshoot of phulkari and almost always follows a geometric pattern, with green as its basic colour.
Phulkari (Punjab and Haryana):
Other styles The embroidery styles of the Punjab region include kalabatun embroidery using thin wires. Kalabatan surkh involves using gold wires on orange coloured and red silk. Kalabatan safed involves using silver wires on white material. There are two kinds of gold embroidery, one of a solid and rich kind called kar-chob and the other called tila-kar or kar-chikan utilising gold thread. The former is used for carpets and saddle cloths whereas the latter is used for dresses. The Punjab region also uses mukesh embroidery: mukesh bati-hui, twisted tinsel, mukesh gokru, flattened gold wire for embroidery of a heavy kind, and waved mukesh, made by crimping mukesh batihui with iron tongs. Ludhiana and Amritsar are known for embroidery using white, silver and gold threads on clothes such as chogas and waistcoats (phatuhi). Patchwork is also a tradition of the region.
Pichwai (Rajasthan):
Colourful embroidered cloth-hangings made in Nathdwara, Rajasthan. The central themes focus on Lord Krishna.
Pipli (Odisha):
Appliqué or Pipli work originates from the Pipli village in Odisha and some parts of Gujarat. It is called Chandua and is based on patchwork: brightly coloured and patterned fabric pieces are sewn together on a plain background, mostly velvet, along with mirror and lace work. Designs include Hindu gods, human forms, animals, flowers and vehicles. Originally Chandua work was done to build the chariots for the Puri Rath Yatra and was also used for parasols, canopies and pillows for the Rath Yatra. Nowadays different home décor items can be found, such as lamp shades, garden umbrellas and bed covers, and utility products like hand bags, wallets and files.
Rabari (Rajasthan and Gujarat):
This embroidery style is made by the Rabari or Rewari community of Rajasthan and Gujarat. This very colourful embroidery style, using stark contrast was traditionally used only for garments, but now it can be found on bags, accessories, home furnishings, etc.
Mirrors of all shapes and sizes are incorporated in the embroidery, as a result of the belief that mirrors protect from evil spirits. Designs include not only flowers and fruit and animals such as parrots and elephants, but also temples, women carrying pots, and the ubiquitous mango shape.
Shamilami (Manipur):
A combination of weaving and embroidery and was once a high status symbol.
Shisha or Mirrorwork (Gujarat, Haryana, Rajasthan):
This ornamentation method originated in Persia during the 13th century and involves little pieces of mirror in various sizes which are encased in the decoration of the fabric, first by interlacing threads and then with buttonhole stitch. Originally, pieces of mica were used as the mirrors, but later, people started using thin blown-glass pieces, hence the name, which in Hindi means "little glass". Until recently they were all irregular, made by hand, and used mercury; nowadays one can also find them machine-made and regularly shaped. It is usually found in combination with other types of stitches like cross stitch, buttonhole stitch and satin stitch, nowadays not only by hand but also by machine. Mirrorwork is very popular for cushion covers and bedcovers, purses and decorative hangings as well as in decorative borders in women's salwar-kameez and sari. Thousands of women from Kutch (Gujarat) and Sikar and Churu (Rajasthan) are engaged in hand embroidery work such as tie, mirror work and beads on fabric.
Shisha or Mirrorwork (Gujarat, Haryana, Rajasthan):
There are various types of Chikan work: Taipchi, Bakhia, Phunda, Murri, Jaali, Hathkati, Pechni, Ghas Patti, and Chaana Patti.
Toda embroidery:
The Toda embroidery has its origins in Tamil Nadu. The Nilgiri Hills, inhabited by the Toda community, have their own style called pugur, which means flower. This embroidery, like Kantha, is practiced by women.
The embroidery adorns the shawls. The shawl, called poothkuli, has red and black bands between which the embroidery is done. As Todas worship the buffaloes, buffalo becomes an important motif in the Toda embroidery among mettvi kaanpugur, Izhadvinpuguti and others. Stylized sun, moon, stars and the eye of the peacock feathers are used in Toda embroidery.
Zardozi or Zari or kalabattu:
The most opulent form of Indian embroidery is the Zari and the Zardozi or Zardosi, known since the late 16th century and brought to India by the Mughals. The word Zardozi comes from the two Persian words Zar (gold) and Dozi (embroidery). This form uses metallic thread.
Zardozi or Zari or kalabattu:
Once real gold and silver thread was used, on silk, brocade and velvet fabric. Metal ingots were melted and pressed through perforated steel sheets to convert into wires, which then were hammered to the required thinness. Plain wire is called 'badla', and when wound round a thread, it is called 'kasav'. Smaller spangles are called 'sitara' and tiny dots made of badla are called 'mukais' or 'mukesh'.
Zardozi or Zari or kalabattu:
Zardozi is either a synonym or a more elaborate version of zari where the gold or silver embroidery is embellished with pearls and precious stones, gota and kinari, making this art only affordable by rich people.
Nowadays Zardosi thread has a plastic core and a golden-coloured outside. The thread consists of coiled metal wires placed on the right side of the fabric and couched with a thinner thread.
**Sequence learning**
Sequence learning:
In cognitive psychology, sequence learning is inherent to human ability because it is an integrated part of conscious and nonconscious learning as well as activities. Sequences of information or sequences of actions are used in various everyday tasks: "from sequencing sounds in speech, to sequencing movements in typing or playing instruments, to sequencing actions in driving an automobile." Sequence learning can be used to study skill acquisition and in studies of various groups ranging from neuropsychological patients to infants. According to Ritter and Nerb, “The order in which material is presented can strongly influence what is learned, how fast performance increases, and sometimes even whether the material is learned at all.” Sequence learning, best known and understood as a form of explicit learning, is now also being studied as a form of implicit learning, alongside other forms of learning. Sequence learning can also be referred to as sequential behavior, behavior sequencing, and serial order in behavior.
History:
In the first half of the 20th century, Margaret Floy Washburn, John B. Watson, and other behaviorists believed behavioral sequencing to be governed by the reflex chain, which states that stimulation caused by an initial movement triggers an additional movement, which triggers another additional movement, and so on. In 1951, Karl Lashley, a neurophysiologist at Harvard University, published “The Problem of Serial Order in Behavior,” addressing the current beliefs about sequence learning and introducing his hypothesis. He criticized the previous view on the basis of six lines of evidence: The first line is that movements can occur even when sensory feedback is interrupted. The second is that some movement sequences occur too quickly for elements of the sequences to be triggered by feedback from the preceding elements. Next is that the errors in behavior suggest internal plans for what will be done later. Also, the time to initiate a movement sequence can increase with the length or complexity of the sequence. The next line is the properties of movements occurring early in a sequence can anticipate later features. Then lastly the neural activity can indicate preparation of upcoming behavior events, including upcoming behavior events in the relatively long-term future.
History:
Lashley argued that sequence learning, or behavioral sequencing or serial order in behavior, is not attributable to sensory feedback. Rather, he proposed that there are plans for behavior, since the nervous system prepares for some behaviors but not others. He said that there was a hierarchical organization of plans, and he offered several lines of evidence. The first is that context changes the functional interpretation of the same behaviors, such as the way “wright, right, right, rite, and write” are interpreted based on the context of the sentence. “Right” can be interpreted as a direction or as something good depending on the context. A second line of evidence is that errors in human behavior reflect hierarchical organization. In addition, “hierarchical organization of plans comes from the timing of behavioral sequences.” The larger the phrase, the longer the response time, which factors into “decoding” or “unpacking” hierarchical plans. Additional evidence is how easy or hard it is to learn a sequence. The mind can create a “memory for what is about to happen” as well as a “memory for what has happened.” The final evidence for the hierarchical organization of plans is characterized by "chunking", the skill of combining multiple units into larger units.
Types of sequence learning:
There are two broad categories of sequence learning—explicit and implicit—with subcategories. Explicit sequence learning has been known and studied since the discovery of sequence learning. However, recently, implicit sequence learning has gained more attention and research. As a form of implicit learning, implicit sequence learning concerns underlying learning methods of which people are unaware—in other words, learning without knowing. The exact properties and number of mechanisms of implicit learning are debated. Other forms of implicit sequence learning include motor sequence learning, temporal sequence learning, and associative sequence learning.
Sequence learning problems:
Sequence learning problems are used to better understand the different types of sequence learning. There are four basic sequence learning problems: sequence prediction, sequence generation, sequence recognition, and sequential decision making. These “problems” show how sequences are formulated. They show the patterns sequences follow and how these different sequence learning problems are related to each other.
Sequence learning problems:
Sequence prediction attempts to predict the next immediate element of a sequence based on all the preceding elements. Sequence generation is essentially the same as sequence prediction: an attempt to piece together a sequence one element at a time, the way it naturally occurs. Sequence recognition takes certain criteria and determines whether the sequence is legitimate. Sequential decision making, or sequence generation through actions, breaks down into three variations: goal-oriented, trajectory-oriented, and reinforcement-maximizing. All three variations aim to pick the action(s) or step(s) that will lead to the goal in the future. These sequence learning problems reflect the hierarchical organization of plans, because each element in a sequence builds on the previous elements.
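As a toy illustration of sequence prediction, the sketch below guesses the next element of a sequence from simple follower counts; it is not any model from the sequence learning literature, and the alphabet and training sequence are hypothetical.

```python
from collections import Counter, defaultdict

def train_follower_counts(sequence):
    """Count, for each element, which elements tend to follow it."""
    followers = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, current):
    """Sequence prediction: guess the next element from the preceding context
    (here reduced to the single most recent element)."""
    if current not in followers:
        return None
    return followers[current].most_common(1)[0][0]

# Hypothetical action sequence, e.g. keystrokes while typing.
history = list("abracadabra")
model = train_follower_counts(history)
print(predict_next(model, "a"))  # the most frequent follower of 'a' is 'b'
```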
Sequence learning problems:
In a classic experiment published in 1967, Alfred L. Yarbus demonstrated that though subjects viewing portraits reported apprehending the portrait as a whole, their eye movements successively fixated on the most informative parts of the image. These observations suggest that underlying an apparently parallel process of face perception, a serial oculomotor process is concealed. It is a common observation that when a skill is being acquired, we are more attentive in the initial phase, but after repeated practice, the skill becomes nearly automatic; this is also known as unconscious competence. We can then concentrate on learning a new action while performing previously learned actions skillfully. Thus, it appears that a neural code or representation for the learned skill is created in our brain, which is usually called procedural memory. The procedural memory encodes procedures or algorithms rather than facts.
Ongoing research:
There are many other areas of application for sequence learning. How humans learn sequential procedures has been a long-standing research problem in cognitive science and currently is a major topic in neuroscience. Research work has been going on in several disciplines, including artificial intelligence, neural networks, and engineering.
For a philosophical perspective, see Inductive reasoning and Problem of induction.
For a theoretical computer-science perspective, see Solomonoff's theory of inductive inference and Inductive programming.
For a mathematical perspective, see Extrapolation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sparse Fourier transform**
Sparse Fourier transform:
The sparse Fourier transform (SFT) is a kind of discrete Fourier transform (DFT) for handling big data signals. Specifically, it is used in GPS synchronization, spectrum sensing and analog-to-digital converters. The fast Fourier transform (FFT) plays an indispensable role in many scientific domains, especially in signal processing, and has been described as one of the top-10 algorithms of the twentieth century. However, with the advent of the big data era, the FFT still needs to be improved in order to save computing power. Recently, the sparse Fourier transform (SFT) has gained a considerable amount of attention, as it performs well when analyzing long sequences of data with few signal components.
Definition:
Consider a sequence $x_n$ of complex numbers. By Fourier series, $x_n$ can be written as $x_n = (F^{*}X)_n = \sum_{k=0}^{N-1} X_k\, e^{j\frac{2\pi}{N}kn}$.
Similarly, $X_k$ can be represented as $X_k = \frac{1}{N}(Fx)_k = \frac{1}{N}\sum_{n=0}^{N-1} x_n\, e^{-j\frac{2\pi}{N}kn}$.
Hence, from the equations above, the mapping is $F:\mathbb{C}^N \to \mathbb{C}^N$. Single frequency recovery: Assume only a single frequency exists in the sequence. In order to recover this frequency from the sequence, it is reasonable to utilize the relationship between adjacent points of the sequence.
Phase encoding: The frequency $k$ can be obtained from the ratio of adjacent points of the sequence. For a single frequency $x_n = X_k e^{j\frac{2\pi}{N}kn}$, the ratio is $\frac{x_{n+1}}{x_n} = e^{j\frac{2\pi}{N}k} = \cos\!\left(\frac{2\pi k}{N}\right) + j\sin\!\left(\frac{2\pi k}{N}\right)$, so the phase of this ratio encodes $k$.
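A minimal numpy sketch of this single-frequency, phase-encoding recovery, assuming noiseless samples and a frequency below N/2 so that the measured phase does not wrap; the particular values of N, k and the coefficient are arbitrary:

```python
import numpy as np

N, k, X_k = 256, 37, 2.0 + 1.5j           # hypothetical length, frequency and coefficient
n = np.arange(N)
x = X_k * np.exp(2j * np.pi * k * n / N)   # a sequence containing a single frequency

# Phase encoding: the ratio of adjacent samples is exp(j*2*pi*k/N),
# so its phase angle encodes the frequency k.
ratio = x[1] / x[0]
k_recovered = np.angle(ratio) * N / (2 * np.pi)
print(round(k_recovered))                  # 37
```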
Notice that the sequence $x_n \in \mathbb{C}^N$. An aliasing-based search: Seeking the frequency $k$ can also be done with the Chinese remainder theorem (CRT). Take $k = 104{,}134$ as an example, and choose three relatively prime sub-sampling lengths 100, 101, and 103. The aliased frequencies are then the residues $104{,}134 \equiv 34 \pmod{100}$, $104{,}134 \equiv 3 \pmod{101}$, and $104{,}134 \equiv 1 \pmod{103}$.
Definition:
By the CRT, these residues determine $104{,}134$ uniquely modulo $100 \times 101 \times 103 = 1{,}040{,}300$, so the frequency can be reconstructed from its aliases. Randomly binning frequencies: Now we wish to handle the case of multiple frequencies instead of a single one. Adjacent frequencies can be separated using the scaling property $c$ and the modulation property $b$: by randomly choosing the parameters $c$ and $b$, the distribution of all frequencies becomes almost uniform. By randomly binning frequencies in this way, the single-frequency recovery can be used to seek the main components.
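A small sketch of the aliasing-based reconstruction in this example, implementing the CRT directly (it relies on Python 3.8+ for the modular inverse via pow); the moduli 100, 101 and 103 and the frequency 104,134 are the values from the text:

```python
def crt(residues, moduli):
    """Chinese remainder theorem by successive substitution (moduli pairwise coprime)."""
    x, m = 0, 1
    for r, p in zip(residues, moduli):
        t = ((r - x) * pow(m, -1, p)) % p  # pow(m, -1, p): modular inverse, Python 3.8+
        x += m * t
        m *= p
    return x % m

k_true = 104_134
moduli = [100, 101, 103]                    # pairwise coprime sub-sampling lengths
residues = [k_true % p for p in moduli]     # the aliased frequencies: [34, 3, 1]

print(residues)                             # [34, 3, 1]
print(crt(residues, moduli))                # 104134, since 104134 < 100*101*103 = 1,040,300
```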
Definition:
$x'_n = X_k\, e^{j\frac{2\pi}{N}(c\cdot k + b)\,n}$, where $c$ is the scaling property and $b$ is the modulation property.
By randomly choosing $c$ and $b$, the whole spectrum can be made to look approximately uniformly distributed. The signal is then passed through a bank of filters (for example Gaussians, indicator functions, spike trains, or Dolph-Chebyshev filters) that separates the frequencies, so that each bank contains only a single frequency.
The prototypical SFT:
Generally, all SFT algorithms follow three stages. Identifying frequencies: by randomly binning frequencies, all components can be separated; they are then passed through filter banks so that each band contains only a single frequency, and the single-frequency recovery methods mentioned above can be used to identify it.
Estimating coefficients: After identifying frequencies, we will have many frequency components. We can use the Fourier transform to estimate their coefficients: $X_{k'} = \frac{1}{L}\sum_{\ell=1}^{L} x_{n'}\, e^{-j\frac{2\pi}{N} n' \ell}$.
Repeating: Finally, by repeating these two stages we can extract the most important components from the original signal:
$x_n - \sum_{k'=1}^{k} X_{k'}\, e^{j\frac{2\pi}{N} k' n}$.
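The following is a deliberately naive sketch of this identify-estimate-subtract loop; it uses a full FFT as a stand-in for the frequency-identification stage, so it is not sublinear and is not any published SFT algorithm, but it illustrates the three stages on a hypothetical sparse signal.

```python
import numpy as np

def naive_peeling_sft(x, k):
    """Identify a dominant frequency, estimate its coefficient, subtract it out,
    and repeat -- the three stages of the prototypical SFT.  A full FFT stands in
    for the identification stage, so this sketch is NOT sublinear; real SFT
    algorithms identify frequencies via random binning and filter banks instead."""
    N = len(x)
    n = np.arange(N)
    residual = np.asarray(x, dtype=complex).copy()
    recovered = {}
    for _ in range(k):
        spectrum = np.fft.fft(residual) / N                    # identify a frequency
        f = int(np.argmax(np.abs(spectrum)))
        coeff = spectrum[f]                                    # estimate its coefficient
        recovered[f] = recovered.get(f, 0) + coeff
        residual -= coeff * np.exp(2j * np.pi * f * n / N)     # subtract and repeat
    return recovered

# Hypothetical 3-sparse signal of length 1024.
N = 1024
n = np.arange(N)
true = {17: 5.0, 290: 2.0 - 1.0j, 801: 0.5j}
x = sum(c * np.exp(2j * np.pi * f * n / N) for f, c in true.items())
print(naive_peeling_sft(x, 3))   # recovers the three (frequency, coefficient) pairs
```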
Sparse Fourier transform in the discrete setting:
In 2012, Hassanieh, Indyk, Katabi, and Price proposed an algorithm that takes $O(k \log n \, \log(n/k))$ samples and runs in the same running time.
Sparse Fourier transform in the high dimensional setting:
In 2014, Indyk and Kapralov proposed an algorithm that takes a nearly optimal number of samples and runs in nearly linear time in $n$. In 2016, Kapralov proposed an algorithm that uses a sublinear number of samples and sublinear decoding time of order $k \log^{O(d)} n$. In 2019, Nakos, Song, and Wang introduced a new algorithm which uses a nearly optimal number of samples and requires nearly linear decoding time. A dimension-incremental algorithm was proposed by Potts and Volkmer based on sampling along rank-1 lattices.
Sparse Fourier transform in the continuous setting:
There are several works about generalizing the discrete setting into the continuous setting.
Implementations:
There are several implementations developed at MIT, MSU, ETH and the University of Technology Chemnitz (TUC). They are freely available online.
MSU implementations; ETH implementations; MIT implementations (GitHub); TUC implementations | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Morpheus (1998 video game)**
Morpheus (1998 video game):
Morpheus is an American computer game released in 1998.
Gameplay:
The game is a first-person adventure game similar to Myst, with a point-and-click interface; however, the player may also pan around a location by clicking and dragging the mouse. Clicking the mouse to move in a certain direction plays a transition video showing the player's movement.
Reception:
Morpheus became a hit in Spain, with sales of 50,000 units in that region. Bob Mandel of The Adrenaline Vault gave the game the "Seal of Excellence". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UDP-2,3-diacetamido-2,3-dideoxyglucuronic acid 2-epimerase**
UDP-2,3-diacetamido-2,3-dideoxyglucuronic acid 2-epimerase:
UDP-2,3-diacetamido-2,3-dideoxyglucuronic acid 2-epimerase (EC 5.1.3.23, UDP-GlcNAc3NAcA 2-epimerase, UDP-alpha-D-GlcNAc3NAcA 2-epimerase, 2,3-diacetamido-2,3-dideoxy-alpha-D-glucuronic acid 2-epimerase, WbpI, WlbD) is an enzyme with the systematic name 2,3-diacetamido-2,3-dideoxy-alpha-D-glucuronate 2-epimerase. This enzyme catalyses the following chemical reaction: UDP-2,3-diacetamido-2,3-dideoxy-alpha-D-glucuronate ⇌ UDP-2,3-diacetamido-2,3-dideoxy-alpha-D-mannuronate. This enzyme participates in the biosynthetic pathway for UDP-alpha-D-ManNAc3NAcA. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ATLAS experiment**
ATLAS experiment:
ATLAS is the largest general-purpose particle detector experiment at the Large Hadron Collider (LHC), a particle accelerator at CERN (the European Organization for Nuclear Research) in Switzerland. The experiment is designed to take advantage of the unprecedented energy available at the LHC and observe phenomena that involve highly massive particles which were not observable using earlier lower-energy accelerators. ATLAS was one of the two LHC experiments involved in the discovery of the Higgs boson in July 2012. It was also designed to search for evidence of theories of particle physics beyond the Standard Model.
ATLAS experiment:
The experiment is a collaboration involving 6,003 members, out of which 3,822 are physicists (last update: June 26, 2022) from 257 institutions in 42 countries.
History:
Particle accelerator growth The first cyclotron, an early type of particle accelerator, was built by Ernest O. Lawrence in 1931, with a radius of just a few centimetres and a particle energy of 1 megaelectronvolt (MeV). Since then, accelerators have grown enormously in the quest to produce new particles of greater and greater mass. As accelerators have grown, so too has the list of known particles that they might be used to investigate.
History:
ATLAS Collaboration The ATLAS Collaboration, the international group of physicists belonging to different universities and research centres who built and run the detector, was formed in 1992 when the proposed EAGLE (Experiment for Accurate Gamma, Lepton and Energy Measurements) and ASCOT (Apparatus with Super Conducting Toroids) collaborations merged their efforts to build a single, general-purpose particle detector for a new particle accelerator, the Large Hadron Collider. At present, the ATLAS Collaboration involves 5,767 members, out of which 2,646 are physicists (last census: September 9, 2021) from 180 institutions in 40 countries.
History:
Detector design and construction The design was a combination of two previous projects for LHC, EAGLE and ASCOT, and also benefitted from the detector research and development that had been done for the Superconducting Super Collider, a US project interrupted in 1993. The ATLAS experiment was proposed in its current form in 1994, and officially funded by the CERN member countries in 1995. Additional countries, universities, and laboratories have joined in subsequent years. Construction work began at individual institutions, with detector components then being shipped to CERN and assembled in the ATLAS experiment pit starting in 2003.
History:
Detector operation Construction was completed in 2008 and the experiment detected its first single proton beam events on 10 September of that year.
History:
Data-taking was then interrupted for over a year due to an LHC magnet quench incident. On 23 November 2009, the first proton–proton collisions occurred at the LHC and were recorded by ATLAS, at a relatively low injection energy of 900 GeV in the center of mass of the collision. Since then, the LHC energy has been increasing: 1.8 TeV at the end of 2009, 7 TeV for the whole of 2010 and 2011, then 8 TeV in 2012. The first data-taking period performed between 2010 and 2012 is referred to as Run I. After a long shutdown (LS1) in 2013 and 2014, in 2015 ATLAS saw 13 TeV collisions.
History:
The second data-taking period, Run II, was completed, always at 13 TeV energy, at the end of 2018 with a recorded integrated luminosity of nearly 140 fb−1 (inverse femtobarn). A second long shutdown (LS2) in 2019-22 with upgrades to the ATLAS detector was followed by Run III, which started in July 2022.
Leadership The ATLAS Collaboration is currently led by Spokesperson Andreas Hoecker and Deputy Spokespersons Marumi Kado and Manuella Vincter. Former Spokespersons have been:
Experimental program:
In the field of particle physics, ATLAS studies different types of processes detected or detectable in energetic collisions at the Large Hadron Collider (LHC). For the processes already known, it is a matter of measuring more and more accurately the properties of known particles or finding quantitative confirmations of the Standard model. Processes not observed so far would allow, if detected, to discover new particles or to have confirmation of physical theories that go beyond the Standard model.
Experimental program:
Standard Model The Standard model of particle physics is the theory describing three of the four known fundamental forces (the electromagnetic, weak, and strong interactions, while omitting gravity) in the universe, as well as classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists around the world, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, confirmation of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Experimental program:
Although the Standard model is believed to be theoretically self-consistent and has demonstrated huge successes in providing experimental predictions, it leaves some phenomena unexplained and falls short of being a complete theory of fundamental interactions. It does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the accelerating expansion of the universe as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses.
Experimental program:
Precision measurements With the important exception of the Higgs boson, detected by the ATLAS and the CMS experiments in 2012, all of the particles predicted by the Standard Model had been observed by previous experiments. In this field, in addition to the discovery of the Higgs boson, the experimental work of ATLAS has focused on precision measurements, aimed at determining with ever greater accuracy the many physical parameters of theory.
Experimental program:
In particular, for the Higgs boson, the W and Z bosons, and the top and bottom quarks, ATLAS measures: masses; production channels, decay channels and mean lifetimes; and interaction mechanisms and coupling constants for the electroweak and strong interactions. For example, the data collected by ATLAS made it possible in 2018 to measure the mass of the W boson, one of the two mediators of the weak interaction, as (80370 ± 19) MeV, a relative measurement uncertainty of about 2.4 × 10⁻⁴.
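A one-line arithmetic check of the relative precision quoted above, using only the two numbers from the text:

```python
# Relative uncertainty of the quoted W-boson mass measurement: 19 MeV out of 80370 MeV.
print(19 / 80370)   # ~2.4e-4, i.e. roughly 0.24 per mille
```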
Experimental program:
Higgs boson One of the most important goals of ATLAS was to investigate a missing piece of the Standard Model, the Higgs boson. The Higgs mechanism, which includes the Higgs boson, gives mass to elementary particles, leading to differences between the weak force and electromagnetism by giving the W and Z bosons mass while leaving the photon massless.
Experimental program:
On July 4, 2012, ATLAS — together with CMS, its sister experiment at the LHC — reported evidence for the existence of a particle consistent with the Higgs boson at a confidence level of 5 sigma, with a mass around 125 GeV, or 133 times the proton mass. This new "Higgs-like" particle was detected by its decay into two photons ( H→γγ ) and its decay to four leptons ( H→ZZ∗→4l and H→WW∗→eνμν ).
Experimental program:
In March 2013, in the light of the updated ATLAS and CMS results, CERN announced that the new particle was indeed a Higgs boson. The experiments were also able to show that the properties of the particle as well as the ways it interacts with other particles were well-matched with those of a Higgs boson, which is expected to have spin 0 and positive parity. Analysis of more properties of the particle and data collected in 2015 and 2016 confirmed this further. In October 2013, two of the theoretical physicists who predicted the existence of the Standard Model Higgs boson, Peter Higgs and François Englert, were awarded the Nobel Prize in Physics.
Experimental program:
Top quark properties The properties of the top quark, discovered at Fermilab in 1995, had been measured approximately. With much greater energy and greater collision rates, the LHC produces a tremendous number of top quarks, allowing ATLAS to make much more precise measurements of its mass and interactions with other particles. These measurements provide indirect information on the details of the Standard Model, with the possibility of revealing inconsistencies that point to new physics.
Experimental program:
Beyond the Standard Model While the Standard Model predicts that quarks, leptons and neutrinos should exist, it does not explain why the masses of these particles are so different (they differ by orders of magnitude). Furthermore, according to the Standard Model the mass of the neutrinos should be exactly zero, like that of the photon. Instead, neutrinos have mass: in 1998, results from the Super-Kamiokande detector determined that neutrinos can oscillate from one flavor to another, which requires that their mass be non-zero. For these and other reasons, many particle physicists believe it is possible that the Standard Model will break down at energies at the teraelectronvolt (TeV) scale or higher. Most alternative theories, such as the Grand Unified Theories (GUTs) and those including Supersymmetry (SUSY), predict the existence of new particles with masses greater than those of the Standard Model particles.
Experimental program:
Supersymmetry Most of the currently proposed theories predict new higher-mass particles, some of which may be light enough to be observed by ATLAS. Models of supersymmetry involve new, highly massive particles. In many cases these decay into high-energy quarks and stable heavy particles that are very unlikely to interact with ordinary matter. The stable particles would escape the detector, leaving as a signal one or more high-energy quark jets and a large amount of "missing" momentum. Other hypothetical massive particles, like those in the Kaluza–Klein theory, might leave a similar signature. The data collected up to the end of LHC Run II do not show evidence of supersymmetric or unexpected particles; the search for them will continue in the data collected from Run III onwards.
Experimental program:
CP violation The asymmetry between the behavior of matter and antimatter, known as CP violation, is also being investigated. Recent experiments dedicated to measurements of CP violation, such as BaBar and Belle, have not detected sufficient CP violation in the Standard Model to explain the lack of detectable antimatter in the universe. It is possible that new models of physics will introduce additional CP violation, shedding light on this problem. Evidence supporting these models might either be detected directly by the production of new particles, or indirectly by measurements of the properties of B- and D-mesons. LHCb, an LHC experiment dedicated to B-mesons, is likely to be better suited to the latter.
Experimental program:
Microscopic black holes Some hypotheses, based on the ADD model, involve large extra dimensions and predict that micro black holes could be formed by the LHC. These would decay immediately by means of Hawking radiation, producing all particles in the Standard Model in equal numbers and leaving an unequivocal signature in the ATLAS detector.
ATLAS detector:
The ATLAS detector is 46 metres long, 25 metres in diameter, and weighs about 7,000 tonnes; it contains some 3,000 km of cable. At 27 km in circumference, the Large Hadron Collider (LHC) at CERN collides two beams of protons together, with each proton carrying up to 6.8 TeV of energy – enough to produce particles with masses significantly greater than any particles currently known, if these particles exist. When the proton beams produced by the Large Hadron Collider interact in the center of the detector, a variety of different particles with a broad range of energies are produced.
ATLAS detector:
General-purpose requirements The ATLAS detector is designed to be general-purpose. Rather than focusing on a particular physical process, ATLAS is designed to measure the broadest possible range of signals. This is intended to ensure that whatever form any new physical processes or particles might take, ATLAS will be able to detect them and measure their properties: masses, momenta, energies, lifetimes, charges, and spins.
ATLAS detector:
Experiments at earlier colliders, such as the Tevatron and the Large Electron–Positron Collider, were also designed for general-purpose detection. However, the beam energy and the extremely high rate of collisions require ATLAS to be significantly larger and more complex than previous experiments, presenting challenges unique to the Large Hadron Collider.
ATLAS detector:
Layered design In order to identify all particles produced at the interaction point where the particle beams collide, the detector is designed in layers made up of detectors of different types, each of which is designed to observe specific types of particles. The different traces that particles leave in each layer of the detector allow for effective particle identification and accurate measurements of energy and momentum. (The role of each layer in the detector is discussed below.) As the energy of the particles produced by the accelerator increases, the detectors attached to it must grow to effectively measure and stop higher-energy particles. As of 2022, the ATLAS detector is the largest ever built at a particle collider.
ATLAS detector:
Detector systems The ATLAS detector consists of a series of ever-larger concentric cylinders around the interaction point where the proton beams from the LHC collide. Maintaining detector performance in the high radiation areas immediately surrounding the proton beams is a significant engineering challenge. The detector can be divided into four major systems: Inner Detector; Calorimeters; Muon Spectrometer; Magnet system. Each of these is in turn made of multiple layers. The detectors are complementary: the Inner Detector tracks particles precisely, the calorimeters measure the energy of easily stopped particles, and the muon system makes additional measurements of highly penetrating muons. The two magnet systems bend charged particles in the Inner Detector and the Muon Spectrometer, allowing their electric charges and momenta to be measured.
ATLAS detector:
The only established stable particles that cannot be detected directly are neutrinos; their presence is inferred by measuring a momentum imbalance among detected particles. For this to work, the detector must be "hermetic", meaning it must detect all non-neutrinos produced, with no blind spots.
The installation of all the above detector systems was finished in August 2008. The detectors collected millions of cosmic rays during the magnet repairs which took place between fall 2008 and fall 2009, prior to the first proton collisions. The detector operated with close to 100% efficiency and provided performance characteristics very close to its design values.
ATLAS detector:
Inner Detector The Inner Detector begins a few centimetres from the proton beam axis, extends to a radius of 1.2 metres, and is 6.2 metres in length along the beam pipe. Its basic function is to track charged particles by detecting their interaction with material at discrete points, revealing detailed information about the types of particles and their momentum. The Inner Detector has three parts, which are explained below.
ATLAS detector:
The magnetic field surrounding the entire inner detector causes charged particles to curve; the direction of the curve reveals a particle's charge and the degree of curvature reveals its momentum. The starting points of the tracks yield useful information for identifying particles; for example, if a group of tracks seem to originate from a point other than the original proton–proton collision, this may be a sign that the particles came from the decay of a hadron with a bottom quark (see b-tagging).
ATLAS detector:
Pixel Detector The Pixel Detector, the innermost part of the detector, contains four concentric layers and three disks on each end-cap, with a total of 1,744 modules, each measuring 2 centimetres by 6 centimetres. The detecting material is 250 µm thick silicon. Each module contains 16 readout chips and other electronic components. The smallest unit that can be read out is a pixel (50 by 400 micrometres); there are roughly 47,000 pixels per module.
ATLAS detector:
The minute pixel size is designed for extremely precise tracking very close to the interaction point. In total, the Pixel Detector has over 92 million readout channels, which is about 50% of the total readout channels of the whole detector. Having such a large count created a considerable design and engineering challenge. Another challenge was the radiation to which the Pixel Detector is exposed because of its proximity to the interaction point, requiring that all components be radiation hardened in order to continue operating after significant exposures.
ATLAS detector:
Semi-Conductor Tracker The Semi-Conductor Tracker (SCT) is the middle component of the inner detector. It is similar in concept and function to the Pixel Detector but with long, narrow strips rather than small pixels, making coverage of a larger area practical. Each strip measures 80 micrometres by 12 centimetres. The SCT is the most critical part of the inner detector for basic tracking in the plane perpendicular to the beam, since it measures particles over a much larger area than the Pixel Detector, with more sampled points and roughly equal (albeit one-dimensional) accuracy. It is composed of four double layers of silicon strips, and has 6.3 million readout channels and a total area of 61 square meters.
ATLAS detector:
Transition Radiation Tracker The Transition Radiation Tracker (TRT), the outermost component of the inner detector, is a combination of a straw tracker and a transition radiation detector. The detecting elements are drift tubes (straws), each four millimetres in diameter and up to 144 centimetres long. The uncertainty of track position measurements (position resolution) is about 200 micrometres. This is not as precise as those for the other two detectors, but it was necessary to reduce the cost of covering a larger volume and to have transition radiation detection capability. Each straw is filled with gas that becomes ionized when a charged particle passes through. The straws are held at about −1,500 V, driving the negative ions to a fine wire down the centre of each straw, producing a current pulse (signal) in the wire. The wires with signals create a pattern of 'hit' straws that allow the path of the particle to be determined. Between the straws, materials with widely varying indices of refraction cause ultra-relativistic charged particles to produce transition radiation and leave much stronger signals in some straws. Xenon and argon gas is used to increase the number of straws with strong signals. Since the amount of transition radiation is greatest for highly relativistic particles (those with a speed very near the speed of light), and because particles of a particular energy have a higher speed the lighter they are, particle paths with many very strong signals can be identified as belonging to the lightest charged particles: electrons and their antiparticles, positrons. The TRT has about 298,000 straws in total.
ATLAS detector:
Calorimeters The calorimeters are situated outside the solenoidal magnet that surrounds the Inner Detector. Their purpose is to measure the energy from particles by absorbing it. There are two basic calorimeter systems: an inner electromagnetic calorimeter and an outer hadronic calorimeter. Both are sampling calorimeters; that is, they absorb energy in high-density metal and periodically sample the shape of the resulting particle shower, inferring the energy of the original particle from this measurement.
ATLAS detector:
Electromagnetic calorimeter The electromagnetic (EM) calorimeter absorbs energy from particles that interact electromagnetically, which include charged particles and photons. It has high precision, both in the amount of energy absorbed and in the precise location of the energy deposited. The angle between the particle's trajectory and the detector's beam axis (or more precisely the pseudorapidity) and its angle within the perpendicular plane are both measured to within roughly 0.025 radians. The barrel EM calorimeter has accordion shaped electrodes and the energy-absorbing materials are lead and stainless steel, with liquid argon as the sampling material, and a cryostat is required around the EM calorimeter to keep it sufficiently cool.
ATLAS detector:
Hadron calorimeter The hadron calorimeter absorbs energy from particles that pass through the EM calorimeter, but do interact via the strong force; these particles are primarily hadrons. It is less precise, both in energy magnitude and in the localization (within about 0.1 radians only). The energy-absorbing material is steel, with scintillating tiles that sample the energy deposited. Many of the features of the calorimeter are chosen for their cost-effectiveness; the instrument is large and comprises a huge amount of construction material: the main part of the calorimeter – the tile calorimeter – is 8 metres in diameter and covers 12 metres along the beam axis. The far-forward sections of the hadronic calorimeter are contained within the forward EM calorimeter's cryostat, and use liquid argon as well, while copper and tungsten are used as absorbers.
ATLAS detector:
Muon Spectrometer The Muon Spectrometer is an extremely large tracking system, consisting of three parts: A magnetic field provided by three toroidal magnets; A set of 1200 chambers measuring with high spatial precision the tracks of the outgoing muons; A set of triggering chambers with accurate time-resolution.The extent of this sub-detector starts at a radius of 4.25 m close to the calorimeters out to the full radius of the detector (11 m). Its tremendous size is required to accurately measure the momentum of muons, which first go through all the other elements of the detector before reaching the muon spectrometer. It was designed to measure, standalone, the momentum of 100 GeV muons with 3% accuracy and of 1 TeV muons with 10% accuracy. It was vital to go to the lengths of putting together such a large piece of equipment because a number of interesting physical processes can only be observed if one or more muons are detected, and because the total energy of particles in an event could not be measured if the muons were ignored. It functions similarly to the Inner Detector, with muons curving so that their momentum can be measured, albeit with a different magnetic field configuration, lower spatial precision, and a much larger volume. It also serves the function of simply identifying muons – very few particles of other types are expected to pass through the calorimeters and subsequently leave signals in the Muon Spectrometer. It has roughly one million readout channels, and its layers of detectors have a total area of 12,000 square meters.
ATLAS detector:
Magnet System The ATLAS detector uses two large superconducting magnet systems to bend the trajectories of charged particles so that their momenta can be measured. This bending is due to the Lorentz force, whose modulus is proportional to the electric charge $q$ of the particle, to its speed $v$ and to the intensity $B$ of the magnetic field: $F = qvB$.
Since all particles produced in the LHC's proton collisions travel very close to the speed of light in vacuum ($v \simeq c$), the Lorentz force is about the same for all particles with the same electric charge $q$: $F \simeq qcB$.
The radius of curvature $r$ due to the Lorentz force is $r = \frac{p}{qB}$,
where $p = \gamma m v$ is the relativistic momentum of the particle. As a result, high-momentum particles curve very little (large $r$), while low-momentum particles curve significantly (small $r$). The amount of curvature can be quantified, and the particle momentum can be determined from this value.
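As a rough numerical illustration of r = p/(qB), here is a short sketch using the common shortcut p [GeV/c] ≈ 0.3 · B [T] · r [m] for a singly charged particle; the momenta chosen below are illustrative assumptions, not ATLAS specifications.

```python
# Radius of curvature r = p/(qB) for a singly charged particle, using the common
# shortcut p [GeV/c] = 0.3 * B [T] * r [m].
def radius_m(p_gev, b_tesla):
    return p_gev / (0.3 * b_tesla)

# Illustrative momenta (not ATLAS specifications) in a 2 T solenoid field:
print(radius_m(0.4, 2.0))   # ~0.67 m: a 400 MeV particle curls up inside the tracker
print(radius_m(10.0, 2.0))  # ~16.7 m: a 10 GeV particle is bent only slightly
```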
ATLAS detector:
Solenoid Magnet The inner solenoid produces a two tesla magnetic field surrounding the Inner Detector. This high magnetic field allows even very energetic particles to curve enough for their momentum to be determined, and its nearly uniform direction and strength allow measurements to be made very precisely. Particles with momenta below roughly 400 MeV will be curved so strongly that they will loop repeatedly in the field and most likely not be measured; however, this energy is very small compared to the several TeV of energy released in each proton collision.
ATLAS detector:
Toroid Magnets The outer toroidal magnetic field is produced by eight very large air-core superconducting barrel loops and two smaller end-cap air toroidal magnets, for a total of 24 barrel loops, all situated outside the calorimeters and within the muon system. This magnetic field extends over a region 26 metres long and 20 metres in diameter, and it stores 1.6 gigajoules of energy. Its magnetic field is not uniform, because a solenoid magnet of sufficient size would be prohibitively expensive to build. Its bending power varies between 2 and 8 tesla-metres.
ATLAS detector:
Forward detectors The ATLAS detector is complemented by a set of four sub-detectors in the forward region to measure particles at very small angles.
ATLAS detector:
LUCID (LUminosity Cherenkov Integrating Detector) is the first of these detectors designed to measure luminosity, and located in the ATLAS cavern at 17 m from the interaction point between the two muon endcaps; ZDC (Zero Degree Calorimeter) is designed to measure neutral particles on-axis to the beam, and located at 140 m from the IP in the LHC tunnel where the two beams are split back into separate beam pipes; AFP (Atlas Forward Proton) is designed to tag diffractive events, and located at 204 m and 217 m; ALFA (Absolute Luminosity For ATLAS) is designed to measure elastic proton scattering located at 240 m just before the bending magnets of the LHC arc.
ATLAS detector:
Data systems Data generation Earlier particle detector read-out and event detection systems were based on parallel shared buses such as VMEbus or FASTBUS. Since such a bus architecture cannot keep up with the data requirements of the LHC detectors, all the ATLAS data acquisition systems rely on high-speed point-to-point links and switching networks. Even with advanced electronics for data reading and storage, the ATLAS detector generates too much raw data to read out or store everything: about 25 MB per raw event, multiplied by 40 million beam crossings per second (40 MHz) in the center of the detector. This produces a total of 1 petabyte of raw data per second. By not writing out the empty segments of each event (zero suppression), which contain no physical information, the average size of an event is reduced to 1.6 MB, for a total of 64 terabytes of data per second.
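The quoted rates follow from simple arithmetic on the per-event sizes and the bunch-crossing rate given above; a two-line check:

```python
# Back-of-the-envelope check of the data rates quoted above.
bunch_crossing_rate = 40e6        # 40 MHz
raw_event_size = 25e6             # ~25 MB per raw event
zero_suppressed_size = 1.6e6      # ~1.6 MB per event after zero suppression

print(bunch_crossing_rate * raw_event_size / 1e15)        # ~1.0 petabyte of raw data per second
print(bunch_crossing_rate * zero_suppressed_size / 1e12)  # ~64 terabytes per second after zero suppression
```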
ATLAS detector:
Trigger system The trigger system uses fast event reconstruction to identify, in real time, the most interesting events to retain for detailed analysis. In the second data-taking period of the LHC, Run-2, there were two distinct trigger levels: The Level 1 trigger (L1), implemented in custom hardware at the detector site. The decision to save or reject an event is made in less than 2.5 μs. It uses reduced-granularity information from the calorimeters and the muon spectrometer, and reduces the rate of events in the read-out from 40 MHz to 100 kHz. The L1 rejection factor is therefore equal to 400.
ATLAS detector:
The High Level Trigger (HLT), implemented in software, uses a computing farm of approximately 40,000 CPUs. In order to decide which of the 100,000 events per second coming from L1 to save, specific analyses of each collision are carried out within 200 μs. The HLT uses limited regions of the detector, so-called Regions of Interest (RoI), to be reconstructed with the full detector granularity, including tracking, and allows matching of energy deposits to tracks. The HLT rejection factor is 100: after this step, the rate of events is reduced from 100 kHz to 1 kHz. The remaining data, corresponding to about 1,000 events per second, are stored for further analyses.
ATLAS detector:
Analysis process ATLAS permanently records more than 10 petabytes of data per year.
ATLAS detector:
Offline event reconstruction is performed on all permanently stored events, turning the pattern of signals from the detector into physics objects, such as jets, photons, and leptons. Grid computing is being used extensively for event reconstruction, allowing the parallel use of university and laboratory computer networks throughout the world for the CPU-intensive task of reducing large quantities of raw data into a form suitable for physics analysis. The software for these tasks has been under development for many years, and refinements are ongoing, even after data collection has begun.
ATLAS detector:
Individuals and groups within the collaboration are continuously writing their own code to perform further analyses of these objects, searching the patterns of detected particles for particular physical models or hypothetical particles. This activity requires processing 25 petabytes of data per week.
Trivia:
The researcher pictured for scale in the famous ATLAS detector image is Roger Ruber, a researcher from Uppsala University, Sweden. Ruber, one of the researchers responsible for the ATLAS detector's central cryostat magnet, was inspecting the magnets in the LHC tunnel at the same time Maximilien Brice, the photographer, was setting up to photograph the ATLAS detector. Brice asked Ruber to stand at the base of the detector to illustrate the scale of the ATLAS detector. This was revealed by Maximilien Brice, and confirmed by Roger Ruber during interviews in 2020 with Rebecca Smethurst of the University of Oxford. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MAC service data unit**
MAC service data unit:
MAC service data unit (media access control service data unit, MSDU) is the service data unit that is received from the logical link control (LLC) sub-layer which lies above the media access control (MAC) sub-layer in a protocol stack. The LLC and MAC sub-layers are collectively referred to as the data link layer (DLL). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Clausius–Mossotti relation**
Clausius–Mossotti relation:
In electromagnetism, the Clausius–Mossotti relation, named for O. F. Mossotti and Rudolf Clausius, expresses the dielectric constant (relative permittivity, $\varepsilon_r$) of a material in terms of the atomic polarizability, $\alpha$, of the material's constituent atoms and/or molecules, or a homogeneous mixture thereof. It is equivalent to the Lorentz–Lorenz equation, which relates the refractive index (rather than the dielectric constant) of a substance to its polarizability. It may be expressed as $\frac{\varepsilon_r - 1}{\varepsilon_r + 2} = \frac{N\alpha}{3\varepsilon_0}$, where $\varepsilon_r = \varepsilon/\varepsilon_0$ is the dielectric constant of the material, which for non-magnetic materials is equal to $n^2$, where $n$ is the refractive index; $\varepsilon_0$ is the permittivity of free space; $N$ is the number density of the molecules (number per cubic meter); and $\alpha$ is the molecular polarizability in SI units [C·m²/V]. In the case that the material consists of a mixture of two or more species, the right-hand side of the above equation consists of the sum of the molecular polarizability contributions from each species, indexed by $i$: $\frac{\varepsilon_r - 1}{\varepsilon_r + 2} = \frac{1}{3\varepsilon_0}\sum_i N_i \alpha_i$. In the CGS system of units the Clausius–Mossotti relation is typically rewritten to show the molecular polarizability volume $\alpha' = \frac{\alpha}{4\pi\varepsilon_0}$, which has units of volume [m³]. Confusion may arise from the practice of using the shorter name "molecular polarizability" for both $\alpha$ and $\alpha'$ within literature intended for the respective unit system.
Clausius–Mossotti relation:
The Clausius-Mossotti relation assumes only an induced dipole relevant to its polarizability and is thus inapplicable for substances with a significant permanent dipole. It is applicable to gases such as N2, CO2, CH4 and H2 at sufficiently low densities and pressures. For example, the Clausius-Mossotti relation is accurate for N2 gas up to 1000 atm between 25 °C and 125 °C. Moreover, the Clausius-Mossotti relation may be applicable to substances if the applied electric field is at sufficiently high frequencies that any permanent dipole modes are inactive.
Lorentz–Lorenz equation:
The Lorentz–Lorenz equation is similar to the Clausius–Mossotti relation, except that it relates the refractive index (rather than the dielectric constant) of a substance to its polarizability. The Lorentz–Lorenz equation is named after the Danish mathematician and scientist Ludvig Lorenz, who published it in 1869, and the Dutch physicist Hendrik Lorentz, who discovered it independently in 1878.
The most general form of the Lorentz–Lorenz equation is (in Gaussian-CGS units) $\frac{n^2 - 1}{n^2 + 2} = \frac{4\pi}{3} N \alpha_m$, where $n$ is the refractive index, $N$ is the number of molecules per unit volume, and $\alpha_m$ is the mean polarizability. This equation is approximately valid for homogeneous solids as well as liquids and gases.
Lorentz–Lorenz equation:
When the square of the refractive index is $n^2 \approx 1$, as it is for many gases, the equation reduces to $n^2 - 1 \approx 4\pi N \alpha_m$, or simply $n - 1 \approx 2\pi N \alpha_m$. This applies to gases at ordinary pressures. The refractive index $n$ of the gas can then be expressed in terms of the molar refractivity $A$ as $n \approx 1 + \frac{3Ap}{2RT}$, where $p$ is the pressure of the gas, $R$ is the universal gas constant, and $T$ is the (absolute) temperature, which together determine the number density $N$. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
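A short numerical sketch of the low-pressure approximation above, assuming a molar refractivity of roughly 4.45 cm³/mol for nitrogen (an approximate literature value used only for illustration):

```python
# Refractive index of a gas from the low-density Lorentz-Lorenz limit n = 1 + 3*A*p/(2*R*T).
# The molar refractivity A = 4.45 cm^3/mol for N2 is an approximate literature value,
# used here purely for illustration.
R = 8.314        # J/(mol K)
A = 4.45e-6      # m^3/mol
p = 101_325.0    # Pa (1 atm)
T = 273.15       # K

n = 1 + 3 * A * p / (2 * R * T)
print(n - 1)     # ~3.0e-4, close to the measured refractivity of nitrogen near STP
```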
**Multiview orthographic projection**
Multiview orthographic projection:
In technical drawing and computer graphics, a multiview projection is a technique of illustration by which a standardized series of orthographic two-dimensional pictures are constructed to represent the form of a three-dimensional object. Up to six pictures of an object are produced (called primary views), with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a six-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a three-dimensional object. These views are known as front view, top view and end view. Other names for these views include plan, elevation and section. When the plane or axis of the object depicted is not parallel to the projection plane, and where multiple sides of an object are visible in the same image, it is called an auxiliary view.
Overview:
To render each such picture, a ray of sight (also called a projection line, projection ray or line of sight) towards the object is chosen, which determines on the object various points of interest (for instance, the points that are visible when looking at the object along the ray of sight); those points of interest are mapped by an orthographic projection to points on some geometric plane (called a projection plane or image plane) that is perpendicular to the ray of sight, thereby creating a 2D representation of the 3D object.
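A minimal numpy sketch of this idea, assuming the ray of sight runs along one of the object's coordinate axes, so that projecting onto the perpendicular image plane amounts to dropping that coordinate; the box vertices are an arbitrary example object:

```python
import numpy as np

def primary_view(points, axis):
    """Orthographic projection onto the plane perpendicular to the given axis:
    with the ray of sight along a coordinate axis, simply drop that coordinate.
    points: (N, 3) array of object vertices; axis: 0 = x, 1 = y, 2 = z."""
    keep = [i for i in range(3) if i != axis]
    return points[:, keep]

# Hypothetical object: the 8 corners of a 2 x 1 x 3 box.
box = np.array([[x, y, z] for x in (0, 2) for y in (0, 1) for z in (0, 3)], dtype=float)

plan  = primary_view(box, axis=1)   # top/bottom view (plan): drop the vertical y-coordinate
front = primary_view(box, axis=2)   # front/back view (elevation): drop z
side  = primary_view(box, axis=0)   # left/right view (elevation): drop x
print(plan.shape, front.shape, side.shape)   # each view is a set of 8 two-dimensional points
```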
Overview:
Customarily, two rays of sight are chosen for each of the three axes of the object's coordinate system; that is, parallel to each axis, the object may be viewed in one of 2 opposite directions, making for a total of 6 orthographic projections (or "views") of the object: Along a vertical axis (often the y-axis): The top and bottom views, which are known as plans (because they show the arrangement of features on a horizontal plane, such as a floor in a building).
Overview:
Along a horizontal axis (often the z-axis): The front and back views, which are known as elevations (because they show the heights of features of an object such as a building).
Overview:
Along an orthogonal axis (often the x-axis): The left and right views, which are also known as elevations, following the same reasoning.These six planes of projection intersect each other, forming a box around the object, the most uniform construction of which is a cube; traditionally, these six views are presented together by first projecting the 3D object onto the 2D faces of a cube, and then "unfolding" the faces of the cube such that all of them are contained within the same plane (namely, the plane of the medium on which all of the images will be presented together, such as a piece of paper, or a computer monitor, etc.). However, even if the faces of the box are unfolded in one standardized way, there is ambiguity as to which projection is being displayed by a particular face; the cube has two faces that are perpendicular to a ray of sight, and the points of interest may be projected onto either one of them, a choice which has resulted in two predominant standards of projection: First-angle projection: In this type of projection, the object is imagined to be in the first quadrant. Because the observer normally looks from the right side of the quadrant to obtain the front view, the objects will come in between the observer and the plane of projection. Therefore, in this case, the object is imagined to be transparent, and the projectors are imagined to be extended from various points of the object to meet the projection plane. When these meeting points are joined in order on the plane they form an image, thus in the first angle projection, any view is so placed that it represents the side of the object away from it. First angle projection is often used throughout parts of Europe so that it is often called European projection.
Overview:
Third-angle projection: In this type of projection, the object is imagined to be in the third quadrant. Again, as the observer is normally supposed to look from the right side of the quadrant to obtain the front view, in this method, the projection plane comes in between the observer and the object. Therefore, the plane of projection is assumed to be transparent. The intersection of this plan with the projectors from all the points of the object would form an image on the transparent plane.
Primary views:
Multiview projections show the primary views of an object, each viewed in a direction parallel to one of the main coordinate axes. These primary views are called plans and elevations. Sometimes they are shown as if the object has been cut across or sectioned to expose the interior: these views are called sections.
Plan A plan is a view of a 3-dimensional object seen from vertically above (or sometimes below). It may be drawn in the position of a horizontal plane passing through, above, or below the object. The outline of a shape in this view is sometimes called its planform, for example with aircraft wings.
The plan view from above a building is called its roof plan. A section seen in a horizontal plane through the walls and showing the floor beneath is called a floor plan.
Elevation Elevation is the view of a 3-dimensional object from the position of a vertical plane beside an object. In other words, an elevation is a side view as viewed from the front, back, left or right (and referred to as a front elevation, [left/ right] side elevation, and a rear elevation).
An elevation is a common method of depicting the external configuration and detailing of a 3-dimensional object in two dimensions. Building façades are shown as elevations in architectural drawings and technical drawings.
Primary views:
Elevations are the most common orthographic projection for conveying the appearance of a building from the exterior. Perspectives are also commonly used for this purpose. A building elevation is typically labeled in relation to the compass direction it faces, i.e. the direction from which a person views it. E.g. the North Elevation of a building is the side that most closely faces true north on the compass. Interior elevations are used to show details such as millwork and trim configurations.
Primary views:
In the building industry elevations are non-perspective views of the structure. These are drawn to scale so that measurements can be taken for any aspect necessary. Drawing sets include front, rear, and both side elevations. The elevations specify the composition of the different facades of the building, including ridge heights, the positioning of the final fall of the land, exterior finishes, roof pitches, and other architectural details.
Primary views:
Developed elevation A developed elevation is a variant of a regular elevation view in which several adjacent non-parallel sides may be shown together as if they have been unfolded. For example, the north and west views may be shown side-by-side, sharing an edge, even though this does not represent a proper orthographic projection.
Section A section, or cross-section, is a view of a 3-dimensional object from the position of a plane through the object.
A section is a common method of depicting the internal arrangement of a 3-dimensional object in two dimensions. It is often used in technical drawing and is traditionally crosshatched. The style of crosshatching often indicates the type of material the section passes through.
With computed axial tomography, computers construct cross-sections from x-ray data.
Auxiliary views:
An auxiliary view, or pictorial, is an orthographic view that is projected onto any plane other than one of the six primary views. These views are typically used when an object has a surface in an oblique plane. By projecting onto a plane parallel with the oblique surface, the true size and shape of the surface are shown. Auxiliary views are often drawn using isometric projection.
Multiviews:
Quadrants in descriptive geometry Modern orthographic projection is derived from Gaspard Monge's descriptive geometry. Monge defined a reference system of two viewing planes, horizontal H ("ground") and vertical V ("backdrop"). These two planes intersect to partition 3D space into 4 quadrants, which he labeled: I: above H, in front of V II: above H, behind V III: below H, behind V IV: below H, in front of VThese quadrant labels are the same as used in 2D planar geometry, as seen from infinitely far to the "left", taking H and V to be the X-axis and Y-axis, respectively.
Multiviews:
The 3D object of interest is then placed into either quadrant I or III (equivalently, the position of the intersection line between the two planes is shifted), obtaining first- and third-angle projections, respectively. Quadrants II and IV are also mathematically valid, but their use would result in one view "true" and the other view "flipped" by 180° through its vertical centerline, which is too confusing for technical drawings. (In cases where such a view is useful, e.g. a ceiling viewed from above, a reflected view is used, which is a mirror image of the true orthographic view.) Monge's original formulation uses two planes only and obtains the top and front views only. The addition of a third plane to show a side view (either left or right) is a modern extension. The terminology of quadrant is a mild anachronism, as a modern orthographic projection with three views corresponds more precisely to an octant of 3D space.
Multiviews:
First-angle projection In first-angle projection, the object is conceptually located in quadrant I, i.e. it floats above and before the viewing planes, the planes are opaque, and each view is pushed through the object onto the plane furthest from it. (Mnemonic: an "actor on a stage".) Extending to the 6-sided box, each view of the object is projected in the direction (sense) of sight of the object, onto the (opaque) interior walls of the box; that is, each view of the object is drawn on the opposite side of the box. A two-dimensional representation of the object is then created by "unfolding" the box, to view all of the interior walls. This produces two plans and four elevations. A simpler way to visualize this is to place the object on top of an upside-down bowl. Sliding the object down the right edge of the bowl reveals the right side view.
Multiviews:
Third-angle projection In third-angle projection, the object is conceptually located in quadrant III, i.e. it is positioned below and behind the viewing planes, the planes are transparent, and each view is pulled onto the plane closest to it. (Mnemonic: a "shark in a tank", esp. that is sunken into the floor.) Using the 6-sided viewing box, each view of the object is projected opposite to the direction (sense) of sight, onto the (transparent) exterior walls of the box; that is, each view of the object is drawn on the same side of the box. The box is then unfolded to view all of its exterior walls. A simpler way to visualize this is to place the object in the bottom of a bowl. Sliding the object up the right edge of the bowl reveals the right side view.
Multiviews:
Here is the construction of third angle projections of the same object as above. Note that the individual views are the same, just arranged differently.
Multiviews:
Additional information First-angle projection is as if the object were sitting on the paper and, from the "face" (front) view, it is rolled to the right to show the left side or rolled up to show its bottom. It is standard throughout Europe and Asia (excluding Japan). First-angle projection was widely used in the UK, but during World War II, British drawings sent to be manufactured in the USA, such as of the Rolls-Royce Merlin, had to be drawn in third-angle projection before they could be produced, e.g., as the Packard V-1650 Merlin. This meant that some British companies completely adopted third angle projection. BS 308 (Part 1) Engineering Drawing Practice, gave the option of using both projections, but generally, every illustration (other than the ones explaining the difference between first and third-angle) was done in first-angle. After the withdrawal of BS 308 in 1999, BS 8888 offered the same choice since it referred directly to ISO 5456-2, Technical drawings – Projection methods – Part 2: Orthographic representations.
Multiviews:
Third-angle is as if the object were a box to be unfolded. If we unfold the box so that the front view is in the center of the two arms of the unfolded cross, then the top view is above it, the bottom view is below it, the left view is to the left, and the right view is to the right. It is standard in the USA (ASME Y14.3-2003 specifies it as the default projection system), Japan (JIS B 0001:2010 specifies it as the default projection system), Canada, and Australia (AS1100.101 specifies it as the preferred projection system).
Multiviews:
Both first-angle and third-angle projections result in the same 6 views; the difference between them is the arrangement of these views around the box.
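Since the six views themselves are identical and only their placement differs, the difference between the two conventions can be captured in a small lookup table. The sketch below is illustrative only; the placements follow the descriptions above rather than any standard's wording:

```python
# Placement of the other principal views relative to the front view.
# First-angle "rolls" each view through the object to the far side,
# so left/right and top/bottom end up swapped; third-angle keeps each
# view on the side from which it is seen.
FIRST_ANGLE = {
    "above front": "bottom view",
    "below front": "top view",
    "left of front": "right-side view",
    "right of front": "left-side view",
}
THIRD_ANGLE = {
    "above front": "top view",
    "below front": "bottom view",
    "left of front": "left-side view",
    "right of front": "right-side view",
}

for position in FIRST_ANGLE:
    print(f"{position:>15}: first-angle = {FIRST_ANGLE[position]:<15} "
          f"third-angle = {THIRD_ANGLE[position]}")
```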
Symbol A great deal of confusion has ensued in drafting rooms and engineering departments when drawings are transferred from one convention to another. On engineering drawings, the projection is denoted by an international symbol representing a truncated cone in either first-angle or third-angle projection, as shown by the diagram on the right.
The 3D interpretation is a solid truncated cone, with the small end pointing toward the viewer. The front view is, therefore, two concentric circles. The fact that the inner circle is drawn with a solid line instead of dashed identifies this view as the front view, not the rear view. The side view is an isosceles trapezoid.
In first-angle projection, the front view is pushed back to the rear wall, and the right side view is pushed to the left wall, so the first-angle symbol shows the trapezoid with its shortest side away from the circles.
In third-angle projection, the front view is pulled forward to the front wall, and the right side view is pulled to the right wall, so the third-angle symbol shows the trapezoid with its shortest side towards the circles.
Multiviews without rotation:
Orthographic multiview projection is derived from the principles of descriptive geometry and may produce an image of a specified, imaginary object as viewed from any direction of space. Orthographic projection is distinguished by parallel projectors emanating from all points of the imaged object and intersecting the plane of projection at right angles. Above, a technique is described that obtains varying views by projecting images after the object is rotated to the desired position.
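Numerically, a projector that runs parallel to one axis and meets the image plane at right angles simply discards the coordinate along the viewing direction. A minimal sketch of the three basic views, using made-up vertex data (illustrative, not taken from any figure in this article):

```python
import numpy as np

# Hypothetical object: corners of a 40 x 20 x 10 block (units arbitrary).
vertices = np.array([
    [0, 0, 0], [40, 0, 0], [40, 20, 0], [0, 20, 0],
    [0, 0, 10], [40, 0, 10], [40, 20, 10], [0, 20, 10],
], dtype=float)

def principal_view(points, drop_axis):
    """Orthographic view along one axis: parallel projectors perpendicular
    to the image plane, so the coordinate along that axis is discarded."""
    keep = [axis for axis in range(3) if axis != drop_axis]
    return points[:, keep]

front_view = principal_view(vertices, drop_axis=1)  # looking along y: keep (x, z)
top_view   = principal_view(vertices, drop_axis=2)  # looking along z: keep (x, y)
side_view  = principal_view(vertices, drop_axis=0)  # looking along x: keep (y, z)

print("front view outline:\n", front_view)
```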
Multiviews without rotation:
Descriptive geometry customarily relies on obtaining various views by imagining an object to be stationary and changing the direction of projection (viewing) in order to obtain the desired view.
Multiviews without rotation:
See Figure 1. Using the rotation technique above, note that no orthographic view is available looking perpendicularly at any of the inclined surfaces. Suppose a technician desired such a view to, say, look through a hole to be drilled perpendicularly to the surface. Such a view might be desired for calculating clearances or for dimensioning purposes. To obtain this view without multiple rotations requires the principles of Descriptive Geometry. The steps below describe the use of these principles in third angle projection.
Multiviews without rotation:
Fig.1: Pictorial of the imaginary object that the technician wishes to image.
Fig.2: The object is imagined behind a vertical plane of projection. The angled corner of the plane of projection is addressed later.
Fig.3: Projectors emanate parallel from all points of the object, perpendicular to the plane of projection.
Fig.4: An image is created thereby.
Fig.5: A second, horizontal plane of projection is added, perpendicular to the first.
Fig.6: Projectors emanate parallel from all points of the object perpendicular to the second plane of projection.
Fig.7: An image is created thereby.
Fig.8: The third plane of projection is added, perpendicular to the previous two.
Fig.9: Projectors emanate parallel from all points of the object perpendicular to the third plane of projection.
Fig.10: An image is created thereby.
Fig.11: The fourth plane of projection is added parallel to the chosen inclined surface, and perforce, perpendicular to the first (Frontal) plane of projection.
Fig.12: Projectors emanate parallel from all points of the object perpendicularly from the inclined surface, and perforce, perpendicular to the fourth (Auxiliary) plane of projection.
Fig.13: An image is created thereby.
Fig.14-16: The various planes of projection are unfolded to be planar with the Frontal plane of projection.
Fig.17: The final appearance of an orthographic multiview projection, which includes an "Auxiliary view" showing the true shape of an inclined surface.
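The auxiliary-view step of Figs. 11–13 is, numerically, the same parallel projection carried out onto a plane whose normal matches the chosen inclined surface. A minimal sketch under assumed data; the surface normal and points below are illustrative, not taken from the figures:

```python
import numpy as np

def auxiliary_view(points, normal):
    """Project points orthographically onto a plane with the given normal,
    returning 2D coordinates in an arbitrary in-plane basis."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Pick a helper vector not parallel to n, then build an in-plane basis (u, v).
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    # Parallel projectors perpendicular to the plane: keep only in-plane components.
    return np.column_stack([points @ u, points @ v])

# Hypothetical surface inclined 45 degrees between the frontal and horizontal
# planes; its normal defines the auxiliary viewing direction (illustrative data).
inclined_normal = [0.0, 1.0, 1.0]
corner_points = np.array([[0, 0, 0], [30, 0, 0], [30, 20, 20], [0, 20, 20]], dtype=float)
print(auxiliary_view(corner_points, inclined_normal))
```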
Territorial use:
First-angle is used in most of the world. Third-angle projection is most commonly used in America and Japan (in JIS B 0001:2010), and is preferred in Australia, as laid down in AS 1100.101-1992 6.3.3. In the UK, BS 8888 9.7.2.1 allows for three different conventions for arranging views: Labelled Views, Third Angle Projection, and First Angle Projection. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Manganese pentacarbonyl bromide**
Manganese pentacarbonyl bromide:
Manganese pentacarbonyl bromide is an organomanganese compound with the formula BrMn(CO)5. It is a bright orange solid that is a precursor to other manganese complexes. The compound is prepared by treatment of dimanganese decacarbonyl with bromine: Mn2(CO)10 + Br2 → 2 BrMn(CO)5. The complex undergoes substitution by a variety of donor ligands (L), e.g. to give derivatives of the type BrMn(CO)3L2.
Manganese pentacarbonyl bromide:
The complex adopts an octahedral coordination geometry. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Prostaglandin EP1 receptor**
Prostaglandin EP1 receptor:
Prostaglandin E2 receptor 1 (EP1) is a 42 kDa prostaglandin receptor encoded by the PTGER1 gene. EP1 is one of four identified EP receptors, EP1, EP2, EP3, and EP4, which bind with and mediate cellular responses principally to prostaglandin E2 (PGE2) and also, but generally with lesser affinity and responsiveness, to certain other prostanoids (see Prostaglandin receptors). Animal model studies have implicated EP1 in various physiological and pathological responses. However, key differences in the distribution of EP1 between these test animals and humans as well as other complicating issues make it difficult to establish the function(s) of this receptor in human health and disease.
Gene:
The PTGER1 gene is located on human chromosome 19 at position p13.12 (i.e. 19p13.12), contains 2 introns and 3 exons, and codes for a G protein-coupled receptor (GPCR) of the rhodopsin-like receptor family, Subfamily A14 (see rhodopsin-like receptors#Subfamily A14).
Expression:
Studies in mice, rats, and guinea pigs have found EP1 messenger RNA and protein to be expressed in the kidney (particularly the papillary collecting ducts), lung, stomach, and thalamus, and in dorsal root ganglia neurons as well as several central nervous system sites. In humans, however, EP1 expression appears to be more limited: EP1 receptors have been detected in human mast cells, pulmonary veins, keratinocytes, myometrium, and colon smooth muscle.
Ligands:
Activating ligands The following standard prostaglandins have the following relative potencies in binding to and activating EP1: PGE2 ≥ PGE1 > PGF2alpha > PGD2. The dissociation constant Kd (i.e. the ligand concentration needed to occupy 50% of available EP1 receptors) is ~20 nM for PGE2 and ~40 nM for PGE1 at the mouse receptor, and ~25 nM for PGE2 at the human receptor. Because PGE2 activates multiple prostanoid receptors and has a short half-life in vivo due to its rapid metabolism in cells by omega oxidation and beta oxidation, metabolically resistant EP1-selective activators are useful for the study of EP1's function and could be clinically useful for the treatment of certain diseases. Only one such agonist that is highly selective in stimulating EP1 has been synthesized and identified, ONO-DI-004. This compound has a Ki inhibitory binding value (see Biochemistry#Receptor/ligand binding affinity) of 150 nM compared to that of 25 nM for PGE2 and is therefore roughly six times weaker than PGE2.
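Since Kd is defined here as the concentration giving 50% occupancy, the fraction of EP1 receptors occupied at any ligand concentration follows from the simple one-site binding isotherm, occupancy = [L]/([L] + Kd). A minimal sketch using the ~20 nM figure quoted above (illustrative only):

```python
def fractional_occupancy(ligand_nM, kd_nM):
    # One-site binding isotherm: occupancy = [L] / ([L] + Kd).
    return ligand_nM / (ligand_nM + kd_nM)

# PGE2 at the mouse EP1 receptor (Kd ~ 20 nM, as quoted above).
for conc in (2, 20, 200):
    print(f"[PGE2] = {conc:>3} nM -> occupancy = {fractional_occupancy(conc, 20):.2f}")
```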
Ligands:
Inhibiting ligands SC51322 (Ki=13.8 nM), GW-848687 (Ki=8.6 nM), ONO-8711, SC-19220, SC-51089, and several other synthetic compounds given in the next cited reference are selective competitive antagonists for EP1 that have been used for studies in animal models of human diseases. Carbacyclin, 17-phenyltrinor PGE1, and several other tested compounds are dual EP1/EP3 antagonists (most marketed prostanoid receptor antagonists exhibit poor receptor selectivity).
Mechanism of cell activation:
When initially bound to PGE2 or other stimulating ligand, EP1 mobilizes G proteins containing the Gq alpha subunit (Gαq/11)-G beta-gamma complex. These two subunits in turn stimulate the Phosphoinositide 3-kinase pathway that raises cellular cytosolic Ca2+ levels, thereby regulating Ca2+-sensitive cell signal pathways which include, among several others, those that promote the activation of certain protein kinase C isoforms. Since this rise in cytosolic Ca2+ can also contract muscle cells, EP1 has been classified as a contractile type of prostanoid receptor. The activation of protein kinase C feeds back to phosphorylate, and thereby desensitize, the activated EP1 receptor (see homologous desensitization), but may also desensitize other types of prostanoid and non-prostanoid receptors (see heterologous desensitization). These desensitizations limit further EP1 receptor activation within the cell. Concurrently with the mobilization of these pathways, ligand-activated EP1 stimulates ERK, p38 mitogen-activated protein kinases, and CREB pathways that lead to cellular functional responses.
Function:
Studies using animals genetically engineered to lack EP1 and supplemented by studies using treatment with EP1 receptor antagonists and agonists indicate that this receptor serves several functions. 1) It mediates hyperalgesia due to EP1 receptors located in the central nervous system but suppresses pain perception due to EP1 located on dorsal root ganglia neurons in rats. Thus, PGE2 causes increased pain perception when administered into the central nervous system but inhibits pain perception when administered systemically; 2) It promotes colon cancer development in Azoxymethane-induced and APC gene knockout mice. 3) It promotes hypertension in diabetic mice and spontaneously hypertensive rats. 4) It suppresses stress-induced impulsive behavior and social dysfunction in mice by suppressing the activation of Dopamine receptor D1 and Dopamine receptor D2 signaling. 5) It enhances the differentiation of uncommitted T cell lymphocytes to the Th1 cell phenotype and may thereby favor the development of inflammatory rather than allergic responses to immune stimulation in rodents. Studies with human cells indicate that EP1 serves a similar function on T cells. 6) It may reduce expression of Sodium-glucose transport proteins in the apical membrane of cells of the intestinal mucosa in rodents. 7) It may be differentially involved in the etiology of acute brain injuries. Pharmacological inhibition or genetic deletion of the EP1 receptor produces either beneficial or deleterious effects in rodent models of neurological disorders such as ischemic stroke, epileptic seizure, surgically induced brain injury and traumatic brain injury.
Clinical studies:
EP1 receptor antagonists have been studied clinically primarily to treat hyperalgesia. Numerous EP antagonists have been developed, including SC51332 and GW-848687X (a benzofuran-containing drug), which have had some efficacy in treating various hyperalgesic syndromes in animal models. None have as yet been reported to be useful in humans. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Java Web Services Development Pack**
Java Web Services Development Pack:
The Java Web Services Development Pack (JWSDP) is a free software development kit (SDK) for developing Web Services, Web applications and Java applications with the newest technologies for Java.
Oracle replaced JWSDP with GlassFish. All components of JWSDP are part of GlassFish and WSIT and several are in Java SE 6 ("Mustang"). The source is available under the Open Source Initiative-approved CDDL license.
Java APIs:
These are the components and APIs available in the JWSDP 1.6: Java API for XML Processing (JAXP) v 1.3; Java API for XML Registries (JAXR); Java Architecture for XML Binding (JAXB) v 1.0 and 2.0; JAX-RPC v 1.1; JAX-WS v 2.0; SAAJ (SOAP with Attachments API for Java); and the Web Services Registry. Starting with JWSDP 1.6, the JAX-RPC and JAX-WS implementations support the Fast Infoset standard for the binary encoding of the XML infoset. Earlier versions of JWSDP also included Java Servlet, JavaServer Pages, and JavaServer Faces.
Related technologies:
There are many other Java implementations of Web Services or XML processors. Some of them support the Java standards, some support other standards or non-standard features. Related technologies include: Eclipse Metro - web services stack from GlassFish; Apache Axis - web services framework; XINS - RPC/web services framework; xmlenc - XML output library; and JBossWS - web services stack from JBoss. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Eileen Southgate**
Eileen Southgate:
Eileen Southgate is a British biologist who mapped the complete nervous system of the roundworm Caenorhabditis elegans (C. elegans), together with John White, Nichol Thomson, and Sydney Brenner. The work, done largely by hand-tracing thousands of serial section electron micrographs, was the first complete nervous system map of any animal and it helped establish C. elegans as a model organism. Among other projects carried out as a laboratory assistant at the Medical Research Council Laboratory of Molecular Biology (MRC-LMB), Southgate contributed to work on solving the structure of hemoglobin with Max Perutz and John Kendrew, and investigating the causes of sickle cell disease with Vernon Ingram.
Career:
Southgate spent her entire career as a laboratory technician at the Medical Research Council Laboratory of Molecular Biology (MRC LMB). She began working there in 1956, at the age of 16, after being given the option by a career officer who came to her school. Southgate initially worked for Max Perutz and John Kendrew studying hemoglobin, the protein responsible for carrying oxygen throughout the bloodstream, and the related protein myoglobin. Among other jobs, she was tasked with helping prepare hemoglobin and myoglobin for x-ray crystallography, a technique used to determine the structures of crystallized molecules such as proteins, based on how they interact with x-ray beams to produce a diffraction pattern. Thanks in part to Southgate's assistance, Perutz and Kendrew solved crystal structures of hemoglobin and myoglobin, winning them the 1962 Nobel Prize in chemistry “for being the first to successfully identify the structures of complex proteins.” Southgate carried out additional research on hemoglobin with Vernon Ingram, assisting with his research on sickle cell disease, a genetic disease in which a mutation in hemoglobin causes it to form chains (polymerize) and block blood vessels. In 1962, Southgate briefly worked with Reuben Lebermen on his studies of plant viruses; she grew the plants, which were then infected by viruses he wanted to study, then she harvested them and purified out the viral particles. She then went to work for Tony Stretton, where, after initial work helping him investigate β-galactosidase, she aided in his exploration of the nervous system of the parasitic nematode Ascaris lumbricoides using light microscopy. When Stretton left for the University of Wisconsin in 1971, Southgate went to work with John White, who was then a PhD student under Sydney Brenner. Brenner was interested in establishing C. elegans as a model organism at MRC LMB, and using it to study the nervous system and its connection to genetics. In pursuit of this goal, he wanted to obtain a complete map of the C. elegans nervous system, and Southgate was tasked with helping John White and electron microscopist Nichol Thomson achieve this. C. elegans is around 100 times smaller than Ascaris (~1 mm compared to ~10 cm), so they had to use a higher-resolution imaging technique, electron microscopy. Nichol Thomson helped prepare thousands of serial transverse sections of C. elegans worms, which Southgate imaged, printed out, and traced. She labeled the cell bodies, processes, and connections in each image and worked with John White to trace each neuron's journey through the worm. The process took close to 15 years and culminated in a 340-page-long paper published in 1986 in the Philosophical Transactions of the Royal Society B. Officially titled “The structure of the nervous system of the nematode Caenorhabditis elegans,” it is commonly referred to by its running title, “The Mind of a Worm.” They identified 302 neurons in the hermaphrodite C. elegans worm, which they grouped into 118 classes, and they discovered that the layout and connections were virtually the same in genetically-identical worms. They found close to 8,000 total synapses (cell to cell connections), which included around 2000 neuromuscular junctions, 5000 chemical synapses, and 600 gap junctions (where communication is through electrical signals). Having the map helped establish C. elegans as a model organism and allowed for further research into neural circuitry and the genes involved in establishing C. elegans' neural layout.
Additionally, it aided researchers in studying analogous nerves in other nematodes, including Ascaris, which, due to its larger size, is more amenable to electrophysiological investigation. Southgate retired in 1993. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tanabe–Sugano diagram**
Tanabe–Sugano diagram:
In coordination chemistry, Tanabe–Sugano diagrams are used to predict absorptions in the ultraviolet (UV), visible and infrared (IR) electromagnetic spectrum of coordination compounds. The results from a Tanabe–Sugano diagram analysis of a metal complex can also be compared to experimental spectroscopic data. They are qualitatively useful and can be used to approximate the value of 10Dq, the ligand field splitting energy. Tanabe–Sugano diagrams can be used for both high spin and low spin complexes, unlike Orgel diagrams, which apply only to high spin complexes. Tanabe–Sugano diagrams can also be used to predict the size of the ligand field necessary to cause high-spin to low-spin transitions.
Tanabe–Sugano diagram:
In a Tanabe–Sugano diagram, the ground state is used as a constant reference, in contrast to Orgel diagrams. The energy of the ground state is taken to be zero for all field strengths, and the energies of all other terms and their components are plotted with respect to the ground term.
Background:
Until Yukito Tanabe and Satoru Sugano published their paper "On the absorption spectra of complex ions", in 1954, little was known about the excited electronic states of complex metal ions. They used Hans Bethe's crystal field theory and Giulio Racah's linear combinations of Slater integrals, now called Racah parameters, to explain the absorption spectra of octahedral complex ions in a more quantitative way than had been achieved previously. Many spectroscopic experiments later, they estimated the values for two of Racah's parameters, B and C, for each d-electron configuration based on the trends in the absorption spectra of isoelectronic first-row transition metals. The plots of the energies calculated for the electronic states of each electron configuration are now known as Tanabe–Sugano diagrams.
Background:
The C/B ratio must be fitted for each octahedral coordination complex, because it can deviate strongly from the theoretical value of 4.0. This ratio changes the relative energies of the levels in the Tanabe–Sugano diagrams, and thus the diagrams may vary slightly between sources depending on what C/B ratio was selected when plotting.
Parameters:
The x-axis of a Tanabe–Sugano diagram is expressed in terms of the ligand field splitting parameter, Δ, or Dq (for "differential of quanta"), divided by the Racah parameter B. The y-axis is in terms of energy, E, also scaled by B. Three Racah parameters exist, A, B, and C, which describe various aspects of interelectronic repulsion. A is an average total interelectron repulsion. B and C correspond with individual d-electron repulsions. A is constant among d-electron configurations, and it is not necessary for calculating relative energies, hence its absence from Tanabe and Sugano's studies of complex ions. C is necessary only in certain cases. B is the most important of Racah's parameters in this case. One line corresponds to each electronic state. The bending of certain lines is due to the mixing of terms with the same symmetry. Although electronic transitions are only "allowed" if the spin multiplicity remains the same (i.e. electrons do not change from spin up to spin down or vice versa when moving from one energy level to another), energy levels for "spin-forbidden" electronic states are included in the diagrams; such states are not included in Orgel diagrams. Each state is given its molecular-symmetry label (e.g. A1g, T2g, etc.), but "g" and "u" subscripts are usually left off because it is understood that all the states are gerade. Labels for each state are usually written on the right side of the table, though for more complicated diagrams (e.g. d6) labels may be written in other locations for clarity. Term symbols (e.g. 3P, 1S, etc.) for a specific dn free ion are listed, in order of increasing energy, on the y-axis of the diagram. The relative order of energies is determined using Hund's rules. For an octahedral complex, the spherical free-ion term symbols split into the corresponding octahedral terms. Certain Tanabe–Sugano diagrams (d4, d5, d6, and d7) also have a vertical line drawn at a specific Dq/B value, which is accompanied by a discontinuity in the slopes of the excited states' energy levels. This pucker in the lines occurs when the identity of the ground state changes, as shown in the diagram below. The left depicts the relative energies of the d7 ion states as functions of crystal field strength (Dq), showing an intersection of the 4T1 and the 2E states near Dq/B ~ 2.1. Subtracting the ground state energy produces the standard Tanabe–Sugano diagram shown on the right. This change in identity generally happens when the spin pairing energy, P, is equal to the ligand field splitting energy, Dq. Complexes to the left of this line (lower Dq/B values) are high-spin, while complexes to the right (higher Dq/B values) are low-spin. There is no low-spin or high-spin designation for d2, d3, or d8 because none of the states cross at reasonable crystal field energies.
Tanabe–Sugano diagrams:
The seven Tanabe–Sugano diagrams for octahedral complexes are shown below.
Unnecessary diagrams: d1, d9 and d10:
d1 There is no electron repulsion in a d1 complex, and the single electron resides in the t2g orbital ground state. A d1 octahedral metal complex, such as [Ti(H2O)6]3+, shows a single absorption band in a UV-vis experiment. The term symbol for d1 is 2D, which splits into the 2T2g and 2Eg states. The t2g orbital set holds the single electron and has a 2T2g state energy of -4Dq. When that electron is promoted to an eg orbital, it is excited to the 2Eg state energy, +6Dq. This is in accordance with the single absorption band in a UV-vis experiment. The prominent shoulder in this absorption band is due to a Jahn–Teller distortion which removes the degeneracy of the two 2Eg states. However, since these two transitions overlap in a UV-vis spectrum, this transition from 2T2g to 2Eg does not require a Tanabe–Sugano diagram.
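For the d1 case the arithmetic is simple: the single band corresponds to E(2Eg) − E(2T2g) = 6Dq − (−4Dq) = 10Dq, so the ligand field splitting can be read directly from the absorption maximum. A minimal sketch of that conversion (the wavelength below is an illustrative round number, not a measured value from this article):

```python
# Convert an absorption maximum to the ligand-field splitting for a d1 ion,
# where the single d-d band corresponds to 10Dq (the 2T2g -> 2Eg transition).
def ten_dq_from_lambda(nm):
    return 1.0e7 / nm  # band energy in cm^-1 equals 10Dq for d1

# Illustrative value in the region where [Ti(H2O)6]3+ absorbs (around 500 nm).
lambda_max_nm = 500.0
print(f"10Dq approx {ten_dq_from_lambda(lambda_max_nm):,.0f} cm^-1")
```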
Unnecessary diagrams: d1, d9 and d10:
d9 Similar to d1 metal complexes, d9 octahedral metal complexes have a 2D spectral term. The transition is from the (t2g)6(eg)3 configuration (2Eg state) to the (t2g)5(eg)4 configuration (2T2g state). This could also be described as a positive "hole" that moves from the eg to the t2g orbital set. The sign of Dq is opposite that for d1, with a 2Eg ground state and a 2T2g excited state. Like the d1 case, d9 octahedral complexes do not require the Tanabe–Sugano diagram to predict their absorption spectra.
Unnecessary diagrams: d1, d9 and d10:
d10 There are no d-d electron transitions in d10 metal complexes because the d orbitals are completely filled. Thus, UV-vis absorption bands are not observed and a Tanabe–Sugano diagram does not exist.
Diagrams for tetrahedral symmetry:
Tetrahedral Tanabe–Sugano diagrams are generally not found in textbooks because the diagram for a dn tetrahedral will be similar to that for d(10-n) octahedral, remembering that ΔT for tetrahedral complexes is approximately 4/9 of ΔO for an octahedral complex. Because ΔT is so much smaller, (almost) all tetrahedral complexes are high spin, and therefore the change in the ground state term seen on the x-axis of octahedral d4-d7 diagrams is not required for interpreting spectra of tetrahedral complexes.
Advantages over Orgel diagrams:
In Orgel diagrams, the magnitude of the splitting energy exerted by the ligands on d orbitals, as a free ion approaches a ligand field, is compared to the electron-repulsion energy, and together these are sufficient to determine the placement of electrons. However, if the ligand field splitting energy, 10Dq, is greater than the electron-repulsion energy, then Orgel diagrams fail in determining electron placement. In this case, Orgel diagrams are restricted to only high spin complexes. Tanabe–Sugano diagrams do not have this restriction, and can be applied to situations when 10Dq is significantly greater than electron repulsion. Thus, Tanabe–Sugano diagrams are utilized in determining electron placements for high spin and low spin metal complexes. However, they are limited in that they have only qualitative significance. Even so, Tanabe–Sugano diagrams are useful in interpreting UV-vis spectra and determining the value of 10Dq.
Applications as a qualitative tool:
In a centrosymmetric ligand field, such as in octahedral complexes of transition metals, the arrangement of electrons in the d-orbital is not only limited by electron repulsion energy, but it is also related to the splitting of the orbitals due to the ligand field. This leads to many more electron configuration states than is the case for the free ion. The relative energy of the repulsion energy and splitting energy defines the high-spin and low-spin states.
Applications as a qualitative tool:
Considering both weak and strong ligand fields, a Tanabe–Sugano diagram shows the energy splitting of the spectral terms with the increase of the ligand field strength. It is possible for us to understand how the energy of the different configuration states is distributed at certain ligand strengths. The restriction of the spin selection rule makes it even easier to predict the possible transitions and their relative intensity. Although they are qualitative, Tanabe–Sugano diagrams are very useful tools for analyzing UV-vis spectra: they are used to assign bands and calculate Dq values for ligand field splitting.
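In practice the diagram is read by converting two observed band maxima to wavenumbers, matching their ratio to the ratio of two excited-state curves at some Δ/B, and then solving for B and 10Dq. A minimal sketch of this bookkeeping; the band positions and the values "read off the diagram" are placeholders, not data from the examples that follow:

```python
# Band-ratio bookkeeping for fitting a Tanabe-Sugano diagram.
def to_wavenumber(nm):
    return 1.0e7 / nm  # cm^-1

band1_nm, band2_nm = 675.0, 430.0          # hypothetical absorption maxima
E1, E2 = to_wavenumber(band1_nm), to_wavenumber(band2_nm)
print(f"Observed ratio E2/E1 = {E2 / E1:.2f}")

# Suppose the diagram shows the two excited-state curves reaching this ratio
# at Delta/B = 20, where E1/B reads about 18 (both values are placeholders).
delta_over_B, E1_over_B = 20.0, 18.0
B = E1 / E1_over_B
print(f"B approx {B:.0f} cm^-1, 10Dq approx {delta_over_B * B:.0f} cm^-1")
```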
Applications as a qualitative tool:
Examples Manganese(II) hexahydrate In the [Mn(H2O)6]2+ metal complex, manganese has an oxidation state of +2, thus it is a d5 ion. H2O is a weak field ligand (spectrum shown below), and according to the Tanabe–Sugano diagram for d5 ions, the ground state is 6A1. Note that there is no sextet spin multiplicity in any excited state, hence the transitions from this ground state are expected to be spin-forbidden and the band intensities should be low. From the spectra, only very low intensity bands are observed (low molar absorptivity (ε) values on y-axis).
Applications as a qualitative tool:
Cobalt(II) hexahydrate Another example is [Co(H2O)6]2+. Note that the ligand is the same as the last example. Here the cobalt ion has the oxidation state of +2, and it is a d7 ion. From the high-spin (left) side of the d7 Tanabe–Sugano diagram, the ground state is 4T1(F), and the spin multiplicity is a quartet. The diagram shows that there are three quartet excited states: 4T2, 4A2, and 4T1(P). From the diagram one can predict that there are three spin-allowed transitions. However, the spectrum of [Co(H2O)6]2+ does not show three distinct peaks that correspond to the three predicted excited states. Instead, the spectrum has a broad peak (spectrum shown below). Based on the T–S diagram, the lowest energy transition is 4T1 to 4T2, which is seen in the near IR and is not observed in the visible spectrum. The main peak is the energy transition 4T1(F) to 4T1(P), and the slightly higher energy transition (the shoulder) is predicted to be 4T1 to 4A2. The small energy difference leads to the overlap of the two peaks, which explains the broad peak observed in the visible spectrum. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chemosis**
Chemosis:
Chemosis is the swelling (or edema) of the conjunctiva. The term derives from the Greek words cheme and -osis, cheme meaning cockleshell due to the swollen conjunctiva resembling it, and -osis meaning condition. The swelling is due to the oozing of exudate from abnormally permeable capillaries. In general, chemosis is a nonspecific sign of eye irritation. The outer surface covering appears to have fluid in it. The conjunctiva becomes swollen and gelatinous in appearance. Often, the eye area swells so much that the eyes become difficult or impossible to close fully. Sometimes, it may also appear as if the eyeball has moved slightly backwards from the white part of the eye, due to the fluid that fills the conjunctiva over the whole eye except the iris. The iris is not covered by this fluid and so appears to be set slightly inwards.
Causes:
It is usually caused by allergies or viral infections, often inciting excessive eye rubbing. Chemosis is also included in the Chandler Classification system of orbital infections.
Causes:
If chemosis has occurred due to excessive rubbing of the eye, the first aid to be given is a cold water wash for eyes. Other causes of chemosis include: superior vena cava obstruction (accompanied by facial oedema); hyperthyroidism (associated with exophthalmos, periorbital puffiness, lid retraction, and lid lag); cavernous sinus thrombosis (associated with infection of the paranasal sinuses, proptosis, periorbital oedema, retinal haemorrhages, papilledema, extraocular movement abnormalities, and trigeminal nerve sensory loss); carotid-cavernous fistula (classic triad of chemosis, pulsatile proptosis, and ocular bruit); cluster headache; trichinellosis; systemic lupus erythematosus (SLE); angioedema; acute glaucoma; panophthalmitis; orbital cellulitis; gonorrheal conjunctivitis; dacryocystitis; spitting cobra venom to the eye; high concentrations of phenacyl chloride in chemical mace spray; urticaria; trauma; HSV keratitis; post surgical; Mucor; and rhabdomyosarcoma of the orbit.
Diagnosis:
An eye doctor may most often diagnose chemosis by doing a physical examination of the affected area. They can also ask questions about the severity and length of other symptoms.
Treatment:
Treatment depends on the cause of the chemosis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Viral evolution**
Viral evolution:
Viral evolution is a subfield of evolutionary biology and virology that is specifically concerned with the evolution of viruses. Viruses have short generation times, and many—in particular RNA viruses—have relatively high mutation rates (on the order of one point mutation or more per genome per round of replication). Although most viral mutations confer no benefit and often even prove deleterious to viruses, the rapid rate of viral mutation combined with natural selection allows viruses to quickly adapt to changes in their host environment. In addition, because viruses typically produce many copies in an infected host, mutated genes can be passed on to many offspring quickly. Although the chance of mutations and evolution can change depending on the type of virus (e.g., double stranded DNA, double stranded RNA, single strand DNA), viruses overall have high chances for mutations.
Viral evolution:
Viral evolution is an important aspect of the epidemiology of viral diseases such as influenza (influenza virus), AIDS (HIV), and hepatitis (e.g. HCV). The rapidity of viral mutation also causes problems in the development of successful vaccines and antiviral drugs, as resistant mutations often appear within weeks or months after the beginning of a treatment. One of the main theoretical models applied to viral evolution is the quasispecies model, which defines a viral quasispecies as a group of closely related viral strains competing within an environment.
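The quasispecies picture can be illustrated with a toy mutation–selection simulation: with a high per-site error rate, the population settles into a cloud of closely related variants around the fittest sequence rather than a single genotype. This is a minimal, self-contained sketch with made-up parameters, not a model from the literature:

```python
import random

GENOME_LEN = 30          # toy genome length
MUT_RATE = 0.05          # per-site mutation probability per replication (illustrative)
POP_SIZE = 200
GENERATIONS = 50
wild_type = tuple(0 for _ in range(GENOME_LEN))

def fitness(genome):
    # Fewer mutations away from the wild type -> higher fitness (toy assumption).
    return 1.0 / (1 + sum(genome))

def replicate(genome):
    # Error-prone copying: each site flips with probability MUT_RATE.
    return tuple(b ^ 1 if random.random() < MUT_RATE else b for b in genome)

population = [wild_type] * POP_SIZE
for _ in range(GENERATIONS):
    weights = [fitness(g) for g in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [replicate(p) for p in parents]

distinct = len(set(population))
mean_dist = sum(sum(g) for g in population) / POP_SIZE
print(f"{distinct} distinct variants; mean distance from wild type = {mean_dist:.1f}")
```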
Origins:
Three classical hypotheses Viruses are ancient. Studies at the molecular level have revealed relationships between viruses infecting organisms from each of the three domains of life, suggesting viral proteins that pre-date the divergence of life, and thus that viruses infected the last universal common ancestor. This indicates that some viruses emerged early in the evolution of life, and that they have probably arisen multiple times. It has been suggested that new groups of viruses have repeatedly emerged at all stages of evolution, often through the displacement of ancestral structural and genome replication genes. There are three classical hypotheses on the origins of viruses and how they evolved: Virus-first hypothesis: Viruses evolved from complex molecules of protein and nucleic acid before cells first appeared on earth. By this hypothesis, viruses contributed to the rise of cellular life. This is supported by the idea that all viral genomes encode proteins that do not have cellular homologs. The virus-first hypothesis has been dismissed by some scientists because it violates the definition of viruses, in that they require a host cell to replicate.
Origins:
Reduction hypothesis (degeneracy hypothesis): Viruses were once small cells that parasitized larger cells. This is supported by the discovery of giant viruses with similar genetic material to parasitic bacteria. However, the hypothesis does not explain why even the smallest of cellular parasites do not resemble viruses in any way.
Origins:
Escape hypothesis (vagrancy hypothesis): Some viruses evolved from bits of DNA or RNA that "escaped" from the genes of larger organisms. This does not explain the structures that are unique to viruses and are not seen anywhere in cells. It also does not explain the complex capsids and other structures of virus particles. Virologists are in the process of re-evaluating these hypotheses.
Origins:
Later hypotheses Coevolution hypothesis (Bubble Theory): At the beginning of life, a community of early replicons (pieces of genetic information capable of self-replication) existed in proximity to a food source such as a hot spring or hydrothermal vent. This food source also produced lipid-like molecules self-assembling into vesicles that could enclose replicons. Close to the food source replicons thrived, but further away the only non-diluted resources would be inside vesicles. Therefore, evolutionary pressure could push replicons along two paths of development: merging with a vesicle, giving rise to cells; and entering the vesicle, using its resources, multiplying and leaving for another vesicle, giving rise to viruses.
Origins:
Chimeric-origins hypothesis: Based on the analyses of the evolution of the replicative and structural modules of viruses, a chimeric scenario for the origin of viruses was proposed in 2019. According to this hypothesis, the replication modules of viruses originated from the primordial genetic pool, although the long course of their subsequent evolution involved many displacements by replicative genes from their cellular hosts. By contrast, the genes encoding major structural proteins evolved from functionally diverse host proteins throughout the evolution of the virosphere. This scenario is distinct from each of the three traditional scenarios but combines features of the Virus-first and Escape hypotheses. One of the problems for studying viral origins and evolution is the high rate of viral mutation, particularly the case in RNA retroviruses such as HIV. A recent study based on comparisons of viral protein folding structures, however, is offering some new evidence. Fold Super Families (FSFs) are proteins that show similar folding structures independent of the actual sequence of amino acids, and have been found to show evidence of viral phylogeny. The proteome of a virus, the viral proteome, still contains traces of ancient evolutionary history that can be studied today. The study of protein FSFs suggests the existence of ancient cellular lineages common to both cells and viruses before the appearance of the 'last universal cellular ancestor' that gave rise to modern cells. Evolutionary pressure to reduce genome and particle size may have eventually reduced viro-cells into modern viruses, whereas other coexisting cellular lineages eventually evolved into modern cells. Furthermore, the long genetic distance between RNA and DNA FSFs suggests that the RNA world hypothesis may have new experimental evidence, with a long intermediary period in the evolution of cellular life.
Origins:
Definitive exclusion of a hypothesis on the origin of viruses is difficult to make on Earth given the ubiquitous interactions between viruses and cells, and the lack of availability of rocks that are old enough to reveal traces of the earliest viruses on the planet. From an astrobiological perspective, it has therefore been proposed that on celestial bodies such as Mars not only cells but also traces of former virions or viroids should be actively searched for: possible findings of traces of virions in the apparent absence of cells could provide support for the virus-first hypothesis.
Evolution:
Viruses do not form fossils in the traditional sense, because they are much smaller than the finest colloidal fragments forming sedimentary rocks that fossilize plants and animals. However, the genomes of many organisms contain endogenous viral elements (EVEs). These DNA sequences are the remnants of ancient virus genes and genomes that ancestrally 'invaded' the host germline. For example, the genomes of most vertebrate species contain hundreds to thousands of sequences derived from ancient retroviruses. These sequences are a valuable source of retrospective evidence about the evolutionary history of viruses, and have given birth to the science of paleovirology. The evolutionary history of viruses can to some extent be inferred from analysis of contemporary viral genomes. The mutation rates for many viruses have been measured, and application of a molecular clock allows dates of divergence to be inferred. Viruses evolve through changes in their RNA (or DNA), some quite rapidly, and the best adapted mutants quickly outnumber their less fit counterparts. In this sense their evolution is Darwinian. The way viruses reproduce in their host cells makes them particularly susceptible to the genetic changes that help to drive their evolution. The RNA viruses are especially prone to mutations. In host cells there are mechanisms for correcting mistakes when DNA replicates and these kick in whenever cells divide. These important mechanisms prevent potentially lethal mutations from being passed on to offspring. But these mechanisms do not work for RNA and when an RNA virus replicates in its host cell, changes in their genes are occasionally introduced in error, some of which are lethal. One virus particle can produce millions of progeny viruses in just one cycle of replication, therefore the production of a few "dud" viruses is not a problem. Most mutations are "silent" and do not result in any obvious changes to the progeny viruses, but others confer advantages that increase the fitness of the viruses in the environment. These could be changes to the virus particles that disguise them so they are not identified by the cells of the immune system or changes that make antiviral drugs less effective. Both of these changes occur frequently with HIV.
Evolution:
Many viruses (for example, influenza A virus) can "shuffle" their genes with other viruses when two similar strains infect the same cell. This phenomenon is called genetic shift, and is often the cause of new and more virulent strains appearing. Other viruses change more slowly as mutations in their genes gradually accumulate over time, a process known as antigenic drift. Through these mechanisms new viruses are constantly emerging and present a continuing challenge in attempts to control the diseases they cause. Most species of viruses are now known to have common ancestors, and although the "virus first" hypothesis has yet to gain full acceptance, there is little doubt that the thousands of species of modern viruses have evolved from less numerous ancient ones. The morbilliviruses, for example, are a group of closely related, but distinct viruses that infect a broad range of animals. The group includes measles virus, which infects humans and primates; canine distemper virus, which infects many animals including dogs, cats, bears, weasels and hyaenas; rinderpest, which infected cattle and buffalo; and other viruses of seals, porpoises and dolphins. Although it is not possible to prove which of these rapidly evolving viruses is the earliest, for such a closely related group of viruses to be found in such diverse hosts suggests the possibility that their common ancestor is ancient.
Evolution:
Bacteriophage Escherichia virus T4 (phage T4) is a species of bacteriophage that infects Escherichia coli bacteria. It is a double-stranded DNA virus in the family Myoviridae. Phage T4 is an obligate intracellular parasite that reproduces within the host bacterial cell and its progeny are released when the host is destroyed by lysis. The complete genome sequence of phage T4 encodes about 300 gene products. These virulent viruses are among the largest, most complex viruses that are known and one of the best studied model organisms. They have played a key role in the development of virology and molecular biology. The numbers of reported genetic homologies between phage T4 and bacteria and between phage T4 and eukaryotes are similar suggesting that phage T4 shares ancestry with both bacteria and eukaryotes and has about equal similarity to each. Phage T4 may have diverged in evolution from a common ancestor of bacteria and eukaryotes or from an early evolved member of either lineage. Most of the phage genes showing homology with bacteria and eukaryotes encode enzymes acting in the ubiquitous processes of DNA replication, DNA repair, recombination and nucleotide synthesis. These processes likely evolved very early. The adaptive features of the enzymes catalyzing these early processes may have been maintained in the phage T4, bacterial, and eukaryotic lineages because they were established well-tested solutions to basic functional problems by the time these lineages diverged.
Transmission:
Viruses have been able to continue their infectious existence due to evolution. Their rapid mutation rates and natural selection have given viruses the advantage of being able to continue to spread. One way that viruses have been able to spread is with the evolution of virus transmission. The virus can find a new host through: droplet transmission, passed on through body fluids (e.g. sneezing on someone), as with the influenza virus; airborne transmission, passed on through the air (taken in by breathing), as with viral meningitis; vector transmission, picked up by a carrier and brought to a new host, as with viral encephalitis; waterborne transmission, leaving a host, infecting the water, and being consumed in a new host, as with poliovirus; and sit-and-wait transmission, in which the virus survives outside a host for long periods of time, as with the smallpox virus. Virulence, or the harm that the virus does to its host, depends on various factors. In particular, the method of transmission tends to affect how the level of virulence will change over time. Viruses that transmit through vertical transmission (transmission to the offspring of the host) will evolve to have lower levels of virulence. Viruses that transmit through horizontal transmission (transmission between members of the same species that don't have a parent-child relationship) will usually evolve to have a higher virulence. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mountainboarding**
Mountainboarding:
Mountainboarding, also known as dirtboarding, offroad boarding, and All-Terrain Boarding (ATB), is a well-established but little-known action sport, derived from snowboarding. The sport was initially pioneered by James Stanley during a visit to the Matterhorn in the 1990s, where snow was not available. A mountainboard is made up of components including a deck, bindings (to secure the rider to the deck), four wheels with pneumatic tires, and two steering mechanisms known as trucks. Mountainboarders, also known as riders, ride specifically designed boardercross tracks, slopestyle parks, grass hills, woodlands, gravel tracks, streets, skateparks, ski resorts, BMX courses, and mountain bike trails. It is this ability to ride such a variety of terrain that makes mountainboarding unique from other board sports.
History:
Origins Morton Heilig's 'Supercruiser Inc.' was the first company to manufacture and retail the 'All Terrain Dirtboard', patented in 1989.
Mountainboarding (name coined by Jason Lee) began in the UK, the United States and Australia in 1992. Unknown to each other, riders from other boardsports started to design, build, and eventually manufacture boards that could be ridden off-road. This desire to expand the possible terrain that a boarder can ride created the sport of Mountainboarding.
History:
United Kingdom Dave and Pete Tatham, Joe Inglis and Jim Aveline, whilst looking for an off-season alternative to surfing and snowboarding, began designing boards that could be ridden down hills. Inglis developed initial prototypes, and in 1992 noSno was started. Extensive research and development produced the noSno truck system which enabled the boards to be steered and remain stable at high speeds. NoSno boards utilized snowboard bindings and boots, with large tyres for rough ground, and the option for a hand-operated hydraulic disc brake.
History:
United States In 1992, after having snowboarded at Heavenly Valley Resort in Northern California, friends Jason Lee, Patrick McConnell and Joel Lee went looking for an alternative for the summer season. Not finding anything suitable, they co-founded Mountain Board Sports (MBS) in 1993 to build boards that they could use to carve down hills. The original MBS boards, known as 'Frame Boards', had a small wooden deck, metal posts to hold the rider's feet, and a tubular metal frame connecting trucks which used springs to enable steering and thus create the carving sensation that the MBS co-founders were looking for.
History:
The first recorded mountainboarding act occurred in the summer of 1978, when local skateboarder Mike Motta, residing in Medford, Massachusetts, navigated down a hill known as Seven Bumps in Malden, Massachusetts, on a bet, using a standard Franklin skateboard.
Australia John Milne developed a three-wheeled version of a mountainboard in 1992 in his spare time during periods of very poor surf. It used a unique steering system to emulate surfing on land. It had three wheels and a skate-style deck with no bindings.
History:
Mid-to-late nineties From the early days of invention there has always been a competitive element in mountainboarding. Encompassing racing, freestyle and downhill, competitions have been organized in the USA since 1993 and in the UK since 1997. In the same year the ATBA-UK (All Terrain Boarding Association), the national governing body for mountainboarding in the UK was born. As a non-profit making organization it represented and promoted the sport by putting riders interests first, promoting safety, sanctioning events, providing training, and sourcing funding to put on the ATBA-UK National Series, an annual series of competitions. The competitions did much to promote the sport and in 1998 mountainboarding had an estimated participation of over 1 million athletes worldwide. The components evolved, and the sport continued to grow. MBS developed the open heel binding, the channel truck, the "eggshock" and the reverse V Brake system and sold boards in around 30 countries worldwide. In 1998 Maxtrack started distributing MBS mountainboards in the UK and Europe.
History:
Future Recently, powered mountainboards have been gaining traction in the board enthusiast world. Small gas or electric motors are attached to allow mountainboarding to be done on flat ground or to climb hills rather than just going downhill. Many DIY electric mountainboard builders are constantly developing new drivetrains for their boards, with electric motors rivaling the power of small motorcycles becoming the norm.
Equipment:
Board components Deck Mountainboard decks are the part that most of the components are attached to, and provide the base for the rider to stand on. They are generally from 90–110 cm in length, and can be made from a range of construction methods and materials. For example, high specification boards may be made from composite carbon and glass reinforced plastics, possibly with a wooden core, similarly made to a snowboard deck. Basic decks are generally made using laminated wood pressed into shape, comparable to a longboard deck with larger dimensions and a different shape. There are variable characteristics such as flex, weight, shape, length and tip angle that can be catered for in custom or stock boards from a variety of manufacturers.
Equipment:
Trucks Trucks are the components made up of a hanger, damping and/or spring system, and axles which attach the wheels to the deck. They also have the mechanisms required to allow the board to turn.
Equipment:
Skate trucks Skate trucks have a rigid axle and a top hanger, with a single bolt and bushings, also called rubbers or grommets, that provide the cushion mechanism for turning the mountainboard. The bushings cushion the truck when it turns. The stiffer the bushings, the more resistant the mountainboard is to turning. The softer the bushings, the easier it is to turn. A bolt called a kingpin holds these parts together and fits inside the bushings. Thus by tightening or loosening the kingpin nut, the trucks can be adjusted loosely for better turning and tighter for more control. Skate-style mountainboard trucks are similar to skateboard trucks but more robust and with a longer axle.
Equipment:
Channel trucks Channel trucks are common on mountainboards, and are made up of an axle mounted to the truck bottom piece, which is suspended from a top hanger by a kingpin. They are mounted to the deck using nuts and bolts through the hanger part, on an angle (usually 35°). When the board is tilted laterally the axles turn together to angle the wheels in the direction of the turn. Two polyurethane dampers sometimes known as "egg shocks" are mounted between the hanger and the axle housing on each truck to provide resistance to the lean of the rider during turning. Springs are mounted in the same place with the dampers inside them.
Equipment:
The 'shocks' present in channel trucks are there to dampen the turning system and help reduce the oscillations of the trucks on the board, commonly described as speed wobble. The springs are there to return the deck to centre after a turn has been performed; neither is there to provide suspension between the deck and axles. The trucks have a kingpin that can't move vertically, which prevents this.
Equipment:
Also, the effectiveness of springs as employed in current (2009) channel truck designs is open to debate.
Equipment:
In a "Coil over Oil" shock, the extension of the spring is dampened as well as contraction. In a channel truck design, this is not the case as the damper sits freely inside the spring—therefore only contraction is dampened, not extension. This means that when a spring ceases to be under load and extends, it can extend past the equilibrium point.
Equipment:
NoSno trucks NoSno trucks use two 'kingpin'-type bolts to create a floating pivot, an axle with a plate into which the bolts go, an angled base plate that attaches to the deck, and polyurethane bushings to dampen the turn. The amount of turn available in the trucks can be adjusted by tightening the bolts or by using bushings of different hardness. A similar design was adopted by Howla Mountainboards for the limited time that they manufactured boards.
Equipment:
Bindings Bindings involve adjustable straps that hold the rider on to the board while allowing room to move their feet.
Equipment:
Common types are snowboard bindings, ratchet-strap bindings, Velcro bindings, bar-bindings, and heelstraps. Wheels Wheels are made up of plastic or metal hubs and pneumatic tires ranging in size from 8 to 13 inches. The 8" wheel has evolved into the best choice for freestyle riding, and also an all purpose wheel for general riding. Larger wheels (generally 9" and 10") are more useful to the downhill rider, granting the rider access to high-speed runs and more stability when travelling at speed.
Equipment:
Tyres Various tyres have been made available by different mountainboard manufacturers, giving riders a choice of tire specifications. For example, the thickness of the tyre is variable between tyres, usually either 2 or 4 ply. 2 ply tyres are lighter, but more susceptible to punctures than 4 ply tyres. There is a variety of tread patterns available, ranging from street slicks to deep tread designed for maximum grip with split center beads to channel water away. Width and diameter is also variable.
Equipment:
Brakes Brakes are generally reserved for big mountain riding where riders need an increased ability to control their speed over long runs. The brakes are most usually attached to both front wheels of the mountainboard rather than the rear to give greater braking efficiency and reduce the chances of the rear wheels locking up and skidding. They are operated via a hand-held lever which when pulled causes both brake mechanisms to work simultaneously. There are four types of brakes used on mountainboards: Mechanical drum brakes These brakes use brake drums attached to the wheel with the 5 wheel-screws (Scrub). They are cheap and brake rigidly but get extremely hot and tend to melt the plastic hub. Good emergency brakes only, not any good for long steep hills. There is currently no heat-resistant hub for them to attach to, though one could easily be made of alloy, for example. Hydraulic Disc Brakes Hydraulic disc brakes use rotors attached to the hubs with hydraulically operated brake mechanisms that force ceramic pads against the rotors to effect braking. Advantages include high braking power and reliability. Disadvantages include cost, vulnerability of the discs, heat build up, and weight. Hydraulic Rim Brakes Hydraulic Rim Brakes use the hub, or preferably, a bolt-on metal disc as the braking surface for hydraulically operated brake mechanisms that push polyurethane blocks against the braking surface. Advantages include good braking power, and good modulation. Disadvantages include possible damage to bearings. Cable-pull 'V' Brakes Cable-pull 'V' Brakes also use the hub or metal discs as a braking surface. The hand operated lever pulls a metal cable to push polyurethane blocks against the braking surface. Advantages include low cost, low weight, and easy installation and maintenance. Disadvantages include low braking power, and the need to be regularly adjusted.
Equipment:
Protections Mountainboarders wear a range of protective equipment while riding.
Helmets - are designed to protect the wearer's head from falls and damage to the brain. There are two types; full-face, which provides more protection to the wearer, and open face, which provides greater visibility for the wearer.
Wristguards - are designed to protect the wearer's wrists from impacts. They come in two types, gloves and wrap-arounds, but both include plastic splints which prevent the wearers wrists from bending backwards during a fall and protect the palms against cuts and grazes.
Elbow pads - are designed to protect the wearer's elbows from impact during falls. Sometimes forearm guards are incorporated into the elbow pads.
Knee pads - are designed to protect the wearer's knees from impact during falls.
Padded Shorts - are designed to protect the wearer's hips, coccyx, and buttocks from impact during falls.
Body Armour - is designed to protect the wearer's upper body, arms, shoulders and back from impact during falls.
Disciplines:
Mountainboarding consists of four main disciplines: Downhill (DH) Timed one-man descents. Usually relatively long courses (1 km+) in the mountains. Sometimes referred to as big mountain.
Boardercross (BoarderX, BX) Two to four-man racing on a specifically designed track.
Freestyle (FS) Slopestyle: Performing tricks on a slopestyle course consisting of multiple jumps, rails and innovative features.
Big Air: Performing tricks including grabs, spins and inverts over jumps.
Jibbing: Similar to Slopestyle except with the focus on smaller more technical features such as rails, quarterpipes, drops and smaller kickers.
Freeriding (FR) Non-competitive riding over a range of natural terrain including woodland.
Similar Sports Similar all terrain boardsports include Dirtsurfing and Kiteboarding.
Crossover Sports Skateboarding Streetboarding Surfing Snowboarding Wakeboarding Mountain biking Sandboarding Dirtsurfing Grassboarding Kite landboarding
Media:
The following are some of the numerous appearances mountainboarding has had in various news media outlets and other media, including coverage of the annual Mountainboard US Open in Snowmass and the Twilight Showdown Mountainboard Championships.
Media:
Historical Magazines Off-Road Boarding Magazine was founded in '99 by its editor Brian Bishop and other dedicated riders. It ran numerous pictorials, US riding spots and rider profiles, and carried virtually no ads. It started small, and was given away at comps and shops. The last issue of the mag was printed in full color under a new name, "Mountainboard Magazine". The new title was later adopted by a UK publisher.
Media:
All Terrain Boarding Magazine aka ATBMag: The longest-running (4 years) and the only mountainboard magazine to make it onto mainstream newsagent shelves. Distributed worldwide, it ran to 39 issues and one photo album featuring the work of Paul Taylor. ATBMag was also responsible for the creation of the World Freestyle Championships, running it for the first 2 years. It also created the World Series, taking place in 12 countries. ATBMag sponsored a team of riders, who were later sponsored by EXIT. The team featured Tom Kirkman, Laurie Kaye, Alex Downie, Oli Morrison, Arno Van Den Vejver, Ig Wilkinson, Jack Chew and Tuai Lovejoy. 2005 saw the team take to Europe and ride in 7 countries following the World Series Tour. In 2006 the magazine made its final issue.
Media:
Scuz Mountainboarding Zine was first published in July 2004 as a paid-for magazine; however, subsequent issues were published and distributed for free, both as a printed hardcopy version and on the internet as a downloadable PDF. It was announced in October 2006 that issue twelve would be the final issue.
Mountainboard Magazine was produced by the same people who created Scuz. It was re-branded to suit changing trends in mountainboarding, and a cover charge was introduced to help pay for the costs involved in producing the magazine, as the advertising featured was not sufficient. Only one issue was ever printed.
Mountainboarding Video Magazine (MVM) The only video mag to showcase mountain boarding from around the world. This publication only made nine issues, co-produced and edited by Justin Rhodes, Van DeWitt, and Brett Dooley.
UKATB ran for 6 years between 2000 and 2006 and was the first website to feature in-depth advice and tips from board maintenance to ramp building and trick tips. At its peak the site attracted over 10 000 unique visitors a month.
Movies Johnny Kapahala: Back on Board is the 70th Disney Channel Original Movie and is the sequel to the 1999 film Johnny Tsunami. Its popularity encouraged people to take interest in the extreme sport.
TV Mountainboard Aux Saisies TV coverage of the 2009 noSno World Downhill Championships, from the French TV channel Savoie ACTU.
History Channel. The history of extreme sport on the History Channel. Featuring Mountainboarding and many other board sports.
They Think It's All Over. Pete and Dave Tatham from noSno taking part in "Guess the sportsman" on BBC's sports comedy program "They Think It's All Over". Park City TV: What is Mountainboarding? The Utah DirtStar Army team on Park City TV in late 2005.
Good Morning Utah. The DirtStar Army live on Good Morning Utah 2005.
US Open Mountainboard Championships 2006, held in Snowmass, Colorado. JSP TV talks with the youth division winner and the director of the Dirt Dogs.
Toasted TV. Interview with Munroboards team rider Ryan Slater on the Channel 10 show toasted TV.
Domino's Pizza. "That was Puff" commercial featuring mountainboarders: Ryan Slater, Clint Farqhuar, Markus Lubitz, Adam Zemunic.
Horizon TV. Willingen D-MAX World Series Mountainboard 2007.
Rockon. TV report on WDR on the mountain board park opening in Winterberg.
At Your Leisure: The DirtStar Army. TV report on Utah's DSA mountain board team ripping up the Park City dirt jumps.
Top Gear. TV item showing a staged race between Tom Kirkman and a Mitsubishi Evo rally car and Bowler Wildcat.
Friday Download. Kids TV report on mountain boarding (2012).
Newspapers & Magazines The Guardian. What do snowboarders do when faced with the perennially powderless slopes of the UK? They find the nearest verdant hill and hurtle down it. Tim Moore and son go gung-ho in Surrey.
The Telegraph. Jonny Beardsall loses balance and bottle as he faces a 40 mph slalom on a mountain board.
Men's Health. Fancy traveling at speeds of 60 mph on a board down a mountain? Read on… Chad Harding features in the Stroud News and Journal about his win in the under-14s UK championship freestyle.
Public service/Community (online):
Atboarders. UK-based mountainboarding badasses who reignite the hype. Always Fresh. Always A Ting. The Dirt. US-based mountainboard blog & news site.
Surfing Dirt. International mountainboarders community forum.
Remolition. Free mountainboard webzine with regular features.
MountainboardingUK. Beginner mountainboard-riders website (free advice)
Competitions:
World Freestyle Championships (from 2005 to 2008 named Fat Face Night Air WFC; from 2009 to 2010 named Battle of Bugs)
2004 (Weston Super X Arena, Weston Super Mare, UK) - Leon Robbins, USA
2005 (SWMBC, Bideford, UK) - Tom Kirkman, UK
2006 (SWMBC, Bideford, UK) - Alex Downie, UK
2007 (SWMBC, Bideford, UK) - Arno VDV, Belgium
2008 (Bugs Boarding, Gloucester, UK) - Renny Myles, UK
2009 (Bugs Boarding, Gloucester, UK) - Tom Kirkman, UK
2012 (Luzhniki, Moscow, Russia) - Matt Brind, UK
2017 (Venette, France) - Matt Brind, UK; Natasha Chernikova, RUS
2018 (Kranj, Slovenia) - Matt Brind, UK; Simona Petrò, ITA
2019 (Moszczenica, Poland) - Nicolas Geerse, NLD; Maja Bilik, POL
World Downhill Championships
2009 (Les Saisies, France) - Pete Tatham, UK
2010 (Bardonecchia, Italy) - Pete Tatham, UK
2011 (Bardonecchia, Italy) - Pete Tatham, UK
2012 (Les Saisies, France) - Jonathan Charles, UK
World Boardercross Championships
2013 (Bukovac, Novi Sad, Serbia) - Kody Stewart, USA; Martina Lippolis, ITA
2014 (Bukovac, Novi Sad, Serbia) - Matt Brind, UK; Martina Lippolis, ITA
2015 (Großerlach, Germany) - Matt Brind, UK; Simona Petro, ITA
2016 (Bukovac, Novi Sad, Serbia) - Matt Brind, UK; Sonya Nicolau, ROM
2017 (Venette, France) - Matt Brind, UK; Senka Bajić, SRB
2018 (Kranj, Slovenia) - Kody Stewart, USA; Senka Bajić, SRB
2019 (Bukovac, Novi Sad, Serbia) - Matt Brind, UK; Vanja Rakovic, SRB
Overall World Champions
2017 (Venette, France) - Matt Brind, UK; Ola Tomalczyk, POL
2018 (Kranj, Slovenia) - Matt Brind, UK; Simona Petrò, ITA
2019 (Moszczenica, Poland; Bukovac, Novi Sad, Serbia) - Matt Brind, UK
European Downhill Championships
2009 (Bardonecchia, Italy) - Pete Tatham, UK
2010 (Bardonecchia, Italy) - Jonathan Charles, UK
2014 (Monte Penice, Italy) - Matt Brind, UK
European Mountainboard Tour
2010 - Arno VDV, Belgium
2014 - Matt Brind, UK
European Mountainboard Challenge
2010 (Bukovac, Novi Sad, Serbia) - Marcin Staszczyk, POL; Senka Bajić, SRB
2011 (Bukovac, Novi Sad, Serbia) - Marcin Staszczyk, POL; Senka Bajić, SRB
2012 (Bukovac, Novi Sad, Serbia) - James Wanklyn, UK; Sonya Nicolau, ROM
2015 (Kranj, Slovenia) - Dawid Rzaca, POL; Senka Bajić, SRB
2016 (Kranj, Slovenia) - Matteo Andreassi, ITA; Senka Bajić, SRB
2017 (Kranj, Slovenia) - Nicolas Geerse, NED; Senka Bajić, SRB | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stubble burning**
Stubble burning:
Stubble burning is the practice of intentionally setting fire to the straw stubble that remains after grains, such as rice and wheat, have been harvested. The technique is still widespread today.
Effects:
The burning of stubble has both positive and negative consequences.
Effects:
Generally helpful effects: it is cheaper and easier than other removal methods, helps to combat pests and weeds, and can reduce nitrogen tie-up. Generally harmful effects: loss of nutrients; pollution from smoke, including greenhouse gases and others that damage the ozone layer; damage to electrical and electronic equipment from floating threads of conductive waste; and the risk of fires spreading out of control. As an alternative to stubble burning, agricultural residues can have other uses, such as in particle board and biofuel, though these uses can still cause problems like erosion and nutrient loss.
Effects:
Spraying an enzyme, which decomposes the stubble into useful fertiliser, improves the soil, avoids air pollution and prevents carbon dioxide emissions. Several companies worldwide use leftover agricultural waste to make new products. Agricultural waste can serve as raw materials for new applications, such as paper and board, bio-based oils, leather, catering disposables, fuel and plastic.
Attitudes toward stubble burning:
Stubble burning has been effectively prohibited since 1993 in the United Kingdom. A perceived increase in blackgrass, and particularly herbicide resistant blackgrass, has led to a campaign by some arable farmers for its return.
In Australia stubble burning is "not the preferred option for the majority of farmers" but is permitted and recommended in some circumstances. Farmers are advised to rake and burn windrows, and leave a fire break of 3 metres around any burn off.
In the United States, fires are fairly common in mid-western states, but some states such as Oregon and Idaho regulate the practice.
In the European Union, the Common Agricultural Policy strongly discourages stubble burning.
In China, there is a government ban on stubble burning; however the practice remains fairly common.
In northern India, stubble burning has been practiced since the 1980s and continues despite a ban by the Punjab Pollution Control Board. Authorities are starting to enforce this ban more proactively, and to research alternatives.
Stubble burning is allowed by permit in some Canadian provinces, including Manitoba where 5% of farmers were estimated to do it in 2007.
Attitudes toward stubble burning:
India Stubble burning in Punjab, Haryana, and Uttar Pradesh in north India has been cited as a major cause of air pollution in Delhi since 1980. Consequently, the government is considering implementation of the 1,600 km long and 5 km wide Great Green Wall of Aravalli. From April to May and October to November each year, farmers mainly in Punjab, Haryana, and Uttar Pradesh burn an estimated 35 million tons of crop waste from their wheat and paddy fields after harvesting as a low-cost straw-disposal practice to reduce the turnaround time between harvesting and sowing for the first (summer) crop and the second (winter) crop. Smoke from this burning produces a cloud of particulates visible from space and has produced what has been described as a "toxic cloud" in New Delhi, resulting in declarations of an air-pollution emergency. For this, the NGT (National Green Tribunal) instituted a fine of ₹2 lakh on the Delhi Government for failing to file an action plan providing incentives and infrastructural assistance to farmers to stop them from burning crop residue to prevent air pollution. Although harvesters such as the Indian-manufactured "Happy Seeder", which shred the crop residues into small pieces and spread them uniformly across the field, are available as an alternative to burning stubble, and crops such as millets and maize can be grown as a sustainable alternative to rice and wheat in order to conserve water, some farmers complain that the cost of these machines is a significant financial burden and that the alternative crops are not covered by MSP (minimum support) prices, unlike the crops grown after burning the fields. The Indian Agricultural Research Institute developed an enzyme bio-decomposer solution that can be sprayed after the harvest to increase organic carbon in the soil and maintain overall soil health. In 2021, it began licensing its use to various companies.
Attitudes toward stubble burning:
In May 2022, the Government of Punjab announced that it would purchase maize, bajra, sunflower and moong crops at MSP, encouraging farmers to adopt less water-consuming options as a sustainable alternative to paddy and wheat in the wake of fast-depleting groundwater. A minister claims that stubble burning in Rajasthan, India has now increased by 160%. [1] | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PATRIC**
PATRIC:
PATRIC (Pathosystems Resource Integration Center) is a bacterial bioinformatics website from the Bioinformatics Resource Center. It is an information system integrating databases with various types of data about bacterial pathogens (transcriptomic, proteomic, structural, biochemical) together with analysis tools. It is designed to support the biomedical research community's work on bacterial infectious diseases through this integration of pathogen information.
Description:
PATRIC is a project of Virginia Tech's Cyberinfrastructure Division, and is funded by the National Institute of Allergy and Infectious Diseases (NIAID), a component of the National Institutes of Health (NIH). PATRIC centralizes available bacterial phylogenomic, proteomic and other experimental data linked to specific pathogens from numerous sources. The PATRIC platform provides an interface for comprehensive comparative genomics.
Bacterial Organisms Covered in the PATRIC Database:
Bacillus Bartonella Borrelia Brucella Burkholderia Campylobacter Chlamydophila Clostridium Coxiella Ehrlichia Escherichia Francisella Helicobacter Listeria Mycobacterium Rickettsia Salmonella Shigella Staphylococcus Vibrio Yersinia Other Bacteria
About Cyberinfrastructure Division and VBI:
The CyberInfrastructure Division at VBI develops methods, infrastructure, and resources to help enable scientific discoveries in infectious disease research. The group applies the principles of cyberinfrastructure to integrate data, computational infrastructure, and people. The division has developed public resources for curated, diverse molecular and literature data from various infectious disease systems, and has implemented the processes, systems, and databases required to support them. It also conducts research by applying its methods and data to make new discoveries of its own.
About Cyberinfrastructure Division and VBI:
The Virginia Bioinformatics Institute (VBI) at Virginia Tech has a research platform centered on understanding the "disease triangle" of host-pathogen-environment interactions in plants, humans and other animals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Basal metabolic rate**
Basal metabolic rate:
Basal metabolic rate (BMR) is the rate of energy expenditure per unit time by endothermic animals at rest. It is reported in energy units per unit time ranging from watt (joule/second) to ml O2/min or joule per hour per kg body mass J/(h·kg). Proper measurement requires a strict set of criteria to be met. These criteria include being in a physically and psychologically undisturbed state and being in a thermally neutral environment while in the post-absorptive state (i.e., not actively digesting food). In bradymetabolic animals, such as fish and reptiles, the equivalent term standard metabolic rate (SMR) applies. It follows the same criteria as BMR, but requires the documentation of the temperature at which the metabolic rate was measured. This makes BMR a variant of standard metabolic rate measurement that excludes the temperature data, a practice that has led to problems in defining "standard" rates of metabolism for many mammals.Metabolism comprises the processes that the body needs to function. Basal metabolic rate is the amount of energy per unit of time that a person needs to keep the body functioning at rest. Some of those processes are breathing, blood circulation, controlling body temperature, cell growth, brain and nerve function, and contraction of muscles. Basal metabolic rate affects the rate that a person burns calories and ultimately whether that individual maintains, gains, or loses weight. The basal metabolic rate accounts for about 60 to 75% of the daily calorie expenditure by individuals. It is influenced by several factors. In humans, BMR typically declines by 1–2% per decade after age 20, mostly due to loss of fat-free mass, although the variability between individuals is high.
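Since BMR figures are quoted in several different units above, a minimal sketch of the conversions may help. Only the standard conversion 1 kcal = 4184 J is assumed; the 1,500 kcal/day input and the 70 kg body mass are illustrative values, not constants from the article.

```python
# Minimal sketch: converting a BMR figure between the units mentioned above.

KCAL_TO_J = 4184.0        # joules per kilocalorie (standard conversion)
SECONDS_PER_DAY = 86_400.0

def kcal_per_day_to_watts(kcal_per_day: float) -> float:
    """Convert an energy expenditure in kcal/day to watts (J/s)."""
    return kcal_per_day * KCAL_TO_J / SECONDS_PER_DAY

def watts_to_j_per_hour_per_kg(watts: float, body_mass_kg: float) -> float:
    """Express a rate in watts as joules per hour per kg of body mass, J/(h·kg)."""
    return watts * 3600.0 / body_mass_kg

if __name__ == "__main__":
    bmr_kcal_day = 1500.0                      # illustrative BMR
    w = kcal_per_day_to_watts(bmr_kcal_day)
    print(f"{bmr_kcal_day:.0f} kcal/day ≈ {w:.1f} W")
    print(f"≈ {watts_to_j_per_hour_per_kg(w, 70.0):.0f} J/(h·kg) for a 70 kg person")
```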
Description:
The body's generation of heat is known as thermogenesis and it can be measured to determine the amount of energy expended. BMR generally decreases with age, and with the decrease in lean body mass (as may happen with aging). Increasing muscle mass has the effect of increasing BMR. Aerobic fitness level, a product of cardiovascular exercise, while previously thought to have an effect on BMR, was shown in the 1990s not to correlate with BMR when adjusted for fat-free body mass. Anaerobic exercise, however, does increase resting energy consumption (see "aerobic vs. anaerobic exercise"). Illness, previously consumed food and beverages, environmental temperature, and stress levels can affect one's overall energy expenditure as well as one's BMR.
Description:
BMR is measured under very restrictive circumstances when a person is awake. An accurate BMR measurement requires that the person's sympathetic nervous system not be stimulated, a condition which requires complete rest. A more common measurement, which uses less strict criteria, is resting metabolic rate (RMR). BMR may be measured by gas analysis through either direct or indirect calorimetry, though a rough estimation can be acquired through an equation using age, sex, height, and weight. Studies of energy metabolism using both methods provide convincing evidence for the validity of the respiratory quotient (RQ), which measures the inherent composition and utilization of carbohydrates, fats and proteins as they are converted to energy substrate units that can be used by the body as energy.
Phenotypic flexibility:
BMR is a flexible trait (it can be reversibly adjusted within individuals), with, for example, lower temperatures generally resulting in higher basal metabolic rates for both birds and rodents. There are two models to explain how BMR changes in response to temperature: the variable maximum model (VMM) and variable fraction model (VFM). The VMM states that the summit metabolism (or the maximum metabolic rate in response to the cold) increases during the winter, and that the sustained metabolism (or the metabolic rate that can be indefinitely sustained) remains a constant fraction of the former. The VFM says that the summit metabolism does not change, but that the sustained metabolism is a larger fraction of it. The VMM is supported in mammals, and, when using whole-body rates, passerine birds. The VFM is supported in studies of passerine birds using mass-specific metabolic rates (or metabolic rates per unit of mass). This latter measurement has been criticized by Eric Liknes, Sarah Scott, and David Swanson, who say that mass-specific metabolic rates are inconsistent seasonally. In addition to adjusting to temperature, BMR also may adjust before annual migration cycles. The red knot (ssp. islandica) increases its BMR by about 40% before migrating northward. This is because of the energetic demand of long-distance flights. The increase is likely primarily due to increased mass in organs related to flight. The end destination of migrants affects their BMR: yellow-rumped warblers migrating northward were found to have a 31% higher BMR than those migrating southward. In humans, BMR is directly proportional to a person's lean body mass. In other words, the more lean body mass a person has, the higher their BMR; but BMR is also affected by acute illnesses and increases with conditions like burns, fractures, infections, fevers, etc. In menstruating females, BMR varies to some extent with the phases of their menstrual cycle. Due to the increase in progesterone, BMR rises at the start of the luteal phase and stays at its highest until this phase ends. Research findings differ on how much of an increase usually occurs. Early, small-sample studies found various figures, such as a 6% higher postovulatory sleep metabolism, a 7% to 15% higher 24-hour expenditure following ovulation, and a luteal-phase BMR increase of up to 12%. A study by the American Society of Clinical Nutrition found that an experimental group of female volunteers had an 11.5% average increase in 24-hour energy expenditure in the two weeks following ovulation, with a range of 8% to 16%. This group was measured via simultaneous direct and indirect calorimetry and had standardized daily meals and a sedentary schedule in order to prevent the increase from being manipulated by change in food intake or activity level. A 2011 study conducted by the Mandya Institute of Medical Sciences found no significant difference in BMR between a woman's follicular and menstrual phases; however, the calories burned per hour were significantly higher, by up to 18%, during the luteal phase. Increased state anxiety (stress level) also temporarily increased BMR.
Physiology:
The early work of the scientists J. Arthur Harris and Francis G. Benedict showed that approximate values for BMR could be derived using body surface area (computed from height and weight), age, and sex, along with the oxygen and carbon dioxide measures taken from calorimetry. Studies also showed that by eliminating the sex differences that occur with the accumulation of adipose tissue by expressing metabolic rate per unit of "fat-free" or lean body mass, the values between sexes for basal metabolism are essentially the same. Exercise physiology textbooks have tables to show the conversion of height and body surface area as they relate to weight and basal metabolic values.
Physiology:
The primary organ responsible for regulating metabolism is the hypothalamus. The hypothalamus is located on the diencephalon and forms the floor and part of the lateral walls of the third ventricle of the cerebrum. The chief functions of the hypothalamus include control and integration of activities of the autonomic nervous system (ANS). The ANS regulates contraction of smooth muscle and cardiac muscle, along with secretions of many endocrine organs such as the thyroid gland (associated with many metabolic disorders).
Physiology:
Through the ANS, the hypothalamus is the main regulator of visceral activities, such as heart rate, movement of food through the gastrointestinal tract, and contraction of the urinary bladder.
Physiology:
Other chief functions include the production and regulation of feelings of rage and aggression, regulation of body temperature, and regulation of food intake through two centers: the feeding center or hunger center is responsible for the sensations that cause us to seek food. When sufficient food or substrates have been received and leptin is high, the satiety center is stimulated and sends impulses that inhibit the feeding center. When insufficient food is present in the stomach and ghrelin levels are high, receptors in the hypothalamus initiate the sense of hunger.
Physiology:
The thirst center operates similarly when certain cells in the hypothalamus are stimulated by the rising osmotic pressure of the extracellular fluid. If thirst is satisfied, osmotic pressure decreases. All of these functions taken together form a survival mechanism that causes us to sustain the body processes that BMR measures.
Physiology:
BMR estimation formulas Several equations to predict the number of calories required by humans have been published from the early 20th to the 21st century. In each of the formulas below: P is total heat production at complete rest, m is mass (kg), h is height (cm), and a is age (years). The original Harris–Benedict equation Historically, the most notable formula was the Harris–Benedict equation, which was published in 1919: for men, P = 13.7516 m + 5.0033 h − 6.7550 a + 66.4730 kcal/day; for women, P = 9.5634 m + 1.8496 h − 4.6756 a + 655.0955 kcal/day.
Physiology:
The difference in BMR for men and women is mainly due to differences in body mass. For example, a 55-year-old woman weighing 130 pounds (59 kg) and 66 inches (170 cm) tall would have a BMR of 1,272 kilocalories (5,320 kJ) per day.
The revised Harris–Benedict equation In 1984, the original Harris–Benedict equations were revised using new data. In comparisons with actual expenditure, the revised equations were found to be more accurate: for men, P = 13.397 m + 4.799 h − 5.677 a + 88.362 kcal/day; for women, P = 9.247 m + 3.098 h − 4.330 a + 447.593 kcal/day.
It was the best prediction equation until 1990, when Mifflin et al. introduced the equation: The Mifflin–St Jeor equation P = 10.0 m + 6.25 h − 5.0 a + s kcal/day, where s is +5 for males and −161 for females.
According to this formula, the woman in the example above has a BMR of 1,204 kilocalories (5,040 kJ) per day.
During the last 100 years, lifestyles have changed, and Frankenfield et al. showed the Mifflin–St Jeor equation to be about 5% more accurate.
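As a worked illustration of the equations above, the following sketch evaluates the original Harris–Benedict, revised Harris–Benedict and Mifflin–St Jeor formulas for the woman in the example. The coefficients are taken from the text; 66 inches is treated as roughly 168 cm here so that the quoted figures of about 1,272 and 1,204 kcal/day are reproduced.

```python
# Hedged sketch of the BMR prediction equations quoted above (coefficients from the text).

def harris_benedict_1919(m_kg, h_cm, age, female):
    if female:
        return 9.5634 * m_kg + 1.8496 * h_cm - 4.6756 * age + 655.0955
    return 13.7516 * m_kg + 5.0033 * h_cm - 6.7550 * age + 66.4730

def harris_benedict_revised_1984(m_kg, h_cm, age, female):
    if female:
        return 9.247 * m_kg + 3.098 * h_cm - 4.330 * age + 447.593
    return 13.397 * m_kg + 4.799 * h_cm - 5.677 * age + 88.362

def mifflin_st_jeor_1990(m_kg, h_cm, age, female):
    s = -161 if female else 5
    return 10.0 * m_kg + 6.25 * h_cm - 5.0 * age + s

if __name__ == "__main__":
    m, h, a = 59.0, 168.0, 55          # the article's example: 130 lb, 66 in, 55 years
    print(f"Harris-Benedict (1919): {harris_benedict_1919(m, h, a, True):.0f} kcal/day")        # ~1272
    print(f"Harris-Benedict (1984): {harris_benedict_revised_1984(m, h, a, True):.0f} kcal/day")
    print(f"Mifflin-St Jeor (1990): {mifflin_st_jeor_1990(m, h, a, True):.0f} kcal/day")         # ~1204
```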
These formulas are based on body mass, which does not take into account the difference in metabolic activity between lean body mass and body fat. Other formulas exist which take into account lean body mass, two of which are the Katch–McArdle formula and Cunningham formula.
The Katch–McArdle formula (resting daily energy expenditure)The Katch–McArdle formula is used to predict resting daily energy expenditure (RDEE).
The Cunningham formula is commonly cited to predict RMR instead of BMR; however, the formulas provided by Katch–McArdle and Cunningham are the same.
P = 370 + 21.6 ℓ, where ℓ is the lean body mass (LBM) in kg: ℓ = m (100 − f) / 100, where f is the body fat percentage.
According to this formula, if the woman in the example has a body fat percentage of 30%, her resting daily energy expenditure (the authors use the term of basal and resting metabolism interchangeably) would be 1262 kcal per day.
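A short sketch of the Katch–McArdle calculation as quoted above, reproducing the article's example of a 59 kg woman at 30% body fat (about 1,262 kcal/day).

```python
# Hedged sketch of the Katch-McArdle / Cunningham RDEE formula as quoted above.

def lean_body_mass(m_kg: float, body_fat_pct: float) -> float:
    """Lean body mass in kg from total mass and body fat percentage f."""
    return m_kg * (100.0 - body_fat_pct) / 100.0

def katch_mcardle_rdee(m_kg: float, body_fat_pct: float) -> float:
    """Resting daily energy expenditure in kcal/day."""
    return 370.0 + 21.6 * lean_body_mass(m_kg, body_fat_pct)

if __name__ == "__main__":
    print(f"{katch_mcardle_rdee(59.0, 30.0):.0f} kcal/day")  # 370 + 21.6 * 41.3 ≈ 1262
```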
Physiology:
Research on individual differences in BMR The basal metabolic rate varies between individuals. One study of 150 adults representative of the population in Scotland reported basal metabolic rates from as low as 1,027 kilocalories (4,300 kJ) per day to as high as 2,499 kilocalories (10,460 kJ), with a mean BMR of 1,500 kilocalories (6,300 kJ) per day. Statistically, the researchers calculated that 62% of this variation was explained by differences in fat free mass. Other factors explaining the variation included fat mass (7%), age (2%), and experimental error including within-subject difference (2%). The rest of the variation (27%) was unexplained. This remaining difference was explained neither by sex nor by differing tissue size of highly energetic organs such as the brain. A cross-sectional study of more than 1400 subjects in Europe and the US showed that once adjusted for differences in body composition (lean and fat mass) and age, BMR has fallen over the past 35 years. The decline was also observed in a meta-analysis of more than 150 studies dating back to the early 1920s, translating into a decline in total energy expenditure of about 6%.
Biochemistry:
About 70% of a human's total energy expenditure is due to the basal life processes taking place in the organs of the body (see table). About 20% of one's energy expenditure comes from physical activity and another 10% from thermogenesis, or digestion of food (postprandial thermogenesis). All of these processes require an intake of oxygen along with coenzymes to provide energy for survival (usually from macronutrients like carbohydrates, fats, and proteins) and expel carbon dioxide, due to processing by the Krebs cycle.
Biochemistry:
For the BMR, most of the energy is consumed in maintaining fluid levels in tissues through osmoregulation, and only about one-tenth is consumed for mechanical work, such as digestion, heartbeat, and breathing. What enables the Krebs cycle to perform metabolic changes to fats, carbohydrates, and proteins is energy, which can be defined as the ability or capacity to do work. The breakdown of large molecules into smaller molecules—associated with release of energy—is catabolism. The building up process is termed anabolism. The breakdown of proteins into amino acids is an example of catabolism, while the formation of proteins from amino acids is an anabolic process.
Biochemistry:
Exergonic reactions are energy-releasing reactions and are generally catabolic. Endergonic reactions require energy and include anabolic reactions and the contraction of muscle. Metabolism is the total of all catabolic, exergonic, anabolic, endergonic reactions.
Biochemistry:
Adenosine triphosphate (ATP) is the intermediate molecule that drives the exergonic transfer of energy to switch to endergonic anabolic reactions used in muscle contraction. This is what causes muscles to work which can require a breakdown, and also to build in the rest period, which occurs during the strengthening phase associated with muscular contraction. ATP is composed of adenine, a nitrogen containing base, ribose, a five carbon sugar (collectively called adenosine), and three phosphate groups. ATP is a high energy molecule because it stores large amounts of energy in the chemical bonds of the two terminal phosphate groups. The breaking of these chemical bonds in the Krebs Cycle provides the energy needed for muscular contraction.
Biochemistry:
Glucose Because the ratio of hydrogen to oxygen atoms in all carbohydrates is always the same as that in water—that is, 2 to 1—all of the oxygen consumed by the cells is used to oxidize the carbon in the carbohydrate molecule to form carbon dioxide. Consequently, during the complete oxidation of a glucose molecule, six molecules of carbon dioxide and six molecules of water are produced and six molecules of oxygen are consumed.
Biochemistry:
The overall equation for this reaction is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O (30–32 ATP molecules produced depending on the type of mitochondrial shuttle, 5–5.33 ATP molecules per molecule of oxygen). Because the gas exchange in this reaction is equal, the respiratory quotient (R.Q.) for carbohydrate is unity or 1.0:
R.Q. = 6 CO2 / 6 O2 = 1.0.
Biochemistry:
Fats The chemical composition for fats differs from that of carbohydrates in that fats contain considerably fewer oxygen atoms in proportion to atoms of carbon and hydrogen. When listed on nutritional information tables, fats are generally divided into six categories: total fats, saturated fatty acid, polyunsaturated fatty acid, monounsaturated fatty acid, dietary cholesterol, and trans fatty acid. From a basal metabolic or resting metabolic perspective, more energy is needed to burn a saturated fatty acid than an unsaturated fatty acid. The fatty acid molecule is broken down and categorized based on the number of carbon atoms in its molecular structure. The chemical equation for metabolism of the twelve to sixteen carbon atoms in a saturated fatty acid molecule shows the difference between metabolism of carbohydrates and fatty acids. Palmitic acid is a commonly studied example of the saturated fatty acid molecule.
Biochemistry:
The overall equation for the substrate utilization of palmitic acid is C16H32O2 + 23 O2 → 16 CO2 + 16 H2O (106 ATP molecules produced, 4.61 ATP molecules per molecule of oxygen). Thus the R.Q. for palmitic acid is 0.696:
R.Q. = 16 CO2 / 23 O2 = 0.696.
Biochemistry:
Proteins Proteins are composed of carbon, hydrogen, oxygen, and nitrogen arranged in a variety of ways to form a large combination of amino acids. Unlike fat, the body has no storage deposits of protein. All of it is contained in the body as important parts of tissues, blood hormones, and enzymes. The structural components of the body that contain these amino acids are continually undergoing a process of breakdown and replacement. The respiratory quotient for protein metabolism can be demonstrated by the chemical equation for oxidation of albumin: C72H112N18O22S + 77 O2 → 63 CO2 + 38 H2O + SO3 + 9 CO(NH2)2. The R.Q. for albumin is 0.818: R.Q. = 63 CO2 / 77 O2 = 0.818.
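The respiratory quotients and ATP-per-oxygen figures quoted in this section follow directly from the stoichiometry above; a small arithmetic check, assuming the coefficients given in the text (the albumin ATP yield is not stated, so it is left out):

```python
# Arithmetic check of the respiratory quotients quoted above:
# R.Q. = mol CO2 produced / mol O2 consumed, per molecule of substrate oxidized.
substrates = {
    # name: (mol CO2 produced, mol O2 consumed, ATP per substrate molecule or None)
    "glucose":       (6,  6,  32),   # 30-32 ATP depending on shuttle; 32 used here
    "palmitic acid": (16, 23, 106),
    "albumin":       (63, 77, None), # ATP yield not given in the text
}

for name, (co2, o2, atp) in substrates.items():
    line = f"{name}: R.Q. = {co2}/{o2} = {co2 / o2:.3f}"
    if atp is not None:
        line += f", {atp / o2:.2f} ATP per O2"
    print(line)
```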
Biochemistry:
The reason this is important in the process of understanding protein metabolism is that the body can blend the three macronutrients and based on the mitochondrial density, a preferred ratio can be established which determines how much fuel is utilized in which packets for work accomplished by the muscles. Protein catabolism (breakdown) has been estimated to supply 10% to 15% of the total energy requirement during a two-hour aerobic training session. This process could severely degrade the protein structures needed to maintain survival such as contractile properties of proteins in the heart, cellular mitochondria, myoglobin storage, and metabolic enzymes within muscles.
Biochemistry:
The oxidative system (aerobic) is the primary source of ATP supplied to the body at rest and during low intensity activities and uses primarily carbohydrates and fats as substrates. Protein is not normally metabolized significantly, except during long term starvation and long bouts of exercise (greater than 90 minutes.) At rest approximately 70% of the ATP produced is derived from fats and 30% from carbohydrates. Following the onset of activity, as the intensity of the exercise increases, there is a shift in substrate preference from fats to carbohydrates. During high intensity aerobic exercise, almost 100% of the energy is derived from carbohydrates, if an adequate supply is available.
Biochemistry:
Aerobic vs. anaerobic exercise Studies published in 1992 and 1997 indicate that the level of aerobic fitness of an individual does not have any correlation with the level of resting metabolism. Both studies find that aerobic fitness levels do not improve the predictive power of fat free mass for resting metabolic rate.
Biochemistry:
However, recent research from the Journal of Applied Physiology, published in 2012, compared resistance training and aerobic training on body mass and fat mass in overweight adults (STRRIDE AT/RT). When time commitments are weighed against health benefits, aerobic training is the optimal mode of exercise for reducing fat mass and body mass as a primary consideration; resistance training is good as a secondary factor when aging and lean mass are a concern. Resistance training causes injuries at a much higher rate than aerobic training. Compared to resistance training, aerobic training was found to result in a significantly more pronounced reduction of body weight by enhancing the cardiovascular system, which is the principal factor in the metabolic utilization of fat substrates. Resistance training, if time is available, is also helpful for post-exercise metabolism, but it is an adjunctive factor because the body needs to heal sufficiently between resistance-training episodes, whereas with aerobic training the body can accept this every day. RMR and BMR are measurements of daily calorie consumption. The majority of studies published on this topic look at aerobic exercise because of its efficacy for health and weight management.
Biochemistry:
Anaerobic exercise, such as weight lifting, builds additional muscle mass. Muscle contributes to the fat-free mass of an individual and therefore effective results from anaerobic exercise will increase BMR. However, the actual effect on BMR is controversial and difficult to enumerate. Various studies suggest that the resting metabolic rate of trained muscle is around 55 kJ/kg per day. Even a substantial increase in muscle mass, say 5 kg, would make only a minor impact on BMR.
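A back-of-the-envelope check of that claim, assuming the quoted figure of roughly 55 kJ/kg per day for trained muscle and the standard conversion 1 kcal = 4.184 kJ:

```python
# Rough estimate of how much 5 kg of extra muscle would add to BMR,
# using the ~55 kJ/kg per day figure quoted above.
KJ_PER_KCAL = 4.184

added_muscle_kg = 5.0
extra_kj_per_day = 55.0 * added_muscle_kg
extra_kcal_per_day = extra_kj_per_day / KJ_PER_KCAL

print(f"+{added_muscle_kg:.0f} kg muscle -> +{extra_kj_per_day:.0f} kJ/day "
      f"(~{extra_kcal_per_day:.0f} kcal/day), only a few percent of a typical BMR")
```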
Longevity:
In 1926, Raymond Pearl proposed that longevity varies inversely with basal metabolic rate (the "rate of living hypothesis"). Support for this hypothesis comes from the fact that mammals with larger body size have longer maximum life spans (large animals do have higher total metabolic rates, but the metabolic rate at the cellular level is much lower, and the breathing rate and heartbeat are slower in larger animals) and the fact that the longevity of fruit flies varies inversely with ambient temperature. Additionally, the life span of houseflies can be extended by preventing physical activity. This theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy, across the animal kingdom—including humans. Calorie restriction and reduced thyroid hormone levels, both of which decrease the metabolic rate, have been associated with higher longevity in animals. However, the ratio of total daily energy expenditure to resting metabolic rate can vary between 1.6 and 8.0 between species of mammals. Animals also vary in the degree of coupling between oxidative phosphorylation and ATP production, the amount of saturated fat in mitochondrial membranes, the amount of DNA repair, and many other factors that affect maximum life span. One problem with understanding the associations of lifespan and metabolism is that changes in metabolism are often confounded by other factors that may affect lifespan. For example, under calorie restriction, whole-body metabolic rate goes down with increasing levels of restriction, but body temperature also follows the same pattern. By manipulating the ambient temperature and exposure to wind, it was shown in mice and hamsters that body temperature is a more important modulator of lifespan than metabolic rate.
Longevity:
Organism longevity and basal metabolic rate In allometric scaling, maximum potential life span (MPLS) is directly related to metabolic rate (MR), where MR is the recharge rate of a biomass made up of covalent bonds. That biomass (W) is subjected to deterioration over time from thermodynamic, entropic pressure. Metabolism is essentially understood as redox coupling, and has nothing to do with thermogenesis. Metabolic efficiency (ME) is then expressed as the efficiency of this coupling, a ratio of amperes captured and used by biomass, to the amperes available for that purpose. MR is measured in watts, W is measured in grams. These factors are combined in a power law, an elaboration on Kleiber's law relating MR to W and MPLS, that appears as MR = W^((4ME − 1)/(4ME)). When ME is 100%, MR = W^(3/4); this is popularly known as quarter-power scaling, a version of allometric scaling that is premised upon unrealistic estimates of biological efficiency.
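To make the behaviour of this power law concrete, the following sketch evaluates MR = W^((4·ME − 1)/(4·ME)) for a sub-gram biomass at a few ME values. The specific W and ME values are arbitrary illustrative choices, not taken from the source.

```python
# Illustrative evaluation of the power law quoted above, with W in grams
# and ME as a fraction. At ME = 1.0 the exponent is 3/4 (quarter-power scaling);
# below ME = 0.25 the exponent goes negative, so MR rises for W < 1 g.

def metabolic_rate(w_grams: float, me: float) -> float:
    exponent = (4.0 * me - 1.0) / (4.0 * me)
    return w_grams ** exponent

for me in (1.0, 0.25, 0.20, 0.15):
    exp = (4.0 * me - 1.0) / (4.0 * me)
    print(f"ME = {me:.2f}: exponent = {exp:+.3f}, MR(W = 0.5 g) = {metabolic_rate(0.5, me):.3f}")
```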
Longevity:
The equation reveals that as ME drops below 20%, for W < one gram, MR/MPLS increases so dramatically as to endow W with virtual immortality by 16%. The smaller W is to begin with, the more dramatic is the increase in MR as ME diminishes. All of the cells of an organism fit into this range, i.e., less than one gram, and so this MR will be referred to as BMR.
Longevity:
But the equation reveals that as ME increases over 25%, BMR approaches zero. The equation also shows that for all W > one gram, where W is the organization of all of the BMRs of the organism's structure, but also includes the activity of the structure, as ME increases over 25%, MR/MPLS increases rather than decreases, as it does for BMR. An MR made up of an organization of BMRs will be referred to as an FMR. As ME decreases below 25%, FMR diminishes rather than increases as it does for BMR.
Longevity:
The antagonism between FMR and BMR is what marks the process of aging of biomass W in energetic terms. The ME for the organism is the same as that for the cells, such that the success of the organism's ability to find food (and lower its ME), is key to maintaining the BMR of the cells driven, otherwise, by starvation, to approaching zero; while at the same time a lower ME diminishes the FMR/MPLS of the organism.
Medical considerations:
A person's metabolism varies with their physical condition and activity. Weight training can have a longer impact on metabolism than aerobic training, but there are no known mathematical formulas that can exactly predict the length and duration of a raised metabolism from trophic changes with anabolic neuromuscular training.
Medical considerations:
A decrease in food intake will typically lower the metabolic rate as the body tries to conserve energy. Researcher Gary Foster estimates that a very low calorie diet of fewer than 800 calories a day would reduce the metabolic rate by more than 10 percent. The metabolic rate can be affected by some drugs: antithyroid agents used to treat hyperthyroidism, such as propylthiouracil and methimazole, bring the metabolic rate down to normal and restore euthyroidism. Some research has focused on developing antiobesity drugs to raise the metabolic rate, such as drugs to stimulate thermogenesis in skeletal muscle. The metabolic rate may be elevated in stress, illness, and diabetes. Menopause may also affect metabolism.
Cardiovascular implications:
Heart rate is determined by the medulla oblongata and part of the pons, two organs located inferior to the hypothalamus on the brain stem. Heart rate is important for basal metabolic rate and resting metabolic rate because it drives the blood supply, stimulating the Krebs cycle. During exercise that achieves the anaerobic threshold, it is possible to deliver substrates that are desired for optimal energy utilization. The anaerobic threshold is defined as the energy utilization level of heart rate exertion that occurs without oxygen during a standardized test with a specific protocol for accuracy of measurement, such as the Bruce Treadmill protocol (see metabolic equivalent of task). With four to six weeks of targeted training the body systems can adapt to a higher perfusion of mitochondrial density for increased oxygen availability for the Krebs cycle, or tricarboxylic cycle, or the glycolytic cycle. This in turn leads to a lower resting heart rate, lower blood pressure, and increased resting or basal metabolic rate. By measuring heart rate we can then derive estimations of what level of substrate utilization is actually causing biochemical metabolism in our bodies at rest or in activity. This in turn can help a person to maintain an appropriate level of consumption and utilization by studying a graphical representation of the anaerobic threshold. This can be confirmed by blood tests and gas analysis using either direct or indirect calorimetry to show the effect of substrate utilization. The measures of basal metabolic rate and resting metabolic rate are becoming essential tools for maintaining a healthy body weight. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bromide peroxidase**
Bromide peroxidase:
Bromide peroxidase (EC 1.11.1.18, bromoperoxidase, haloperoxidase (ambiguous), eosinophil peroxidase) is a family of enzymes with systematic name bromide:hydrogen-peroxide oxidoreductase. These enzymes catalyse the following chemical reaction: HBr + H2O2 ⇌ HOBr + H2O. The HOBr is a potent brominating agent. The many organobromine compounds observed in marine environments are the products of reaction with this oxidized form of bromine.
Bromoperoxidases of red and brown marine algae (Rhodophyta and Phaeophyta) contain vanadate (vanadium bromoperoxidase). Otherwise, vanadium is an unusual cofactor in biology. By virtue of this family of enzymes, a variety of brominated natural products have been isolated from marine sources.
Related chloroperoxidase enzymes effect chlorination. In the nomenclature of haloperoxidase, bromoperoxidases classically are unable to oxidize chloride at all. For example, eosinophil peroxidase appears to prefer bromide over chloride, yet is not considered a bromoperoxidase because it is able to use chloride.
Bromide peroxidase:
Muricidae (was Murex) spp. snails have a bromoperoxidase used to produce Tyrian purple dye. The enzyme is very specific to bromide and physically stable, but has not been characterized as to its active site metal. As of 2019, no specific gene has been assigned to such an enzyme in the snail genome. Such an activity is probably provided by symbiotic Bacillus bacteria instead. The identified enzyme belongs to the alpha/beta hydrolase superfamily; a structure for a similar bromoperoxidase is available as PDB: 3FOB. It runs on a catalytic triad of Ser 99, Asp 229 and His 258 and does not require metal cofactors.
Additional reading:
De Boer, E.; Tromp, M.G.M.; Plat, H.; Krenn, G.E.; Wever, R (1986). "Vanadium(v) as an essential element for haloperoxidase activity in marine brown-algae - purification and characterization of a vanadium(V)-containing bromoperoxidase from Laminaria saccharina". Biochim. Biophys. Acta. 872 (1–2): 104–115. doi:10.1016/0167-4838(86)90153-6.
Tromp MG, Olafsson G, Krenn BE, Wever R (September 1990). "Some structural aspects of vanadium bromoperoxidase from Ascophyllum nodosum". Biochimica et Biophysica Acta (BBA) - Protein Structure and Molecular Enzymology. 1040 (2): 192–8. doi:10.1016/0167-4838(90)90075-q. PMID 2400770.
Isupov MN, Dalby AR, Brindley AA, Izumi Y, Tanabe T, Murshudov GN, Littlechild JA (June 2000). "Crystal structure of dodecameric vanadium-dependent bromoperoxidase from the red algae Corallina officinalis". Journal of Molecular Biology. 299 (4): 1035–49. doi:10.1006/jmbi.2000.3806. PMID 10843856.
Carter-Franklin JN, Butler A (November 2004). "Vanadium bromoperoxidase-catalyzed biosynthesis of halogenated marine natural products". Journal of the American Chemical Society. 126 (46): 15060–6. doi:10.1021/ja047925p. PMID 15548002.
Ohshiro T, Littlechild J, Garcia-Rodriguez E, Isupov MN, Iida Y, Kobayashi T, Izumi Y (June 2004). "Modification of halogen specificity of a vanadium-dependent bromoperoxidase". Protein Science. 13 (6): 1566–71. doi:10.1110/ps.03496004. PMC 2279980. PMID 15133166. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Underwater logging**
Underwater logging:
Underwater logging is the process of logging trees from underwater forests. When artificial reservoirs and dams are built, large areas of forest are often inundated; although the trees die, the wood is often preserved. The trees can then be felled using special underwater machinery and floated up to the surface. One such machine is the sawfish harvester. There is an ongoing debate to determine whether or not underwater logging is a sustainable practice and if it is more environmentally sustainable than traditional logging.
Underwater logging:
Underwater logging has been introduced in select locations around the world, including Ghana's Lake Volta, the largest reservoir by surface area in the world.
A related form of logging consists of salvaging logs which loggers have abandoned after they became waterlogged and sank. This activity can be quite profitable, since the prime "targets" are decades-old trees of a size and species difficult or impossible to find in their natural habitat.
History:
Rivers were a main method of transportation in the logging industry in the 19th and early 20th centuries in the United States. In the spring, logs were floated down waterways, especially those surrounding the Great Lakes and Maine, to transport them to mills downriver. Logs with a higher density than the density of water would sink. Other logs would get caught in jams, sloughs, or floods, and become lodged in the riverbed. Such logs were often known as "sinkers" or "deadheads." Loggers attempted to reduce the number of logs which remained in the river in order to maximize profits, but some losses were inevitable. Logs with legible log marks were sometimes returned to their owners. Underwater logs are safe from many of the forces which cause decomposition, including fungi. Log salvage operations began in the early 20th century across the United States. John Cayford and Ronald Scott's book Underwater Logging describes the process and prospects for retrieving sunken wood from American waterways, known as salvage logging. Salvage logging differs from underwater logging. Salvage logging recovers full-sized logs that were lost during past logging expeditions. Underwater logging uses new technology to cut down drowned trees that have been lost due to rising water levels or artificial reservoirs.
Logging methods:
Remote controlled vehicle One method of unearthing sunken trees is by sending a remote controlled vehicle, like a sawfish harvester, underwater to fell the trees. The vehicle is controlled by a cable that sends electricity and control inputs to the vessel, which sends back a video feed for the operator. The operator sends inputs from a control panel on a barge. When a tree is found, the Sawfish attaches and inflates a flotation device to it so that, after the tree is cut, it immediately rises to the surface for extraction from the water.
Logging methods:
Attaching buoys Attaching buoys is one of the main processes by which underwater logs are salvaged from the bottom of lakes and rivers. First, a scuba-diver must locate the sunken logs in the water, searching from about three feet from the bottom of the lake or river. Then, a buoy is placed around the log about three feet from its back. From there, a boat uses a gaff hook to catch the buoys and pulls the log close enough to the boat where the crew is able to tie the logs close to the side of the boat. This process repeats itself until the boat is filled to its capacity, after which the expedition is completed, and crew must return to base before harvesting any additional logs.
Logging methods:
Floating logs In the case of floating logs that have not been drowned but may have been separated from initial logging routes and become stuck on the banks of rivers and lakes, a different process is used. Here, truck inner tubes are completely deflated so that a diver can slip them over the logs. After this occurs and once the tubes are securely in place, a hookah compressor and a low-pressure hose re-inflate them so that they form a tight grip around the floating logs. This process gives the logs more buoyancy and gives loggers easier access points to harvest them. As many tubes as are needed are used to float the logs.
Environmental impacts:
Marine pollution Ships pollute both the marine environment and the atmosphere, and although it is difficult to estimate the magnitude of the problem, there is no doubt that increased usage of such ships will increase pollution. As the underwater logging industry becomes more popular and profitable, this increased usage will occur. The process of underwater logging itself will also have a negative impact on the environment, as the logs themselves add weight to the ships, forcing said ships to work harder and use more time and energy to transport their cargo. In terms of transportation, cargo ships transport the logs across the water. They use an immense amount of ballast water, which can have negative effects on the environment. When the ships reach the mills they empty the water: “Ballast water discharge typically contains a variety of biological materials, including plants, animals, viruses, and bacteria”. Dumping the ballast can change the aquatic ecosystems and even make the water undrinkable.
Environmental impacts:
Accidents Accidents related to this industry usually result in the release of oil and other resources, and these spills are difficult to contain due to the fluidity of lakes and rivers. What this means is that the potential for collateral damage is large, both for marine and human life, because toxic resources such as oil can contaminate surrounding ecosystems. It is necessary, therefore, to exercise caution when partaking in processes, such as underwater logging, that require the use of potentially harmful resources.
Environmental impacts:
Deforestation Because the underwater logging process is essentially retrieving drowned logs and sunken trees that were already lost in previous logging expeditions, the logs are considered “rediscovered wood.” Because underwater logging is retrieving “rediscovered wood,” this has a positive impact on the forestry industry, as it reduces the need to log in land forests. In addition, when logging on land logging companies have to create new roads to get to higher quality wood. Road building is eliminated with underwater logging because the transportation paths across the rivers already exist.
Environmental impacts:
Potential erosion of lakes and rivers As some of these logs have been lost for upwards of a few decades, the local environment has inevitably grown and developed around said logs. Removing these logs, which provide structural support to a variety of these ecosystems, could result in erosion of the lakes and rivers that would change the structure and potentially degrade these bodies of water.
Environmental impacts:
Marine life Some of the logs that are retrieved have been underwater for upwards of a few decades, meaning local marine life will have formed their habitats around these drowned logs. These logs provide substantial structural support for these ecosystems, and removing them would inevitably destroy said natural habitats. Boats and crew members of underwater logging fleets can stir up and degrade the local ecosystems.
Sustainability:
Nature & Faune magazine describes the sustainability impact of underwater logging. The hydroelectric dam in Ghana built at Akosombo submerges forests of timber logs. Clark Sustainable Resource Developments uses SHARC ROV technology to keep the roots of the trees intact so as not to disturb the lake bottom or stir up pollutants. Afterwards, they placed canopies and buttresses to create artificial fish reefs and educated locals about fishing practices. Lastly, they can cut up to 25 meters (82 ft) below the lake's surface, which creates a depth sufficient to support routes for transport vessels on the lake. This process was recognized with an award for being sustainable, avoiding deforestation and creating artificial reefs to maintain the existing aquatic ecosystem. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SampTA**
SampTA:
SampTA (Sampling Theory and Applications) is a biennial interdisciplinary conference for mathematicians, engineers, and applied scientists. The main purpose of SampTA is to exchange recent advances in sampling theory and to explore new trends and directions in the related areas of application. The conference focuses on such fields as signal processing and image processing, coding theory, control theory, real analysis and complex analysis, harmonic analysis, and the theory of differential equations. All of these topics have received a large degree of attention from machine learning researchers, with SampTA serving as bridge between these two communities.
SampTA:
SampTA features plenary talks by prominent speakers, special sessions on selected topics reflecting the current trends in sampling theory and its applications to the engineering sciences, as well as regular sessions about traditional topics in sampling theory. Contributions from authors attending the SampTA conferences are usually published in special issues of Sampling Theory in Signal and Image Processing, an international journal dedicated to sampling theory and its applications. The proceedings of SampTA 2015 were indexed in IEEE Xplore. The SampTA conference series began as a small workshop in 1995 in Riga, Latvia, but the meetings grew into full-fledged conferences attracting an even mix of mathematicians and engineers as the interest in sampling theory and its many applications blossomed. This even mix makes the SampTA meetings unique in the scientific community. The conference organization is headed by an international steering committee consisting of prominent mathematicians and engineers, and a technical committee responsible for the conference program. Due to the COVID-19 pandemic, SampTA paused from 2020–2022, but resumed in summer 2023.
SampTA:
The biennial meetings are announced in various Mathematics and Engineering Calendars, including the Mathematics Calendar of the American Mathematical Society, the Wavelet Digest, the Numerical Harmonic Analysis Group (NuHAG) at the University of Vienna, the Norbert Wiener Center at the University of Maryland, and the IEEE Signal Processing Society.
Past meetings:
New Haven, U.S.A., July 10–14, 2023
Bordeaux, France, July 8–12, 2019
Tallinn, Estonia, July 3–7, 2017
Washington, D.C., U.S., May 25–29, 2015
Bremen, Germany, July 1–5, 2013
Singapore, May 2–6, 2011
Marseille, France, May 18–22, 2009
Thessaloniki, Greece, June 1–5, 2007
Samsun, Turkey, July 10–15, 2005
Strobl, Austria, May 26–30, 2003
Orlando, U.S., May 13–17, 2001
Loen, Norway, August 11–14, 1999
Aveiro, Portugal, July 16–19, 1997
Riga, Latvia, September 20–22, 1995 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mountain formation**
Mountain formation:
Mountain formation refers to the geological processes that underlie the formation of mountains. These processes are associated with large-scale movements of the Earth's crust (tectonic plates). Folding, faulting, volcanic activity, igneous intrusion and metamorphism can all be parts of the orogenic process of mountain building. The formation of mountains is not necessarily related to the geological structures found on them. The understanding of specific landscape features in terms of the underlying tectonic processes is called tectonic geomorphology, and the study of geologically young or ongoing processes is called neotectonics. From the late 18th century until its replacement by plate tectonics in the 1960s, geosyncline theory was used to explain much mountain-building.
Types of mountains:
There are five main types of mountains: volcanic, fold, plateau, fault-block and dome. A more detailed classification useful on a local scale predates plate tectonics and adds to these categories.
Volcanic mountains Movements of tectonic plates create volcanoes along the plate boundaries, which erupt and form mountains. A volcanic arc system is a series of volcanoes that form near a subduction zone where the crust of a sinking oceanic plate melts and drags water down with the subducting crust.
Types of mountains:
Most volcanoes occur in a band encircling the Pacific Ocean (the Pacific Ring of Fire), and in another that extends from the Mediterranean across Asia to join the Pacific band in the Indonesian Archipelago. The most important types of volcanic mountain are composite cones or stratovolcanoes (Vesuvius, Kilimanjaro and Mount Fuji are examples) and shield volcanoes (such as Mauna Loa on Hawaii, a hotspot volcano). A shield volcano has a gently sloping cone due to the low viscosity of the emitted material, primarily basalt. Mauna Loa is the classic example, with a slope of 4°-6°. (The relation between slope and viscosity falls under the topic of angle of repose.) The composite volcano or stratovolcano has a more steeply rising cone (33°-40°), due to the higher viscosity of the emitted material, and eruptions are more violent and less frequent than for shield volcanoes. Other examples besides those already mentioned are Mount Shasta, Mount Hood and Mount Rainier. Vitosha, the domed mountain next to Sofia, the capital of Bulgaria, was also formed by volcanic activity.
Types of mountains:
Fold mountains When plates collide or undergo subduction (that is – ride one over another), the plates tend to buckle and fold, forming mountains. Most of the major continental mountain ranges are associated with thrusting and folding or orogenesis. Examples are the Balkan Mountains, the Jura and the Zagros mountains.
Types of mountains:
Block mountains When a fault block is raised or tilted, block mountains can result. Higher blocks are called horsts and troughs are called grabens. A spreading apart of the surface causes tensional forces. When the tensional forces are strong enough to cause a plate to split apart, it does so such that a center block drops down relative to its flanking blocks.
Types of mountains:
An example of this is the Sierra Nevada Range, where delamination created a block 650 km long and 80 km wide that consists of many individual portions tipped gently west, with east-facing slopes rising abruptly to produce the highest mountain front in the continental United States. Another good example is the Rila–Rhodope mountain massif in Bulgaria, Southeast Europe, including the well-defined horsts of Belasitsa (linear horst), Rila mountain (vaulted domed shaped horst) and Pirin mountain, a horst forming a massive anticline situated between the complex graben valleys of Struma and that of Mesta.
Types of mountains:
Uplifted passive margins Unlike orogenic mountains there is no widely accepted geophysical model that explains elevated passive continental margins such as the Scandinavian Mountains, Eastern Greenland, the Brazilian Highlands or Australia's Great Dividing Range.
Types of mountains:
Different elevated passive continental margins most likely share the same mechanism of uplift. This mechanism is possibly related to far-field stresses in Earth's lithosphere. According to this view, elevated passive margins can be likened to giant anticlinal lithospheric folds, where folding is caused by horizontal compression acting on a thin to thick crust transition zone (as are all passive margins).
Models:
Hotspot volcanoes Hotspots are supplied by a magma source in the Earth's mantle called a mantle plume. Although originally attributed to a melting of subducted oceanic crust, recent evidence belies this connection. The mechanism for plume formation remains a research topic.
Models:
Fault blocks Several movements of the Earth's crust that lead to mountains are associated with faults. These movements actually are amenable to analysis that can predict, for example, the height of a raised block and the width of an intervening rift between blocks using the rheology of the layers and the forces of isostasy. Early bent plate models predicting fractures and fault movements have evolved into today's kinematic and flexural models. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
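To make the isostasy part of that analysis concrete, here is a minimal sketch (not from the source) of the classic Airy isostasy balance in Python; the crust and mantle densities are illustrative assumptions.

```python
# Minimal Airy-isostasy sketch: estimate the crustal "root" needed to
# compensate a mountain block of a given height, assuming simple hydrostatic
# balance. Densities are illustrative assumptions, not values from the article.

RHO_CRUST = 2800.0   # kg/m^3, assumed mean crustal density
RHO_MANTLE = 3300.0  # kg/m^3, assumed mantle density


def airy_root_thickness(height_m: float) -> float:
    """Root thickness (m) that isostatically supports topography of height_m."""
    return height_m * RHO_CRUST / (RHO_MANTLE - RHO_CRUST)


if __name__ == "__main__":
    for h in (1000.0, 3000.0, 5000.0):
        print(f"height {h:5.0f} m -> compensating root ~{airy_root_thickness(h):6.0f} m")
```

The ratio of densities makes the point qualitatively: because crust is only slightly less dense than mantle, a raised block must be supported by a root several times taller than the visible relief.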
**Biblioteca ilustrada de Gaspar y Roig**
Biblioteca ilustrada de Gaspar y Roig:
Biblioteca ilustrada de Gaspar y Roig (Spanish for 'Illustrated Library of Gaspar y Roig') is an editorial collection published by Gaspar y Roig since 1851, in Madrid, Spain, under the direction of Eduardo Chao.
Overview:
The Biblioteca ilustrada de Gaspar y Roig is a collection of cheap illustrated books of medium production quality, with two columns, condensed fonts and narrow margins to reduce costs, and engravings inserted into the text. The folio format is intended for newspaper readers. All the books begin with an engraving illustration in order to encourage reading, be it an allusion to the fine arts, a printing press, a garden with ladies or some reading gentlemen. Noted for its encyclopaedia-like material, the content matter of the collection covers a wide range of subjects, such as reference works (Diccionario Enciclopédico Gaspar y Roig); scientific disciplines (Buffon's Histoire Naturelle); history books (Cesare Cantù's Storia Universale, Juan de Mariana's Historia general de España, William H. Prescott's History of the Conquest of Peru, with a Preliminary View of the Civilization of the Incas); biographies (Washington Irving's A History of the Life and Voyages of Christopher Columbus); a compilation of Costumbrist prints (Los españoles pintados por sí mismos); Romantic literary works by François-René de Chateaubriand; classic works by Miguel de Cervantes and Victor Hugo; novels by Manuel Fernández y González, Torcuato Tárrago y Mateos, Sophie Ristaud Cottin, Jules Verne, Walter Scott, among others; as well as a whole Bible in Spanish (Biblia de Scío). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Keytar**
Keytar:
Keytar (a portmanteau of keyboard and guitar) is a keyboard instrument similar to a synthesizer or MIDI controller that is supported by a strap around the neck and shoulders, similar to the way a guitar is held.
Overview:
Though the term "keytar" has been used since the introduction of the instrument, it was not used by a major manufacturer until 2012, when the Alesis company referred to the "Vortex", the company's first product of this type, as a "USB/MIDI Keytar Controller". CEO and co-founder of Tap Tap Strum, Kyle Zimmerman, later came out with the design for the Keytar L8R. Keytars allow players a greater range of movement onstage, compared to conventional keyboards, which are placed on stationary stands or which are part of heavy, floor-mounted structures. The instrument has a musical keyboard for triggering musical notes and sounds. Various controls are placed on the instrument's "neck", including those for pitch bends, vibrato, portamento, and sustain.
Overview:
Keytars may either contain their own synthesizer engines, or be MIDI controllers. In either case, a keytar needs to be connected to a keyboard amplifier or PA system to produce a sound that the performer and audience can hear. MIDI controller keytars trigger notes and other MIDI data on an external MIDI-capable synthesizer, sound module or computer with synthesizer software. While keytars are usually used to create musical sounds, like any other MIDI controller they can also be used to trigger other devices, such as MIDI-enabled lighting controllers, effects devices and audio consoles.
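As a rough illustration of the MIDI-controller role described above, the sketch below uses the Python mido library (an assumed choice; any MIDI library with a suitable backend would do) to send the kind of note-on, pitch-bend and note-off messages a keytar-style controller emits to an external synthesizer. The port name is hypothetical.

```python
import time

import mido  # assumes the mido package plus a MIDI backend (e.g. python-rtmidi)

PORT_NAME = "External Synth"  # hypothetical; list real ports with mido.get_output_names()

with mido.open_output(PORT_NAME) as port:
    # "Press a key" on the keytar: middle C at moderate velocity.
    port.send(mido.Message("note_on", note=60, velocity=80))
    # Nudge the pitch-bend control on the neck upward.
    port.send(mido.Message("pitchwheel", pitch=2048))
    time.sleep(0.5)
    # Release: recentre the pitch wheel, then send the matching note-off.
    port.send(mido.Message("pitchwheel", pitch=0))
    port.send(mido.Message("note_off", note=60, velocity=0))
```

The same message stream could just as well be routed to a lighting or effects controller, which is the point made in the paragraph above.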
History:
Early history (18th century–1970s) The oldest forerunner of the keytar probably is the orphica, a small portable piano invented in Vienna in 1795, which was played in a similar position as the modern keytar. The piano accordion first appeared in 1852. In 1963, the East German manufacturer Weltmeister introduced the Basset, a keytar-shaped electric bass piano.
History:
In 1966, Swedish organ manufacturer Joh Mustad AB introduced the Tubon, a tubular electric organ. This instrument was worn with a strap around the shoulder and could be played standing or sitting. The Tubon had a half-keyboard on one end accessible to the right hand, controls to be used at the "neck" on the opposite end for the left hand, and a speaker at the end of the tube. It was sold in the UK as the Livingstone. It saw use by Kraftwerk and Paul McCartney in the 1960s and early 1970s. In the early 1970s, Edgar Winter often performed with keyboards slung around his neck, but they were not technically keytars because they had no "neck"; he actually used an ARP 2600 keyboard and a lightweight Univox electronic piano with shoulder straps added.
History:
Keytar as synthesizer/controller (1970s–) The earlier keytars commercially released in the late 1970s and early 1980s include:
Hillwood RockeyBoard RB-1 (synth piano with VCF), released in 1977 with influence from Edgar Winter
PMS Syntar, an early keytar synthesizer released by George Mattson (Performance Music Systems) and exhibited at the 1979 Atlanta NAMM
Davis Clavitar (controller), used by George Duke and Herbie Hancock in early 1980
Powell Probe (controller), designed by Roger Powell
Royalex Probe (controller), which Jan Hammer helped to develop and used in the early 1980s
etc. (for details, see List of keytars)
In the late 1970s and early 1980s, Jan Hammer, the composer best known for his composition and sound design successes for Miami Vice, frequently used several keytars, including the Royalex Probe, which he helped develop. Hammer can be seen using his Probe in the music video for the "Miami Vice Theme". Also in the 1980s, Wayne Famous of the band the Producers strapped on a regular Oberheim synthesizer, which caused him to develop back problems.
History:
Among them, the most widely known earlier keytar may be the "Moog Liberation" released in 1980. Early users included Spyro Gyra keyboardist Tom Schuman. Though Devo is associated with keytars, they never used them except in music videos and promotional ads for the Liberation. The earliest printed use of the word "keytar" was in 1980, when it appeared in an interview with Jeffrey Abbott (Keytarjeff) by Tom Lounges of Illianabeat magazine (now Midwest BEAT Magazine) who now hosts a weekly interview show featuring legends of the music industry on N.W. Indiana's PBR radio station.
History:
Although Steve Masakowski has been incorrectly credited for many years as the inventor of the keytar, in an interview with Peter Hartlaub of the San Francisco Chronicle on December 11, 2009, he only claimed to have invented an instrument called the Key-tar which was a string-based instrument.
History:
The keytar was made popular in the 1980s by glam metal bands, as well as synthpop, new wave and electro musicians. Changing trends in music diminished the keytar's popularity during the 1990s, continuing on until the late 2000s when a major revival was sparked by artists and groups such as The Black Eyed Peas, Flight of The Conchords, Motion City Soundtrack, No Doubt, and Steely Dan. Another instance is in early 2008 with Snoop Dogg's music video for his single "Sensual Seduction", in which he uses a keytar as a throwback to old school bands.
History:
Current state (2010s–) Notable manufacturers of keytar models have included Moog, Roland, Yamaha, Korg and Casio. As of 2013, the Roland AX-Synth, the Roland Lucina, the Alesis Vortex and Rock Band 3 Wireless Pro Keyboard, are the mass-manufactured keytars on the market.
Examples:
1980s–1990s The Moog Liberation was released in 1980 by Moog Music, and was considered the first mass-produced strap synthesizer. It included two monophonic VCOs and a polyphonic section that could play organ sounds. The neck had spring-loaded wheels for filter cutoff, modulation, and volume as well as a ribbon-controlled pitch bend. The Liberation had a single VCF and two ADS envelope generators.
Examples:
The Roland SH-101 is a small, 32-key, monophonic analog synthesizer from the early 1980s. It has one oscillator with two waveforms, an 'octave-divided' sub-oscillator, and a low-pass filter/VCF capable of self oscillation. When a shoulder strap is connected to it, and the small handgrip with a pitch bend wheel and a pitch modulation trigger is used, the SH-101 becomes a keytar.
Examples:
The Yamaha SHS-10, released in 1987, has a small keyboard with 32 minikeys and a pitch-bend wheel, an internal Frequency modulation (usually referred to as FM) synthesizer offering 25 different voices with 6-note polyphony. Onboard voices include a range of keyboard instruments (pipe organ, piano, electric piano, etc.); strings (violin, guitar, double bass, etc.); and wind and brass (clarinet, flute, trumpet, etc.). A larger model, the Yamaha SHS-200, was released the following year, and came with 49 keys and dual stereo speakers.
Examples:
2000s–present The Roland AX-7, which was manufactured from 2001 to 2007, contains many more advanced features than early keytars. It has 45 velocity sensitive keys (without aftertouch), and a 3-character LED display. Several features aimed towards stage performance are present, such as a pitch bend ribbon, touchpad-like expression bar, sustain switch, and volume control knob, all on the upper neck of the instrument. There is also a proprietary "D-Beam" interface, made up of infrared sensors that detect nearby motion. This interface can be used to trigger and control effects.
Examples:
In August 2009, Roland released the Roland AX-Synth, a model of keytar that contains its own synthesizer sounds in addition to being a MIDI/USB controller. In 2010, Roland released the Roland Lucina AX-09. This model does not have a traditional neck, but is still considered a keytar because it is a strap-on model and is in the AX line, with many identical features to its AX predecessors. It is unique in that it includes an additional, front panel USB port to accommodate a USB flash drive, which may contain MP3, WAV or AIFF files for playback. The Lucina has 150 internal sounds and may also be used as a MIDI/USB controller.
Examples:
Also in 2010, Mad Catz released the Wireless Pro Keyboard for Rock Band 3, a 25-key velocity-sensitive MIDI-compatible keytar controller. Despite its sub-$100 price, it is designed for serious use outside of the game. Synthpop band Freezepop have used it on stage. In 2012, Alesis released its first ever keytar and is the first major manufacturer to actually use the term "keytar" in the model name and description. The Alesis Vortex USB/MIDI Keytar Controller is unique in that it includes eight velocity-sensitive drum pads/sample triggers, which enable the performer to create beats or trigger clips, built right into the body of the keytar. It also features an accelerometer, which allows the performer to control MIDI parameters by tilting the neck. Although Alesis claims to have manufactured "the first USB keytar controller", the Roland AX-Synth and the Roland Lucina feature USB connections with the same function and were released three years prior to the Alesis Vortex. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CAMSAP3**
CAMSAP3:
Calmodulin-regulated spectrin-associated protein family member 3 (CAMSAP3) is a human protein encoded by the gene CAMSAP3. The protein is commonly referred to as Nezha.
Function:
CAMSAP3 acts as a minus-end anchor of microtubules, and binds to them through its CKK domain. In epithelial cells, it anchors microtubules to the apical cortex, causing them to grow in an apical-to-basal direction. This gives the epithelial cells their rectangular shape.
In early mouse embryogenesis, the interphase bridge linking sister cells is enriched with CAMSAP3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Key-based routing**
Key-based routing:
Key-based routing (KBR) is a lookup method used in conjunction with distributed hash tables (DHTs) and certain other overlay networks. While DHTs provide a method to find a host responsible for a certain piece of data, KBR provides a method to find the closest host for that data, according to some defined metric. This may not necessarily be defined as physical distance, but rather the number of network hops.
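As a minimal sketch of the idea, the following assumes a Kademlia-style XOR metric (one common choice, not prescribed by the text): given a key and a set of known node IDs, a KBR layer routes toward the node whose ID is closest to the key under that metric.

```python
import hashlib


def node_id(name: str, bits: int = 32) -> int:
    """Derive a numeric ID from a name (illustrative; real DHTs use 128+ bits)."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)


def closest_node(key: int, nodes: dict[str, int]) -> str:
    """Return the node whose ID minimizes the XOR distance to the key."""
    return min(nodes, key=lambda name: nodes[name] ^ key)


if __name__ == "__main__":
    nodes = {name: node_id(name) for name in ("alpha", "beta", "gamma", "delta")}
    key = node_id("some-piece-of-data")
    print("route toward:", closest_node(key, nodes))
```

Real systems such as Kademlia refine this with per-bucket routing tables, so each node needs to know only a logarithmic number of peers while still converging on the closest host for a key.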
Key-based routing networks:
Freenet GNUnet Kademlia Onion routing Garlic routing | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Interferon gamma receptor 2**
Interferon gamma receptor 2:
Interferon gamma receptor 2 also known as IFN-γR2 is a protein which in humans is encoded by the IFNGR2 gene.
Function:
This gene (IFNGR2) encodes the non-ligand-binding beta chain of the gamma interferon receptor. Human interferon-gamma receptor is a multimer of two IFN-γR1 chains (encoded by IFNGR1) and two IFN-γR2 chains.
Clinical significance:
Defects in IFNGR2 are a cause of autosomal recessive mendelian susceptibility to mycobacterial disease (MSMD), also known as familial disseminated atypical mycobacterial infection.
All known mutations in IFNGR2 are collected in the IFNGR2 mutation database. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vanadium(III) bromide**
Vanadium(III) bromide:
Vanadium(III) bromide, also known as vanadium tribromide, describes the inorganic compounds with the formula VBr3 and its hydrates. The anhydrous material is a green-black solid. In terms of its structure, the compound is polymeric with octahedral vanadium(III) surrounded by six bromide ligands.
Vanadium(III) bromide:
VBr3 has been prepared by treatment of vanadium tetrachloride with hydrogen bromide: 2 VCl4 + 8 HBr → 2 VBr3 + 8 HCl + Br2. The reaction proceeds via the unstable vanadium(IV) bromide (VBr4), which releases Br2 near room temperature. Like VCl3, VBr3 forms red-brown soluble complexes with dimethoxyethane and THF, such as mer-VBr3(THF)3. Aqueous solutions prepared from VBr3 contain the cation trans-[VBr2(H2O)4]+. Evaporation of these solutions gives the salt trans-[VBr2(H2O)4]Br·(H2O)2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Coin magic**
Coin magic:
Coin magic is the manipulation of coins to entertain audiences. Because coins are small, most coin tricks are considered close-up magic or table magic, as the audience must be close to the performer to see the effects. Though stage conjurers generally do not use coin effects, coin magic is sometimes performed onstage using large coins. In a different type of performance setting, a close-up coin magician (or 'coin worker') will use a large video projector so the audience can see the magic on a big screen. Coin magic is generally considered harder to master than other close-up techniques such as card magic, as it requires great skill and grace to perform convincingly, and this requires much practice to acquire.
Elements:
Coin effects include productions, vanishes, transformations, transpositions, teleportations, penetrations, restorations, levitations and mental magic—some are combined in a single routine. A simple effect might involve borrowing a coin, making it vanish, concealing the coin, then reproducing it again unexpectedly and returning it to the owner. More complex effects may involve multiple coins, substituting or switching coins and other objects or props can be employed (i.e. handkerchiefs, glasses) as well as the coins. However, the power of most coin magic lies in its simplicity and the solidity of the object; the basic skills of sleight of hand and misdirection often appear most magical without complex equipment. Almost any audience will be amazed by the simplest mystery, such as passing a coin through a table.
Sleights and tricks:
Some classic coin magic effects: Coin vanish - making a coin seemingly vanish.
Coin production - making a coin seemingly appear.
Transposition - making two coins switch places
Some classic coin magic plots:
Chink-a-chink - A bare-handed Matrix.
Coins Across - The magical transfer of multiple coins from one hand to another.
Coin Bite - Taking a bite out of a coin then visually restoring it right in front of the spectator.
Coins Through Table - Coins penetrate through the surface of the table.
Coin to Bottle - A coin is slammed into a sealed bottle.
Coins to Glass - Similar to coins across - coins transfer from one hand to a glass.
Matrix - A teleportation illusion of four coins moving invisibly under the cover of four playing cards.
Miser's Dream - Grabbing multiple coins from thin air. Popularized by Thomas Nelson Downs, who would drop coin after coin into a borrowed top hat.
Spellbound - Visually changing one coin into another, while only showing one coin at all times.
Tenkai Pennies - A two coin routine where one coin travels from one hand to the other.
Three fly - A coins across type effect involving three coins visually transferring from one hand to another.
A sampling of coin sleights and moves:
Palming - A form of concealment.
Sleeving - A form of concealment.
Lapping - A form of ditching a coin.
The French Drop - a retention-of-vision coin vanish involving the apparent passing of a coin from one hand to the other, then making it disappear.
The Muscle Pass - Shooting a coin from one hand to the other; this can be done in such a way that the coin appears to defy gravity.
Coin magicians:
Some magicians widely known for coin magic include:
Thomas Nelson Downs (considered, along with J.B. Bobo, one of the magicians key to the development and teaching of modern coin magic)
J.B. Bobo (author of Modern Coin Magic, a core reference and starting point for coin magicians)
Tony Slydini (a well-known magician whose style of magic transformed close-up magic, including his impressive coin routines)
Dai Vernon
Ed Marlo
David Roth (most important developer of coin magic in the twentieth century and inventor of the standard plots common in current coin magic)
Larry Jennings
Michael Ammar (one of the most prolific publishers and teachers, an experienced all-around magician, including coin work)
Dean Dill (coin magician and inventor who has appeared on television and also works as a barber)
Michael Vincent
Shoot Ogawa (Las Vegas restaurant performer known for highly stylized, high-difficulty, impressive coin magic)
Apollo Robbins (contemporary of Shoot Ogawa and co-contributor to a number of coin teaching materials)
David Stone (talented performer and teacher of fast-paced, flashy coin magic)
Rocco Silano
Jay Sankey
Rich Ferguson (author of Chip Tricks, a magician and mentalist who has authored various magic instructional videos)
Luis Piedrahita
Michael Rubinstein
Mike Gallo
Paul Cummins
Ryan Hayashi
Performance:
Although some coin magic use gimmicks (e.g. modified coins or trick coins), such gimmicks usually do not entirely create the magical effect. Gimmicked coins are made by several major manufacturers, such as Sterling, Johnson, Sasco or Tango Magic. Producing a memorable mystery requires significant skill in presenting the effect and utilizing misdirection to distract the audience from the secret of the gimmick. A performer who relies entirely on special equipment may not impress an audience. Many people are more impressed by an effect which depends (or seems to depend) entirely on skillful manipulation and misdirection than by an effect which appears to depend to some extent on specially made props. A performer who has mastered the basic skills can nonetheless use gimmicks to powerful effect without it being obvious to the audience. Some prefer not to use gimmicks at all, though most well-known coin magicians do use simple coin gimmicks.
In literature:
Canadian novelist Robertson Davies devotes a good part of his Deptford Trilogy to the art of coin magic. All three novels follow in part or wholly the career of a fictitious magician, Magnus Eisengrim, who was abducted as a boy by a traveling circus and learned his craft while concealed in a papier-mâché automaton. The descriptions of coin magic throughout are remarkable for their clarity. The final novel in the series, World of Wonders, details his life and career.
In literature:
In the Neil Gaiman novel American Gods, the main character, Shadow, is experienced with coin magic, and many different tricks and aspects of coin magic are discussed in the book.
In the Dean Koontz novel From the Corner of His Eye, a police officer uses coin magic to interrogate suspects.
In Stephen King's Dark Tower series of novels, the gunslinger Roland Deschain uses the coin walk, albeit substituting a bullet, to induce a hypnotic state in those concentrating on the object's movement across his knuckles.
In literature:
Thieves, wizards, and jesters, in historical and fantasy literature are often depicted as being skilled in sleight of hand, and are often depicted doing standard coin magic. Rolling a coin across the knuckles (coin walking) is a popular image. Silk in David Eddings's Belgariad, and Mat Cauthon and Thom Merrilin in Robert Jordan's Wheel of Time do this frequently. Johnny Depp's whimsical character Jack Sparrow coin walks in the end of Pirates of the Caribbean. Also, Vila Restal in the BBC science fiction television program Blake's 7 mixed his skills as a thief with such tricks. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Loophole**
Loophole:
A loophole is an ambiguity or inadequacy in a system, such as a law or security, which can be used to circumvent or otherwise avoid the purpose, implied or explicitly stated, of the system.
Loophole:
Originally, the word meant an arrowslit, a narrow vertical window in a wall through which an archer (or, later, gunman) could shoot. Loopholes were commonly used in U.S. forts built during the 1800s. Located in the sally port, a loophole was considered a last-ditch defense, where guards could close off the inner and outer doors, trapping enemy soldiers and using small arms fire through the slits. Legal loopholes are distinct from lacunae, although the two terms are often used interchangeably. In a loophole, a law addressing a certain issue exists, but can be legally circumvented due to a technical defect in the law, such as a situation where the details are under-specified. A lacuna, on the other hand, is a situation in which no law exists in the first place to address that particular issue.
Use and remediation:
Loopholes are searched for and used strategically in a variety of circumstances, including elections, politics, taxes, the criminal justice system, or in breaches of security. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Back injury**
Back injury:
Back injuries result from damage, wear, or trauma to the bones, muscles, or other tissues of the back. Common back injuries include sprains and strains, herniated discs, and fractured vertebrae. The lumbar spine is often the site of back pain. The area is susceptible because of its flexibility and the amount of body weight it regularly bears. It is estimated that low-back pain may affect as much as 80 to 90 percent of the general population in the United States.
Classification:
Soft tissue graded system Muscle and soft tissue injuries can be classified using a graded system.
Grade 1 muscle strain is the least severe with damage to few muscle fibers and little if any loss of function.
Grade 2 muscle strain indicates a mild to moderate injury with appreciable tissue damage and some loss of function or strength.
Grade 3 muscle strain is the most severe injury grade with the muscle being either completely torn or experiencing complete loss of function.
Classification:
AO spine injury classification system Spinal column or vertebral injuries can be classified using the AO spine injury classification system. The three categories - A, B, and C - are based on the location of damage on the vertebra (either on the anterior or posterior segment) and by the direction of the applied injurious force. Type A injuries are those associated with a compression force with damage to the vertebral bodies.
Classification:
Type B injuries are those associated with a distraction force resulting in structural damage to the posterior components of the vertebral column.
Type C injuries are those associated with damage to both anterior and posterior aspects of the vertebral column resulting in displacement of the disconnected segments in any direction.This classification system can be used to classify injury to the cervical, thoracolumbar, and sacral regions of the spinal column.
MSU classification for herniated discs Herniated discs can be graded based on the size and location of the herniation as seen on an MRI.
Classification:
Size The size of the herniation is the extent to which it protrudes into the vertebral foramen. The MSU Classification for herniated discs uses the proximity of the disc to the facet joint when measuring the size of a herniated disc. Using the MSU Classification, a grade of 1, 2 or 3 can be used to describe the size of a herniated disc with 1 being the least severe and 3 being the most severe.
Classification:
Location The location of the herniation can also be described using the MSU Classification for herniated discs. This classification describes how far away from mid-line a disc protrusion is using a grade of A, B, or C.
Grade A describes a herniation at midline.
Grade C herniations are the most lateral and protrude into the intervertebral foramen (through which spinal nerves travel).
Grade B herniated discs are those located between grade A and C, using the facet joint as the landmark for the lateral border. MSU Classification is primarily used for classifying herniated discs in the lumbar spine.
Causes:
Many back injuries share similar causes. Strains and sprains to the back muscles can be caused by improper movements while lifting heavy loads, overuse of a muscle, sudden forceful movements, or direct trauma. Herniated discs are associated with age-related degeneration, trauma such as a fall or car accident, and bending or twisting while lifting heavy weights. Common causes of vertebral fractures include trauma from a direct blow, a compression force resulting in improper or excessive axial loading, and hyper-flexion or hyper-extension. Vertebral fractures in children or elderly individuals can be related to the development or health of their spine. The most common vertebral fracture in children is spondylolysis, which can progress to spondylolisthesis. The immature skeleton contains growth plates which have not yet completely ossified into stronger mature bone. Vertebral fractures in elderly individuals are exacerbated by weakening of the skeleton associated with osteoporosis.
Diagnosis:
Diagnosis of a back injury begins with a physical examination and thorough medical history by health-care personnel. Some injuries, such as sprains and strains or herniated discs, can be diagnosed in this manner. To confirm these diagnoses, or to rule out other injuries or pathology, imaging of the injured region can be ordered. X-rays are often used to visualize pathology of bones and can be ordered when a vertebral fracture is suspected. CT scans produce higher resolution images when compared to x-rays and can be used to view more subtle fractures which may otherwise go undetected on x-ray. MRI is commonly referred to as the gold standard for visualizing soft tissue and can be used to assist with diagnosing many back injuries, including herniated discs and neurological disorders, bleeding, and edema.
Prevention:
Suggestions for preventing various back injuries primarily address the causes of those injuries. The risk for back sprains and strains may be reduced with lifestyle choices, including smoking cessation, limiting alcohol, maintaining a healthy weight, and keeping bones and muscles strong with adequate exercise and a healthy diet. The risk for disc herniations can be reduced by using proper techniques when lifting heavy loads, smoking cessation, and weight loss to reduce the load placed on the spine. Vertebral fractures may be difficult to prevent since common causes are related to accidents or age-related degeneration associated with osteoporosis. Treating osteoporosis with pharmacotherapy, enrolling in a fall prevention program, strengthening muscles and bones with a weight-bearing exercise program, and adopting a nutritional program that promotes bone health are all options to reduce the risk of vertebral fractures associated with osteoporosis.
Treatment:
Treatment for back injuries depends on the diagnosis, level of pain, and whether there is loss of function or quality of life.
Conservative Cold therapy reduces inflammation, edema, pain, and muscle spasms associated with acute back injury.
Heat therapy is used to reduce pain and alleviate sore and stiff muscles. Heat therapy is proposed to work by facilitating delivery of nutrients and oxygen to the site of injury to accommodate healing.
Treatment:
Medication: Non-steroidal anti-inflammatory drugs (NSAIDs) or acetaminophen can be taken to reduce mild to moderate pain associated with back injuries. NSAIDs are suggested to be more effective for persistent pain than for acute pain. If pain remains intolerable while taking over the counter medications, a stronger pain medication such as a narcotic or a muscle relaxant can be prescribed at a physician's discretion.
Treatment:
Therapy and alternative medicine: an active approach to recovery is recommended over bed rest for most cases of back injury. Activity promotes strength and functional rehabilitation and counters atrophy associated with disuse. Physical therapy can help reduce pain and regain strength and function. The gentle movement of yoga and tai chi are suggested to improve function and to counter the negative psychosocial effects that can be secondary to injury. Spinal manipulation, massage, and acupuncture have been used to treat the pain associated with various back injuries, but there is little consensus on their degree of effectiveness.
Treatment:
Injections: Spinal nerve blocks and epidural injections are options available to alleviate pain and neurological symptoms. Injections of anesthetics alleviate pain while steroid injections can be used to reduce the inflammation and swelling surrounding spinal nerves.
Non-Conservative Surgery is considered when symptoms persist after attempting conservative treatment. It is estimated 10-20 percent of individuals with low back pain fail to improve with conservative measures.
A discectomy is a common procedure used to alleviate the radiating pain and neurological symptoms associated with a herniated disc. There are multiple variations of a discectomy with differing approaches to access the herniated disc, but the goal of the procedure is to remove the portion of the intervertebral disc that is protruding into the vertebral foramen.
A total disc replacement can also be performed to address a herniated disc. Rather than removing only the portion of the disc that has prolapsed as in a discectomy, this procedure involves removing the entire vertebral disc and replacing it with an artificial one.
Surgical remedies for vertebral fractures are found to be more effective than conservative treatment. Vertebroplasty and kyphoplasty are considered minimally invasive surgical procedures and are proposed to relieve pain and restore function of fractured vertebrae.
Epidemiology:
The two age groups with the highest rate of vertebral column injuries are ages 15–29 and 65 and older.
An estimated 50 percent of spinal injuries are attributed to motor vehicle accidents.
Although the majority of vertebral fractures go undiagnosed, the annual cost related to treatment of vertebral fractures is estimated to be $1 billion in the U.S.
Symptomatic disc herniations are most common between ages 30–50 years. 95 percent of herniated discs diagnosed in patients 25–55 years are located in the lumbar spine.
By age 15 an estimated 26-50 percent of children have experienced acute or chronic back pain. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quadruple reed**
Quadruple reed:
A quadruple reed is a type of reed by means of which the sound is originated in various wind instruments. The term "quadruple reed" comes from the fact that there are four pieces of dried palm leaf vibrating against each other, in pairs. A quadruple reed, such as the Thai pinai, operates in a similar way as the double reed and produces a timbre similar to the oboe. The Arabic pii chawaa is "sometimes described as having a double reed, though this is actually folded yet again, creating four layers of reed and thus requiring considerable lung power to play".Presumably a quadruple reed is folded twice, in opposite directions, instead of once (\/\ or \/\/ instead of \/ shaped), or either folded twice in the same direction or wrapped around (◎ instead of ○ shaped). Both options could result in what may be considered a reed of quadruple thickness. A reed may be folded into the center at 1/4 and 3/4 the length, and then this may be folded in half, with the center being outwards and the four sides being enclosed, making a single reed of quadruple thickness.
Instruments which use quadruple reeds:
Hne (Myanmar)
Pi (Thailand)
Pui' Pui' (Makassar, Indonesia)
Sawnay (Mindanao, Philippines)
Sarunay (Sulu archipelago, Philippines)
Serunai (Malaysia)
Shehnai (India)
Shawm (Asia): "Each side of the [Thai and Cambodian] shawm's double reed is made from two layers of a smoked palm-leaf, making it in fact a quadruple reed. (Nepalese, Burmese, and Malaysian shawms have similar quadruple reeds.)"
Sralai (Cambodia)
Sri Lankan oboe
Serune Kalee (Aceh, Indonesia) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CAT(k) space**
CAT(k) space:
In mathematics, a CAT (k) space, where k is a real number, is a specific type of metric space. Intuitively, triangles in a CAT (k) space are "slimmer" than corresponding "model triangles" in a standard space of constant curvature k . In a CAT (k) space, the curvature is bounded from above by k . A notable special case is k=0 ; complete CAT (0) spaces are known as "Hadamard spaces" after the French mathematician Jacques Hadamard.
CAT(k) space:
Originally, Aleksandrov called these spaces “Rk domains”.
The terminology CAT (k) was coined by Mikhail Gromov in 1987 and is an acronym for Élie Cartan, Aleksandr Danilovich Aleksandrov and Victor Andreevich Toponogov (although Toponogov never explored curvature bounded above in publications).
Definitions:
For a real number k, let Mk denote the unique complete simply connected surface (real 2-dimensional Riemannian manifold) with constant curvature k. Denote by Dk the diameter of Mk, which is ∞ if k ≤ 0 and π/√k if k > 0.
Let (X,d) be a geodesic metric space, i.e. a metric space for which every two points x, y ∈ X can be joined by a geodesic segment, an arc-length parametrized continuous curve γ : [a,b] → X, γ(a) = x, γ(b) = y, whose length
sup { Σᵢ₌₁ʳ d(γ(tᵢ₋₁), γ(tᵢ)) | a = t₀ < t₁ < ⋯ < tᵣ = b, r ∈ ℕ }
is precisely d(x,y).
Let Δ be a triangle in X with geodesic segments as its sides. Δ is said to satisfy the CAT(k) inequality if there is a comparison triangle Δ′ in the model space Mk, with sides of the same length as the sides of Δ, such that distances between points on Δ are less than or equal to the distances between corresponding points on Δ′.
The geodesic metric space (X,d) is said to be a CAT(k) space if every geodesic triangle Δ in X with perimeter less than 2Dk satisfies the CAT(k) inequality.
A (not-necessarily-geodesic) metric space (X,d) is said to be a space with curvature ≤ k if every point of X has a geodesically convex CAT(k) neighbourhood. A space with curvature ≤ 0 may be said to have non-positive curvature.
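For reference, the comparison condition can be restated compactly in LaTeX (a standard formulation consistent with the prose above; the comparison-point notation is introduced here for clarity):

```latex
% CAT(k) comparison inequality for a geodesic triangle \Delta \subset X with
% comparison triangle \bar{\Delta} \subset M_k of the same side lengths.
\[
  d_X(x, y) \;\le\; d_{M_k}(\bar{x}, \bar{y})
  \qquad \text{for all } x, y \in \Delta,
\]
% where \bar{x}, \bar{y} are the comparison points of x, y: the points on the
% corresponding sides of \bar{\Delta} at the same distances from the vertices.
```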
Examples:
Any CAT (k) space (X,d) is also a CAT (ℓ) space for all ℓ>k . In fact, the converse holds: if (X,d) is a CAT (ℓ) space for all ℓ>k , then it is a CAT (k) space.
The n -dimensional Euclidean space En with its usual metric is a CAT (0) space. More generally, any real inner product space (not necessarily complete) is a CAT (0) space; conversely, if a real normed vector space is a CAT (k) space for some real k , then it is an inner product space.
The n -dimensional hyperbolic space Hn with its usual metric is a CAT (−1) space, and hence a CAT (0) space as well.
The n -dimensional unit sphere Sn is a CAT (1) space.
Examples:
More generally, the standard space Mk is a CAT(k) space. So, for example, regardless of dimension, the sphere of radius r (and constant curvature 1/r²) is a CAT(1/r²) space. Note that the diameter of the sphere is πr (as measured on the surface of the sphere) not 2r (as measured by going through the centre of the sphere).
Examples:
The punctured plane Π = E² ∖ {0} is not a CAT(0) space since it is not geodesically convex (for example, the points (0,1) and (0,−1) cannot be joined by a geodesic in Π with arc length 2), but every point of Π does have a CAT(0) geodesically convex neighbourhood, so Π is a space of curvature ≤ 0.
The closed subspace X of E³ given by X = E³ ∖ {(x,y,z) : x > 0, y > 0 and z > 0}, equipped with the induced length metric, is not a CAT(k) space for any k.
Any product of CAT(0) spaces is CAT(0). (This does not hold for negative arguments.)
Hadamard spaces:
As a special case, a complete CAT(0) space is also known as a Hadamard space; this is by analogy with the situation for Hadamard manifolds. A Hadamard space is contractible (it has the homotopy type of a single point) and, between any two points of a Hadamard space, there is a unique geodesic segment connecting them (in fact, both properties also hold for general, possibly incomplete, CAT(0) spaces). Most importantly, distance functions in Hadamard spaces are convex: if σ1,σ2 are two geodesics in X defined on the same interval of time I, then the function I→R given by t↦d(σ1(t),σ2(t)) is convex in t.
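In symbols, for geodesics σ1, σ2 : [0,1] → X parametrized proportionally to arc length, convexity of the distance function can be written as follows (a standard formulation, not quoted from the source):

```latex
% Convexity of the metric in a CAT(0) (in particular Hadamard) space:
\[
  d\bigl(\sigma_1(t), \sigma_2(t)\bigr)
  \;\le\;
  (1 - t)\, d\bigl(\sigma_1(0), \sigma_2(0)\bigr) + t\, d\bigl(\sigma_1(1), \sigma_2(1)\bigr),
  \qquad t \in [0, 1].
\]
```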
Properties of CAT(k) spaces:
Let (X,d) be a CAT (k) space. Then the following properties hold: Given any two points x,y∈X (with d(x,y)<Dk if k>0 ), there is a unique geodesic segment that joins x to y ; moreover, this segment varies continuously as a function of its endpoints.
Every local geodesic in X with length at most Dk is a geodesic.
The d -balls in X of radius less than Dk/2 are (geodesically) convex.
The d -balls in X of radius less than Dk are contractible.
Properties of CAT(k) spaces:
Approximate midpoints are close to midpoints in the following sense: for every λ < Dk and every ε > 0 there exists a δ = δ(k, λ, ε) > 0 such that, if m is the midpoint of a geodesic segment from x to y with d(x,y) ≤ λ and m′ satisfies d(x,m′) ≤ d(x,y)/2 + δ and d(y,m′) ≤ d(x,y)/2 + δ, then d(m,m′) < ε.
It follows from these properties that, for k ≤ 0, the universal cover of every CAT(k) space is contractible; in particular, the higher homotopy groups of such a space are trivial. As the example of the n-sphere Sⁿ shows, there is, in general, no hope for a CAT(k) space to be contractible if k > 0.
Surfaces of non-positive curvature:
In a region where the curvature of the surface satisfies K ≤ 0, geodesic triangles satisfy the CAT(0) inequalities of comparison geometry, studied by Cartan, Alexandrov and Toponogov, and considered later from a different point of view by Bruhat and Tits; thanks to the vision of Gromov, this characterisation of non-positive curvature in terms of the underlying metric space has had a profound impact on modern geometry and in particular geometric group theory. Many results known for smooth surfaces and their geodesics, such as Birkhoff's method of constructing geodesics by his curve-shortening process or von Mangoldt and Hadamard's theorem that a simply connected surface of non-positive curvature is homeomorphic to the plane, are equally valid in this more general setting.
Surfaces of non-positive curvature:
Alexandrov's comparison inequality The simplest form of the comparison inequality, first proved for surfaces by Alexandrov around 1940, states that The distance between a vertex of a geodesic triangle and the midpoint of the opposite side is always less than the corresponding distance in the comparison triangle in the plane with the same side-lengths.
The inequality follows from the fact that if c(t) describes a geodesic parametrized by arclength and a is a fixed point, then
f(t) = d(a,c(t))² − t²
is a convex function, i.e. f″(t) ≥ 0.
Taking geodesic polar coordinates with origin at a so that ‖c(t)‖ = r(t), convexity is equivalent to
r·r″ + (r′)² ≥ 1.
Changing to normal coordinates u, v at c(t), this inequality becomes
u² + H⁻¹Hᵣv² ≥ 1,
where (u,v) corresponds to the unit vector ċ(t). This follows from the inequality Hᵣ ≥ H, a consequence of the non-negativity of the derivative of the Wronskian of H and r from Sturm–Liouville theory. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NOV (gene)**
NOV (gene):
NOV (nephroblastoma overexpressed) also known as CCN3 is a matricellular protein that in humans is encoded by the NOV gene.
CCN family:
NOV is a member of the CCN family of secreted, extracellular matrix (ECM)-associated signaling proteins (see also CCN intercellular signaling protein). The CCN acronym is derived from the first three members of the family being identified, namely CYR61 (cysteine-rich angiogenic inducer 61, or CCN1), CTGF (connective tissue growth factor, or CCN2), and NOV. These proteins, together with WISP1 (CCN4), WISP2 (CCN5), and WISP3 (CCN6) comprise the six-member CCN family in vertebrates and have been renamed CCN1-6 in the order of their discovery by international consensus.
Structure:
The human NOV protein contains 357 amino acids with an N-terminal secretory signal peptide followed by four structurally distinct domains with homologies to insulin-like growth factor binding protein (IGFBP), von Willebrand type C repeats (vWC), thrombospondin type 1 repeat (TSR), and a cysteine knot motif within the C-terminal (CT) domain.
Function:
NOV regulates multiple cellular activities including cell adhesion, migration, proliferation, differentiation, and survival. It functions by direct binding to integrin receptors, as well as other receptors such as NOTCH1 and fibulin 1c (FBLN1).
Function:
NOV is expressed during wound healing and induces angiogenesis in vivo. It is essential for self-renewal of CD34+ hematopoietic stem cells from umbilical cord blood. Nov is regulated by the hematopoietic transcription factor MZF-1. NOV can bind BMP2 and inhibit its functions in promoting osteogenic differentiation, and stimulate osteoclastogenesis through a process that may involve calcium flux. Overexpression of Nov in transgenic mice in osteoblasts antagonizes both BMP and Wnt signaling and results in osteopenia. In February 2017, it was reported that the NOV protein was involved in regulatory T cell-mediated oligodendrocyte differentiation in the regeneration of myelin following damage to the myelin sheath. This finding revealed a new function for regulatory T cells that is distinct from their role in immunomodulation. NOV (CCN3) has recently been implicated in mood disorders, notably in the postpartum period; these effects may be mediated by its effects on myelination.
Role in embryo development:
In contrast to the lethality of Cyr61 (CCN1) and Ctgf (CCN2) genetic knockout in mice, Nov-null mice are viable and largely normal, exhibiting only modest and transient sexually dimorphic skeletal abnormalities. However, Nov-null mice show enhanced blood vessel neointimal thickening when challenged with vascular injury, indicating that NOV inhibits neointimal hyperplasia.
Role in cancer:
Although NOV inhibits the proliferation of cancer cells, it appears to promote metastasis. Nov overexpression results in reduced tumor size in glioma cell xenografts, but enhances metastatic potential in xenotransplanted melanoma cells. NOV expression is associated with a higher risk of metastasis and worse prognosis in patients with cancers such as Ewing's sarcoma, melanoma, and breast cancer. In chronic myeloid leukemia (CML), NOV is downregulated as a consequence of the kinase activity of BCR-ABL, a chimeric protein generated through the chromosomal translocation between chromosomes 9 and 22. Forced expression of NOV inhibits proliferation and restores growth control in CML cells, suggesting that NOV may be an alternate target for novel therapeutics against CML. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Insulated shipping container**
Insulated shipping container:
Insulated shipping containers are a type of packaging used to ship temperature sensitive products such as foods, pharmaceuticals, organs, blood, biologic materials, vaccines and chemicals. They are used as part of a cold chain to help maintain product freshness and efficacy. The term can also refer to insulated intermodal containers or insulated swap bodies.
Construction:
A variety of constructions have been developed. An insulated shipping container might be constructed of:
a vacuum flask, similar to a "thermos" bottle
fabricated thermal blankets or liners
molded expanded polystyrene foam (EPS, styrofoam), similar to a cooler
other molded foams such as polyurethane, polyethylene
sheets of foamed plastics
Vacuum Insulated Panels (VIPs)
reflective materials (metallised film)
bubble wrap or other gas filled panels
other packaging materials and structures
Some are designed for single use while others are returnable for reuse. Some insulated containers are decommissioned refrigeration units. Some empty containers are sent to the shipper disassembled or “knocked down”, assembled and used, then knocked down again for easier return shipment.
Construction:
Shipping containers are available for maintaining cryogenic temperatures, with the use of liquid nitrogen. Some carriers have these as a specialized service.
Use:
Insulated shipping containers are part of a comprehensive cold chain which controls and documents the temperature of a product through its entire distribution cycle. The containers may be used with a refrigerant or coolant such as:
block or cube ice, slurry ice
dry ice
gel or ice packs (often formulated for specific temperature ranges)
phase change materials (PCMs)
Some products (such as frozen meat) have sufficient thermal mass to contribute to the temperature control, and no excess coolant is required. A digital temperature data logger or a time temperature indicator is often enclosed to monitor the temperature inside the container for its entire shipment.
Use:
Labels and appropriate documentation (internal and external) are usually required.
Personnel throughout the cold chain need to be aware of the special handling and documentation required for some controlled shipments. With some regulated products, complete documentation is required.
Design and evaluation:
The use of “off the shelf” insulated shipping containers does not necessarily guarantee proper performance. Several factors need to be considered:
the sensitivity of the product to temperatures (high and low) and to time at temperatures
the specific distribution system being used: the expected (and worst case) times and temperatures
regulatory requirements
the specific combination of packaging components and materials being used
In specifying an insulated shipping container, the two primary characteristics of the material are its thermal conductivity or R-value, and its thickness. These two attributes will help determine the resistance to heat transfer from the ambient environment into the payload space. The coolant material load temperature, quantity, latent heat, and sensible heat will help determine the amount of heat the parcel can absorb while maintaining the desired control temperature. Combining the attributes of the insulator and coolant allows analysis of the expected duration of the insulated shipping container system. Testing of multi-component systems is needed.
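As a rough back-of-the-envelope sketch of that analysis (all numbers are illustrative assumptions, not product data): steady-state heat ingress through the walls can be estimated as ΔT·A/R, and the hold time as the coolant's total latent heat divided by that ingress rate.

```python
# Back-of-the-envelope estimate of how long a passive insulated shipper can hold
# temperature. All inputs are illustrative assumptions, not real product data.

def heat_ingress_w(delta_t_c: float, area_m2: float, r_si: float) -> float:
    """Steady-state heat flow (W) through the insulation: Q = dT * A / R_SI."""
    return delta_t_c * area_m2 / r_si


def hold_time_hours(coolant_kg: float, latent_heat_j_per_kg: float, ingress_w: float) -> float:
    """Hours until the phase-change coolant is exhausted by the incoming heat."""
    return coolant_kg * latent_heat_j_per_kg / ingress_w / 3600.0


if __name__ == "__main__":
    ingress = heat_ingress_w(delta_t_c=25.0,   # e.g. 30 C ambient vs 5 C payload
                             area_m2=0.5,      # assumed total wall area of a small parcel shipper
                             r_si=1.5)         # assumed wall R-value in SI units (m^2*K/W)
    hours = hold_time_hours(coolant_kg=2.0,
                            latent_heat_j_per_kg=334_000.0,  # latent heat of fusion of water ice
                            ingress_w=ingress)
    print(f"heat ingress ~{ingress:.1f} W, estimated hold time ~{hours:.0f} h")
```

Such a calculation only bounds the expected duration; as the paragraph notes, laboratory and field testing of the full multi-component system is still needed.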
Design and evaluation:
It is wise (and sometimes mandatory) to have formal verification of the performance of the insulated shipping container. Laboratory package testing might include ASTM D3103-07, Standard Test Method for Thermal Insulation Performance of Packages, ISTA Guide 5B: Focused Simulation Guide for Thermal Performance Testing of Temperature Controlled Transport Packaging, and others. In addition, validation of field performance (performance qualification) is extremely useful.
Design and evaluation:
Specialists in design and testing of packaging for temperature sensitive products are often needed. These may be consultants, independent laboratories, universities, or reputable vendors. Many laboratories have certifications and accreditations: ISO 9000s, ISO/IEC 17025, etc.
Environmental Impact:
Parcel to pallet sized insulated shipping containers have historically been single-use products due to the low-cost material composition of EPS and water-based gel packs. The insulation material typically finds its way into landfill streams as it is not readily recyclable in the United States.
The development of reusable high-performance shipping containers have been shown to reduce packing waste by 95% while also contributing significant savings to other environmental pollutants.
External links and resources:
"Cold Chain Management", 2003, 2006, [1] Brody, A. L., and Marsh, K, S., "Encyclopedia of Packaging Technology", John Wiley & Sons, 1997, ISBN 0-471-06397-5 Lockhart, H., and Paine, F.A., "Packaging of Pharmaceuticals and Healthcare Products", 2006, Blackie, ISBN 0-7514-0167-6 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**VAMP4**
VAMP4:
Vesicle-associated membrane protein 4 is a protein that in humans is encoded by the VAMP4 gene.
Function:
Synaptobrevins/VAMPs, syntaxins, and the 25-kD synaptosomal-associated protein SNAP25 are the main components of a protein complex involved in the docking and/or fusion of synaptic vesicles with the presynaptic membrane. The protein encoded by this gene is a member of the vesicle-associated membrane protein (VAMP)/synaptobrevin family. This protein may play a role in trans-Golgi network-to-endosome transport.
Interactions:
VAMP4 has been shown to interact with AP1M1, STX6 and STX16. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Psionics (role-playing games)**
Psionics (role-playing games):
Psionics, in tabletop role-playing games, is a broad category of fantastic abilities originating from the mind, similar to the psychic abilities that some people claim in reality.
Common features:
Psionics are primarily distinguished, in most popular gaming systems, by one or more of the following: Magical or super/meta human-like abilities including: Extrasensory perception – learn secrets long forgotten, to glimpse the immediate future and predict the far future, to find hidden objects, and to know what is normally unknowable. Examples: Clairsentience, scrying, precognitives, retrocognitives, transduction, remote viewing, psychometry, omniscience, intuitiveness, aura reading, dowsing.
Common features:
Manifest manipulation – Powers that create objects, creatures, or some form of matter. Examples: Metacreativity, automatic writing, transmutation, apportation, multiplication, heighten senses, mediumship, energy healing.
Common features:
Intellect manipulation – Exclusive or near-exclusive association with highly advanced Intelligence quotients, disciplined, and/or willful beings well over the "superhuman intuitive genius" level; can choreograph entire wars with ease, comprehend and alter any science instantaneously, decipher any language, analyze and copy any fighting style, construct complex devices and inventions, compute mathematics at a superhuman level and have an eidetic memory. Amplify brain waves of others to enhance their intelligence and thinking speed; they can also use their power to boost others' power by advancing the power portion of their brain. Examples: Metapsionics, intellpsionic, synapsionic, psionic mimicry, psi-absorption, psi-augmentation, psi-bestowal, psi-negation, psi-sensing
Physics manipulation – Manipulate energy or tap the power of the mind to produce a desired end; mind over matter, projecting pure force via the mind; other (non-force) "energy"-based abilities. Examples: Psychokinesis, hyperkinesis, pyrokinesis, electrokinesis, cryokinesis, hydrokinesis etc., soulites
Molecular manipulation – Change the physical properties of some creature, thing, or condition. Examples: Psychometabolism, intangibility, healing/regeneration, environmental resistance, shape-shifting/replication, transvection, imperviousness, invisibility, elasticity
Space/Time manipulation – Move an object or another creature through space and time; by manipulation of the flow of time, psionics have the power to shift three-dimensional energies into virtually any environment they can conceive (such as a world inside a mirror); they can use portals to transport themselves or others. They can also open gates to pocket dimensions, and even alternate and warp realities. They can also go back in time and alter the timeline. Examples: Psychoportation, astral projection, time travel, bilocation, multi-dimensional walking, etherialization, stasis suspension
Thought manipulation – Communicate with others – especially other psionics – mentally. Powerful psionics can completely alter a person's personality. Not only are they able to pick up on an emotion; if the emotion is strong enough, the psionic himself is able to feel that emotion. A frequent example is pain. Examples: Telepathics, extreme tele-empathy, tele-cybers, pain synthesizers, psionic blast, psychic weapons, mental attacks/defenses, mind control, and projected illusions
A lack of arcane rituals, gestures, components, and other typical features of magic.
Systems:
The following role-playing game systems present psionics, each in their own way. Often a system will present both magic and psionics. In these cases, psionics is usually defined in terms of its differences from and interactions with the magic system rather than on any specific capabilities. The following are some of the more prominent examples; there are also other variations and systems in use among games.
Systems:
Bureau 13 The Bureau 13 system, produced in the 1980s and 1990s, involved humans hunting down supernatural creatures. Psychic characters were one of the character options, determined by an optional roll. This is one of the few systems that does not attempt to make psionics just a form of 'mind magic', i.e. one that doesn't simply reuse magic rules in a psionic context. Powers for magic and psionics are completely separate.
Systems:
Champions/Hero System The Hero System implements a wide variety of mechanical abilities, many of which are compatible with (and often used to build) psionic characters (often referred to as "mentalists" in Champions).
Systems:
Dawning Star The Dawning Star science-fiction setting introduces a modern take on the concept called Red Truth. This is a parallel dimension of pure information that overlays our own. The system itself uses the basic d20 Modern format, modified to comport with the concept. For example, information manipulation is much more viable than matter manipulation, and accessing the dimension can ultimately drive practitioners insane. Red Truth was first introduced in Helios Rising.
Systems:
Dungeons & Dragons Dungeons & Dragons introduced psionics as an option as far back as the Eldritch Wizardry supplement for the original Dungeons & Dragons in the mid 1970s. Psionics in D&D are designed to be on-par with magic, and so cover nearly every mechanical ability that the magic system does, organized into categories (disciplines) reminiscent of the Wizard's schools. The first edition of Advanced Dungeons & Dragons subdivided these disciplines into lesser powers called "devotions" and greater powers called "sciences". It also had separate classifications for psionic "attack" and "defense" powers/modes that were a sort of telepathic means of combat between psionically endowed beings.
Systems:
An early discussion of psionics in AD&D is given in Dragon magazine issue 78, which is devoted to psionics, and the relation with magic within AD&D is discussed in Spells can be psionic, too: How and why magic resembles mental powers. The distinction it draws is that psionics are the exercise of "mental energy" (an internal source), while the power that "drives" magic (from magic users and clerics) are instead magical art or divinity (an external source), though these latter may involve minds and some use of mental power.
Systems:
In most campaign settings, psionics are a secondary system, less prominent than magic. This is reversed in the Dark Sun setting, which features psionics prominently and magic secondarily, and treats magic (here called "arcane magic") unconventionally by AD&D standards.
The d20 System, being a de-branded version of the Dungeons & Dragons rules, shares these mechanics for psionics in nearly every detail.
GURPS In GURPS 3rd edition there is a broad range of psionic abilities, vaguely game-balanced with its magic system. In the case of GURPS, categories of ability are “powers”, purchased and refined by the player during character creation.
Systems:
In GURPS 4th edition psi abilities are bought like all other Advantages, with a 10% discount to reflect the fact that they can be neutralized by anti-psi powers and technologies. The reason for this change was game balance: 3rd edition psis (and mages) were highly versatile at low point levels and became rapidly more powerful as point budgets increased.
Systems:
In Nomine Satanis/Magna Veritas In the In Nomine Satanis/Magna Veritas French roleplaying game, psionic powers (here called psi) are wielded by a few humans. These psis were first described in the Mindstorm supplement. The first psi were Adam and Eve, who were, in this game, not the first human beings, but instead mere humans infused with powers by God. God used them as the pawns of a small game with Satan, to see if humans untainted by society and the harsh life of Earth would succumb to evil. As told in the Bible, Eve and Adam eventually were tempted by Satan, and were thrown down to Earth. The modern psi are their surviving scions. Despite these powers, the psis are usually considered as weaker and much more fragile than the main protagonists of the game, angels and demons.
Systems:
Lusternia, Age of Ascension A Mage archetype may select Psionics as its tertiary skillset, choosing from Dreamweaving, Runes or Psionics. Mages can specialize from the Psionics skill in either Telepathy or Telekinesis, each granting its own unique abilities. Monks can choose between Psionics and Acrobatics as well, and have the ability to specialize in Psychometabolism, a form of Psionics that affects the physical body.
Systems:
Palladium Megaverse Several of the games published by Palladium Books, most notably Beyond the Supernatural, feature psychic characters. The psychic powers in this universe are powered by Inner Strength Points (or ISP). Beyond the Supernatural (both 1st and 2nd editions) focuses almost exclusively on various forms of psychics, each with differing abilities. The games Heroes Unlimited, Palladium Fantasy Role-Playing Game and Rifts also make extensive use of these rules. The basic psionics system does not vary much between each product.
Systems:
Paranoia, Gamma World, et al.
In some games (e.g. Paranoia and Gamma World), widespread, radiation-induced genetic mutation is the sole trigger responsible for psionic powers in player characters.
Systems:
Space Opera The roleplaying game Space Opera treated psionics as an advanced science with many fields of study, three levels of functioning (Psionically dead, Psionically open, Psionically Awakened) and a vast number of skills. Characters that were open and had been psionically attacked, or had contact with a raw PK Crystal, could awaken, and characters with very high Psionic scores might be "contacted" and trained.
Systems:
Star Trek, Star Wars, et al.
Many role-playing games based on popular science fiction settings have at least telepathic powers available to players. Examples include the Psi Corps and other telepathic characters from Babylon 5, Vulcans from Star Trek, and the Jedi from Star Wars, all of whom have demonstrated various degrees of psionic abilities ranging from telepathy to telekinesis to mental domination.
Systems:
Traveller Traveller includes the mastery of psionics as a career option in the character creation stage. Naturally developing psionic powers is unlikely (the player must roll a seven on the Events table, followed by a twelve, followed by a one); if a player achieves this, they have access to a number of powers that they may develop during the character creation phase.
Systems:
Torg In the Torg roleplaying game, psionics are only available at character creation to characters from the cosms of Core Earth (modern-day Earth) or the Star Sphere (the space opera cosm). Characters from other cosms 'can' learn psionic skills and powers during play, though when such characters use (or even possess) them it counts as a Contradiction.
White Wolf In White Wolf Publishing's World of Darkness, Mages sometimes work magic through a paradigm of psionic power. In addition, more ordinary humans in the setting sometimes possess psychic abilities, and these powers and others like them are often referred to as Numina.
In the Trinity Universe, the psions of the Æon Trinity are created from ordinary humans to battle against the return of the mutated Aberrants.
PSI World Psi World is a game from the 1980s put out by Fantasy Games Unlimited that focused on psionic powers. The player characters were either psi-cops on the hunt for psychics, or they were psychics on the run. Being psychic was illegal in this dystopia. Psionics were the result of a plague that nearly wiped out humans.
Silver Cord Another game that focuses on psionic powers.
The World of Synnibarr A game by Raven c.s. McCracken.
Systems:
Science-fiction themed RPGs in general Psionics is sometimes used as a setting-compatible replacement for magic in role-playing games with science-fiction settings, particularly in the form of optional additional rules, such as in Star Frontiers. This is also true, to some extent, of settings, such as Star Trek and Star Wars, taken from films, television series or literature, though often (as in the two examples given) psionics are already present in some form in the setting.
**Hardy's inequality**
Hardy's inequality:
Hardy's inequality is an inequality in mathematics, named after G. H. Hardy. It states that if $a_1, a_2, a_3, \dots$ is a sequence of non-negative real numbers, then for every real number $p > 1$ one has
$$\sum_{n=1}^{\infty}\left(\frac{a_1 + a_2 + \cdots + a_n}{n}\right)^{p} \le \left(\frac{p}{p-1}\right)^{p}\sum_{n=1}^{\infty} a_n^{p}.$$
If the right-hand side is finite, equality holds if and only if $a_n = 0$ for all $n$.
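As a quick numerical sanity check (not part of the original statement), the truncated form of the discrete inequality can be compared directly for a sample sequence; the choice of sequence, the exponent p = 2, and the truncation at N = 10000 below are illustrative assumptions.

```python
import numpy as np

def hardy_sides(a, p):
    """Return (LHS, RHS) of the truncated discrete Hardy inequality
    sum_{n<=N} ((a_1+...+a_n)/n)^p  <=  (p/(p-1))^p * sum_{n<=N} a_n^p."""
    a = np.asarray(a, dtype=float)
    n = np.arange(1, len(a) + 1)
    cesaro = np.cumsum(a) / n              # Cesàro means (a_1 + ... + a_n)/n
    lhs = np.sum(cesaro ** p)
    rhs = (p / (p - 1)) ** p * np.sum(a ** p)
    return lhs, rhs

# Illustrative example: a_n = 1/n for n = 1..10000, with p = 2.
a = 1.0 / np.arange(1, 10001)
lhs, rhs = hardy_sides(a, p=2)
print(lhs <= rhs, lhs, rhs)   # expected: True, with LHS strictly below RHS
```

For this particular sequence the left-hand side stays well below the bound $(p/(p-1))^p \sum a_n^p$, consistent with the strict inequality when the $a_n$ are not all zero.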
An integral version of Hardy's inequality states the following: if $f$ is a measurable function with non-negative values, then
$$\int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p} dx \le \left(\frac{p}{p-1}\right)^{p}\int_0^{\infty} f(x)^{p}\,dx.$$
If the right-hand side is finite, equality holds if and only if $f(x) = 0$ almost everywhere.
Hardy's inequality was first published and proved (at least the discrete version with a worse constant) in 1920 in a note by Hardy. The original formulation was in an integral form slightly different from the above.
General one-dimensional version:
The general weighted one-dimensional version reads as follows (§329): If $\alpha + \tfrac{1}{p} < 1$, then
$$\int_0^{\infty}\left(y^{\alpha-1}\int_0^{y} x^{-\alpha} f(x)\,dx\right)^{p} dy \le \frac{1}{\left(1-\alpha-\tfrac{1}{p}\right)^{p}}\int_0^{\infty} f(x)^{p}\,dx.$$
If $\alpha + \tfrac{1}{p} > 1$, then
$$\int_0^{\infty}\left(y^{\alpha-1}\int_y^{\infty} x^{-\alpha} f(x)\,dx\right)^{p} dy \le \frac{1}{\left(\alpha+\tfrac{1}{p}-1\right)^{p}}\int_0^{\infty} f(x)^{p}\,dx.$$
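As a consistency check (added here for illustration, not part of the cited statement), setting $\alpha = 0$ in the first case recovers the unweighted integral inequality above, since $\bigl(1-\tfrac{1}{p}\bigr)^{-p} = \bigl(\tfrac{p}{p-1}\bigr)^{p}$:

```latex
% Setting \alpha = 0 (so that \alpha + 1/p = 1/p < 1) in the first weighted inequality:
\int_0^\infty \left( y^{-1} \int_0^y f(x)\,dx \right)^p dy
  \;\le\; \frac{1}{\left(1 - \tfrac{1}{p}\right)^{p}} \int_0^\infty f(x)^p\,dx
  \;=\; \left(\frac{p}{p-1}\right)^{p} \int_0^\infty f(x)^p\,dx .
```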
Multidimensional version:
In the multidimensional case, Hardy's inequality can be extended to $L^{p}$-spaces, taking the form
$$\left\|\frac{f}{|x|}\right\|_{L^{p}(\mathbb{R}^{n})} \le \frac{p}{n-p}\,\|\nabla f\|_{L^{p}(\mathbb{R}^{n})}, \qquad 2 \le n,\ 1 \le p < n,$$
where $f \in C_0^{\infty}(\mathbb{R}^{n})$, and where the constant $\tfrac{p}{n-p}$ is known to be sharp.
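For a concrete instance (a standard special case, added here for illustration), taking $n = 3$ and $p = 2$ gives the constant $\tfrac{p}{n-p} = 2$, i.e. the classical Hardy inequality familiar from spectral theory:

```latex
% n = 3, p = 2: the sharp constant p/(n-p) equals 2, so squaring gives 4.
\int_{\mathbb{R}^3} \frac{|f(x)|^2}{|x|^2}\,dx
  \;\le\; 4 \int_{\mathbb{R}^3} |\nabla f(x)|^2\,dx ,
  \qquad f \in C_0^\infty(\mathbb{R}^3).
```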
Fractional Hardy inequality:
If $1 \le p < \infty$ and $0 < \lambda < \infty$, $\lambda \ne 1$, there exists a constant $C$ such that for every $f : (0,\infty) \to \mathbb{R}$ satisfying $\int_0^{\infty} |f(x)|^{p}/x^{\lambda}\,dx < \infty$, one has (Lemma 2)
$$\int_0^{\infty}\frac{|f(x)|^{p}}{x^{\lambda}}\,dx \le C \int_0^{\infty}\!\!\int_0^{\infty}\frac{|f(x)-f(y)|^{p}}{|x-y|^{1+\lambda}}\,dx\,dy.$$
Proof of the inequality:
Integral version: A change of variables gives
$$\left(\int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p} dx\right)^{1/p} = \left(\int_0^{\infty}\left(\int_0^{1} f(sx)\,ds\right)^{p} dx\right)^{1/p},$$
which is less than or equal to
$$\int_0^{1}\left(\int_0^{\infty} f(sx)^{p}\,dx\right)^{1/p} ds$$
by Minkowski's integral inequality. Finally, by another change of variables, the last expression equals
$$\int_0^{1}\left(\int_0^{\infty} f(x)^{p}\,dx\right)^{1/p} s^{-1/p}\,ds = \frac{p}{p-1}\left(\int_0^{\infty} f(x)^{p}\,dx\right)^{1/p}.$$
Discrete version (from the continuous version): Assuming the right-hand side to be finite, we must have $a_n \to 0$ as $n \to \infty$. Hence, for any positive integer $j$, there are only finitely many terms bigger than $2^{-j}$. This allows us to construct a decreasing sequence $b_1 \ge b_2 \ge \cdots$ containing the same positive terms as the original sequence (but possibly no zero terms). Since $a_1 + a_2 + \cdots + a_n \le b_1 + b_2 + \cdots + b_n$ for every $n$, it suffices to show the inequality for the new sequence. This follows directly from the integral form, defining $f(x) = b_n$ if $n-1 < x < n$ and $f(x) = 0$ otherwise. Indeed, one has
$$\int_0^{\infty} f(x)^{p}\,dx = \sum_{n=1}^{\infty} b_n^{p}$$
and, for $n-1 < x < n$, there holds
$$\frac{1}{x}\int_0^{x} f(t)\,dt = \frac{b_1 + \cdots + b_{n-1} + (x-n+1)b_n}{x} \ge \frac{b_1 + \cdots + b_n}{n}$$
(the last inequality is equivalent to $(n-x)(b_1 + \cdots + b_{n-1}) \ge (n-1)(n-x)b_n$, which is true as the new sequence is decreasing) and thus
$$\sum_{n=1}^{\infty}\left(\frac{b_1 + \cdots + b_n}{n}\right)^{p} \le \int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p} dx.$$
Discrete version (direct proof): Let $p > 1$ and let $b_1, \dots, b_N$ be positive real numbers. Set $S_k = \sum_{i=1}^{k} b_i$. First we prove the inequality
$$\sum_{n=1}^{N}\frac{S_n^{p}}{n^{p}} \le \frac{p}{p-1}\sum_{n=1}^{N}\frac{b_n S_n^{p-1}}{n^{p-1}}. \qquad (*)$$
Let $T_n = \tfrac{S_n}{n}$ and let $\Delta_n$ be the difference between the $n$-th terms on the left- and right-hand sides of $(*)$, that is, $\Delta_n := T_n^{p} - \tfrac{p}{p-1} b_n T_n^{p-1}$. We have
$$\Delta_n = T_n^{p} - \frac{p}{p-1} b_n T_n^{p-1} = T_n^{p} - \frac{p}{p-1}\bigl(n T_n - (n-1) T_{n-1}\bigr) T_n^{p-1}$$
or
$$\Delta_n = T_n^{p}\left(1 - \frac{np}{p-1}\right) + \frac{p(n-1)}{p-1}\, T_{n-1} T_n^{p-1}.$$
Proof of the inequality:
According to Young's inequality we have
$$T_{n-1} T_n^{p-1} \le \frac{T_{n-1}^{p}}{p} + \frac{(p-1)\,T_n^{p}}{p},$$
from which it follows that
$$\Delta_n \le \frac{n-1}{p-1}\, T_{n-1}^{p} - \frac{n}{p-1}\, T_n^{p}.$$
By telescoping we have
$$\sum_{n=1}^{N}\Delta_n \le \left(0 - \frac{1}{p-1}T_1^{p}\right) + \left(\frac{1}{p-1}T_1^{p} - \frac{2}{p-1}T_2^{p}\right) + \cdots + \left(\frac{N-1}{p-1}T_{N-1}^{p} - \frac{N}{p-1}T_N^{p}\right) = -\frac{N}{p-1}T_N^{p} < 0,$$
proving $(*)$. By applying Hölder's inequality to the right-hand side of $(*)$ we have
$$\sum_{n=1}^{N}\frac{S_n^{p}}{n^{p}} \le \frac{p}{p-1}\sum_{n=1}^{N}\frac{b_n S_n^{p-1}}{n^{p-1}} \le \frac{p}{p-1}\left(\sum_{n=1}^{N} b_n^{p}\right)^{1/p}\left(\sum_{n=1}^{N}\frac{S_n^{p}}{n^{p}}\right)^{(p-1)/p},$$
from which we immediately obtain
$$\sum_{n=1}^{N}\frac{S_n^{p}}{n^{p}} \le \left(\frac{p}{p-1}\right)^{p}\sum_{n=1}^{N} b_n^{p}.$$
Letting $N \to \infty$ we obtain Hardy's inequality.
**Carbonate-hosted lead-zinc ore deposits**
Carbonate-hosted lead-zinc ore deposits:
Carbonate-hosted lead-zinc ore deposits are important and highly valuable concentrations of lead and zinc sulfide ores hosted within carbonate (limestone, marl, dolomite) formations and which share a common genetic origin.
Carbonate-hosted lead-zinc ore deposits:
These ore bodies range from 0.5 million tonnes of contained ore to 20 million tonnes or more, and have grades of 4% to over 14% combined lead and zinc. These ore bodies tend to be compact, fairly uniform plug-like or pipe-like replacements of their host carbonate sequences and as such can be extremely profitable mines.
Carbonate-hosted lead-zinc ore deposits:
This classification of ore deposits is also known as Mississippi Valley Type or MVT ore deposits, after a number of such deposits along the Mississippi River in the United States, where such ores were first recognised; these include the famed Southeast Missouri Lead District of southeastern Missouri, and deposits in northeast Iowa, southwest Wisconsin, and northwest Illinois.
Irish-type carbonate lead-zinc ores, exemplified by the Lisheen Mine in County Tipperary, are formed in similar ways.
Sources:
The ultimate source of the mineralizing fluid(s) in MVT deposits is unknown. The ore fluids of MVT deposits are typically low temperature (100–150 °C) and have the composition of basinal brines (10–30 wt.% NaCl equivalent) with pH values of 4.5–5 (buffered by host carbonates). This hydrothermal fluid may or may not carry the required sulfur to form sulfide minerals. Mobile hydrocarbons may have played a role in delivering reduced sulfur to certain MVT systems, while methane and other organic matter can potentially reduce sulfate carried by an acidic fluid. The ore fluid is suspected to be derived from clastic red bed sequences (a potential metal source) that contain evaporites (a potential sulfur source).
Transport:
Two potential transportation mechanisms for the metal-bearing ore fluid have been proposed. The first involves compaction of sediments in basins with rapid sedimentation. Mineralizing fluids within the basin become trapped within discrete, over-pressured aquifers and escape episodically and rapidly. The second fluid transportation mechanism is topographically-driven gravitational fluid flow. This occurs during uplift that is commonly associated with an orogenic event. One edge of a basin is uplifted during the formation of a foreland fold and thrust belt, and basinal fluids migrate laterally away from the deformation front as the basin is uplifted. Migration of the fluids through deep portions of the basin may result in the acquisition of metals and sulfur contained within the basin.
Trap:
The trap for carbonate-hosted lead-zinc sulfides is a chemical reaction that occurs as a consequence of the concentration of sulfur, often of hydrocarbons, and of the zinc and lead absorbed by those hydrocarbons. The hydrocarbons can either leak out of the fault zone or fold hinge, leaving a stockwork of weakly mineralized carbonate-sulfide veins, or can degrade via pyrolysis in place to form bitumens.
Trap:
Once hydrocarbons are converted to bitumen, their ability to chelate metal ions and sulfur is reduced and results in these elements being expelled into the fluid, which becomes saturated in zinc, lead, iron and sulfur. Sulfide minerals such as galena, sphalerite, marcasite and pyrite thus form.
Commonly MVT deposits form by the combination of hydrocarbon pyrolysis liberating zinc-lead ions and sulfur to form an acidic solution which dissolves the host carbonate formation and replaces it with massive sulfide accumulations. This may also take the morphology of fault-hosted stockworks, massive tabular replacements and so forth.
Porous limestones may form disseminated ores, however most MVT deposits are massive sulfides with knife-like margins between carbonate and sulfide mineralogies.
Mineralogy and alteration:
Ore minerals in carbonate replacement deposits are typically the lead sulfide galena and the zinc sulfide sphalerite. Weathered equivalents form anglesite, cerussite, smithsonite, hydrozincite and secondary galena and sphalerite within the supergene zone.
MVT and Irish type deposits are commonly associated with a 'dolomite front' alteration, which manifests as a yellow-cream wash of dolomite (calcium-magnesium carbonate) within calcite-aragonite assemblages of unaltered carbonate formations.
Mineralogy and alteration:
Most ore bodies are quite sulfidic, and most are very low-iron, with pyrite-marcasite contents typically below 30% of the mass of sulfides. This makes MVT lead-zinc deposits particularly easy to treat from a metallurgical view. Some MVT deposits can, however, be very iron-rich and some sulfide replacement and alteration zones are associated with no lead-zinc at all, resulting in massive accumulations of pyrite-marcasite, which are essentially worthless.
Mineralogy and alteration:
There is sometimes an association with quartz veining and colloform silica; however, silicate gangue minerals are often rare.
Oil synergies:
The importance and synergies between hydrocarbon source-transport-trap 'fairways' and MVT and Irish Type lead-zinc deposits has been known for several decades. Often the prospectivity of particular carbonate formations for lead-zinc deposits of this nature is first identified by core drilling by oil explorers.
This concept of a cogeneration of hydrocarbons and precursor brines by the same process allows many lead-zinc explorers to use hydrocarbon basin models to predict if a carbonate sequence is likely to host MVT or Irish Type mineralization.
Exploration:
Exploration for MVT deposits is relatively complex in theory and straightforward in practice. During the area selection phase, attention must be paid to the nature of the carbonate sequences, especially if there is a 'dolomite front' alteration identified within oil exploration wells, which is commonly associated with lead-zinc mineralisation.
Exploration:
Thereafter, attention must be paid to picking floral facies of any reef carbonates formed from coral reef accumulations. The facies of the carbonate sequence is critical, as this is controlled mostly by faults which are the ultimate target of exploration. A fore-reef/back-reef transition is the 'sweet spot', and thus depending on the age of the carbonate sequence, familiarity with coral palaeontology is considered essential.
Exploration:
Finally, once a basin model of the carbonate sequence is formulated, and the primary basin-margin faults are roughly identified, a gravity survey is often carried out, which is the only geophysical technique which can directly detect MVT deposits. Gravity surveys aim to detect significant accumulations of lead and zinc due to their greater density relative to their surrounding host rocks.
Exploration:
Finally, the 'pointy end' of an exploration programme is to drill each and every one of the gravity targets in sequence, with no favour or prejudice given to the strength or amplitude of any anomaly. It is well known that unsubtle and unsophisticated methods of pattern drilling have found MVT deposits missed by more selective explorers, for instance the Lennard Shelf Deposits in Western Australia were found on the second last hole of an extensive drilling programme.
Similar deposit styles:
Similar deposit styles may be encountered in sheared and deformed carbonate belts where zinc-lead sulfides are hosted at the sheared contact of carbonates with siliciclastic sequences. Examples include the Dharwar Basin zinc-lead deposits, India where sulfides are hosted in shears within dolomite sequences.
Examples:
Admiral Bay Zn-Pb-Ag deposit, Northwest Shelf, Western Australia, theorised to be an MVT replacement type (undeveloped)
Pine Point Mine, Zn-Pb deposit, Northwest Territories, Canada (producer, 1964-1988)
Manbarrum-Sorby Hills zinc and lead deposits, Bonaparte Basin, Western Australia and Northern Territory (undeveloped)
Lennard Shelf Lead-Zinc deposits, Lennard Shelf, Kimberleys, Western Australia
Tara Mine, Ireland
Topla and Mežica mines on Petzen, Austrian-Slovenian border
**Advertising**
Advertising:
Advertising is the practice and techniques employed to bring attention to a product or service. Advertising aims to put a product or service in the spotlight in hopes of drawing consumers' attention to it. It is typically used to promote a specific good or service, but there is a wide range of uses, the most common being the commercial advertisement.
Advertising:
Commercial advertisements often seek to generate increased consumption of their products or services through "branding", which associates a product name or image with certain qualities in the minds of consumers. On the other hand, ads that intend to elicit an immediate sale are known as direct-response advertising. Non-commercial entities that advertise more than consumer products or services include political parties, interest groups, religious organizations and governmental agencies. Non-profit organizations may use free modes of persuasion, such as a public service announcement. Advertising may also help to reassure employees or shareholders that a company is viable or successful.
Advertising:
In the 19th century, soap businesses were among the first to employ large-scale advertising campaigns. Thomas J. Barratt was hired by Pears to be its brand manager—the first of its kind—and in addition to creating slogans and images he recruited West End stage actress and socialite Lillie Langtry to become the poster-girl for Pears, making her the first celebrity to endorse a commercial product. Modern advertising originated with the techniques introduced with tobacco advertising in the 1920s, most significantly with the campaigns of Edward Bernays, considered the founder of modern, "Madison Avenue" advertising. Worldwide spending on advertising in 2015 amounted to an estimated US$529.43 billion. Advertising's projected distribution for 2017 was 40.4% on TV, 33.3% on digital, 9% on newspapers, 6.9% on magazines, 5.8% on outdoor and 4.3% on radio. Internationally, the largest ("Big Five") advertising agency groups are Omnicom, WPP, Publicis, Interpublic, and Dentsu. In Latin, advertere means "to turn towards".
History:
Egyptians used papyrus to make sales messages and wall posters. Commercial messages and political campaign displays have been found in the ruins of Pompeii and ancient Arabia. Lost and found advertising on papyrus was common in ancient Greece and ancient Rome. Wall or rock painting for commercial advertising is another manifestation of an ancient advertising form, which is present to this day in many parts of Asia, Africa, and South America. The tradition of wall painting can be traced back to Indian rock art paintings that date back to 4000 BC. In ancient China, the earliest advertising known was oral, as recorded in the Classic of Poetry (11th to 7th centuries BC) of bamboo flutes played to sell confectionery. Advertisement usually took the form of calligraphic signboards and inked papers. A copper printing plate dating back to the Song dynasty, used to print posters in the form of a square sheet of paper with a rabbit logo, with "Jinan Liu's Fine Needle Shop" and "We buy high-quality steel rods and make fine-quality needles, to be ready for use at home in no time" written above and below, is considered the world's earliest identified printed advertising medium. In Europe, as the towns and cities of the Middle Ages began to grow, and the general population was unable to read, instead of signs that read "cobbler", "miller", "tailor", or "blacksmith", images associated with their trade would be used such as a boot, a suit, a hat, a clock, a diamond, a horseshoe, a candle or even a bag of flour. Fruits and vegetables were sold in the city square from the backs of carts and wagons and their proprietors used street callers (town criers) to announce their whereabouts. The first compilation of such advertisements was gathered in "Les Crieries de Paris", a thirteenth-century poem by Guillaume de la Villeneuve.
History:
18th-19th century: Newspaper Advertising In the 18th century advertisements started to appear in weekly newspapers in England. These early print advertisements were used mainly to promote books and newspapers, which became increasingly affordable with advances in the printing press; and medicines, which were increasingly sought after. However, false advertising and so-called "quack" advertisements became a problem, which ushered in the regulation of advertising content.
History:
In the United States, newspapers grew quickly in the first few decades of the 19th century, in part due to advertising. By 1822, the United States had more newspaper readers than any other country. About half of the content of these newspapers consisted of advertising, usually local advertising, with half of the daily newspapers in the 1810s using the word "advertiser" in their name. In June 1836, French newspaper La Presse was the first to include paid advertising in its pages, allowing it to lower its price, extend its readership and increase its profitability, and the formula was soon copied by all titles. Around 1840, Volney B. Palmer established the roots of the modern day advertising agency in Philadelphia. In 1842 Palmer bought large amounts of space in various newspapers at a discounted rate then resold the space at higher rates to advertisers. The actual ad – the copy, layout, and artwork – was still prepared by the company wishing to advertise; in effect, Palmer was a space broker. The situation changed when the first full-service advertising agency of N.W. Ayer & Son was founded in 1869 in Philadelphia. Ayer & Son offered to plan, create, and execute complete advertising campaigns for its customers. By 1900 the advertising agency had become the focal point of creative planning, and advertising was firmly established as a profession.
History:
Around the same time, in France, Charles-Louis Havas extended the services of his news agency, Havas to include advertisement brokerage, making it the first French group to organize. At first, agencies were brokers for advertisement space in newspapers.
History:
Late 19th century: Modern Advertising Thomas J. Barratt of London has been called "the father of modern advertising". Working for the Pears soap company, Barratt created an effective advertising campaign for the company products, which involved the use of targeted slogans, images and phrases. One of his slogans, "Good morning. Have you used Pears' soap?" was famous in its day and into the 20th century. In 1882, Barratt recruited English actress and socialite Lillie Langtry to become the poster-girl for Pears, making her the first celebrity to endorse a commercial product.
History:
Becoming the company's brand manager in 1865, listed as the first of its kind by the Guinness Book of Records, Barratt introduced many of the crucial ideas that lie behind successful advertising and these were widely circulated in his day. He constantly stressed the importance of a strong and exclusive brand image for Pears and of emphasizing the product's availability through saturation campaigns. He also understood the importance of constantly reevaluating the market for changing tastes and mores, stating in 1907 that "tastes change, fashions change, and the advertiser has to change with them. An idea that was effective a generation ago would fall flat, stale, and unprofitable if presented to the public today. Not that the idea of today is always better than the older idea, but it is different – it hits the present taste." Enhanced advertising revenue was one effect of the Industrial Revolution in Britain. Thanks to the revolution and the consumers it created, by the mid-19th century biscuits and chocolate became products for the masses, and British biscuit manufacturers were among the first to introduce branding to distinguish grocery products. One of the world's first global brands, Huntley & Palmers biscuits were sold in 172 countries in 1900, and their global reach was reflected in their advertisements.
History:
20th century As a result of massive industrialization, advertising increased dramatically in the United States. In 1919 it was 2.5 percent of gross domestic product (GDP) in the US, and it averaged 2.2 percent of GDP between then and at least 2007, though it may have declined dramatically since the Great Recession.
History:
Industry could not benefit from its increased productivity without a substantial increase in consumer spending. This contributed to the development of mass marketing designed to influence the population's economic behavior on a larger scale. In the 1910s and 1920s, advertisers in the U.S. adopted the doctrine that human instincts could be targeted and harnessed – "sublimated" into the desire to purchase commodities. Edward Bernays, a nephew of Sigmund Freud, became associated with the method and is sometimes called the founder of modern advertising and public relations. Bernays claimed that: "[The] general principle, that men are very largely actuated by motives which they conceal from themselves, is as true of mass as of individual psychology. It is evident that the successful propagandist must understand the true motives and not be content to accept the reasons which men give for what they do." In other words, selling products by appealing to the rational minds of customers (the main method used prior to Bernays) was much less effective than selling products based on the unconscious desires that Bernays felt were the true motivators of human action. "Sex sells" became a controversial issue, with techniques for titillating and enlarging the audience posing a challenge to conventional morality. In the 1920s, under Secretary of Commerce Herbert Hoover, the American government promoted advertising. Hoover himself delivered an address to the Associated Advertising Clubs of the World in 1925 called "Advertising Is a Vital Force in Our National Life." In October 1929, the head of the U.S. Bureau of Foreign and Domestic Commerce, Julius Klein, stated "Advertising is the key to world prosperity." This was part of the "unparalleled" collaboration between business and government in the 1920s, according to a 1933 European economic journal. The tobacco companies became major advertisers in order to sell packaged cigarettes. The tobacco companies pioneered the new advertising techniques when they hired Bernays to create positive associations with tobacco smoking. Advertising was also used as a vehicle for cultural assimilation, encouraging workers to exchange their traditional habits and community structure in favor of a shared "modern" lifestyle. An important tool for influencing immigrant workers was the American Association of Foreign Language Newspapers (AAFLN). The AAFLN was primarily an advertising agency but also gained heavily centralized control over much of the immigrant press.
History:
At the turn of the 20th century, advertising was one of the few career choices for women. Since women were responsible for most household purchasing, advertisers and agencies recognized the value of women's insight during the creative process. In fact, the first American advertising to use a sexual sell was created by a woman – for a soap product. Although tame by today's standards, the advertisement featured a couple with the message "A skin you love to touch". In the 1920s, psychologists Walter D. Scott and John B. Watson contributed applied psychological theory to the field of advertising. Scott said, "Man has been called the reasoning animal but he could with greater truthfulness be called the creature of suggestion. He is reasonable, but he is to a greater extent suggestible". He demonstrated this through his advertising technique of a direct command to the consumer.
History:
Radio from the 1920s In the early 1920s, the first radio stations were established by radio equipment manufacturers, followed by non-profit organizations such as schools, clubs and civic groups who also set up their own stations. Retailer and consumer goods manufacturers quickly recognized radio's potential to reach consumers in their home and soon adopted advertising techniques that would allow their messages to stand out; slogans, mascots, and jingles began to appear on radio in the 1920s and early television in the 1930s. The rise of mass media communications allowed manufacturers of branded goods to bypass retailers by advertising directly to consumers. This was a major paradigm shift which forced manufacturers to focus on the brand and stimulated the need for superior insights into consumer purchasing, consumption and usage behaviour; their needs, wants and aspirations. The earliest radio drama series were sponsored by soap manufacturers and the genre became known as a soap opera. Before long, radio station owners realized they could increase advertising revenue by selling 'air-time' in small time allocations which could be sold to multiple businesses. By the 1930s, these advertising spots, as the packets of time became known, were being sold by the station's geographical sales representatives, ushering in an era of national radio advertising. By the 1940s, manufacturers began to recognize the way in which consumers were developing personal relationships with their brands in a social/psychological/anthropological sense. Advertisers began to use motivational research and consumer research to gather insights into consumer purchasing. Strong branded campaigns for Chrysler and Exxon/Esso, using insights drawn from research methods in psychology and cultural anthropology, led to some of the most enduring campaigns of the 20th century.
History:
Commercial television in the 1950s In the early 1950s, the DuMont Television Network began the modern practice of selling advertisement time to multiple sponsors. Previously, DuMont had trouble finding sponsors for many of their programs and compensated by selling smaller blocks of advertising time to several businesses. This eventually became the standard for the commercial television industry in the United States. However, it was still a common practice to have single sponsor shows, such as The United States Steel Hour. In some instances the sponsors exercised great control over the content of the show – up to and including having one's advertising agency actually writing the show. The single sponsor model is much less prevalent now, a notable exception being the Hallmark Hall of Fame.
History:
Cable television from the 1980s The late 1980s and early 1990s saw the introduction of cable television and particularly MTV. Pioneering the concept of the music video, MTV ushered in a new type of advertising: the consumer tunes in for the advertising message, rather than it being a by-product or afterthought. As cable and satellite television became increasingly prevalent, specialty channels emerged, including channels entirely devoted to advertising, such as QVC, Home Shopping Network, and ShopTV Canada.
History:
Internet from the 1990s With the advent of the ad server, online advertising grew, contributing to the "dot-com" boom of the 1990s. Entire corporations operated solely on advertising revenue, offering everything from coupons to free Internet access. At the turn of the 21st century, some websites, including the search engine Google, changed online advertising by personalizing ads based on web browsing behavior. This has led to other similar efforts and an increase in interactive advertising. The share of advertising spending relative to GDP has changed little across large changes in media since 1925. In 1925, the main advertising media in America were newspapers, magazines, signs on streetcars, and outdoor posters. Advertising spending as a share of GDP was about 2.9 percent. By 1998, television and radio had become major advertising media; by 2017, the balance between broadcast and online advertising had shifted, with online spending exceeding broadcast. Nonetheless, advertising spending as a share of GDP was slightly lower – about 2.4 percent. Guerrilla marketing involves unusual approaches such as staged encounters in public places, giveaways of products such as cars that are covered with brand messages, and interactive advertising where the viewer can respond to become part of the advertising message. This type of advertising is unpredictable, which causes consumers to buy the product or idea. This reflects an increasing trend of interactive and "embedded" ads, such as via product placement, having consumers vote through text messages, and various campaigns utilizing social network services such as Facebook or Twitter. The advertising business model has also been adapted in recent years. In media for equity, advertising is not sold, but provided to start-up companies in return for equity. If the company grows and is sold, the media companies receive cash for their shares.
History:
Domain name registrants (usually those who register and renew domains as an investment) sometimes "park" their domains and allow advertising companies to place ads on their sites in return for per-click payments. These ads are typically driven by pay per click search engines like Google or Yahoo, but ads can sometimes be placed directly on targeted domain names through a domain lease or by making contact with the registrant of a domain name that describes a product. Domain name registrants are generally easy to identify through WHOIS records that are publicly available at registrar websites.
Classification:
Advertising may be categorized in a variety of ways, including by style, target audience, geographic scope, medium, or purpose.: 9–15 For example, in print advertising, classification by style can include display advertising (ads with design elements sold by size) vs. classified advertising (ads without design elements sold by the word or line). Advertising may be local, national or global. An ad campaign may be directed toward consumers or to businesses. The purpose of an ad may be to raise awareness (brand advertising), or to elicit an immediate sale (direct response advertising). The term above the line (ATL) is used for advertising involving mass media; more targeted forms of advertising and promotion are referred to as below the line (BTL). The two terms date back to 1954 when Procter & Gamble began paying their advertising agencies differently from other promotional agencies. In the 2010s, as advertising technology developed, a new term, through the line (TTL) began to come into use, referring to integrated advertising campaigns.
Classification:
Traditional media Virtually any medium can be used for advertising. Commercial advertising media can include wall paintings, billboards, street furniture components, printed flyers and rack cards, radio, cinema and television adverts, web banners, mobile telephone screens, shopping carts, web popups, skywriting, bus stop benches, human billboards and forehead advertising, magazines, newspapers, town criers, sides of buses, banners attached to or sides of airplanes ("logojets"), in-flight advertisements on seatback tray tables or overhead storage bins, taxicab doors, roof mounts and passenger screens, musical stage shows, subway platforms and trains, elastic bands on disposable diapers, doors of bathroom stalls, stickers on apples in supermarkets, shopping cart handles (grabertising), the opening section of streaming audio and video, posters, and the backs of event tickets and supermarket receipts. Any situation in which an "identified" sponsor pays to deliver their message through a medium is advertising.
Classification:
Television Television advertising is one of the most expensive types of advertising; networks charge large amounts for commercial airtime during popular events. The annual Super Bowl football game in the United States is known as the most prominent advertising event on television – with an audience of over 108 million and studies showing that 50% of those only tuned in to see the advertisements. During the 2014 edition of this game, the average thirty-second ad cost US$4 million, and $8 million was charged for a 60-second spot. Virtual advertisements may be inserted into regular programming through computer graphics. They are typically inserted into otherwise blank backdrops or used to replace local billboards that are not relevant to the remote broadcast audience. Virtual billboards may be inserted into the background where none exist in real life. This technique is especially used in televised sporting events. Virtual product placement is also possible. An infomercial is a long-format television commercial, typically five minutes or longer. The name blends the words "information" and "commercial". The main objective in an infomercial is to create an impulse purchase, so that the target sees the presentation and then immediately buys the product through the advertised toll-free telephone number or website. Infomercials describe and often demonstrate products, and commonly have testimonials from customers and industry professionals.

Radio Radio advertisements are broadcast as radio waves to the air from a transmitter to an antenna and thus to a receiving device. Airtime is purchased from a station or network in exchange for airing the commercials. While radio has the limitation of being restricted to sound, proponents of radio advertising often cite this as an advantage. Radio is an expanding medium that can be found on air, and also online. According to Arbitron, radio has approximately 241.6 million weekly listeners, or more than 93 percent of the U.S. population.

Online Online advertising is a form of promotion that uses the Internet and World Wide Web for the expressed purpose of delivering marketing messages to attract customers. Online ads are delivered by an ad server. Examples of online advertising include contextual ads that appear on search engine results pages, banner ads, pay per click text ads, rich media ads, social network advertising, online classified advertising, advertising networks and e-mail marketing, including e-mail spam. A newer form of online advertising is Native Ads; they go in a website's news feed and are supposed to improve user experience by being less intrusive. However, some people argue this practice is deceptive.

Domain names Domain name advertising is most commonly done through pay per click web search engines; however, advertisers often lease space directly on domain names that generically describe their products. When an Internet user visits a website by typing a domain name directly into their web browser, this is known as "direct navigation", or "type in" web traffic. Although many Internet users search for ideas and products using search engines and mobile phones, a large number of users around the world still use the address bar. They will type a keyword into the address bar such as "geraniums" and add ".com" to the end of it. Sometimes they will do the same with ".org" or a country-code Top Level Domain (TLD, such as ".co.uk" for the United Kingdom or ".ca" for Canada).
When Internet users type in a generic keyword and add .com or another top-level domain (TLD) ending, it produces a targeted sales lead. Domain name advertising was originally developed by Oingo (later known as Applied Semantics), one of Google's early acquisitions.

Product placements Covert advertising is when a product or brand is embedded in entertainment and media. For example, in a film, the main character can use an item or other product of a definite brand, as in the movie Minority Report, where Tom Cruise's character John Anderton owns a phone with the Nokia logo clearly written in the top corner, or his watch engraved with the Bulgari logo. Another example of advertising in film is in I, Robot, where the main character, played by Will Smith, mentions his Converse shoes several times, calling them "classics", because the film is set far in the future. I, Robot and Spaceballs also showcase futuristic cars with the Audi and Mercedes-Benz logos clearly displayed on the front of the vehicles. Cadillac chose to advertise in the movie The Matrix Reloaded, which as a result contained many scenes in which Cadillac cars were used. Similarly, product placements for Omega Watches, Ford, VAIO, BMW and Aston Martin cars are featured in recent James Bond films, most notably Casino Royale. In "Fantastic Four: Rise of the Silver Surfer", the main transport vehicle shows a large Dodge logo on the front. Blade Runner includes some of the most obvious product placement; the whole film stops to show a Coca-Cola billboard.

Print Print advertising describes advertising in a printed medium such as a newspaper, magazine, or trade journal. This encompasses everything from media with a very broad readership base, such as a major national newspaper or magazine, to more narrowly targeted media such as local newspapers and trade journals on very specialized topics. One form of print advertising is classified advertising, which allows private individuals or companies to purchase a small, narrowly targeted ad paid by the word or line. Another form of print advertising is the display ad, which is generally a larger ad with design elements that typically runs in an article section of a newspaper.: 14

Outdoor Billboards, also known as hoardings in some parts of the world, are large structures located in public places which display advertisements to passing pedestrians and motorists. Most often, they are located on main roads with a large amount of passing motor and pedestrian traffic; however, they can be placed in any location with large numbers of viewers, such as on mass transit vehicles and in stations, in shopping malls or office buildings, and in stadiums. The form known as street advertising first came to prominence in the UK through Street Advertising Services, which creates outdoor advertising on street furniture and pavements, working with products such as Reverse Graffiti, air dancers and 3D pavement advertising to get brand messages out into public spaces. Sheltered outdoor advertising combines outdoor with indoor advertisement by placing large mobile structures (tents) in public places on a temporary basis. The large outer advertising space aims to exert a strong pull on the observer, while the product is promoted indoors, where the creative decor can intensify the impression. Mobile billboards are generally vehicle-mounted billboards or digital screens.
These can be on dedicated vehicles built solely for carrying advertisements along routes preselected by clients; they can also be specially equipped cargo trucks or, in some cases, large banners strewn from planes. The billboards are often lighted, some being backlit and others employing spotlights. Some billboard displays are static, while others change; for example, continuously or periodically rotating among a set of advertisements. Mobile displays are used for various situations in metropolitan areas throughout the world, including target advertising, one-day and long-term campaigns, conventions, sporting events, store openings and similar promotional events, and big advertisements from smaller companies.

Point-of-sale In-store advertising is any advertisement placed in a retail store. It includes placement of a product in visible locations in a store, such as at eye level, at the ends of aisles and near checkout counters (a.k.a. POP – point of purchase display), eye-catching displays promoting a specific product, and advertisements in such places as shopping carts and in-store video displays.

Novelties Advertising printed on small tangible items such as coffee mugs, T-shirts, pens, bags, and such is known as novelty advertising. Some printers specialize in printing novelty items, which can then be distributed directly by the advertiser, or items may be distributed as part of a cross-promotion, such as ads on fast food containers.

Celebrity endorsements Advertising in which a celebrity endorses a product or brand leverages celebrity power, fame, money, and popularity to gain recognition for products or to promote specific stores or products. Advertisers often advertise their products, for example, when celebrities share their favorite products or wear clothes by specific brands or designers. Celebrities are often involved in advertising campaigns such as television or print adverts to advertise specific or general products. The use of celebrities to endorse a brand can have its downsides, however; one mistake by a celebrity can be detrimental to the public relations of a brand. For example, following his eight-gold-medal performance at the 2008 Olympic Games in Beijing, China, swimmer Michael Phelps' contract with Kellogg's was terminated, as Kellogg's did not want to associate with him after he was photographed smoking marijuana. Celebrities such as Britney Spears have advertised for multiple products including Pepsi, Candies from Kohl's, Twister, NASCAR, and Toyota.

Aerial Using aircraft, balloons or airships to create or display advertising media. Skywriting is a notable example.
Classification:
New media approaches A new advertising approach is known as advanced advertising, which is data-driven advertising, using large quantities of data, precise measuring tools and precise targeting. Advanced advertising also makes it easier for companies which sell ad-space to attribute customer purchases to the ads they display or broadcast. Increasingly, other media are overtaking many of the "traditional" media such as television, radio and newspaper because of a shift toward the usage of the Internet for news and music as well as devices like digital video recorders (DVRs) such as TiVo. Online advertising began with unsolicited bulk e-mail advertising known as "e-mail spam". Spam has been a problem for e-mail users since 1978. As new online communication channels became available, advertising followed. The first banner ad appeared on the World Wide Web in 1994. Prices of Web-based advertising space are dependent on the "relevance" of the surrounding web content and the traffic that the website receives. In online display advertising, display ads generate awareness quickly. Unlike search, which requires someone to be aware of a need, display advertising can drive awareness of something new and without previous knowledge. Display works well for direct response. Display is not only used for generating awareness, it is used for direct response campaigns that link to a landing page with a clear 'call to action'. As the mobile phone became a new mass medium in 1998 when the first paid downloadable content appeared on mobile phones in Finland, mobile advertising followed, also first launched in Finland in 2000. By 2007 the value of mobile advertising had reached $2 billion and providers such as Admob delivered billions of mobile ads. More advanced mobile ads include banner ads, coupons, Multimedia Messaging Service picture and video messages, advergames and various engagement marketing campaigns. A particular feature driving mobile ads is the 2D barcode, which replaces the need to do any typing of web addresses, and uses the camera feature of modern phones to gain immediate access to web content. 83 percent of Japanese mobile phone users already are active users of 2D barcodes. Some companies have proposed placing messages or corporate logos on the side of booster rockets and the International Space Station. Unpaid advertising (also called "publicity advertising") can include personal recommendations ("bring a friend", "sell it"), spreading buzz, or achieving the feat of equating a brand with a common noun (in the United States, "Xerox" = "photocopier", "Kleenex" = tissue, "Vaseline" = petroleum jelly, "Hoover" = vacuum cleaner, and "Band-Aid" = adhesive bandage). However, some companies oppose the use of their brand name to label an object. Equating a brand with a common noun also risks turning that brand into a generic trademark – turning it into a generic term which means that its legal protection as a trademark is lost. Early in its life, The CW aired short programming breaks called "Content Wraps", to advertise one company's product during an entire commercial break. The CW pioneered "content wraps" and some products featured were Herbal Essences, Crest, Guitar Hero II, CoverGirl, and Toyota. A new promotion concept has appeared, "ARvertising", advertising on augmented reality technology. Controversy exists on the effectiveness of subliminal advertising (see mind control), and the pervasiveness of mass messages (propaganda).
Classification:
Rise in new media With the Internet came many new advertising opportunities. Pop-up, Flash, banner, pop-under, advergaming, and email advertisements (all of which are often unwanted or spam in the case of email) are now commonplace. Particularly since the rise of "entertaining" advertising, some people may like an advertisement enough to wish to watch it later or show a friend. In general, the advertising community has not yet made this easy, although some have used the Internet to widely distribute their ads to anyone willing to see or hear them. In the last three quarters of 2009, mobile and Internet advertising grew by 18% and 9% respectively, while older media advertising saw declines: −10.1% (TV), −11.7% (radio), −14.8% (magazines) and −18.7% (newspapers). Between 2008 and 2014, U.S. newspapers lost more than half their print advertising revenue.
Classification:
Niche marketing Another significant trend regarding the future of advertising is the growing importance of the niche market using niche or targeted ads. Also brought about by the Internet and the theory of the long tail, advertisers will have an increasing ability to reach specific audiences. In the past, the most efficient way to deliver a message was to blanket the largest mass market audience possible. However, usage tracking, customer profiles and the growing popularity of niche content brought about by everything from blogs to social networking sites provide advertisers with audiences that are smaller but much better defined, leading to ads that are more relevant to viewers and more effective for companies marketing their products. Among others, Comcast Spotlight is one such advertiser employing this method in their video on demand menus. These advertisements are targeted to a specific group and can be viewed by anyone wishing to find out more about a particular business or practice, from their home. This causes the viewer to become proactive and actually choose what advertisements they want to view.
Classification:
Niche marketing could also be helped by bringing the issue of colour into advertisements. Different colours play major roles when it comes to marketing strategies; for example, seeing blue can promote a sense of calmness and give a sense of security, which is why many social networks such as Facebook use blue in their logos.
Google AdSense is an example of niche marketing. Google calculates the primary purpose of a website and adjusts ads accordingly; it uses keywords on the page (or even in emails) to identify the general topics discussed and places ads that are most likely to be clicked on by viewers of the email account or visitors to the website.
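The following toy sketch illustrates the general idea of keyword-driven contextual matching described above; it is a deliberately simplified illustration (the ad inventory, tokenizer, and scoring below are invented for this example and bear no relation to Google's actual systems).

```python
from collections import Counter
import re

# Hypothetical ad inventory: each ad is tagged with the keywords it targets.
ADS = {
    "Garden centre sale": {"geraniums", "flowers", "garden", "soil"},
    "Running shoes": {"marathon", "running", "shoes", "training"},
    "Office chairs": {"office", "desk", "chair", "ergonomic"},
}

def tokenize(text: str) -> Counter:
    """Lower-case word counts; a stand-in for real keyword extraction."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def rank_ads(page_text: str):
    """Score each ad by how often its target keywords occur on the page."""
    words = tokenize(page_text)
    scores = {
        name: sum(words[kw] for kw in keywords)
        for name, keywords in ADS.items()
    }
    # Highest-scoring ads first; ads scoring zero are treated as irrelevant.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

page = "Tips for growing geraniums: choose well-drained soil and a sunny garden spot."
print(rank_ads(page))  # The garden-related ad should rank first for this page.
```

Real contextual systems go far beyond keyword counting (topic models, click prediction, auctions), but the matching principle sketched here is the same: infer the page's subject and rank candidate ads against it.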
Classification:
Crowdsourcing The concept of crowdsourcing has given rise to the trend of user-generated advertisements. User-generated ads are created by people, as opposed to an advertising agency or the company themselves, often resulting from brand-sponsored advertising competitions. For the 2007 Super Bowl, the Frito-Lays division of PepsiCo held the "Crash the Super Bowl" contest, allowing people to create their own Doritos commercials. Chevrolet held a similar competition for their Tahoe line of SUVs. Due to the success of the Doritos user-generated ads in the 2007 Super Bowl, Frito-Lays relaunched the competition for the 2009 and 2010 Super Bowls. The resulting ads were among the most-watched and most-liked Super Bowl ads. In fact, the winning ad that aired in the 2009 Super Bowl was ranked by the USA Today Super Bowl Ad Meter as the top ad for the year while the winning ads that aired in the 2010 Super Bowl were found by Nielsen's BuzzMetrics to be the "most buzzed-about". Another example of companies using crowdsourcing successfully is the beverage company Jones Soda, which encourages consumers to participate in the label design themselves. This trend has given rise to several online platforms that host user-generated advertising competitions on behalf of a company. Founded in 2007, Zooppa has launched ad competitions for brands such as Google, Nike, Hershey's, General Mills, Microsoft, NBC Universal, Zinio, and Mini Cooper. Crowdsourcing remains controversial, as the long-term impact on the advertising industry is still unclear.
Classification:
Globalization Advertising has gone through five major stages of development: domestic, export, international, multi-national, and global. For global advertisers, there are four, potentially competing, business objectives that must be balanced when developing worldwide advertising: building a brand while speaking with one voice, developing economies of scale in the creative process, maximising local effectiveness of ads, and increasing the company's speed of implementation. Born from the evolutionary stages of global marketing are the three primary and fundamentally different approaches to the development of global advertising executions: exporting executions, producing local executions, and importing ideas that travel. Advertising research is key to determining the success of an ad in any country or region. The ability to identify which elements and/or moments of an ad contribute to its success is how economies of scale are maximized. Once one knows what works in an ad, that idea or ideas can be imported by any other market. Market research measures, such as Flow of Attention, Flow of Emotion and branding moments provide insight into what is working in an ad in any country or region because the measures are based on the visual, not verbal, elements of the ad.
Classification:
Foreign public messaging Foreign governments, particularly those that own marketable commercial products or services, often promote their interests and positions through the advertising of those goods because the target audience is not only largely unaware of the forum as a vehicle for foreign messaging but also willing to receive the message while in a mental state of absorbing information from advertisements during television commercial breaks, while reading a periodical, or while passing by billboards in public spaces. A prime example of this messaging technique is advertising campaigns to promote international travel. While advertising foreign destinations and services may stem from the typical goal of increasing revenue by drawing more tourism, some travel campaigns carry the additional or alternative intended purpose of promoting good sentiments or improving existing ones among the target audience towards a given nation or region. It is common for advertising promoting foreign countries to be produced and distributed by the tourism ministries of those countries, so these ads often carry political statements and/or depictions of the foreign government's desired international public perception. Additionally, a wide range of foreign airlines and travel-related services which advertise separately from the destinations, themselves, are owned by their respective governments; examples include, though are not limited to, the Emirates airline (Dubai), Singapore Airlines (Singapore), Qatar Airways (Qatar), China Airlines (Taiwan/Republic of China), and Air China (People's Republic of China). By depicting their destinations, airlines, and other services in a favorable and pleasant light, countries market themselves to populations abroad in a manner that could mitigate prior public impressions.
Classification:
Diversification In the realm of advertising agencies, continued industry diversification has seen observers note that "big global clients don't need big global agencies any more". This is reflected by the growth of non-traditional agencies in various global markets, such as Canadian business TAXI and SMART in Australia and has been referred to as "a revolution in the ad world".
Classification:
New technology The ability to record shows on digital video recorders (such as TiVo) allows viewers to record programs for later viewing, enabling them to fast-forward through commercials. Additionally, as more pre-recorded box sets of television programs are offered for sale, fewer people watch the shows on TV. However, because these sets are sold, the company still receives additional profit from them.
Classification:
To counter this effect, a variety of strategies have been employed. Many advertisers have opted for product placement on TV shows like Survivor. Other strategies include integrating advertising with internet-connected electronic program guides (EPGs), advertising on companion devices (like smartphones and tablets) during the show, and creating mobile apps for TV programs. Additionally, some brands have opted for social television sponsorship. The emerging technology of drone displays has recently been used for advertising purposes.
Classification:
Education In recent years there have been several media literacy initiatives, and more specifically concerning advertising, that seek to empower citizens in the face of media advertising campaigns. Advertising education has become popular, with bachelor's, master's and doctoral degrees available in the field. A surge in advertising interest is typically attributed to the strong relationship advertising plays in cultural and technological changes, such as the advance of online social networking. A unique model for teaching advertising is the student-run advertising agency, where advertising students create campaigns for real companies. Organizations such as the American Advertising Federation establish companies with students to create these campaigns.
Purposes:
Advertising is at the forefront of delivering the proper message to customers and prospective customers. The purpose of advertising is to inform consumers about a company's products, convince customers that its services or products are the best, enhance the image of the company, point out and create a need for products or services, demonstrate new uses for established products, announce new products and programs, reinforce the salespeople's individual messages, draw customers to the business, and hold existing customers.
Purposes:
Sales promotions and brand loyalty Sales promotions are another way to advertise. Sales promotions serve a double purpose: they are used to gather information about what type of customers one draws in and where they are, and to jump-start sales. Sales promotions include things like contests and games, sweepstakes, product giveaways, samples, coupons, loyalty programs, and discounts. The ultimate goal of sales promotions is to stimulate potential customers to action.
Criticisms:
While advertising can be seen as necessary for economic growth, it is not without social costs. Unsolicited commercial e-mail and other forms of spam have become so prevalent as to have become a major nuisance to users of these services, as well as being a financial burden on internet service providers. Advertising is increasingly invading public spaces, such as schools, which some critics argue is a form of child exploitation. This increasing difficulty in limiting exposure to specific audiences can result in negative backlash for advertisers. In tandem with these criticisms, the advertising industry has seen low approval rates in surveys and negative cultural portrayals. One of the most controversial criticisms of advertisement in the present day is the predominance of advertising of foods high in sugar, fat, and salt specifically to children. Critics claim that food advertisements targeting children are exploitative and are not sufficiently balanced with proper nutritional education to help children understand the consequences of their food choices. Additionally, children may not understand that they are being sold something, and are therefore more impressionable. Michelle Obama has criticized large food companies for advertising unhealthy foods largely towards children and has requested that food companies either limit their advertising to children or advertise foods that are more in line with dietary guidelines. Other criticisms concern the changes such advertisements bring about in society, as well as deceptive ads aired and published by corporations; the cosmetics and health industries are among the worst offenders in this respect and a particular cause for concern. A 2021 study found that for more than 80% of brands, advertising had a negative return on investment. Unsolicited ads have been criticized as attention theft.
Regulation:
There have been increasing efforts to protect the public interest by regulating the content and the influence of advertising. Some examples include restrictions for advertising alcohol, tobacco or gambling imposed in many countries, as well as the bans around advertising to children, which exist in parts of Europe. Advertising regulation focuses heavily on the veracity of the claims and as such, there are often tighter restrictions placed around advertisements for food and healthcare products. The advertising industries within some countries rely less on laws and more on systems of self-regulation. Advertisers and the media agree on a code of advertising standards that they attempt to uphold. The general aim of such codes is to ensure that any advertising is 'legal, decent, honest and truthful'. Some self-regulatory organizations are funded by the industry, but remain independent, with the intent of upholding the standards or codes like the Advertising Standards Authority in the UK. In the UK, most forms of outdoor advertising such as the display of billboards are regulated by the UK Town and Country Planning system. Currently, the display of an advertisement without consent from the Planning Authority is a criminal offense liable to a fine of £2,500 per offense. In the US, many communities believe that many forms of outdoor advertising blight the public realm. As long ago as the 1960s in the US, there were attempts to ban billboard advertising in the open countryside. Cities such as São Paulo have introduced an outright ban, with London also having specific legislation to control unlawful displays.
Regulation:
Some governments restrict the languages that can be used in advertisements, but advertisers may employ tricks to try to avoid them. In France for instance, advertisers sometimes print English words in bold and French translations in fine print to deal with Article 120 of the 1994 Toubon Law limiting the use of English. The advertising of pricing information is another topic of concern for governments. In the United States for instance, it is common for businesses to only mention the existence and amount of applicable taxes at a later stage of a transaction. In Canada and New Zealand, taxes can be listed as separate items, as long as they are quoted up-front. In most other countries, the advertised price must include all applicable taxes, enabling customers to easily know how much it will cost them.
Theory:
Hierarchy-of-effects models Various competing models of hierarchies of effects attempt to provide a theoretical underpinning to advertising practice.
The model of Clow and Baack clarifies the objectives of an advertising campaign and of each individual advertisement. The model postulates six steps a buyer moves through when making a purchase: awareness, knowledge, liking, preference, conviction, and purchase. Means-end theory suggests that an advertisement should contain a message or means that leads the consumer to a desired end-state.
Leverage points aim to move the consumer from understanding a product's benefits to linking those benefits with personal values.
Theory:
Marketing mix The marketing mix was proposed by professor E. Jerome McCarthy in the 1960s. It consists of four basic elements called the "four Ps". Product is the first P representing the actual product. Price represents the process of determining the value of a product. Place represents the variables of getting the product to the consumer such as distribution channels, market coverage and movement organization. The last P stands for Promotion which is the process of reaching the target market and convincing them to buy the product.
Theory:
In the 1990s, the concept of the four Cs was introduced as a more customer-driven replacement for the four Ps. There are two theories based on four Cs: Lauterborn's four Cs (consumer, cost, communication, convenience) and Shimizu's four Cs (commodity, cost, communication, channel) in the 7Cs Compass Model (Co-marketing). Communications can include advertising, sales promotion, public relations, publicity, personal selling, corporate identity, internal communication, SNS, and MIS.
Theory:
Research Advertising research is a specialized form of research that works to improve the effectiveness and efficiency of advertising. It entails numerous forms of research which employ different methodologies. Advertising research includes pre-testing (also known as copy testing) and post-testing of ads and/or campaigns.
Theory:
Pre-testing includes a wide range of qualitative and quantitative techniques, including: focus groups, in-depth target audience interviews (one-on-one interviews), small-scale quantitative studies and physiological measurement. The goal of these investigations is to better understand how different groups respond to various messages and visual prompts, thereby providing an assessment of how well the advertisement meets its communications goals. Post-testing employs many of the same techniques as pre-testing, usually with a focus on understanding the change in awareness or attitude attributable to the advertisement. With the emergence of digital advertising technologies, many firms have begun to continuously post-test ads using real-time data. This may take the form of A/B split-testing or multivariate testing.
Theory:
Continuous ad tracking and the Communicus System are competing examples of post-testing advertising research types.
Theory:
Semiotics Meanings between consumers and marketers depict signs and symbols that are encoded in everyday objects. Semiotics is the study of signs and how they are interpreted. Advertising has many hidden signs and meanings within brand names, logos, package designs, print advertisements, and television advertisements. Semiotics aims to study and interpret the message being conveyed in (for example) advertisements. Logos and advertisements can be interpreted at two levels – known as the surface level and the underlying level. The surface level uses signs creatively to create an image or personality for a product. These signs can be images, words, fonts, colors, or slogans. The underlying level is made up of hidden meanings. The combination of images, words, colors, and slogans must be interpreted by the audience or consumer. The "key to advertising analysis" is the signifier and the signified. The signifier is the object and the signified is the mental concept. A product has a signifier and a signified. The signifier is the color, brand name, logo design, and technology. The signified has two meanings known as denotative and connotative. The denotative meaning is the meaning of the product. A television's denotative meaning might be that it is high definition. The connotative meaning is the product's deep and hidden meaning. A connotative meaning of a television would be that it is top-of-the-line.Apple's commercials used a black silhouette of a person that was the age of Apple's target market. They placed the silhouette in front of a blue screen so that the picture behind the silhouette could be constantly changing. However, the one thing that stays the same in these ads is that there is music in the background and the silhouette is listening to that music on a white iPod through white headphones. Through advertising, the white color on a set of earphones now signifies that the music device is an iPod. The white color signifies almost all of Apple's products.The semiotics of gender plays a key influence on the way in which signs are interpreted. When considering gender roles in advertising, individuals are influenced by three categories. Certain characteristics of stimuli may enhance or decrease the elaboration of the message (if the product is perceived as feminine or masculine). Second, the characteristics of individuals can affect attention and elaboration of the message (traditional or non-traditional gender role orientation). Lastly, situational factors may be important to influence the elaboration of the message.There are two types of marketing communication claims-objective and subjective. Objective claims stem from the extent to which the claim associates the brand with a tangible product or service feature. For instance, a camera may have auto-focus features. Subjective claims convey emotional, subjective, impressions of intangible aspects of a product or service. They are non-physical features of a product or service that cannot be directly perceived, as they have no physical reality. For instance the brochure has a beautiful design. Males tend to respond better to objective marketing-communications claims while females tend to respond better to subjective marketing communications claims.Voiceovers are commonly used in advertising. Most voiceovers are done by men, with figures of up to 94% having been reported. There have been more female voiceovers in recent years, but mainly for food, household products, and feminine-care products.
Gender effects on comprehension:
According to a 1977 study by David Statt, females process information comprehensively, while males process information through heuristic devices such as procedures, methods or strategies for solving problems, which could have an effect on how they interpret advertising. According to this study, men prefer to have available and apparent cues to interpret the message, whereas females engage in more creative, associative, imagery-laced interpretation. Later research by a Danish team found that advertising attempts to persuade men to improve their appearance or performance, whereas its approach to women aims at transformation toward an impossible ideal of female presentation. In Paul Suggett's article "The Objectification of Women in Advertising" he discusses the negative impact that these women in advertisements, who are too perfect to be real, have on women, as well as men, in real life. Advertising's manipulation of women's aspiration to these ideal types as portrayed in film, in erotic art, in advertising, on stage, within music videos and through other media exposures requires at least a conditioned rejection of female reality and thereby takes on a highly ideological cast. Studies show that these expectations of women and young girls negatively affect their views about their bodies and appearances. These advertisements are directed towards men. Not everyone agrees: one critic viewed this monologic, gender-specific interpretation of advertising as excessively skewed and politicized. There are some companies like Dove and aerie that are creating commercials to portray more natural women, with less post production manipulation, so more women and young girls are able to relate to them.More recent research by Martin (2003) reveals that males and females differ in how they react to advertising depending on their mood at the time of exposure to the ads and on the affective tone of the advertising. When feeling sad, males prefer happy ads to boost their mood. In contrast, females prefer happy ads when they are feeling happy. The television programs in which ads are embedded influence a viewer's mood state. Susan Wojcicki, author of the article "Ads that Empower Women don't just Break Stereotypes—They're also Effective" discusses how advertising to women has changed since the first Barbie commercial, where a little girl tells the doll that, she wants to be just like her. Little girls grow up watching advertisements of scantily clad women advertising things from trucks to burgers and Wojcicki states that this shows girls that they are either arm candy or eye candy.
Alternatives:
Other approaches to revenue include donations, paid subscriptions, microtransactions, and data monetization. Websites and applications are "ad-free" when not using advertisements at all for revenue. For example, the online encyclopaedia Wikipedia provides free content by receiving funding from charitable donations.
"Fathers" of advertising:
Late 1700s – Benjamin Franklin (1706–1790) – "father of advertising in America"
Late 1800s – Thomas J. Barratt (1841–1914) of London – called "the father of modern advertising" by T.F.G. Coates
Early 1900s – J. Henry ("Slogan") Smythe, Jr of Philadelphia – "world's best known slogan writer"
Early 1900s – Albert Lasker (1880–1952) – the "father of modern advertising"; defined advertising as "salesmanship in print, driven by a reason why"
Mid-1900s – David Ogilvy (1911–1999) – advertising tycoon, founder of Ogilvy & Mather, known as the "father of advertising"
Influential thinkers in advertising theory and practice
**Algorithm**
Algorithm:
In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), eventually achieving automation. Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus". In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result. As an effective method, an algorithm can be expressed within a finite amount of space and time, and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
History:
Ancient algorithms Since antiquity, step-by-step procedures for solving mathematical problems have been attested. This includes Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later; e.g. Shulba Sutras, Kerala School, and Brāhmasphuṭasiddhānta), The Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC, e.g. sieve of Eratosthenes and Euclidean algorithm), and Arabic mathematics (9th century, e.g. cryptographic algorithms for code-breaking based on frequency analysis).
History:
Al-Khwarizmi and the term algorithm Around 825, Muhammad ibn Musa al-Khwarizmi wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). Both of these texts are lost in the original Arabic at this time. (However, his other book on algebra remains.) In the early 12th century, Latin translations of said al-Khwarizmi texts involving the Hindu–Arabic numeral system and arithmetic appeared: Liber Alghoarismi de practica arismetrice (attributed to John of Seville) and Liber Algorismi de numero Indorum (attributed to Adelard of Bath). Hereby, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi ("Thus spoke Al-Khwarizmi"). In 1240, Alexander of Villedieu writes a Latin text titled Carmen de Algorismo. It begins with: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris.
History:
which translates to: Algorism is the art by which at present we use those Indian figures, which number two times five.
The poem is a few hundred lines long and summarizes the art of calculating with the new styled Indian dice (Tali Indorum), or Hindu numerals.
English evolution of the word Around 1230, the English word algorism is attested, and it was then used by Chaucer in 1391. English adopted the French term. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus.
In 1656, in the English dictionary Glossographia, it says: Algorism ([Latin] algorismus) the Art or use of Cyphers, or of numbering by Cyphers; skill in accounting.
Augrime ([Latin] algorithmus) skil in accounting or numbring.
In 1658, in the first edition of The New World of English Words, it says: Algorithme, (a word compounded of Arabick and Spanish,) the art of reckoning by Cyphers.
In 1706, in the sixth edition of The New World of English Words, it says: Algorithm, the Art of computing or reckoning by numbers, which contains the five principle Rules of Arithmetick, viz. Numeration, Addition, Subtraction, Multiplication and Division; to which may be added Extraction of Roots: It is also call'd Logistica Numeralis.
Algorism, the practical Operation in the several Parts of Specious Arithmetick or Algebra; sometimes it is taken for the Practice of Common Arithmetick by the ten Numeral Figures.
In 1751, in the Young Algebraist's Companion, Daniel Fenning contrasts the terms algorism and algorithm as follows: Algorithm signifies the first Principles, and Algorism the practical Part, or knowing how to put the Algorithm in Practice.
Since at least 1811, the term algorithm is attested to mean a "step-by-step procedure" in English. In 1842, in the Dictionary of Science, Literature and Art, it says: ALGORITHM, signifies the art of computing in reference to some particular subject, or in some particular way; as the algorithm of numbers; the algorithm of the differential calculus.
History:
Machine usage In 1928, a partial formalization of the modern concept of algorithm began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
Informal definition:
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and (for example) any prescribed bureaucratic procedure or cook-book recipe. In general, a program is only an algorithm if it stops eventually—even though infinite loops may sometimes prove desirable.
A prototypical example of an algorithm is the Euclidean algorithm, which is used to determine the greatest common divisor of two integers; an example (there are others) is described by the flowchart above and as an example in a later section.
Informal definition:
Boolos and Jeffrey (1974, 1999) offer an informal meaning of the word "algorithm" in the following quotation: No human being can write fast enough, or long enough, or small enough† ( †"smaller and smaller without limit ... you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols.
Informal definition:
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. For example, an algorithm can be an algebraic equation such as y = m + n (i.e., two arbitrary "input variables" m and n that produce an output y), but various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of (for the addition example): Precise instructions (in a language understood by "the computer") for a fast, efficient, "good" process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities) to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format.The concept of algorithm is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.
Informal definition:
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.
Formalization:
Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform—in a specific order—to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987), and Gurevich (2000): Minsky: "But we will also maintain, with Turing ... that any procedure which could "naturally" be called effective, can, in fact, be realized by a (simple) machine. Although this may seem extreme, the arguments ... in its favor are hard to refute".
Formalization:
Gurevich: "… Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine … according to Savage [1987], an algorithm is a computational process defined by a Turing machine". Turing machines can define computational processes that do not terminate. The informal definitions of algorithms generally require that the algorithm always terminates. This requirement renders the task of deciding whether a formal procedure is an algorithm impossible in the general case—due to a major theorem of computability theory known as the halting problem.
Formalization:
Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device and stored for further processing. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures.
For some of these computational processes, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. This means that any conditional steps must be systematically dealt with, case by case; the criteria for each case must be clear (and computable).
Because an algorithm is a precise list of precise steps, the order of computation is always crucial to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top" and going "down to the bottom"—an idea that is described more formally by flow of control.
Formalization:
So far, the discussion on the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception—one which attempts to describe a task in discrete, "mechanical" means. Unique to this conception of formalized algorithms is the assignment operation, which sets the value of a variable. It derives from the intuition of "memory" as a scratchpad. An example of such an assignment can be found below.
Formalization:
For some alternate conceptions of what constitutes an algorithm, see functional programming and logic programming.
Expressing algorithms:
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in the statements based on natural language. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are also often used as a way to define or document algorithms.
Expressing algorithms:
There is a wide variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see finite-state machine, state transition table and control table for more), as flowcharts and drakon-charts (see state diagram for more), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see Turing machine for more).
Expressing algorithms:
Representations of algorithms can be classed into three accepted levels of Turing machine description, as follows:
1. High-level description: "...prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head."
2. Implementation description: "...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level, we do not give details of states or transition function."
3. Formal description: the most detailed, "lowest level", gives the Turing machine's "state table".
For an example of the simple algorithm "Add m+n" described in all three levels, see Examples.
Design:
Algorithm design refers to a method or a mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern.
One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g. an algorithm's run-time growth as the size of its input increases.
Typical steps in the development of algorithms: problem definition, development of a model, specification of the algorithm, design of the algorithm, checking the correctness of the algorithm, analysis of the algorithm, implementation of the algorithm, program testing, and documentation preparation.
Computer algorithms:
"Elegant" (compact) programs, "good" (fast) programs : The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin: Knuth: " ... we want good algorithms in some loosely defined aesthetic sense. One criterion ... is the length of time taken to perform the algorithm .... Other criteria are adaptability of the algorithm to computers, its simplicity, and elegance, etc."Chaitin: " ... a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does"Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'"—such a proof would solve the Halting problem (ibid).
Computer algorithms:
Algorithm versus function computable by an algorithm: For a given function multiple algorithms may exist. This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is ... important to distinguish between the notion of algorithm, i.e. procedure and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms". Unfortunately, there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below.
Computer algorithms:
Computers (and computors), models of computation: A computer (or human "computer") is a restricted type of machine, a "discrete deterministic mechanical device" that blindly follows its instructions. Melzak's and Lambek's primitive models reduced this notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent.Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability". Minsky's machine proceeds sequentially through its five (or six, depending on how one counts) instructions unless either a conditional IF-THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution) operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L − 1). Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT. However, a few different assignment instructions (e.g. DECREMENT, INCREMENT, and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is convenient; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0 THEN GOTO xxx is unconditional.
Computer algorithms:
Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it . . . immediately take pen and paper and work through an example". But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a quadratic equation the computer must know how to take a square root. If it does not, then the algorithm, to be effective, must provide a set of rules for extracting a square root. This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor).
Computer algorithms:
But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, the arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters". When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" instruction available rather than just subtraction (or worse: just Minsky's "decrement").
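To make the instruction-set point concrete, the remainder subprogram can be written both ways; a minimal sketch in C, assuming nonnegative inputs and a nonzero divisor (names and test values are illustrative):

```c
#include <stdio.h>

/* Two ways to compute the remainder used inside Euclid's algorithm.
   With only subtraction available, the remainder costs one loop pass per
   multiple of s contained in r; with a "modulus" instruction it is one step. */
unsigned int remainder_by_subtraction(unsigned int r, unsigned int s)
{
    while (r >= s)      /* repeatedly "measure off" s, as in the text */
        r = r - s;
    return r;
}

unsigned int remainder_by_modulus(unsigned int r, unsigned int s)
{
    return r % s;       /* a single machine-level division/modulus operation */
}

int main(void)
{
    printf("%u %u\n", remainder_by_subtraction(3009, 884),
                      remainder_by_modulus(3009, 884));   /* both print 357 */
    return 0;
}
```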
Computer algorithms:
Structured programming, canonical structures: Per the Church–Turing thesis, any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations, Turing completeness requires only four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.Canonical flowchart symbols: The graphical aide called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm–Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols and their use to build the canonical structures are shown in the diagram.
Examples:
Algorithm example One of the simplest algorithms is to find the largest number in a list of numbers in random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in a high-level description in English prose, as:
High-level description:
If there are no numbers in the set, then there is no highest number.
Assume the first number in the set is the largest number in the set.
For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set.
When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set.
(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the more formal coding of the algorithm is given in pseudocode or pidgin code, as in the sketch below.
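A minimal sketch of this largest-number algorithm in C, assuming the numbers are supplied in an array (the function name and test values are illustrative):

```c
#include <stdio.h>

/* Sketch of the largest-number algorithm described above. Returns 1 and
   writes the largest value through *largest, or returns 0 if the list is
   empty (no numbers in the set means there is no highest number). */
int find_largest(const int *list, int n, int *largest)
{
    if (n == 0)
        return 0;                      /* empty set: no highest number  */
    int best = list[0];                /* assume the first is largest   */
    for (int i = 1; i < n; i++)        /* examine each remaining number */
        if (list[i] > best)
            best = list[i];
    *largest = best;
    return 1;
}

int main(void)
{
    int v[] = {3, 17, 4, 12};
    int largest;
    if (find_largest(v, 4, &largest))
        printf("%d\n", largest);       /* prints 17 */
    return 0;
}
```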
Examples:
Euclid's algorithm In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.
Examples:
Euclid poses the problem thus: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including zero. To "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s. In modern words, remainder r = l − q×s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division. For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be zero, AND (ii) the subtraction must be "proper"; i.e., a test must guarantee that the smaller of the two numbers is subtracted from the larger (or the two can be equal so their subtraction yields zero).
Examples:
Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another, it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm.
Examples:
Computer language for Euclid's algorithm Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.
A location is symbolized by upper case letter(s), e.g. S, A, etc.
The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.
Examples:
An inelegant program for Euclid's algorithm The following algorithm is framed as Knuth's four-step version of Euclid's and Nicomachus', but, rather than using division to find the remainder, it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:
INPUT:
1 [Into two locations L and S put the numbers l and s that represent the two lengths]: INPUT L, S
2 [Initialize R: make the remaining length r equal to the starting/initial/input length l]: R ← L
E0: [Ensure r ≥ s.]
3 [Ensure the smaller of the two numbers is in S and the larger in R]: IF R > S THEN the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6: GOTO step 7 ELSE swap the contents of R and S.
4 L ← R (this first step is redundant, but is useful for later discussion).
5 R ← S
6 S ← L
E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.
7 IF S > R THEN done measuring so GOTO 10 ELSE measure again,
8 R ← R − S
9 [Remainder-loop]: GOTO 7.
E2: [Is the remainder zero?]: EITHER (i) the last measure was exact, the remainder in R is zero, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than measuring number in S.
10 IF R = 0 THEN done so GOTO step 15 ELSE CONTINUE TO step 11,
E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously smaller number s; L serves as a temporary location.
11 L ← R
12 R ← S
13 S ← L
14 [Repeat the measuring process]: GOTO 7
OUTPUT:
15 [Done. S contains the greatest common divisor]: PRINT S
DONE:
16 HALT, END, STOP.
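A direct transcription of these steps into C, keeping the unstructured GOTO flow of the original; a sketch only, assuming the inputs are positive integers, with the locations L, S and R rendered as the variables l, s and r:

```c
#include <stdio.h>

/* Sketch of the "Inelegant" Euclid/Nicomachus program above (l, s > 0). */
unsigned int gcd_inelegant(unsigned int l, unsigned int s)
{
    unsigned int r = l;     /* step 2: R <- L                               */
    if (r > s)              /* E0 / step 3: ensure the larger value is in R */
        goto step7;
    l = r; r = s; s = l;    /* steps 4-6: swap R and S via L                */
step7:
    if (s > r)              /* E1 / step 7: done measuring?                 */
        goto step10;
    r = r - s;              /* step 8: R <- R - S                           */
    goto step7;             /* step 9: remainder-loop                       */
step10:
    if (r == 0)             /* E2 / step 10: is the remainder zero?         */
        return s;           /* steps 15-16: S contains the GCD              */
    l = r; r = s; s = l;    /* E3 / steps 11-13: interchange s and r        */
    goto step7;             /* step 14: repeat the measuring process        */
}

int main(void)
{
    printf("%u\n", gcd_inelegant(3009, 884));   /* prints 17 */
    return 0;
}
```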
Examples:
An elegant program for Euclid's algorithm The following version of Euclid's algorithm requires only six core instructions to do what thirteen are required to do by "Inelegant"; worse, "Inelegant" requires more types of instructions. The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) Basic language, the steps are numbered, and the instruction LET [] = [] is the assignment instruction symbolized by ←.
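A minimal sketch of the program's structure, rendered here in C (assuming nonnegative integer inputs that are not both zero; as noted further below, the subtraction scheme loops forever if A = 0):

```c
#include <stdio.h>

/* Sketch of the structure of "Elegant": two co-loops that subtract the
   smaller value from the larger until B reaches zero; A then holds the GCD. */
unsigned int gcd_elegant(unsigned int a, unsigned int b)
{
    while (b != 0) {
        if (a > b)
            a = a - b;    /* the "A > B" co-loop  */
        else
            b = b - a;    /* the "B <= A" co-loop */
    }
    return a;
}

int main(void)
{
    /* test cases mentioned later in the text */
    printf("%u %u %u\n", gcd_elegant(3009, 884),
                         gcd_elegant(40902, 24140),
                         gcd_elegant(14157, 5950));   /* prints 17 34 1 */
    return 0;
}
```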
Examples:
How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses.
Examples:
A version of the same kind can be written directly in programming languages from the C-family.
Testing the Euclid algorithms Does an algorithm do what its author wants it to do? A few test cases usually give some confidence in the core functionality. But tests are not enough. For test cases, one source uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime numbers 14157 and 5950.
Examples:
But "exceptional cases" must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 Flight 501 rocket failure (June 4, 1996).
Examples:
Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm". Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof.
Examples:
Measuring and improving the Euclid algorithms Elegance (compactness) versus goodness (speed): With only six core instructions, "Elegant" is the clear winner, compared to "Inelegant" at thirteen instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.
Examples:
Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved? The compactness of "Inelegant" can be improved by the elimination of five steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm; rather, it can only be done heuristically; i.e., by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from thirteen to eight, which makes it "more elegant" than "Elegant", at nine steps.
Examples:
The speed of "Elegant" can be improved by moving the "B=0?" test outside of the two subtraction loops. This change calls for the addition of three instructions (B = 0?, A = 0?, GOTO). Now "Elegant" computes the example-numbers faster; whether this is always the case for any given A, B, and R, S would require a detailed analysis.
Algorithmic analysis:
It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm which adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. At all times the algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted.
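A minimal sketch of that summation in C: the only state kept between steps is the running sum and the current position, which is what gives the O(1) auxiliary-space bound:

```c
#include <stdio.h>

/* Sums a list of n numbers in O(n) time and O(1) auxiliary space. */
int sum(const int *list, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)   /* one pass over the n elements */
        total += list[i];
    return total;
}

int main(void)
{
    int v[] = {2, 4, 6, 8};
    printf("%d\n", sum(v, 4));    /* prints 20 */
    return 0;
}
```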
Algorithmic analysis:
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n) ) when used for table lookups on sorted lists or arrays.
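A minimal sketch of the two lookups on a sorted array, in C (test values are illustrative): sequential search examines up to n elements, while binary search halves the remaining range at each step.

```c
#include <stdio.h>

int sequential_search(const int *a, int n, int key)   /* O(n) comparisons */
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int binary_search(const int *a, int n, int key)       /* O(log n) comparisons */
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* halve the search range each step */
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid - 1;
    }
    return -1;
}

int main(void)
{
    int sorted[] = {1, 3, 5, 7, 9, 11};
    printf("%d %d\n", sequential_search(sorted, 6, 9),
                      binary_search(sorted, 6, 9));    /* both print 4 */
    return 0;
}
```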
Algorithmic analysis:
Formal versus empirical The analysis, and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, ultimately, most algorithms are usually implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large) but for algorithms designed for fast interactive, commercial or long life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Algorithmic analysis:
Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are not trivial to perform in a fair manner.
Algorithmic analysis:
Execution efficiency To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
Classification:
There are various ways to classify algorithms, each with its own merits.
By implementation One way to classify algorithms is by implementation means.
Classification:
Recursion A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition (also known as termination condition) matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, towers of Hanoi is well understood using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
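A minimal sketch of the naturally recursive solution to towers of Hanoi in C; the check for zero disks is the termination condition:

```c
#include <stdio.h>

/* Moves n disks from peg 'from' to peg 'to', using peg 'via' as scratch. */
void hanoi(int n, char from, char to, char via)
{
    if (n == 0)
        return;                      /* termination condition              */
    hanoi(n - 1, from, via, to);     /* move n-1 disks out of the way      */
    printf("move disk %d: %c -> %c\n", n, from, to);
    hanoi(n - 1, via, to, from);     /* move them onto the target peg      */
}

int main(void)
{
    hanoi(3, 'A', 'C', 'B');         /* prints the 2^3 - 1 = 7 moves       */
    return 0;
}
```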
Classification:
Serial, parallel or distributed Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms are algorithms that take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms are algorithms that use multiple machines connected with a computer network. Parallel and distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. For example, a sorting routine whose work is split across several processor cores is a parallel algorithm. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems.
Classification:
Deterministic or non-deterministic Deterministic algorithms solve the problem with exact decision at every step of the algorithm whereas non-deterministic algorithms solve problems via guessing although typical guesses are made more accurate through the use of heuristics.
Classification:
Exact or approximate While many algorithms reach an exact solution, approximation algorithms seek an approximation that is closer to the true solution. The approximation can be reached by either using a deterministic or a random strategy. Such algorithms have practical value for many hard problems. One example of a problem tackled by approximation algorithms is the knapsack problem, where there is a set of given items and the goal is to pack the knapsack so as to obtain the maximum total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider weights of items as well as their value.
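As an illustration only, a sketch in C of one simple greedy heuristic for the knapsack problem (take items in order of value per unit weight); this is not the best-known approximation scheme, and the item data are illustrative:

```c
#include <stdio.h>
#include <stdlib.h>

struct item { double weight, value; };

/* qsort comparator: descending order of value-to-weight ratio */
static int by_ratio_desc(const void *pa, const void *pb)
{
    const struct item *a = pa, *b = pb;
    double ra = a->value / a->weight, rb = b->value / b->weight;
    return (rb > ra) - (rb < ra);
}

/* Greedily take whole items, most valuable per unit weight first. */
double greedy_knapsack(struct item *items, int n, double capacity)
{
    qsort(items, n, sizeof items[0], by_ratio_desc);
    double total = 0.0;
    for (int i = 0; i < n; i++) {
        if (items[i].weight <= capacity) {   /* item fits: take it whole */
            capacity -= items[i].weight;
            total += items[i].value;
        }
    }
    return total;
}

int main(void)
{
    struct item items[] = {{10, 60}, {20, 100}, {30, 120}};
    /* prints 160.0; the true optimum here is 220 (items 2 and 3),
       which shows the greedy rule is only an approximation */
    printf("%.1f\n", greedy_knapsack(items, 3, 50.0));
    return 0;
}
```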
Classification:
Quantum algorithm Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms that seem inherently quantum, or that use some essential feature of quantum computing such as quantum superposition or quantum entanglement.
Classification:
By design paradigm Another way of classifying algorithms is by their design methodology or paradigm. There are a number of paradigms, each different from the others, and each of these categories includes many different types of algorithms. Some common paradigms are: Brute-force or exhaustive search Brute force is a method of problem-solving that involves systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, as it requires going through every possible combination of variables. However, it is often used when other methods are not available or are too complex. Brute force can be used to solve a variety of problems, including finding the shortest path between two points and cracking passwords.
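A minimal brute-force sketch (the three-letter alphabet and the "secret" are invented for the example): it simply enumerates every combination until a match is found.

```python
from itertools import product


def brute_force_guess(secret: str, alphabet: str = "abc", max_len: int = 4):
    """Systematically try every possible string until the secret is found."""
    for length in range(1, max_len + 1):
        for candidate in product(alphabet, repeat=length):
            guess = "".join(candidate)
            if guess == secret:
                return guess
    return None  # exhausted the search space without success


assert brute_force_guess("cab") == "cab"
```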
Classification:
Divide and conquer A divide-and-conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One example of divide and conquer is merge sorting: the data is divided into segments, each segment is sorted, and a sorted ordering of the entire data is obtained in the conquer phase by merging the segments. A simpler variant of divide and conquer is called a decrease-and-conquer algorithm, which solves a single smaller subproblem and uses its solution to solve the bigger problem. Because divide and conquer splits the problem into multiple subproblems, its conquer stage is more complex than that of decrease-and-conquer algorithms. An example of a decrease-and-conquer algorithm is the binary search algorithm.
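A minimal merge sort sketch in Python for illustration: the divide step splits the list, the recursive calls sort each half, and the conquer step merges the sorted halves.

```python
def merge_sort(data):
    """Divide-and-conquer sorting."""
    if len(data) <= 1:                       # small enough to solve directly
        return data
    mid = len(data) // 2
    left, right = merge_sort(data[:mid]), merge_sort(data[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # conquer: merge the two halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]


assert merge_sort([5, 2, 9, 1, 5]) == [1, 2, 5, 5, 9]
```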
Classification:
Search and enumeration Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking.
Classification:
Randomized algorithm Such algorithms make some choices randomly (or pseudo-randomly). They can be very useful in finding approximate solutions for problems where finding exact solutions can be impractical (see heuristic method below). For some of these problems, it is known that the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question, closely related to the P versus BPP problem. There are two large classes of such algorithms: Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time.
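As a hedged illustration of randomized computation (a Monte Carlo style estimate, not tied to the complexity classes above), the following sketch estimates pi from random samples; its accuracy improves, with high probability, as the sample count grows.

```python
import random


def estimate_pi(samples: int = 100_000) -> float:
    """Randomized estimate of pi: count how many random points in the unit
    square fall inside the quarter circle of radius 1."""
    inside = sum(
        1
        for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples


print(estimate_pi())  # roughly 3.14; the value varies from run to run
```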
Classification:
Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP. Reduction of complexity This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by that of the resulting reduced algorithm. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element of the sorted list (the cheap portion). This technique is also known as transform and conquer.
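A minimal sketch of the transform-and-conquer example above: the median is found by reducing the problem to sorting (the expensive part) and then indexing the middle element (the cheap part).

```python
def median_by_sorting(values):
    """Selection by reduction to sorting."""
    ordered = sorted(values)           # expensive part: O(n log n) sort
    return ordered[len(ordered) // 2]  # cheap part: middle element (upper median)


assert median_by_sorting([7, 1, 5, 3, 9]) == 5
```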
Classification:
Backtracking In this approach, candidate solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
Classification:
Optimization problems For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following: Linear programming When searching for optimal solutions to a linear function subject to linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem additionally requires that one or more of the unknowns must be an integer then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
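A small illustrative linear program (the objective and constraints are invented for the example, and SciPy is assumed to be installed); scipy.optimize.linprog minimizes, so the objective is negated to maximize:

```python
# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
from scipy.optimize import linprog

result = linprog(
    c=[-3, -2],                     # negated objective coefficients
    A_ub=[[1, 1], [1, 3]],          # left-hand sides of the <= constraints
    b_ub=[4, 6],                    # right-hand sides
    bounds=[(0, None), (0, None)],  # x and y are non-negative
)
print(result.x, -result.fun)        # optimal point and maximum objective value
```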
Classification:
Dynamic programming When a problem shows optimal substructures—meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.
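A minimal Floyd–Warshall sketch (the three-vertex graph is invented for the example): the dynamic programming table dist is filled in by reusing previously computed shortest paths that pass through earlier vertices.

```python
INF = float("inf")


def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n x n matrix of direct edge
    weights (INF where no edge) and is overwritten with path lengths."""
    n = len(dist)
    for k in range(n):              # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist


graph = [[0, 3, INF], [INF, 0, 1], [2, INF, 0]]
print(floyd_warshall(graph))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```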
Classification:
The greedy method A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining substructures, but rather than solving every subproblem it builds a solution incrementally, at each step making the choice that looks best at that moment and never revisiting earlier choices. For some problems this finds the optimal solution, while for others it stops at a local optimum, that is, at a solution that the algorithm cannot improve further but that is not the true optimum. The most popular use of greedy algorithms is finding the minimum spanning tree, where the optimal solution is guaranteed with this method. Kruskal's, Prim's and Sollin's (Borůvka's) algorithms are greedy algorithms that solve this optimization problem; Huffman coding is another well-known greedy algorithm.
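A minimal sketch of Kruskal's greedy algorithm (the example graph is invented): edges are taken cheapest first, and an edge is kept only if it joins two previously unconnected components, which is checked with a small union-find structure.

```python
def kruskal(n, edges):
    """Greedy minimum spanning tree.

    n: number of vertices labelled 0..n-1; edges: list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for weight, u, v in sorted(edges):     # greedy choice: cheapest edge first
        root_u, root_v = find(u), find(v)
        if root_u != root_v:               # keeping the edge creates no cycle
            parent[root_u] = root_v
            tree.append((weight, u, v))
    return tree


print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]))
```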
Classification:
The heuristic method In optimization problems, heuristic algorithms can be used to find a solution close to the optimal solution in cases where finding the optimal solution is impractical. These algorithms work by getting closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. Their merit is that they can find a solution very close to the optimal solution in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
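A minimal simulated-annealing sketch (the cost function, cooling schedule and step size are invented for the example): worse neighbours are occasionally accepted, with a probability that shrinks as the temperature cools, which lets the search escape some local optima.

```python
import math
import random


def simulated_annealing(cost, start, neighbour, steps=10_000, temperature=1.0):
    """Heuristic minimisation of cost starting from start."""
    current, best = start, start
    for step in range(1, steps + 1):
        t = temperature / step                     # simple cooling schedule
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate                    # accept (possibly worse) move
        if cost(current) < cost(best):
            best = current
    return best


bumpy = lambda x: x * x + 2 * math.sin(5 * x)      # a bumpy test function
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(bumpy, start=3.0, neighbour=step))
```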
Classification:
By field of study Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques.
Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry but is now used in solving a broad range of problems in many fields.
By complexity Algorithms can be classified by the amount of time they need to complete compared to their input size: Constant time: if the time needed by the algorithm is the same, regardless of the input size. E.g. an access to an array element.
Logarithmic time: if the time is a logarithmic function of the input size. E.g. binary search algorithm.
Linear time: if the time is proportional to the input size. E.g. the traverse of a list.
Polynomial time: if the time is a power of the input size. E.g. the bubble sort algorithm has quadratic time complexity.
Classification:
Exponential time: if the time is an exponential function of the input size. E.g. brute-force search. Some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. Owing to this, it was found more suitable to classify the problems themselves, rather than the algorithms, into equivalence classes based on the complexity of the best possible algorithms for them.
Classification:
Continuous algorithms The adjective "continuous" when applied to the word "algorithm" can mean: An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations—such algorithms are studied in numerical analysis; or An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer.
Algorithm = Logic + Control:
In logic programming, an algorithm is viewed as having both "a logic component, which specifies the knowledge to be used in solving problems, and a control component, which determines the problem-solving strategies by means of which that knowledge is used." The Euclidean algorithm illustrates this view of an algorithm. Here, in typical logic programming style, the function gcd(A, B) = C is represented as a relation gcd(A, B, C): These sentences have a purely logical (and "declarative") reading, as a recursive (or inductive) definition, which is independent of how the logic is used to solve problems: The gcd of A and A is A.
Algorithm = Logic + Control:
The gcd of A and B is C, if A > B, and the gcd of A-B and B is C.
Algorithm = Logic + Control:
The gcd of A and B is C, if B > A, and the gcd of A and B-A is C. Different problem-solving strategies turn the logic into different algorithms. In particular, backward reasoning using SLD resolution turns the logic into the following version of the Euclidean algorithm: To find the gcd C of two given numbers A and B: If A = B, then C = A.
Algorithm = Logic + Control:
If A is greater than B, then find the gcd of A-B and B, which is C.
If B is greater than A, then find the gcd of A and B-A, which is C.
Algorithm = Logic + Control:
However, to implement the algorithm in the logic programming language Prolog, the embedded subtractions, A-B and B-A, have to be extracted and written as separate conditions: Other problem solving strategies can also be used for the same logic. In theory, given a pair of integers A and B, forward reasoning could be used to generate all instances of the gcd relation, terminating when the desired gcd of A and B is generated. Of course, forward reasoning is entirely useless in this example. But in other cases, as in the definition of the Fibonacci sequence and as in Datalog, forward reasoning can be an efficient problem solving strategy.
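For illustration (a sketch, not the article's Prolog code), the backward-reasoning version of the algorithm can be written directly in Python for positive integers, mirroring the three clauses above:

```python
def gcd(a: int, b: int) -> int:
    """Subtraction form of the Euclidean algorithm (positive integers only):
    if the numbers are equal, that is the answer; otherwise subtract the
    smaller from the larger and repeat."""
    if a == b:
        return a
    if a > b:
        return gcd(a - b, b)
    return gcd(a, b - a)


assert gcd(21, 14) == 7
```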
Algorithm = Logic + Control:
One of the advantages of the logic programming representation of the algorithm is that its purely logical reading makes it easier to verify that the algorithm is correct relative to the standard non-recursive definition of gcd. Here is the standard definition written in Prolog: This definition, which is the specification of the Euclidean algorithm, is also executable in Prolog: Backward reasoning treats the specification as the brute-force algorithm that iterates through all of the integers C between 1 and A, checking whether C divides both A and B, and then for each such C iterates again through all of the integers D between 1 and A, until it finds a C such that C is greater than or equal to all of the D that also divide both A and B. Although this algorithm is hopelessly inefficient, it shows that formal specifications can often be written in logic programming form, and they can be executed by Prolog, to check that they correctly represent informal requirements.
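A sketch of the same idea in Python (not the article's Prolog specification): the non-recursive specification, executed directly, becomes a hopelessly inefficient but obviously correct brute-force algorithm, useful for checking a faster implementation against it.

```python
def gcd_by_specification(a: int, b: int) -> int:
    """The gcd of a and b is the greatest c between 1 and a that divides
    both a and b; executing this specification directly is brute force."""
    return max(c for c in range(1, a + 1) if a % c == 0 and b % c == 0)


assert gcd_by_specification(21, 14) == 7
```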
Legal issues:
Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent.
Legal issues:
Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).
History: Development of the notion of "algorithm":
Ancient Near East The earliest evidence of algorithms is found in the Babylonian mathematics of ancient Mesopotamia (modern Iraq). A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC described the earliest division algorithm. During the Hammurabi dynasty c. 1800 – c. 1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events. Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus c. 1550 BC. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC). Discrete and distinguishable symbols Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations.
History: Development of the notion of "algorithm":
Manipulation of symbols as "place holders" for numbers: algebra Muhammad ibn Mūsā al-Khwārizmī, a Persian mathematician, wrote the Al-jabr in the 9th century. The terms "algorism" and "algorithm" are derived from the name al-Khwārizmī, while the term "algebra" is derived from the book Al-jabr. In Europe, the word "algorithm" was originally used to refer to the sets of rules and techniques used by Al-Khwarizmi to solve algebraic equations, before later being generalized to refer to any set of rules or techniques. This eventually culminated in Leibniz's notion of the calculus ratiocinator (c. 1680): A good century and a half ahead of his time, Leibniz proposed an algebra of logic, an algebra that would specify the rules for manipulating logical concepts in the manner that ordinary algebra specifies the rules for manipulating numbers.
History: Development of the notion of "algorithm":
Cryptographic algorithms The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.
History: Development of the notion of "algorithm":
Mechanical contrivances with discrete states The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular, the verge escapement that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines"—the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th century. Lovelace is credited with the first creation of an algorithm intended for processing on a computer—Babbage's analytical engine, the first device considered a real Turing-complete computer instead of just a calculator—and is sometimes called "history's first programmer" as a result, though a full implementation of Babbage's second device would not be realized until decades after her lifetime.
History: Development of the notion of "algorithm":
Logical machines 1870 – Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what is now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically ... More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine" His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc.] ...". With this machine he could analyze a "syllogism or any other simple logical argument".This machine he displayed in 1870 before the Fellows of the Royal Society. Another logician John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine".Jacquard loom, Hollerith punch cards, telegraphy and telephony – the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers. By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (c. 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (c. 1910) with its punched-paper use of Baudot code on tape.
History: Development of the notion of "algorithm":
Telephone-switching networks of electromechanical relays (invented 1835) was behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome' use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".The mathematician Martin Davis observes the particular importance of the electromechanical relay (with its two "binary states" open and closed): It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned." Mathematics during the 19th century up to the mid-20th century Symbols and rules: In rapid succession, the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language".But Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic. ... in which we see a "'formula language', that is a lingua characterica, a language written with special symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules". The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913).
History: Development of the notion of "algorithm":
The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular, the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard Paradox. The resultant considerations led to Kurt Gödel's paper (1931)—he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers.
History: Development of the notion of "algorithm":
Effective calculability: In an effort to solve the Entscheidungsproblem defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: Alonzo Church, Stephen Kleene and J.B. Rosser's λ-calculus; a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene; Church's proof that the Entscheidungsproblem was unsolvable; Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction; Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine"—in effect almost identical to Post's "formulation"; J. Barkley Rosser's definition of "effective method" in terms of "a machine"; Kleene's proposal of a precursor to "Church thesis" that he called "Thesis I"; and, a few years later, Kleene's renaming his Thesis "Church's Thesis" and proposing "Turing's Thesis".
History: Development of the notion of "algorithm":
Emil Post (1936) and Alan Turing (1936–37, 1939) Emil Post (1936) described the actions of a "computer" (human being) as follows: "...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions.His symbol space would be "a two-way infinite sequence of spaces or boxes ... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time. ... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke."One box is to be singled out and called the starting point. ... a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise, the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes..."A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process terminates only when it comes to the direction of type (C ) [i.e., STOP]". See more at Post–Turing machineAlan Turing's work preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter, and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'". Given the prevalence at the time of Morse code, telegraphy, ticker tape machines, and teletypewriters, it is quite possible that all were influences on Turing during his youth.
History: Development of the notion of "algorithm":
Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers.
History: Development of the notion of "algorithm":
"Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book...I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite..."The behavior of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares that the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite..."Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided."Turing's reduction yields the following: "The simple operations must therefore include: "(a) Changes of the symbol on one of the observed squares "(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares."It may be that some of these change necessarily invoke a change of state of mind. The most general single operation must, therefore, be taken to be one of the following: "(A) A possible change (a) of symbol together with a possible change of state of mind.
History: Development of the notion of "algorithm":
"(B) A possible change (b) of observed squares, together with a possible change of state of mind""We may now construct a machine to do the work of this computer."A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it: "A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Though it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematical expressible definition ... [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing, and Post] ... We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability...
History: Development of the notion of "algorithm":
"† We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions".
History: Development of the notion of "algorithm":
J. B. Rosser (1939) and S. C. Kleene (1943) J. Barkley Rosser defined an "effective [mathematical] method" in the following manner (italicization added): "'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–226)Rosser's footnote No. 5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular, Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion, in particular, Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–37) in their mechanism-models of computation.
History: Development of the notion of "algorithm":
Stephen C. Kleene defined as his now-famous "Thesis I" known as the Church–Turing thesis. But he did this in the following context (boldface in original): "12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, "is the predicate value true?"" (Kleene 1943:273) History after 1950 A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is on-going because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments about artificial intelligence). For more, see Algorithm characterizations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HxD**
HxD:
HxD is a freeware hex editor, disk editor, and memory editor developed by Maël Hörz for Windows. It can open files larger than 4 GiB and open and edit the raw contents of disk drives, as well as display and edit the memory used by running processes. Among other features, it can calculate various checksums, compare files, or shred files.HxD is distributed as freeware and is available in multiple languages of which the English version is the first in the category of coding utilities on Download.com. The c't magazine has featured HxD in several issues and online-specials.
Features:
Disk editor (both Windows 9x/NT and up)
Memory editor
Data-folding to show/hide memory sections
Data inspector: converts current data into many types, for editing and viewing
Open-source plugin framework to extend with new, custom type converters
Multiple files are presented using a mixture of tabbed document interface and multiple document interface
Large files up to 8 EiB can be loaded and edited
Partial file loading for performance
Search and replace for several data types (including Unicode strings, floats and integers)
Calculation and checking of checksums and hashes
File utility operations:
File shredder for safe file deletion
Splitting or concatenating of files
File compare (only byte by byte)
Importing and exporting of hex files (Intel HEX, Motorola S-record)
Exporting of data to several formats: source code (C, Pascal, Java, C#, VB.NET, PureBasic), formatted output (plain text, HTML, RTF, TeX)
Statistical view: graphical representation of the character distribution
Available in 32 and 64-bit (including memory editor) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fire Serpent**
Fire Serpent:
Fire Serpent is a 2007 Sci Fi Channel monster movie directed by John Terlesky.
Plot:
A solar flare from the sun sends a serpentine alien composed of fire to Earth, where it begins to wreak havoc throughout a small community. During its search for more fuel to consume, it stumbles upon a large military oil reserve. It soon becomes clear that an old man may hold the key to destroying it in the form of a Halogen Gun, which may be used as a makeshift fire extinguisher of sorts. A small group of citizens decides to use this technology to make a stand against the creature, only to face additional resistance from the beast, as well as a government employee who voluntarily helps the snake because he believes it is the spirit of a god.
Cast:
Nicholas Brendon as Jake Relm Sandrine Holt as Christina Andrews Randolph Mantooth as Dutch Fallon Diego Klattenhoff as Young Dutch Fallon Robert Beltran as Cooke Lisa Langlois as Heather Allman Patrice Goodman as Billie Steve Boyle as Dave Massaro Richard Clarkin as Kohler Michelle Morgan as Donna Marks Vito Rezza as The Bartender Joseph Motiki as Lieutenant Oliver Marco Bianco as State Trooper Parsons
Reception:
David Cornelius from DVD Talk gave the film a negative review, writing, "Fire Serpent is a wretched sci-fi/horror mess, with laughable CGI effects, an empty plot, and zero suspense. (In other words, it's your run-of-the-mill Sci-Fi Channel production. Zing!) It's the kind of B picture that fails not because it's cheap, but because it's terminally dull." Jon Condit from Dread Central awarded the film a negative score of 1 out of five, calling it "dull". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DICE framework**
DICE framework:
The DICE framework (duration, integrity, commitment and effort) is a tool originally developed by Perry Keenan, Kathleen Conlon, and Alan Jackson (all current or former partners at the Boston Consulting Group). It was originally published in the Harvard Business Review (HBR) article "The Hard Side of Change" in 2005 and has been republished in HBR's "Lead Change--Successfully", in HBR's OnPoint Magazine, and in HBR's "10 Must Reads on Change Management". The DICE framework was awarded a patent in 2014. A DICE score is an indicator of the likely success of a project based on various measures. The DICE framework aims for consistency in evaluating various projects with subjective inputs and can be used to track and manage portfolios of projects. Using this framework, leaders can predict project outcomes and allocate resources strategically to maximize delivery of an overall program or portfolio of initiatives.
DICE framework:
Although originally developed at the Boston Consulting Group (BCG), this framework has become widely adopted and is used by many companies and professionals.
DICE acronym:
The acronym DICE stands for:
Duration (D): either the total duration of short projects, or the time between two milestones on longer projects
Team performance integrity (I): the project team's ability to execute successfully, with specific emphasis on the ability of the project leader
Commitment (C): levels of support, composed of two factors: C1, visible backing from the sponsor and senior executives for the change; and C2, support from those who are impacted by the change
Effort (E): how much effort it will require to implement (above and beyond business as usual)
Calculation:
Based on statistical analysis of the outcomes of change projects, success can be determined by assessing four factors (duration, team performance integrity, commitment, and effort). A DICE score between 7 and 14 is in the "Win" Zone (very likely to succeed), a DICE score between 14 and 17 falls in the "Worry" Zone (hard to predict success), and a DICE score higher than 17 falls in the "Woe" Zone (indicating high unpredictability or likely failure). The DICE score is calculated according to the following formula: D + (2 x I) + (2 x C1) + C2 + E
Duration: < 2 months = 1; 2-4 months = 2; 4-8 months = 3; > 8 months = 4
Team performance integrity: Very good = 1; Good = 2; Average = 3; Poor = 4
Commitment (senior management): Clearly and strongly communicate the need = 1; Seem to want success = 2; Neutral = 3; Reluctant = 4
Commitment (local): Eager = 1; Willing = 2; Reluctant = 3; Strongly reluctant = 4
Effort: < 10% additional = 1; 10-20% additional = 2; 20-40% additional = 3; > 40% additional = 4 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
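As a hedged illustration of the published formula (the helper name and example ratings are invented, and the treatment of a score of exactly 14, which the zone boundaries above leave ambiguous, is an assumption of this sketch):

```python
def dice_score(duration, integrity, senior_commitment, local_commitment, effort):
    """Compute a DICE score from the five ratings (each between 1 and 4)
    using D + (2 x I) + (2 x C1) + C2 + E, and map it to a zone.

    Assumption: a score of exactly 14 is treated as the top of the Win zone."""
    score = duration + 2 * integrity + 2 * senior_commitment + local_commitment + effort
    if score <= 14:
        zone = "Win"
    elif score <= 17:
        zone = "Worry"
    else:
        zone = "Woe"
    return score, zone


# A 3-month project (2), good team (2), strong sponsor backing (1),
# willing local staff (2) and roughly 15% extra effort (2):
print(dice_score(2, 2, 1, 2, 2))  # (12, 'Win')
```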
**Market cross**
Market cross:
A market cross, or in Scots, a mercat cross, is a structure used to mark a market square in market towns, where historically the right to hold a regular market or fair was granted by the monarch, a bishop or a baron.
History:
Market crosses were originally from the distinctive tradition in Early Medieval Insular art of free-standing stone standing or high crosses, often elaborately carved, which goes back to the 7th century. Market crosses can be found in many market towns in Britain. British emigrants often installed such crosses in their new cities, and several can be found in Canada and Australia.The market cross could be representing the official site for a medieval town or village market, granted by a charter, or it could have once represented a traditional religious marking at a crossroads.
Design:
These structures range from carved stone spires, obelisks or crosses, common to small market towns such as that in Stalbridge, Dorset, to large, ornate covered structures, such as the Chichester Cross, or Malmesbury Market Cross. They can also be constructed from wood; an example is at Wymondham, Norfolk.
Towns and villages in Great Britain with a market cross:
Examples include Frome, Quainton and Uttoxeter. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flu-flu arrow**
Flu-flu arrow:
A flu-flu arrow is a type of arrow specifically designed to travel a short distance. Such arrows are particularly useful when shooting at aerial targets or for certain types of recreational archery where the arrow must not travel too far. One of the main uses of these arrows is that they do not get lost as easily if they miss the target.
Flu-flu arrow:
A flu-flu is a design of fletching, normally made by using long sections of feathers; in most cases six or more sections are used, rather than the traditional three. Alternatively, two long feathers can be spiraled around the end of the arrow shaft. In either case, the excessive fletching serves to generate more drag and slow the arrow down rapidly after a short distance (about 30 m or 33 yards). Recreational flu-flus usually have rubber points to add weight and keep the flight slower.
Uses:
Flu-flu arrows were and still are used to hunt birds. When taking aim at the bird the archer must lead the bird and release the arrow in anticipation of the bird's travel path. Because flu-flu arrows fly short distances, it is easy for the archer to recover the arrow if the target is missed. Special bird points are used that entangle the bird as it flies into a wire harness attached to the end of the arrow.
Uses:
These arrows often have a blunt point. If shooting at squirrels or other tree dwellers, the blunt point will prevent the arrow from sticking in the branch or trunk of the tree, making it easier to retrieve. The blunt points were also used for other reasons. "Although the first game preserves in England were established by William the Conqueror at this time, the Saxon was permitted to shoot birds and small beasts in his fields and therefore was allowed to use a blunt arrow, headed with a lead tip or pilum, hence our term pile, or target point. If found with a sharp arrowhead, the so-called broad-head used for killing the king's deer, he was promptly hanged."Another author said: "After shooting with bows and arrows for a short time, the archer no doubt will marvel at the way an arrow can lose itself in even the shortest grass and how a pointed arrow can bury itself for an inch [2.5 cm] or so in a tree trunk or branch so that it takes a half-hour or more to dig it out. For this kind of shooting, blunt arrows cannot be beat. These blunt arrows have tremendous hitting power. They do not sneak under the grass as easily as do other arrows, but the chances of getting a rabbit with a blunt arrow are much better than with a hunting point. These blunt arrows will stand a lot of hard knocks too."Flu-flu arrows are often used for children's archery, and can be used to play flu-flu golf. Similar to disc golf, the player must go to where the arrow landed, pick it up, shoot it again, and repeat this process until he reaches a specified place. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Animal (journal)**
Animal (journal):
Animal: An International Journal of Animal Bioscience is an academic journal established February 2007 and published monthly by Cambridge University Press.
It is owned by British Society of Animal Science (BSAS), Institut national de recherche pour l’agriculture, l’alimentation et l’environnement (INRAE) and European Association for Animal Production (EAAP).
It is the result of the merger of three journals: Animal Science (ISSN 1357-7298, BSAS); Animal Research (ISSN 0003-424X / ISSN 1627-3583, INRA); and Reproduction, Nutrition, Development (ISSN 0926-5287 / ISSN 0926-5309, INRA). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Immediate constituent analysis**
Immediate constituent analysis:
In linguistics, immediate constituent analysis or IC analysis is a method of sentence analysis that was proposed by Wilhelm Wundt and named by Leonard Bloomfield. The process reached a full-blown strategy for analyzing sentence structure in the distributionalist works of Zellig Harris and Charles F. Hockett, and in glossematics by Knud Togeby. The practice is now widespread. Most tree structures employed to represent the syntactic structure of sentences are products of some form of IC-analysis. The process and result of IC-analysis can, however, vary greatly based upon whether one chooses the constituency relation of phrase structure grammars (= constituency grammars) or the dependency relation of dependency grammars as the underlying principle that organizes constituents into hierarchical structures.
IC-analysis in phrase structure grammars:
Given a phrase structure grammar (= constituency grammar), IC-analysis divides up a sentence into major parts or immediate constituents, and these constituents are in turn divided into further immediate constituents. The process continues until irreducible constituents are reached, i.e., until each constituent consists of only a word or a meaningful part of a word. The end result of IC-analysis is often presented in a visual diagrammatic form that reveals the hierarchical immediate constituent structure of the sentence at hand. These diagrams are usually trees. For example: This tree illustrates the manner in which the entire sentence is divided first into the two immediate constituents this tree and illustrates IC-analysis according to the constituency relation; these two constituents are further divided into the immediate constituents this and tree, and illustrates IC-analysis and according to the constituency relation; and so on.
IC-analysis in phrase structure grammars:
An important aspect of IC-analysis in phrase structure grammars is that each individual word is a constituent by definition. The process of IC-analysis always ends when the smallest constituents are reached, which are often words (although the analysis can also be extended into the words to acknowledge the manner in which words are structured). The process is, however, different in dependency grammars, since many individual words do not end up as constituents in dependency grammars.
IC-analysis in dependency grammars:
As a rule, dependency grammars do not employ IC-analysis, as the principle of syntactic ordering is not inclusion but, rather, asymmetrical dominance-dependency between words. When an attempt is made to incorporate IC-analysis into a dependency-type grammar, the results are some kind of a hybrid system. In actuality, IC-analysis is different in dependency grammars. Since dependency grammars view the finite verb as the root of all sentence structure, they cannot and do not acknowledge the initial binary subject-predicate division of the clause associated with phrase structure grammars. What this means for the general understanding of constituent structure is that dependency grammars do not acknowledge a finite verb phrase (VP) constituent and many individual words also do not qualify as constituents, which means in turn that they will not show up as constituents in the IC-analysis. Thus in the example sentence This tree illustrates IC-analysis according to the dependency relation, many of the phrase structure grammar constituents do not qualify as dependency grammar constituents: This IC-analysis does not view the finite verb phrase illustrates IC-analysis according to the dependency relation nor the individual words tree, illustrates, according, to, and relation as constituents.
IC-analysis in dependency grammars:
While the structures that IC-analysis identifies for dependency and constituency grammars differ in significant ways, as the two trees just produced illustrate, both views of sentence structure acknowledge constituents. The constituent is defined in a theory-neutral manner: Constituent: a given word/node plus all the words/nodes that that word/node dominates. This definition is neutral with respect to the dependency vs. constituency distinction. It allows one to compare the IC-analyses across the two types of structure. A constituent is always a complete tree or a complete subtree of a tree, regardless of whether the tree at hand is a constituency or a dependency tree.
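A small illustrative sketch (the toy tree and class are invented for the example) of the theory-neutral definition: a constituent is simply a node together with every node it dominates, i.e. a complete subtree.

```python
class Node:
    """A word/node in a (constituency or dependency) tree."""

    def __init__(self, word, children=()):
        self.word = word
        self.children = list(children)


def constituent(node):
    """Return the node's word plus the words of everything it dominates."""
    words = [node.word]
    for child in node.children:
        words.extend(constituent(child))
    return words


# A toy dependency-style tree rooted in the finite verb "illustrates".
tree = Node("illustrates", [Node("tree", [Node("This")]), Node("IC-analysis")])
print(constituent(tree.children[0]))  # ['tree', 'This'] -> the constituent "This tree"
```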
Constituency tests:
The IC-analysis for a given sentence is arrived at usually by way of constituency tests. Constituency tests (e.g. topicalization, clefting, pseudoclefting, pro-form substitution, answer ellipsis, passivization, omission, coordination, etc.) identify the constituents, large and small, of English sentences. Two illustrations of the manner in which constituency tests deliver clues about constituent structure and thus about the correct IC-analysis of a given sentence are now given. Consider the phrase The girl in the following trees: The acronym BPS stands for "bare phrase structure", which is an indication that the words are used as the node labels in the tree. Again, focusing on the phrase The girl, the tests unanimously confirm that it is a constituent as both trees show: ...the girl is happy - Topicalization (invalid test because test constituent is already at front of sentence) It is the girl who is happy. - Clefting (The one)Who is happy is the girl. - Pseudoclefting She is happy. - Pro-form substitution Who is happy? -The girl. - Answer ellipsisBased on these results, one can safely assume that the noun phrase The girl in the example sentence is a constituent and should therefore be shown as one in the corresponding IC-representation, which it is in both trees. Consider next what these tests tell us about the verb string is happy: *...is happy, the girl. - Topicalization *It is is happy that the girl. - Clefting *What the girl is is happy. - Pseudoclefting *The girl so/that/did that. - Pro-form substitution What is the girl? -*Is happy. - Answer ellipsisThe star * indicates that the sentence is not acceptable English. Based on data like these, one might conclude that the finite verb string is happy in the example sentence is not a constituent and should therefore not be shown as a constituent in the corresponding IC-representation. Hence this result supports the IC-analysis in the dependency tree over the one in the constituency tree, since the dependency tree does not view is happy as a constituent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Snijders Blok-Campeau syndrome**
Snijders Blok-Campeau syndrome:
Snijders Blok–Campeau syndrome is a genetic disorder caused by mutations in the CHD3 gene. It is characterized by impaired intellectual development, macrocephaly, dysarthria and apraxia of speech, and certain distinctive facial features. Snijders Blok–Campeau syndrome typically results from a de novo mutation, which generally occurs during the early embryonic stages of development or during the formation of a parent's reproductive cells. This allows for prenatal diagnosis.
Signs and symptoms:
Snijders Blok–Campeau syndrome almost always comes with both physical and intellectual disabilities. Those with the condition will typically have trouble in the development of speech and language. Around one half typically have some form of macrocephaly, while around one third show signs of autism or similar conditions.
Cause:
The CHD3 gene encodes a protein required for chromatin remodeling, a process that regulates gene expression. Through chromatin remodeling, the CHD3 protein affects how tightly DNA is packed into chromosomes. A mutation of the CHD3 gene alters this remodeling activity, causing over- or underexpression of other genes.
History:
Due to the rarity of the condition, with only approximately 60 cases documented in scientific literature, Snijders Blok–Campeau syndrome was only discovered in 2018 by clinical geneticist Lot Snijders Blok and clinician-scientist Philippe M Campeau. The mutation was first documented in the paper "CHD3 helicase domain mutations cause a neurodevelopmental syndrome with macrocephaly and impaired speech and language". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Roque Redonde Formation**
Roque Redonde Formation:
The Roque Redonde Formation is a geologic formation in France. It preserves fossils dating back to the Carboniferous period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Superpartner**
Superpartner:
In particle physics, a superpartner (also sparticle) is a class of hypothetical elementary particles predicted by supersymmetry, which, among other applications, is one of the well-studied ways to extend the Standard Model of high-energy physics. When considering extensions of the Standard Model, the s- prefix from sparticle is used to form the names of superpartners of the Standard Model fermions (sfermions), e.g. the stop squark. The superpartners of Standard Model bosons have an -ino suffix appended to their name (bosinos), e.g. gluino; the set of all gauge superpartners is called the gauginos.
Theoretical predictions:
According to the supersymmetry theory, each fermion should have a partner boson, the fermion's superpartner, and each boson should have a partner fermion. Exact unbroken supersymmetry would predict that a particle and its superpartners would have the same mass. No superpartners of the Standard Model particles have yet been found. This may indicate that supersymmetry is incorrect, or it may be because supersymmetry is not an exact, unbroken symmetry of nature. If superpartners are found, their masses would indicate the scale at which supersymmetry is broken. For particles that are real scalars (such as an axion), there is a fermion superpartner as well as a second, real scalar field. For axions, these particles are often referred to as axinos and saxions.
Theoretical predictions:
In extended supersymmetry there may be more than one superparticle for a given particle. For instance, with two copies of supersymmetry in four dimensions, a photon would have two fermion superpartners and a scalar superpartner. In zero dimensions it is possible to have supersymmetry, but no superpartners. However, this is the only situation where supersymmetry does not imply the existence of superpartners.
Recreating superpartners:
If the supersymmetry theory is correct, it should be possible to recreate these particles in high-energy particle accelerators. Doing so will not be an easy task; these particles may have masses up to a thousand times greater than their corresponding "real" particles. Some researchers have hoped the Large Hadron Collider at CERN might produce evidence for the existence of superpartner particles. However, as of 2018, no such evidence has been found.
**SVIL**
SVIL:
Supervillin is a protein that in humans is encoded by the SVIL gene.
Function:
This gene encodes a bipartite protein with distinct amino- and carboxy-terminal domains. The amino-terminus contains nuclear localization signals and the carboxy-terminus contains numerous consecutive sequences with extensive similarity to proteins in the gelsolin family of actin-binding proteins, which cap, nucleate, and/or sever actin filaments. The gene product is tightly associated with both actin filaments and plasma membranes, suggesting a role as a high-affinity link between the actin cytoskeleton and the membrane. Its function may include recruitment of actin and other cytoskeletal proteins into specialized structures at the plasma membrane and in the nuclei of growing cells. Two transcript variants encoding different isoforms of supervillin have been described.
Interactions:
SVIL has been shown to interact with Androgen receptor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CBD-DMH**
CBD-DMH:
Cannabidiol-dimethylheptyl (CBD-DMH or DMH-CBD) is a synthetic homologue of cannabidiol where the pentyl chain has been replaced by a dimethylheptyl chain. Several isomers of this compound are known. The most commonly used isomer in research is (−)-CBD-DMH, which has the same stereochemistry as natural cannabidiol, and a 1,1-dimethylheptyl side chain. This compound is not psychoactive and acts primarily as an anandamide reuptake inhibitor, but is more potent than cannabidiol as an anticonvulsant and has around the same potency as an antiinflammatory. Unexpectedly the “unnatural” enantiomer (+)-CBD-DMH, which has reversed stereochemistry from cannabidiol, was found to be a directly acting cannabinoid receptor agonist with a Ki of 17.4nM at CB1 and 211nM at CB2, and produces typical cannabinoid effects in animal studies, as does its 7-OH derivative.
CBD-DMH:
Another closely analogous compound has also been described, with the double bond in the cyclohexene ring shifted to between the 1,6-positions rather than the 2,3-positions (i.e. analogous to synthetic THC analogues such as parahexyl), the isopropenyl group saturated to isopropyl, and a 1,2-dimethylheptyl side chain. It is synthesized by Birch reduction from the 1,2-dimethylheptyl analogue of cannabidiol. This compound also produces potent cannabinoid-like effects in animals, but has three chiral centers and is composed of a mixture of eight stereoisomers, which have not been studied individually, so it is not known which stereoisomers are active. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Complexometric indicator**
Complexometric indicator:
A complexometric indicator is an ionochromic dye that undergoes a definite color change in presence of specific metal ions. It forms a weak complex with the ions present in the solution, which has a significantly different color from the form existing outside the complex.
Complexometric indicators are also known as pM indicators.
Complexometric titration:
In analytical chemistry, complexometric indicators are used in complexometric titration to indicate the exact moment when all the metal ions in the solution are sequestered by a chelating agent (most usually EDTA). Such indicators are also called metallochromic indicators.
When the indicator is present in another liquid phase in equilibrium with the titrated phase, it is described as an extraction indicator.
Some complexometric indicators are sensitive to air and are destroyed. When such solution loses color during titration, a drop or two of fresh indicator may have to be added.
Examples:
Complexometric indicators are water-soluble organic molecules. Some examples are: calcein with EDTA for calcium; the Patton-Reeder indicator with EDTA for calcium in the presence of magnesium; curcumin for boron, which forms rosocyanine (although the red color change of curcumin also occurs for pH > 8.4); Eriochrome Black T for aluminium, cadmium, zinc, calcium and magnesium; Fast Sulphon Black with EDTA for copper; hematoxylin for copper; murexide for calcium and the rare earths, but also for copper, nickel, cobalt, and thorium; and xylenol orange for gallium, indium and scandium.
Redox indicators:
In some settings, when the titrated system is a redox system whose equilibrium is influenced by the removal of the metal ions, a redox indicator can function as a complexometric indicator.
**RAM drive**
RAM drive:
A RAM drive (also called a RAM disk) is a block of random-access memory (primary storage or volatile memory) that a computer's software treats as if it were a disk drive (secondary storage). RAM drives provide high-performance temporary storage for demanding tasks and protect non-volatile storage devices from wearing down, since RAM is not prone to wear from writing, unlike non-volatile flash memory. In a sense they are the reverse of virtual memory: a RAM drive uses fast, volatile memory as if it were slow, non-volatile storage, whereas virtual memory uses slow, non-volatile storage as if it were fast memory.
RAM drive:
It is sometimes referred to as a virtual RAM drive or software RAM drive to distinguish it from a hardware RAM drive that uses separate hardware containing RAM, which is a type of battery-backed solid-state drive.
RAM drive:
Historically, mass storage devices based on primary storage were conceived to bridge the performance gap between internal memory and secondary storage devices. With the advent of solid-state drives this advantage lost much of its appeal. However, solid-state devices do suffer from wear caused by frequent writing, whereas writes to primary memory do not (or to a far lesser extent), so RAM drives still offer an advantage for storing frequently changing data, such as temporary or cached information.
Performance:
The performance of a RAM drive is generally orders of magnitude faster than other forms of digital storage, such as SSD, tape, optical, hard disk, and floppy drives. This performance gain is due to multiple factors, including access time, maximum throughput, and file system characteristics.
Performance:
File access time is greatly reduced since a RAM drive is solid state (no moving parts). A physical hard drive, optical drive (e.g., CD-ROM, DVD, and Blu-ray) or other media (e.g. magnetic bubble, acoustic storage, magnetic tape) must move the head or medium to a particular position before reading or writing can occur. RAM drives can access data with only the memory address, eliminating this latency.
Performance:
Second, the maximum throughput of a RAM drive is limited by the speed of the RAM, the data bus, and the CPU of the computer. Other forms of storage media are further limited by the speed of the storage bus, such as IDE (PATA), SATA, USB or FireWire. Compounding this limitation is the speed of the actual mechanics of the drive motors, heads, or eyes.
Performance:
Third, the file system in use, such as NTFS, HFS, UFS, ext2, etc., uses extra accesses, reads and writes to the drive, which although small, can add up quickly, especially in the event of many small files vs. few larger files (temporary internet folders, web caches, etc.).
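These differences can be made concrete with a small timing experiment. The following is a minimal, illustrative Python sketch (not from the original text) that writes and then reads back a file on a RAM-backed path and on a disk-backed path; it assumes /dev/shm is a RAM-backed tmpfs mount (common on Linux) and that /var/tmp resides on a disk or SSD, both of which are assumptions about the host system.

```python
import os
import time

def write_read_seconds(path, size_mb=256, block=1 << 20):
    """Write `size_mb` MiB to `path`, read it back, and return the elapsed seconds."""
    chunk = os.urandom(block)                 # one 1 MiB block of random data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                  # push the writes past the OS page cache
    with open(path, "rb") as f:
        while f.read(block):
            pass
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

# Assumed paths: /dev/shm is typically a RAM-backed tmpfs on Linux,
# while /var/tmp is assumed to be backed by a physical disk or SSD.
print("RAM-backed path :", round(write_read_seconds("/dev/shm/ramdrive_bench.bin"), 3), "s")
print("Disk-backed path:", round(write_read_seconds("/var/tmp/ramdrive_bench.bin"), 3), "s")
```

On a typical system the RAM-backed run completes several times faster, though the exact ratio depends on the drive hardware, the file system, and how much of the disk-backed write is absorbed by the page cache.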
Performance:
Because the storage is in RAM, it is volatile memory, which means it will be lost in the event of power loss, whether intentional (computer reboot or shutdown) or accidental (power failure or system crash). This is, in general, a weakness (the data must periodically be backed up to a persistent-storage medium to avoid loss), but is sometimes desirable: for example, when working with a decrypted copy of an encrypted file, or using the RAM drive to store the system's temporary files.
Performance:
In many cases, the data stored on the RAM drive is created from data permanently stored elsewhere, for faster access, and is re-created on the RAM drive when the system reboots.
Apart from the risk of data loss, the major limitation of RAM drives is capacity, which is constrained by the amount of installed RAM. Multi-terabyte SSD storage has become common, but RAM is still measured in gigabytes.
Performance:
RAM drives use normal system memory as if it were a partition on a physical hard drive rather than accessing the data bus normally used for secondary storage. Though RAM drives can often be supported directly in the operating system via special mechanisms in the OS kernel, it is generally simpler to access a RAM drive through a virtual device driver. This makes the non-disk nature of RAM drives invisible to both the OS and applications.
Performance:
Usually no battery backup is needed due to the temporary nature of the information stored in the RAM drive, but an uninterruptible power supply can keep the system running during a short power outage.
Some RAM drives use a compressed file system such as cramfs to allow compressed data to be accessed on the fly, without decompressing it first. This is convenient because RAM drives are often small due to the higher price per byte than conventional hard drive storage.
History and operating system specifics:
The first software RAM drive for microcomputers was invented and written by Jerry Karlin in the UK in 1979/80. The software, known as the Silicon Disk System, was further developed into a commercial product and marketed by JK Systems Research, which became Microcosm Research Ltd when the company was joined by Peter Cheesewright of Microcosm Ltd. The idea was to enable the early microcomputers to use more RAM than the CPU could directly address. Making bank-switched RAM behave like a disk drive was much faster than using actual disk drives - especially in those days, before hard drives were readily available on such machines.
History and operating system specifics:
The Silicon Disk was launched in 1980, initially for the CP/M operating system and later for MS-DOS. Due to the limitations in memory addressing on Atari 8-bit, Apple II series and Commodore computers, a RAM drive was also a popular application on the Atari 130XE, Commodore 64 and Commodore 128 systems with RAM Expansion Units and on Apple II series computers with more than 64kB of RAM. Apple Computer supported a software RAM drive natively in ProDOS: on systems with 128kB or more of RAM, ProDOS would automatically allocate a RAM drive named /RAM.
History and operating system specifics:
IBM added a RAM drive named VDISK.SYS to PC DOS (version 3.0) in August 1984, which was the first DOS component to use extended memory. VDISK.SYS was not available in Microsoft's MS-DOS as it, unlike most components of early versions of PC DOS, was written by IBM. Microsoft included the similar program RAMDRIVE.SYS in MS-DOS 3.2 (released in 1986), which could also use expanded memory. It was discontinued in Windows 7. DR-DOS and the DR family of multi-user operating systems also came with a RAM disk named VDISK.SYS. In Multiuser DOS, the RAM disk defaults to the drive letter M: (for memory drive). AmigaOS has had a built in RAM drive since the release of version 1.1 in 1985 and still has it in AmigaOS 4.1 (2010). Apple Computer added the functionality to the Apple Macintosh with System 7's Memory control panel in 1991, and kept the feature through the life of Mac OS 9. Mac OS X users can use the hdid, newfs (or newfs hfs) and mount utilities to create, format and mount a RAM drive.
History and operating system specifics:
A RAM drive innovation introduced in 1986 but made generally available in 1987 by Perry Kivolowitz for AmigaOS was the ability of the RAM drive to survive most crashes and reboots. Called the ASDG Recoverable Ram Disk, the device survived reboots by allocating memory dynamically in the reverse order of default memory allocation (a feature supported by the underlying OS) so as to reduce memory fragmentation. A "super-block" was written with a unique signature which could be located in memory upon reboot. The super-block, and all other RRD disk "blocks", maintained checksums to enable the invalidation of the disk if corruption was detected. At first, the ASDG RRD was locked to ASDG memory boards and used as a selling feature. Later, the ASDG RRD was made available as shareware carrying a suggested donation of 10 dollars. The shareware version appeared on Fred Fish Disks 58 and 241. AmigaOS itself would gain a Recoverable Ram Disk (called "RAD") in version 1.3. Many Unix and Unix-like systems provide some form of RAM drive functionality, such as /dev/ram on Linux, or md(4) on FreeBSD. RAM drives are particularly useful in high-performance, low-resource applications for which Unix-like operating systems are sometimes configured. There are also a few specialized "ultra-lightweight" Linux distributions which are designed to boot from removable media and be stored in a ramdisk for the entire session.
History and operating system specifics:
Dedicated hardware RAM drives There have been RAM drives which use DRAM memory that is exclusively dedicated to function as an extremely low latency storage device. This memory is isolated from the processor and not directly accessible in the same manner as normal system memory.
History and operating system specifics:
An early example of a hardware RAM drive was introduced by Assimilation Process, Inc. in 1986 for the Macintosh. Called the "Excalibur", it was an external 2MB RAM drive, and retailed for between $599 and $699 US. With the RAM capacity expandable in 1MB increments, its internal battery was said to be effective for between 6 and 8 hours, and, unusual for the time, it was connected via the Macintosh floppy disk port. In 2002, Cenatek produced the Rocket Drive, which had four DIMM slots for PC133 memory, with up to a maximum of four gigabytes of storage. At the time, common desktop computers used 64 to 128 megabytes of PC100 or PC133 memory. The one gigabyte PC133 modules (the largest available at the time) cost approximately $1,300 (equivalent to $2,115 in 2022). A fully outfitted Rocket Drive with four GB of storage would have cost $5,600 (equivalent to $9,111 in 2022). In 2005, Gigabyte Technology produced the i-RAM, which functioned essentially identically to the Rocket Drive, except upgraded to use the newer DDR memory technology, though also limited to a maximum of 4 GB capacity. For both of these devices, the dynamic RAM requires continuous power to retain data; when power is lost, the data fades away. For the Rocket Drive, there was a connector for an external power supply separate from the computer, and the option for an external battery to retain data during a power failure. The i-RAM included a small battery directly on the expansion board, for 10-16 hours of protection.
History and operating system specifics:
Both devices used the SATA 1.0 interface to transfer data from the dedicated RAM drive to the system. The SATA interface was a slow bottleneck that limited the maximum performance of both RAM drives, but these drives still provided exceptionally low data access latency and high sustained transfer speeds, compared to mechanical hard drives.
History and operating system specifics:
In 2006, Gigabyte Technology produced the GC-RAMDISK, the second-generation successor to the i-RAM. It has a maximum of 8 GB capacity, twice that of the i-RAM, and used a SATA-II port, again twice the interface speed of the i-RAM. One of its best selling points is that it can be used as a boot device. In 2007, ACard Technology produced the ANS-9010 Serial ATA RAM disk, with a maximum of 64 GB. Quoting the tech report, the ANS-9010 "has eight DDR2 DIMM slots and support for up to 8 GB of memory per slot. The ANS-9010 also features a pair of Serial ATA ports, allowing it to function as a single drive or masquerade as a pair of drives that can easily be split into an even faster RAID 0 array." In 2009, Acard Technology produced the ACARD ANS-9010BA 5.25 Dynamic SSD SATA-II RAM Disk, also with a maximum of 64 GB. It uses a single SATA-II port.
History and operating system specifics:
Both variants are equipped with one or more CompactFlash card interfaces located on the front panel, allowing data stored on the volatile RAM drive to be copied to the CompactFlash card in case of power failure and low backup battery. Two pushbuttons located on the front panel allow the user to manually back up or restore data on the RAM drive. The CompactFlash card itself is not accessible to the user by normal means, as the CF card is solely intended for RAM backup and restoration. The CF card's capacity must meet or exceed the RAM modules' total capacity in order to work effectively as a reliable backup.
History and operating system specifics:
In 2009, DDRdrive, LLC produced the DDRdrive X1, which was claimed to be the fastest solid state drive in the world. The drive is a primary 4GB DDR dedicated RAM drive for regular use, which can back up to and recall from a 4GB SLC NAND drive. The intended market is for keeping and recording log files. A host power loss triggers the DDRdrive X1 to back up volatile data to its on-board non-volatile storage: with the use of a battery backup, the data can be saved to the internal 4GB SSD in 60 seconds, and it can be recovered back into RAM once power is restored.
**Amazon ElastiCache**
Amazon ElastiCache:
Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services (AWS). The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis (also called "ElastiCache for Redis"). As a web service running in the computing cloud, Amazon ElastiCache is designed to simplify the setup, operation, and scaling of memcached and Redis deployments. Complex administration processes like patching software, backing up and restoring data sets and dynamically adding or removing capabilities are managed automatically. Scaling ElastiCache resources can be performed by a single API call. Amazon ElastiCache was first released on August 22, 2011, supporting memcached. This was followed by support for reserved instances on April 5, 2012 and Redis on September 4, 2013.
Uses:
As a managed database service with multiple supported engines, Amazon ElastiCache has a wide range of uses. Performance acceleration Database limitations are often a bottleneck for application performance. By placing Amazon ElastiCache between an application and its database tier, database operations can be accelerated.
Cost reduction Using ElastiCache for database performance acceleration can significantly reduce the infrastructure needed to support the database. In many cases, the cost savings outweigh the cache costs. Expedia was able to use ElastiCache to reduce provisioned DynamoDB capacity by 90%, reducing total database cost by 6x.
Processing time series data Using the Redis engine, ElastiCache can rapidly process time-series data, quickly selecting newest or oldest records or events within range of a point-in-time.
Leaderboards Leaderboards are an effective way to show a user quickly where they currently stand within a gamified system. For systems with large numbers of gamers, calculating and publishing player ranks can be challenging. Using Amazon ElastiCache with the Redis engine can enable high-speed leaderboards at scale.
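As an illustration, a Redis sorted set keeps a leaderboard ordered on every write. The sketch below uses the open-source redis-py client; the endpoint hostname, key name, and player names are placeholders and not part of the original text (any Redis-compatible server, including an ElastiCache for Redis endpoint, would behave the same way).

```python
import redis

# Hypothetical ElastiCache for Redis endpoint; any Redis server works identically.
r = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com",
                port=6379, decode_responses=True)

def record_score(player, score):
    # ZADD keeps the set ordered by score, so ranking is maintained on write.
    r.zadd("leaderboard", {player: score})

def top_players(n=10):
    # Highest scores first, each entry returned with its score attached.
    return r.zrevrange("leaderboard", 0, n - 1, withscores=True)

def rank_of(player):
    # 0-based rank from the top, or None if the player has no score yet.
    return r.zrevrank("leaderboard", player)

record_score("alice", 4200)
record_score("bob", 3100)
print(top_players(3))   # e.g. [('alice', 4200.0), ('bob', 3100.0)]
print(rank_of("bob"))   # e.g. 1
```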
Rate limitation Some APIs only allow a limited number of requests per time period. The Amazon ElastiCache for Redis engine can use incremental counters and other tools to throttle API access so that these restrictions are met.
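A common pattern is a fixed-window counter: increment a per-caller key and let it expire at the end of the window. The sketch below is a minimal example with the redis-py client; the endpoint, key prefix, and limits are assumptions for illustration.

```python
import redis

r = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com", port=6379)

def allow_request(caller_id, limit=100, window_seconds=60):
    """Fixed-window rate limiter: allow at most `limit` calls per `window_seconds`."""
    key = f"ratelimit:{caller_id}"
    count = r.incr(key)                # atomic increment; creates the key at 1 if absent
    if count == 1:
        r.expire(key, window_seconds)  # start the window on the first request
    return count <= limit

if allow_request("api-key-42"):
    print("handle the request")
else:
    print("HTTP 429: too many requests")
```

A production limiter would typically set the expiry atomically with the increment (for example in a Lua script or a pipeline) so that a key cannot be left without an expiry if the client fails between the two calls.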
Uses:
Atomic counter Programs can use incremental counters to limit allowed quantities, such as the maximum number of students enrolled in a course or ensuring a game has at least 2 but not more than 8 players. Using counters can create a race condition where an operation is allowed because a counter was not updated promptly. Using the ElastiCache for Redis atomic counter functions, where a single operation both checks and increments the counter's value, prevents race conditions.
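For example, capping course enrollment can be done by incrementing first and rolling back if the cap is exceeded, since the server applies each INCR atomically and no two clients can claim the same seat. The sketch below uses redis-py; the endpoint, key names, and the cap of 30 are assumptions for illustration.

```python
import redis

r = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com", port=6379)

def try_enroll(course_id, max_students=30):
    """Atomically claim a seat; roll back the increment if the course is already full."""
    key = f"enrolled:{course_id}"
    seats_taken = r.incr(key)    # INCR is atomic on the server: no lost updates
    if seats_taken > max_students:
        r.decr(key)              # give the claimed seat back
        return False
    return True

if try_enroll("bio-101"):
    print("enrollment confirmed")
else:
    print("course is full")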
Uses:
Chat rooms and message boards ElastiCache for Redis supports publish-subscribe patterns, which enable the creation of chat rooms and message boards where messages are automatically distributed to interested users.
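A minimal sketch of the publish/subscribe pattern with the redis-py client follows; the channel name and endpoint are assumptions, and in a real chat application the subscriber loop would run in each connected client rather than in the same process as the publisher.

```python
import redis

r = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com",
                port=6379, decode_responses=True)

# Subscriber side: register interest in a chat-room channel.
listener = r.pubsub()
listener.subscribe("chat:general")

# Publisher side: every subscriber currently listening receives the message.
r.publish("chat:general", "hello, room")

# Drain events; the first event on the channel is the subscribe confirmation.
for event in listener.listen():
    if event["type"] == "message":
        print("received:", event["data"])
        break
```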
Deployment options:
Amazon ElastiCache can use on-demand cache nodes or reserved cache nodes.
Deployment options:
On-demand nodes provide cache capacity by the hour, with resources in the AWS cloud assigned when a cache node is provisioned. An on-demand node can be removed from service by its owner at any time. Each month, the owner will be billed for the hours used. Reserved nodes require a 1-year or 3-year commitment, which dedicates cache resources to the owner. The hourly cost of reserved nodes is significantly lower than the hourly cost of on-demand nodes.
Performance:
An efficient cache can significantly increase an application's performance and user navigation speed. Amazon CloudWatch exposes ElastiCache performance metrics that can be tracked.
Performance:
Key performance metrics fall into several groups. Client metrics (measuring the volume of client connections and requests): the number of current client connections to the cache, and the Get and Set commands received by the cache. Cache performance: hits, misses, replication lag, and latency. Memory metrics: memory usage, evictions, the amount of free memory available on the host, swap usage, and the memory fragmentation ratio. Other host-level metrics: CPU utilization and the number of bytes read from and written to the network by the host. Metric collection: many ElastiCache metrics can be collected from AWS via CloudWatch or directly from the cache engine, whether Redis or Memcached, with a monitoring tool integrating with it. Using the AWS Management Console: the online management console is the simplest way to monitor ElastiCache with CloudWatch. It allows basic automated alerts to be set up and gives a visual picture of recent changes in individual metrics. CloudWatch Command Line Interface: metrics related to ElastiCache can also be retrieved using command lines, which is useful for spot checks and ad hoc investigations.
Performance:
Monitoring tool integrated with CloudWatch: the third way to collect ElastiCache metrics is via a dedicated monitoring tool integrating with Amazon CloudWatch.
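These CloudWatch metrics can also be pulled programmatically with the AWS SDK. The following is a minimal sketch using boto3 in Python; the region, cluster id, and time window are placeholders chosen for illustration, while the namespace and metric name follow CloudWatch's published ElastiCache metrics.

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average CPU utilization for one cache node over the last hour, in 5-minute buckets.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ElastiCache",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "CacheClusterId", "Value": "my-cluster-001"}],  # placeholder id
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "%")
```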
Notable customers:
Users of Amazon ElastiCache include Airbnb, Expedia, Zynga, FanDuel, and Mapbox.
Limitations:
As an AWS service, ElastiCache is designed to be accessed exclusively from within AWS, though it is possible to connect the service to applications and databases that are not hosted by AWS.
Alternatives:
Other vendors provide cloud data cache services comparable to Amazon ElastiCache, including Azure Cache for Redis, Redis Ltd (company behind open source Redis and Redis Enterprise), Redis To Go, IBM Compose, Oracle Application Container Cloud Service, and Rackspace ObjectRocket.
**Genetic correlation**
Genetic correlation:
In multivariate quantitative genetics, a genetic correlation (denoted rg or ra) is the proportion of variance that two traits share due to genetic causes, i.e. the correlation between the genetic influences on one trait and the genetic influences on a different trait, estimating the degree of pleiotropy or causal overlap. A genetic correlation of 0 implies that the genetic effects on one trait are independent of the other, while a correlation of 1 implies that all of the genetic influences on the two traits are identical. The bivariate genetic correlation can be generalized to inferring genetic latent variable factors across > 2 traits using factor analysis. Genetic correlation models were introduced into behavioral genetics in the 1970s–1980s.
Genetic correlation:
Genetic correlations have applications in validation of genome-wide association study (GWAS) results, breeding, prediction of traits, and discovering the etiology of traits & diseases.
Genetic correlation:
They can be estimated using individual-level data from twin studies and molecular genetics, or even with GWAS summary statistics. Genetic correlations have been found to be common in non-human genetics and to be broadly similar to their respective phenotypic correlations, and also found extensively in human traits, dubbed the 'phenome'. This finding of widespread pleiotropy has implications for artificial selection in agriculture, interpretation of phenotypic correlations, social inequality, attempts to use Mendelian randomization in causal inference, the understanding of the biological origins of complex traits, and the design of GWASes.
Genetic correlation:
A genetic correlation is to be contrasted with an environmental correlation between the environments affecting two traits (e.g. if poor nutrition in a household caused both lower IQ and height); a genetic correlation between two traits can contribute to the observed (phenotypic) correlation between the two traits, but genetic correlations can also be opposite to observed phenotypic correlations if the environmental correlation is sufficiently strong in the other direction, perhaps due to tradeoffs or specialization. The observation that genetic correlations usually mirror phenotypic correlations is known as "Cheverud's Conjecture" and has been confirmed in animals and humans, which also showed they are of similar sizes; for example, in the UK Biobank, of 118 continuous human traits, only 29% of their intercorrelations have opposite signs, and a later analysis of 17 high-quality UKBB traits reported correlations near unity.
Interpretation:
Genetic correlations are not the same as heritability: they concern the overlap between the two sets of genetic influences, not their absolute magnitude; two traits could both be highly heritable yet not be genetically correlated, or have small heritabilities and be completely correlated (as long as the heritabilities are non-zero).
Interpretation:
For example, consider two traits – dark skin and black hair. These two traits may individually have a very high heritability (most of the population-level variation in the trait due to genetic differences, or in simpler terms, genetics contributes significantly to these two traits), however, they may still have a very low genetic correlation if, for instance, these two traits were being controlled by different, non-overlapping, non-linked genetic loci.
Interpretation:
A genetic correlation between two traits will tend to produce phenotypic correlations – e.g. the genetic correlation between intelligence and SES or education and family SES implies that intelligence/SES will also correlate phenotypically. The phenotypic correlation will be limited by the degree of genetic correlation and also by the heritability of each trait. The expected phenotypic correlation is the 'bivariate heritability' and can be calculated as the square roots of the heritabilities multiplied by the genetic correlation: $r_{ph} = \sqrt{h_x^2} \cdot r_g \cdot \sqrt{h_y^2}$. (Using a Plomin example: for two traits with heritabilities of 0.60 and 0.23, a genetic correlation of 0.75, and an observed phenotypic correlation of r = 0.45, the bivariate heritability would be $\sqrt{0.60} \times 0.75 \times \sqrt{0.23} \approx 0.28$, so 0.28/0.45 = 62% of the observed phenotypic correlation is due to genetics.)
Cause:
Genetic correlations can arise due to: linkage disequilibrium (two neighboring genes tend to be inherited together, each affecting a different trait); biological pleiotropy (a single gene having multiple otherwise unrelated biological effects, or shared regulation of multiple genes); or mediated pleiotropy (a gene causes trait X, and trait X causes trait Y).
They can also arise from biases: population stratification such as ancestry or assortative mating (sometimes called "gametic phase disequilibrium"), spurious stratification such as ascertainment bias/self-selection or Berkson's paradox, or misclassification of diagnoses.
Uses:
Causes of changes in traits Genetic correlations are scientifically useful because genetic correlations can be analyzed over time within an individual longitudinally (e.g. intelligence is stable over a lifetime, due to the same genetic influences – childhood genetically correlates 0.62 with old age), or across studies or populations or ethnic groups/races, or across diagnoses, allowing discovery of whether different genes influence a trait over a lifetime (typically, they do not), whether different genes influence a trait in different populations due to differing local environments, whether there is disease heterogeneity across times or places or sex (particularly in psychiatric diagnoses there is uncertainty whether one country's 'autism' or 'schizophrenia' is the same as another's or whether diagnostic categories have shifted over time/place leading to different levels of ascertainment bias), and to what degree traits like autoimmune or psychiatric disorders or cognitive functioning meaningfully cluster due to sharing a biological basis and genetic architecture (for example, reading & mathematics disability genetically correlate, consistent with the Generalist Genes Hypothesis, and these genetic correlations explain the observed phenotypic correlations or 'co-morbidity'; IQ and specific measures of cognitive performance such as verbal, spatial, and memory tasks, reaction time, long-term memory, executive function etc. all show high genetic correlations, as do neuroanatomical measurements, and the correlations may increase with age, with implications for the etiology & nature of intelligence). This can be an important constraint on conceptualizations of the two traits: traits which seem different phenotypically but which share a common genetic basis require an explanation for how these genes can influence both traits.
Uses:
Boosting GWASes Genetic correlations can be used in GWASes by using polygenic scores or genome-wide hits for one (often more easily measured) trait to increase the prior probability of variants for a second trait; for example, since intelligence and years of education are highly genetically correlated, a GWAS for education will inherently also be a GWAS for intelligence and be able to predict variance in intelligence as well, and the strongest SNP candidates can be used to increase the statistical power of a smaller GWAS. A combined analysis on the latent trait can be done in which each measured genetically-correlated trait helps reduce measurement error and boosts the GWAS's power considerably (e.g. Krapohl et al. 2017, using elastic net and multiple polygenic scores, improving intelligence prediction from 3.6% of variance to 4.8%; Hill et al. 2017b uses MTAG to combine 3 g-loaded traits of education, household income, and a cognitive test score to find 107 hits & doubles predictive power of intelligence), or one can do a GWAS for multiple traits jointly. Genetic correlations can also quantify the contribution of correlations <1 across datasets which might create a false "missing heritability", by estimating the extent to which differing measurement methods, ancestral influences, or environments create only partially overlapping sets of relevant genetic variants.
Uses:
Breeding Hairless dogs have imperfect teeth; long-haired and coarse-haired animals are apt to have, as is asserted, long or many horns; pigeons with feathered feet have skin between their outer toes; pigeons with short beaks have small feet, and those with long beaks large feet. Hence if man goes on selecting, and thus augmenting any peculiarity, he will almost certainly modify unintentionally other parts of the structure, owing to the mysterious laws of correlation.
Uses:
Genetic correlations are also useful in applied contexts such as plant/animal breeding by allowing substitution of more easily measured but highly genetically correlated characteristics (particularly in the case of sex-linked or binary traits under the liability-threshold model, where differences in the phenotype can rarely be observed but another highly correlated measure, perhaps an endophenotype, is available in all individuals), compensating for different environments than the breeding was carried out in, making more accurate predictions of breeding value using the multivariate breeder's equation as compared to predictions based on the univariate breeder's equation using only per-trait heritability & assuming independence of traits, and avoiding unexpected consequences by taking into consideration that artificial selection for/against trait X will also increase/decrease all traits which positively/negatively correlate with X. The limits to selection set by the inter-correlation of traits, and the possibility for genetic correlations to change over long-term breeding programs, lead to Haldane's dilemma limiting the intensity of selection and thus progress.
Uses:
Breeding experiments on genetically correlated traits can measure the extent to which correlated traits are inherently developmentally linked & response is constrained, and which can be dissociated. Some traits, such as the size of eyespots on the butterfly Bicyclus anynana can be dissociated in breeding, but other pairs, such as eyespot colors, have resisted efforts.
Mathematical definition:
Given a genetic covariance matrix, the genetic correlation is computed by standardizing this, i.e., by converting the covariance matrix to a correlation matrix. Generally, if $\Sigma$ is a genetic covariance matrix and $D = \sqrt{\operatorname{diag}(\Sigma)}$, then the correlation matrix is $D^{-1} \Sigma D^{-1}$. For a given genetic covariance $\operatorname{cov}_g$ between two traits, one with genetic variance $V_{g1}$ and the other with genetic variance $V_{g2}$, the genetic correlation is computed in the same way as the correlation coefficient: $r_g = \operatorname{cov}_g / \sqrt{V_{g1} V_{g2}}$.
Computing the genetic correlation:
Genetic correlations require a genetically informative sample. They can be estimated in breeding experiments on two traits of known heritability by selecting on one trait and measuring the change in the other trait (allowing the genetic correlation to be inferred), family/adoption/twin studies (analyzed using SEMs or DeFries–Fulker extremes analysis), molecular estimation of relatedness such as GCTA, methods employing polygenic scores like HDL (High-Definition Likelihood), LD score regression, BOLT-REML, CPBayes, or HESS, comparison of genome-wide SNP hits in GWASes (as a loose lower bound), and phenotypic correlations of populations with at least some related individuals. As with estimating SNP heritability and genetic correlation, the better computational scaling and the ability to estimate using only established summary association statistics is a particular advantage for HDL and LD score regression over competing methods. Combined with the increasing availability of GWAS summary statistics or polygenic scores from datasets like the UK Biobank, such summary-level methods have led to an explosion of genetic correlation research since 2015. The methods are related to Haseman–Elston regression & PCGC regression. Such methods are typically genome-wide, but it is also possible to estimate genetic correlations for specific variants or genome regions. One way to consider it is using trait X in twin 1 to predict trait Y in twin 2 for monozygotic and dizygotic twins (i.e. using twin 1's IQ to predict twin 2's brain volume); if this cross-correlation is larger for the more genetically-similar monozygotic twins than for the dizygotic twins, the similarity indicates that the traits are not genetically independent and there is some common genetics influencing both IQ and brain volume. (Statistical power can be boosted by using siblings as well.) Genetic correlations are affected by methodological concerns; underestimation of heritability, such as due to assortative mating, will lead to overestimates of longitudinal genetic correlation, and moderate levels of misdiagnoses can create pseudo correlations. As they are affected by the heritabilities of both traits, genetic correlations have low statistical power, especially in the presence of measurement errors biasing heritability downwards, because "estimates of genetic correlations are usually subject to rather large sampling errors and therefore seldom very precise": the standard error of an estimate $r_g$ is $\sigma(r_g) = \frac{1 - r_g^2}{\sqrt{2}} \sqrt{\frac{\sigma(h_x^2)\,\sigma(h_y^2)}{h_x^2\, h_y^2}}$. (Larger genetic correlations and heritabilities will be estimated more precisely.)
However, inclusion of genetic correlations in an analysis of a pleiotropic trait can boost power for the same reason that multivariate regressions are more powerful than separate univariate regressions.Twin methods have the advantage of being usable without detailed biological data, with human genetic correlations calculated as far back as the 1970s and animal/plant genetic correlations calculated in the 1930s, and require sample sizes in the hundreds for being well-powered, but they have the disadvantage of making assumptions which have been criticized, and in the case of rare traits like anorexia nervosa it may be difficult to find enough twins with a diagnosis to make meaningful cross-twin comparisons, and can only be estimated with access to the twin data; molecular genetic methods like GCTA or LD score regression have the advantage of not requiring specific degrees of relatedness and so can easily study rare traits using case-control designs, which also reduces the number of assumptions they rely on, but those methods could not be run until recently, require large sample sizes in the thousands or hundreds of thousands (to obtain precise SNP heritability estimates, see the standard error formula), may require individual-level genetic data (in the case of GCTA but not LD score regression).
Computing the genetic correlation:
More concretely, if two traits, say height and weight, have a given additive genetic variance–covariance matrix, then the genetic correlation (0.55 in the worked example) is the off-diagonal element of the standardized matrix. In practice, structural equation modeling applications such as Mx or OpenMx (and, before that, historically, LISREL) are used to calculate both the genetic covariance matrix and its standardized form. In R, cov2cor() will standardize the matrix.
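The same standardization takes only a few lines of Python; the matrix values below are hypothetical (not taken from the text) and are chosen so that the off-diagonal of the standardized matrix comes out to 0.55, mirroring what R's cov2cor() does.

```python
import numpy as np

def cov2cor(sigma):
    """Standardize a covariance matrix into a correlation matrix: D^-1 . Sigma . D^-1."""
    d_inv = np.diag(1.0 / np.sqrt(np.diag(sigma)))
    return d_inv @ sigma @ d_inv

# Hypothetical additive genetic variance-covariance matrix for two traits
# (e.g. height and weight); the numbers are illustrative only.
sigma_g = np.array([[0.64, 0.44],
                    [0.44, 1.00]])

print(cov2cor(sigma_g))
# [[1.   0.55]
#  [0.55 1.  ]]
```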
Computing the genetic correlation:
Typically, published reports will provide genetic variance components that have been standardized as a proportion of total variance (for instance in an ACE twin study model standardised as a proportion of V-total = A+C+E). In this case, the metric for computing the genetic covariance (the variance within the genetic covariance matrix) is lost (because of the standardizing process), so you cannot readily estimate the genetic correlation of two traits from such published models. Multivariate models (such as the Cholesky decomposition) will, however, allow the viewer to see shared genetic effects (as opposed to the genetic correlation) by following path rules. It is important therefore to provide the unstandardised path coefficients in publications.
Cited sources:
Falconer, Douglas Scott (1960). Introduction to Quantitative Genetics.
Plomin, Robert; DeFries, John C.; Knopik, Valerie S.; Neiderhiser, Jenae M. (2012). Behavioral Genetics. ISBN 978-1-4292-4215-8.
**T-cell depletion**
T-cell depletion:
T-cell depletion (TCD) is the process of T cell removal or reduction, which alters the immune system and its responses. Depletion can occur naturally (e.g. in HIV) or be induced for treatment purposes. TCD can reduce the risk of graft-versus-host disease (GVHD), which is a common issue in transplants. The idea that TCD of the allograft can eliminate GVHD was first introduced in 1958. In humans, the first TCD was performed in severe combined immunodeficiency patients.
Depletion methods:
T cell depletion methods can be broadly categorized as either physical or immunological. Examples of physical separation include counterflow centrifugal elutriation, fractionation on density gradients, or differential agglutination with lectins followed by rosetting with sheep red blood cells. Immunological methods utilize antibodies directed against the T cells, either alone or in conjunction with homologous, heterologous, or rabbit complement factors. In addition, these techniques can be used in combination. These techniques can be performed either in vivo, ex vivo, or in vitro. Ex vivo techniques enable a more accurate count of the T cells in a graft and also offer the option to 'add back' a set number of T cells if necessary. Currently, ex vivo techniques most commonly employ positive or negative selection methods using immunomagnetic separation. In contrast, in vivo TCD is performed using anti-T cell antibodies or, most recently, post-HSCT cyclophosphamide. The method by which depletion occurs can heavily affect the results. Ex vivo TCD is predominantly used in GVHD prevention, where it offers the best results. However, complete ex vivo TCD, especially in acute myeloid leukemia (AML) patients, usually does not improve survival. In vivo depletion often uses monoclonal antibodies (e.g., alemtuzumab) or heteroantisera. In haploidentical hematopoietic stem cell transplantation, in vivo TCD suppressed lymphocytes early on; however, the incidence rate of cytomegalovirus (CMV) reactivations is elevated. These problems can be overcome by combining a TCD haploidentical graft with post-HSCT cyclophosphamide. In contrast, both in vivo TCD with alemtuzumab and in vitro TCD with CD34+ selection performed comparably. Although TCD is beneficial for preventing GVHD, it has drawbacks: it can cause a delay in the recovery of the transplanted individual's immune system and a decreased graft-versus-tumor effect. This problem is partially answered by more selective depletion, such as depletion of CD3+ cells or of αβ T-cells and CD19 B cells, which preserves other important cells of the immune system. Another method is the addition of cells back into the graft after a comprehensive TCD; examples are the re-introduction of natural killer (NK) cells, γδ T-cells and regulatory T cells (Tregs). Early on it was apparent that TCD was good for preventing GVHD but also led to increased graft rejection; this problem can be solved by transplanting more hematopoietic stem cells. This procedure is called 'megadose transplantation', and it prevents rejection because the stem cells have an ability (i.e. veto cell killing) to protect themselves from the host's immune system. Experiments show that transplantation of other types of veto cells along with megadose haploidentical HSCT makes it possible to reduce the toxicity of the conditioning regimen, which makes this treatment much safer and more applicable to many diseases. These veto cells can also exert a graft-versus-tumor effect.
Role in disease:
In HIV HIV has been confirmed to target CD4+ T cells and destroy them, making T cell depletion an important hallmark of HIV. In comparison to HIV-negative individuals, CD4+ T cells proliferate at a higher rate in those who are HIV-positive. Apoptosis also occurs more frequently in HIV+ patients. Depletion of regulatory T cells increases immune activation. Glut1 regulation is associated with the activation of CD4+ T cells, thus its expression can be used to track the loss of CD4+ T cells during HIV infection. Antiretroviral therapy, the most common treatment for patients with HIV, has been shown to restore CD4+ T cell counts. The body responds to T cell depletion by producing an equal amount of T cells; however, over time, an individual's immune system can no longer continue to replace CD4+ T cells. This is called the "tap and drain hypothesis." In cancer TCD's role in cancer is increasing with the rise of immunotherapies being investigated, specifically those that target self-antigens. One example is antigen-specific CD4+ T cell tolerance, which serves as the primary mechanism restricting immunotherapeutic responses to the endogenous self antigen guanylyl cyclase c (GUCY2C) in colorectal cancer. However, in some cases, selective CD4+ T cell tolerance provides a unique therapeutic opportunity to maximize self antigen-targeted immune and antitumor responses without inducing autoimmunity by incorporating self antigen-independent CD4+ T cell epitopes into cancer vaccines. In a mammary carcinoma model, depletion of CD25+ regulatory T cells increases the amount of CD8+CD11c+PD110 cells, which target and kill the tumors.
Role in disease:
In lupus Phenotypic and functional characteristics of regulatory T cells in lupus patients do not differ from those of healthy individuals. However, depletion of regulatory T cells results in more intense flares of systemic lupus erythematosus. The in vivo depletion of regulatory T cells is hypothesized to occur via early apoptosis induction, which follows exposure to self-antigens that arise during the flare.
Role in disease:
In murine cytomegalovirus (MCMV) infection MCMV is a rare herpesvirus that can cause disseminated and fatal disease in immunodeficient animals, similar to the disease caused by human cytomegalovirus in immunodeficient humans. Depletion of CD8+ T cells prior to an MCMV infection effectively upregulates the antiviral activity of natural killer cells. Depletion after infection has no effect on the NK cells.
Role in disease:
In arthritis A preliminary study of the effect of TCD on arthritis in mouse models has shown that regulatory T cells play an important role in delayed-type hypersensitivity arthritis (DTHA) inflammation; TCD induces increased neutrophils and increased activity of IL-17 and RANKL.
Treatment use:
Haploidentical stem cell transplantation TCD is heavily used in haploidentical stem cell transplantation (HSCT), a process in which cancer patients receive an infusion of healthy stem cells from a compatible donor to replenish their blood-forming elements. In patients with acute myeloid leukemia (AML) in their first remission, ex vivo TCD greatly reduced the incidence rate of GVHD, though survival was comparable to conventional transplants.
Treatment use:
Bone marrow transplantation In allogeneic bone marrow transplants (BMT), the transplanted stem cells derive from the bone marrow. In cases where the donors are genetically similar, but not identical, the risk of GVHD is increased. The first ex vivo TCD trials used monoclonal antibodies, but still had high incidence rates of GVHD. Additional treatment using complement or immunotoxins (along with anti-T-cell antibody) improved the depletion, thus increasing the prevention of GVHD. Depleting αβ T cells from the infused graft spares γδ T cells and NK cells and promotes their homeostatic reconstitution, thus reducing the risk of GVHD. Selective in vitro TCD with an anti-T12 monoclonal antibody lowers the rate of acute and chronic GVHD after allogeneic BMT. Further, immune suppressive medications are usually unnecessary if CD6+ T cells are removed from the donor marrow. Patients can relapse even after a TCD allogeneic bone marrow transplant, though patients with chronic myelogenous leukemia (CML) who receive a donor lymphocyte infusion (DLI) can regain complete remission.
**University of California, San Diego Performance-Based Skills Assessment**
University of California, San Diego Performance-Based Skills Assessment:
The University of California, San Diego Performance-Based Skills Assessment (UPSA) was created by Dr. Thomas L. Patterson to provide a more reliable measure of everyday functioning in patients with schizophrenia than the previously utilized methods such as self-report, clinician ratings or direct observation.
History:
While everyday functioning has long been known to be affected in those with schizophrenia, the focus of testing and treatment had traditionally been on the symptoms of psychosis. As everyday functioning is fundamental for patients with cognitive impairment, the focal point has shifted towards accurate assessment of everyday functional capabilities. Self-reporting or observations by a clinician have been the most common instruments used to assess everyday functioning, but these methods have weaknesses. For example, when using the self-report method, interference by the subject's psychopathology in the perception of their abilities can cause results to be distorted, and in the case of clinicians' ratings, patients are typically only observed for a short duration, so clinicians may not be capable of comprehensively evaluating the patient's ability to perform daily tasks. Dr. Thomas L. Patterson created the University of California, San Diego (UCSD) Performance-Based Skills Assessment (UPSA) to provide clinicians with a standardized set of tasks to assess a participant's real-world abilities. During a UPSA evaluation, participants perform everyday activities under a clinician's direction. As a performance-based assessment, the UPSA has been found to be less vulnerable to error than self-report by the participant, as it does not rely on the participant's level of awareness of their own abilities. The UPSA has been shown to be predictive of outcomes such as employment status, independence, and social skills, and shows a strong correlation with neuropsychological deficits.
Description:
The UPSA is a role-play test in which participants are asked to utilize props to demonstrate how well they perform everyday activities. Depending on the version, the UPSA is a paper-and-pen or electronic cognitive assessment that evaluates up to six domains of everyday functioning: Household Management, Communication, Financial Skills, Transportation, Comprehension/Planning, and Medication Management.
Administration:
The Household Management, Communication, Transportation, Comprehension and Planning and Medication Management domains consist of one task each. The Financial Skills domain consists of two tasks. Depending on the version used, the assessment will encompass all or some of the following sub tests:
Versions:
UPSA-2 A general version that allows for nationwide use in the United States. It uses all subscales, including Medication Management, and is intended for use in both small studies, and large multi-site clinical trials.
Versions:
UPSA-B (Brief) A shorter version of the UPSA-2 that uses only the financial skills and communication skills subscales (i.e. counting change, telephone calls, and paying bills). This version of the UPSA takes approximately 15 minutes to complete and has been shown to be an accurate predictor of patient ability to live independently, as compared to the full version of the UPSA. When used outside of the United States, the check-writing portion of the UPSA-B is replaced with a verbal response portion for populations with little to no familiarity with check writing.
Versions:
UPSA-2-VIM (Validation of Intermediate Measures) A full version of the UPSA-2, excluding the medication management task.
UPSA-2-ER (Extended Range) A full version of the UPSA-2, containing additional questions to increase the level of difficulty for each subscale.
Versions:
C-UPSA (Computerized) A computer-based version of the UPSA that requires either a laptop or a desktop computer for test administration. It was created to meet the demand of a more portable and less material-heavy everyday functioning assessment. Validation studies found that the UPSA and C-UPSA scores were significantly correlated and that the C-UPSA provided increased benefits to the users, including a decreased administration time and minimization of examiner impact on performance.
Versions:
UPSA-M (Mobile) An iOS tablet-based mobile application version of the UPSA, successive of the C-UPSA. Advantages of the UPSA-M include standardized instructions, audio recording of the subject's responses, easier administration, and the option to administer the entire UPSA or the UPSA-B through the same program. Additionally, the tablet touch-screen design mimics real-life.
Use:
Various versions of the UPSA have been used in multiple phase 2 and 3 clinical trials, as well as academic studies in populations with schizophrenia, bipolar disorder, Alzheimer's disease, mild cognitive impairment (MCI), psychosis, and others. Translated and localized versions of the UPSA-2, UPSA-2-VIM, UPSA-2ER and UPSA-B are available for international use.
Other Cognitive Assessment Tools:
SCoRS - Schizophrenia Cognition Rating Scale; VRFCAT - Virtual Reality Functional Capacity Assessment Tool
**Restylane**
Restylane:
Restylane is the trade name for a range of injectable fillers with a specific formulation of non-animal sourced hyaluronic acid (HA).
In the United States, Restylane was the first hyaluronic acid filler to be approved by the U.S. Food and Drug Administration (FDA) for cosmetic injection into subdermal facial tissues. Restylane is produced by Galderma.
Medical uses:
Restylane is most commonly used for lip enhancement (volume and contouring). It is used to diminish wrinkles and aging lines of the face such as the nasolabial folds (nose to mouth lines) and melomental folds (sad mouth corners). It may also be used for filling aging-related facial hollows and "orbital troughs" (under and around the eyes), as well as for cheek volume and contouring of the chin, lips and nose.
Side effects:
A treatment with a dermal filler like Restylane can cause some temporary bruising in addition to swelling and numbness for a few days. In rare cases there have been reports of lumps or granulomas. These side effects can be easily reversed with a treatment of hyaluronidase, an enzyme that speeds up the natural degradation of the injected hyaluronic acid filler. Several studies have been done to understand the long-term side effects of Restylane and other hyaluronic acid fillers. In certain cases, the filler results in a granulomatous foreign body reaction. Even though side effects are rare, Restylane should not be used in or near areas where there is or has been skin disease, inflammation or related conditions. Restylane has not been tested in pregnant or breast-feeding women.
Contraindications:
Although Restylane dermal fillers are generally considered safe, there are certain contraindications and safety rules that licensed providers should be aware of before buying and injecting Restylane fillers.
Contraindications:
Contraindications for Restylane Dermal Fillers: Patients with a history of severe allergies or anaphylaxis should not receive Restylane fillers; Restylane fillers should not be used in patients who are allergic to hyaluronic acid or any of the other ingredients in the product; Patients with active infections or inflammation at the injection site should not receive Restylane fillers until the infection or inflammation has cleared up; Restylane fillers should not be used in patients who are pregnant or breastfeeding, as the safety of these products has not been established in these populations; Patients with bleeding disorders or taking blood-thinning medications should be closely monitored when receiving Restylane fillers.
Treatment techniques:
Most injectors inject the filler with a small needle under the skin. Numbing creams or injections decrease pain.
A new way to use Restylane was described in the August 2007 issue of the Journal of Drugs in Dermatology by Dutch cosmetic doctor Tom van Eijk, whose "fern pattern" injection technique aims to restore dermal elasticity rather than to fill underneath the wrinkles.
Advantages of this procedure:
It is worth noting that mimic (expression) wrinkles are inextricably linked to the skin and contribute to the formation of fine wrinkles. Muscles woven into the dermis tighten the skin as they contract, which provides mimic facial mobility; their constant movement stretches the skin and leads to sagging. The drug helps to restore weakened tissue and increase its volume without external damage.
**Chinese whispers**
Chinese whispers:
Chinese whispers (some Commonwealth English), or telephone (American English and Canadian English), is an internationally popular children's game in which messages are whispered from person to person and then the original and final messages are compared. This sequential modification of information is called transmission chaining in the context of cultural evolution research, and is primarily used to identify the type of information that is more easily passed on from one person to another. Players form a line or circle, and the first player comes up with a message and whispers it to the ear of the second person in the line. The second player repeats the message to the third player, and so on. When the last player is reached, they announce the message they just heard to the entire group. The first person then compares the original message with the final version. Although the objective is to pass around the message without it becoming garbled along the way, part of the enjoyment is that, regardless, this usually ends up happening. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly from that of the first player, usually with amusing or humorous effect. Reasons for changes include anxiousness or impatience, erroneous corrections, and the difficulty of understanding whispered speech.
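The cumulative-error dynamic of such a transmission chain is easy to simulate. The sketch below is an illustrative Python model (not from the original text) in which each player mishears each word independently with a small, assumed probability, showing how distortions accumulate along the chain.

```python
import random

random.seed(1)

def whisper(message, mishear_prob=0.15, vocabulary=("cat", "hat", "bat", "mat", "rat")):
    """One retelling: each word is independently replaced with probability mishear_prob."""
    return [random.choice(vocabulary) if random.random() < mishear_prob else word
            for word in message]

def play_chain(message, players=10):
    # Pass the message through each player in turn, compounding any errors.
    for _ in range(players):
        message = whisper(message)
    return message

original = ["the", "cat", "sat", "on", "the", "mat"]
final = play_chain(original)
print("original:", " ".join(original))
print("final:   ", " ".join(final))
```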
Chinese whispers:
The game is often played by children as a party game or on the playground. It is often invoked as a metaphor for cumulative error, especially the inaccuracies as rumours or gossip spread, or, more generally, for the unreliability of typical human recollection.
Etymology:
United Kingdom, Australian, and New Zealand usage In the UK, Australia and New Zealand, the game is typically called "Chinese whispers"; in the UK, this is documented from 1964. Various reasons have been suggested for naming the game after the Chinese, but there is no concrete explanation. One suggested reason is a widespread British fascination with Chinese culture in the 18th and 19th centuries during the Enlightenment. Another theory posits that the game's name stems from the supposed confused messages created when a message was passed verbally from tower to tower along the Great Wall of China. Critics who focus on Western use of the word Chinese as denoting "confusion" and "incomprehensibility" look to the earliest contacts between Europeans and Chinese people in the 17th century, attributing it to a supposed inability on the part of Europeans to understand China's culture and worldview. In this view, using the phrase "Chinese whispers" is taken as evidence of a belief that the Chinese language itself is not understandable. Yunte Huang, a professor of English at the University of California, Santa Barbara, has said that: "Indicating inaccurately transmitted information, the expression 'Chinese Whispers' carries with it a sense of paranoia caused by espionage, counterespionage, Red Scare, and other war games, real or imaginary, cold or hot." Usage of the term has been defended as being similar to other expressions such as "It's all Greek to me" and "Double Dutch".
Etymology:
Alternative names As the game is popular among children worldwide, it is also known under various other names depending on locality, such as Russian scandal, whisper down the lane, broken telephone, operator, grapevine, gossip, secret message, the messenger game, and pass the message, among others. In Turkey, this game is called kulaktan kulağa, which means "from (one) ear to (another) ear". In France, it is called téléphone arabe ("Arabic telephone") or téléphone sans fil ("wireless telephone"). In Germany the game is known as Stille Post ("quiet mail"). In Poland it is called głuchy telefon, meaning "deaf telephone". In Medici-era Florence it was called the "game of the ear". The game has also been known in English as Russian Scandal, Russian Gossip and Russian Telephone. In North America, the game is known under the name telephone. Alternative names used in the United States include Broken Telephone, Gossip, and Rumors. This North American name is echoed in a number of languages where the game is known by the local language's equivalent of "broken telephone": telefon rosak in Malaysia, telefon shavur (טלפון שבור) in Israel, rikkinäinen puhelin in Finland, and halasmeno tilefono (χαλασμένο τηλέφωνο) in Greece.
Game:
The game has no winner: the entertainment comes from comparing the original and final messages. Intermediate messages may also be compared; some messages will become unrecognizable after only a few steps.
Game:
As well as providing amusement, the game can have educational value. It shows how easily information can become corrupted by indirect communication. The game has been used in schools to simulate the spread of gossip and its possible harmful effects. It can also be used to teach young children to moderate the volume of their voice and to listen attentively; in this case, the game is a success if the message is transmitted accurately with each child whispering rather than shouting. It can also be used with older or adult learners of a foreign language, where the challenge of speaking comprehensibly, and of understanding, is made harder by the low volume, and hence a greater mastery of the fine points of pronunciation is required.
Notable games:
In 2008, 1,330 children and celebrities set a world record for the game of Chinese Whispers involving the most people. The game was held at the Emirates Stadium in London and lasted two hours and four minutes. Starting with "together we will make a world of difference", the phrase morphed into "we're setting a record" part way down the chain, and by the end had become simply "haaaaa". The previous record, set in 2006 by the Cycling Club of Chengdu, China, had involved 1,083 people. In 2017 a new world record for the largest game of Chinese Whispers by number of participants was set by schoolchildren in Tauranga, New Zealand. The chain involved 1,763 schoolchildren and other individuals and was held as part of Hearing Week 2017. The starting phrase was "Turn it down". As of 2022 this remained the world record for the largest game of Chinese Whispers by number of participants according to the Guinness Book of Records. In 2012 a global game of Chinese Whispers was played spanning 237 individuals speaking seven different languages. Beginning in St Kilda Library in Melbourne, Australia, the starting phrase "Life must be lived as play" (a paraphrase of Plato) had become "He bites snails" by the time the game reached its end in Alaska 26 hours later. In 2013, the Global Gossip Game had 840 participants and travelled to all 7 continents.
Variants:
A variant of Chinese Whispers is called Rumors. In this version of the game, when players transfer the message, they deliberately change one or two words of the phrase (often to something more humorous than the previous message). Intermediate messages can be compared. There is a second derivative variant, no less popular than Rumors, known as Mahjong Secrets (UK), or Broken Telephone (US), where the objective is to receive the message from the whisperer and whisper to the next participant the first word or phrase that comes to mind in association with what was heard. At the end, the final phrase is compared to the first in front of all participants.
Variants:
The pen-and-paper game Telephone Pictionary (also known as Eat Poop You Cat) is played by alternately writing and illustrating captions, the paper being folded so that each player can only see the previous participant's contribution. The game was first implemented online by Broken Picture Telephone in early 2007. Following the success of Broken Picture Telephone, commercial board game versions Telestrations and Cranium Scribblish were released two years later in 2009. Drawception and other websites also arrived in 2009.
Variants:
A translation relay is a variant in which the first player produces a text in a given language, together with a basic guide to understanding it, which includes a lexicon, an interlinear gloss, possibly a list of grammatical morphemes, comments on the meaning of difficult words, and so on (everything except an actual translation). The text is passed on to the following player, who tries to make sense of it and casts it into their language of choice, and the procedure is then repeated by each subsequent player. Each player only knows the translation done by their immediate predecessor, but customarily the relay master or mistress collects all of them. The relay ends when the last player returns the translation to the beginning player.
Variants:
Another variant of Chinese whispers is shown on Ellen's Game of Games under the name Say Whaaat?. The difference is that the four players wear earmuffs and therefore have to read the speaker's lips.
A party game variant of telephone known as "wordpass" involves players saying a word out loud, with each subsequent player saying a related word, until a word is repeated.
As a metaphor:
Chinese whispers is used in a number of fields as a metaphor for imperfect data transmission over multiple iterations. For example, the British zoologist Mark Ridley, in his book Mendel's Demon, used Chinese Whispers as an analogy for the imperfect transmission of genetic information across multiple generations. In another example, Richard Dawkins used Chinese Whispers as a metaphor for infidelity in memetic replication, referring specifically to children trying to reproduce a drawing of a Chinese junk in his essay Chinese Junk and Chinese Whispers. It was used in the movie Tár to represent gossip circling within an orchestra. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Huntington's disease in popular culture**
Huntington's disease in popular culture:
Huntington's disease has been depicted in numerous formats, increasingly so as awareness of the condition has grown. Here is a list of references to it in popular culture:
Books:
Ann Brashares's 2011 novel Sisterhood Everlasting (it is later revealed that one of the four "sisters", Tibby Rollins, had HD).
James S. A. Corey's 2015 novella The Vital Abyss, part of The Expanse book series (reveals the backstory of the former Protogen researcher Paolo Cortázar, whose mother was diagnosed with "Type C Huntington's" in his adolescence, which was the primary impetus for his becoming a research scientist).
Books:
Kathy Reichs' 2020 novel A Conspiracy of Bones.
Lisa Genova's 2015 novel Inside the O'Briens (relates the slow development of HD in the main character, a Boston police officer, and its effects on his identity, work, and family).
Pål Johan Karlsen's 2002 novel Daimler (main character Daniel Grimsgaard is affected).
Joe Klein's Woody Guthrie: A Life: The book discloses the effects of the disorder in both Woody Guthrie and his mother.
Ian McEwan's 2005 novel Saturday (the afflicted character Baxter is portrayed negatively).
Nick O'Donohoe's Crossroads books (BJ Vaughan has HD).
Ruth Rendell, writing as Barbara Vine, 1989 British novel The House of Stairs (main character Elizabeth Vetch is at risk).
Robert J. Sawyer's 1997 novel Frameshift (main character Pierre Tardivel).
Steven T. Seagle's autobiographical 2004 graphic novel It's a Bird... features the author coming to grips with the presence of HD in his family.
Dorothy Norvell Snyder's semi-autobiographical 1980 novel Heirloom: A Novel, How One Family Lived with One of Life's Cruelest Diseases—Huntington's.
Mary Helen Specht's 2015 novel Migratory Animals.
Jacqueline Susann's 1966 novel Valley of the Dolls (night club singer Tony Polar).
Diane Tullson's 2001 novel Saving Jasey (Trist, Jasey and their grandfather).
Kurt Vonnegut's 1985 novel Galapagos.
Nancy Werlin's 2004 novel Double Helix (Ava Samuels (mother of the protagonist), Kayla Matheson and others).
Walter Jon Williams' 1986 cyberpunk novel Hardwired features a genetically engineered, virally-transmitted version of Huntington's.
Charlotte Raven's 2021 memoir Patient 1, which describes her becoming affected by Huntington's and participating in a clinical trial of the drug tominersen.
Films:
In the 1967 film Valley of the Dolls, Tony Polar, the singer married to Jennifer North, has Huntington's Chorea.
Arlo Guthrie's 1969 film Alice's Restaurant, which depicts Guthrie's father Woody suffering from what was then called "Huntington's Chorea", and features numerous mentions of the condition by the younger Guthrie to his peers and the draft board's medical staff.
Films:
Broken Elf, a 2010 documentary by Robert Ciesla featuring Jukka, an alcoholic man with advanced Huntington's disease; screened at the Reikäreuna Film Festival on September 7, 2013.
Do You Really Want to Know?, a 2013 documentary by John Zaritsky featuring Huntington's disease researcher Jeff Carroll.
Huntington's Dance, a 2014 release by Chris Furbee documenting an 18-year-long journey with a family affected by Huntington's disease; world premiere at Slamdance, January 19, 2014.
The Lion's Mouth Opens, a 2014 documentary by director Lucy Walker featuring filmmaker Marianna Palka.
The Inheritance, a 2014 documentary film.
The Faith of Anna Waters, a 2016 horror movie released in the U.S. as The Offering, which depicts a crime reporter whose sister dies of Huntington's and whose niece inherits it, as eventually does the reporter herself.
Television:
Sammy Davis Jr. in the 1970 episode "Song of Willie" from the series The Mod Squad.
Dr Ethan Hardy in the BBC drama series Casualty.
Joseph Campanella in the 1970 episode "Dance to No Music" from the series Marcus Welby, M.D.
Edward Dunglass, the goth teen from the Australian soap opera Home and Away (1999–2000).
The 2018 thriller series Philharmonia has the mother and grandmother of the central protagonist, Helene Barizet, as victims of Huntington's disease; questions about her sanity thus play a part in the plot.
Dr. Samantha O'Hara from the Australian medical drama All Saints.
Hannah's father from the American drama Everwood, as revealed in the episode "Need to Know" (3x10).
Characters in the episodes "Pad'ar" (3x08) and "The Sins of the Father" (4x03) of Gene Roddenberry's Earth: Final Conflict.
Angie Padgett from the episode "In Which Charlotte Goes Down the Rabbit Hole" (1x06) of Private Practice.
Dr. Remy "Thirteen" Hadley, one of the doctors who joins House's second team on House. Her mother died of Huntington's disease when she was a child. During the season 4 finale, Hadley discovers she also has the disease. In season 7 it is revealed she assisted her brother in killing himself when his Huntington's symptoms got too bad.
An episode of the BBC drama Waterloo Road.
In the season 8 finale of Scrubs, a woman is diagnosed with Huntington's disease, and her son has to decide whether to find out if he also has the condition.
In The Bold Ones: The New Doctors, Dr. Paul Hunter counsels a pregnant woman whose irritable husband is found to have the disease. After explaining the 50-50 odds, he advises her to have the baby, trusting in the advance of medical science.
Television:
The episode "Second Sight" of the third season of Without a Trace ends with the revelation that the disappearance and kidnapping of the girl Malone's team was searching for, was in truth an elaborate set-up so that she could return to her estranged gypsy family – a decision she took after discovering she had the disease, inherited from her grandmother. Knowing she just wants to live the time she has left with her parents, Malone lets her and her father go.
Television:
In a season 4 episode of Breaking Bad, Walter White shares with his son a childhood memory of his father, who was diagnosed with Huntington's disease.
In the season 3 premiere episode "Fear" (aired September 29, 2013) of the ABC series Revenge, the character of Conrad Grayson, played by Henry Czerny, is seen passing out during a political speech. In the very next scene, a doctor informs him that he has Huntington's disease.
"Fighting Huntington's Disease", a 2010 episode of the CBC News Network documentary series Connect with Mark Kelley, depicted the life and work of Huntington's disease researcher and advocate Dr Jeff Carroll, himself a carrier of the genetic mutation that causes Huntington's disease. The episode was nominated for a Gemini award for 'Best Lifestyle/Practical Information Segment.
Television:
David Collins was a fictional character in the BBC soap opera EastEnders in 2004. He was played by Dan Milne. David was the husband of Jane Collins. He had Huntington's disease and lived in a hospice. Jane kept David a secret from her new boss Ian, but one day Ian demanded to know why her attendance at work was so irregular, and Jane took him to meet David. Ian regularly visited him until David died shortly after Christmas 2004, leaving Jane heartbroken.
Television:
In the second episode of the second season of BBC's Ripper Street, HD (referred to as Huntington's Chorea) and the possibility of it being passed on in a prominent family are the cause of the death of a woman and the stealing of her child by the patriarch of the family. His son had been diagnosed with HD and is the father of the stolen child. The patriarch took the child to make sure that HD ended with his son, his intention being to kill the child if signs of the disorder manifested themselves.
Television:
Season 2, Episode 3 "Drop Dead Diva", Gloria Rubens plays a professor that petitions the court to be cryogenically frozen before the disease causes too much irreversible damage.
In Season 7, Episode 3 of Call the Midwife, an expectant mother is diagnosed with the degenerative neurological disorder Huntington's disease and quickly deteriorates to the point where she is unable to look after her children. Her eldest daughter, too, is diagnosed and placed in a hospice.
In Season 2, Episode 15 of The Rookie, Colin has Huntington's disease, and Rachel may have it too, but she has refused to get tested because she doesn't want to know. Colin thinks that because Bradford is committed to the job, he will never be able to take care of Rachel the way she may need one day.
In the Amazon Prime series ZeroZeroZero Chris Lynwood has Huntington's disease. The show reveals that Chris and his older sister Emma (played by Andrea Riseborough) lost their mother to the disease, which the now-30-year-old Chris inherited from her.
The 2020 Ken Burns documentary The Gene: An Intimate History discussed Huntington's disease, including the discovery of the gene and interviews with Nancy Wexler and other prominent scientists involved in Huntington's disease research and drug development.
In the Australian soap opera Neighbours Chloe Brennan and her mother Fay Brennan both have Huntington's. Fay died due to complications caused by the disease in episode 8573 in 2021.
In Season 1, Episode 8 of Pooch Perfect, contestant Corina reveals she has Huntington's disease.
In Season 58, Episode 163 of the soap opera General Hospital, Britt Westbourne finds out she may have Huntington's disease.
In Season 2, Episode 3 of the science fiction show The Expanse, it is revealed that Paolo Cortázar's mother died from Huntington's disease.
Television:
In Season 3 (1995), Episode 7 (Family Ties) of the British TV show Peak Practice, Jack becomes involved in the plight of Nancy, who has Huntington's disease. He is concerned about her when she tries to take her own life. Jack tracks down her sister, who hasn't visited Nancy for years because she is scared she too may have the illness. Jack is alarmed when Nancy's sister reveals that Nancy has a son who doesn't even know he is at risk of having Huntington's disease.
Television:
In Season 6 (1988), Episode 129 (Curtains) of the TV show St. Elsewhere, Dr. Morrison counsels a family about genetic testing and Huntington's disease.
In Season 8 (1997), Episode 7 (Out of the Blue) of the TV show Baywatch, Mitch tries to get Jordan to meet her biological mother on a fishing outing; the mother is dying from a brain disorder and thinks the problem is hereditary. It turns out to be Huntington's disease.
In Season 3 (2002), Episode 6 (Old Wounds) of the TV show The District, Nancy is diagnosed with Huntington's disease after falling from a fire escape and having a concussion while chasing a criminal.
Television:
In Season 2 (2009), Episode 1 (Gilted Lily) of the TV show In Plain Sight, Lily is laid out serenely on the bed. Her note says she found her birth father years ago and discovered he died of Huntington's disease, that she started showing symptoms a few months ago, and that she wants her kids to remember her as she was.
Television:
In the 1990 Barbara Walters ABC News special A Perfect Baby, Walters shows the ravages of Huntington's disease, which does not appear until middle age and then destroys its victims both physically and mentally.
In Season 7 (2010), Episode 4 (Can't Fight Biology) of the TV show Grey's Anatomy, Meredith is treating a woman, Lila, who eventually reveals that she has Huntington's Disease.
In Season 1 (2022), Episode 1 of the FX cable TV series The Old Man, Dan Chase, played by Jeff Bridges, is a widower whose wife died from Huntington's disease.
In Season 4 (2022), Episode 12 (The Long Goodbye) of the Netflix series Virgin River, Denny reveals to Lizzie that he is suffering from Huntington's disease which is the true reason he needs the medication Klonopin — which helps to ease muscle tremors, rigidity and anxiety in Huntington's patients.
In Season 2 (2022), Episode 16 (Champagne Problems) of the CW TV series Walker, Cassie tells Ben that after they learned Lucas had Huntington's disease, she couldn't handle it.
In Season 3 (2018), Episode 3 (Snakeskin) of the Australian TV series Wanted, Lola and Chelsea take refuge with an eccentric loner. Lola struggles with her past, while Chelsea hides her Huntington's symptoms. Unbeknown to them, both Brady and Maxine close in.
In Season 14 (2017), Episode 12 (Off the Grid) of the CBS TV series NCIS, Bodie is revealed to have been suffering from Huntington's disease. Given the hereditary nature of the disease, it is likely that Ramsay also suffers from it. The team uses that, along with cell phone data, to track down Ramsay.
Television:
In Season 6 (2000), Episode 16 (Viable Options) of the NBC TV series ER, Dr. Mark Greene tells a patient that he has Huntington's disease, that it is genetic, and that there is no cure. He also says that this might be why the patient's father committed suicide. The patient worries that he might have passed on this trait to his daughter.
Television:
In Season 9 (2002), Episode 3 (Insurrection) of the NBC TV series ER, a late-stage Huntington's disease patient is rushed into the Emergency Room of a hospital. During the chaos caused when Dr. John Carter leads a walk-out to protest unsafe working conditions, the man's mother disconnects his ventilator so that he can die in peace. Dr. Susan Lewis figures out what the mother did but protects her.
Television:
In Season 4 (2018), Episode 5 (What You Don't Know) of the NBC TV series Chicago Med, Dr. Charles identifies Keith's illness as Huntington's. Keith admits the diagnosis, telling Daniel that there is no cure and that he wants to die before his Huntington's progresses further.
In Season 7 (1993), Episode 11 (Bare Witness) of the NBC TV series L.A. Law, Gwen tells Daniel that she had a blood test for Huntington's chorea; that the disease is hereditary, stays hidden until midlife, and causes loss of coordination, then mental deterioration, and finally death; and that there is a 50/50 chance she doesn't have it.
In Season 4 (2023), Episode 15 (Donors) of the Fox TV series 9-1-1: Lone Star, Robert reveals that he was diagnosed with Huntington's Disease.
Video games:
In the "Cold, Cold Heart" DLC for Batman: Arkham Origins, it establishes the cause of Nora Fries' terminal illness to be Huntington's chorea.
Radio:
Mind Matters: an RTÉ Radio 1 programme on Huntington's disease, featuring an affected family from Ireland.
Huntington's disease - a two-part ABC Radio National "The Health Report" program on the disorder, examining the effects on families and the challenges it presents for the health system and society.
Dr. Gilmer and Mr. Hyde - a This American Life report on a man imprisoned for life for a murder committed while undiagnosed with Huntington's disease.
"What Are You Doing For The Test of Your Life" [2] - This American Life segment from the episode titled "It Says So Right Here": a report on a woman who has several family members who either have had or have been tested positive for Huntington's Disease and is going through the process of being tested. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |